I agree that OpenAI seems to be losing momentum. A chart recently shared here illustrates this trend.
It shows that over the past year a top model could hold its position for roughly six months, but in the last two to three months that window has shrunk to about two months.
The chart also shows models starting to cluster at the top. That doesn't necessarily mean they aren't improving: the rankings are Elo-based and relative, not absolute. It does suggest, though, that model quality is becoming more of a commodity; as models get cheaper and more widely available, the overall quality floor keeps rising.
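To make the relative-vs-absolute point concrete, here is a minimal sketch of Elo-style updates (not the leaderboard's actual code; the model names, starting ratings, and K factor are hypothetical). If the top models win roughly half of their head-to-head matchups, their ratings stay bunched together no matter how capable each one becomes in absolute terms:

```python
import random

K = 32  # hypothetical update factor; real leaderboards tune this differently

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, score_a: float) -> tuple[float, float]:
    """Apply one Elo update; score_a is 1 for an A win, 0 for a loss, 0.5 for a tie."""
    e_a = expected_score(r_a, r_b)
    return r_a + K * (score_a - e_a), r_b + K * ((1 - score_a) - (1 - e_a))

# Hypothetical top models, all starting at the same rating.
ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}

random.seed(0)
for _ in range(10_000):
    a, b = random.sample(list(ratings), 2)
    # Assume near-equal head-to-head strength: each side wins ~50% of the time.
    score_a = 1.0 if random.random() < 0.5 else 0.0
    ratings[a], ratings[b] = update(ratings[a], ratings[b], score_a)

print(ratings)  # ratings remain tightly clustered around 1000
```

The ratings only spread apart when one model reliably beats the others, so a crowded top of the chart is consistent with every model improving at once.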
As for OpenAI, it remains near the top in many areas, and for some use cases its models still offer the best performance for the price. However, the shift from a research-focused company to a product-oriented one has cost it talent, particularly on the research side, and that may slow it down just as Anthropic seems to be taking the lead in model quality.
That said, OpenAI still has an edge in one crucial area: scaling and productizing APIs, where it currently outperforms Anthropic. On the other hand, Meta's open-source strategy and partnerships with model-hosting providers, along with Groq and Cerebras delivering very high inference speeds on custom hardware, could challenge OpenAI's lead in API services.
Another area where OpenAI might still hold an advantage is partnerships: it is actively securing data through various collaborations, and it's unclear whether Anthropic is doing the same.
In summary, the landscape is complex. OpenAI may be losing ground in some areas while still advancing in others, but on the research front it does seem to be in decline.