So, LLMs, Gen AI, and RAG are completely useless then?
Is that your opinion?
BTW, about your closing point on RAG being outdated: what do you mean? RAG is used precisely to find relevant, up-to-date information and feed it into the context.
This does not mean it works perfectly, but it does allow LLMs to work with data that is not in their training set.
It does suffer from issues, but it is still more useful than not having such a tool at all.
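To make the point concrete, here is a minimal sketch of the RAG idea described above: retrieve documents relevant to a query and prepend them to the prompt, so the model can answer from data outside its training set. All names and the toy keyword-overlap scoring are my own illustration (real systems use embedding similarity), not anyone's actual implementation.

```python
# Minimal RAG sketch (illustrative only).
# Retrieval here is toy keyword overlap; production systems use
# vector embeddings and a similarity index instead.

def score(query: str, doc: str) -> int:
    # Count shared lowercase words between query and document.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Return the k documents most relevant to the query.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Prepend retrieved context so the LLM sees up-to-date information.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The 2024 guideline recommends drug X for condition Y.",
    "Company Z released model o1 in late 2024.",
    "Bananas are yellow.",
]
print(build_prompt("What does the 2024 guideline recommend?", docs))
```

The resulting prompt carries the guideline text into the context window, which is exactly the "work with data not in the training set" behavior I mean.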
Even if your argument is narrowed to GenAI, it still comes across as very one-sided, and I would maintain that my comment above is fair.
There are as many successful use cases as failures.
Some of the most interesting examples are in medical diagnosis: I have heard of multiple cases, and studies as well, showing that LLMs are approaching and even outperforming human experts in some forms of diagnosis.
For example, just a one-minute search pulls up this:
https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000341
If you want, I can pull up many more such studies, but you can too.
Over the last year I have read at least two broader studies on where LLMs are heading with diagnosis.
Doctors themselves are reporting that o1 is capable of writing rehabilitation plans comparable to the ones they write.
So yes, unless you address such studies as well, your article reads as confirmation bias against GenAI.