Foundation models thus have the potential to change our industry permanently. They represent a revolution that forces us to fundamentally rethink the way we deal with AI. Overall, AI is becoming easier to access and use, but the question of how to handle it is becoming harder to answer.
At the same time, the value of full automation in productive use should not be overestimated. AI in the sense of “augmented intelligence” is about the intelligent interplay of humans and machines: Transformer models can drastically reduce workloads, but in most cases final quality control and decision-making will remain with humans.
Even though the first applications of Transformer models are closely tied to the generation and summarisation of text, more and more application scenarios will emerge. Applications such as filling out spreadsheets in controlling, researching protein sequences or developing software code already exist today. In the near future, solutions will increasingly appear in the areas of 3D, design, architecture and virtual worlds. Only with Transformer models will the development of virtual objects in the metaverse become economically scalable.
The extreme pace at which new foundation models and upgrades to existing models appear, in innovation cycles measured in weeks, combined with the fact that ever more knowledge is represented digitally (Internet of Everything) and the potential basis for language models therefore keeps growing, will lead to further leaps in quality and new application scenarios in the near future: “THE AI FUTURE HAS JUST BEGUN”.