The shift from scaling up the pre-training compute of AI systems to scaling up their inference compute may have profound effects on AI governance. The nature of these effects depends crucially on whether this new inference compute will primarily be used during external deployment or as part of a more complex training process within the lab.
AI models make logarithmic progress: the resources required for each further gain in performance grow rapidly. If you have to double the number of processors and the amount of data to add another few percentage points, you eventually run out of data and money. This was true of previous systems, including chess engines; it was expected here too, and it will hold for the successors of LLMs.
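The arithmetic behind this claim can be made concrete with a toy model. The sketch below assumes, purely for illustration, that performance scales with the logarithm of compute; the constants are hypothetical and not fitted to any real system. Under that assumption, each fixed increment of performance demands a doubling of resources, so the cost of progress grows exponentially.

```python
import math

# Toy model (illustrative assumption, not a fitted scaling law):
# performance = log2(compute), measured relative to a 1x baseline.
def performance(compute: float) -> float:
    return math.log2(compute)

# Compute needed to reach each successive performance level.
# Every +1 step of performance requires doubling the compute.
costs = [2 ** n for n in range(5)]            # 1x, 2x, 4x, 8x, 16x baseline
levels = [performance(c) for c in costs]      # 0, 1, 2, 3, 4

for lvl, cost in zip(levels, costs):
    print(f"performance {lvl:.0f} needs {cost}x baseline compute")
```

The pattern is the point: going from level 0 to level 4 costs 16x the baseline, and reaching level 10 would cost 1024x, which is why doubling-for-a-few-percent regimes eventually exhaust both data and budget.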