I can imagine it really easily for the foreseeable future: all that would need to happen is for the big corporations and well-funded researchers to stick to optimizing LLMs, and for that to turn out to be a dead end.
Yeah, that’s not the rest of human history (unless the rest of it isn’t very much), but it’s enough to make concerns about AGI into someone else’s problem.
(Edit, clarified)
Like I said, I’ve made no claims about the timeline. All I’ve said is that incremental improvements will get us there eventually.
In this scenario, reaching the goal would require an entirely different base technology, and incremental improvements to what we have now would never lead to AGI.
Kinda like how incremental improvements to cars, or even trains, won’t eventually get us to Mars.
Firstly, I’ve been talking about improvements in AI technology broadly, not any specific subfield. Secondly, you can’t know that. While I doubt LLMs will directly lead to AGI, I wouldn’t claim this with absolute certainty - there’s always a chance they do, or at the very least, that they help us discover what the next step should be.
It’s true that I can’t know for sure that they won’t lead to AGI (or, like you say, give clues) - however, it’s definitely a scenario I can imagine, and that’s what I was responding to: the idea that incremental improvements must lead to a given goal. I don’t think that’s the case. Here in particular, I think it’s not only possible that they won’t; it’s even somewhat likely.
This doesn’t just apply to AGI; the same could be said about any technology. If it can be created and there’s value in creating it, then it’s just a matter of time until someone invents it, unless we go extinct before then.
It also applies to technologies that don’t in fact exist but could. Those are much harder to name (outside of sci-fi), since almost by definition we don’t know about most of them - nor how many there are compared to existing tech.
I’m not actually saying it’s impossible, just that local maxima (as described by the other users here) are a thing, and it’s possible to be trapped by one for a very long time. Potentially forever, though you’re right that the odds of breaking out do increase over time.
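To make the local-maximum point concrete, here’s a minimal Python sketch - the landscape function and all the numbers are made up for illustration, not a model of AI progress. A greedy climber that only ever takes improving steps settles on the nearest peak, because reaching the taller one would mean going downhill first:

```python
import math

# Toy landscape: a small peak near x = 1 and a taller one near x = 5,
# separated by a valley. (Hypothetical function, purely illustrative.)
def landscape(x: float) -> float:
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 5) ** 2)

# Greedy "incremental improvement": only ever move to a better neighbor.
def hill_climb(x: float, step: float = 0.01, max_iters: int = 10_000) -> float:
    for _ in range(max_iters):
        best = max((x - step, x, x + step), key=landscape)
        if best == x:  # no neighbor improves: stuck on a (possibly local) peak
            break
        x = best
    return x

peak = hill_climb(0.0)
print(f"climber stops near x = {peak:.2f}, height {landscape(peak):.2f}")  # ~x=1, height ~1
print(f"global peak is near x = 5, height {landscape(5.0):.2f}")           # height ~2
```

Under this framing, escaping takes something other than incremental improvement: restarting somewhere else, or tolerating temporarily worse results.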
Yeah, I agree with all of this. What I’m pushing back against is the absolute, dismissive tone some people take whenever the potential dangers of AGI are brought up. Once someone is at least willing to accept the likely reality that we’ll have AGI at some point, then we can move on to debating the timescale.
If an asteroid impact were predicted 100 years from now, at what point should we start taking steps to prevent it? Framing it this way makes it feel more urgent - at least to me.
Just like incremental improvements to the bicycle will eventually allow for hypersonic pedaling.