dynomight (OP, M) to dynomight internet forum • Y’all are over-complicating these AI-risk arguments • English
1 · 1 month ago
Ah, so the argument is more general than “reproduction” through running different physical copies, but also includes the AI self-improving? This again seems plausible to me, but still seems like something not everyone would agree with. It’s possible, for example, that the “300 IQ AI” only appears at the end of some long process of recursive self-improvement, at which stage physical limits mean it can’t get much better without new hardware, which would require some kind of human intervention.
I guess my goal is not to lay out the most likely scenario for AI-risk, but rather the scenario that requires the fewest assumptions, that’s the hardest to dispute?
dynomight (OP, M) to dynomight internet forum • Y’all are over-complicating these AI-risk arguments • English
1 · 1 month ago
I agree with you! There are a lot of things that present non-zero existential risk. I think that my argument is fine as an intellectual exercise, but if you want to use it to advocate for particular policies, then you need to make a comparative risk vs. reward assessment, just as you say.
Personally, I think the risk is quite large, and enough to justify a significant expenditure of resources. (Although I’m not quite sure how to use those resources to reduce risk…) But this definitely is not implied by the minimal argument.
dynomight (OP, M) to dynomight internet forum • Y’all are over-complicating these AI-risk arguments • English
1 · 1 month ago
I certainly agree that makes the scenario more concerning. But I worry that it also increases the “surface area of disagreement”. Some people might reject the metaphor on the grounds that, say, AI will require such enormous computational resources, and there are such physical limits on how quickly more compute can be created, that AI can’t “reproduce”.
It’s certainly possible that I’m misinterpreting them, but I don’t think I understand what you’re suggesting. How do you interpret “Substack eugenics alarm”?
Interestingly, lots of people now seem excited about alpha school, where pay-for-performance is apparently a core principle!
This is a tangent but I’ve always been fascinated by the question of what people would spend their time on given extremely long lifespans. One theory would be art, literature, etc. But maybe you’d get tired of all that and what you’d really enjoy is more basic things like good meals and physical comfort? Or maybe you’d just meditate all the time?
Deciding if you’ll like something before you’ve tasted it is a great example. Probably we all do that to some degree with all sorts of things?
P.S. Instead of Moby Dick try War and Peace!
Thanks, I really like the idea of “performing enjoying”. I’d heard of the Ben Franklin effect before, but not the conjectured explanation. (The other conjectured explanations on Wikipedia are interesting, too.)
dynomight (OP, M) to dynomight internet forum • New colors without shooting lasers into your eyes • English
2 · 4 months ago
That’s what I see, too—if I’m able to hold my focus exactly constant. It seems to disappear as soon as I move my eyes even a little bit.
I considered getting a CGM, but they all seemed to require all sorts of cloud services and apps and stuff that wouldn’t work for me.
I didn’t read UPP, though I think I read this review: https://www.newyorker.com/magazine/2023/07/31/ultra-processed-people-chris-van-tulleken-book-review (should I?)
I think this is a fair argument. Current AIs are quite bad about “knowing if they know”. I think it’s likely that we can/will solve this problem, but I don’t have any particularly compelling reason for that, and I agree that my argument fails if it never gets solved.
dynomight (OP, M) to dynomight internet forum • A deep critique of AI 2027’s bad timeline models • English
21 · 5 months ago
FWIW, I think this is a great post. But I really don’t like the way people are treating it like a “knockout blow” against AI 2027. It’s healthy debate!
I’m sure many people feel the same way. But wouldn’t that just make that observation even stronger—people care about animal welfare so much that they’d like to go even further than in-ovo testing?
Agree with your first point. For the second point, I felt like I had to add some artifice because otherwise the morally correct choice in almost all situations would seem to obviously be “ask humanity and let it choose for itself”! Which is correct, but not very interesting.
(In any case, I’m not actually that interested in these particular moral puzzles, I have other purposes in asking…)
dynomight (OP, M) to dynomight internet forum • My advice on (internet) writing, for what it’s worth • English
2 · 5 months ago
Subscription confirmed!
dynomight (OP, M) to dynomight internet forum • My advice on (internet) writing, for what it’s worth • English
2 · 5 months ago
Confirmed!
(PS I love pedantic emails)
dynomight (OP, M) to dynomight internet forum • My advice on (internet) writing, for what it’s worth • English
1 · 5 months ago
I first tried it with my RSS reader, but I also get an error if I just try to load that URL in a web browser. (Any browser.)
dynomight (OP, M) to dynomight internet forum • My advice on (internet) writing, for what it’s worth • English
1 · 5 months ago
Confirmed! Though this seems to mostly work, I also seem to get some kind of parse error. You might want to check if there’s a problem.
Ah, I see, very nice. I wonder if it might make sense to declare the dimensions that are supposed to match once and for all when you wrap the function?
E.g. perhaps you could write:
@new_wrap('m, n, m n->')
def my_op(x, y, a):
    return y @ jnp.linalg.solve(a, x)

to declare the matching dimensions of the wrapped function, and then call it with something like
Z = my_op('i [:], j [:], i j [: :]->i j', X, Y, A)

It’s a small thing, but it seems like the matching declaration should be done “once and for all”?
(On the other hand, I guess there might be cases where the way things match depends on the arguments…)
Edit: Or perhaps, if you declare the matching shapes when you wrap the function, you wouldn’t actually need to use brackets at all, and could just call it as something like:

Z = my_op('i :, j :, i j : :->i j', X, Y, A)
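(To make sure I’m imagining the same semantics, here’s a rough sketch of what I’d expect a call like that to expand to, written with plain jax.vmap rather than your wrapper. The nesting order and in_axes are my guesses, not how your library actually does it, and my_op_core / my_op_batched are just names I made up for the sketch.)

import jax.numpy as jnp
from jax import vmap

def my_op_core(x, y, a):
    # the unbatched op from above
    return y @ jnp.linalg.solve(a, x)

# Hypothetical expansion of my_op('i :, j :, i j : :->i j', X, Y, A):
# map over the j axis of Y and A, then over the i axis of X and A.
my_op_batched = vmap(
    vmap(my_op_core, in_axes=(None, 0, 0)),  # inner: y and a vary along j
    in_axes=(0, None, 0),                    # outer: x and a vary along i
)

# Z = my_op_batched(X, Y, A)  # Z has shape (i, j)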
This advice generally makes me sad, but it’s still worth thinking about.