• 83 Posts
  • 42 Comments
Joined 1 year ago
Cake day: July 10, 2024

  • Ah, so the argument is more general than “reproduction” through running different physical copies, but also includes the AI self-improving? This again seems plausible to me, but still seems like something not everyone would agree with. It’s possible, for example, that the “300 IQ AI” only appears at the end of some long process of recursive self-improvement, at which stage physical limits mean it can’t get much better without new hardware requiring some kind of human intervention.

    I guess my goal is not to lay out the most likely scenario for AI risk, but rather the scenario that requires the fewest assumptions and is the hardest to dispute?


  • I agree with you! There are a lot of things that present non-zero existential risk. I think that my argument is fine as an intellectual exercise, but if you want to use it to advocate for particular policies then you need to make a comparative risk vs. reward assessment just as you say.

    Personally, I think the risk is quite large, and enough to justify a significant expenditure of resources. (Although I’m not quite sure how to use those resources to reduce risk…) But this definitely is not implied by the minimal argument.

  • dynomight (OP, Mod) to dynomight internet forum · You can try to like things · 2 months ago

    This is a tangent but I’ve always been fascinated by the question of what people would spend their time on given extremely long lifespans. One theory would be art, literature, etc. But maybe you’d get tired of all that and what you’d really enjoy is more basic things like good meals and physical comfort? Or maybe you’d just meditate all the time?

  • Agree with your first point. For the second point, I felt like I had to add some artifice because otherwise the morally correct choice in almost all situations would seem to obviously be “ask humanity and let it choose for itself”! Which is correct, but not very interesting.

    (In any case, I’m not actually that interested in these particular moral puzzles, I have other purposes in asking…)

  • Ah, I see, very nice. I wonder if it might make sense to declare the dimensions that are supposed to match once and for all when you wrap the function?

    E.g. perhaps you could write:

    # core shapes: x is (m,), y is (n,), a is (m, n); the output is a scalar
    @new_wrap('m, n, m n->')
    def my_op(x, y, a):
        return y @ jnp.linalg.solve(a, x)
    

    to declare the matching dimensions of the wrapped function and then call it with something like

    Z = my_op('i [:], j [:], i j [: :]->i j', X, Y, A)
    

    It’s a small thing but it seems like the matching declaration should be done “once and for all”?

    (On the other hand, I guess there might be cases where the way things match depends on the arguments…)

    Edit: Or perhaps, if you declare the matching shapes when you wrap the function, you wouldn't actually need to use brackets at all, and could just call it as:

    Z = my_op('i :, j :, i j : :->i j', X, Y, A)
    

    ?
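
    For concreteness, here's a rough sketch of what a call like that would presumably have to expand to under the hood, written with plain jax.vmap. This is just my guess at the intended semantics, not any existing library's API; my_op_core is a made-up name for the unwrapped function, and the shapes are made up too, with m == n so the solve is square.

    import jax
    import jax.numpy as jnp

    def my_op_core(x, y, a):
        # the unwrapped computation from above
        return y @ jnp.linalg.solve(a, x)

    # Map over j: y and a carry a leading j axis, x does not.
    inner = jax.vmap(my_op_core, in_axes=(None, 0, 0))
    # Map over i: x and a carry a leading i axis, y does not.
    outer = jax.vmap(inner, in_axes=(0, None, 0))

    key = jax.random.PRNGKey(0)
    X = jax.random.normal(key, (2, 4))        # 'i :'     -> (i, m)
    Y = jax.random.normal(key, (3, 4))        # 'j :'     -> (j, n)
    A = jax.random.normal(key, (2, 3, 4, 4))  # 'i j : :' -> (i, j, m, n)
    Z = outer(X, Y, A)                        # '->i j'   -> shape (2, 3)

    Presumably the hypothetical new_wrap/my_op would just generate these nested vmaps from the two signature strings.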