The Jinn Is Out of the Bottle

Besides the impending arraignment of Donald Trump, I’m hearing a lot of fulminating about artificial intelligence, including the letter, signed by such luminaries as Elon Musk, Steve Wozniak, and many others, asking for a pause in training artificial intelligence systems more powerful than ChatGPT-4. Don’t they realize that it’s too late?

Knowledge is international now. There are excellent, capable technologists in many, many other countries, some of them not particularly friendly to the United States. Nothing we do here will have the slightest impact outside of the United States and, in all likelihood, not on all developers within the United States. Not only should we not have the hiatus they’re asking for, from a strategic standpoint we need development to continue.

One more thing that I don’t see being emphasized enough: there’s no guarantee that the guidance being given by these language engines is correct. I’ve been playing around with them, and they are quite frequently and verifiably incorrect.

So, yes, ChatGPT-4 is amusing, interesting, and useful. Full steam ahead, O Brave New World!

5 comments
  • steve

    Agreed. When I read that I thought that in some idealized world it would be wonderful if we thought ahead and could plan and get people to agree on how we should handle AI as it develops (and while we are at it I want a unicorn). It’s just not the world we live in.

    I think the transition will be difficult. Even with the error rates we see now it can help, but it will require double checking. Instead of being a time saver it may cost more time. Once the error rates are lower, you will need speech recognition software to be much better than it is now to be truly useful, and you will still need some redundancy. Finally, there will be a culture clash. You will have the IT/computer geeks installing and adapting this stuff to do what they think needs done, which will often be much different than what the end user needs. Then add in, especially in the case of medicine, that much like EMRs, admin people will want it to focus on billing.

    Eventually it should be pretty cool and I can certainly imagine many wonderful ways to use it once the bugs are worked out and we add some new tech, but the transition won’t be so easy.

    Steve

  • TastyBits

    Future AI is the new future flying cars, or today it would be self-driving flying cars. It ain’t gonna happen.

    The claimed AI is warmed-over and rephrased human knowledge. Remove human knowledge, and you have nothing. It cannot create anything new. It must abide by the rules programmed into it.

    AI could never create Relativity from Newtonian science. It is not a simple evolution of scientific thought. Einstein envisioned the universe in a completely different way. (It is very complex, and most of the explanations must be cartoonish for people to understand.)

    The universe is not just expanding, but its expansion is accelerating. According to known science, this is impossible, and dark matter and dark energy are the modern aether used to explain the otherwise unexplainable.

    Eventually, a physics breakthrough will alter our understanding of the universe, and it will require thinking outside the known box. It will upend everything we know, and it will be outside “the rules”.

    Eventually, AI will supplement robots, and like robots, it will be dumb. AI will not do anything outside its programming, and its programming will be limited by the knowledge of the programmers.

    So, a robot will weld, and an AI inspector will check the weld. When a human knocks the AI inspector out of alignment, it will fail perfect welds.

    A computer is just an adding machine. Logic circuits simply add binary numbers. Combining logic circuits creates truth tables. You could do the same thing with a pencil and paper, but you would be a lot slower.
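
    As a minimal sketch of that point (assuming nothing beyond standard Python, and using Python’s bitwise operators in place of physical gates), a one-bit full adder built from AND, OR, and XOR can be chained into a ripple-carry adder, and enumerating its inputs produces exactly the kind of truth table described above:

        # One-bit full adder expressed with AND (&), OR (|), and XOR (^) gates.
        from itertools import product

        def full_adder(a, b, carry_in):
            """Return (sum_bit, carry_out) for a single bit position."""
            sum_bit = (a ^ b) ^ carry_in                    # XOR of the three inputs
            carry_out = (a & b) | (carry_in & (a ^ b))      # carry when two or more inputs are 1
            return sum_bit, carry_out

        def ripple_add(x_bits, y_bits):
            """Add two equal-length bit lists (least significant bit first) by chaining full adders."""
            result, carry = [], 0
            for a, b in zip(x_bits, y_bits):
                s, carry = full_adder(a, b, carry)
                result.append(s)
            result.append(carry)
            return result

        # The full adder's truth table, produced by enumerating every input combination.
        print(" a b cin | sum cout")
        for a, b, cin in product((0, 1), repeat=3):
            s, cout = full_adder(a, b, cin)
            print(f" {a} {b}  {cin}  |  {s}    {cout}")

        # Chaining the adders adds binary numbers: 6 (110) + 3 (011) = 9 (1001).
        print(ripple_add([0, 1, 1], [1, 1, 0]))   # bits are LSB first: [1, 0, 0, 1] == 9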

    When I am playing a computer game, I will occasionally realize that the characters are just zeros and ones being displayed on a screen. No matter how random it seems, it is not.

    Actually, a real AI would need to be allowed to become psychotic. It would need to be programmed to accept what we consider unreal. For example, dark matter could be ghosts, and dark energy could be ghost movement.

  • TastyBits

    When AI can properly translate Finnegans Wake, give me a call, and when AI can explain it, the world might end.

  • Drew

    “Knowledge is international now. There are excellent, capable technologists in many, many other countries, some of them not particularly friendly to the United States. Nothing we do here will have the slightest impact outside of the United States and, in all likelihood, not on all developers within the United States. Not only should we not have the hiatus they’re asking for, from a strategic standpoint we need development to continue.”

    Heh.

    CO2 emissions are international now. There are many advanced economies producing emissions, some of them not particularly friendly to the United States. Nothing we do here will have the slightest impact outside of the United States except, in all likelihood, destroying industries within the United States. Not only should we not have the deindustrialization they’re asking for, from a strategic standpoint we need to avoid economic suicide.

    Now I’m not sayin’. I’m just sayin’.

  • “CO2 emissions are international now. There are many advanced economies producing emissions, some of them not particularly friendly to the United States. Nothing we do here will have the slightest impact outside of the United States except, in all likelihood, destroying industries within the United States. Not only should we not have the deindustrialization they’re asking for, from a strategic standpoint we need to avoid economic suicide.”

    Actually, I agree with that. It’s among the reasons I think we need to produce more of what we consume and more of what we use to produce what we consume domestically, develop more nuclear power, and emphasize CCS.
