Here's Monday's "Now that you mention it, duh" reading: Ramez Naam, adviser for the Acceleration Studies Foundation and futurist with an impressive track record, lucidly argues that the "Singularity" is not something we should expect in our lifetime. One obvious reason: Why the hell do we even need a sentient AI in the first place? As he puts it:
Would you like a self-driving car that has its own opinions? That might someday decide it doesn't feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that's extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dish washer, and your personal computer.

Many of us want the semblance of sentience. There would be lots of demand for an AI secretary who could take complex instructions, execute on them, be a representative to interact with others, and so on. You may think such a system would need to be sentient. But once upon a time we imagined that a system that could play chess, or solve mathematical proofs, or answer phone calls, or recognize speech, would need to be sentient. It doesn't need to be. You can have your AI secretary or AI assistant and have it be all artifice. And frankly, we'll likely prefer it that way.
Post via Jacki Morie, who knows what she's talking about here.