Monday, May 05, 2014


The Singularity is Far, Argues Top Futurist

[Image: still from the Singularity-themed movie Transcendence]

Here's Monday's "Now that you mention it, duh" reading: Ramez Naam, adviser for the Acceleration Studies Foundation and futurist with an impressive track record, lucidly argues that the "Singularity" is not something we should expect in our lifetime. One obvious reason: Why the hell do we even need a sentient AI in the first place? As he puts it:

Would you like a self-driving car that has its own opinions? That might someday decide it doesn't feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that's extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dish washer, and your personal computer. Many of us want the semblance of sentience. There would be lots of demand for an AI secretary who could take complex instructions, execute on them, be a representative to interact with others, and so on. You may think such a system would need to be sentient. But once upon a time we imagined that a system that could play chess, or solve mathematical proofs, or answer phone calls, or recognize speech, would need to be sentient. It doesn't need to be. You can have your AI secretary or AI assistant and have it be all artifice. And frankly, we'll likely prefer it that way.

Read it all here. TL;DR version: Simmer down now, Kurzweil.

Post via Jacki Morie, who knows what she's talking about here.




We don't need either. Hell, I don't need my fast little car, either. But the reptile-brain in us loves to be territorial and win. Then the monkey-mind part is curious at its own risk.

All this competition and heedless curiosity could be the end of us.

We need to get rid of our brain stems in the next step of evolution. That's the reptile part of us and it's due for an upgrade.

Arcadia Codesmith

Why do we keep pets? We could easily replace them with 100% reliable constructs that always behave in predictable ways.

But that's not what we want. Pets do goofy stuff. They behave in novel, often creative ways. They offer and demand affection unbidden. They play with us.

We have a lot of computer games at the moment that will play with us, but none that are playful. Virtual NPCs are very two-dimensional and always stick to a script. In order to have an NPC that is more human, it has to think on some level. It has to develop likes and dislikes, nurse grudges, form aspirations, fall in and out of love.

I think that's your killer app for AI. You have to be a gamer to appreciate it, but we're approaching a point in time when we are all gamers on some level.

Extropia DaSilva

Vernor Vinge's original 1990s essay begins with the words, 'within thirty years, we will have the technological means to create superhuman intelligence'.

It is very important to note what Vinge does not say. He does not say 'we will have the technological means to create artificial intelligence'; he claims, instead, that we will have the technological means to create superhuman intelligence.

Now, most people just assume that, by 'superhuman intelligence', Vinge means AI. But a proper read-through of his essay and his subsequent essays on the Singularity makes it quite clear that he is not saying the Singularity will be caused by the introduction of AI. That is one way in which the Singularity could be brought about, but it is not the only way. Nor is it, in Vinge's opinion, the most likely scenario.

As well as AI, there are the IA pathways to the Singularity, IA as in 'Intelligence Amplification'. Maybe we start cyborging ourselves, using brain-machine interfaces and bionic neural implants. This may augment the cognitive capabilities of those who have been cyborged so that they can, among other things, research and develop a next generation of cyborg implants beyond anything those with merely biological brains could conceive of. And cyborgs with next-gen implants will have a similar advantage over those with previous-gen tech installed in their brains. Another possible scenario is that we find ways to get large teams of people and large computer networks (which may include sensors and the 'internet of things') to work together effectively enough to be considered a superhuman intelligence. AI may be involved in this, but it would only be lots of narrow AI, such as knowledge-management and data-mining tools, none of which are superintelligent in and of themselves, but which contribute to an emergent superintelligence along with the people who work with the large computer networks.

If you look into data-intensive scientific discovery (search for 'The Fourth Paradigm' on Amazon for a great book on the topic), you may find an 'internet scenario' (in which the Web plus its associated human users work together effectively enough to be considered a superhuman intelligence) seems very plausible.
