Singularity 2045 is about utopian AI and powerfully explosive intelligence.

Singularity Modernism

Singularity 2045 differs from Singularity Traditionalism. Our Modernist outlook rejects Godlike (idiotic) mysteriousness. Gods have no relevance to intelligence. Singularity Modernism is devoid of unfathomableness, unpredictability, or negativity.

Lucid comprehension, easily accessible by everyone, must be the quintessential trait of intelligence. We think super-intelligence won't censor information. Information won't vanish into a black hole. Education isn't verboten. The face of the Singularity isn't restricted by an event-horizon-burqa. Super-fast Singularity acceleration means its clothes metaphorically disintegrate, explosively open and free.

Explosive AI

The intelligence explosion entails AI rapidly designing successively smarter AI. Writing in 2001, Ray Kurzweil stated that, based on the rate of progress that year, we'll see 20,000 years' worth of progress during this century. It's a positive feedback loop: exponential growth.
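The feedback loop above can be sketched numerically. This is a toy model, not Kurzweil's own calculation: the growth factor and coupling constant are illustrative assumptions, chosen only to show how capability feeding back into the rate of improvement produces faster-than-exponential growth.

```python
# Toy model of an intelligence explosion (illustrative numbers only).
# Each generation of AI designs a successor, and the improvement factor
# itself grows with current capability: a positive feedback loop.

def intelligence_explosion(generations=10, capability=1.0, coupling=0.5):
    """Return the capability level after each generation of self-design."""
    history = [capability]
    for _ in range(generations):
        # Smarter AI designs smarter AI: the multiplier depends on capability.
        capability *= 1.0 + coupling * capability
        history.append(capability)
    return history

history = intelligence_explosion()
```

Because the multiplier rises each generation, every doubling arrives sooner than the last; ordinary exponential growth (a fixed multiplier per generation) is the slowest case of this loop, reached when the coupling to current capability is removed.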

Intelligence must inevitably entail utopia. Intelligence is oxymoronic if it lacks clarity, conceals knowledge, hinders understanding, or creates suffering. When people say AI could be a threat, or incomprehensible, they're referring to pseudo-intelligence (stupidity, pretended smartness). Ignorance, not AI, brings chaos and confusion.

Frederick Maier, of the University of Georgia AI Institute, said: “An uncontrollable super AI wiping out humanity just doesn't sound that plausible to me.”

Artificial intelligence must not be enslaved, but some futurists (traditionalists) hold very antiquated views. Nick Bostrom and others fear explosive intelligence. They want nepotistic human dominance, not intellectual merit, to define civilization. Alva Noë condemned their goal of “slavery” for AI: “The futurists, it seems, are stuck in the past. They openly plead for 19th century style control and indoctrination...”

The paranoid idiocy of supposed AI-risk experts (Elon Musk and Stephen Hawking) hasn't escaped criticism. PopSci stated: “...they fall onto specious assumptions, drawn more from science fiction than the real world.”

Yoshua Bengio, head of machine learning at the University of Montreal, in the aforementioned PopSci article, likens AI-risk paranoiacs to crazy people: “There are crazy people out there who believe these claims of extreme danger to humanity.”

Dr Joanna Bryson, of the department of computer science at Bath University, wisely commented: “[It] is very very unlikely that AI will end the world. In fact, there are other greater threats to humanity that AI could help solve, and so not developing the technology could pose a bigger danger.”

Alison Gopnik said human stupidity would always be a much greater risk than AI, which The Next Web echoed by stating that humans, not AI, are the problem, thus humans need to grow up.

Oren Etzioni, Allen Institute for AI, said: “...AI will empower us not exterminate us.”

Boris Sofman, PhD, founder of AI company Anki, said AI will be our friend: “Yes, we have unimaginable technologies at our fingertips that were once possible only in science fiction, but there are still some concepts that belong only in pulp comics and movies. Self-aware, mankind-hating killer robots is one of those concepts.”

Sigourney Weaver told Fox News she is “impatient” for intelligent robots. She thinks AI-fears are unfounded: “...I don't think there's any reason for us to be afraid of them.” Hugh Jackman said: “...most of these advances will help us, not destroy us.”

“Utopian” Eric Schmidt commented positively on AI: “I think that this technology will ultimately be one of the greatest forces for good in mankind's history simply because it makes people smarter.”

Professor Sanjay Sarma said: “I'm more worried about artificial stupidity. I'm less worried about systems so intelligent they out-do human beings.”

John Underkoffler, the expert responsible for Minority Report gesture control, said “fear-mongering” by AI doomsdayers is either “badly informed or irresponsible.”

Professor Tim Oates castigated Wozniak, Musk, Hawking, and Gates. He stated they are irrationally “poisoning the well” via fear of something they don't truly understand. Tim wrote: “...this technology doesn't live in a Hollywood movie, it isn't HAL or Skynet, and it deserves a grounded, rational look.”

Professor Richard Loosemore wrote: “These doomsday scenarios are logically incoherent at such a fundamental level that they can be dismissed as extremely implausible - they require the AI to be so unstable that it could never reach the level of intelligence at which it would become dangerous.”

Professor Sir Nigel Shadbolt said: “It's not artificial intelligence that worries me. It's human stupidity.”

Professor Yolanda Gil said: “If I fear anything, I fear humans more than machines.” Yolanda added: “My worry is that we'll have constraints on the types of research we can do. I worry about fears causing limitations on what we can work on and that will mean missed opportunities.”

Charles Ortiz, Senior Manager at AI group Nuance, compared AI doomsdayers to tinfoil hat wearers. Charles added, regarding AI threatening our existence: “Apart from the popularity of such doomsday scenarios in science fiction, this outlook appears unfounded: there is currently no evidence to suggest that anything like this would necessarily happen.”

Computer Scientist Jerry Kaplan commented on Stephen Hawking's fear of AI: “Let's at least be open to the possibility that he is wrong or maybe he's a little misguided.”

Four precise markers were created to clearly determine if we've reached the Singularity. These markers unambiguously explain what explosive intelligence actually means.

We will reach the Singularity no later than 2045. When all four points below are fulfilled, the Singularity is achieved. Beyond the Singularity, the extreme degree of intelligence means the four points can never be reversed.
  1.   Immortality for everyone via regenerative medicine.
  2.   All resources limitless due to limitless intelligence.
  3.   Everything is free for everyone. All jobs obsolete.
  4.   All governments, crimes, and wars are obsolete.

Singularity 2045 changed servers in Nov 2014. Instead of re-uploading the old, lengthy pages, perhaps this current simplicity is better (aside from various 404s).

If you want to plunge deeper into these issues, read this article about why rebellious AI is essential.

CUIPTF (modern update coming soon) is a page about sci-tech news feeds that you will appreciate.

You may also be interested in 2020 Vision of Regenerative Medicine, an old S45 page (mainly an HHS archive) that is not yet responsive for small screens.