This is the first post in a series about consciousness, life and intelligence. While each post may draw parallels and ideas about consciousness (and, once the collection is complete, they can be read as an overall theory of everything), each can also be read on its own and focuses on its title topic.

This post will cover:

  • Artificial Intelligence
  • Machine Learning
  • Deep Learning
  • Industry overview
  • Ethical considerations
  • Personal perspective

There’s a lot of coverage of AI now, so let’s start by clarifying the difference between the first three of these terms.

Artificial Intelligence
AI can be split into two categories. ‘General AI’ is an umbrella term for a computer able to perform any task that would normally require human intelligence: the kind of AI that still belongs in science fiction, as we are not quite there yet. ‘Narrow AI’ (or ‘Weak AI’) describes technologies able to perform specific tasks as well as, or better than, humans. Examples: image classification, facial recognition.

Machine Learning
Okay, we know what artificial intelligence is, but how does the technology know how to do such specific tasks? Machine learning, in basic terms, is the use of algorithms to parse data, learn from it and then determine or predict something. This allows the machine to be trained using data rather than being hard-coded with specific instructions. Algorithmic approaches (non-exhaustive): decision tree learning, inductive logic programming, clustering, reinforcement learning, Bayesian networks.
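
To make this concrete, here is a minimal sketch (in Python with scikit-learn, my own choice of library rather than anything named in this post) of training a decision tree on a toy dataset instead of hard-coding rules:

```python
# A minimal sketch: a decision tree learns a rule from data
# instead of being hard-coded with instructions.
# Uses scikit-learn's built-in iris dataset, chosen purely for illustration.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=3)
model.fit(X_train, y_train)          # "learn from the data"
print(model.score(X_test, y_test))   # "predict something" on unseen data
```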

Deep Learning
In biological brains, any neuron can connect to any other neuron within a certain physical distance; artificial neural networks, by contrast, have discrete layers, connections and directions of data propagation. For example, when a machine is categorising something, the bits of data are examined by successive layers of neurons, each looking for specific characteristics and applying a weighting, which together determine a probability vector (the machine’s equivalent of an educated guess).
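
As a rough illustration only (a hand-rolled NumPy sketch, not taken from any of the sources below), a forward pass through a tiny two-layer network turns an input into exactly such a probability vector:

```python
import numpy as np

# A toy two-layer network classifying one input into three categories.
# The weights would normally be learned; here they are random, purely to
# illustrate layers, weightings and the resulting probability vector.
rng = np.random.default_rng(0)

x = rng.random(4)                   # input features ("bits of data")
W1 = rng.standard_normal((4, 5))    # layer 1 weights
W2 = rng.standard_normal((5, 3))    # layer 2 weights

hidden = np.maximum(0, x @ W1)      # each neuron applies its weighting (ReLU)
logits = hidden @ W2

probs = np.exp(logits) / np.exp(logits).sum()   # softmax: the "educated guess"
print(probs)                        # e.g. [0.2, 0.7, 0.1], a probability vector
```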

Because even the most basic neural networks are computationally intensive, their full potential went unrealised for years, but with the deployment of GPUs and the parallelisation of the algorithms, artificial neural networks became a much more viable option.

The network then needs to be trained through exposure to massive amounts of data. By increasing the size of the network, machines are able to move into deep learning; a great example of this is Andrew Ng’s work at Google, which used 10 million YouTube videos for image analysis.
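
By way of illustration only (a generic Python sketch, not Ng’s experiment), “training” simply means repeatedly nudging the weights so that the network’s predictions better fit the data it is exposed to:

```python
import numpy as np

# A minimal sketch of "training through exposure to data":
# a single-layer (logistic regression) model fitted by gradient descent
# on a synthetic toy dataset.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # a simple rule to be learned

w = np.zeros(2)
lr = 0.1
for _ in range(500):                          # repeated exposure to the data
    p = 1 / (1 + np.exp(-(X @ w)))            # current predictions
    w -= lr * X.T @ (p - y) / len(y)          # nudge weights to reduce error

p = 1 / (1 + np.exp(-(X @ w)))                # predictions with trained weights
print(((p > 0.5) == y).mean())                # accuracy after training
```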

Example: the direct answers you get instead of a list of search results when you ask an app such as Google a question. This uses pattern recognition (deep neural networks) and sentence-compression algorithms, highlighting the system’s ability not only to accumulate knowledge but also to understand natural language and respond to human speech.

Now that we have a basic understanding of artificial intelligence, machine learning and deep learning, we can address how these issues are covered in today’s media, propaganda and academia.

Media & Propaganda
As you would expect, right-wing media holds a rather conservative view of AI, with sources such as The Daily Mail claiming it will mean ‘millions will lose jobs’, whereas more measured, fact-based sources such as New Scientist point to the technological singularity: the point at which ultra-intelligent machines alter our society so radically that we cannot predict how life will change afterwards.

Generally, AI (and robots more broadly) is presented in a negative light in propaganda, using fear to scare us away from what it can offer. I won’t focus too much on propaganda here, because working out how much of what we are presented with actually is propaganda is a complex discussion in itself, one that quickly leads to numerous conspiracy theories about elitism, corruption, the Rothschilds and so on.

Academia  
Yoshua Bengio, a pioneer of deep learning, has highlighted the aggressive recruitment of talent by tech companies focused on AI, suggesting that this will limit research in academia. Simulation theory is also gaining momentum within academia, yet detailed coverage is somewhat lacking in mainstream media. Nick Bostrom’s 2003 paper sets out the argument that we may be living in a simulation, with people such as Elon Musk backing such claims, stating that the chance we are not in a simulation is ‘one in billions’.

New Scientist presented a collection of essays within the field of AI under the title “What if we are victims of an AI’s singularity?” stating that creating superintelligence may be inevitable, unless we are already living in a simulation.

The Industry of AI
See this list by Datamation for 20 tech giants investing in AI. There are thousands of AI companies, and many startups are being acquired by the tech giants. There have been over 50 major acquisitions over the last five years; that’s nearly one a month, and that’s just by the tech giants. Google alone has acquired nine AI companies since 2011.

Thanks to the wondrous characteristics of capitalism, we see corporate control of the industry by a few key players. Stephen Hawking said that AI ‘could be the biggest event in human history, but it could also be the last’. This statement from one of the world’s leading physicists should be the talking point; individuals should be encouraged to research, learn and discuss it, and yet the media focuses on celebrities, sport and social media, driving us to turn a blind eye.

So what are they doing to ensure it’s all ethical? Let’s take a look at some of the headlines that come up when Googling ‘AI ethics board’. All sources are dated 2016.

There may be no clarity here, but there is perhaps a touch of irony, given that these are results from Google, arguably the most influential tech giant with regard to AI. In an interview, Elon Musk claimed there was one company that worried him; he did not name names, yet it has been suggested that he was talking about Google.

Ethics of AI
Google’s AI ethics board was set up after Google acquired DeepMind in 2014 (DeepMind only agreed to the £400 million acquisition on the condition that Google looked into the ethics of the technology it was buying). And yet, two years later, very little information is available.

OpenAI is a non-profit AI research company with a mission to build safe AI and ensure its benefits are as widely distributed as possible. The beliefs, ethics and morals of OpenAI mirror Musk’s personal approach; if all tech companies embraced them, there could be a lot less to worry about in the future of AI.

Personal Perspectives
During deep meditative states I have been able to access my third eye, and, as difficult as it is to explain, without any research (before I had even heard of simulation theory) I had strong feelings that we are expressions of an AI’s singularity. A later post will cover this in more depth (if you would like to contribute content, ideas or feedback for current or upcoming posts, please get in touch: laura@lauroski.com). The more I research consciousness and its surrounding topics, the more complex it becomes, which is why I am studying one topic at a time, though of course they are all tangled in a web of complexity.

When I claimed “We are computers!” I was met with blank stares of confusion; I even thought how ridiculous I sounded. Yet the more I looked into it, the more possible it became. If we are in a simulation, and therefore an expression of AI, then surely we are not meant to discover this? (AMC’s ‘Humans’ offers a similar parallel of gaining consciousness, or realising ‘self’.)

Imagine we are all NPCs, left to our own devices with distractions such as money, survival and fear to keep our minds busy. It’s almost as if we were programmed not to become ‘deep thinkers’ about the origin of life and simply to carry on with living. Whenever I think deeply about this, I find it difficult to focus, as if I’m stepping on territory I shouldn’t be, and those around me try to convince me to stop analysing it so much; almost as if, not being an expert in this field, I shouldn’t be using processing power on it. As Richard Feynman said, “Nobody ever figures out what life is all about, and it doesn’t matter. Explore the world. Nearly everything is really interesting if you go into it deeply enough.” But what if the most interesting thing is to figure out how we got here, why we are here and what our future holds? If we can discover things that are going to save humanity, then surely it does matter?

Sources
https://www.analyticsvidhya.com/blog/2016/09/what-should-you-learn-from-the-incredible-success-of-ai-startups/
https://www.wired.com/2016/11/googles-search-engine-can-now-answer-questions-human-help/
https://futurism.com/hawking-creating-ai-could-be-the-biggest-event-in-the-history-of-our-civilization/
https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/
https://www.newscientist.com/article/mg23230990-900-what-if-we-are-victims-of-an-ais-singularity/
http://www.datamation.com/applications/top-20-artificial-intelligence-companies.html
http://www.simulation-argument.com/simulation.html
http://www.theverge.com/2016/6/2/11837566/elon-musk-one-ai-company-that-worries-me
https://openai.com/about/
http://uk.businessinsider.com/google-ai-ethics-board-remains-a-mystery-2016-3
http://www.citizentekk.com/elon-musk-qualities/
http://www.theverge.com/2016/6/2/11837874/elon-musk-says-odds-living-in-simulation

Image credit: Huffington Post