POSTED 12.06.17 / BY HOTWASABI

AI & machine learning

Artificial intelligence (AI) is something most people only relate to from sci-fi movies, but the reality is that AI is already working its way into our daily lives through the many apps and business services we use. With the large-scale adoption of smartphones and the rise of cloud services, applications from banking and transportation to marketing and medical care are actually extensions of AI “brains” in the cloud that get smarter and more intuitive every day. Machine learning is a type of AI that gives computers the ability to learn without being explicitly programmed. It focuses on computer programs that can change when exposed to new data, using algorithms that iteratively learn from that data and find hidden insights without being told where to look. This iterative aspect is important: as models are exposed to new data they independently adapt, learning from previous computations to produce reliable, repeatable decisions and results.
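To make the iterative-learning idea concrete, here is a minimal sketch (not from the original post; it assumes scikit-learn and synthetic data) of a model that keeps adapting as new batches of data arrive:

```python
# Illustrative sketch: a classifier that updates itself as it is exposed to
# new batches of data, rather than being explicitly reprogrammed.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

# Simulate data arriving in batches; partial_fit updates the model each time.
for batch in range(5):
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] - X[:, 2] > 0).astype(int)   # hypothetical target
    model.partial_fit(X, y, classes=classes)
    print(f"after batch {batch + 1}: accuracy on that batch = {model.score(X, y):.2f}")
```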

 

  • The AI and machine learning revolution has arrived, and it has the potential to radically change everything about the way we live and work. This is evident in the level of venture capital being invested in the technology: from 2011 to 2015, investments in AI startups rose from $282 million to $2.4 billion, a staggering 751 percent increase.

 

AI can be classified into two fundamental groups:

Applied AI: also known as advanced information processing, this aims to use AI to produce commercially viable systems. Applied AI has enjoyed considerable success in solutions ranging from medical diagnosis and intelligent trading of stocks and shares to maneuvering autonomous vehicles. Natural language processing (NLP), which explores the interactions between computers and human (natural) languages, is also a growing area of interest.

Generalized AI: systems or devices that can, in theory, handle any task. These are less common, but this is where some of the most exciting advances are happening today, and it is the area that has led to the development of machine learning. Often described as a subset of AI, machine learning covers the ability of machines to learn and evolve through exposure to new data; more specifically, it refers to algorithms that allow machines to learn from data inputs rather than being limited to following programmed instructions. Machine learning is one of the characteristics a system must possess in order to be considered artificially intelligent.

 

The most powerful form of machine learning in use today is called “deep learning”. It builds a complex mathematical structure called a neural network from vast quantities of data. Designed to be loosely analogous to how a human brain works, neural networks themselves were first described in the 1940s, but it is only in the last three or four years that computers have become powerful enough to use them effectively. There are three main factors to consider when designing a machine learning system (a brief code sketch follows the list):

Feature extraction: determines what data to use in the model. Sometimes this simply means feeding in all the raw data, but many machine learning techniques can build new variables, called “features”, which aggregate important signals that are spread out over many variables in the raw data.

Regularization: determines how the data are weighted within the model. Regularization is a way to split the difference between a flexible model and a conservative one, usually by adding a “penalty for complexity” that forces the model to stay simple. There are many flavors of regularization, but the most popular, called “LASSO”, is a simple way to combine both selection and shrinkage, and it is probably a good default for most applications.

Cross-validation: tests the accuracy of the model. The most important test is whether the model is accurate “out of sample”, that is, when it makes predictions for data it has never seen before.
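As an illustration of the last two factors, here is a minimal sketch (synthetic data, scikit-learn assumed; not part of the original post) of LASSO regularization with the penalty strength chosen by cross-validation and accuracy checked out of sample:

```python
# Illustrative sketch: LASSO (L1-regularized regression) with the penalty
# strength selected by cross-validation, evaluated on held-out data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

# Synthetic data: 50 raw features, only 10 of which carry real signal.
X, y = make_regression(n_samples=1000, n_features=50, n_informative=10,
                       noise=10.0, random_state=0)

# Hold out data the model never sees, to measure "out of sample" accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LassoCV uses 5-fold cross-validation to pick the complexity penalty;
# the L1 penalty shrinks coefficients and drives uninformative ones to zero.
model = LassoCV(cv=5).fit(X_train, y_train)

print("features kept:", int(np.sum(model.coef_ != 0)), "of", X.shape[1])
print("out-of-sample R^2:", round(model.score(X_test, y_test), 3))
```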

 

Deep learning is only possible with big data, because a tremendous amount of data is needed to “teach” AI systems. The other necessary component is the algorithmic power to make sense of all that data. The enormous scale of data available to firms poses several challenges: big data may require advanced software and hardware to handle and store it, and analysis has to adapt to the size of the dataset. AI applications powered by machine learning depend on data to develop more predictive models; the more data, and the more representative it is of the concepts you need to learn, the better the applications become. Analytics will always struggle to keep up, as the total volume of data is expected to rise from the 4.4 zettabytes created in 2013 to 44 zettabytes by 2020, and to 180 zettabytes by 2025 (according to IDC). As more data and better algorithms become available, more automation is possible along with better predictions.

 

AI adds an intelligence layer to big data to tackle complex analytical tasks much faster than humans could ever hope to. Instead of telling machines what to do, we let them figure it out for themselves based on the data we give them; ultimately, they tell us what to do. AI has come a long way over the past decade, and some of these applications are already changing the nature of business in a variety of fields. The combination of AI and big data in the financial industry has had a major impact on how the stock market works. As the mathematical models the industry uses to predict market shifts have synthesized more and more data over time, they have become smarter, to the point where some believe they will eventually be able to accurately predict both market trends and human influences on the market.

 

Another area where AI and machine learning have made a big impact is medicine: deep-learning systems have already been developed to help diagnose breast and heart diseases, and a group of researchers from the University of Nottingham, U.K. recently developed a machine learning algorithm that can predict a patient’s chances of having a heart attack or stroke. AI can estimate how likely a patient is to contract a certain disease in a matter of seconds, at an accuracy comparable to that of a qualified doctor, and it has even been known to pick up on indicators that doctors have missed.

 

Over recent years, industrial and manufacturing companies have started to invest in AI/machine learning solutions, where the value of AI lies in transforming data from multiple sensors and routine hardware into intelligent predictions for better and faster decision-making. Some 15 billion machines are currently connected to the Internet, and Cisco predicts the number will surpass 50 billion by 2020. Connecting machines together into intelligent automated systems in the cloud is the next major step in the evolution of manufacturing and industry. However, there are specific issues facing industrial adoption of AI solutions:

Industrial data is often inaccurate: an agricultural application may need to calculate how far a combine needs to drill, and putting a moisture sensor into the ground to take measurements creates the relevant data set. However, the readings can be skewed by extreme temperatures, accidental mishandling, hardware malfunctions, or even a worm that has been accidentally skewered by the device (a small data-cleaning sketch follows this list).

For industrial applications, AI runs at the edge as well as in the cloud: consumer data is processed in the cloud on computing clusters with seemingly infinite capacity, and the consequences of a false analysis are low (a consumer is not overly bothered by the occasional inaccurate purchase or music recommendation). The stakes, and the demands on responsiveness, are much higher for industrial applications, where large sums of money and human lives can be at risk. In these cases, industrial features cannot be trusted to run entirely in the cloud and must be implemented on location, at the edge: data is generated by sensors at the edge, served to algorithms, modeled in the cloud, and then moved back to the edge for implementation. In smart safety systems that use big data and AI/machine learning to predict machine failure or worker risk, for example, the actionable outcome has to be implemented by local control of the machine or by specific warnings to workers.

Complex models must be interpretable: consumers rarely ask why Amazon makes specific recommendations, but when the stakes are higher, people ask questions. Technicians who have been in the field for many years will not trust machines that cannot explain their predictions.
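As an illustration of the first issue, here is a minimal sketch (hypothetical soil-moisture readings, pandas assumed; not from the original post) of the kind of basic cleaning industrial sensor data typically needs before it reaches a model:

```python
# Illustrative sketch: cleaning a noisy stream of soil-moisture readings
# (in percent) with a range check and a rolling median before modeling.
import pandas as pd

raw = pd.Series([31.2, 31.5, 30.9, 250.0, 31.1, 30.8, -5.0, 31.0, 31.3])

# Range check: moisture must lie between 0 and 100 percent; anything else
# (a frozen sensor, a glitch, that unlucky worm) becomes a missing value.
valid = raw.where(raw.between(0, 100))

# Fill the gaps and smooth one-off spikes with a short rolling median so the
# downstream model sees an evenly sampled, plausible signal.
clean = (valid.interpolate(limit_direction="both")
              .rolling(window=3, center=True, min_periods=1)
              .median())

print(clean.round(1).tolist())
```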

 

There are many apps and services that now rely on AI/machine learning as a key enabler of the solution. Facebook's News Feed uses machine learning to personalize each member's feed: if a member frequently stops scrolling to read or "like" a particular friend's posts, the News Feed will start to show more of that friend's activity earlier in the feed. Machine learning combined with linguistic rule creation allows companies to mine Twitter feeds to know what customers are saying about them. One of the most common applications for machine learning tools is making predictions: personalized recommendations for customers, forecasting long-term customer loyalty, anticipating the future performance of employees, and rating the credit risk of loan applicants. Using data from multiple sources, AI can build a store of knowledge that ultimately enables accurate predictions about you as a consumer, based not just on what you buy, but on how much time you spend in a particular part of a site or store, what you look at while you’re there, and what you do buy compared with what you don’t. AI/machine learning is also changing the nature of human-computer interaction design, moving away from screens and towards natural or hidden UIs where people interact by using voice, gestures, or even just thought.
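To show the basic idea behind a “customers who liked this also liked” recommendation, here is a minimal sketch (a tiny made-up interaction matrix; not any vendor's actual system) of item-based collaborative filtering:

```python
# Illustrative sketch: item-based collaborative filtering on a tiny
# implicit-feedback matrix (rows = users, columns = items, 1 = engaged).
import numpy as np

interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(interactions, axis=0)
similarity = (interactions.T @ interactions) / np.outer(norms, norms)

# Score unseen items for user 0 by similarity to items they already engaged with.
user = interactions[0]
scores = similarity @ user
scores[user > 0] = -np.inf          # do not re-recommend items already seen
print("recommend item:", int(np.argmax(scores)))
```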

 

Google has open-sourced its machine learning software suite, TensorFlow. It was built from the ground up to be usable both by researchers at the company attempting to understand the powerful models they create and by the engineers already taking those models and using them to categorise photos or let people search with their voice. In 2013, Google picked up deep learning and neural network startup DNNresearch from the computer science department at the University of Toronto; this acquisition reportedly helped Google make major upgrades to its image search feature. In 2014 Google acquired British company DeepMind Technologies for some $600M, and with it recently beat a human world champion at the board game Go. More recently, in Q1’17, Google acquired the data science competition platform Kaggle. Google is also poised to begin a grand experiment in using machine learning to widen access to healthcare: if it is successful, it could see the company help protect millions of people with diabetes from an eye disease that leads to blindness.
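To give a sense of what working with TensorFlow looks like, here is a minimal sketch (synthetic data, the high-level Keras API assumed; not Google's production code) that defines and trains a tiny neural network:

```python
# Illustrative sketch: a small neural network built with TensorFlow's Keras
# API and trained on synthetic binary-classification data.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")   # hypothetical target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

loss, accuracy = model.evaluate(X, y, verbose=0)
print("training accuracy:", round(float(accuracy), 3))
```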

 

Some of the world’s largest companies and most recognized universities are turning to AI and machine learning to boost productivity. Microsoft, for one, is working on an automatic translation AI that will transform your PowerPoint presentation into any language in real time, and Microsoft Ventures has co-led funding of two artificial intelligence startups, Agolo and Bonsai: Agolo uses AI to create summaries of information in real time, while Bonsai enables the automated management of machine learning algorithms. IBM has been a long-standing proponent of AI with the creation of Watson, which combines artificial intelligence and sophisticated analytical software into a “question answering” machine capable of answering questions posed in natural language, developed in IBM's DeepQA project. The Watson supercomputer processes at a rate of 80 teraflops to replicate (or surpass) a high-functioning human’s ability to answer questions, accessing 90 servers with a combined data store of over 200 million pages of information, which it processes against six million logic rules. Apple has been ramping up its M&A activity and is now ranked second with a total of 7 acquisitions; it recently acquired Tel Aviv-based RealFace, valued at $2M. Apple also recently went public with AI and ML kits as part of iOS 11, in an attempt to fuel AI development on iOS devices.

 

General Electric (GE), the originator of the Industrial IoT, recently acquired Bit Stew Systems, provider of MIx Core, a platform for handling data integration, data analysis, and predictive automation for connected devices on the Industrial IoT. GE also acquired Wise.io, a machine-learning powered service that delivers rapid, collaborative prototyping and production deployment of advanced machine learning solutions and helps businesses find patterns and trends in their vast data stores.

 

Other companies are active too. Spotify has been snapping up AI-focused startups with the aim of improving its content recommendations and targeted advertisements. Uber acquired Geometric Intelligence, an AI startup co-founded by noted scientists aiming to 'redefine the boundaries of machine learning'. Facebook is reportedly using artificial intelligence to produce detailed maps illustrating population density and internet access across the globe. MetaMind and Salesforce.com have combined to offer customers real AI solutions with breakthrough capabilities that further automate and personalise customer support, marketing automation, and many other business processes. Element AI, a Montreal-based startup and incubator, recently announced a Series A round of $102 million to establish itself as the go-to place for any company, big or small, that wants to include AI solutions in its business.

 

Machine learning differs from traditional statistics in that it is not focused on causality; you might not need to know what happens when you change the environment. Instead, the focus is on prediction, which means you may only need a model of the environment to make the right decision. AI and machine learning are already automating and improving many everyday tasks, such as mobile search and the organization of photos, and they are helping a new breed of companies disrupt industries from medical research to agriculture. AI is also driving the advancement of voice-powered personal assistants like Siri and Alexa, as well as more fundamental underlying technologies such as behavioral algorithms, suggestive search, and self-driving vehicles. AI/machine learning is also a big part of the success behind online recommendations like those from Amazon and Netflix. It is used by chatbots for customer support (e.g. IKEA's Ask Anna), in the creation of financial summaries, sports recaps, and fantasy sports reports, and to power many smart home devices such as the Google Nest and Honeywell Lyric thermostats. Many video games now use AI as part of the game design process and the game mechanics.

 

As with most technological revolutions, AI and machine learning come with their own unique set of problems and issues. These include: the need for vast amounts of data to power deep learning systems; our inability to create AI that is good at more than one task; and the lack of insight we have into how these systems work in the first place. Big tech firms like Google, Facebook, Apple, IBM, and Microsoft have access to abundant data and so can afford to run inefficient machine learning systems and improve them over time. Smaller startups might have innovative ideas, but they will not be able to follow through without access to big data. The problem is not really about finding ways to distribute data, but more about making our deep learning systems more efficient and able to work with less data.

 

Another big problem is making techniques like deep learning more understandable to their creators and accountable to their users. The computers that run these services have effectively programmed themselves, and they have done it in ways we cannot understand; even the engineers who build the systems cannot fully explain their behavior. The interplay of calculations inside deep neural networks is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. Explainability will be at the core of any evolving relationship between humans and intelligent machines; otherwise it will be hard to predict when failures might occur, and it is inevitable that they will. It might simply be in the nature of intelligence that only part of it is exposed to rational explanation, with the rest instinctual, subconscious, or inscrutable; if so, we may have to either trust AI’s judgment or do without it. That judgment will also have to incorporate social intelligence: just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. One example: starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach.

 

With the right mix of technical skill and human judgment, machine learning can be a useful new tool for decision-makers trying to make sense of the inherent problems of big data. Everything we have of value as human beings, as a civilization, is the result of our intelligence, and what AI could do is essentially act as a catalyst that transforms human intelligence and gives us the ability to move our civilization forward in all kinds of ways: helping to conquer challenges such as sustainable fusion energy, reducing world hunger with more effective farming methods, and providing unparalleled computational capability with quantum computers. Many people share a concern that we are racing towards the Singularity, a point at which artificial intelligence outstrips our own and machines go on to improve themselves at an exponential rate; some experts even suggest that AI will fully exceed human intelligence within the next three decades. It is clear that AI and machine learning still have a long way to go before they show signs of true intelligence, but it is incumbent on humanity to ensure that, during this period of AI evolution, we are fully prepared for that moment and have established a way in which AI respects and fits within the social norms of the future.

 

CATEGORIES

Design

Future

Big Data

Development

AI/Machine Learning

Technology
