Sep 22, 2016 Maria Ritola
Do you remember the first time you spotted a self-driving car in traffic? Or the 2016 newspaper picture of a surprised and emotional Lee Sedol after AlphaGo beat him at Go, an ancient Chinese board game? A breakthrough no one expected to see for another ten years materialized as the algorithm developed by Google DeepMind made a move no human ever would, changing the course of the game.
Looking back at these advances in AI, it’s fair to say that we haven’t lacked moments of awe in the past few years. Machines have become ever better at spotting patterns in huge amounts of data. They are already as accurate as humans, or perhaps even better, at recognizing images and identifying words in recordings. This, in turn, has led to a sharp increase in powerful AI applications, such as improved disease diagnostics, real-time object recognition in various contexts, or simply better search algorithms.
All this is impressive, but there’s still one fundamental capability that computers haven’t been able to crack: an understanding of human language, our most important tool for grasping what this world is about. As described by Will Knight in his recent article, succeeding in teaching computers to understand language will determine the scale and the character of what is turning into an artificial intelligence revolution. Either we’ll have machines that we can have conversations with – machines that become an intimate part of our everyday life – or AI systems will become more autonomous black boxes.
The new frontier of AI research aims to crack this puzzle and bring language to the core of AI systems. It is a hard problem in computer science because human language is rarely precise. Mastering language requires understanding not only words but also concepts and their relations to each other. The term “charge”, for example, means different things depending on the situation; music can be divided into genres in many different ways; and people perceive the world differently depending on their generation or cultural background. Language is considered far more complex than image recognition because words, unlike images, are ambiguous, contextual symbols.
Currently, our machines can grasp the meaning of simple language and speak back to you. X.ai’s Amy Ingram, a robot trained to schedule your meetings, and voice assistants like Alexa can do this. But when it comes to more complex content, where common sense and contextual understanding are needed, they are often lost. Ask Siri to call you an ambulance and she might respond that, from now on, she will call you “an ambulance”. AI jokes will continue to amuse, if not kill, us until we discover a way to pass on to machines our knowledge of how the world actually works.
At Iris AI, we’re building an AI to pass one special field of knowledge to machines: science. Our AI science assistant helps students and researchers map out and find information for their R&D process, PhD or any other research project. We want to remove subjective biases, and the need to know the right taxonomy or vocabulary, from the search process. The version we released today works with research paper abstracts: just drop the URL of an abstract into the tool, and you get an interactive research map on your topic of interest, curated by Iris.
We built this 2.0 version of Iris with several techniques, including one called deep learning, the same approach that made self-driving vehicles and the Go victory possible. Deep learning mimics the human brain by roughly modeling the way neurons and synapses in the neocortex change when exposed to new information. You feed data into the algorithm and it passes that information across layers, “firing” the artificial neurons relevant to the input. The underlying techniques, originally developed in the 1970s, only became powerful at identifying patterns in data in recent years, thanks to the exponential growth in computing power and available labelled datasets.
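The idea of information passing across layers and “firing” neurons can be sketched in a few lines. This is a toy illustration with made-up weights, not Iris’s actual model:

```python
import numpy as np

def relu(x):
    # An artificial neuron "fires" only when its weighted input is positive
    return np.maximum(0, x)

# Toy two-layer network with random, made-up weights (illustration only)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # input layer -> hidden layer
W2 = rng.normal(size=(3, 2))  # hidden layer -> output layer

x = np.array([1.0, 0.5, -0.2, 0.8])  # one input example
hidden = relu(x @ W1)                # which hidden neurons fire for this input
output = hidden @ W2                 # activations of the output layer
print(output.shape)                  # (2,)
```

Each layer transforms its input and passes the result on; deep networks simply stack many such layers, so patterns detected early on can be combined into more abstract ones.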
To improve, artificial neural networks need training. A common way to do that is through a technique called backpropagation, which analyzes whether the right neurons were activated in the network and adjusts their weights accordingly. A huge set of labelled data is needed to build an effective training loop: Google’s self-driving cars, for example, have driven millions of miles in the past few years to learn to identify pedestrians and other objects on the street. Like the Google car, Iris is exposed to a huge amount of data, from which she grasps the meaning of texts. In addition, she learns from her community of 500 AI trainers, who in the past four months have annotated more than 1,000 text inputs. Labelled datasets are important not just for training, but also for verifying and assessing the quality of the results.
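The training loop itself follows a simple pattern: predict, compare against the labelled answer, and nudge the weights to reduce the error. A minimal sketch with a single linear layer (toy data, not Iris’s model):

```python
import numpy as np

# 100 labelled examples: inputs X and their "correct answers" y
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])  # the pattern hidden in the data
y = X @ true_w

w = np.zeros(3)   # start with untrained weights
lr = 0.1          # learning rate: how big a nudge each step takes
for _ in range(200):
    pred = X @ w                 # forward pass: what does the model say?
    error = pred - y             # compare against the labels
    grad = X.T @ error / len(X)  # gradient of the mean squared error
    w -= lr * grad               # adjust weights toward the right answer

print(np.round(w, 2))  # close to [ 2.  -1.   0.5]
```

Backpropagation in a real deep network does the same thing, but propagates the error backwards through every layer so that each weight gets its own gradient.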
Neural and semantic models are the mainstream approaches to teaching machines to understand language. The main difference between them is that neural models don’t start from the rules of the language, i.e. grammar. Instead, they assume that the important information is carried by the words themselves, and not so much by their tense or order. They then attach those words to a wider context, or, mathematically speaking, to a vector space where each word has a numeric representation. Just as kids learning to speak start by grasping concepts and their relations, and only gradually pick up more complex language structures, artificial neural networks are trained to understand language with roughly the same logic.
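The vector-space idea can be made concrete: each word becomes a list of numbers, and words used in similar contexts end up pointing in similar directions. The 3-dimensional vectors below are made up for illustration; real models learn hundreds of dimensions from large text corpora:

```python
import numpy as np

# Hand-crafted toy word vectors (real embeddings are learned, not written)
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "apple": np.array([0.1, 0.9, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, i.e. similar meaning
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(vectors["king"], vectors["queen"]))  # higher: related words
print(cosine(vectors["king"], vectors["apple"]))  # lower: unrelated words
```

Once words live in such a space, “meaning” becomes geometry: the model can measure how close two concepts are without ever being told a grammar rule.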
This is the state of the art in AI, and it is just the beginning. Although we have yet to reach, let alone surpass, the level of human intelligence in language understanding, there’s reason to believe we’re not far from that point. Thanks to the fast pace of progress across various fields of technology, and to the next generation of AI algorithms, such as the generative models being developed by companies like Vicarious, breakthroughs similar to Go might be just around the corner.
That’s where the real turning point of AI lies: a point where we can actually start talking about AI that seamlessly interacts with human beings and augments our capabilities.