Sep 22, 2016 Maria Ritola
Do you remember the first time you spotted a self-driving car in traffic? Or the picture in the newspapers in 2016 of a surprised and emotional Lee Sedol after AlphaGo beat him at Go, an ancient Chinese board game? A breakthrough no one expected to see for another ten years materialized as the algorithm developed by Google DeepMind made a move no human ever would, and changed the course of the game.
Looking back at these advances in AI, it’s fair to say that the past few years have not lacked moments of awe. Machines have become steadily better at spotting patterns in huge amounts of data. They are already as accurate as humans, or perhaps even more so, at recognizing images or identifying words in recordings. This, in turn, has led to a sharp increase in powerful AI applications, such as improved disease diagnostics, real-time object recognition in various contexts, and simply better search algorithms.
All this is impressive, but there’s still one fundamental capability that computers haven’t been able to crack: an understanding of human language, our most important tool for grasping what this world is about. As Will Knight described in a recent article, whether we succeed in teaching computers to understand language will determine the scale and the character of what is turning into an artificial intelligence revolution. Either we’ll have machines that we can have conversations with – machines that become an intimate part of our everyday life – or AI systems will become ever more autonomous black boxes.
The new frontier of AI research aims to crack this puzzle and bring language to the core of AI systems. It is a hard problem in computer science because human language is rarely precise. Mastering language requires understanding not only words but also concepts and their relations to each other. A term like “charge” has different meanings depending on the situation; music can be divided into genres in many different ways; people perceive the world differently depending on their generation or their cultural background. Language is considered far more complex than image recognition because words, unlike images, are ambiguous, contextual symbols.
Today, our machines can grasp the meaning of simple language and can speak back to you. X.ai’s Amy Ingram, a virtual assistant trained to schedule your meetings, and voice assistants like Alexa can do this. But when it comes to more complex content, where common sense and contextual understanding are needed, they are often lost. Ask Siri to call you an ambulance, and she might respond that she will, from now on, call you “an ambulance”. AI jokes will continue to amuse, if not kill us, until we discover a way to pass on to machines our knowledge of how the world actually works.
At Iris AI, we’re building an AI to pass a special field of knowledge to machines: science. Our AI science assistant helps students and researchers map out and find information for their R&D process, PhD, or any other research project. We want to remove subjective biases, and the need to know the taxonomy or vocabulary, from the search process. The version we released today works with research paper abstracts. Just drop the URL of an abstract into the tool, and you get an interactive research map on your topic of interest, curated by Iris.
We built this 2.0 version of Iris with a range of techniques, including one called deep learning, the same approach that made self-driving vehicles and the Go victory possible. Deep learning mimics the human brain by roughly modeling the way neurons and synapses in the neocortex change when exposed to new information. You feed data into the algorithm, and it passes that information across layers, “firing” the artificial neurons relevant to the input. The approach, first developed in the 1970s, has become powerful at identifying patterns in data in recent years thanks to the exponential growth in computing power and in available labelled datasets.
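The idea of information “passing across layers” can be sketched in a few lines. This is a minimal, illustrative forward pass, not Iris’s actual model: the layer sizes and random weights are made up, and the ReLU activation stands in for the “firing” of artificial neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, b1, w2, b2):
    # First layer: each artificial neuron "fires" (outputs > 0)
    # only when its weighted input is positive (ReLU activation).
    hidden = np.maximum(0.0, x @ w1 + b1)
    # Second layer: combine the fired neurons into the output.
    return hidden @ w2 + b2

x = rng.normal(size=(1, 4))                 # one input with 4 features
w1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)
w2 = rng.normal(size=(8, 2)); b2 = np.zeros(2)

print(forward(x, w1, b1, w2, b2).shape)     # one output with 2 values
```

Real deep networks stack many more such layers, but the principle is the same: the input is transformed step by step, with each layer activating only the neurons relevant to what it sees.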
To improve, artificial neural networks need training. A common way to train them is a technique called backpropagation, which analyzes whether the right neurons are activated in the network and adjusts the connections accordingly. Building an effective training loop requires a huge set of labelled data. Google’s self-driving cars, for example, have driven millions of miles in the past few years learning to identify pedestrians and other objects on the street. Like the Google car, Iris is exposed to a huge amount of data, from which she grasps the meaning of texts. In addition, she learns from her community of 500 AI trainers, who in the past four months have labelled more than 1,000 text inputs. Labelled datasets matter not only for training, but also for verifying and assessing the quality of the results.
Neural and semantic models are the mainstream approaches to teaching machines to understand language. The main difference between them is that neural models don’t start from the rules of the language, i.e. grammar. Instead, they assume that the important information is carried by the words themselves, and much less by their tense or order. They then attach those concepts to a wider context: mathematically speaking, a vector space in which each word has a numeric representation. Just as children learn to speak by first grasping concepts and their relations, and only gradually picking up more complex language structures, artificial neural networks are trained to understand language with roughly the same logic.
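The vector-space idea can be made concrete with toy numbers. These three-dimensional “word vectors” are invented for illustration (real models learn hundreds of dimensions from data), but the mechanics are the same: words become points in space, and cosine similarity measures how related two concepts are.

```python
import numpy as np

# Hand-made toy vectors; a real model would learn these from text.
vectors = {
    "battery": np.array([0.9, 0.1, 0.0]),
    "charge":  np.array([0.8, 0.3, 0.1]),
    "arrest":  np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# In this toy space, "charge" sits closer to "battery" than to "arrest",
# capturing one sense of the ambiguous word.
print(cosine(vectors["charge"], vectors["battery"]) >
      cosine(vectors["charge"], vectors["arrest"]))
```

This also shows why ambiguity is hard: a single point for “charge” has to average over its electrical and legal senses, and disambiguating between them requires looking at the surrounding context.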
This is the state of the art in AI – and it is just the beginning. Although we have yet to reach – and surpass – the level of human intelligence in language understanding, there’s reason to believe that point is not far off. Thanks to the fast pace of progress across various fields of technology, and to the next generation of AI algorithms, such as the generative models being developed by companies like Vicarious, breakthroughs similar to Go might be just around the corner.
That’s where the real turning point of AI lies: a point where we can actually start talking about AI that seamlessly interacts with human beings and augments our capabilities.