The AI Times Monthly Newspaper

Curated Monthly News about Artificial Intelligence and Machine Learning
5 Ethical Quandaries Posed By The Rise of the Robots

The robots are coming, whether we like it or not. Clearly, if you're reading this, then you are probably as excited as we are about the coming robo-lution. But no matter how exciting this new era may be, the rise of the robots does not come without its ethical quandaries. There is plenty that we need to address, and soon…

1. Jobs

Perhaps the most widely covered concern arising from increased automation and AI is the impact on employment. We could be on our way to a massive jobs crisis, in which not only blue-collar but also white-collar jobs are taken over by intelligent machines and robots.

As Bill Gates remarked during a speech in October 2014, “technology over time will reduce demand for jobs, particularly at the lower end of skill set… 20 years from now, labour demand for lots of skill sets will be substantially lower. I don’t think people have that in their mental model.”

Whilst there are many, many doomsday articles and soundbites out there stirring up concerns, there is a strong counterargument.

With every industrial revolution we have seen so far, jobs have not been so much lost as altered. Those stuck in their ways may falter, as they have done since the first industrial revolution, when jobs in agriculture dramatically declined and people migrated to the cities to work in manufacturing, as the emphasis of work shifted from independent production to mass production. But new opportunities will inevitably become available.

Electricity and telephones marked the second industrial revolution in the early twentieth century, creating lots of fear and distrust amongst the general population, who were concerned about the safety and implications of bringing this new, strange technology into our homes. In the third industrial revolution, the internet was born, transforming the way we work, the way we live, the way we communicate, the way we play, in absolutely unprecedented ways. Our society now barely resembles that of the pre-internet era.

The internet had a massive effect on the way we do our jobs. Email, digital documents, design, ecommerce, and so on, evolved business in ways that both streamlined and facilitated our workloads. But, moreover, the internet also created swathes of new jobs that had never existed before. Web design and development, for example, are jobs that simply didn't exist before. Writers' lives changed immeasurably – no longer did they have to clamour for their own column in a local rag, or pray for a publisher to give their manuscript the thumbs-up. With the internet, anybody could be a writer – for better or worse – by starting a blog or becoming a content creator.

The point is that we cannot anticipate the new roles that will emerge from the rise of the robots. Coding and programming, data science in all its many facets, and general STEM-related roles will, of course, abound. However, we can also expect a price premium on work that can only be accomplished by humans. Rather than competing against the vast computing power of the coming AI explosion, we should focus on the things that we can do better than machines. As business author Don Peppers stated in an article on LinkedIn in 2012:

“One way [to beat the machines] is to become very good at dealing with interpersonal issues – people skills. The other way is not to focus on solving problems but on discovering them.”

2. The Ethical Decisions of Self-Driving Cars

There is a runaway trolley on the tracks. Ahead, there are four people tied to the track, unable to move. The trolley is headed straight for them. You have control of a lever that would allow the trolley to switch to a different set of tracks. However, you notice that there is one person on that other set of tracks. You have two options:

Do nothing, and the trolley kills the four people on the main track.

Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the most ethical choice?

What answer would a self-driving car give to this famous philosophical problem, or its associated variations?

There will, undoubtedly, be similar decisions that a self-driving autonomous vehicle will have to make in the course of its lifetime. The vehicle's software would have to make a snap choice between outcomes, at least one of which might cause harm to the vehicle's passengers, or else to an outside party or parties.

3. Ambient Advertising

The home of the future will be smart. Like something out of Beauty and the Beast, we will live in homes where our items and appliances will ‘talk’ to one another. Of course, this is a fountain of endless possibilities for making our lives easier and allowing us to get on with the business of, say, retraining for a new job in the automated world. But the way in which this technology is being developed has a dark underbelly.

Why would companies want our home appliances to talk to one another, and to us in fact? Well, the answer is to sell us stuff. Yes, we need to buy things like groceries and new clothes anyway, and ambient advertising suggesting highly personalised products using AI may make our shopping decisions more accurate, but do we really want that?

Many people already feel burdened by the overwhelming barrage of advertising, and by being addressed en masse as 'consumers', so will we really take to such high-level marketing in our own homes? What are the ethics of this? How much can brands really manipulate us? And if brands can do it, why not our governments?

4. Law Enforcement and Judgment

Imagine a world in which you are judged for a crime, not by a fellow person, but by an intelligent robot. Seems pretty dystopian, right? Well, yes. But, considering judges have to work to the letter of the law, consulting legal precedents and statutes without emotion, maybe a robot judge would not be such a bad thing. We hear stories all the time of judges being lenient on college-boy rapists because they have a 'bright future' ahead, or deciding that a victim was 'asking for it'. If we take personal bias out of the equation, dealing only in facts, will we have a fairer justice system?

On the other hand, robocops are a more troubling prospect. Whilst legal judgments are generally black or white, law enforcement has a lot more context to take into account. Certain actions, taken out of context, could be perceived as a threat, whilst seemingly innocent actions can conceal one. How do you quantify this into data for a robocop to act upon?

Allowing artificial intelligence to have control of crime and punishment is highly problematic. How far will a robot take its responsibility? How much responsibility should it have? Could it merely be a dangerous step towards an AI-ruled police state?

5. Relationships and Sex

Have you seen the film, ‘Her’? If not, go and watch it now.

Finished? Okay, so – we live in a society where people are overwhelmingly lonely. Many people would do anything to find someone they could really connect with, someone who listens and cares about them. What if you had a voice assistant, perhaps one that sounds like Scarlett Johansson, that attends to your every need? Inevitably, you would form a bond with them.

Whilst the technology is nowhere near ready yet, as AI develops, there is every reason to believe it will become intelligent enough for natural conversation, and perhaps something approaching independent thought. What does such a thing do to our already strained human connections? And does it matter?

Then, of course, there’s the murky business of sexbots. The idea of a robotic sexual partner makes most of us a bit squeamish (though there are those who swear by it…) but maybe they’re not such a bad idea. I mean, our attitudes towards sex have evolved massively over the last 100 years – hey, even the last 50 – so what’s to say we won’t all be getting it on robo-style one day? And another thing, sex toys are a thing; why wouldn’t we take our buzzing buddies to the logical conclusion? Now that’s something to debate…

These are just five questions that are being hotly discussed by technologists, journalists, and even governments at the moment. There are more, so perhaps expect a follow-up piece soon. In the meantime, if you have any thoughts to add, drop us a tweet – let’s get this discussion flowing!

Augmenting The Self: Enhancement, Evolution, and Ethics

Transhumanism is the belief that we can use science and technology to evolve beyond our biological bodies and the limitations these bodies place on us. As such, the augmentation of the body and mind is at the forefront of the movement; a movement that is not quite so niche as you might expect.

Technologists and futurists, from Elon Musk and Ray Kurzweil, to Bill Gates and Prof. Stephen Hawking, are all convinced that artificial superintelligence is on its way – that within a matter of perhaps only two decades, AI technology will exceed human intelligence. At that point, these experts warn, humanity could face an unprecedented existential crisis, to rival that of climate change and overpopulation. Elon Musk has gone as far as to say that it is our "biggest existential threat".

Whilst some, like Hawking, advise that we should strongly consider which technologies we progress with and what the future ramifications could be, Musk and Kurzweil (a Director of Engineering at Google) take another approach. What is their solution? To evolve.

Enhancement

Among Ray Kurzweil's ideas for how we can evolve is the use of nanobots. He has gone on record repeatedly to state that, by 2030, we will have nanobots flowing through our bloodstreams. These nanobots, he predicts, will go around healing us at the earliest signs of illness or disease, thus keeping us healthier for longer. These tiny robots, which could be injected using a normal hypodermic syringe, could also play their part in uploading our minds to the cloud. Yes, you read that right… uploading our minds.

Mind upload might sound like the craziest thing you’ve ever heard, but to the chaps in Silicon Valley, it’s old news. Facebook announced earlier in 2017 that they are working on an interface that will allow us to post directly to Facebook using only the power of our minds. And, famously, Elon Musk has backed a company called Neuralink, whose business is to create neural laces that can be injected into the jugular vein, unfold like an umbrella onto our brain, and enhance our cognitive ability – mind upload is a key part of this.

Of course, these predictions would seem absurd if they weren't coming from some of the greatest minds on the planet. Renowned author and academic Noam Chomsky is, however, critical of whether mind uploading will ever really work. He has repeatedly told interviewers that it is patently impossible, given the limits of our understanding of thought and consciousness. Either he's right, or those big science guys know some things that we, and Chomsky, don't.

These ideas form part of the upper end of the scale of transhumanism. However, we are already seeing small bodily enhancements powered by technology creeping into everyday life. For example, a tech startup in Sweden has recently offered their staff the opportunity to have implants placed under the fatty skin of the palm below the thumb. Why? To enable them to open the office doors, use printers, and order food. It’s very basic tech, no more complicated than that with which we microchip our pets, but what it represents is something far bigger.

Evolution

Prostheses, from artificial legs to glasses and contact lenses, have helped us move beyond our weaknesses for centuries. But now, rather than focusing on disabilities and impediments, transhumanists are arguing that, essentially, we are all impeded. No longer suited to our environment, and with the looming threat of artificial superintelligence knocking like the Big Bad Wolf at the door of our straw hut, it’s time those impediments were set right. As Kurzweil puts it:

“Biology is a software process. Our bodies are made up of trillions of cells, each governed by this process. You and I are walking around with outdated software running in our bodies, which evolved in a very different era.”

At no point in the history of the Earth has a species consciously enacted its own evolution. But with the strongest cognitive ability on the planet, humans have evolved to a point at which this possibility is well within our capability.

Evolution, historically, has taken place when a species finds itself ill-adapted to its environment. When the environment is not sufficiently nourishing, the organism must alter the way it functions in order to thrive within that environment. It’s a simple case of evolve or die; survival of the fittest, if you will.

Few with half a (feeble human) brain would deny that our environment is changing. We live on a planet that we have dominated, and that we wantonly destroy to feed our every need and every whim. And artificial intelligence technology, the Frankenstein's monster we are mid-way through creating, has the capacity to alter that environment even more dramatically. If this really is the future, evolving into it is the logical answer.

Ethics

Okay, so let’s say that someone perfects the technology that allows us to inject nanobots into our bodies to keep us fit and healthy. What if we perfect gene therapy to extend healthy, vital life exponentially? And what if Elon Musk and Neuralink manage to work out the whole neural lacing thing and make us superhero smart, smart enough to rival AI? Sounds great, right? But the question is, who gets the privilege?

We all know that technology starts out expensive. Just look at the ubiquitous smartphone, or even the personal computer. Yes, these enhancements will start out as being available only to an elite few, but – like HIV drugs or the iPhone – they will become available to all via the laws of trickle-down economics. That is, if the current capitalist system holds. Who can say?

Then, of course, there's the basic question of whether we should be doing this at all. Well, the fact is that we are way past that. We already have AIs talking to one another in a language that researchers cannot understand, AIs that can beat the world's best Go player (a feat that far surpasses the impressiveness of winning a game of chess), and AIs that can name guinea pigs (holy moly!). To evolve, as Musk and Kurzweil would argue, is not just a whimsical move, but a defensive one.

Is Artificial Intelligence Really the Next Technological Revolution?

A comparison of AI with previous technological breakthroughs

There’s no shortage of hype around artificial intelligence. Fueled by recent scientific advances in the field, AI is now characterized as the “new electricity”—a technological breakthrough that will revolutionize the world.

But are we sure that’s the case?

Many booms and busts have punctuated AI's more than half a century of history. Excessive expectations and promises, which drove the first AI bubble in the 1980s, have been followed by decreased funding and interest — the so-called "AI winters." But this time feels different. Some $5 billion in venture capital was funneled into AI last year. Coupled with recent acquisitions of AI startups by tech companies such as Facebook, Google, and Apple, and the exploding interest from other companies — reflected in the skyrocketing mentions of AI in company earnings calls — it seems rather obvious that AI is here to stay.

But is AI indeed the next major technological revolution? Is there a generic structure of technological revolutions that can be identified historically? If so, can the insights of previous technological revolutions be applied to AI? And if AI represents a major technological breakthrough that is comparable to electricity and steam, in which phase of its development do we currently find ourselves?

In her work on the economics of innovation and technological change, socio-economist Carlota Perez has traced the discontinuities and regularities in the process of innovation. Similarly to Thomas Kuhn’s work on the nature of scientific discoveries — in which scientific revolutions disrupt the process of science and trigger the formation of new scientific paradigms — Perez identifies a sequence of technological revolutions and “techno-economic paradigms” that have disrupted our industries and societies.

A technological revolution — which locally disrupts a specific market or industry in terms of new inputs, methods, and technologies — becomes a techno-economic paradigm when it starts to globally transform organizational structures, business models, and strategies in markets and sectors beyond those in which the technological breakthrough initially erupted. Techno-economic paradigms, in other words, represent a collectively shared best-practice model of the most successful and profitable uses of the new innovations. By enabling the widespread diffusion and adoption of the emerging technologies across economies and societies, techno-economic paradigms fundamentally affect our socio-institutional frameworks.

As Perez has shown, two distinct phases can be identified in each technological revolution. There is an “installation” phase, in which innovators and entrepreneurs explore the potential of the new technology. In this phase, the diffusion of a breakthrough technology is often driven by a financial bubble. The installation phase is followed by a “turning point” or phase of readjustment — in which the bubble bursts — and the “deployment” period, which diffuses the new technological system across industries, economies, and societies.

Each technological revolution can be characterized further in terms of a specific life cycle, which, as Perez documents, tends to last around half a century. Perez identifies four distinct phases within such a life cycle: an initial period, characterized by explosive growth, innovation, and new products; a phase of constellation, in which new industries, infrastructures, and technology systems are built out; the full expansion of innovation; and a last phase, defined by technological maturity and market saturation.

Perez defines a technological revolution as a set of interrelated radical breakthroughs — that is, singular innovations — that form a constellation of interdependent technologies. A technological revolution, in other words, is a cluster of clusters, or a system of systems of technological innovations. The recent major breakthrough in information technology, for example, formed such a technology system around microprocessors and other integrated semiconductors, from which new technological trajectories opened up: personal computers, software, telecommunication, and the internet emerged from the initial technological system. These new technological systems subsequently created strong interdependence and feedbacks between technologies and markets. The defining features of technological revolutions — as opposed to a random collection of singular innovations — are thus the following: (1) they are interconnected and interdependent in their technologies and markets; and (2) they have the disruptive potential to radically transform the rest of the economy and society.

Historically, Perez identifies five such major technological meta-systems, which were initially triggered by a technological (or scientific) breakthrough and, then, expanded across industries and economies. The first such disruption of the late 18th century was organized around the mechanization of factories, water power, and the canal networks. This was followed by the second revolution, which initiated the age of steam and railways. In the late 19th century, electricity, steel, and heavy engineering intensified international trade and globalization. In the last century, two technological revolutions transformed our economic and industrial system: the age of oil, mass production and the automobile was followed by the era of information and communication technology.

What made these technological disruptions revolutionary were not only the new interrelated technologies, industries, and infrastructures but their transformative potential defined in terms of extraordinary increases in productivity that they enabled. When a technological revolution propagates across industries and economies, it radically transforms the cost structure of production by providing new powerful inputs (such as steel, oil, or microelectronics). Thereby, it unleashes new innovations and interrelated technological systems, which renew existing industries and create new ones.

Perez provides a powerful framework that can be applied to the current state of AI. Given Perez’ conceptual model of the diffusion of technological innovations, the new AI industries and systems that are forming now can be located between phase one, the period of “paradigm configuration,” and the phase of “full constellation,” in which new industries emerge and infrastructures get installed. The explosive growth and innovation we are experiencing at the moment typically characterizes phase one. While new industries, technological systems, and infrastructures emerge in phase two — which results in intensified investment and market growth — the technological revolution is transforming its core industries, but has not yet permeated economies and societies as a whole.

While the recent extraordinary investments in AI might lead to another bubble — which might indicate the "turning point" or phase of readjustment in Perez' model — it seems that the economic space today is, indeed, different than during the last AI bubble (or, perhaps, the bursting of the first AI bubble in the 1980s already marked the "turning point": over-inflated expectations crashed when cheap UNIX workstations triggered the fall of over-priced expert systems running on LISP, and the Dreyfus brothers published their Mind over Machine, which undermined some of the pretentious and flawed assumptions of the first generation of AI research). Not only has there been massive growth in computation, GPUs, storage, datasets, user demand, high levels of R&D, and VC investment, but governments have also started novel AI initiatives. The UK government recently announced increased funding for AI research; the Chinese government gave AI priority status in R&D and commercialization; and the US government funded AI research with more than $1 billion last year.

The role of public R&D is singularly important for technological revolutions, as the previous five major technological surges have all been, to some extent, government-sponsored (such as the canal and railway networks, or the Internet, which was heavily funded by government agencies such as DARPA). Historically, the synergistic financing of governments and financial capital, such as venture capital, has been crucial for the diffusion and adoption of technological breakthroughs and their consolidation into techno-economic paradigms.

But in what sense, then, does AI share the features of the previous technological revolutions that can be historically identified? The emerging AI technology systems clearly exhibit the interconnectedness and interdependence in their technologies and markets that characterize the previous technological revolutions. AI represents not just another new dynamic industry added to the existing production structure; rather, it provides the means to modernize almost all existing industries and activities. New AI-powered industries and infrastructures are forming at the moment that not only fundamentally re-organize existing industries, but have started to deeply affect organizational structures, business models, and strategies. As was the case with steam and electricity, these technological and scientific breakthroughs are not only productivity-enhancing in the core industries but are beginning to permeate various peripheral sectors and markets.

In this sense, AI has all the features of what economists call a general purpose technology (GPT). In economics, a GPT is defined as a generic technology which (1) can be improved, (2) can be widely used and applied, and (3) expands the space of possible innovations and investments. Similar to historical GPTs, such as the steam engine, electricity, or microelectronics, these new interconnected and interdependent AI-based technology systems and markets have the potential not only to enable innovations in products, processes, and organizational structures — as previous GPTs did — but also to radically transform our economic, social, and political structures.

AI has all the defining features of previous technological revolutions — it is becoming a cluster of interrelated generic technologies and organizing principles that are starting to spread far beyond the confines of a specific industry. At the core of all the previous technological revolutions has been an all-pervasive low-cost input, often a new material or energy source combined with novel products, processes, and infrastructures. Similar to electricity, steam, or microelectronics, AI — fueled by GPU-accelerated computing, massive increases in available data, and drastically reduced costs — seems to be on the cusp of becoming such a cheap and ubiquitous new input. Similar to steam in the 18th century or electricity today, distributed AI could soon power almost all products and processes and deeply permeate existing and novel infrastructures and industries.

Indeed, AI could become the “new electricity.” We are not there yet. But given Perez’ model of diffusion and adoption of technological innovation, we may indeed be at the cusp of a revolution.

AI STARTUP SHERPA WINS THREE PRESTIGIOUS AWARDS IN TWO MONTHS

  • SHERPA has won three major awards in the past two months: the Red Herring Top 100 Europe 2017, the White Bull Award and the Best Mobile App Award
  • SHERPA was a finalist at CognitionX, an awards ceremony in London which recognises excellence in Artificial Intelligence, where it was surpassed only by DeepMind (Google)
  • SHERPA has also been featured by the technology consultancy Gartner as one of the top three intelligent apps of the moment

 

Bilbao, June 29th, 2017 – SHERPA, the predictive Artificial Intelligence platform, has been recognised as one of the best European startups, winning three major awards in the past two months. SHERPA, a personal assistant which integrates with smart devices, has been awarded the Red Herring Top 100 Europe 2017 award, the White Bull Award 2017 and the Best Mobile App Award.

The Red Herring Top 100 Europe celebrates the top private companies in the European region. Red Herring’s editorial team analyses hundreds of cutting edge companies and technologies and selects those that are positioned to grow at an explosive rate.

The White Bull Awards, held in association with the multinational Qualcomm, bring together Europe’s top technology and media leaders, entrepreneurs, innovators, investors, and visionaries. They select the best startups in Europe based on the criteria of innovation, leadership, growth and potential for growth.

CognitionX, in association with the Alan Turing Institute, awards the best and most innovative contributions to Artificial Intelligence of the year in a ceremony in London. SHERPA was shortlisted in the Best Innovation in Artificial Intelligence category, surpassed only by Google DeepMind.

Following on from these successes, SHERPA’s mobile app has won the Best Mobile App Interface Award.

Following on from these successes, and thanks to its new algorithms, SHERPA's app has also been recognised by Gartner (the prestigious technology consultancy) as one of the top three intelligent apps of the moment.

These aren't the only awards SHERPA has been given recently. In December 2016, SHERPA won the prestigious Digital Top 50 Award. The Digital Top 50 Awards were founded by Google, McKinsey and Rocket Internet to recognise and reward bold talent, cutting-edge innovation and sharp business acumen amongst the most promising European startups.

 

About SHERPA

SHERPA is the start-up responsible for creating the Predictive Personal Assistant that directly competes with those developed by Google and Apple. With headquarters in Bilbao, the platform is based on the most advanced technology and algorithms in Artificial Intelligence.

In 2015, SHERPA reached an agreement with Samsung to install its software in the Korean company's devices. Last year, SHERPA raised $6.5 million in a Series A round of financing.

 

You can find more information on SHERPA here: http://sher.pa

 


 

For more information (press): Text100 – Virginia Huerta (Virginia.huerta@text100.es) +34 91.561.94.15 / press@sher.pa

Artificial Intelligence Startup Biggerpan Welcomes World-Renowned Researcher Dr. Gregory Grefenstette as Chief Scientific Officer to Lead its Predictive Technology Works

A scientific expert and reputed technology leader, Dr. Grefenstette will lead research and development efforts of the world’s first predictive artificial intelligence for the mobile web

 

SAN FRANCISCO, June 13, 2017 /PRNewswire/ — Biggerpan, a startup company which develops a predictive artificial intelligence (AI) technology that anticipates people’s needs on mobile, is pleased to welcome Dr. Gregory Grefenstette as Chief Scientific Officer. A world-renowned expert, Dr. Grefenstette brings more than 30 years of experience in the AI industry, and is considered a leading researcher in the field of natural language processing (NLP) and information retrieval. In his newly appointed role, he is responsible for driving the continued research and development of Biggerpan’s breakthrough technology.

Biggerpan's mission is to make the Internet smart on mobile by building the first AI that predicts what you want, so you don't have to search. "As we are shifting away from the traditional keyboard and mouse paradigm, people will need to rely more and more on predictive interfaces," said Eric Poindessault, co-founder and CEO at Biggerpan. "Today we target the mobile user experience, where 75% of Internet use is happening right now; tomorrow, think virtual and augmented reality."

Derived from the latest research in natural language processing, a branch of AI which extracts meaning from text, Biggerpan's proprietary technology is able to analyze and understand any web page in real time in order to make the most relevant recommendations. For example, if you are reading an article about a movie, it instantly offers to show you the trailer or let you buy tickets online. Biggerpan's technology takes text comprehension to new heights, as it understands the meaning of each word based on the context of an entire page rather than just the surrounding words, which allows for more effective disambiguation. It goes further thanks to a unique multi-class entity recognition approach which allows it to identify topics almost instantaneously.
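
To make the page-level idea concrete, here is a minimal, hypothetical sketch of context-based disambiguation – not Biggerpan's actual system. Each candidate sense of an ambiguous word is scored against an embedding of the whole page, and the best-matching sense wins; the toy vectors stand in for a real pretrained embedding table.

```python
import numpy as np

# Toy stand-in for a pretrained word-embedding table (an assumption).
embeddings = {
    "movie":   np.array([1.0, 0.1]), "trailer": np.array([0.9, 0.2]),
    "tickets": np.array([0.8, 0.3]), "species": np.array([0.1, 1.0]),
    "habitat": np.array([0.2, 0.9]),
}

def embed_text(words):
    """Average the vectors of all known words into one context vector."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vecs, axis=0)

def disambiguate(page_words, senses):
    """Return the sense whose keywords best match the page context (cosine)."""
    ctx = embed_text(page_words)
    def score(keywords):
        sv = embed_text(keywords)
        return float(ctx @ sv / (np.linalg.norm(ctx) * np.linalg.norm(sv)))
    return max(senses, key=lambda s: score(senses[s]))

# Is "Alien" on this page a film or a biology topic?
senses = {"film": ["movie", "trailer", "tickets"], "biology": ["species", "habitat"]}
print(disambiguate(["movie", "trailer", "tickets"], senses))  # -> "film"
```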

Dr. Gregory Grefenstette joins the team as an authority in natural language processing, having continuously pioneered the fields of cross-language information retrieval and distributional semantics, the induction and extraction of meaning from large quantities of text. Sought after as a keynote speaker, he is a named inventor on 20 granted U.S. patents, has authored and edited four books, and has published hundreds of research papers in the most prestigious scientific journals. Dr. Grefenstette previously held chief scientific officer positions at Xerox Research Centre Europe, search engine company Exalead, Clairvoyance Corporation and with the French CEA, and was a senior researcher at numerous top-tier institutions such as INRIA and the Florida Institute for Human & Machine Cognition (IHMC). A graduate of Stanford University, Dr. Grefenstette initially studied mathematical sciences at the Massachusetts Institute of Technology and later received a PhD in computer science from the University of Pittsburgh.

"Today, the power and capacity of a computer is underused. We can leverage algorithms to provide powerful predictions, avoiding the frustrations of typing and searching on a small device," said Dr. Gregory Grefenstette. "I am excited to be part of a team that is at the forefront of such innovation and look forward to incorporating my years of research into a useful real-world application through the development of this technology."

Dr. Grefenstette now brings an unrivaled level of expertise and experience to Biggerpan. His past work and interests acutely align with the company’s forward-thinking vision, fostering ideal conditions for future enrichment of the technology, and overall company success.

“We are very happy to welcome Dr. Gregory Grefenstette to the team,” said Eric Poindessault. “As a founding father of modern NLP, his extensive knowledge and experience will accelerate the development of our AI technology and propel us forward in achieving our mission.”

ABOUT

Biggerpan is a French-American startup which develops a predictive artificial intelligence that leverages context to make real-time recommendations. The company's mission is to build a brain for the mobile web, allowing a better integration of the technology into our lives, without all the pain and friction found in traditional mobile online activities.

The first product released by Biggerpan is Ulli, a smart mobile web browser that simplifies the experience on a mobile device by recommending the most relevant content, services and purchases for people to navigate based on the context of their current browsing. The iOS app was nominated 2016 Mobile App of the Year by Product Hunt and featured on the Emerging Tech Tour at Mobile World Congress 2017.

Visit www.biggerpan.com for more information including a video.

CONTACT

Luc Hancock
1-415-867-4031
luc@biggerpan.com
facebook.com/biggerpan.inc
twitter.com/ulliapp

LONDON DEEP LEARNING IN RETAIL & ADVERTISING SUMMIT, DAY 1 HIGHLIGHTS


The use of deep learning in retail and advertising is rapidly expanding and becoming an integral part of consumerism.

We’re almost at the end of day 1 of our Deep Learning in Retail and Advertising Summit in London, and we’ve brought together data scientists, engineers, CTOs, CEOs and leading retailers to explore the impact of deep learning and AI on the industry.

Deep Learning Trends & Customer Insight


Ben Chamberlain, Senior Big Data Engineer at ASOS, kicked off this morning's discussion by exploring the impact that deep learning has in predicting customer lifetime value (CLTV) in e-commerce.

Deep learning works really well for deterministic tasks, and as CLTV is absolutely not a deterministic task, it's extremely difficult – I don't even know my own value to ASOS, it's a very different kind of problem.

In an ideal world, ASOS would 'know every action a customer will make for the rest of time, but [they] can't do that'. Identifying and distinguishing between high and low value customers allows companies to optimise marketing spend and minimise exposure to unprofitable customers. He explained how 'a large percentage of customers churn and have a 0% CLTV, whilst some will spend millions each year on ASOS.' This makes machine learning incredibly complex to implement, and when it was first implemented, it was 'wrong for the vast majority of customers'. To overcome this, Chamberlain explained how they tested two models: a wide and deep model, and a random forest model with neural embeddings, which combines automatic feature learning through deep neural models with hand-crafted features to produce CLTV estimations that outperform either paradigm used in isolation. The second model won out, and this implementation of deep neural networks was tested and adopted by ASOS.
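
As a rough illustration of that hybrid idea – not ASOS's actual pipeline – hand-crafted features can simply be concatenated with embeddings learned by a neural model before a random forest produces the CLTV estimate. All feature names, shapes, and data below are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins: 12 hand-crafted features (e.g. recency, frequency,
# average basket value) plus a 32-dimensional customer embedding that a
# neural model would have learned from browsing/purchase history.
rng = np.random.default_rng(42)
n_customers = 10_000
handcrafted = rng.random((n_customers, 12))
neural_embedding = rng.random((n_customers, 32))
cltv = rng.random(n_customers)          # target: future customer value

# Combine both feature families and let the forest do the estimation.
X = np.hstack([handcrafted, neural_embedding])
model = RandomForestRegressor(n_estimators=100, n_jobs=-1)
model.fit(X, cltv)
print(model.predict(X[:3]))             # CLTV estimates for three customers
```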

Hear more from ASOS:
@b_p_chamberlain: Our new paper on neural embeddings in hyperbolic space. http://arxiv.org/abs/1705.10359 Talking about this at #reworkretail in London

In a CLTV paper published by ASOS, they expand on the implementation of this model, following on from the success 'of DNNs in vision, speech recognition, and recommendation systems', which was influenced by Yann LeCun, Yoshua Bengio, and Geoffrey Hinton's 2015 paper, Deep Learning. Hear from this trio of innovators ahead of their appearance at our Deep Learning Summit Montreal, where they have just been announced as the Panel of Pioneers – check out our blog post here.

 

Forecasting & Recommendations

The accuracy of probabilistic forecasts is integral to Amazon's business optimisation process, and machine learning scientist Jan Gasthaus spoke about the methods they are currently using.

Why bother forecasting?

If I knew the future, I could make optimal decisions. However, I don't know the future, so what's the next best thing? Creating accurate predictions that take past data into account to capture the uncertainty and predict more accurately. This allows me to give estimates and quantify them.

People are currently using what Gasthaus called 'the onion approach', where 'you peel away bits you understand and you're left with the part of the problem that the probabilistic time series can digest.' Whilst this method has several pros – it is the de facto standard and allows decomposition – there are also several obstacles, such as the amount of manual work required as well as its inability to learn patterns across time series. For example, Gasthaus explained how it is problematic if they 'care about the forecast in a three week window for example rather than a specific day.'

To overcome this, Amazon is using black-box deep learning functions to build models from simpler building blocks and learn them end to end, coming up with a model of the distribution and training neural networks. The novel method they propose to produce accurate forecasts is DeepAR. This is 'based on training an auto-regressive recurrent network model on a large number of related time series.' Here, the input is the time series of past values, and the output is the estimated joint distribution. 'Deep learning methodology applied to forecasting yields flexible, accurate, and scalable forecasting systems where models can learn complex temporal patterns across time.'
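
The quotes above outline the core of the approach: an autoregressive recurrent network reads past values and emits the parameters of a probability distribution over the next value, trained by maximum likelihood across many related series. A minimal PyTorch sketch of that idea – not Amazon's implementation – might look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepARSketch(nn.Module):
    """Autoregressive RNN emitting Gaussian parameters per time step."""
    def __init__(self, hidden=40):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.mu = nn.Linear(hidden, 1)      # mean of the next-step Gaussian
        self.sigma = nn.Linear(hidden, 1)   # scale, made positive via softplus

    def forward(self, past):                # past: (batch, time, 1)
        h, _ = self.rnn(past)
        return self.mu(h), F.softplus(self.sigma(h))

model = DeepARSketch()
series = torch.randn(64, 30, 1)              # 64 related series, 30 time steps
mu, sigma = model(series[:, :-1])            # predict step t+1 from steps <= t
nll = -torch.distributions.Normal(mu, sigma).log_prob(series[:, 1:]).mean()
nll.backward()                               # one maximum-likelihood step
```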

Dr Janet Bastiman @Yssybyl: Predicting the future @amazon by Jan Gasthaus with #DeepLearning – comparison to past approaches #ReworkRETAIL

We next heard from Rami Al-Salman, who explained how Trivago are 'using a culmination of artificial neural networks, word embedding, deep learning and image search to optimise the results that users receive for each distinct search.' Trivago serves millions of queries every day, and one of the biggest challenges is 'predicting the intention of users' queries' and providing appropriately corresponding recommendations, for example 'when a user types czech + currency we will want to recommend "koruna" as an additional search keyword'. This use of word embeddings is 'one of the most exciting topics nowadays, as it learns a low-dimensional vector representation of words from huge amounts of unstructured data as well as capturing the semantics of data.' With deep learning methods progressing so rapidly in natural language processing and computer vision, Trivago have applied these advancements to their model to provide an improved user experience. Trivago took '6 million hotel reviews and put them through vectors, and it's possible to use word2vec because it's scalable. Where a normal categorisation would take days to produce, word2vec produces results in less than 30 minutes.' Al-Salman explained that word2vec allows them to learn the representation of words and give accurate suggestions. He also revealed that in the near future, deep learning will be applied to classify hotels to provide better search facilities.
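
For illustration, here is roughly what that looks like with gensim's word2vec – a hedged sketch using a placeholder corpus rather than Trivago's review data:

```python
from gensim.models import Word2Vec

# Placeholder corpus: in practice this would be millions of tokenised
# hotel reviews and search queries.
corpus = [
    ["czech", "currency", "koruna", "exchange"],
    ["prague", "czech", "koruna", "hotel"],
    ["madrid", "spain", "euro", "hotel"],
]

# Train low-dimensional word vectors that capture co-occurrence semantics.
model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, workers=4)

# Suggest additional keywords for a query like "czech currency".
print(model.wv.most_similar(positive=["czech", "currency"], topn=3))
```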

Hear more from Trivago:
DeepTags: Integration of Various VGI Resources Towards Enhanced Data Quality
Warehouse & Stock Optimisation

Calvin Seward from Zalando spoke today about two issues they are working to overcome in warehouse and stock optimisation: the picker routing problem, and the order batching problem. In a physical warehouse there are rows and aisles of stock and ‘you’ve got a bunch of locations in the warehouse that pickers have to visit – it’s super inefficient.’

Once you come up with an optimal route, you can drive efficiency and save money. That's the goal of our project.

To overcome this problem, research scientist Seward explained how they developed the OCaPi (Optimal Cart Pick) algorithm to calculate the optimal route to walk. This algorithm, however, still has a runtime of around one second.

The second problem, order batching, arises when 'customers have ordered a bunch of things. We want to split these into different pick tours, but we can't assign one order to multiple pick lists because there's no way to bring the order together.'

By implementing a group force optimisation strategy, Zalando saw an 8.4% increase in efficiency. Additionally, Seward explained that by using neural networks, they can estimate the pick route length, and by combining this with OCaPi they have created 'a black box strategy where we get the neural network to learn from the examples', which is a whole lot faster. With only OCaPi running, the results were never better than 0.3 seconds, but on the same CPU, with the addition of the neural network approach, it gets down to milliseconds.
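
The speed-up Seward describes can be sketched in a few lines: train a regressor on examples labelled by the slow exact algorithm, then use its fast forward pass to score candidate batches. Everything below (features, data, model size) is invented for illustration and is not Zalando's OCaPi setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Invented data: each row encodes a candidate pick batch (e.g. the warehouse
# locations it visits); the label is the route length the exact solver found.
rng = np.random.default_rng(0)
X = rng.random((20_000, 20))
y = X.sum(axis=1) + rng.normal(0, 0.1, 20_000)   # stand-in for exact lengths

# Train the fast approximator on exact-solver examples...
estimator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200)
estimator.fit(X, y)

# ...then score candidate batches with a single cheap forward pass.
print(estimator.predict(X[:1]))
```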

Hear more from Zalando:
Zalando are using AI in several aspects of their business and have also created Fashion DNA to make the properties of their products more accessible, by collecting disjointed information in their catalog and mapping it into an abstract mathematical space – the "fashion space". We spoke to Roland Vollgraf, Research Lead at Zalando Research, who expanded on this. Check out the interview here.

Couldn’t make it to London?

Register for on-demand post-event access to receive all the slides, presentation videos and interviews from the summit, or check out our calendar of upcoming events here.


HOW ONE BOSTON STARTUP IS OVERCOMING FLIGHT DELAYS AND CANCELLATIONS


Over the last year, Boston has seen its tech scene flourish, and when our team headed out for the Deep Learning Summit and the Deep Learning in Healthcare Summit at the end of May, we heard first hand from some of these thriving companies.

Having traveled back from Boston on the day of the British Airways IT meltdown, we saw first hand the value of Freebird, a B2B travel tech startup based in Boston, and how it could have saved the day had we been booked onto a BA flight back to the UK.

@getfreebird: British Airways outage impacts 75K travelers and £100m costs. Freebird travelers rebook next flight on any airline.

Everyone knows that sinking feeling when you're waiting at the airport and your flight flashes up 'delayed'. You wish you'd got to the airport a couple of hours later, or, in the case of cancelled flights, not bothered at all. But how do we know 'which of the over 30 million commercial flights in the US will actually get delayed or cancelled?' Freebird has built a business based on using data science to answer that question. Their number one priority is to eliminate the stress and massive inconvenience that delays can cause – they know that 'getting there matters'.

Sam Zimmerman, CTO & Co-founder spoke at the Deep Learning Summit Boston last week where he explained that the Freebird team have created a real-time predictive analytics engine based on dynamic data sets and deep-learning algorithms. In the event of a cancellation or severe delay, with Freebird you can skip the line and instantly book a new ticket (on any airline) at no extra cost.

But how does this work?

Freebird started out with the intention of serving the B2C market, and after a successful incubation period realised that the 'corporate market needed something for travel agents to better take care of their passengers, which was one thing that [they] had already validated' with the B2C model. Getting teams to meetings or conferences on a limited and often tight schedule is of paramount importance to companies, and there was an obvious gap in the market and a need in the corporate space for a tech solution to these travel inconveniences. Not only are disruptions bad for business, but they cost $60 billion annually, and the US travel insurance spend is over $3 billion.

‘We can’t stop flights from being cancelled, but we’re doing the next best thing’.


Zimmerman spoke about the multitude of data the platform amalgamates in order to compute prices correctly and calculate the appropriate quotes. The 'cutting edge predictive analytics tool takes into consideration weather data, flight pricing and availability, to price the booking solution and to inform companies of the micro risk their travellers are facing every day.' Using these dynamic data sets, they are able to construct and train deep learning algorithms to generate an accurate output determining the likelihood of these disruptions. He explained that Freebird is not an insurance company, but a technical company solving a technical problem to help improve the industry by buying 'low cost last minute airline tickets that typically go unsold'.
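
As a hedged sketch of this kind of risk model – not Freebird's actual engine – one could train a classifier on historical flights and read off a per-flight disruption probability. All features and data below are invented:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Invented features per flight: e.g. storm index, airport congestion,
# carrier punctuality, time of year. Label: 1 = severely delayed/cancelled.
rng = np.random.default_rng(1)
X = rng.random((50_000, 8))
y = (X[:, 0] + rng.normal(0, 0.2, 50_000)) > 0.8

clf = GradientBoostingClassifier()
clf.fit(X, y)

# The predicted probability is the per-traveller "micro risk" used for pricing.
disruption_risk = clf.predict_proba(X[:1])[0, 1]
print(f"disruption risk: {disruption_risk:.1%}")
```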

Although Freebird began its life as a self-contained mobile app, it is now a platform-agnostic service that sends disruption notifications via SMS, email, or the airline carrier's app to notify passengers as promptly as possible about any delays. This means the service can be integrated with travel agents' systems and provide a smooth user experience that doesn't require any additional software from the passenger.

@pradipt: #Flight #Disruption #Startup Freebird Snares #Funding from General Catalyst and Accomplice

Want to hear from more companies working with Deep Learning? Our next Deep Learning Summit is in London 21-22nd September.
Confirmed attendees include: Google DeepMind, Facebook AI Research, Jukedeck, Alpha-i, Facebook, OpenAI

GoCompare launches a search for the sharpest data experts

GoCompare, the comparison website based in Newport, south Wales, has partnered with technology consultants, Kubrick Group, to launch a graduate challenge to find the data leaders of the future. Successful applicants will secure a fully-paid place on Kubrick’s coveted 18-week big data engineering training course, followed by the potential for a two-year data science development programme at GoCompare.

 

The challenge, which is open for applications until 10 July, forms part of GoCompare’s ambitions to become the tech employer of choice in the region.

 

Jackson Hull, chief technology officer at GoCompare, said: “We’re committed to developing world-class talent in Wales, and through this challenge we want to inject a bit of fun into the recruitment process, with the significant incentive of a paid-for training course and the chance to secure a position at GoCompare for the successful applicant or applicants.

 

“The challenge is open to recent graduates and those in the early stages of their career who want to pursue a new opportunity. Anyone who is confident with mathematics, and is familiar with data and how it can be used in real-life applications to make people’s lives easier, is encouraged to take part in the challenge.”

 

Jackson continued: “For those who make the cut, there’s a place on an in-demand 18-week course in London provided by Kubrick Group, the technology consultants, who will pay them a salary while they train. And we’ll even contribute to their living expenses on top of this.

 

“Successful candidates will be employed as consultants by Kubrick Group and will have the chance to join GoCompare on a two-year data science development programme, working on exciting and ground-breaking projects alongside a skilled, dedicated and supportive team.

 

“Top performers of the data science development programme will be offered a full time role at GoCompare and given the on-going support they need to pursue a hugely rewarding career at the cutting edge of data science.

 

“If this appeals to you, I’d encourage you to apply today.”

 

Simon Walker, managing partner at Kubrick Group, added: “It’s really exciting to work with GoCompare as they move to using data science to improve their customer experience. It’s great to see they are utilising Kubrick’s training to help them achieve this.”

 

Anyone interested in applying can do so here: www.gocompare.com/data-science/

CREATING A MORE EFFICIENT SHOPPING EXPERIENCE WITH DEEP LEARNING

Retail is undergoing an artificial intelligence revolution. The latest advancements in deep learning algorithms are now impacting all corners of the industry, from stock optimization and smart warehousing, to search, recommendation systems and forecasting.

These significant advancements in AI, data science and deep learning are giving online shoppers and sales staff the tools they need for the most efficient shopping experience possible. At Instacart, these technologies allow them to predict the sequence in which customers' items are picked in specific store locations, in some cases saving their product pickers upwards of 10% of the time spent locating and gathering items in-store. This efficiency is extremely valuable in the competitive and continually evolving world of online shopping.

At the 2017 Machine Intelligence Summit in San Francisco, Jeremy Stanley, VP of Data Science at Instacart, shared expertise on how deep learning can be used to create more efficient online shopping, with insights into the data collection, mobile technology and machine learning approaches they are applying to enable on-demand grocery delivery. View his presentation with slides below.

Instacart has revolutionized grocery shopping by bringing groceries to your door in as little as an hour. The crux of the company is its shoppers, who shop in brick-and-mortar stores and bring the food to customers thousands of times per hour. Making these shoppers as efficient as possible is critical to the business. Hear how Instacart is applying deep learning to the shopping list to improve shopper efficiency, predicting the sequence in which shoppers pick items in specific store locations – in some cases saving significant time in-store. Here Jeremy discusses the data collection, mobile technology and machine learning approaches Instacart is applying to enable on-demand grocery delivery.
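
A toy sketch of the sequencing idea – an assumption about the general approach, not Instacart's model – is to estimate from historical pick logs how often one item is picked before another in a given store, then sort new shopping lists to match that walking order:

```python
from collections import defaultdict

# Historical pick logs from one store: each trip lists items in pick order.
historical_picks = [
    ["bananas", "milk", "bread", "ice cream"],
    ["milk", "cheese", "bread", "ice cream"],
    ["bananas", "cheese", "ice cream"],
]

# Count how often item A was picked before item B.
before = defaultdict(int)
for trip in historical_picks:
    for i, a in enumerate(trip):
        for b in trip[i + 1:]:
            before[(a, b)] += 1

def pick_order_key(item, shopping_list):
    # Items usually picked before many others get an earlier (lower) key.
    return -sum(before[(item, other)] for other in shopping_list)

shopping_list = ["ice cream", "bread", "milk"]
print(sorted(shopping_list, key=lambda it: pick_order_key(it, shopping_list)))
# -> ['milk', 'bread', 'ice cream']
```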

View a selection of presentations from the 2017 Machine Intelligence Summit in San Francisco here, or contact Chloe cpang@re-work.co for video membership options.

BOOTSTRAPPING AN INTELLIGENT RECOMMENDER SYSTEM

In many different web services, machine learning is being used for recommendation systems that help users tackle information overload: there are simply too many movies, songs, and books for users to usefully browse through. Without such tools, some services are rapidly falling behind and losing customers.

Travel is a little bit different, as the world does not have millions of cities, but finding new, interesting places to travel to is still a challenge. Years ago, Skyscanner started its 'everywhere' search, which allows users to find the cheapest places that they could travel to, leading to research showing that price is one of many factors that make a place attractive and interesting.

Neal Lathia, Senior Data Scientist at Skyscanner, will join us at the Machine Intelligence Summit in Amsterdam, to share how the company bootstrapped a destination recommender system using rich implicit data generated by millions of users, along with simple algorithmic approaches, and experiments that gauge how localised and personalised recommendation affects user engagement. I spoke to Neal ahead of the event to learn more.
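
In the spirit of those 'simple algorithmic approaches', a destination recommender can be bootstrapped from implicit session data with nothing more than co-occurrence counts – a hedged, illustrative sketch, not Skyscanner's system:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Implicit feedback: each session lists destinations a user showed interest in.
sessions = [
    ["barcelona", "lisbon", "porto"],
    ["lisbon", "porto", "madrid"],
    ["barcelona", "madrid"],
]

# Count how often pairs of destinations co-occur within a session.
co_counts = defaultdict(Counter)
for session in sessions:
    for a, b in combinations(set(session), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(destination, k=2):
    """Destinations most often co-viewed with the given one."""
    return [d for d, _ in co_counts[destination].most_common(k)]

print(recommend("lisbon"))  # e.g. ['porto', 'barcelona']
```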

Please tell us more about your work at Skyscanner.

As a Senior Data Scientist, my focus is on designing and building machine learning features for Skyscanner's mobile app. Since joining just under a year ago, the projects I've been working on have related to recommendation and search result ranking. However, the app creates a very rich ecosystem of data, and we have already identified a number of other opportunities ahead.

What do you feel are the leading factors enabling recent advancements in machine learning for recommendation systems?

Many of the near state-of-the-art algorithms for recommendation systems have been open sourced – which is always welcome news! The research field has also always been driven by open data challenges. Most importantly, the research community has always taken a multidisciplinary approach – not all recommender system challenges need to be solved with machine learning.

Which industries have the biggest potential to be impacted by advancements in recommendation systems?

As someone who has a background in recommender systems, it is difficult for me to try to envisage any industry without the lens of recommendation potential. There are so many facets of life where personalised information could be useful – from healthcare to travel and beyond.

What developments can we expect to see in machine intelligence in the travel industry in the next 5 years?

Many of the best known travel sites online have a distinct focus on price – helping users find the cheapest flight, hotel, or car (Skyscanner is no exception to this!). As these services gain greater smartphone traction, and data (e.g., flight statuses and prices) becomes available in real-time, the travel industry is going to become a ripe domain for machine intelligence applications.

Outside of your own field, what area of machine learning do you think will see the most progress in the next 5 years?

There is no doubt that recent advances in neural networks have led to wonderful results in the areas of reinforcement learning and machine vision – I expect that progress to continue to accelerate. I'm looking forward to the interesting products that may arise from these areas of research.

Neal Lathia will be speaking at the Machine Intelligence Summit, taking place alongside the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam on 28-29 June. Meet with and learn from leading experts about how AI will impact transport, manufacturing, healthcare, retail and more.
Other confirmed speakers include Roland Vollgraf, Research Lead, Zalando Research; Alexandros Karatzoglou, Scientific Director, Telefónica; Sven Behnke, Head of Autonomous Intelligent Systems Group, University of Bonn; Damian Borth, Director of the Deep Learning Competence Center, DFKI; Daniel Gebler, CTO, Picnic; and Adam Grzywaczewski, Deep Learning Solution Architect, NVIDIA. View more speakers and topics here.
Tickets are limited for this event. Register to attend now.

Opinions expressed in this interview may not represent the views of RE•WORK. As a result some opinions may even go against the views of RE•WORK but are posted in order to encourage debate and well-rounded knowledge sharing, and to allow alternate views to be presented to our community.

The Future of Investment in AI Could Be Decided By AI Itself

Many believe that artificial intelligence is set to reshape global economies and society, with AI expected to double economic growth. But what are the opportunities and challenges to investing in the current world of AI?

In 2016, the AI market was worth just $644 million, according to Tractica. This year that amount is due to almost double and continue to grow exponentially, with predictions showing it to reach $36.8 billion by 2025. With so many pioneering companies and world-leaders focusing their attention and funding on artificial intelligence fields such as deep learning and machine intelligence, the momentum for progress is well underway.

At the Machine Intelligence Summit on 28-29 June, Julius Rüssmann, Analyst at Earlybird Venture Capital, will share expertise on a panel exploring the challenges and opportunities of investing in AI. Julius believes investors themselves will be disrupted by the AI revolution. Could future investments in AI be decided by other artificially intelligent systems instead of humans? I spoke to him ahead of the summit to learn more.

What, in your opinion, are the most promising machine intelligence sectors to invest in?

Broadly defined, the service and manufacturing industries will probably benefit the most from (an increased level of) Machine Intelligence. By that I refer to the idea that large parts of the service industry, today still based on human work, can be heavily digitized, automated and even improved through intelligent software applications. Especially when considering that the complexity inherent in each and every service inquiry increases exponentially as data grows, machine intelligence is critical to ensuring smoothly running systems.

Besides primarily consumer-focused service applications, the manufacturing industry will not remain the same. As machine intelligence develops, the sharp line between human-based work and robotics vanishes (think of collaborative robotics), until robotics effectively takes over the bulk share of manufacturing processes and frees up billions of hours every day that had previously been allocated to human work (with all the problems that will arise from that).

One good example of Machine Intelligence that will redefine industries and humans alike is enriched or contextual computer vision. If it were possible for software to accurately understand video content in context, that would change a lot (autonomous driving, healthcare and so forth).

Besides industries and application fields, we deem Deep Reinforcement Learning as well as Neural Networks to be critical to facilitating further use cases of Machine Intelligence.

What are the characteristics you are looking for in a startup prior to investing?

First and foremost, we look at the people behind the startup. Why is that? Because we think that complementary skills and solid commitment are the key drivers in every successful company. Starting a company, especially in the field of technology, always implies that there will be tough times and complex problems to be solved. In those situations it is almost irrelevant how attractive your business model or targeted market is. It is all about the team’s ability to steer and pivot the company in the right direction (again). Besides outstanding people, we like to see an early product (or prototype) to establish credibility on execution, management and skills. In most cases that turn out to be successful, you will see some sort of market adoption or commercial traction early on, as customers and markets crave such a solution and are open to using the new offering (think of solving a real problem).

How will venture capitalists be impacted by machine intelligence in the next 5 years?

Venture capitalists (VCs) will face a two-sided effect. First of all, deal flow will significantly increase in the area of Machine Intelligence-based companies that are able to produce and deliver a solid value proposition to the market and are truly disruptive to specific industries. Today we still see a lot of evolutionary Machine Intelligence applications compared to revolutionary business models; many of the cases we see are more of an MI feature set than a standalone business case or company – this will change.

VCs will also be threatened and potentially even disrupted themselves by MI. In essence, VC is also just a service industry (we service our portfolios and LPs), and evidence clearly suggests that advanced MI will reach better (investment) decisions than humans do. However, the question remains how long this development will take; and yet, there is no clear evidence that MI will be capable of assessing or completely understanding humans. Considering what was stated at the beginning, namely that teams are key, it’s not clear whether MI will necessarily reach better decisions.

What are the dangers of not distinguishing between hype and reality in AI?

As explained above, AI/ML/MI are today quite advanced but not ultimately “ready” yet. This means that a lot of application fields (e.g. the customer service industry) can clearly benefit from the introduction of smart algorithms (becoming more efficient, partially replacing humans, better results, faster, etc.), but they are not yet ripe for disrupting those industries effectively. So the danger, from a VC’s or founder’s perspective, is to overestimate the capabilities of the algorithm, or to underestimate the importance of human-based decisions and verification. Effectively, there is a lot of (technical) work still to be done, and markets are only about to open up or be created for AI applications. Time to market is critical for building up good investment cases.

Do VCs have a role in progressing the fields involved with AI?

VC’s, as with every other technology field, are responsible in finding and funding the leading teams/brains/companies in the respective field. This work is critical to contribute to the technology’s further development, to facilitate market adoption, to help identify viable business models and so forth (offering capital to outstanding entrepreneurs will improve the economy and the startup ecosystem in any ways). By finding and funding good technology companies in the AI field, VC’s also help to steer public attention to this area and to help create flagship project that then attract more brain-power and top-talent. As state before, it also in the responsibility of VC’s to be critical in their decision process also in order to prevent over-hype and bubble effects. It is sort of an educational responsibility that VC’s have for the tech ecosystem, the economy and the society.

Julius Rüssmann will be speaking at the Machine Intelligence Summit, taking place alongside the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam on 28-29 June. Meet with and learn from leading experts about how AI will impact transport, manufacturing, healthcare, retail and more.
Other confirmed speakers include Roland Vollgraf, Research Lead, Zalando Research; Alexandros Karatzoglou, Scientific Director, Telefónica; Sven Behnke, Head of Autonomous Intelligent Systems Group, University of Bonn; Damian Borth, Director of the Deep Learning Competence Center, DFKI; Daniel Gebler, CTO, Picnic; and Adam Grzywaczewski, Deep Learning Solution Architect, NVIDIA. View more speakers and topics here.
Tickets are limited for this event. Register to attend now.

Opinions expressed in this interview may not represent the views of RE•WORK. Some opinions may even go against the views of RE•WORK, but they are posted in order to encourage debate and well-rounded knowledge sharing, and to allow alternate views to be presented to our community.

Human Computer Interfaces: The Future Or Our Demise?


Ever since Elon Musk announced the launch of Neuralink in March 2017, the news has been awash with criticism and commentaries on human computer interfaces.

Perhaps the most widely shared of these is the (excruciatingly long) Wait But Why article that broke the Neuralink news. In the article, Tim Urban explains in minute detail how the technology will work. If you haven’t had the pleasure yet, you can read ‘Neuralink and the Brain’s Magical Future’ here.

The concept of human computer interfaces isn’t entirely new, though the Elon Musk news has driven it into the mainstream. Acclaimed futurist and transhumanist guru, Ray Kurzweil, has been championing human-machine symbiosis for much of his career, predicting that “the nonbiological portion of our intelligence will predominate” by the 2030s. He anticipates the full-blown Singularity by 2045. It is only fairly recently, however, that companies like Neuralink (and even Facebook) have started declaring that the technology may well be within our sights.

Presuming that this really is the case (Noam Chomsky reckons it’s impossible), we must begin to ask ourselves whether the merging of humans with technology is actually a good idea.

Clearly, humankind hasn’t been doing such a good job of things here on Earth recently. If we can become vastly more intelligent through human computer interfaces, perhaps we could do much better. If integrating ourselves with technology allows us to more accurately assess the consequences of our actions, then maybe a brighter future is ahead.

Why Human Computer Interfaces?

The reason that Elon Musk has decided to invest in neural lacing is linked to his concerns about the threat of artificial superintelligence – the Singularity that Kurzweil is convinced will be upon us in less than 20 years. This is a concern shared by many of the most prominent minds in science, including Stephen Hawking, who has also warned that the Singularity could be the greatest threat facing humankind.

Musk argues that human computer interfaces are our best chance of surviving the superintelligence explosion. In a kind of ‘if you can’t beat them, join them’ move, Musk sees our greatest chance lying in our ability to merge ourselves with those superintelligent machines. If we can do this, we remain capable of competing with machines on intelligence, retain more control over them, and thus improve our chances of survival by symbiosis.

If going bionic is an unnerving thought to you, then that’s really only natural. Such a vast alteration to how we live is bound to be a terrifying prospect, just as it was at the dawn of the first Industrial Revolution.

If the human computer interface is what will save us from our demise at the hands of the machine, we must necessarily overcome our squeamishness about letting machines into our bodies. The squeamishness itself can be argued to be a primitive concern, one that must be moved beyond if we are to survive.

Conscious Evolution

It is, certainly, a drastic solution to a problem we’re not altogether sure will even arrive. Nonetheless, we could consider it to be an unprecedented step: the point at which a species becomes so advanced that it can enact its own evolution.

Evolution, historically, has always been a natural process enacted over thousands and millions of years. When the environment fails to be sufficiently nurturing, changes begin to occur over generations to enable a species to better adapt to that environment. Those which cannot adapt die.

Technology, too, has been a slow evolutionary process, which arguably began from the time the first ape used the first rock to smash the first nut. A gradual process of evolution over millennia, leading us to the point at which we transcend biology, metamorphosing into our next form: a hybrid species of our own creation, imbued with far superior intelligence to our predecessors. A species thus capable of strengthening ourselves, of barricading death outside the door, moving on to colonise the universe: the ultimate lifeform.

Survival of the Richest?

The question remains, however, as to who will have access to the technology. Presumably, at least initially, it will be an expensive process and thus only available to the wealthy. So the rich become smarter, and an uneducated underclass of comparatively useless, stupid billions will emerge.

The only hope for the 99% is that this newly intelligent elite decide that it is better for the human race to be unanimously improved. Alternatively, it’s a simple case of survival of the fittest – those with the resources to survive. This, of course, is just one ethical consideration that needs to be raised. It’s a return to the old Doctor Strangelove question: who is fit enough for the nuclear bunker?

The question of how long we have to answer this and other questions posed by the issue is controversial. Whilst Kurzweil and others predict a very short window before artificial superintelligence arrives, others insist that we are nowhere near artificially superintelligent machines nor functional human computer interfaces. Whilst the lack of a definitive answer may lead many to dismiss the issue outright, the smarter decision, of course, is to prepare ourselves so that we are ready if and when the time comes.

“AI Neutrality” – A proposed manifesto for artificial intelligence user experience design


What makes a great artificial intelligence (AI) driven user experience? Here are my thoughts…

1. Design AI services end to end – the disruptors that have transformed the travel, holiday and retail sectors over the last twenty years succeeded by aggressively focusing on continually improving their own single-channel online experience. AI user experience design must also adopt this strict one-channel approach to service delivery – every user journey should be simple, relevant, no fuss and always getting better, because it’s being delivered by an artificial intelligence end to end.

2. Go beyond mobile – the interconnectivity of AI enables any environment or physical object to positively affect all of our five senses (such as connected home technology like heating and lighting devices that respond to a user’s mood). AI design should always be pushing to transcend the user interface constraints of existing service platforms (particularly the visual and audio experience of mobile) to truly reflect and improve how we use our senses to interact with the world around us.

3. Addressable media is a key user journey – AI has the potential to utilise a complex range of historic and contextual customer data to deliver targeted, personalised advertising (UK broadcasters are adopting programmatic technology to deliver specific adverts to individual households in real time, for example). Yet if designed poorly, such disruptive engagement risks coming across like hard selling that overwhelms or irritates a customer (consider the negative reaction of customers to pop-up web ads, which apply a similar approach). Consequently, it’s vital that AI-driven addressable media is treated as a form of user experience that requires research, design and testing to ensure customers are empowered to consume it on their own terms.

4. Hardwire ethics and sustainability – the positive disruption to our lives from social media has enabled these services to grow rapidly and organically to billions of users worldwide. Yet this has also led to these platforms becoming so big that it’s challenging for their service providers to effectively manage and safeguard the user content they share. Drawing from this experience, and combined with public calls for the proactive regulation of AI, it’s essential that artificial intelligence products and services have the right ethics and sustainability values in their core design, as they are likely to grow even faster and bigger than social media.

5. Champion “AI Neutrality” – artificial intelligence has the power to transform all our lives like the internet before it. A fundamental principle driving the success of the web has been “net neutrality” – that internet data services should be supplied as a form of utility (like electricity, gas, water) in a non-discriminatory way to all customers. Access to simple AI services should be similarly “neutral” – a basic human right that is complemented by differentiated, chargeable products and services from over-the-top producers.

@markHDigital

BOSTON DEEP LEARNING IN HEALTHCARE SUMMIT – DAY 2 HIGHLIGHTS


After a great first day at the Deep Learning In Healthcare Summit in Boston, we’re back for day 2 where this morning’s discussions have brought together leading speakers to share their cutting-edge research in the industry and how it’s disrupting healthcare.

Mason Victors, Lead Data Scientist at Recursion Pharmaceuticals, kicked off this morning’s discussion ahead of his talk later in the day. Every year, thousands of new drugs are unsuccessful in being brought to market and are often left in freezers and forgotten. Recursion Pharmaceuticals repurposes these compounds, identifying drugs as potential treatments which can reach patients quickly. By combining the best elements of technology with the best elements of science, the ability to answer really complex questions increases exponentially.

This morning’s sessions saw startups in healthcare presenting their latest research, and we first heard from the Director of Machine Learning at Arterys, Daniel Golden.

‘About 6 million adults in the US are experiencing heart failure right now’, so by making faster and more accurate assessments of their conditions, this burden could be hugely reduced. As the first FDA-approved company in clinical cloud-based deep learning in healthcare, Arterys ‘aim to help clinicians make faster and more accurate assessments of the volumes of blood ejected from the heart in one cardiac cycle’. This is known as ejection fraction. Arterys are moving towards solving the problem of cutting healthcare costs and improving access: where a human analyst would expect to take between 30 minutes and an hour analysing images to make a diagnosis, Arterys takes an average of 15 seconds to produce a result for one case. We heard about the architecture they are working with and the ‘critical challenge in using data that was used to support clinical care as opposed to machine learning’. This means that Arterys are often presented with incomplete images to analyse, and they have therefore trained their model to recognise the missing sections of the image and fill in the blanks. ‘Roughly 20% of the data we work from is complete, so without this model 80% of our data would be redundant.’
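
Ejection fraction itself is a simple ratio – the hard part, which Arterys automates, is measuring the ventricular volumes from MRI. A minimal sketch of the final calculation (the normal range in the comment is a commonly cited clinical guide, not a figure from the talk):

```python
def ejection_fraction(edv_ml, esv_ml):
    """Percentage of blood ejected from the ventricle in one cardiac cycle.

    edv_ml: end-diastolic volume (ventricle full), in millilitres
    esv_ml: end-systolic volume (ventricle contracted), in millilitres
    """
    stroke_volume = edv_ml - esv_ml           # blood ejected per beat
    return 100.0 * stroke_volume / edv_ml     # expressed as a percentage

# Example: EDV 120 ml and ESV 50 ml give an EF of ~58%,
# within the commonly quoted normal range of roughly 50-70%.
print(round(ejection_fraction(120, 50), 1))   # 58.3
```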

@ChiefScientist: Daniel Golden of Arterys understands heart disease with #DeepLearning on MRI #datagrid

The next startup to present was PulseData, who are trying to overcome the ‘laborious task of building robust data pipelines for inconsistent datasets.’ We heard from CEO and founder Teddy Cha, who said they aim to track ‘transactional, temporal, and ambient data on patients that isn’t sitting right in front of doctors.’ This involves amalgamating historical data from various sources such as insurance claims, Facebook posts, private health visits and more. There are numerous questions to be asked when making accurate medical predictions, and the need to create individual datasets for each can be eliminated by PulseData’s node-based approach, where machines treat data work and features as abstractions, or “nodes”. This provides ‘a way to track a calculation where you have a sequence of events that each perform their own calculation, so you don’t need to worry about any of them independently’ each time a new question is asked. Powerfully, nodes can be made dependent on other nodes, allowing them to rapidly assemble data pipelines and swap in variables each time the question changes, rather than rewriting the pipeline from scratch.
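
To make the node abstraction concrete, here is a minimal sketch of the idea in Python – lazily evaluated, cached computation units wired into a dependency graph. The class and the toy join are illustrative only, not PulseData’s actual API:

```python
class Node:
    """One unit of data work. Its value is computed lazily, cached, and may
    depend on other nodes, so pipelines are assembled as a graph rather than
    rewritten from scratch for every new question."""
    def __init__(self, func, deps=()):
        self.func = func
        self.deps = deps
        self._cache = None

    def value(self):
        if self._cache is None:
            self._cache = self.func(*(d.value() for d in self.deps))
        return self._cache

# Toy pipeline: two raw sources feed a derived feature node.
claims = Node(lambda: [{"patient": 1, "cost": 250.0}])
visits = Node(lambda: [{"patient": 1, "n_visits": 3}])
features = Node(
    lambda c, v: [{**a, **b} for a, b in zip(c, v)],  # positional join (toy)
    deps=(claims, visits),
)
print(features.value())  # [{'patient': 1, 'cost': 250.0, 'n_visits': 3}]
```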

Hunter Jackson, Co-Founder and Chief Scientific Officer of Proscia, spoke about their work in changing the way in which doctors diagnose cancer through the analysis of pathology images. Where many companies ‘are taking a direct AI approach, (Proscia) are taking more of a cancer specific approach’. Billions of slides are analysed every year in the US alone, many of which are hidden away on drives where their value is restricted to whatever on-site researchers can uncover. Jackson explained how ‘one slide can produce millions and millions of patches’, which can help answer their key questions: ‘who shall we treat, how should we treat them, and did their treatments work?’. One of the key obstacles they previously faced was getting hold of the medical images; however, we heard that ‘with the help of clinical partners, we are developing deep learning powered tools that activate those digital slides to address problems in the clinic, create opportunities for translational research and data licensing, and inform disease prognosis and therapeutic plans.’ Another issue covered was the desire to predict the likelihood of recurrence in cancers: when you bring pathology into cancer prediction, you enable more predictive biomarkers and more accurate predictions. We heard about some recent successes in this domain, including identifying metastases in breast and gastric lymph nodes with deep convolutional neural networks, and using deep learning to predict lymph node metastasis from the primary tumour.
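
The ‘millions and millions of patches’ step is essentially a tiling operation over an enormous scanned slide. A rough sketch of the idea, with arbitrary patch size and stride:

```python
import numpy as np

def tile_patches(slide, size=256, stride=256):
    """Cut a (H, W, 3) slide image into square patches -- the step that
    turns one scanned slide into many training examples."""
    h, w, _ = slide.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield slide[y:y + size, x:x + size]

slide = np.zeros((1024, 1024, 3), dtype=np.uint8)   # stand-in for a real scan
print(sum(1 for _ in tile_patches(slide)))          # 16 patches of 256x256
```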


Michael Dietz from Waya.AI continued the discussion on medical images and explained how they are working to improve image classification with generative adversarial networks (GANs) to ‘turn your smartphone into a dermatologist, and eventually into a doctor’. A high percentage of the population has access to smartphones, and having a dermatologist constantly on hand to photograph and diagnose skin conditions could help prevent a multitude of conditions. The GANs Waya.AI use are one of the most promising areas of deep learning research, and we heard about how they can be used in the real world for unsupervised learning. The models that Dietz and his team are using ‘learn entirely from data, so there’s no predisposition or human bias – we’re letting the machine learn entirely from data.’ Waya.AI uses GANs for several different tasks in skin cancer detection, and we saw the improved results obtained when using these as opposed to traditional methods. Although GANs are accurate and efficient, Dietz explained that they are ‘really hard to train and we had to have a hack to nudge them to work, which is an unstable situation’. However, Waya.AI have found a method to overcome this by calculating different distances to make a reasonable and efficient approximation. Through this application of AI in healthcare, the goal is to find the causes and mechanisms of disease by analysing the patterns that connect everything together.
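
The ‘different distances’ remark is consistent with the Wasserstein family of GAN objectives, which replace the original, notoriously unstable GAN loss with an approximation of an earth-mover distance between real and generated data – that reading is our interpretation rather than a detail from the talk. A minimal sketch of the critic side of such a loss:

```python
import numpy as np

def wasserstein_critic_loss(critic, real, fake):
    """Wasserstein-style critic objective: widen the gap between scores on
    real and generated samples. Minimising this (subject to a constraint
    such as weight clipping or a gradient penalty, omitted here) yields an
    approximation of the earth-mover distance between the two sets."""
    return np.mean(critic(fake)) - np.mean(critic(real))

rng = np.random.default_rng(0)
critic = lambda x: x @ np.array([1.0, -0.5])   # toy fixed linear scorer
real = rng.normal(1.0, 0.1, size=(64, 2))
fake = rng.normal(0.0, 0.1, size=(64, 2))
print(wasserstein_critic_loss(critic, real, fake))
```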

@joeddav: @waya_ai uses GANs to create synthetic data featurization of medical prediction models #reworkdl #deeplearning

Rounding off the morning of startup showcases, Fabian Schmich, data scientist at Roche, began his discussion building on the issues faced by Proscia: pathology is a fairly old field where not much has changed – ‘people are using the same old protocols with slide imaging’ – so there is a lot of progress to be made. There’s an increasing demand for tissue analysis in drug development, and Schmich explained how Roche are improving tissue annotation with deep learning. ‘We now are in the middle of a digital revolution in pathology’, which allows Roche to quantitatively analyse cells across the whole image – a game changer in pathology. The problem with current pathology lies in human error and inconsistencies, where technicians hand-draw their analyses onto low-resolution images. Deep learning, however, overcomes this by segmenting aspects of the images to get a deeper analysis. Roche ‘take an architecture and convolutionise it by changing the last couple of fully connected layers and adding upsample layers’, which results in each image having multiple complex labels rather than being restricted to one. Schmich went on to explain the challenges in data mining these images, and how Roche can tap into infrastructure they already have to leverage data, train deep neural networks, and plug in and test different architectures.
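
The ‘convolutionise it’ step Schmich describes is the classic fully convolutional trick: swap the last fully connected layers for 1×1 convolutions and add upsampling layers, so the network emits class scores per pixel instead of per image. A sketch of such a head in PyTorch – the channel counts, class count and scale factor here are invented for illustration, not Roche’s configuration:

```python
import torch
import torch.nn as nn

num_classes = 5  # hypothetical number of tissue labels

fcn_head = nn.Sequential(
    nn.Conv2d(512, 256, kernel_size=1),          # stands in for a fully connected layer
    nn.ReLU(inplace=True),
    nn.Conv2d(256, num_classes, kernel_size=1),  # per-location class scores
    nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
)

feats = torch.randn(1, 512, 8, 8)   # backbone features for a 256x256 tile
print(fcn_head(feats).shape)        # torch.Size([1, 5, 256, 256])
```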

As the discussions continue into the afternoon, we’re looking forward to hearing from Biswaroop (Dusty) Majumdar from IBM Watson Health, who will discuss Empowering the Future of Cognitive Computing in Healthcare, amongst several other healthcare leaders in deep learning.

Couldn’t make it to Boston?
If you want to hear more from the Deep Learning Summit and Deep Learning In Healthcare Summit, you can register now for on-demand video access. To continue our Global Deep Learning Summit Series, find out about our upcoming events in London, Montreal, and San Francisco.

Our next Deep Learning In Healthcare Summit will be held in Hong Kong on 12 & 13 April 2018.

BOSTON DEEP LEARNING SUMMIT – DAY 1 HIGHLIGHTS


As day one of the Deep Learning Summit draws to a close, we’re taking a look at some of the highlights. What did we learn, and what did you miss? With over 30 speakers discussing cutting edge technologies and research, there have been a series of varied and insightful presentations.

Sampriti Bhattacharyya kicked off this morning’s discussion by introducing the influential companies, researchers and innovative startups involved. Named in Forbes’ 30 Under 30 last year, the founder of Hydroswarm had created an underwater drone that maps ocean floors and explores the deep sea by the age of 28, and she spoke about the impact of deep learning on technologies and how it’s affecting our everyday lives.

Deep learning allows computers to learn from experience and understand the world in terms of hierarchical concepts, connecting each concept back to a simpler previous step. With these technologies being implemented so rapidly, discussions covered the impact of deep learning as a disruptive trend in business and industry. How will you be impacted?

Facebook is at the forefront of these implementations, and with ‘⅕ of the population using Facebook, the kind of complexities they experience are unimaginable’. We heard from research engineer Andrew Tulloch, who explained how the millions of accounts are optimised to ‘receive the best user experience by running ML models, computing trillions of predictions every day.’ He explained that to ‘surface the right content at the right time presents an event prediction problem’. We heard about timeline prioritisation, where Facebook can ‘go through the history of your photos and select what we think to be semantically pleasing posts’, as well as the explosion of video content over the past two years, with the same methods of classification applying to both photos and video. The discussion also covered natural language processing in translation: Facebook are running billions of translations every day, which need to be as accurate as possible, and we heard how they’re overcoming these complexities to deliver accurate translations. He also drew on the barriers previously faced in implementing machine learning across devices and its impact on mobile. ‘Over a billion people use Facebook on mobile only’, and on mobile the ‘baseline computation unit is challenging to get good performance’ from, so building implementations and specifications for mobile is very important.

@dmurga: Cool to hear Andrew Tulloch of @facebook talk about #DeepLearning on mobile for better privacy, latency, and offline performance. #reworkDL

We next heard from Sangram Ganguly of the NASA Earth Exchange Platform, who continued the discussion of image processing. The vision of the NASA-EEP is ‘to provide science as a service to the Earth science community addressing global environmental challenges’ and to ‘improve efficiency and expand the scope of NASA earth science tech, research and application programs’. Satellites capture images and are able to ‘create high resolution maps to predict climate changes and make projections for climate impact studies’. One problem that Ganguly faced in his research, however, was the reliance on physics-based models: as the datasets increase, it’s important to blend these models with deep learning and machine learning to optimise the performance and speed of the machine and create the most successful models. This fusion of physics and machine learning is the driving force of high resolution airborne image analysis and classification.

Next up was Dilip Krishnan from Google, who went on to explore new approaches to unsupervised domain adaptation. The goal is to ‘train a machine learning model on a source dataset and apply this on a target data set, where it’s assumed that there are no labels at all in unsupervised domain adaptation’. He discussed the difficulties in implementation and shared two approaches to the problem. The first approach, ‘mapping source domain features to target domain features’, focuses on learning a shared representation between the two domains, where we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. The second approach, which has been more popular and effective, is ‘end-to-end learning of domain invariant features with a similarity loss.’ Krishnan proposes a new model that learns, in an unsupervised manner, a transformation in the pixel space from one domain to the other. The generative adversarial network (GAN)-based method adapts synthetic images to make them appear more realistic. This method is one ‘that improves upon state of the art feature level unsupervised domain recognition and adaptation’.
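
The first, subspace-partitioning approach can be made concrete with a ‘soft orthogonality’ penalty that pushes each domain’s private representation away from the shared one. The sketch below is our illustration of that published idea, not Krishnan’s exact formulation:

```python
import torch
import torch.nn.functional as F

def difference_loss(private_feats, shared_feats):
    """Encourage the private and shared representations of the same batch
    to be orthogonal: the squared Frobenius norm of their correlation
    matrix is zero exactly when the subspaces carry disjoint information."""
    p = F.normalize(private_feats, dim=1)
    s = F.normalize(shared_feats, dim=1)
    return (p.t() @ s).pow(2).sum()

private = torch.randn(32, 64)   # private encoder output for a batch
shared = torch.randn(32, 64)    # shared encoder output for the same batch
print(difference_loss(private, shared))
```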

@dmurga: Nice @Google #DeepLearning arch reflecting task intuition: separate private & shared info for interpretable #DomainAdaptation. #reworkDL

After a coffee break and plenty of networking, Leonid Sigal from Disney Research expanded on the difficulties of relying on large-scale annotated datasets, and explained how they are currently implementing a class of semantic manifold embedding approaches designed to perform well when the necessary data is unavailable. For example, to accurately classify an image you typically need more than 1,000 similar images to teach the machine to recognise the classification, but very few specific concepts have that many images available – ‘zebras climbing trees’, for example, only has one or two images to sample against. Disney also need to be able to localise images and attach linguistic descriptions to them, which is where it becomes much more complicated. Sigal explained how they are currently working with embedding methods using algorithms with weak or no supervision, so that algorithms can work out how to classify each image much more efficiently. Sigal’s work in deep learning has helped him with problem solving not only in image classification, but in character animation and retargeting, and he is currently researching action recognition and object detection and categorisation.
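
A common embedding-based recipe for the ‘zebras climbing trees’ problem is to map images and label phrases into one semantic space and classify by the nearest label vector, so a class with almost no training images can still be predicted. A toy sketch of the matching step (the 2-D vectors are invented):

```python
import numpy as np

def nearest_label(image_emb, label_embs):
    """Pick the label whose semantic embedding has the highest cosine
    similarity to the image's embedding."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(label_embs, key=lambda name: cos(image_emb, label_embs[name]))

label_embs = {"zebra": np.array([0.9, 0.1]), "tree": np.array([0.1, 0.9])}
print(nearest_label(np.array([0.8, 0.3]), label_embs))  # 'zebra'
```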

@joeddav: Interesting approach to incorporating semantics into object detection and localization from @DisneyResearch #reworkdl #deeplearning

With such an abundance of applications for visual recognition, it is easy to forget that vision is not the only sense available to us, and SoundNet’s Carl Vondrick discussed his work in training machines to understand sound through image tagging. Whilst sources for mapping images tend to be readily available, ‘it’s difficult to get lots of data for specific sounds as there isn’t the same availability as there is in image recognition.’ To overcome this, Carl explained how SoundNet can ‘take advantage of the natural synchronisation between vision and sound in videos to train machines to recognise sound.’ He explained how they can take models in vision already trained to recognise images and ‘synchronise it with sounds and use it as a teacher.’ After testing the audience by asking us to identify specific sounds and running our results against SoundNet, it materialised that SoundNet’s analysis was far superior to the humans’. Where a sound such as bubbling water, breathing and splashing was identified immediately as scuba diving by SoundNet, the audience were unable to parse the types of sound and draw this conclusion as easily as the system.
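
The ‘use it as a teacher’ idea is a cross-modal student–teacher loss: the audio network is trained to match the class distribution a pretrained vision network assigns to the synchronised video frames, so no human sound labels are needed. A minimal PyTorch sketch, with placeholder dimensions:

```python
import torch
import torch.nn.functional as F

def teacher_student_loss(student_logits, teacher_probs):
    """KL divergence between the audio network's predictions (student)
    and the vision network's output on the synchronised frames (teacher)."""
    return F.kl_div(F.log_softmax(student_logits, dim=1),
                    teacher_probs, reduction="batchmean")

teacher_probs = F.softmax(torch.randn(8, 1000), dim=1)     # vision net on frames
student_logits = torch.randn(8, 1000, requires_grad=True)  # audio net on waveform
print(teacher_student_loss(student_logits, teacher_probs))
```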

@datatrell: Some may think sound learning is a solved problem, but there’s much that can be done. Excited for @cvondrick’s talk! #reworkdl

For autonomous systems to be successful, they not only need to understand these sounds and the visual world, but also to communicate that understanding with humans. To expand on this and wrap up this morning’s presentations, we heard from Sanja Fidler of the University of Toronto, who spoke about progress towards automatically understanding stories and creating complex image descriptions from videos. Fidler is currently exploiting the alignment between movies and books in order to build more descriptive captioning systems, and she spoke about how it is possible for a machine to automatically caption an image by mapping its story to a book, and then to teach the machine to assign combinations of these descriptions to new images. The end goal of this work is to create an automatic understanding of stories from long and complex videos. This data can then be used to help robots gain a more in-depth understanding of humans, and to ‘build conversation models in robots’.

After a busy morning attendees chatted over lunch, visited our exhibitors, and had the opportunity to write ‘deep learning’ in their own language on our RE•WORK world map.

The summit continued this afternoon with presentations from Helen Greiner, CyPhy Works, Drones Need to Learn; Stefanie Tellex, Brown University; Lex Fridman, MIT, Deep Learning for Self-Driving Cars; Maithra Raghu, Cornell University/Google Brain, Deep Understanding: Steps towards Interpreting the Internals of Neural Networks; Anatoly Gorchechnikov, Neurala, AI and the Bio Brain; Ben Klein, eBay, Finding Similar Listings at eBay Using Visual Similarity; and many more.

We’ll be back again with Deep Learning Boston tomorrow, covering the applications of deep learning in industry and hearing from the likes of Sam Zimmerman, Freebird, Deep Learning and Real-Time Flight Prediction; Anatoly Gorchechnikov, Neurala, AI and the Bio Brain; David Murgatroyd, Spotify, Agile Deep Learning; and Ben Klein, eBay, Finding Similar Listings at eBay Using Visual Similarity.

View the schedule for the remainder of the summit here.

Couldn’t make it to Boston? If you want to hear more from the Deep Learning Summit and Deep Learning In Healthcare Summit you can register here for on-demand video access.

To continue our Global Deep Learning Summit Series, find out about our upcoming events in London, Montreal, and San Francisco.

MEET THE WORLD’S LEADING AI PIONEERS IN THE ‘SILICON VALLEY OF DEEP LEARNING’

For the first time, RE•WORK will be bringing the increasingly popular Deep Learning Summit to Montreal, Canada, and we are excited to announce the attendance of Yoshua Bengio, Yann LeCun, and Geoffrey Hinton, who will be appearing on the Panel of Pioneers to share their expertise as founders of the deep learning revolution. Not only have they recently been named as 3 of Forbes’ ‘Top 6 Thinkers in AI and Machine Learning’, but these leaders of the field are responsible for nurturing deep learning throughout the 80s, 90s and early 00s when others were unable to see its potential.

Whilst deep learning experienced a lull, Hinton, LeCun and Bengio laboured away in their own time with the support of CIFAR, the Canadian Institute for Advanced Research in Toronto, where they fine-tuned their abstract computational methods and jokingly referred to themselves as the ‘deep learning conspiracy’.

The event will feature two tracks with both tracks running over 2 days. Track One will hone in on cutting edge science and research in deep learning, whilst Track Two will be focusing on the business applications.

Super Early Bird tickets are available until Friday 2nd of June, and there are limited spaces left for these heavily discounted places. Don’t miss your chance to see the godfathers of deep learning appear on ‘The Panel of Pioneers’ together. Register now to hear from the leading minds in deep learning in Montreal this October 10 – 11.

In addition to this phenomenal initial lineup of speakers, we are pleased to have received incredible support from the National Research Council Canada, as well as IVADO, the British Consulate-General Montreal, and Tourisme Montréal.

“As part of Montreal’s AI ecosystem, IVADO is thrilled to partner with the RE•WORK Deep Learning Summit”, said Gilles Savard, CEO. “Combining entrepreneurship, technology and science to re-work the future using emerging technology, this Summit will shine light on one of the pillars of the fourth industrial revolution, deep learning. It will also bring together industry professionals and science researchers working on data-driven innovation, a shared goal with IVADO’s mission.”

Additionally, on speaking with the Canadian National Research Council’s Industrial Technology Advisor, Benoit Julien, he said that “Montreal is privileged to be the world’s largest R&D hub in Deep Learning with over 150 researchers involved in projects implicating dozens of local and international high tech companies. The National Research Council Industrial Research Assistance Program (NRC-IRAP) is proud to support the small and medium size businesses of this fast growing sector. Our involvement in bringing this leading edge conference to Montreal is another demonstration of our strong commitment to help Canadian companies leverage and achieve the full potential of artificial intelligence.”

In addition to the Panel of Pioneers, we have several leading minds in deep learning confirmed to speak, including Roland Memisevic, Chief Scientist, Twenty Billion Neurons; Maithili Mavinkurve, Founder & COO, Sightline Innovation; Kyunghyun Cho, Assistant Professor of Computer Science and Data Science, New York University; Aaron Courville, Assistant Professor, University of Montreal, as well as many more leaders in the field still to be announced.

MEET THE PANEL OF PIONEERS

Yoshua Bengio

As one of the most cited Canadian computer scientists, Yoshua Bengio is (or has been) associate editor of the top journals in machine learning and neural networks as well as having authored two books and over 300 publications in deep learning, recurrent networks, probabilistic learning as well as other fields. His discussion will explore his main research ambition of understanding principles of learning that yield intelligence.

Yoshua Bengio is currently action editor for the Journal of Machine Learning Research, associate editor for the Neural Computation journal, editor for Foundations and Trends in Machine Learning, and has been associate editor for the Machine Learning Journal and the IEEE Transactions on Neural Networks.

Geoffrey Hinton

Currently an Engineering Fellow at Google, Geoffrey Hinton manages Brain Team Toronto, specialising in expanding deep learning. In addition to this, Hinton is a professor of computer science at the University of Toronto. For over two decades he has been publishing papers on the use of artificial neural networks to simulate human processing of information in machines. An important figure in the deep learning community, Hinton was one of the pioneering researchers to demonstrate the use of the generalised backpropagation algorithm for training multi-layer neural nets.


Geoffrey Hinton has published countless articles across the field of deep learning, and they can be accessed here.

Yann LeCun

Yann LeCun has been director of AI research at Facebook since 2013 and has received much acclaim for his pioneering work in computer vision and machine learning. LeCun is also a founding director of the NYU Center for Data Science, as well as Silver Professor at NYU on a part-time basis, working closely with the Center for Data Science and the Courant Institute of Mathematical Sciences.


View Yann LeCun’s published works and contributions here.

In recent discussions, AI experts have suggested that at the pace deep learning is currently progressing, it could soon be the backbone of many tech products that we use every day, and the work of the trio is the foundation for the next frontier in AI technology.

To hear from Bengio, Hinton and LeCun at the Montreal Deep Learning Summit this 10-11 October, register now and confirm your place. This event will be popular and tickets are limited. Contact Katie for more information at kpollitt@re-work.co.

GET INVOLVED

Interested in showcasing your startup?
The event provides the perfect opportunity to demo and showcase the latest AI technology and applications. If you know any innovative new companies working in the field, suggest them here.

Someone you’d like to hear from?
If you know of anyone in the industry who you’d like to hear present their research, you can suggest a speaker here.

RE•WORK have events scheduled up until October 2018. View the full calendar of events here.


AI software company, Celaton, receives Queen’s Award for Enterprise


Today, Milton Keynes-based artificial intelligence software company Celaton has been named a winner of the Queen’s Award for Enterprise in Innovation 2017. The Queen’s Awards for Enterprise are the UK’s most prestigious business awards, celebrating and encouraging business excellence.

Established in 2004, Celaton Limited has designed and implemented a machine learning software platform which enables better customer service, faster. An Innovation Award has been given for the development of inSTREAM.

Businesses receive a plethora of content on a daily basis from customers, suppliers and staff, which is highly labour intensive to process, make actionable and gain insights from. By applying machine learning algorithms, inSTREAM is able to understand the meaning and intent of incoming content, enrich it with other pertinent information and upload verified data into line-of-business systems. When inSTREAM is not confident, it will refer the decision to a human operator for verification, learning from their decisions and becoming more confident every time. inSTREAM can also create and personalise an appropriate response to each correspondence, meaning that excellent customer service is accelerated. It’s artificial intelligence, but to Celaton’s customers it’s the best knowledge worker they ever hired – and it means better customer service, compliance and financial performance.
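
The verify-when-unsure loop described above is a general human-in-the-loop pattern. A minimal Python sketch of the routing logic – the function names, labels and threshold are illustrative assumptions, not Celaton’s actual API:

```python
verified_examples = []  # human decisions kept for retraining

def ask_human(document):
    """Stand-in for a real review queue where an operator labels the item."""
    return "customer complaint"

def route(document, classify, threshold=0.9):
    """Act automatically when the model is confident; otherwise refer the
    decision to a person and learn from their verdict."""
    label, confidence = classify(document)
    if confidence >= threshold:
        return label, "automated"
    human_label = ask_human(document)
    verified_examples.append((document, human_label))  # feeds future training
    return human_label, "human-verified"

toy_model = lambda doc: ("customer complaint", 0.72)  # below threshold
print(route("Dear Sir, my delivery never arrived...", toy_model))
```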

To date the platform has successfully streamlined over 215 work streams across 35 brands and driven 25% growth of the company over the last 5 years. Celaton’s technology has enabled transformation in customer service at ambitious brands like Virgin Trains, ASOS and DixonsCarphone.

Andrew Anderson, Celaton CEO, said: “Winning this award is a fantastic accolade and I am extremely proud of what the Celaton team have achieved. It’s been a long journey, but we have never stopped believing in the potential of our unique technology solution. Celaton being recognized as a leader in innovation in the UK by the Queen and government reaffirms our commitment to that belief.”

Celaton press contact:

Chinia Green

E-mail: chinia.green@celaton.com

www.celaton.com


Pat Inc. Passes Facebook AI Research (FAIR) Tests with 100% Accuracy by Teaching Language to Machines


Pat Inc., the leader in Natural Language Understanding (NLU) technology, announced the successful results of its first set of independent tests, developed by Facebook AI Research (FAIR). The IQ tests are intended to make computers smarter by taking logical questions most humans can answer and putting artificial intelligence (AI) to the test against them – what FAIR dubbed the “bAbI Project.”

Pat Inc. takes a completely different approach to NLU by teaching language to machines using advanced linguistics, and successfully completed 6 bAbI tests to further verify the platform’s progress and evaluate its reading comprehension.

Further, the bAbI Project’s goal is automatic text understanding and reasoning, using as little data as possible to solve each question or puzzle. The aim is that each task tests a unique aspect of text and reasoning, and hence tests different capabilities of learning models. Pat required very little data to pass each test – a significant feat.
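
For a flavour of the data, the single-supporting-fact task in the publicly available bAbI set pairs a short story with a question, the answer, and the line number of the supporting fact (tab-separated in the raw files):

```
1 Mary moved to the bathroom.
2 John went to the hallway.
3 Where is Mary?	bathroom	1
```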

Despite all the investment and advances we’ve seen in machine intelligence over the last 60 years, we still can’t match a three-year-old for understanding meaning in natural language. Unlike a human, AI can’t understand the basics, let alone the nuances, of human language right now. And until it does, we can’t really communicate effectively, which limits the potential of machine intelligence to narrow or specialist domains – which is hardly AI.

Pat has set out to humanize conversation with machines, building AI’s next generation NLU API to deliver “Meaning-as-a-Service” by processing natural language and human conversations into structured information about its meaning. Developers will be able to leverage the platform, currently in private beta, to build intelligent agents and applications that you can talk or text to.

John Ball, Founder and CTO of Pat Inc.: “This is great progress – but for us, it’s just the beginning. We believe we can scale Pat beyond these tests to really solve the challenge of NLU. In the process, we can also meet the significant forecast demand for AI apps – forecast by IDC to be valued at $40 billion across Google, IBM, Amazon and Microsoft platforms by 2020. That’s why Pat’s further development will have significant impact on the AI we already depend on today – as well as the technology just around the corner. From driverless cars and wearables to home automation and networked applications, we can expect machines to provide us with more meaningful, helpful experiences and a natural, human-like interaction.”

Dr Hossein Eslambolchi, Technical Advisor at Facebook (former AT&T CTO): “Pat offers next generation Natural Language Understanding technology, capable of being the conversational user interface of the future.”

Professor Robert Van Valin, Jr., University of Düsseldorf and University at Buffalo, The State University of New York, PhD UC Berkeley: “Statistical systems can accomplish NLP to a considerable degree, but they can never achieve NLU, which involves meaning. The answer lies in linguistics. Pat Inc. solves that.”

FAIR IQ tasks are publicly available: http://fb.ai/babi

For a summary of the tests, and Pat’s results and approach to solving the tests in greater detail, please visit the site: http://bit.ly/patincbabi

Developers are now welcome to register for private beta access to Pat API: https://pat.ai

Can Machine Learning Help Us Identify the Origin of Several Medical Syndromes?

Machine learning is currently doing wonders, and its latest effort has been invested in locating the roots of various medical syndromes. In a recent study, syndromes such as Chronic Fatigue Syndrome, Gulf War Syndrome and Post-Accutane Syndrome were studied from the perspective of their genetic origins, pathways and other factors. Machine learning was applied to the research data and abstracts, and the conclusions were reached with the help of natural language processing and network analysis.

When It’s Smart To Play Dumb: Managing AI Recommendations


As machine learning and artificial intelligence evolve and begin to show interesting results, brands are exploring how to apply the technology to their products and services. The goal is simple: improve the customer experience while decreasing customer service costs.

But what does this even look like? With enough information about you, the customer, AI can lead to accurate recommendations. Or it can go a step further and take action on that information and clue you in later. So when should an AI-powered digital product check in before it does something, and when should it take matters into its own hands?
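
One simple way to frame that choice is as an expected-cost comparison: act autonomously only when the model’s uncertainty, weighted by the cost of acting wrongly, is cheaper than interrupting the user. A toy sketch, with all numbers invented:

```python
def should_act_autonomously(confidence, cost_of_error, cost_of_interrupt):
    """Act without asking when the expected cost of a wrong action is
    lower than the cost of checking in with the user first."""
    expected_error_cost = (1.0 - confidence) * cost_of_error
    return expected_error_cost < cost_of_interrupt

# Rebooking a flight is expensive to get wrong -> check in first.
print(should_act_autonomously(0.95, cost_of_error=500, cost_of_interrupt=1))   # False
# Silencing a duplicate notification is cheap to get wrong -> just do it.
print(should_act_autonomously(0.95, cost_of_error=0.1, cost_of_interrupt=1))   # True
```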
