The AI Times Monthly Newspaper

Curated Monthly News about Artificial Intelligence and Machine Learning
BOOTSTRAPPING AN INTELLIGENT RECOMMENDER SYSTEM

In many different web services, machine learning is being used for recommendation systems that help users tackle information overload: there are simply too many movies, songs, and books for users to usefully browse through. Without such tools, some services are rapidly falling behind and losing customers.

Travel is a little bit different, as the world does not have millions of cities, but finding new, interesting places to travel to is still a challenge. Years ago, Skyscanner started its ‘everywhere’ search, which lets users find the cheapest possible places to travel to, and subsequent research showed that price is only one of many factors that make a place attractive and interesting.

Neal Lathia, Senior Data Scientist at Skyscanner, will join us at the Machine Intelligence Summit in Amsterdam, to share how the company bootstrapped a destination recommender system using rich implicit data generated by millions of users, along with simple algorithmic approaches, and experiments that gauge how localised and personalised recommendation affects user engagement. I spoke to Neal ahead of the event to learn more.

Please tell us more about your work at Skyscanner.

As a Senior Data Scientist, my focus is on designing and building machine learning features for Skyscanner’s mobile app. Since joining just under a year ago, I’ve been working on projects related to recommendation and search result ranking. However, the app creates a very rich ecosystem of data, and we have already identified a number of other opportunities ahead.

What do you feel are the leading factors enabling recent advancements in machine learning for recommendation systems?

Many of the near state-of-the-art algorithms for recommendation systems have been open sourced – which is always welcome news! The research field has also always been driven by open data challenges. Most importantly, the research community has always taken a multidisciplinary approach – not all recommender system challenges need to be solved with machine learning.

Which industries have the biggest potential to be impacted by advancements in recommendation systems?

As someone who has a background in recommender systems, it is difficult for me to try to envisage any industry without the lens of recommendation potential. There are so many facets of life where personalised information could be useful – from healthcare to travel and beyond.

What developments can we expect to see in machine intelligence in the travel industry in the next 5 years?

Many of the best known travel sites online have a distinct focus on price – helping users find the cheapest flight, hotel, or car (Skyscanner is no exception to this!). As these services gain greater smartphone traction, and data (e.g., flight statuses and prices) becomes available in real-time, the travel industry is going to become a ripe domain for machine intelligence applications.

Outside of your own field, what area of machine learning do you think will see the most progress in the next 5 years?

There is no doubt that recent advances in neural networks have led to wonderful results in the areas of reinforcement learning and machine vision – I expect that progress to continue to accelerate. I’m looking forward to interesting products that may arise from these areas of research.

Neal Lathia will be speaking at the Machine Intelligence Summit, taking place alongside the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam on 28-29 June. Meet with and learn from leading experts about how AI will impact transport, manufacturing, healthcare, retail and more.
Other confirmed speakers include Roland Vollgraf, Research Lead, Zalando Research; Alexandros Karatzoglou, Scientific Director, Télefonica; Sven Behnke, Head of Autonomous Intelligent Systems Group, University of Bonn; Damian Borth, Director of the Deep Learning Competence Center, DFKI; Daniel Gebler, CTO, Picnic; and Adam Grzywaczewski, Deep Learning Solution Architect, NVIDIA. View more speakers and topics here.
Tickets are limited for this event. Register to attend now.

Opinions expressed in this interview may not represent the views of RE•WORK. As a result some opinions may even go against the views of RE•WORK but are posted in order to encourage debate and well-rounded knowledge sharing, and to allow alternate views to be presented to our community.

The Future of Investment in AI Could Be Decided By AI Itself

Many believe that artificial intelligence is set to reshape global economies and society, with AI expected to double economic growth. But what are the opportunities and challenges to investing in the current world of AI?

In 2016, the AI market was worth just $644 million, according to Tractica. This year that amount is due to almost double and continue to grow exponentially, with predictions showing it to reach $36.8 billion by 2025. With so many pioneering companies and world-leaders focusing their attention and funding on artificial intelligence fields such as deep learning and machine intelligence, the momentum for progress is well underway.

At the Machine Intelligence Summit on 28-29 June, Julius Rüssmann, Analyst at Earlybird Venture Capital, will share expertise on a panel exploring the challenges and opportunities of investing in AI. Julius believes investors themselves will be disrupted by the AI revolution. Could future investments in AI be decided by other artificially intelligent systems instead of humans? I spoke to him ahead of the summit to learn more.

What, in your opinion, are the most promising machine intelligence sectors to invest in?

Broadly defined, the service and manufacturing industries will probably benefit the most from an increased level of Machine Intelligence. By that I refer to the idea that large parts of the service industry, today still based on human work, can be heavily digitized, automated and even improved through intelligent software applications. Especially when you consider that the complexity inherent in each and every service inquiry grows exponentially with the data, machine intelligence is critical to ensure a smoothly running system.

Besides primarily consumer-focused service applications, the manufacturing industry will not remain the same. As machine intelligence develops, the sharp line between human-based work and robotics vanishes (think of collaborative robotics), until robotics effectively takes over the bulk of manufacturing processes and frees up billions of hours every day that had previously been allocated to human work (with all the problems that will arise from that).

One good example of Machine Intelligence that will redefine industries and humans alike is enriched or contextual computer vision. If software could accurately understand video content in context, that would change a lot (autonomous driving, healthcare and so forth).

Besides industries and application fields, we deem Deep Reinforcement Learning as well as Neural Networks to be critical to facilitate further use-cases of Machine Intelligence.

What are the characteristics you are looking for in a startup prior to investing?

First and foremost, we look at the people behind the startup. Why is that? Because we think that complementary skills and solid commitment are the key drivers in every successful company. Starting a company, especially in the field of technology, always implies that there will be tough times and complex problems to be solved. In those situations it is almost irrelevant how attractive your business model or the targeted market is. It is all about the team, to steer and pivot the company in the right direction (again). Besides outstanding people, we like to see an early product (prototypes) as evidence of credibility on execution, management and skills. In most cases that turn out to be successful, you will see some sort of market adoption or commercial traction early on, as customers and markets crave such a solution and are open to using the new offering (think of solving a real problem).

How will venture capitalists be impacted by machine intelligence in the next 5 years?

Venture capitalists (VCs) will face a two-sided effect. First of all, deal flow will significantly increase in the area of Machine Intelligence-based companies that are able to produce and deliver a solid value proposition to the market and are truly disruptive to specific industries. Today we still see a lot of evolutionary Machine Intelligence applications rather than revolutionary business models; in many cases what we see is an MI feature set, but not a standalone business case or company – this will change.

Secondly, VCs will be threatened and potentially even disrupted themselves by MI. In essence, VC is also just a service industry (we service our portfolios and LPs), and evidence clearly suggests that advanced MI will reach better (investment) decisions than humans do. However, the question remains how long this development will take; and yet, there is no clear evidence that MI will be capable of assessing or completely understanding humans. Considering what was stated at the beginning, namely that teams are key, it is not clear whether MI will necessarily reach better decisions.

What are the dangers of not distinguishing between hype and reality in AI?

As explained above, AI/ML/MI are today quite advanced but not ultimately “ready” yet. This means that a lot of application fields (e.g. the customer service industry) can clearly benefit from the introduction of smart algorithms (becoming more efficient, partially replacing humans, better results, faster, etc.), but the algorithms are not yet ripe for disrupting those industries effectively. So the danger, from a VC’s or founder’s perspective, is to overestimate the capabilities of the algorithms, or to underestimate the importance of human-based decisions and verification. Effectively, there is a lot of (technical) work still to be done, and markets are only about to open up or be created for AI applications. Time to market is critical for building up good investment cases.

Do VCs have a role in progressing the fields involved with AI?

VCs, as in every other technology field, are responsible for finding and funding the leading teams/brains/companies in the respective field. This work is critical to contribute to the technology’s further development, to facilitate market adoption, to help identify viable business models and so forth (offering capital to outstanding entrepreneurs improves the economy and the startup ecosystem in many ways). By finding and funding good technology companies in the AI field, VCs also help to steer public attention to this area and to create flagship projects that then attract more brain-power and top talent. As stated before, it is also the responsibility of VCs to be critical in their decision process, in order to prevent over-hype and bubble effects. It is a sort of educational responsibility that VCs have for the tech ecosystem, the economy and society.

Julius Rüssmann will be speaking at the Machine Intelligence Summit, taking place alongside the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam on 28-29 June. Meet with and learn from leading experts about how AI will impact transport, manufacturing, healthcare, retail and more.
Other confirmed speakers include Roland Vollgraf, Research Lead, Zalando Research; Alexandros Karatzoglou, Scientific Director, Télefonica; Sven Behnke, Head of Autonomous Intelligent Systems Group, University of Bonn; Damian Borth, Director of the Deep Learning Competence Center, DFKI; Daniel Gebler, CTO, Picnic; and Adam Grzywaczewski, Deep Learning Solution Architect, NVIDIA. View more speakers and topics here.
Tickets are limited for this event. Register to attend now.

Opinions expressed in this interview may not represent the views of RE•WORK. As a result some opinions may even go against the views of RE•WORK but are posted in order to encourage debate and well-rounded knowledge sharing, and to allow alternate views to be presented to our community.

Human Computer Interfaces: Neural Lacing and Brain Uploads – The Future Or Our Demise

Ever since Elon Musk announced his acquisition of Neuralink in March 2017, the news has been awash with criticism and commentaries on human computer interfaces.

Perhaps the most widely shared of these is the (excruciatingly long) Wait But Why article that broke the Neuralink news. In the article, Tim Urban explains in minute detail how the technology will work. If you haven’t had the pleasure yet, you can read ‘Neuralink and the Brain’s Magical Future’ here.

The concept of human computer interfaces isn’t entirely new, though the Elon Musk news has driven it into the mainstream. Acclaimed futurist and transhumanist guru, Ray Kurzweil, has been championing human-machine symbiosis for much of his career, predicting that “the nonbiological portion of our intelligence will predominate” by the 2030s. He anticipates the full-blown Singularity by 2045. It is only fairly recently, however, that companies like Neuralink (and even Facebook) have started declaring that the technology may well be within our sights.

Presuming that this really is the case (Noam Chomsky reckons it’s impossible), we must begin to ask ourselves whether the merging of humans with technology is actually a good idea.

Clearly, humankind hasn’t been doing such a good job of things here on Earth recently. If we can become vastly more intelligent through human computer interfaces, perhaps we could do much better. If integrating ourselves with technology allows us to more accurately assess the consequences of our actions, then maybe a brighter future is ahead.

Why Human Computer Interfaces?

The reason that Elon Musk has decided to invest in neural lacing is linked to his concerns about the threat of artificial superintelligence – the Singularity that Kurzweil is convinced will be upon us in less than 20 years. This is a concern shared by many of the most prominent minds in science, including Stephen Hawking, who has also warned that the Singularity could be the greatest threat facing humankind.

Musk argues that human computer interfaces are our best chance of surviving the superintelligence explosion. In a kind of ‘if you can’t beat them, join them’ move, Musk sees our greatest chance lying in our ability to merge ourselves with those superintelligent machines. If we can do this, we remain capable of competing with machines on intelligence, retain more control over them, and thus improve our chances of survival by symbiosis.

If going bionic is an unnerving thought to you, then that’s really only natural. Such a vast alteration to how we live is bound to be a terrifying prospect, just as it was at the dawn of the first Industrial Revolution.

If the human computer interface is what will save us from our demise at the hands of the machine, we must necessarily overcome our squeamishness about letting machines into our bodies. The squeamishness itself can be argued to be a primitive concern, one that must be moved beyond if we are to survive.

Conscious Evolution

It is, certainly, a drastic solution to a problem we’re not altogether sure will even arrive. Nonetheless, we could consider it to be an unprecedented step: the point at which a species becomes so advanced that it can enact its own evolution.

Evolution, historically, has always been a natural process enacted over thousands and millions of years. When the environment fails to be sufficiently nurturing, changes begin to occur over generations to enable a species to better adapt to that environment. Those which cannot adapt die.

Technology, too, has been a slow evolutionary process, which arguably began from the time the first ape used the first rock to smash the first nut. A gradual process of evolution over millennia, leading us to the point at which we transcend biology, metamorphosing into our next form: a hybrid species of our own creation, imbued with far superior intelligence to our predecessors. A species thus capable of strengthening ourselves, of barricading death outside the door, moving on to colonise the universe: the ultimate lifeform.

Survival of the Richest?

The question remains, however, as to who will have access to the technology. Presumably, at least initially, it will be an expensive process and thus only available to the wealthy. So the rich become smarter, and an uneducated underclass of comparatively useless, stupid billions will emerge.

The only hope for the 99% is that this newly intelligent elite decide that it is better for the human race to be unanimously improved. Alternatively, it’s a simple case of survival of the fittest – those with the resources to survive. This, of course, is just one ethical consideration that needs to be raised. It’s a return to the old Doctor Strangelove question: who is fit enough for the nuclear bunker?

The question of how long we have to answer this and other questions posed by the issue is controversial. Whilst Kurzweil and others predict a very short window before artificial superintelligence arrives, others insist that we are nowhere near artificially superintelligent machines nor functional human computer interfaces. Whilst the lack of a definitive answer may lead many to dismiss the issue outright, the smarter decision, of course, is to prepare ourselves so that we are ready if and when the time comes.


Link to Full Article: Read Here

TECHXLR8 ANNOUNCES LONDON TECH WEEK’S HEADLINE SPEAKERS

Curated by KNect365, TechXLR8 brings together 8 leading technology events at London’s ExCeL, June 13-15, 2017.
Forming part of London Tech Week, TechXLR8 will play a central part in a week-long festival of live technology events taking place across the UK’s capital. It celebrates and cultivates London as a global powerhouse of tech innovation by connecting the entire ecosystem both within London and beyond.

“We are delighted to be hosting a group of industry leading speakers at TechXLR8. The London Tech Week Headline Stage will showcase the absolute pinnacle of technology innovation. These men and women are shaping the ways we will interact with technology in the future, and we are excited to share their messages with London, and the world,” says Carolyn Dawson, Events Director & Managing Director of KNect365, an Informa Plc business.

TechXLR8 and London Tech Week are pleased to announce the LTW Headline Stage speakers for 2017.

Bibop G Gresta, COO and Chairman, Hyperloop Technologies
Marc Allera, CEO, EE
David Hanson, Hanson Robotics & Sophia the Robot
Janet Coyle, Principal Advisory for Growth, London & Partners
Emma Sinclair MBE, Co-Founder, Enterprise Jungle, Columnist, The Telegraph
Tamara Lohan, Founder & CTO, Mr & Mrs Smith
Stephen Kelly, CEO, Sage
Jodi Goldstein, Managing Director, Harvard Innovation Labs
Fiona Murray CBE, Associate Dean for Innovation, MIT Sloan School of Management; Co-Director, MIT Innovation Initiative
Robert Thomson, CEO, News Corp.
Richard Browning, Gravity
Marc Speichert, Global Chief Digital Officer, GSK
Further headline speakers are yet to be announced.

Press opportunities will be available on site for a selection of the headline speakers. If you would like to book interview time, please contact Rhian Wilkinson at Rhian.wilkinson@KNect365.com.

Interviews with TechXLR8 speakers from across the event spectrum are available for syndication from the links below:

What is TechXLR8?
Taking place 13-15 June in London, TechXLR8 incorporates 8 co-located events focusing on the cutting-edge technologies transforming industries and enterprises: Internet of Things, 5G, Virtual Reality & Augmented Reality, Connected Cars & Autonomous Vehicles, Cloud & DevOps, Artificial Intelligence & Machine Learning, and Apps.

Experience a show like no other with one shared exhibition, 20 tracks of content, 8 live demo zones, 40+ hours of networking, an awards ceremony and more. TechXLR8 is set to welcome 15,000+ attendees from over 8,000 companies over the three conference days.

Free visitor tickets for TechXLR8 include access to 300+ exhibitors and 50+ hours of content featuring over 150 industry leading speakers, including speakers from NASA, Barclays, HSBC, BT, Facebook, BP, BBC, UBS, Ralph Lauren and Lebara. What’s more, a free visitor ticket also provides access to 8 demo zones on the latest emerging tech in 5G, Smart Cities, Robotics, Drones and more.

Watch the TechXLR8 launch video here: https://goo.gl/TMtRFn

Access to the London Tech Week Headline Stage is included in the TechXLR8 Free Visitor Ticket; registration is open now.

Free visitor tickets for TechXLR8 are available here: https://goo.gl/YWLoMQ

About London Tech Week, 12-16 June
London Tech Week is a festival of events, taking place across the city and representing the entire technology ecosystem.

No other festival of live events brings together as many domestic and international tech specialists and enthusiasts to London for such a variety of networking, social, learning and business opportunities.
Since its launch in 2014 London Tech Week has included more than 700 events and has welcomed delegations from around the world.

London Tech Week 2017 is organised by founding partners, KNect365, London & Partners and Tech London Advocates, with support from strategic partners Tech City UK, ExCeL London, DIT and techUK.
More information on what’s happening during the week can be found at https://londontechweek.com/

Speakers Wanted

Many of the meetup and conference events are looking for guest speakers to present at one of their meetings. We frequently get asked if we know of anyone that is available to speak at events.

Speakers can be Authors, Academics or Professionals working on Artificial Intelligence. These meetings typically allow the opportunity for some self-promotion or a sales pitch.

If you would like to be added onto our list of potential speakers, please send us a message with some details of the locations and topics you can cover.

Speakers Contact Us

M.I.E. SUMMIT BERLIN 2017 – 20th June

The world’s first open-space Machine Intelligence summit will be held on the 20th of June 2017.

This event will give you the opportunity to learn, discuss and network with your peers in the MI field. Set against the backdrop of one of Berlin’s most vibrant and artistic locations, you can break free from traditional conference rooms and share a drink in a typical Berliner Biergarten.

The M.I.E Summit Berlin 2017 will provide you with two in-depth event tracks (keynotes, workshops, and panels) as well as over 20 leading speakers and unparalleled networking opportunities.

The following topics will make this event one of the most inspiring, entertaining and thought-provoking this year:

  • What exactly does AI mean for all industries, from medicine to cars, from cognitive to neural networks?
  • Can machines really outperform humans? What if AI systems become better than humans at all cognitive tasks?
  • Should you worry whether your job is going to be replaced by robots? If yes, what can you do about it?
  • You work on innovation and are eager to find out how AI could apply to your business?
  • How can we benefit from the great advancements brought about by AI while taking into account ethical and economical considerations?
  • Is investing in AI startups a good idea? What’s behind the hype?

We are pleased to offer a 30% discount for this event when using the code “miepartners”.

https://www.eventbrite.com/e/mie-summit-berlin-2017-can-machine-ai-outperform-human-tickets-33207267832

Strata London Community Lightning Talks | Tuesday, 23 May

On Tuesday, 23 May, O’Reilly is hosting Community Lightning Talks to highlight the projects from the London data community for an evening of networking and sharing stories. The theme of Strata is big data, pervasive computing, and data science—but presenters are welcome to talk about anything that can enlighten and inspire the community.
This event is free and open to the public (Strata attendees do not need to register separately for this event).

http://www.oreilly.com/pub/cpc/80557

Expo Plus Pass: What’s Included
The Strata Data Conference in London Expo Plus Pass (£275) includes a lot more than access to the Expo Hall. Not only do you get to meet, greet, and network with 5,000+ like minds, you also get access to the expo hall, sponsored sessions, up to two technical sessions (Wednesday and/or Thursday), and special events like Speed Networking (Wednesday and Thursday) and the Expo Hall Reception.

http://www.oreilly.com/pub/cpc/80559

“AI Neutrality” – A proposed manifesto for artificial intelligence user experience design

What makes a great artificial intelligence (AI) driven user experience? Here are my thoughts…

1. Design AI services end to end – the disruptors that have transformed the travel, holiday and retail sectors over the last twenty years succeeded by aggressively focusing on continually improving their own single-channel online experience. AI user experience design must also adopt this strict one-channel approach to service delivery – every user journey should be simple, relevant, no-fuss and always getting better, because it’s being delivered by an artificial intelligence end to end.

2. Go beyond mobile – the interconnectivity of AI enables any environment or physical object to positively affect all of our five senses (such as connected home technology like heating and lighting devices that respond to a user’s mood). AI design should always be pushing to transcend the user interface constraints of existing service platforms (particularly the visual and audio experience of mobile) to truly reflect and improve how we use our senses to interact with the world around us.

3. Addressable media is a key user journey – AI has the potential to utilise a complex range of historic and contextual customer data to deliver targeted, personalised advertising (UK broadcasters, for example, are adopting programmatic technology to deliver specific adverts to individual households in real time). Yet if designed poorly, such disruptive engagement risks coming across like hard selling that overwhelms or irritates a customer (consider the negative reaction of customers to pop-up web ads, which apply a similar approach). Consequently, it’s vital that AI-driven addressable media is treated as a form of user experience that requires research, design and testing to ensure customers are empowered to consume it on their own terms.

4. Hardwire ethics and sustainability – the positive disruption to our lives from social media has enabled these services to grow rapidly and organically to billions of users worldwide. Yet this has also led to these platforms becoming so big that it is challenging for their service providers to effectively manage and safeguard the user content they share. Drawing from this experience, and combined with public calls for the proactive regulation of AI, it is essential that artificial intelligence products and services have the right ethics and sustainability values in their core design, as they are likely to grow even faster and bigger than social media.

5. Champion “AI Neutrality” – artificial intelligence has the power to transform all our lives like the internet before it. A fundamental principle driving the success of the web has been “net neutrality” – that internet data services should be supplied as a form of utility (like electricity, gas, water) in a non-discriminatory way to all customers. Access to simple AI services should be similarly “neutral” – a basic human right that is complemented by differentiated, chargeable products and services from over-the-top producers.

@markHDigital

BOSTON DEEP LEARNING IN HEALTHCARE SUMMIT – DAY 2 HIGHLIGHTS

After a great first day at the Deep Learning In Healthcare Summit in Boston, we’re back for day 2 where this morning’s discussions have brought together leading speakers to share their cutting-edge research in the industry and how it’s disrupting healthcare.

Mason Victors, lead data scientist of Recursion Pharmaceuticals, kicked off this morning’s discussion ahead of his talk later in the day. Every year, thousands of new drugs are unsuccessful in being brought to market and are often left in freezers and forgotten. Recursion Pharmaceuticals repurposes these, identifying known drugs as potential treatments that can reach patients quickly. By combining the best elements of technology with the best elements of science, the ability to answer really complex questions increases exponentially.

This morning’s sessions saw startups in healthcare presenting their latest research, and we first heard from the Director of Machine Learning at Arterys, Daniel Golden.

‘About 6 million adults in the US are experiencing heart failure right now’, so making faster and more accurate assessments of their conditions could hugely reduce this. As the first company with FDA approval for clinical cloud-based deep learning in healthcare, Arterys ‘aim to help clinicians make faster and more accurate assessments of the volumes of blood ejected from the heart in one cardiac cycle’. This is known as ejection fraction. Arterys are moving towards solving the problem of cutting healthcare costs and improving access: where a human analyst would expect to take between 30 minutes and an hour analysing images to make a diagnosis, Arterys takes an average of 15 seconds to produce a result for one case. We heard about the architecture they are working with and the ‘critical challenge in using data that was used to support clinical care as opposed to machine learning’. This means that Arterys are often presented with incomplete images to analyse, and they have therefore trained their model to recognise the missing sections of the image and fill in the blanks. ‘Roughly 20% of the data we work from is complete, so without this model 80% of our data would be redundant.’
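As a rough illustration of the ‘fill in the blanks’ idea (a minimal sketch under assumed details, not Arterys’ actual architecture), a network can be trained to reconstruct regions that have been masked out, with the loss applied only where data was missing:

```python
import torch
import torch.nn as nn

# Toy "fill in the blanks" setup: hide part of each image and train the
# network to reconstruct the hidden region from the visible context.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

images = torch.randn(4, 1, 64, 64)   # stand-in batch; real inputs would be MR slices
mask = torch.ones_like(images)
mask[:, :, 24:40, 24:40] = 0         # simulate the missing section of each image

reconstruction = model(images * mask)
# Penalise reconstruction error only on the masked (missing) region.
loss = ((reconstruction - images) ** 2 * (1 - mask)).mean()
loss.backward()
```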

@ChiefScientist: Daniel Golden of Arterys understands heart disease with #DeepLearning on MRI #datagrid

The next startup to present was PulseData, who are trying to overcome the ‘laborious task of building robust data pipelines for inconsistent datasets.’ We heard from CEO and founder Teddy Cha, who said they aim to track ‘transactional, temporal, and ambient data on patients that isn’t sitting right in front of doctors.’ This involves amalgamating historical data from various sources such as insurance claims, Facebook posts, private health visits and more. There are numerous questions to be asked when making accurate medical predictions, and the need to build an individual dataset for each one is eliminated by PulseData’s node-based approach, where machines treat data work and features as abstractions or “nodes”. This provides ‘a way to track a calculation where you have a sequence of events that each perform their own calculation, so you don’t need to worry about any of them independently’ each time a new question is asked. Powerfully, nodes can be made dependent on other nodes, allowing them to rapidly assemble data pipelines and swap variables each time the question changes, rather than rewriting the pipeline from scratch, as sketched below.
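To make the node idea concrete, here is a minimal sketch of such a pipeline (illustrative names only, not PulseData’s actual API): each node performs its own calculation, caches the result, and can depend on other nodes, so asking a new question reuses the existing graph instead of rebuilding it.

```python
from typing import Callable, List, Optional

class Node:
    """One step in a data pipeline: performs its own calculation and caches it."""
    def __init__(self, name: str, func: Callable, deps: Optional[List["Node"]] = None):
        self.name = name
        self.func = func
        self.deps = deps or []
        self._cache = None
        self._computed = False

    def value(self):
        # Resolve dependencies first, then apply this node's own calculation once.
        if not self._computed:
            inputs = [dep.value() for dep in self.deps]
            self._cache = self.func(*inputs)
            self._computed = True
        return self._cache

# Illustrative pipeline: raw claims -> cleaned records -> a per-patient feature.
claims = Node("claims", lambda: [{"patient": 1, "visits": 3}, {"patient": 2, "visits": 7}])
cleaned = Node("cleaned", lambda rows: [r for r in rows if r["visits"] > 0], [claims])
avg_visits = Node("avg_visits",
                  lambda rows: sum(r["visits"] for r in rows) / len(rows),
                  [cleaned])

print(avg_visits.value())  # 5.0 -- a new question adds nodes; claims/cleaned are reused
```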

Hunter Jackson, Co-Founder and Chief Scientific Officer of Proscia, spoke about their work in changing the way doctors diagnose cancer through the analysis of pathology images. Where many companies ‘are taking a direct AI approach, (Proscia) are taking more of a cancer specific approach’. Billions of slides are analysed every year in the US alone, many of which are hidden away on drives where their value is restricted to whatever on-site researchers can uncover. Applying this approach to image analysis, Jackson explained how ‘one slide can produce millions and millions of patches’ which can help answer their key questions: ‘who shall we treat, how should we treat them, and did their treatments work?’. One of the key obstacles they previously faced was getting hold of the medical images; however, we heard that ‘with the help of clinical partners, we are developing deep learning powered tools that activate those digital slides to address problems in the clinic, create opportunities for translational research and data licensing, and inform disease prognosis and therapeutic plans.’ Another issue covered was the desire to predict the likelihood of recurrence in cancers: bringing pathology into cancer prediction enables more predictive biomarkers and more accurate predictions. We heard about some recent successes in this domain, including identifying metastases in breast and gastric lymph nodes with deep convolutional neural networks, and using deep learning to predict lymph node metastasis from the primary tumour.


Michael Dietz from Waya.AI continued the discussion on medical images and explained how they are working to improve image classification with generative adversarial networks (GANs) to ‘turn your smartphone into a dermatologist, and eventually into a doctor’. A high percentage of the population has access to a smartphone, and having a dermatologist constantly on hand to photograph and diagnose skin conditions could help prevent a multitude of conditions. GANs are one of the most promising areas of deep learning research, and we heard about how they can be used in the real world for unsupervised learning. The models that Dietz and his team are using ‘learn entirely from data so there’s no predisposition or human bias – we’re letting the machine learn entirely from data.’ Waya.AI uses GANs for several different tasks in skin cancer detection, and we saw the improved results obtained when using these as opposed to traditional methods. Although GANs are accurate and efficient, Dietz explained that they are ‘really hard to train and we had to have a hack to nudge them to work, which is an unstable situation’. However, Waya.AI have found a method to overcome this by calculating different distances to make a reasonable and efficient approximation. Through this application of AI in healthcare, the goal is to find the causes and mechanisms of disease by analysing the patterns that connect everything together.
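For readers unfamiliar with the technique, the basic adversarial setup looks roughly like the sketch below (toy networks and data, not Waya.AI’s models). The instability Dietz describes comes from this two-player loop; distance-based variants such as Wasserstein GANs swap the binary cross-entropy objective for an approximation of the distance between the real and generated distributions.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator; real dermatology models would be deep CNNs.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 2)  # stand-in for a batch of real samples

# Discriminator step: learn to tell real data from generated data.
fake = G(torch.randn(32, 16)).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: learn to fool the discriminator.
fake = G(torch.randn(32, 16))
loss_g = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```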

@joeddav: @waya_ai uses GANs to create synthetic data featurization of medical prediction models #reworkdl #deeplearning

Rounding off the morning of startup showcases, Fabian Schmich, data scientist at Roche, began his discussion by building on the issues faced by Proscia: pathology is a fairly old field in which little has changed – ‘people are using the same old protocols with slide imaging’ – so there is a lot of progress to be made. There is an increasing demand for tissue analysis in drug development, and Schmich explained how Roche are improving tissue annotation with deep learning. ‘We are now in the middle of a digital revolution in pathology’, which allows Roche to quantitatively analyse cells across the whole image – a game changer in pathology. The problem with current pathology lies in human error and inconsistencies, where technicians hand-draw their analyses onto low resolution images. Deep learning, however, overcomes this by segmenting aspects of the images to get a deeper analysis. Roche ‘take an architecture and convolutionalise it by changing the last couple of fully connected layers and adding upsample layers’, which results in each image having multiple complex labels rather than being restricted to one. Schmich went on to explain the challenges in data mining these images, and how they can tap into infrastructure they already have to leverage data to train deep neural networks, plugging in and testing different architectures.

As the discussions continue into the afternoon, we’re looking forward to hearing from Biswaroop (Dusty) Maiumdar from IBM Watson Health, who will discuss Empowering the Future of Cognitive Computing in Healthcare, amongst several other healthcare leaders in deep learning.

Couldn’t make it to Boston?
If you want to hear more from the Deep Learning Summit and Deep Learning In Healthcare Summit you can register now for on-demand video access. To continue our Global Deep Learning Summit Series, find out about our upcoming events in London, Montreal, and San Francisco.

Our next Deep Learning In Healthcare Summit will be held in Hong Kong on 12 & 13 April 2018.

BOSTON DEEP LEARNING SUMMIT – DAY 1 HIGHLIGHTS

As day one of the Deep Learning Summit draws to a close, we’re taking a look at some of the highlights. What did we learn, and what did you miss? With over 30 speakers discussing cutting edge technologies and research, there have been a series of varied and insightful presentations.

Sampriti Bhattacharyya kicked off this morning’s discussion by introducing the influential companies, researchers and innovative startups involved. Named in Forbes’ 30 Under 30 last year, the founder of Hydroswarm had, by the age of 28, created an underwater drone that maps ocean floors and explores the deep sea, and she spoke about the impact of deep learning on technologies and how it’s affecting our everyday lives.

Deep learning allows computers to learn from experience and understand the world in terms of hierarchical concepts, connecting each concept back to a simpler previous step. With these technologies being implemented so rapidly, discussions covered the impact of deep learning as a disruptive trend in business and industry. How will you be impacted?

Facebook is at the forefront of these implementations, and with ‘⅕ of the population using Facebook, the kind of complexities they experience are unimaginable’. We heard from research engineer Andrew Tulloch, who explained how the millions of accounts are optimised to ‘receive the best user experience by running ML models, computing trillions of predictions every day.’ He explained that to ‘surface the right content at the right time presents an event prediction problem’. We heard about timeline prioritisation, where Facebook can ‘go through the history of your photos and select what we think to be semantically pleasing posts’, as well as the explosion of video content over the past two years, where the same methods of classification apply to both photos and video. The discussion also covered natural language processing in translation: Facebook are running billions of translations every day which need to be as accurate as possible, and we heard how they’re overcoming these complexities to deliver accurate translations. He also drew on the barriers previously faced in implementing machine learning across devices, and its impact on mobile. ‘Over a billion people use Facebook on mobile only’, and on mobile the ‘baseline computation unit is challenging to get good performance’ from, so building implementations and specifications for mobile is very important.

@dmurga: Cool to hear Andrew Tulloch of @facebook talk about #DeepLearning on mobile for better privacy, latency, and offline performance. #reworkDL

We next heard from Sangram Ganguly from the NASA Earth Exchange Platform, who continued the discussion of image processing. The vision of the NASA-EEP is ‘to provide science as a service to the Earth science community addressing global environmental challenges’ and to ‘improve efficiency and expand the scope of NASA earth science tech, research and application programs’. Satellites capture images and are able to ‘create high resolution maps to predict climate changes and make projections for climate impact studies’. One problem that Ganguly faced in his research, however, was the reliance on physics-based models: as the datasets increase, it’s important to blend these models with deep learning and machine learning to optimise the performance and speed of the system and create the most successful models. This fusion of physics and machine learning is the driving force of high resolution airborne image analysis and classification.

Next up was Dilip Krishnan from Google, who went on to explore new approaches to unsupervised domain adaptation. The goal is to ‘train a machine learning model on a source dataset and apply this on a target dataset, where it’s assumed that there are no labels at all in unsupervised domain adaptation’. He discussed the difficulties in implementation and shared two approaches to the problem. The first approach, ‘mapping source domain features to target domain features’, focuses on learning a shared representation between the two domains, where we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. The second approach, which has been more popular and effective, is ‘end-to-end learning of domain invariant features with a similarity loss.’ Krishnan proposed a new model that learns, in an unsupervised manner, a transformation in pixel space from one domain to the other. The generative adversarial network (GAN)-based method adapts synthetic images to make them appear more realistic. This method is one ‘that improves upon state of the art feature level unsupervised domain recognition and adaptation’.
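As one concrete instantiation of a similarity loss between domains (a generic sketch; the talk’s exact loss may differ), maximum mean discrepancy compares batches of source and target features and shrinks as the two feature distributions come to match:

```python
import torch

def mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Maximum mean discrepancy with an RBF kernel: small when the two
    feature distributions look alike, so minimising it pushes the encoder
    toward a domain-invariant representation."""
    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Toy features from a shared encoder applied to both domains.
src_feats = torch.randn(64, 128)  # labelled source batch
tgt_feats = torch.randn(64, 128)  # unlabelled target batch
similarity_loss = mmd(src_feats, tgt_feats)
# total_loss = task_loss_on_source + lambda_weight * similarity_loss
```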

@dmurga: Nice @Google #DeepLearning arch reflecting task intuition: separate private & shared info for interpretable #DomainAdaptation. #reworkDL

After a coffee break and plenty of networking, Leonid Sigal from Disney Research expanded on the difficulties of relying on large-scale annotated datasets, and explained how they are currently implementing a class of semantic manifold embedding approaches designed to perform well when the necessary data is unavailable. For example, to accurately classify an image you may need more than 1,000 similar images to teach the machine to recognise the classification, but very few specific concepts have this many images to sample from – ‘zebras climbing trees’, for example, only has one or two. Disney need to be able to localise images and attach linguistic descriptions to them, and this is where it becomes much more complicated. Sigal explained how they are currently working with embedding methods using algorithms with weak or no supervision, so that the algorithms can work out how to classify each image much more efficiently. Sigal’s work in deep learning has helped his problem solving not only in image classification, but also in character animation and retargeting, and he is currently researching action recognition and object detection and categorisation.

@joeddav: Interesting approach to incorporating semantics into object detection and localization from @DisneyResearch #reworkdl #deeplearning

With such an abundance of applications for visual recognition, it is easy to forget that vision is not the only sense available to us, and SoundNet’s Carl Vondrick discussed his work in training machines to understand sound through image tagging. Whilst sources for labelled images tend to be readily available, ‘it’s difficult to get lots of data for specific sounds as there isn’t the same availability as there is in image recognition.’ To overcome this, Carl explained how SoundNet can ‘take advantage of the natural synchronisation between vision and sound in videos to train machines to recognise sound.’ He explained how they can take the models in vision already trained to recognise images and ‘synchronise it with sounds and use it as a teacher.’ After testing the audience by asking us to identify specific sounds and running our results against SoundNet, it transpired that SoundNet’s analysis was far superior to the humans’. Where a sound such as bubbling water, breathing and splashing was immediately identified by SoundNet as scuba diving, the audience were unable to pick apart the types of sound and draw this conclusion as easily as the system.
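The teacher–student trick can be sketched in a few lines (toy tensors below, not SoundNet’s actual networks): the vision model’s predicted distribution over scene classes for a video becomes the training target for an audio model that sees only the soundtrack, so no hand-labelled sounds are needed.

```python
import torch
import torch.nn.functional as F

# Stand-in outputs: a pretrained vision "teacher" scores each clip's frames
# over scene classes; the audio "student" scores the same clips from sound alone.
vision_logits = torch.randn(8, 100)                     # teacher, fixed
audio_logits = torch.randn(8, 100, requires_grad=True)  # student, trainable

teacher_probs = F.softmax(vision_logits, dim=1)
student_log_probs = F.log_softmax(audio_logits, dim=1)

# KL divergence: the synchronised video supplies the supervision signal.
loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
loss.backward()
```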

@datatrell: Some may think sound learning is a solved problem, but there’s much that can be done. Excited for @cvondrick‘s talk! #reworkdl

For autonomous systems to be successful, they not only need to understand these sounds and the visual world, but also to communicate that understanding with humans. To expand on this and wrap up this morning’s presentations, we heard from Sanja Fidler from the University of Toronto, who spoke about the progress towards automatically understanding stories and creating complex image descriptions from videos. Fidler is currently exploiting the alignment between movies and books in order to build more descriptive captioning systems, and she spoke about how it is possible for a machine to automatically caption an image by mapping its story to a book, and then to teach the machine to assign combinations of these descriptions to new images. The end goal of this work is to create an automatic understanding of stories from long and complex videos. This data can then be used to help robots gain a more in-depth understanding of humans, and to ‘build conversation models in robots’.

After a busy morning attendees chatted over lunch, visited our exhibitors, and had the opportunity to write ‘deep learning’ in their own language on our RE•WORK world map.

The summit continued this afternoon with presentations from Helen Greiner, CyPhy Works, Drones Need to Learn; Stefanie Tellex, Brown University; Lex Fridman, MIT, Deep Learning for Self-Driving Cars; Maithra Raghu, Cornell University/Google Brain, Deep Understanding: Steps towards Interpreting the Internals of Neural Networks; Anatoly Gorchechnikov, Neurala, AI and the Bio Brain; Ben Klein, eBay, Finding Similar Listings at eBay Using Visual Similarity; and many more.

We’ll be back again with Deep Learning Boston tomorrow, covering the applications of deep learning in industry and hearing from the likes of Sam Zimmerman, Freebird, Deep Learning and Real-Time Flight Prediction; Anatoly Gorchechnikov, Neurala, AI and the Bio Brain; David Murgatroyd, Spotify, Agile Deep Learning; and Ben Klein, eBay, Finding Similar Listings at eBay Using Visual Similarity.

View the schedule for the remainder of the summit here.

Couldn’t make it to Boston? If you want to hear more from the Deep Learning Summit and Deep Learning In Healthcare Summit you can register here for on-demand video access.

To continue our Global Deep Learning Summit Series, find out about our upcoming events in London, Montreal, and San Francisco.

MEET THE WORLD’S LEADING AI PIONEERS IN THE ‘SILICON VALLEY OF DEEP LEARNING’

For the first time, RE•WORK will be bringing the increasingly popular Deep Learning Summit to Montreal, Canada, and are excited to announce the attendance of Yoshua Bengio, Yann LeCun, and Geoffrey Hinton who will be appearing on the Panel of Pioneers to share their expertise as the founders of the deep learning revolution. Not only have they recently been named as 3 of Forbes’ ‘Top 6 Thinkers in AI and Machine Learning’, but these leaders of the field are responsible for nurturing deep learning throughout the 80s, 90s and early 00s when others were unable to see its potential.

Whilst deep learning experienced a lull, Hinton, LeCun and Bengio laboured away in their own time at CIFAR, a research centre in Toronto where they fine tuned their abstract computational methods, and jokingly referred to themselves as the ‘deep learning conspiracy’.

The event will feature two tracks with both tracks running over 2 days. Track One will hone in on cutting edge science and research in deep learning, whilst Track Two will be focusing on the business applications.

Super Early Bird tickets are available until Friday 2nd of June, and there are limited spaces left for these heavily discounted places. Don’t miss your chance to see the godfathers of deep learning appear on ‘The Panel of Pioneers’ together. Register now to hear from the leading minds in deep learning in Montreal this October 10 – 11.

In addition to this phenomenal initial lineup of speakers, we are pleased to have received incredible support from the National Research Council Canada, as well as IVADO, the British Consulate-General Montreal, and Tourisme Montréal.

“As part of Montreal’s AI ecosystem, IVADO is thrilled to partner with the RE•WORK Deep Learning Summit”, said Gilles Savard, CEO. “Combining entrepreneurship, technology and science to re-work the future using emerging technology, this Summit will shine light on one of the pillars of the fourth industrial revolution, deep learning. It will also bring together industry professionals and science researchers working on data-driven innovation, a shared goal with IVADO’s mission.”

Additionally, on speaking with the Canadian National Research Council’s Industrial Technology Advisor, Benoit Julien, he said that “Montreal is privileged to be the world’s largest R&D hub in Deep Learning with over 150 researchers involved in projects implicating dozens of local and international high tech companies. The National Research Council Industrial Research Assistance Program (NRC-IRAP) is proud to support the small and medium size businesses of this fast growing sector. Our involvement in bringing this leading edge conference to Montreal is another demonstration of our strong commitment to help Canadian companies leverage and achieve the full potential of artificial intelligence.”

In addition to the Panel of Pioneers, we have several leading minds in deep learning confirmed to speak, including Roland Memisevic, Chief Scientist, Twenty Billion Neurons; Maithili Mavinkurve, Founder & COO, Sightline Innovation; Kyunghyun Cho, Assistant Professor of Computer Science and Data Science, New York University; Aaron Courville, Assistant Professor, University of Montreal; as well as many more leaders in the field still to be announced.

MEET THE PANEL OF PIONEERS

Yoshua Bengio

As one of the most cited Canadian computer scientists, Yoshua Bengio is (or has been) associate editor of the top journals in machine learning and neural networks as well as having authored two books and over 300 publications in deep learning, recurrent networks, probabilistic learning as well as other fields. His discussion will explore his main research ambition of understanding principles of learning that yield intelligence.

Yoshua Bengio is currently action editor for the Journal of Machine Learning Research, associate editor for the Neural Computation journal, editor for Foundations and Trends in Machine Learning, and has been associate editor for the Machine Learning Journal and the IEEE Transactions on Neural Networks.

Geoffrey Hinton

Currently an Engineering Fellow at Google, Geoffrey Hinton manages Brain Team Toronto, specialising in expanding deep learning. In addition to this, Hinton is a professor of computer science at the University of Toronto. For over two decades he has been publishing papers on the use of artificial neural networks to simulate human processing of information in machines. An important figure in the deep learning community, Hinton was one of the pioneering researchers to present the use of the generalised backpropagation algorithm for training multi-layer neural nets.

Geoffrey Hinton has published countless articles across the field of deep learning, and they can be accessed here.

Yann LeCun

Yann LeCun has been director of research at Facebook since 2013 and has received much acclaim for his pioneering work in computer vision and machine learning. LeCun is also a founding director of the NYU Center for Data Science, as well as Silver Professor at NYU on a part-time basis, working closely with its data scientists and the Courant Institute of Mathematical Sciences.

View Yann LeCun’s published works and contributions here.

In recent discussions, AI experts have suggested that at the pace deep learning is currently progressing, it could soon be the backbone of many tech products that we use every day, and the work of the trio is the foundation for the next frontier in AI technology.

To hear from Bengio, Hinton and LeCun at the Montreal Deep Learning Summit this 10-11 October, register now and confirm your place. This event will be popular and tickets are limited. Contact Katie for more information at kpollitt@re-work.co.

GET INVOLVED

Interested in showcasing your startup?
The event provides the perfect opportunity to demo and showcase the latest AI technology and applications. If you know any innovative new companies working in the field, suggest them here.

Someone you’d like to hear from?
If you know of anyone in the industry who you’d like to hear present their research, you can suggest a speaker here.

RE•WORK have events scheduled up until October 2018. View the full calendar of events here.

AI software company, Celaton, receives Queen’s Award for Enterprise

Today, Milton Keynes-based artificial intelligence software company Celaton has been named a winner of the Queen’s Award for Enterprise in Innovation 2017. The Queen’s Awards for Enterprise are the UK’s most prestigious business awards, celebrating and encouraging business excellence.

Established in 2004, Celaton Limited has designed and implemented a machine learning software platform which enables better customer service, faster. An Innovation Award has been given for the development of inSTREAM.

Businesses receive a plethora of content on a daily basis from customers, suppliers and staff, which is highly labour-intensive to process, make actionable and gain insights from. By applying machine learning algorithms, inSTREAM is able to understand the meaning and intent of incoming content, enrich it with other pertinent information and upload verified data into line-of-business systems. When inSTREAM is not confident, it refers the decision to a human operator for verification, learning from their decisions and becoming more confident every time. inSTREAM can also create and personalise an appropriate response to each correspondence, meaning that excellent customer service is accelerated. It’s artificial intelligence, but to Celaton’s customers it’s the best knowledge worker they ever hired, and it means better customer service, compliance and financial performance.
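A minimal sketch of that confidence-based routing (illustrative threshold and names, not Celaton’s actual implementation) might look like this:

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # illustrative cut-off, not Celaton's actual setting
training_examples = []      # human decisions fed back to improve the model

@dataclass
class Prediction:
    label: str
    confidence: float

def route(doc, predict, human_review):
    """Auto-process confident predictions; refer uncertain ones to a person,
    whose verified decision becomes a new training example."""
    pred = predict(doc)
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return pred.label
    verified = human_review(doc, pred)
    training_examples.append((doc, verified))
    return verified

# Toy usage: a low-confidence prediction is referred to the human operator.
result = route("customer complaint about a delayed train",
               predict=lambda d: Prediction("complaint", 0.72),
               human_review=lambda d, p: "complaint")
```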

To date the platform has successfully streamlined over 215 work streams across 35 brands and driven 25% growth of the company over the last 5 years. Celaton’s technology has enabled transformation in customer service at ambitious brands like Virgin Trains, ASOS and DixonsCarphone.

Andrew Anderson, Celaton CEO said “Winning this award is a fantastic accolade and I am extremely proud of what the Celaton team have achieved. It’s been a long journey, but we have never stopped believing in the potential of our unique technology solution. Celaton being recognized as a leader in innovation in the UK by the Queen and government reaffirms our commitment to that belief.”  

Celaton press contact:

Chinia Green

E-mail: chinia.green@celaton.com

www.celaton.com

Pat Inc. Passes Facebook AI Research (FAIR) Tests with 100% Accuracy by Teaching Language to Machines

Pat Inc., the leader in Natural Language Understanding (NLU) technology, announced the successful results of its first set of independent tests, developed by Facebook AI Research (FAIR). The IQ tests are intended to train computers to be smarter by using logical questions most humans can answer and putting them to the test against artificial intelligence (AI), i.e. what FAIR dubbed the “bAbI Project.”

Pat Inc. takes a completely different approach to NLU by teaching language to machines using advanced linguistics, and successfully completed 6 bAbI tests to further verify the platform’s progress and evaluate its reading comprehension.

Further, the bAbI Project’s goal is automatic text understanding and reasoning, using as little data as possible to solve questions and puzzles. The aim is for each task to test a unique aspect of text and reasoning, and hence to test different capabilities of learning models. Pat required very little data to pass each test successfully, a significant feat.

Despite all the investment and advances we’ve seen in machine intelligence over the last 60 years, we still can’t match a three-year-old for understanding meaning in natural language. Unlike a human, AI can’t understand the basics, let alone the nuances, of human language right now. And until it does, we can’t really communicate effectively, which limits the potential value of machine intelligence to narrow or specialist domains – which is hardly AI.

Pat has set out to humanize conversation with machines, building AI’s next-generation NLU API to deliver “Meaning-as-a-Service” by processing natural language and human conversation into structured information about its meaning. Developers will be able to use the platform, currently in private beta, to build intelligent agents and applications that users can talk or text to.

Pat Ball, Founder and CTO of Pat Inc.: “This is great progress – but for us, it’s just the beginning. We believe we can scale Pat beyond these tests to really solve the challenge of NLU. In the process, we can also meet the significant forecast demand for AI apps – forecast by IDC to be valued at $40 billion across Google, IBM, Amazon and Microsoft platforms by 2020. That’s why Pat’s further development will have significant impact on the AI we already depend on today – as well as the technology just around the corner. From driverless cars and wearables to home automation and networked applications, we can expect machines to provide us with more meaningful, helpful experiences and a natural, human-like interaction.”

Dr Hossein Eslambolchi, Technical Advisor at Facebook (former AT&T CTO): “Pat offers next generation Natural Language Understanding technology, capable of being the conversational user interface of the future.”

Professor Robert Van Valin, Jr., University of Düsseldorf and University at Buffalo, The State University of New York, PhD UC Berkeley: “Statistical systems can accomplish NLP to a considerable degree, but they can never achieve NLU, which involves meaning. The answer lies in linguistics. Pat Inc. solves that.”

FAIR IQ tasks are publicly available: http://fb.ai/babi

For a summary of the tests, along with Pat’s results and approach to solving them in greater detail, please visit: http://bit.ly/patincbabi

Developers are now welcome to register for private beta access to Pat API: https://pat.ai

Can Machine Learning Help Us Identify the Origins of Several Medical Syndromes?

Machine learning is doing wonders at the moment, and one of its latest applications is locating the roots of various medical syndromes. In a recent study, syndromes such as Chronic Fatigue Syndrome, Gulf War Syndrome and Post-Accutane Syndrome were examined from the perspective of their genetic origins, pathways and other factors. Machine learning was applied to the research data and paper abstracts, and with the help of natural language processing and network analysis the study was able to draw connections between these conditions.
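
The study’s code is not public, but the pipeline it describes (natural language processing over abstracts followed by network analysis) commonly boils down to a term co-occurrence graph. A rough sketch under that assumption, where the term list is invented for illustration and is not the study’s vocabulary:

    # Rough sketch of the kind of pipeline described: scan abstracts for terms
    # of interest, then build a graph of which terms co-occur across papers.
    # The term list is a made-up placeholder, not the study's vocabulary.
    from collections import Counter
    from itertools import combinations
    import networkx as nx

    TERMS = {"serotonin", "cytokine", "hpa axis", "mitochondria"}  # hypothetical

    def cooccurrence_graph(abstracts):
        edges = Counter()
        for abstract in abstracts:
            found = sorted(t for t in TERMS if t in abstract.lower())
            for a, b in combinations(found, 2):
                edges[(a, b)] += 1                   # terms mentioned together
        g = nx.Graph()
        for (a, b), weight in edges.items():
            g.add_edge(a, b, weight=weight)
        return g

    # Terms that are central across many syndromes' abstracts become candidate
    # shared mechanisms, e.g. nx.degree_centrality(cooccurrence_graph(abstracts))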

When It’s Smart To Play Dumb: Managing AI Recommendations

As machine learning and artificial intelligence evolve and begin to show interesting results, brands are exploring how to apply the technology to their products and services. The goal is simple: improve the customer experience while decreasing customer service costs.

But what does this look like in practice? With enough information about you, the customer, AI can make accurate recommendations. Or it can go a step further, act on that information and clue you in later. So when should an AI-powered digital product check in before it does something, and when should it take matters into its own hands?
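
One common way to frame that choice is as an expected-cost rule: act autonomously only when the confidence-weighted cost of getting it wrong is lower than the friction of interrupting the user. A toy sketch, with every number invented for illustration:

    # Toy act-vs-ask policy: interrupt the user only when the expected cost of
    # an unchecked mistake outweighs the fixed cost of a confirmation prompt.
    # Every number here is invented for illustration.
    INTERRUPT_COST = 1.0   # the annoyance of a "shall I?" prompt, arbitrary units

    def should_ask_first(confidence: float, error_cost: float) -> bool:
        """Return True when the product should check in before acting."""
        expected_error_cost = (1.0 - confidence) * error_cost
        return expected_error_cost > INTERRUPT_COST

    # Rebooking a flight is costly to get wrong; reordering paper towels is not.
    print(should_ask_first(0.95, error_cost=50.0))  # True  -> ask before acting
    print(should_ask_first(0.95, error_cost=2.0))   # False -> act, clue them in later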

Automation Anywhere Launches IQ Bots, Software Bots Capable of Learning from Human Behavior to Improve Process Automation

Intelligent Software Bots Integrated into RPA Platform Facilitate End-to-End Automation with Near Zero Error Rates

NEW YORK CITY – May 25, 2017 – Automation Anywhere, the global leader in enterprise Robotic Process Automation (RPA), today announced the availability of IQ Bots, software bots capable of studying, learning and mimicking human behavior for intelligent process automation. By combining cognitive abilities with practical, rule-based RPA capabilities, organizations can quickly scale and up-level their Digital Workforces to fully automate processes end to end and run them independently with minimal human intervention. The product was launched at Automation Anywhere Imagine, the company’s premier customer experience event taking place in New York City.

IQ Bots are skilled at applying human logic to document patterns and extracting values in the same way that a human would, but with instantaneous speed, machine accuracy and a near-zero error rate. Fully integrated with the Automation Anywhere Enterprise platform, IQ Bots deliver enormous gains in productivity because they can process and automate business tasks involving complex documents with unstructured data. With Automation Anywhere’s comprehensive Digital Workforce platform, comprising RPA, cognitive and analytic capabilities, organizations can automate up to 80 percent of business processes, compared with about 30 percent using RPA alone.
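
Automation Anywhere has not published IQ Bot’s internals, but the behaviour described, pulling values out of semi-structured documents and routing anything ambiguous to a person, can be sketched with ordinary pattern matching. The field patterns and the accept-or-review rule below are illustrative assumptions, not the product:

    # Illustrative sketch of pattern-based field extraction with a human
    # fallback, in the spirit of the IQ Bot behaviour described above. The
    # field patterns and decision rule are assumptions, not the real product.
    import re

    FIELD_PATTERNS = {
        "invoice_number": re.compile(r"Invoice\s*(?:No\.?|#)\s*:?\s*(\w+)", re.I),
        "total": re.compile(r"Total\s*(?:Due)?\s*:?\s*\$?([\d,]+\.\d{2})", re.I),
    }

    def extract_fields(document: str):
        extracted, needs_review = {}, []
        for field, pattern in FIELD_PATTERNS.items():
            matches = pattern.findall(document)
            if len(matches) == 1:              # unambiguous: accept automatically
                extracted[field] = matches[0]
            else:                              # missing or ambiguous: ask a human
                needs_review.append(field)
        return extracted, needs_review

    fields, review = extract_fields("Invoice No: A1234\nTotal Due: $987.65")
    # fields == {"invoice_number": "A1234", "total": "987.65"}, review == []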

“IQ Bots are the next evolution of cognitive capabilities that significantly extends the proficiency of RPA beyond anything we’ve yet experienced. They enable companies to leverage what humans do best and what machines do best, delivering the first intelligent automation platform,” said Mihir Shukla, CEO and Co-founder, Automation Anywhere. “We strongly believe the full potential of enterprise automation is only realized when RPA and cognitive computing work together. With the release of IQ Bots, we are delivering critical functionality, which can be truly transformational.”

IQ Bots have a built-in, intuitive dashboard that makes them easy to set up and manage. IQ Bots rely on supervised learning, meaning that every human interaction makes them smarter. In addition to English, IQ Bots can extract data in Spanish, French, Italian and German.

About Automation Anywhere
Automation Anywhere delivers the most comprehensive enterprise-grade RPA platform with built-in cognitive solutions and analytics. Over 500 of the world’s largest brands use the platform to manage and scale their business processes faster, with near-zero error rates, while dramatically reducing operational costs. Based on the belief that people who have more time to create, think and discover build great companies, Automation Anywhere has provided the world’s best RPA and cognitive technology to leading financial services, BPO, healthcare, technology and insurance companies across more than 90 countries for over a decade. For additional information visit www.automationanywhere.com.

# # #

Media Contact:

Bhava Communications for Automation Anywhere

Brianna Galloway

automationanywhere@bhavacom.com

510-356-0013

O’Reilly Artificial Intelligence | Sept. 17-20, 2017 | San Francisco, CA

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
– Eliezer Yudkowsky

The O’Reilly Artificial Intelligence Conference gathers some of the brightest minds in AI to cut through the hype, identify and explain the most promising AI developments, and provide practical case studies, time-saving tips and tricks, and proven best practices you need to use AI right now—as well as the insight you need to see where AI is going. Save 20% on most passes with discount code PCEVENTS.
http://www.oreilly.com/pub/cpc/81024

A.I. LOVE YOU

AI Love You is an interactive show by Heart to Heart Theatre that explores the relationship between humans and artificial intelligence. Putting the audience at the centre of this ‘choose your own story’ play, AI Love You lets the audience decide the fate of the characters. Through voting, the audience is asked to make moral decisions that bring our own relationship with technology to the foreground.

Drawing inspiration from philosophers, DeepMind, internet and technology addiction, and some very creepy sex robots, Heart to Heart Theatre invites the audience to question what it is that makes them human, and whether a robot could or should have the same rights as they do. What will happen when programs are so good at imitating life that they ask to be treated that way? With AI becoming ever more a part of our lives, this piece of theatre begins to question where the line is.

Location: London, UK

Tickets:  http://www.theatreN16.co.uk

Speakers Wanted

Many of the meetup and conference events are looking for guest speakers to present at their meetings, and we frequently get asked if we know of anyone who is available to speak at events.

Speakers can be authors, academics or professionals working on artificial intelligence. These meetings typically allow the opportunity for some self-promotion or a sales pitch.

If you would like to be added onto our list of potential speakers, please send us a message with some details of the locations and topics you can cover.

Speakers Contact Us



M.I.E. SUMMIT BERLIN 2017 – 20th June

The world’s first open-space Machine Intelligence summit will be held on 20 June 2017.

This event will give you the opportunity to learn, discuss and network with your peers in the MI field. Set against the backdrop of one of Berlin’s most vibrant and artistic locations, you can break free from traditional conference rooms and share a drink in a typical Berliner Biergarten.

The M.I.E. Summit Berlin 2017 will provide two in-depth event tracks (keynotes, workshops and panels), more than 20 leading speakers and unparalleled networking opportunities.

The following topics will make this event one of the most inspiring, entertaining and thought-provoking this year:

  • What exactly does AI mean for all industries, from medicine to cars, from cognitive to neural networks?
  • Can machines really outperform humans? What if AI systems become better than humans at all cognitive tasks?
  • Should you worry whether your job is going to be replaced by robots? If yes, what can you do about it?
  • Do you work on innovation and want to find out how AI could apply to your business?
  • How can we benefit from the great advancements brought about by AI while taking into account ethical and economical considerations?
  • Is investing in AI startups a good idea? What’s behind the hype?

We are pleased to offer a 30% discount for this event using the code “miepartners”.

https://www.eventbrite.com/e/mie-summit-berlin-2017-can-machine-ai-outperform-human-tickets-33207267832




Join Our Newsletter

Sign up to our mailing list to receive the latest news and updates about homeAI.info and the Informed.AI Network of AI related websites which includes Events.AI, Neurons.AI, Awards.AI, and Vocation.AI
