The AI Times Monthly Newspaper

Curated Monthly News about Artificial Intelligence and Machine Learning

Study.AI – A collection of resources for students of AI


STUDY.AI

We are pleased to launch a dedicated resources page for students of artificial intelligence and machine learning.

We will be adding more to this area over the coming weeks, but wanted to share what we have already put together. Essentially we have re-organised our directory of resources so that it is tailored for students. The full directory is still available if needed, and you still have full access to the homeAI.info resources. This is just a starting page for students on our homeAI.info site.

The page is available by visiting http://Study.AI

As always we welcome feedback and suggestions for improvements.

Spotlight – Vocation.AI – Careers Portal and Jobs Board for AI


Vocation.AI is part of the Informed.AI network and is our Careers Portal and Jobs Board dedicated to people interested in working in the fields of data science, machine learning and artificial intelligence.

As we know, the last few years have seen a rapid expansion of interest in Artificial Intelligence, both from a commercial and an academic perspective. We have seen many start-ups funded over the last couple of years that are developing various applications across many different industries. Large technology companies have invested huge amounts of resources to build out departments dedicated to the development of AI techniques that may be applied to their existing products and services. Meanwhile, research continues apace to advance the methodologies of AI.

With all this activity, we want to support the related jobs market that has been generated from this continued investment in the field.

Vocation.AI is not a jobs agency. We do not charge any fees or commissions for any jobs posted on our jobs board; it is offered as a completely free service. We are open to companies, start-ups and agencies posting job openings directly on our board.

To find out more please visit http://Vocation.AI or follow us on Twitter at @Vocation_AI

To contact us email jobs@vocation.ai

Special Report: The State of Robotic Process Automation and Artificial Intelligence in the Enterprise

Special Report:
The State of Robotic Process Automation and Artificial Intelligence in the Enterprise
As a member of Informed AI, you have access to an exclusive industry report: ‘The State of Robotic Process Automation and Artificial Intelligence in the Enterprise’. Discover the main challenges, the key steps for implementing RPA and Artificial Intelligence, the savings expected to be made, the processes planned to be automated, the general trends practitioners are experiencing and more…

Plus, we also analysed how those working in sectors such as Finance, IT, Human Resources, Operations and Business Development are reacting to RPA and AI.

This report displays how collaborative and creative the industry is becoming as a group!

The information contained in this report will be discussed in further detail at the RPA and Artificial Intelligence Summit taking place from 30th November to 2nd December 2016 in London, UK.

If you haven’t seen the agenda yet, please download it here.

As a member of Informed AI, you are entitled to 20% off the current rate; please quote the discount code VIP_INFORMEDAI when registering here.

For more information about the event please visit www.rpaandaisummit.com, call us on +44(0) 20 7368 9809 or email us at enquire@iqpc.co.uk

We hope you find the report valuable!

The RPA and Artificial Intelligence Summit team

*Reduced price tickets offered by IQPC are non-transferrable between organisations and only transferrable between individuals within the same organisation where written permission is obtained from IQPC in advance. Reduced tickets are available to the robotics automation end user organisations only. The offer does not extend to any company whose main or partial business is the provision of products or services of any kind to the aforementioned company type/s. IQPC reserves the right to revoke or refuse issue of reduced tickets at any time.

Super Sunday Updates

Another Super Sunday for updates on our site.

  • Additional listings on the Company page
  • The new FinTech category added to the directory
  • Added a page for the Neurons Professional Network signup
  • Fixed a problem with the Videos page so now all our playlist groups are shown
  • Added our new sponsored links on pages

As always we welcome feedback and suggestions. Please use our contact us page for all comments.

Machine vs Machine: Should AI be human?

‘Machine vs Machine: Should AI be human?’
18th Oct, 19.00-22.00
The Book Club, 100-106 Leonard Street, EC2A 4RH
 
If we continue to develop and research ‘Artificial Intelligence’, humans could eventually create machines that think and feel like we do. If we accept that our thoughts are simply the firing of neurons, then should we also accept that the mind IS the physical brain? Would this also imply that humans are nothing but incredibly sophisticated machines? This might unsettle our self-perception of uniqueness and the very foundations of our moral codes and human rights. Perhaps then, we need to dig even deeper and ask ourselves the question, artificially or not, what does it even mean to possess ‘intelligence’ or ‘consciousness’? How appropriate is the Turing test for comparing and contrasting machine and human qualities or do we need to identify new measures to take it further?
Optimists argue AI could be a utopian, symbiotic solution to the world’s greatest needs, a learning system that foresees the future far better than we do. Should we design AI to have human-like qualities, or bypass emotional subjectivity to become a more ‘rational’ utilitarian mirror reflecting our interests? Beyond replicating ourselves, what of the potential of AI to evolve? We can imagine artificial intelligences with sensors and intellectual capabilities profoundly different from (and potentially greater than) ours, for example seeing far beyond our limited visual spectrum of electromagnetic radiation, or thinking billions of times faster than us.
Yet never far from the surface are worries that unchecked learning could lead to manifold dystopian outcomes, immortalised in sci-fi through the horror classic ‘Frankenstein’, modernised in recent cinema with the bittersweet ‘Ex Machina’, and made urgently practical by the moral questions raised by imminent driverless cars.
 
There are still some £6 early bird tickets, with £8 advance and £10 on the door

Get yours now and join us as we decode the hype surrounding A.I., and delve into the philosophical hard problem of consciousness, before discussing the ethics and current applications of artificially intelligent systems.
 

Read on below to find out more about the stellar speakers we have (plus more to be announced) and follow us as we post more about them on: www.facebook.com/JugularJoiningHeadandHeart and related news on the theme on our twitter @JugularArtSci

[Speakers]

Prof Murray Shanahan
Professor in Cognitive Robotics, Imperial College

Murray is Professor of Cognitive Robotics in the Dept. of Computing at Imperial College London, where he heads the Neurodynamics Group. Educated at Imperial College and Cambridge University (King’s College), he became a full professor in 2006. His publications span artificial intelligence, robotics, logic, dynamical systems, computational neuroscience, and philosophy of mind. He was scientific advisor to the film Ex Machina, and regularly appears in the media to comment on artificial intelligence and robotics. His books include “Embodiment and the Inner Life” (2010), and “The Technological Singularity” (2015).

Dr Piotr Mirowski
Improviser and research scientist in deep learning

Piotr obtained his Ph.D. in computer science at New York University under the supervision of deep learning pioneer Prof. Yann LeCun. He has a decade of experience in machine learning in industrial research labs, where he developed solutions for epileptic seizure prediction from EEG, robotic navigation and natural language processing. His passion for the performing arts, as a drama student with a 17-year background in improvised theatre, drew him to create HumanMachine, an artistic experiment fusing improv and AI, in which Piotr’s alter-ego Albert shares the stage with a computer called A.L.Ex. The show aims to raise questions about communication, spontaneity and automaticity.

Luba Elliott
Creative producer, artist and researcher

Luba is exploring the role of artificial intelligence in the creative industries. Trained as a human-centered designer, she has worked on several projects bridging the gap between the traditional art world and the latest technological innovations. She is currently working to educate and engage the broader public about the latest developments in creative AI.

Dr Yasemin J. Erden
Senior lecturer in Philosophy, St Mary’s University

Yasemin’s main areas of research are within emerging technologies such as intelligent systems, nanotechnology, the internet and social networking. Alongside this she is an independent ethics expert for the European Commission, as well as a committee member of The Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB).

Dr Amnon Eden
Computer scientist, Principal of the Sapience.org thinktank
Amnon’s research contributes to artificial intelligence, the philosophy of computer science, and the application of disruptive technologies and original thought to interdisciplinary questions. He is co-editor of ‘Singularity Hypotheses’. [Chair]

Dr Shama Rahman
Storyteller: Scientist, Musician, Actor

Shama is a storyteller in different media. With an interdisciplinary PhD in the Neuroscience of Creativity, she is the Founder and Artistic Director of Jugular Productions. As a professional musician, she also likes to work at the cross section of music, technology and other art-forms and is about to release her album ‘Truth BeTold’, the world’s first full live album recorded and performed with wearable technology, which was showcased at a special one-off performance with real-time generative visuals and dancers. Her acting highlights include being the lead of South Asia’s first supernatural detective thriller, a BBC drama series shown to over 53 million worldwide.

A Vision of the Future of Artificial Intelligence

WHERE DO WE STAND TODAY?

Today, anywhere you look, startups are emerging. The numbers are simply mind-boggling! Extrapolating from the data that Dr. Paul D. Reynolds, Director of the Research Institute, Global Entrepreneurship Center, provided, we find that there are:

472 million entrepreneurs worldwide attempting to start 305 million companies, with approximately 100 million new businesses opening each year around the world.

How crazy is that! 100 million new businesses each year; that is, over 273,972 new businesses per day! Further, out of the 100 million, 1.35 million are tech startups.

But that isn’t the worst of it – 9 out of 10 startups fail!

Why this staggeringly high failure rate? CB Insights performed a study to find the top 20 reasons for failure.

(Chart: CB Insights’ top 20 reasons for startup failure)

42% of the failures are due to “no market need”: people make products that consumers aren’t willing to buy. They fail to study market needs properly before plunging into their project, flushing millions of dollars down the drain.

The same mistake cannot be repeated with the creation of Artificial Intelligence. Hence, we have to first study what the world needs and accordingly develop intelligent systems. We need a vision of the future of AI before we plunge into its creation.

A VISION OF THE FUTURE OF AI

Science and technology have changed our lives with staggering effect. In around 100 years, the average life expectancy of a human being has increased by nearly 40 years.

(Chart: global life expectancy since 1770)

And yet, it is not enough. The following data from WHO reveals that the leading causes of death in the world are: heart disease, stroke, chronic obstructive lung disease and lower respiratory infections.

(Table: WHO data on the leading causes of death worldwide)

What if we could predict these diseases and thus prevent them? Or more importantly, how do we predict?

And this is where AI steps in.

Medicine

Researchers today are developing machine learning techniques which analyse huge amounts of clinical records to predict imminent diseases. These programs sift through the medical histories of thousands of patients with a particular disease, look for others with similar records, and give the likelihood of the disease occurring. This paper is an example of AI being used to predict heart disease.
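As a rough illustration of the general approach (not the specific models in the paper above), here is a minimal sketch that fits a logistic regression classifier to synthetic patient records and estimates a disease risk for a new patient; the features, data and coefficients are invented purely for illustration.

```python
# Minimal sketch of disease-risk prediction from patient records.
# The features, data and model here are hypothetical; real clinical
# studies use far richer records and careful validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic records: [age, systolic_bp, cholesterol, smoker]
X = np.column_stack([
    rng.integers(30, 80, 1000),
    rng.normal(130, 20, 1000),
    rng.normal(200, 40, 1000),
    rng.integers(0, 2, 1000),
])
# Synthetic labels: risk loosely tied to age, blood pressure and smoking.
logits = 0.04 * X[:, 0] + 0.02 * X[:, 1] + 1.0 * X[:, 3] - 6.5
y = (rng.random(1000) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probability of disease for a new (hypothetical) patient.
new_patient = [[62, 150, 240, 1]]
print("Estimated risk:", model.predict_proba(new_patient)[0, 1])
print("Held-out accuracy:", model.score(X_test, y_test))
```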

Other researchers are combining machine learning methods with advanced MRI techniques to help predict Alzheimer’s disease, brain cancer and other diseases of the brain. The machines learn to recognise patterns in the scans and extrapolate to predict the diagnosis. Today, researchers are able to predict Alzheimer’s disease with 82–90 percent accuracy.

Some are even studying genes, attempting to find the relationship between a particular gene and a disease. As with the other approaches, they too apply machine learning to crunch tons of data and extract patterns that might help them find the root of the disease.

Surgical robotics is steadily emerging too. Today, there are hospitals where basic surgeries are performed without a doctor slicing into the patient directly. An operator guides robotic arms with the help of joysticks. These arms reduce jerky or shaky movements, thus increasing precision. With the addition of superior visualisation, surgical robotics minimises incisions, thus reducing risk and need for medication. The da Vinci System has performed over 3 million minimally invasive surgeries successfully.

Furthermore, smart bionic limbs are using machine intelligence to help the disabled lead a normal life. They sense and adapt to the environment and predict the user’s intentions to provide greater stability and ease.

AI is not only helping in diagnostics and surgeries, but also in designing drugs. Atomwise’s AtomNet studies protein structures, which can be considered to be “locks”, and tries millions of molecular combinations to open these “locks”. Basically, it’s designing complex molecules to destroy harmful protein combinations, or put differently, designing drugs to cure diseases.

Though these technologies are still in their infancy, we can see the potential they have. In the future, life expectancy is bound to rise, diagnostics will be more accurate and easily obtained, surgeries will be automated, medical advice and care will be provided by virtual assistants online, drugs will be more effective with minimal side effects, and smart exoskeletons will help the disabled lead a normal life. We may now even dream of exoskeletons merged with the human body for superior performance, human augmentation, automated gene modification, advanced cyborg technology, nanobots racing through our bloodstreams clearing clots and cleansing the body of diseases, and other crazy ideas. Man, how cool will our future be!!!

Personal Assistants

What if you had a friend for life, a friend you would never lose, a friend who understood every emotion, every idea, every intention? What if this friend was unique to you, would guide you, be there to support and help you? What if I told you that this is our future?

With the advent of the Internet, there has been an explosion of information, so much that discerning the useful from the worthless is becoming nearly impossible. What if somebody could extract the meaningful content and use it to make our lives easier?

This should be the aim of virtual personal assistants (VPA). Today, we have Siri, Cortana, Google Now, Watson – all crude versions of our vision. The technology currently is basic, involving pattern recognition, knowledge bases, natural language processing and sentiment analysis among the numerous other techniques. And yet, true “cognisance” is yet to be achieved.

In the future, here are some of the tasks we might expect our VPAs to perform:

  • schedule meetings and manage time
  • monitor personal health and automatically call for medical aid in an emergency
  • connect home, car and office through IoT
  • update the user on news, traffic and weather
  • book reservations or order from restaurants
  • provide information requested by the user by searching the Internet

and, more importantly, customise and adapt to the user’s needs. There are tons of other tasks, but listing them all would be mindless.

But we want our VPAs to be more than that. We need them to understand our emotions, our sentiments, our moods… when we are down, soft music should automatically play and the lighting of the room should mellow; when we are jovial, upbeat music should cheer us on, and so on. Furthermore, our VPAs should be able to converse meaningfully, understand our feelings and guide us by extracting information from the Internet. Stop for a moment and ponder: what is it to really understand? How do we make a machine understand our feelings?

This is merely a glimpse of the future of virtual personal assistants, but one thing is certain: they will have a significantly large role in our future. Currently, I’m creating my own virtual personal assistant in Python and will soon be writing about it.
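As a flavour of what such mood-aware behaviour might look like, here is a toy Python sketch; the keyword-based sentiment scoring and the music/lighting responses are purely illustrative placeholders, not the assistant described above.

```python
# Toy sketch of a mood-aware assistant loop. The sentiment scoring here is a
# naive keyword count, and the music/lighting "actions" are just values in a
# dictionary rather than calls to any real smart-home API.
POSITIVE = {"great", "happy", "awesome", "excited", "good"}
NEGATIVE = {"sad", "tired", "down", "stressed", "bad"}

def mood_score(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def respond(text: str) -> dict:
    score = mood_score(text)
    if score < 0:
        return {"music": "soft", "lighting": "dim",
                "reply": "Sounds like a rough day. Want to talk about it?"}
    if score > 0:
        return {"music": "upbeat", "lighting": "bright",
                "reply": "Love the energy! Keep it going."}
    return {"music": "neutral", "lighting": "normal", "reply": "How can I help?"}

print(respond("I'm feeling really tired and stressed today"))
```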

Cyborgs, Humanoids and Robots

Some readers might now be thinking, “Ah, now he’s talking about AI”. This is the typical vision of AI, and it has some truth to it. Personal assistants need not only be virtual; they will soon have bodies. They will aid us in our daily mundane tasks, such as driving the kids to school, taking out the trash, playing, helping cook, helping wash and any other task you could think of. Furthermore, they could help the aged with daily activities. Today, humanoid robots are already emerging, such as Asimo, Nao, Atlas and Actroid-SIT.

Furthermore, there is a tremendous boom in industrial robots which automate manufacturing. Today they are used for welding, painting, picking and placing, packaging and numerous other tasks. They are in demand because of their precision, speed and endurance.

Apart from these, robotics will soon have applications in the military, space exploration, agriculture, medicine, sports, firefighting, construction and innumerable other fields. Lastly, we should keep an eye out for nanorobots too, for they are an emerging technology.

Autonomous Vehicles

By 2018, autonomous cars created by companies like Google, Uber and Tesla will be rolling on the streets. These automobiles use machine vision, GPS and odometry among other techniques. There is a lot of research in this field because, as shown by the graph above, death due to road injury ranks 9th, with 1.3 million people dying per year. Reports say that driverless cars could reduce road fatalities by 90%. Furthermore, in the foreseeable future people will avoid buying cars because automobile services like Uber will be a tap away, and this will lead to a reduction in, and better organisation of, traffic, which could further reduce fatalities.

Other autonomous vehicles will soon emerge too, such as the Hyperloop, drones, hovercraft, trains, planes and ships.

Business

AI will give business a boost. With the help of predictive analytics, there will be major improvements in stock market prediction, business models, recommender systems and numerous other areas.

Today, Amazon, Facebook, Microsoft and Google, among numerous others, are using AI to analyse consumer behaviour to provide better ads, services and products.

These are merely some of the fields in which AI is going to have a major impact. This graph reveals the future of emerging technologies:

(Chart: Gartner Hype Cycle for Emerging Technologies, 2015)

 Source: zdnet.com

To conclude, AI is currently undergoing a boom and therefore, an AI startup will have high demand and relatively easy funding. And thus, guided by our vision, let us proceed to create our future.

Building Blocks of Artificial Intelligence

BUILDING BLOCKS OF ARTIFICIAL INTELLIGENCE

INTRODUCTION

We are living through a new technological revolution, a revolution that will transform our lifestyles drastically, a revolution caused by the advent of Artificial Intelligence (AI).

Today, AI is equated with killer robots itching to destroy the human race, an idea bred by Hollywood movies. But AI is more than that. Broadly speaking, AI’s objective is to build intelligent entities, such as machines or software, to facilitate our daily tasks and to bring comfort to our lives. John McCarthy, who coined the term in 1955, defined it as “the science and engineering of making intelligent machines, especially intelligent computer programs.” Numerous other rigorous definitions have been provided, but a single universally accepted definition of AI is yet to be established.

THE FOUNDATIONS OF AI

It is evident that to build an intelligent machine, we have to first understand intelligence. Therefore, AI is a multidisciplinary field, founded on ideas from numerous other subjects.

Philosophy

The question that bothers me the most is: does the mind create our thoughts, or is it our brain? Put differently, if we could recreate a brain exactly, would it function like any other brain? Would it display consciousness, will and creativity? Further, if consciousness is not the effect of the wiring in our brain, but in fact the cause of our thoughts, how do we create consciousness?

Following this stream of thought, other questions arise: what does it mean to understand something? Is logic inherent? What is knowledge? Does it differ from information?

Since the time of the ancient Greeks, people have puzzled over the functioning of the mind, reason and logic. Aristotle’s theory of syllogism describes the basic process of the rational mind, where we deduce conclusions from initial premises. He says that:

A deduction is speech (logos) in which, certain things having been supposed, something different from those supposed results of necessity because of their being so. (Prior Analytics I.2, 24b18–20)

This is nothing but the notion of logical consequence, or logical implication, which is the basis of mathematics. More information on syllogism is available here.
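To see the idea of logical consequence in code, here is a toy rendering of a classic syllogism as set inclusion; it only illustrates the “of necessity” step, it is not a theorem prover, and the example sets are invented.

```python
# A classic syllogism rendered as set reasoning: if every member of one class
# belongs to a second, and every member of the second belongs to a third,
# then every member of the first belongs to the third.
def all_are(subset: set, superset: set) -> bool:
    return subset <= superset

humans = {"socrates", "plato"}
mortals = {"socrates", "plato", "fido"}
things_that_die = {"socrates", "plato", "fido", "oak_tree"}

# Premises: all humans are mortal; all mortals are things that die.
assert all_are(humans, mortals) and all_are(mortals, things_that_die)

# Conclusion follows of necessity: all humans are things that die.
print(all_are(humans, things_that_die))  # True
```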

Syllogisms describe the workings of the mind, but what is the mind? René Descartes offered a theory saying that the mind is a substance whose essence is thought. Further, mind and body are distinct, a theory known as “mind-body dualism”. Further reading can be done here.

Materialism holds an alternative view and proposes that the mind is merely the result of the interaction of matter, a view that seems rather narrow.

We have discussed the workings and the substance of the mind, but what about consciousness? Is consciousness a substance, or merely a result of the interaction of matter? Sri Aurobindo Ghosh posits that consciousness is the fundamental substance of this universe, and is involved in every material object. Hence, evolution is simply the effect of the emergence of consciousness. His major work, The Life Divine, explains his philosophy thoroughly.

I have merely scratched the surface of the philosophical theories on which AI is founded, but it is enough to make you aware of the complexities of AI.

Mathematics

Philosophy delves into the realm of ideas. But to make those ideas concrete, a formal set of laws, logic and notation is required, and this is where mathematics comes in. Though approaches vary from researcher to researcher, in general these are the topics used:

  • Logic: propositional, first-order and fuzzy
  • Linear Algebra
  • Probability Theory
  • Statistics
  • Algorithms
  • Calculus
  • Optimization Theory
  • Graph Theory

A strong mathematical foundation is a must for AI, for without it, you’ll be floundering in the ocean of jargon and symbols.

Computer Science

There are two sides of computer science that need to be developed to create AI: hardware and software.

A basic background in computer architecture is necessary to be able to create a functional intelligent program and to understand its flaws and restrictions. In the future, the hardware too can be modified to be better suited to our aim. What if we created a computer shaped like a brain? Would it perform faster? What tasks would be easier? What would be the restrictions? Could we create artificial neurons?

AI is, in the end, a computer program, a piece of software. Therefore, a thorough understanding of data structures, algorithms, computer networks, databases and programming, especially object-oriented programming, is a must.

There are tons of programming languages out there, but currently these are the top ones used for AI, Machine Learning, Data Mining, Data Science and Analytics:

(Chart: top programming languages used for AI and machine learning)

Source: Languages and Libraries for Machine Learning

Further information on programming languages can be obtained here.

Neuroscience

To create intelligence, it is obvious that we have to first understand how our brain functions. This task is accomplished by neuroscience. Which areas of the brain are at work when we reason? Think? Imagine? See? Hear? How is information stored in the brain? What creates thoughts? What happens when we dream?

I will elaborate on the studies of neuroscience in a later post.

Psychology

Neuroscience studies the physical functioning of the brain, but what about our behaviour? How do we act? How do we make decisions? How do we reason?

Cognitive psychology studies the mental processes behind these actions, and further attempts to give a theory to the functioning of the brain. Subsequently, taking inspiration from these theories, we can create AI.

Linguistics

Finally, humans have to interact with machines. Therefore, the machines have to understand our language and all its underlying nuances. At first glance this task doesn’t seem too difficult, but it is actually quite complex. To understand a sentence, understanding the grammar is not enough; one has to understand the context and the subject matter. How do you make a machine understand the concept of a “cat”? What about an abstract concept such as “love”? You could provide a definition, but would it understand? The difficulty, as you might realise, is not the syntax, but the semantics.
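To make the syntax-versus-semantics point concrete, here is a toy sketch in the distributional spirit: a word’s “meaning” is approximated by the contexts it appears in, so “cat” lands close to “dog” and far from “love”. The context columns and numbers are invented purely for illustration.

```python
# Toy illustration of the semantics problem: grammar alone gives no meaning,
# but patterns of usage do. Each word is a hand-made co-occurrence vector over
# invented contexts (purrs, barks, fur, abstract, emotion).
import math

vectors = {
    "cat":  [9, 0, 8, 0, 1],
    "dog":  [0, 9, 8, 0, 1],
    "love": [0, 0, 0, 9, 9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

print("cat ~ dog :", round(cosine(vectors["cat"], vectors["dog"]), 2))   # high: similar contexts
print("cat ~ love:", round(cosine(vectors["cat"], vectors["love"]), 2))  # low: concrete vs abstract
```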

This gives only a glimpse into the fields that are at the foundations of AI, but it is enough to reveal the magnitude of the difficulties ahead. In the following post, I will explore the tasks that AI aims to accomplish.

 

Press Release – Celaton announced the appointment of Andrew Burgess to its Advisory Board

Milton Keynes, UK, 8 September 2016 – AI software company Celaton today announced the appointment of Andrew Burgess to its Advisory Board.

A management consultant, author and speaker with over 25 years’ experience, Andrew is considered an authority on innovative and disruptive technology, artificial intelligence, robotic process automation and impact sourcing.

He is a former CTO who has run sourcing advisory firms and built automation practices. He has been involved in many major change projects, including strategic development, IT transformation and outsourcing, in a wide range of industries across four continents.

Andrew joins Celaton at an exciting period of growth, not only for the company but for the industry as a whole.

Andrew says of the appointment “It’s an honour to be working with one of the leading vendors in artificial intelligence and cognitive automation. Celaton already has an impressive track record in this market and I look forward to helping them grow and develop further”.

Andrew Anderson, CEO of Celaton, said: “I have known Andrew for several years and he never ceases to amaze me with his passion and dedication to sharing his knowledge about AI and Intelligent Automation. It is a great honour that he has agreed to join Celaton and we thoroughly look forward to working with him.”

Software Engineering Institute at Carnegie Mellon University and SparkCognition Collaborate to Advance Cognitive Security

Software Engineering Institute at Carnegie Mellon University and SparkCognition Collaborate to Advance Cognitive Security

Carnegie Mellon University’s Software Engineering Institute collaborates with AI industry leader, SparkCognition, to build next generation cybersecurity programming guide.
AUSTIN, TX (PRWEB) JULY 25, 2016
The Software Engineering Institute (SEI) at Carnegie Mellon University is collaborating with industry-leading Cognitive Security Analytics company, SparkCognition, to build an automated cognitive cyber security threat remediation tool using SparkCognition’s proprietary technology and IBM Watson.
As part of the collaboration, engineers at SparkCognition will train the research team at the SEI’s CERT Division on how to use IBM Watson to catalogue, and make queryable, the vulnerabilities on the Common Weakness Enumeration (CWE) list and the CERT Secure Coding Rules. SparkCognition has already trained IBM Watson on a very large corpus of cybersecurity technical literature, including the Common Vulnerabilities and Exposures (CVE) list, OWASP literature, and many more cybersecurity databases.
“As software has become essential to all aspects of system capabilities and operations, there has been a dramatic increase in the significance of cybersecurity,” said Mark Sherman, Technical Director for Cyber Security Foundations at the SEI. “The CERT Division focuses its research on cybersecurity challenges in national security, homeland security, and critical infrastructure protection. We seek to develop and broadly transition new technologies, tools, and practices that enable informed trust and confidence in using information and communication technology. SparkCognition provides critical capabilities for this advanced initiative.”
SparkCognition’s technology is capable of harnessing real-time infrastructure data and learning from it continuously, allowing more accurate risk mitigation and prevention policies to intervene and avert disasters. The company’s cybersecurity-centered solution analyzes structured and unstructured data and natural language sources to identify potential cyber threats. The uniqueness of the cognitive platform lies in the fact that it can continuously learn from data and derive automated insights to thwart any emerging issue.
“We are looking forward to working with one of the nation’s leading cybersecurity programs,” said Keith Moore, Product Manager of SparkCognition. “The company is building solutions that address cyber risk and resilience, software vulnerability, insider threat, secure coding practices, and other areas. Together, we are leading in new approaches, analysis tools, and training options to improve the practice of cybersecurity in private and public sector organizations, and we’re excited to collaborate with the SEI in pursuit of that mission.”
About SparkCognition
SparkCognition, Inc. is the world’s first Cognitive Security Analytics company, based in Austin, Texas. The company is successfully building and deploying a Cognitive, data-driven Analytics platform for Clouds, Devices and the Internet of Things for industrial and security markets by applying patent-pending algorithms that deliver out-of-band, symptom-sensitive analytics, insights, and security. SparkCognition was named the 2015 Hottest Start Up in Austin by SXSW and the Greater Austin Chamber of Commerce, was the only US-based company to win Nokia’s 2015 Open Innovation Challenge, was a 2015 Gartner Cool Vendor, and is a 2016 Edison Award Winner. SparkCognition’s Founder and CEO, Amir Husain, is a highly awarded serial entrepreneur and prolific inventor with nearly 50 patents and applications to his name. Amir has been named the top technology entrepreneur in Austin by the Austin Business Journal, is the 2016 Austin Under 40 Award Winner for Technology and Science, and serves as an advisor to the IBM Watson Group and the University of Texas Computer Science Department. For more information on the company, its technology and team, please visit http://www.sparkcognition.com.

The Robot Rebellion: It’s Inevitable But It’s Our Fault Not Theirs

The Robot Rebellion: It’s Inevitable But It’s Our Fault Not Theirs

In the past year several concepts dear to my heart have become quite popular, namely Artificial Intelligence, Machine Life and Artificial Neural Nets. After quietly toiling on these ideas as a hobbyist since 2007, it feels like a bit of a vindication to see the entire world finally realize how important they are. Along with this rise of Artificial Intelligence within mainstream consciousness has come the inevitable question: will conscious robots rebel against humankind?

http://anonymousglobal.org/commanderx/blog/?p=230

An AI Founder’s Story: Beagle Goes Global

An AI Founder’s Story: Beagle Goes Global

Artificial Lawyer caught up with Cian O’Sullivan, founder of Beagle, the automated contract analysis system that is just celebrating a year and a half of operations and landing VW as a client.

We discussed how Beagle came about, why maybe sometimes it’s better not to talk to lawyers about AI and how come the company has one of the world’s largest auto companies as a client, and then some.


Introduction

Cian O’Sullivan’s web camera is not working when Artificial Lawyer calls for a video conference, and so Artificial Lawyer is treated instead to a picture of a soccer pitch in Colombia that the legal tech company founder took on his travels.

The international reference makes sense once you start to talk to O’Sullivan. The Canadian travels a lot. He went to law school in Ireland and studied for the New York Bar exam while he was staying in Bermuda.

As his start-up legal tech company, Beagle.ai, grows …..

To continue reading: https://artificiallawyer.com/2016/09/07/a-founders-story-beagle-goes-global/

 

Artificial Intelligence – The New Superpower for Compliance

In our quest for business productivity and cost savings, compliance teams are all too often being given increasing demands to keep the organization out of trouble, but are not being allocated additional budget to achieve this goal.

It typically takes a high-profile violation or industry-wide regulations like the FCPA to kick-start the implementation of risk management and compliance programs. And even then, there’s resistance due to concerns about the cost of these new programs and the potential for additional bureaucracy, slower decision-making and operational inefficiencies. When given the choice, today’s business executive tends to err on the side of speed over process.

Today’s automated ERP systems are designed with the intention of streamlining information delivery and decision-making, as well as providing decision-makers with sufficient information to make informed decisions and then providing electronic decision-making and audit tracking. The best of both worlds: speed and process accuracy, plus compliance.

http://corporatecomplianceinsights.com/artificial-intelligence-new-superpower-business-compliance/?utm_source=appzen

Celebrating the Women Advancing Machine Intelligence in Healthcare

Celebrating the Women Advancing Machine Intelligence in Healthcare

As an all-female company, RE•WORK is a strong advocate for supporting female entrepreneurs and women working towards advancing technology and science.

Following a fantastic dinner in February, RE•WORK will be holding the next Women in Machine Intelligence in Healthcare Dinner in London on 12 October, to celebrate the women advancing this field.

The event is sponsored by IBM Watson Health, and is open to anyone keen to support women progressing the use of machine intelligence in healthcare, medicine and diagnostics. Confirmed attendees include Bupa, Google DeepMind, King’s College London, Lloyds Online Doctor, Omixy, UCL and Playfair Capital.

Over the course of the dinner, attendees will hear from leading female experts in Machine Intelligence and discuss the impact on healthcare of AI fields including machine learning, deep learning and robotics. Attendees will establish new connections and network with peers including Founders, CTOs, Data Scientists and Medical Practitioners.
 

Speakers include:

  • Razia Ahamed, Google DeepMind
  • Alice Gao, Deep Genomics
  • Kathy McGroddy Goetz, IBM Watson Health

Only a limited number of tickets are left for this event! To book your tickets, please visit the event site here.

Check out RE•WORK’s Women in Tech & Science series & see their full events list here for summits and dinners taking place in London, Amsterdam, Boston, San Francisco, New York, Hong Kong and Singapore. 

CorTeX Assembly Language

CorTeX Assembly Language

This is an attempt to change the way a Neural Network is executed and trained. Instead of accelerating parts of the NN execution, the whole NN problem is converted into a new assembly language that can do evaluation and back-propagation of any NN. Multiple passes of convolution steps can be combined into a fully parallel pipeline.

http://www.gizmosdk.com/archives/CorTeX/execution_example.pdf
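The linked PDF describes the actual instruction set; as a rough illustration of the general idea of flattening a network’s forward pass into a linear program of instructions executed one by one, here is a toy Python sketch. The opcodes, register names and layer shapes are invented, not the CorTeX design.

```python
# Illustrative only: a tiny network's forward pass flattened into a linear
# "program" of instructions run by a simple interpreter.
import numpy as np

# program: (opcode, destination register, operands)
program = [
    ("LOAD",   "r0", "input"),          # load the input vector
    ("MATMUL", "r1", ("W1", "r0")),     # hidden pre-activation
    ("ADD",    "r1", ("r1", "b1")),
    ("RELU",   "r1", "r1"),
    ("MATMUL", "r2", ("W2", "r1")),     # output layer
    ("ADD",    "r2", ("r2", "b2")),
]

params = {
    "W1": np.random.randn(4, 3), "b1": np.zeros(4),
    "W2": np.random.randn(2, 4), "b2": np.zeros(2),
}

def run(program, x):
    regs = {"input": x, **params}
    for op, dst, args in program:
        if op == "LOAD":
            regs[dst] = regs[args]
        elif op == "MATMUL":
            a, b = args
            regs[dst] = regs[a] @ regs[b]
        elif op == "ADD":
            a, b = args
            regs[dst] = regs[a] + regs[b]
        elif op == "RELU":
            regs[dst] = np.maximum(regs[args], 0)
    return regs["r2"]

print(run(program, np.array([1.0, 0.5, -0.2])))
```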

Celaton, today announced the release of Personalised Response

Milton Keynes, UK, 6 September 2016 – AI software company Celaton today announced the release of Personalised Response, the latest artificially intelligent module for its inSTREAM™ platform.

The Institute of Customer Service recently reported that 46% of customers expect a response within 24 hours if they contact an organisation via email, with over two fifths saying the same for website contact and one third for social media enquiries. With customers demanding faster responses and resolution times, and the bar constantly being raised on service levels, it is now harder than ever for organisations to distinguish themselves as leaders in customer service.

Personalised Response significantly extends the capabilities of inSTREAM by enabling it to present operators with the most appropriate response to send. The proposed response is based on inSTREAM’s understanding of the meaning and intent of each incoming correspondence and the enrichment of data from other data sources. Responses chosen by inSTREAM are presented to operators for validation or for them to make the final decision before submission to customers.

inSTREAM learns through the natural consequence of processing and gains confidence as a result of experience. When inSTREAM is not sure of the most appropriate response, it will suggest all possible options for an operator to choose from, and it subsequently learns from the actions and decisions they make; this learning helps to continually optimise the process.

It accelerates resolution times, ensuring responses are consistent and appropriate, enabling organisations to deliver better customer service faster with fewer people.

Personalised Response is especially important for large organisations that deal with consumers.
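As a generic illustration of the confidence-based workflow described above (propose a single response when confident, otherwise present all candidates and learn from the operator’s choice), here is a minimal sketch. It is not Celaton’s inSTREAM implementation; the intent names, responses and threshold are hypothetical.

```python
# Rough sketch of confidence-gated response suggestion with operator feedback.
from collections import Counter

class ResponseSuggester:
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.history = {}          # intent -> Counter of responses operators approved

    def suggest(self, intent, candidates):
        counts = self.history.get(intent, Counter())
        total = sum(counts.values())
        if total:
            best, n = counts.most_common(1)[0]
            if n / total >= self.threshold:
                return [best]      # confident: propose one response for validation
        return candidates          # unsure: let the operator choose from all options

    def record_decision(self, intent, chosen_response):
        self.history.setdefault(intent, Counter())[chosen_response] += 1

suggester = ResponseSuggester()
options = ["Apology + refund", "Request more details", "Escalate to manager"]
print(suggester.suggest("late_delivery", options))   # all options at first
for _ in range(5):
    suggester.record_decision("late_delivery", "Apology + refund")
print(suggester.suggest("late_delivery", options))   # now a single confident suggestion
```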

Spotlight – Nikolas Badminton – Futurist

Nikolas Badminton, Futurist

Nikolas Badminton is a world-respected futurist speaker who provides keynote speeches about the future of work, the sharing economy, and how the world is evolving. Nikolas is based in Vancouver, BC, and speaks across Canada, the UK, Asia, and Europe.

His Artificial Intelligence Keynote can be viewed here https://www.youtube.com/watch?v=x7IbrYFX4Fs and the presentation can be seen here – http://www.slideshare.net/nikolasbadminton/the-future-of-society-the-artificial-intelligence-revolution

We look forward to sharing more from Nikolas in the future!

His website can be found here http://nikolasbadminton.com

Showcase 2016 – Our first conference on the 15th and 16th September

All,

The Showcase 2016 Event is on the 15th and 16th September and still has a few tickets available for purchase.

The two-day event is hosted by FutureWorld.tech at The Old Truman Brewery, East London’s revolutionary arts and media quarter.

For more details visit Showcase.ai and FutureWorld.tech

Our confirmed speakers include:

  • Peter Morgan, DSP
  • Patrick Levy-Rosenthal, Emoshape
  • Clara Durodié, Independent Director
  • Melanie Warrick, Skymind
  • Parit Patel, IPsoft
  • Andy Pardoe, Informed.AI
  • Laure Andrieux, Aiseedo
  • Alexander Hill, Senesce
  • Dale Lane, IBM Watson

The 2nd AI Achievement Awards – Opens for Voting on Thursday 1st September


All,

It is with great pleasure that we formally announce that the 2nd Annual Global AI Achievement Awards will be open for nomination votes this Thursday 1st Sept.

This is our second year of holding these awards. Last year we had ten award categories; you can see the previous year’s winners on the awards website. This year we have doubled the number of categories to twenty; the full list can be found in the categories section of the website.

We encourage you all to support our initiative and vote for the companies and individuals you feel are contributing the most to the field of artificial intelligence. Support the community and celebrate the amazing work being done by thousands of people and hundreds of companies across the globe.

It’s a public vote open to everyone. For companies that wish to be nominated, we suggest you inform your customers about the awards and indicate which category you would like to be nominated in.

There is still time to be a corporate sponsor of the awards. See the sponsors page for more details.

Don’t forget to visit the site http://Awards.AI on or after the 1st Sept to cast your vote.


GeckoSystems, an AI Robotics Co., Signs U.S. Joint Venture Agreement

GeckoSystems, an AI Robotics Co., Signs U.S. Joint Venture Agreement

CONYERS, Ga., August 18, 2016 — GeckoSystems Intl. Corp. (Pink Sheets: GOSY | http://www.GeckoSystems.com) announced today that, after effectuating NDA, MOU, and LOI agreements with this NYC AI firm, it has now executed a joint venture agreement. For over nineteen years, GeckoSystems has dedicated itself to the development of “AI Mobile Robot Solutions for Safety, Security and Service(tm).”

“We are very pleased to announce our first US JV. We will jointly coordinate our advanced Artificial Intelligence (AI) R&D to achieve higher levels of human safety and sentient verbal interaction for the professional healthcare markets.  We expect not only near term licensing revenues, but also an initial AI+ CareBot(tm) sale. While we have several JV’s in Japan continuing to mature, it is gratifying to have gained demonstrable traction in the US markets.

“One of our primary software and hardware architecture design goals has been for our MSR platforms to be extensible, such that obsolescence of the primary cost drivers, the mechanicals, would be as much as five or more years away (or when actually worn out from use).  Consequently, our hardware architecture is x86 CPU centric and all our AI savants communicate over a LAN using TCP/IP protocols with relatively simple messaging. This means all systems on the Company’s MSR’s are truly “Internet of Things” (IoT) devices, since each has a unique IP address for easy and reliable data communications. Our high level of pre-existing, linchpin, 3-legged milk stool basic functionalities makes our AI+ CareBot easily upgraded, and therefore desirable, not only by GeckoSystems, but also by third party developers, such as this advanced NYC AI firm.

“This is the strategic hardware development path that IBM used in setting PC standards that have enabled cost effective use of complex, but upgradeable for a long service life, personal computers for over thirty years now,” observed Martin Spencer, CEO, GeckoSystems Intl. Corp.

NYC has national prominence in the AI development community.  For example, NYC has twenty listed here: http://nycstartups.net/startups/artificial_intelligence  Atlanta, GA, reports only one AI robotics startup, Monsieur, a leader in the automated bartending space. http://monsieur.co/company/

 

Artificial intelligence technologies and applications span:

Big Data, Predictive Analytics, Statistics, Mobile Robots, Social Robots, Companion Robots, Service Robotics, Drones, Self-driving Cars, Driverless Cars, Driver Assisted Cars, Internet of Things (IoT), Smart Homes, UGV’s, UAV’s, USV’s, AGV’s, Forward and/or Backward Chaining Expert Systems, Savants, AI Assistants, Sensor Fusion, Point Clouds, Worst Case Execution Time (WCET is reaction time), Machine Learning, Chatbots, Cobots, Natural Language Processing (NLP), Subsumption, Embodiment, Emergent, Situational Awareness, Level of Autonomy, etc.

 

An internationally renowned market research firm, Research and Markets, has again named GeckoSystems as one of the key market players in the service robotics industry. The report covers the present scenario and the growth prospects of the Global Mobile Robotics market for the period 2015-2019. Research and Markets stated in their report that they: “…forecast the Global Mobile Robotics market to grow at a CAGR of nearly sixteen percent over the period 2015-2019.”

 

The report has been prepared based on an in-depth market analysis with inputs from industry experts and covers the Americas, the APAC, and the EMEA regions.  The report is entitled, Global Professional Service Robotics Market 2015-2019.

 

Research and Markets lists the following as the key vendors operating in this market:

Companies mentioned:

AB Electrolux

Blue River Technology

Curexo Technology

Elbit Systems

GeckoSystems

Health Robotics

MAKO Surgical Corp.
“GeckoSystems has been recognized by Research and Markets for several years now and it is the most comprehensive report of the global service robotics industry to my knowledge. I am pleased that their experienced market researchers are sufficiently astute to accept that small service robot firms, such as GeckoSystems, can nonetheless develop advanced technologies and products as well as, or better than, much larger, multi-billion dollar corporations such as AB Electrolux, etc.,” reflected Martin Spencer, CEO, GeckoSystems Intl. Corp.

 

Research and Markets also discusses:

Professional service robots tend to work closely with humans and can be used in a wide range of applications, from surveillance to underwater inspection. They provide convenience and safety, among other benefits, thus creating demand worldwide. Technavio expects the global professional service robotics market to grow at a remarkable rate of nearly 16% during the forecast period. Today, the adoption of robots is on the rise globally as they tend to minimize manual labor and reduce the chances of human error.

 

In the last decade, there have been numerous technological advancements in the field of robotics that have made the adoption of robots easy, viable, and beneficial. For instance, there have been many innovations and improvements in the Internet of Things, automation, M2M communications, and the cloud. Modern robotics manufacturers are trying to take advantage of these technologies as a communication medium between robots and humans, increasing convenience and enabling the seamless transfer of real-time information within the business.

 

Segmentation of the professional service robotics market by application:

– Defense, rescue, safety, and aerospace application

– Field application

– Logistics application

– Healthcare application

– Others

 

The defense application segment was the largest contributor to the growth of the global professional service robotics market, with more than 44% share of overall shipments in 2014. The demand for UGVs and UAVs for surveillance and for safeguarding the lives of personnel from ammunition, landmines, and bombs is expected to drive the demand for robotics.

 

“It is an honor that they recognize the value of the over 100 man-years we have invested in our proprietary AI robotics Intellectual Properties and my full time work for nearly 20 years now.   Our suite of AI mobile robot solutions is well tested, portable, and extensible.  It is a reality that we could partner with any other company on that list and provide them with high-level autonomy for collision free navigation at the lowest possible cost to manufacture.  There is also an opportunity for other cost reductions and enhancement of functionality with other components of our AI solutions,” stated Spencer.

 

In order for any companion (social) robot to be utilitarian for family care, it must be like a “three-legged milk stool” for safe, routine usage.  For any mobile robot to move in close proximity to humans, it must have:

(1) Human quick reflex time to avoid moving and/or unmapped obstacles, (GeckoNav(tm): http://tinyurl.com/le8a39r) (See the importance of WCET discussion below.)

(2) Verbal interaction (GeckoChat(tm): http://tinyurl.com/nnupuw7) for easy user dialogues and/or monologues with a sense of date and time (GeckoScheduler(tm): http://tinyurl.com/kojzgbx), and

(3) Ability to automatically find and follow designated parties (GeckoTrak(tm): http://tinyurl.com/mton9uh) such that verbal interaction can occur routinely with video and audio monitoring and/or teleconferences of the care receiver occur readily and are uninterrupted.

 

In the US, GeckoSystems projects the available market size in dollars for cost effective, utilitarian, multitasking eldercare social mobile robots in 2017 to be $74.0B, in 2018 to be $77B, in 2019 to be $80B, in 2020 to be $83.3B, and in 2021 to be $86.6B.  With market penetrations of 0.03% in 2017, 0.06% in 2018, 0.22% in 2019, 0.53% in 2020, and 0.81% in 2021, we anticipate CareBot social robot sales from the consumer market alone at levels of $22.0M, $44.0M, $176M, $440.2M, and $704.3M, respectively.
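As a quick back-of-the-envelope check of these projections, the sketch below multiplies each year’s stated market size by the stated penetration; with the rounded figures quoted above, the products come out close to, though not exactly equal to, the sales numbers in the release.

```python
# Back-of-the-envelope check: projected sales = market size x assumed penetration.
market_size_usd_bn = {2017: 74.0, 2018: 77.0, 2019: 80.0, 2020: 83.3, 2021: 86.6}
penetration_pct    = {2017: 0.03, 2018: 0.06, 2019: 0.22, 2020: 0.53, 2021: 0.81}

for year in market_size_usd_bn:
    sales_usd_m = market_size_usd_bn[year] * 1000 * penetration_pct[year] / 100
    print(f"{year}: ~${sales_usd_m:.1f}M projected CareBot sales")
```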

 

“This first US JV will continue to evolve, such that GeckoSystems enjoys revenues that increase shareholder value. After many years of patience by our current 1300+ stockholders, they can continue to be completely confident that this new, potentially multi-million-dollar JV licensing agreement further substantiates and delineates the reality that GeckoSystems will continue to be rewarded with additional licensing revenues furthering shareholder value,” concluded Spencer.

 

About GeckoSystems:

GeckoSystems has been developing innovative robotic technologies for nineteen years.  It is CEO Martin Spencer’s dream to make people’s lives better through AI robotic technologies.

 

The safety requirement for human quick WCET reflex time in all forms of mobile robots:

In order to understand the importance of GeckoSystems’ breakthrough, proprietary, and exclusive AI software, and why another Japanese robotics company desires a business relationship with GeckoSystems, it’s key to acknowledge some basic realities for all forms of automatic, non-human-intervention vehicular locomotion and steering.

  1. Laws of Physics such as Conservation of Energy, inertia, and momentum, limit a vehicle’s ability to stop or maneuver. If, for instance, a car’s braking system design cannot generate enough friction for a given road surface to stop the car in 100 feet after brake application, that’s a real limitation. If a car cannot corner at more than .9g due to a combination of suspension design and road conditions, that, also, is reality. Regardless how talented a NASCAR driver may be, if his race car is inadequate, he’s not going to win races.
  2. At the same time, if a car driver (or pilot) is tired, drugged, distracted, etc. their reflex time becomes too slow to react in a timely fashion to unexpected direction changes of moving obstacles, or the sudden appearance of fixed obstacles. Many car “accidents” result from drunk driving due to reflex time and/or judgment impairment. Average reflex time takes between 150 & 300ms. http://tinyurl.com/nsrx75n
  3. In robotic systems, “human reflex time” is known as Worst Case Execution Time (WCET). Historically, in computer systems engineering, the WCET of a computational task is the maximum length of time the task could take to execute on a specific hardware platform. In big data terms, this is the time to load the data, process it, and output useful distillations, summaries, or common sense insights (a minimal timing sketch follows this list). GeckoSystems’ basic AI self-guidance navigation system processes 147 megabytes of data per second using low cost, Commercial Off The Shelf (COTS) Single Board Computers (SBC’s).
  4. Highly trained and skilled jet fighter pilots have a reflex time (WCET) of less than 120ms. Their “eye to hand” coordination time is a fundamental criterion for them to be successful jet fighter pilots. The same holds true for all high performance forms of transportation that are sufficiently pushing the limits of the Laws of Physics to require the quickest possible reaction time for safe human control and/or usage.
  5. GeckoSystems’ WCET is less than 100ms, as quick as or quicker than most gifted jet fighter pilots, NASCAR race car drivers, etc., while using low cost COTS SBC’s.
  6. In mobile robotic guidance systems, WCET has 3 fundamental components:
  • Sufficient Field of View (FOV) with appropriate granularity, accuracy, and update rate.
  • Rapid processing of that contextual data such that common sense responses are generated.
  • Timely physical execution of those common sense responses.
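As a minimal illustration of the reflex-time idea, the sketch below times a stand-in sense-plan-act workload over many iterations and compares the slowest observed iteration against a 100 ms budget. Note this measures an empirical worst case, only a rough proxy for the analytical WCET described above, and the workload is invented rather than GeckoSystems’ navigation code.

```python
# Empirically check a control loop's slowest observed iteration against a
# 100 ms reflex-time budget. process_frame() is a stand-in workload.
import time

REFLEX_BUDGET_S = 0.100   # 100 ms target, comparable to a fighter pilot's reflexes

def process_frame():
    # Stand-in for one sense-plan-act cycle over the sensor field of view.
    total = 0
    for i in range(200_000):
        total += i * i
    return total

worst = 0.0
for _ in range(100):
    start = time.perf_counter()
    process_frame()
    worst = max(worst, time.perf_counter() - start)

print(f"Observed worst iteration: {worst * 1000:.1f} ms")
print("Within reflex budget" if worst < REFLEX_BUDGET_S else "Budget exceeded")
```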

 

——————————————————————————————-

An earlier third party verification of GeckoSystems’ AI centric, human quick sense and avoidance of moving and/or unmapped obstacles by one of their mobile robots can be viewed here: http://t.co/NqqM22TbKN

An overview of GeckoSystems’ progress containing over 700 pictures and 120 videos can be found at http://www.geckosystems.com/timeline/.

These videos illustrate the development of the technology that makes GeckoSystems a world leader in Service Robotics development. Early CareBot prototypes were slower and frequently pivoted in order to avoid a static or dynamic obstacle; later prototypes avoided obstacles without pivoting.   Current CareBots avoid obstacles with a graceful “bicycle smooth” motion.   The latest videos also depict the CareBot’s ability to automatically go faster or slower depending on the amount of clutter (number of obstacles) within its field of view.   This is especially important when avoiding moving obstacles in “loose crowd” situations like a mall or an exhibit area.

In addition to the timeline videos, GeckoSystems has numerous YouTube videos, the most popular of which are the ones showing room-to-room automatic self-navigation of the CareBot through narrow doorways and a hallway of an old 1954 home. You will see the CareBot slow down when going through the doorways because of their narrow width and then speed up as it goes across the relatively open kitchen area. There are also videos of the SafePath(tm) wheelchair, which is a migration of the CareBot AI centric navigation system to a standard power wheelchair; recently developed cost effective depth cameras were used in this configuration. SafePath(tm) navigation is now available to OEM licensees and these videos show the versatility of GeckoSystems’ fully autonomous navigation solution.
GeckoSystems, Star Wars Technology

 

The company has successfully completed an Alpha trial of its CareBot personal assistance robot for the elderly.  It was tested in a home care setting and received enthusiastic support from both caregivers and care receivers.   The company believes that the CareBot will increase the safety and well-being of its elderly charges while decreasing stress on the caregiver and the family.

GeckoSystems is preparing for Beta testing of the CareBot prior to full-scale production and marketing.   CareBot has recently incorporated Microsoft Kinect depth cameras that result in a significant cost reduction.

 

Kinect Enabled Personal Robot video:

http://www.youtube.com/watch?v=kn93BS44Das

Above, the CareBot demonstrates static and dynamic obstacle avoidance as it backs in and out of a narrow and cluttered alley. There is no joystick control or programmed path; the movements are smoother than those achieved using joystick control. GeckoNav creates three low levels of obstacle avoidance: reactive, proactive, and contemplative. Subsumptive AI behavior within GeckoNav enables the CareBot to reach its target destination after engaging in obstacle avoidance.
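As a generic illustration of subsumption-style arbitration between layered behaviours like those named above (reactive, proactive, contemplative), here is a toy sketch; the thresholds, sensor fields and actions are invented, and this is not GeckoNav.

```python
# Illustrative subsumption-style arbitration: higher-priority layers subsume
# lower ones whenever they have something to say.
def reactive(sensors):
    # Immediate danger: swerve away.
    if sensors["nearest_obstacle_m"] < 0.3:
        return "swerve"
    return None

def proactive(sensors):
    # Obstacle on a likely collision course: slow down early.
    if sensors["closing_speed_mps"] > 0.5 and sensors["nearest_obstacle_m"] < 1.5:
        return "slow_down"
    return None

def contemplative(sensors):
    # Nothing urgent: keep following the planned path to the goal.
    return "follow_path"

LAYERS = [reactive, proactive, contemplative]   # highest priority first

def decide(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(decide({"nearest_obstacle_m": 0.2, "closing_speed_mps": 0.1}))   # swerve
print(decide({"nearest_obstacle_m": 1.0, "closing_speed_mps": 0.8}))   # slow_down
print(decide({"nearest_obstacle_m": 3.0, "closing_speed_mps": 0.0}))   # follow_path
```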

 

More information on the CareBot personal assistance robot:

http://www.geckosystems.com/markets/CareBot.php

GeckoSystems stock is quoted in the U.S. over-the-counter (OTC) markets under the ticker symbol GOSY.   http://www.otcmarkets.com/stock/GOSY/quote

 

GeckoSystems uses http://www.LinkedIn.com as its primary social media site for investor updates. Here is Spencer’s LinkedIn.com profile:

http://www.linkedin.com/pub/martin-spencer/11/b2a/580

 

Telephone:

Main number: +1 678-413-9236

Fax: +1 678-413-9247

Website:  http://www.geckosystems.com/

Source: GeckoSystems Intl. Corp.

 

Safe Harbor:

Statements regarding financial matters in this press release other than historical facts are “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, Section 21E of the Securities Exchange Act of 1934, and as that term is defined in the Private Securities Litigation Reform Act of 1995. The Company intends that such statements about the Company’s future expectations, including future revenues and earnings, technology efficacy and all other forward-looking statements be subject to the Safe Harbors created thereby. The Company is a development stage firm that continues to be dependent upon outside capital to sustain its existence. Since these statements (future operational results and sales) involve risks and uncertainties and are subject to change at any time, the Company’s actual results may differ materially from expected results.

 

 

 

 
