The AI Times Monthly Newspaper
Curated Monthly News about Artificial Intelligence and Machine Learning
The era of Industry 4.0 has arrived, and it has brought changes. Between 2010 and 2015 the number of robots in operation grew by 98% according to the IFR. Smart factories have a positive effect on manufacturing: they increase productivity and efficiency, and business owners can profit by eliminating human error. But as artificial intelligence improves, more jobs can be carried out by robots. This is why several experts fear the negative social consequences of the expansion of industrial robots and forecast a wave of massive job loss. But is the situation really that bad? Take a look at TradeMachines's new infographic to find out more about the subject! There might be another side to it…
By Molly Connell, TradeMachines
So our 2017 Predictions for Artificial Intelligence are as follows:
- Art and Media Creation – I feel this will continue to build momentum next year with more advances in the type and quality of content produced. We are already seeing the quality coming close to that of humans, and maybe this year, in a number of categories, we will find it difficult to distinguish between AI-generated and human-created work.
- Virtual Assistants – We are now seeing agents entering the home (not just on our phones), and their integration with IoT devices will continue at pace in 2017. But can they do more than switch lights on and off, perform simple search tasks and order repeat items from our favourite stores?
- More Physical Robots and Self-Driving Cars – This is an area that is very close to exploding into the mainstream. Apple are designing a vehicle, Uber are testing self-driving taxis. Robots that provide home help will be further developed in Japan (where there is a huge need for them). But will we see more robots doing simple tasks in the home and office?
- Start-ups come out of Stealth – Over the past few years there has been a huge amount of VC funding for start-ups working in AI. We have seen the first of these firms already, and many of them have been bought by the large technology players, but many more start-ups will surface from stealth this year, delivering ever more novel business models and solutions.
- More Industries investigate use of AI – We will see many more diverse industries looking to employ the benefits of AI in their own environments. This will really drive AI in Business. But this may start as just advanced analytics and prediction/recommendation tasks, with robotics and deep learning applications coming into play in 2018.
- AI in the Workplace – We will start to see Augmentation (see the Four-A's) in the workplace for a number of different jobs. This will be a gentle introduction to how AI is going to transform the workplace over the next five years. Sectors that consume huge quantities of information, both structured and unstructured, will see augmentation first, including the legal and accounting professions.
- More Lives Saved – We will see more applications of analytics and research empowered by AI that save people's lives, both immediately and through better diagnosis and medical cures. I hope these applications are well publicised, as this is one of the wonderful benefits of these advanced algorithms.
- More Talk about Governance and Ethics – This will continue until we have a worldwide agreement on a framework for both key areas, but don't expect that in 2017. While we will see certain countries moving this forward faster than others, it will still be mostly superficial talk with limited progress in 2017.
- Further Advances in the Technology – There is a huge amount of research happening, from universities, large companies and start-ups, and we will continue to see advances in the models, algorithms, frameworks and platforms. But the progress will be incremental in 2017, with no major quantum leap.
- A Standard Definition for Artificial Intelligence – While the field is technically 60 years old this year, there is still not a standard definition that is widely accepted. I plan to help close this one out in 2017, more to follow on this one soon.
Ada grows to no. 1 medical app in more than 80 countries, including Canada, bringing next-level health care technology to doctors, patients and community health
In case you haven’t already downloaded the new artificial intelligence (AI) app that’s taking the country by storm – well, 80+ countries to be exact – Ada is changing the way we (consumers / users / patients) are able to assess and monitor our personal health. Designed to grow smarter as users engage with it, Ada’s intelligence amounts to much more than personalized health assessments for individuals – Ada supports doctors in providing more accurate assessments and through data collection and analysis, has the ability to help patients and doctors monitor health situations over time.
So what sets Ada apart from other medical assessment products on the market? She gets smarter with use. Ada intelligently checks symptoms by asking simple and individualized questions without complicated medical jargon, and becomes smarter as she becomes familiar with the user’s medical history. A detailed symptom assessment report is generated by analyzing all the symptom information provided by users, which can then be shared with the user’s doctor.
“While the topic of machine learning and AI comes with some unknowns, in the medical field, we know the future of AI is bright and the possibilities are endless,” said Daniel Nathrath, Ada Health co-founder and Chief Executive Officer. “We’re at the forefront of something special. Ada continues to get smarter with each passing day. At a time when health care resources are limited, Ada can work in concert with doctors to alleviate strain and allow them to focus on their core competencies.”
Developed by a team of medical doctors and scientists, Ada’s AI engine is a representation of where personal and community healthcare is headed. Since Ada’s global launch earlier this fall, the app has already climbed to no. 1 medical app in the App Store in 80 countries – more than any other iOS app in 2016.
Notable features & benefits for doctors include:
Earlier and better health assessment through a sophisticated decision support system.
Ada generates detailed symptom assessment reports that users are able to share with their doctors in advance, or during office visits
Notable features & benefits for individuals and community health include:
Allows individuals to check almost any symptom by answering simple, personalized questions about their health.
Builds and stores an overview of users’ health situation (i.e. allergies, medications, symptoms) – secure, up to date, and accessible from their pocket.
Allows users to track the health of loved ones through a multi-profile management platform – ideal for parents with young children, and adults with aging parents.
Makes the most of the user’s time spent in the doctor’s office.
Access to high quality health information and care for everyone in the world.
“What’s special about Ada is the level of detail and personalization of each interaction,” said Dr. Claire Novorol, Ada Health Co-founder and Chief Medical Officer. “At each step during an assessment Ada carefully selects follow up questions to gather the information that matters the most. But that’s not all – Ada can also help you to track symptoms and outcomes, which further improves and individualizes the experience over time. This has obvious benefits for those using Ada to assess, understand, monitor and manage their own health. Doctors are excited about it too, as Ada often collects important details that they might have otherwise missed or not had time to ask about.”
Mobile, applications and artificial intelligence (AI) are disrupting urban transport. Coupled with the rise of the sharing economy where consumers prefer to hire or borrow as they need things rather than invest in outright ownership, there is huge potential for players in the transport space to revolutionise the way we get around.
The challenge is to efficiently integrate different means of transport and manage the massive quantities of data needed to make new types of transport a reality. This data needs to be turned into actionable insights, which makes management easier to operate and delivers a better user experience. As new models are developed, AI is playing an increasingly important role and is the key to simplifying complex transportation networks.
AI is already supporting the integration and optimisation of new models for transport. Uber has been immensely successful across the globe in disrupting the taxi market by utilising consumer data, Global Positioning Systems (GPS) and AI to tailor its services to users. This is only the beginning for AI in transport as other businesses begin to converge transportation networks and harness technology and data to innovate further in citywide transport.
A former Uber executive in China has recognised the potential of Bike Share Schemes and has raised millions of dollars in funding for his Bike Share start-up Mobike. The funding allows Mobike to create a Bike-Sharing network where users can borrow bikes via an integrated app. Bikes offer an alternative to taxis and, with a unique model, can be used as a sustainable option to cover short distances.
The challenge is to deliver these new models seamlessly while keeping it simple when it comes to managing new schemes. AI has a role to play in utilising available data to remove this growing complexity and deliver real-time visibility and optimisation. It permits extremely large quantities of data to be made accessible and useful for people to make faster and more precise decisions. It is enabling workers to manage and use more data with better results.
Simplifying the management of transportation services is vital for growth and success. Using resource efficiently enables operators to maximise the potential of their service and give consumers the best possible experience. Operators need a service that can run smoothly and remain profitable while users want a service that simply delivers what they need when they need it – access to transport.
AI can be used to innovate and manage transportation systems globally. It will help operators to efficiently distribute and maintain their services, removing the pain from consumer travelling. AI will provide transport operators with data-driven recommendations to overcome complex challenges throughout their service, which can later be used to justify and inform decisions before implementation.
AI will be at the forefront of this market of immense potential, especially for newcomers. Its capabilities should be recognised and embraced as a smart solution that helps facilitate and bring efficiency to the transportation services industry while improving city life and increasing the health and welfare of citizens.
We are at the very beginning of AI in transport and it will play a leading role in supporting new models and innovations as well as how we experience our cities.
Organisations in the transportation industry should consider how AI can help them to simplify their operations and manage market disruption. New intelligence is changing what transportation can be.
Author: Tom Nutley
Business Development Director
Celebrating Eighteen Months of homeAI.info and the Informed.AI Group
Another six months, and a lot of progress to report on.
Our main site homeAI.info still remains at the heart of our group. A growing directory of information resources, with an additional category added for fintech during the last period and still more to come. The news area is still very popular and we continue to see more user submitted stories. We have also continued to add more to our spotlight area and are always looking for more companies, startups and people to profile in our spotlight section.
We have just launched a new dedicated area for Students of AI, accessed via the link http://Study.AI, which we see as a major part of our educational offering going forward; over the coming months we will add more resources to this area.
We have launched the 2nd Annual Global AI Achievement Awards which has an amazing 21 categories, making it the biggest and best Awards for AI. This is the original AI Awards and we hope you all support this initiative by voting at http://Awards.AI. The Awards are a core part of us delivering our manifesto obligation of supporting the AI community and celebrating the achievements of those working in the field.
To mark the eighteen-month anniversary we are launching our most ambitious website yet. We are calling it Neurons.AI, and it's a Professional Network for AI Practitioners and Researchers. The focus of this site is to provide a bridge between commercial and academic endeavours in the field of AI. We strongly believe that bringing the two groups together will produce even more amazing developments in the field of AI, Machine Learning and Data Science. The network is like a social media network, but with a significant emphasis on forums and discussions. Neurons.AI also includes an offline element in the form of regular meet-ups. We have an official Press Release for this launch which you can read here.
We are also preparing to launch our AI Showcase quarterly meet-up from Q1 2017; the details can be seen at http://Showcase.AI. The aim is to inform students of AI and Machine Learning about the inner workings of commercial development of Machine Learning applications and systems. This meet-up will also be an opportunity for startups to showcase their products.
We continue to develop the careers portal and jobs board at http://Vocation.AI and are actively looking for more companies or agencies wanting to list their job opportunities on our site for free.
As always, without the support of the AI community we are nothing. We continue to get wonderful feedback, and look forward to developing our platform to further support the AI community. We are very excited to make significant progress in 2017. As part of this we are looking to build out our advisory board to help us shape the direction of our future growth, and are exploring ways we can accelerate our growth and rollout in 2017.
Thank you for your continued support and encouragement.
Dr Andy Pardoe
Founder of the Informed.AI Group of Community Websites
Our group of websites includes:
Our social media:
We have twitter accounts for all of our sites;
Neurons.AI Launches Today – The Network for AI Professionals
A new social network for artificial intelligence professionals called Neurons.AI is launching today that will both operate online and host real world meet-ups.
Neurons will be the Facebook for AI experts and also provide members with the chance to socialise at regular events, to learn more about the subject and share ideas with others in the field.
Neurons is the brainchild of UK-based Dr Andy Pardoe, a PhD in Artificial Intelligence and Founder of Informed.AI, a group of community websites supporting those interested in AI, machine learning and data science.
The network will officially launch in beta mode on the 27th November but is open today for early registrations, with a limited membership for the first six months, to be followed later by open paid subscriptions. Beta members will not pay membership fees for the first year. All membership fees will be used directly for activities of the Informed.AI group to help promote and support the wider AI community.
Founder, Andy Pardoe, said: ‘I want to build a place where people can talk and share their ideas and experiences about AI and machine learning and allow collaborations between researchers and those working in a commercial setting.’
‘The idea is to have a more dynamic conversation about AI, a place where people can have a voice.’
He added that members would be able to learn more about the latest developments in the AI field, often before anyone else does, given that this will be a forum for experts from industry and academia.
The social dimension will also be front and centre with an objective to build new connections and make friends in the AI and machine learning world. There will also be opportunities for members and their organisations to make presentations to members at meet-ups.
Naturally there will be significant networking opportunities; the ability to share and contribute to online forums and articles connected to Neurons; and to participate and also present new ideas at meet-ups.
If you would like to become a member please visit http://Neurons.AI to find out more.
There are so many articles about learning Deep Learning but still I decided to write one more. The reason is I find many of those articles saying the same thing over and over again. The same set of online courses and the same set of books. I think there is a need for a new guide for learning DL for people who are already well-versed with traditional ML.
Deep Learning is as much science as it is art. It’s increasingly looking like the most promising candidate among a set of different techniques for solving Artificial Intelligence one day. I’ve met and spoken to a lot of people recently who believe doing deep learning is pretty easy, you only need an open source library like TensorFlow, Theano etc. and decent data at your disposal, and you are all set. Trust me, it’s not true.
Coming from a science background before venturing into the world of ML, first as an engineer and then as a founder, I think one should seriously dive deep into a field and appreciate the low-level details rather than building models in a hackish way. The hackish method is OK for a small personal project, but not good if you want to be a good researcher in the field one day, or if you have plans of building a great product for the real world.
We at Artifacia broadly classify all of AI into Visual Understanding and Language Understanding. This is not exactly the best approach in the world but it helps us organize and execute our projects pretty efficiently. Much of our work is applied in nature with a small part of it being basic and long term in nature such as Project Turing and Project Button. We expect to publish some of our ongoing work sometime next year.
Even though my co-founder and CTO Vivek primarily looks after technology and research at Artifacia, I continue to spend 20% of my time with the research team to be able to do the right kind of mapping between our technology and product, and between our product and business. Moreover, I like speaking to them and continue my learning of an area I believe will impact every industry similar in scale to the Internet and the Personal Computer before that.
The following is a list of essential reads for anyone who really wants to learn the fundamentals of Deep Learning:
- A Few Useful Things to Know about Machine Learning by Pedro Domingos
- Deep Learning by Yann LeCun, Yoshua Bengio, and Geoffrey Hinton
- Representation Learning: A Review and New Perspectives by Yoshua Bengio, Aaron Courville, Pascal Vincent
- Convolutional Networks for Images, Speech and Time-Series by Yann LeCun and Yoshua Bengio
- Learning Deep Architectures for AI by Yoshua Bengio
- ImageNet Classification with Deep Convolutional Neural Networks by Alex Krizhevsky, Ilya Sutskever and Geoffery Hinton
- LSTM: A Search Space Odyssey by Klaus Greff, Rupesh K. Srivastava, Jan Koutník, Bas R. Steunebrink and Jürgen Schmidhuber
- Distributed Representations of Words and Phrases and Their Compositionality by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean
- Recurrent Neural Network Based Language Model by Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, Sanjeev Khudanpur
- Sequence to Sequence Learning with Neural Networks by Ilya Sutskever, Oriol Vinyals and Quoc V. Le
- Text Understanding from Scratch by Xiang Zhang and Yann LeCun
1) This is going to be an evolving post and I'll keep updating it. The latest update added the second and third papers.
2) I’ve also taken inputs from Vivek@Artifacia, who specialises in visual understanding, and Rajarshee@Artifacia, who specialises in language understanding, to compile this list of essential papers.
3) The title of this post is inspired by a popular book series by Zed Shaw. His book Learn Python the Hard Way remains one of the most recommended books for people starting with Python or programming in general.
4) If you’ve already read most of these papers and understood all of it, you should really consider applying to Artifacia!
The number one goal in the United Nations' Sustainable Development Goals for 2030 is to eliminate poverty. Today, around 1 billion people, roughly one seventh of the world's population, live in extreme poverty, earning less than $1.90 per day. Though studies reveal that global poverty is falling, we are still a long way from that goal.
To eradicate poverty, we first need the poverty distribution across the globe. The following diagram gives a rough estimate.
But unfortunately, data availability is poor. Numerous countries have had no survey conducted over the last three decades, and others only a few. More importantly, in many African countries only a single survey has been conducted over the last decade, which makes the data unreliable. Lastly, the surveys themselves are imperfect. It is therefore evident that a new method has to be conceived to obtain more precise information.
A STUDY FROM SPACE
A team of social and computer scientists at Stanford University in California, led by Marshall Burke, aims to map poverty from space with the help of artificial intelligence (AI). They collected a large number of night-time satellite images of the planet, taken by high-quality cameras. By studying the glow of lights on this starry map with machine learning algorithms, they aimed to distinguish poor regions from rich ones, as a higher intensity of light indicates better development. Unfortunately, it was hard to discern moderately poor regions from extremely poor ones, as the light intensity of the two wasn't considerably different.
Therefore, they had to study daytime images and obtain key indicators such as: closest urban marketplace, distance from agriculture fields, nearest water sources and other such subtle signs.
They fed the computer large training datasets of images of regions where income per capita was already known. The computer then used neural nets, a machine learning technique, to create links, discover relationships and find patterns. They then verified the accuracy of the algorithm on a validation set and, finally, evaluated it on the test set. They focused on the African countries of Nigeria, Malawi, Rwanda, Tanzania and Uganda. Evidently, this technique doesn't eradicate poverty by itself, but it provides reliable data to governments and NGOs.
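The train, validate, test workflow described above can be sketched in a few lines. Everything below is a hypothetical illustration: the synthetic "image-derived" features and the simple linear model stand in for the deep convolutional networks and real satellite imagery the Stanford team used.

```python
import numpy as np

# Synthetic stand-in for image-derived features (e.g. distance to the
# nearest market, road density). In practice these come from a deep net
# applied to daytime satellite imagery.
rng = np.random.default_rng(0)

n = 600
X = rng.normal(size=(n, 4))                      # 4 features per region
true_w = np.array([1.5, -2.0, 0.5, 0.0])         # hidden relationship
y = X @ true_w + rng.normal(scale=0.1, size=n)   # income-per-capita proxy

# Split the labelled regions: fit on train, tune on validation,
# report the final score on the held-out test set.
X_tr, y_tr = X[:400], y[:400]
X_va, y_va = X[400:500], y[400:500]
X_te, y_te = X[500:], y[500:]

# A least-squares linear fit replaces the neural net for this sketch.
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

def r2(Xs, ys):
    """Coefficient of determination: 1 means a perfect prediction."""
    resid = ys - Xs @ w
    return 1 - resid.var() / ys.var()

print(f"validation R^2: {r2(X_va, y_va):.3f}")
print(f"test R^2:       {r2(X_te, y_te):.3f}")
```

The point of the three-way split is that the test regions play no part in fitting or tuning, so the test score is an honest estimate of how the model would do on regions with no survey data at all.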
But in all honesty, disregarding the hype about AI, is the new data really going to make a considerable difference? The World Bank may not have reliable information, but it is unlikely that governments are completely unaware of the poverty spread in their country. Though a notable effort, this solution is more sensational than practical. We therefore need another approach, one which strikes the problem directly at its heart.
AI IN EDUCATION
“Education is not a way to escape poverty – It is a way of fighting it.”
– Julius Nyerere, former President of the United Republic of Tanzania
The primary step to alleviate poverty is education. Simply put, if an underprivileged child can receive a decent education, the likelihood of their breaking away from the cycle of poverty increases. Education therefore plays a crucial role in eradicating poverty.
The major difficulty with educating the poor is the lack of teachers. The reason is evident: helping the poor doesn’t pay, and so, there is no incentive for educators.
Therefore, taking inspiration from the Hole in the Wall experiment conducted by Sugata Mitra in 1999, we could bypass the problem. The study reveals that children can educate themselves with nothing more than a basic computer, requiring nearly no adult guidance. This form of education, known as Minimally Invasive Education (MIE), has significantly benefited over 300,000 underprivileged children in India and Africa.
Today, MIE can be substantially enhanced with AI and be made the future of education in the slums. With smart virtual bots installed in the systems, the machines would not only provide information, but could also “teach” the children. No external human guidance would be required, just the systems with the virtual “teachers” installed. Let us briefly look into how this can be achieved.
A GLIMPSE OF THE REQUIREMENTS OF A VIRTUAL "TEACHER"
To interact with human beings, the machines would require advanced Natural Language Processing (NLP) components, such as an automatic speech recogniser (ASR), part-of-speech (POS) tagging, a syntactic/semantic parser, a natural language generator, a text-to-speech (TTS) engine, etc. They should "understand" the language of the specific area so that children who don't know English can communicate effortlessly. This would require accurate translation, which again uses advanced NLP techniques.
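The components listed above form a pipeline: speech in, understanding, generation, speech out. A minimal sketch of how such a turn could be chained follows; every function here is a stub with an illustrative name, not a real library's API, and in practice each stage would be a trained model.

```python
def asr(audio):
    """Automatic speech recognition: audio -> text (stubbed)."""
    return audio["transcript"]

def understand(text):
    """Parsing / semantic analysis: text -> intent (toy keyword matcher)."""
    text = text.lower()
    if "what is" in text:
        return ("define", text.split("what is", 1)[1].strip(" ?"))
    return ("unknown", text)

def generate(intent):
    """Natural language generation: intent -> reply text."""
    kind, topic = intent
    if kind == "define":
        return f"Let's explore what {topic} means together."
    return "Can you ask that another way?"

def tts(text):
    """Text-to-speech: text -> audio (stubbed as an echo)."""
    return {"spoken": text}

def tutor_turn(audio):
    # One dialogue turn: ASR -> understanding -> generation -> TTS.
    return tts(generate(understand(asr(audio))))

reply = tutor_turn({"transcript": "What is a fraction?"})
print(reply["spoken"])
```

The value of structuring the system this way is that each stage can be swapped independently, for example replacing the toy keyword matcher with a parser for a local language, without touching the rest of the pipeline.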
Evidently, to create a solid dialogue system we need a huge database, so a centralised server linking all the systems globally might be a solution. But this would require expensive infrastructure, which defeats the point of this endeavour.
Additionally, Machine Learning algorithms should make the systems learn from past mistakes, so that in the future, children find it easier to communicate. Furthermore, the interface should be simple, clean, and not cluttered with too many options. The courses could be designed specifically for the rural children or could simply be MOOCs. This would depend on the governments and their educational policies.
To conclude, this merely outlines the task ahead: it gives a vision, a step towards eradicating poverty. The work, the team required, the involvement needed are enormous. The funding required for the research is considerable, but if we, the entire world, can come together for this project, we could succeed. It should be open source, so that anybody can contribute: leading professors of AI and computer science, students, investors, educators, government officials, NGOs…anybody. So are you willing to join hands in this endeavour? Are you willing to help your needy brothers? Does it bother you enough to make a change?
We are pleased to launch a dedicated resources page for students of artificial intelligence and machine learning.
We will be adding more to this area over the coming weeks, but wanted to share what we have already put together. Essentially, we have reorganised our directory of resources so that it is tailored for students. The full directory is still available if needed, and you still have full access to the homeAI.info resources. This is just a starting page for students on our homeAI.info site.
The page is available by visiting http://Study.AI
As always we welcome feedback and suggestions for improvements.
Vocation.AI is part of the Informed.AI network and is our Careers Portal and Jobs Board dedicated to people interested in working in the fields of data science, machine learning and artificial intelligence.
As we know, the last few years have seen a rapid expansion of interest in Artificial Intelligence, both from a commercial and an academic perspective. We have seen many start-ups funded over the last couple of years that are developing various applications across many different industries. Large technology companies have invested huge amounts of resources to build out departments dedicated to the development of AI techniques that may be applied to their existing products and services. Meanwhile, research continues at pace to advance the methodologies of AI.
With all this activity, we want to support the related jobs market that has been generated from this continued investment in the field.
Vocation.AI is not a jobs agency. We do not charge any fees or commissions for any jobs posted on our jobs board; it is offered as a completely free service. We are open to companies and start-ups directly, or to agencies, posting job openings on our board.
To contact us email email@example.com
Special Report: The State of Robotic Process Automation and Artificial Intelligence in the Enterprise
As a member of Informed AI, we would like to share with you an exclusive industry report: 'The State of Robotic Process Automation and Artificial Intelligence in the Enterprise'. Discover the main challenges, key steps in implementing RPA and Artificial Intelligence, savings expected to be made, processes planned to be automated, general trends practitioners are experiencing and more…
Plus, we also analysed how those working in sectors such as Finance, IT, Human Resources, Operations and Business Development, are reacting to RPA and AI.
This report displays how collaborative and creative the industry is becoming as a group!
The information contained in this report will be discussed in further detail at the RPA and Artificial Intelligence Summit taking place from 30th November to 2nd December 2016 in London, UK.
If you haven’t seen the agenda yet, please download it here.
As a member of Informed AI, you are entitled to 20% off the current rate, please quote the 20% discount code: VIP_INFORMEDAI when registering here.
We hope you find the report valuable!
The RPA and Artificial Intelligence Summit team
*Reduced price tickets offered by IQPC are non-transferrable between organisations and only transferrable between individuals within the same organisation where written permission is obtained from IQPC in advance. Reduced tickets are available to the robotics automation end user organisations only. The offer does not extend to any company whose main or partial business is the provision of products or services of any kind to the aforementioned company type/s. IQPC reserves the right to revoke or refuse issue of reduced tickets at any time.
Another Super Sunday for updates on our site.
- Additional listings on the Company page
- The new FinTech category added to the directory
- Added a page for the Neurons Professional Network signup
- Fixed a problem with the Videos page so now all our playlist groups are shown
- Added our new sponsored links on pages
As always we welcome feedback and suggestions. Please use our contact us page for all comments.
Optimists argue AI could be a utopian symbiotic solution to the world's greatest needs, a learning system that foresees the future far better than we do. Should we design AI to have human-like qualities, or bypass emotional subjectivity to become a more 'rational' utilitarian mirror reflecting our interests? Beyond replicating ourselves, what of the potential of AI to evolve? We can imagine artificial intelligences with sensors and intellectual capabilities profoundly different from ours (and potentially greater), for example seeing far beyond our limited visual spectrum of electromagnetic radiation, or thinking billions of times faster than us.
Yet never far from the surface are worries that unchecked learning could lead to manifold dystopian outcomes, immortalised in sci-fi through the horror classic ‘Frankenstein’, or modernised in recent cinema with the bittersweet ‘Ex Machina’ or the urgently practical moral questions being raised by imminent driverless cars.
Get yours now and join us as we decode the hype surrounding A.I., and delve into the philosophical hard problem of consciousness, before discussing the ethics and current applications of artificially intelligent systems.
Read on below to find out more about the stellar speakers we have (plus more to be announced) and follow us as we post more about them on: www.facebook.com/JugularJo
Prof Murray Shanahan
Professor in Cognitive Robotics, Imperial College
Murray is Professor of Cognitive Robotics in the Dept. of Computing at Imperial College London, where he heads the Neurodynamics Group. Educated at Imperial College and Cambridge University (King’s College), he became a full professor in 2006. His publications span artificial intelligence, robotics, logic, dynamical systems, computational neuroscience, and philosophy of mind. He was scientific advisor to the film Ex Machina, and regularly appears in the media to comment on artificial intelligence and robotics. His books include “Embodiment and the Inner Life” (2010), and “The Technological Singularity” (2015).
Dr Piotr Mirowski
Improviser and research scientist in deep learning
Piotr obtained his Ph.D. in computer science at New York University under the supervision of deep learning pioneer Prof. Yann LeCun. He has a decade of experience in machine learning at industrial research labs, where he developed solutions for epileptic seizure prediction from EEG, robotic navigation and natural language processing. His passion for the performing arts, as a drama student with a 17-year background in improvised theatre, drew him to create HumanMachine, an artistic experiment fusing improv and AI, in which Piotr’s alter-ego Albert shares the stage with a computer called A.L.Ex. The show aims to raise questions about communication, spontaneity and automaticity.
Creative producer, artist and researcher
Luba is exploring the role of artificial intelligence in the creative industries. Trained as a human-centered designer, she has worked on several projects bridging the gap between the traditional art world and the latest technological innovations. She is currently working to educate and engage the broader public about the latest developments in creative AI.
Dr Yasemin J. Erden
Senior lecturer in Philosophy, St Mary’s University
Yasemin’s main areas of research are within emerging technologies such as intelligent systems, nanotechnology, the internet and social networking. Alongside this she is an independent ethics expert for the European Commission, as well as a committee member of The Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB).
Dr Amnon Eden
Computer scientist, Principal of the Sapience.org thinktank
Amnon’s research spans artificial intelligence, the philosophy of computer science, and the application of disruptive technologies and original thought to interdisciplinary questions. He is co-editor of ‘Singularity Hypotheses’. [Chair]
Dr Shama Rahman
Storyteller: Scientist, Musician, Actor
WHERE DO WE STAND TODAY?
Today, anywhere you look, startups are emerging. The numbers are simply mind-boggling! Extrapolating from the data provided by Dr. Paul D. Reynolds, Director of Research Institute, Global Entrepreneurship Center, we find that there are:
472 million entrepreneurs worldwide attempting to start 305 million companies, with approximately 100 million new businesses opening each year around the world.
How crazy is that! 100 million new businesses each year, that is, over 273,972 new businesses per day! Further, of those 100 million, 1.35 million are tech startups.
But that isn’t the worst of it – 9 out of 10 startups fail!
Why this staggeringly high failure rate? CB Insights performed a study to find the top 20 reasons for failure.
42% of the failures are due to “no market need”. People make products that consumers aren’t willing to buy. They fail to study the market properly before plunging into their project, thus flushing millions of dollars down the drain.
The same mistake cannot be repeated with the creation of Artificial Intelligence. Hence, we have to first study what the world needs and accordingly develop intelligent systems. We need a vision of the future of AI before we plunge into its creation.
A VISION OF THE FUTURE OF AI
Science and technology have changed our lives with staggering effect. In around 100 years, the average life expectancy of a human being has increased by nearly 40 years.
And yet, it is not enough. The following data from WHO reveals that the leading causes of death in the world are: heart disease, stroke, chronic obstructive lung disease and lower respiratory infections.
What if we could predict these diseases and thus prevent them? Or more importantly, how do we predict?
And this is where AI steps in.
Researchers today are developing machine learning techniques which analyse huge amounts of clinical records to predict imminent diseases. These programs sift through the medical histories of thousands of patients with a particular disease, look for others with similar records, and give the likelihood of the disease occurring. This paper is an example of AI being used to predict heart disease.
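To make the idea concrete, here is a toy sketch (my own illustration, not the method from the cited paper): risk is estimated from the most similar past patient records, just as described above. The features, values and labels are invented.

```python
from math import dist

# Toy patient records: (age, systolic BP, cholesterol), label = 1 if the disease occurred.
records = [
    ((63, 145, 233), 1), ((37, 130, 250), 0), ((56, 120, 236), 0),
    ((57, 140, 241), 1), ((44, 120, 263), 0), ((61, 150, 243), 1),
]

def disease_likelihood(patient, records, k=3):
    """Estimate risk as the fraction of the k most similar past patients
    who went on to develop the disease (a crude nearest-neighbour sketch)."""
    nearest = sorted(records, key=lambda r: dist(patient, r[0]))[:k]
    return sum(label for _, label in nearest) / k

print(disease_likelihood((60, 148, 240), records))  # 1.0: the 3 closest records all had the disease
```

Real systems use far richer features and properly validated models, but the principle — compare a new patient against thousands of past records — is the same.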
Other researchers are combining machine learning methods with advanced MRI techniques to help predict Alzheimer’s disease, brain cancer and other diseases of the brain. The machines learn to recognise patterns in the scans and extrapolate to predict the diagnosis. Today, researchers are able to predict Alzheimer’s disease with 82-90 percent accuracy.
Some are even studying genes, attempting to find the relationship between a particular gene and a disease. Like the other approaches, these too apply machine learning techniques to crunch tons of data, extracting patterns that might help find the root of the disease.
Surgical robotics is steadily emerging too. Today, there are hospitals where basic surgeries are performed without a doctor slicing into the patient directly. An operator guides robotic arms with the help of joysticks. These arms reduce jerky or shaky movements, thus increasing precision. With the addition of superior visualisation, surgical robotics minimises incisions, thus reducing risk and need for medication. The da Vinci System has performed over 3 million minimally invasive surgeries successfully.
Furthermore, smart bionic limbs are using machine intelligence to help people with disabilities lead normal lives. They sense and adapt to the environment and predict the user’s intentions to provide greater stability and ease.
AI is not only helping in diagnostics and surgeries, but also in designing drugs. Atomwise’s AtomNet studies protein structures, which can be considered to be “locks”, and tries millions of molecular combinations to open these “locks”. Basically, it’s designing complex molecules to destroy harmful protein combinations, or put differently, designing drugs to cure diseases.
Though these technologies are still in their infancy, we can see the potential they have. In the future, life expectancy is bound to rise, diagnostics will be more accurate and easily obtained, surgeries will be automated, medical advice and care will be provided by virtual assistants online, drugs will be more effective with minimal side effects, and smart exoskeletons will help the disabled lead normal lives. We may even dream of exoskeletons merged with the human body for superior performance, human augmentation, automated gene modification, advanced cyborg technology, nanobots racing through our bloodstreams clearing clots and cleansing the body of diseases, and other crazy ideas. How cool will our future be!
What if you had a friend for life, a friend you would never lose, a friend who understood every emotion, every idea, every intention? What if this friend was unique to you, would guide you, be there to support and help you? What if I told you that this is our future?
With the advent of the Internet, there has been a boom of information, so much that discerning the useful from the worthless is becoming nearly impossible. What if somebody could extract the meaningful content and use it to make our lives easier?
This should be the aim of virtual personal assistants (VPA). Today, we have Siri, Cortana, Google Now, Watson – all crude versions of our vision. The technology currently is basic, involving pattern recognition, knowledge bases, natural language processing and sentiment analysis among the numerous other techniques. And yet, true “cognisance” is yet to be achieved.
In the future, here are some tasks we might expect our VPAs to perform: schedule meetings and manage time; monitor personal health and automatically call medical aid in an emergency; connect home, car and office through IoT; update the user on news, traffic and weather; book reservations or order from restaurants; and provide information requested by the user by searching the Internet. More importantly, a VPA should customise and adapt to the user’s needs. There are tons of other tasks, but listing them all would be tedious.
But we want our VPAs to be more than that. We need them to understand our emotions, our sentiments, our moods. When we are down, soft music should play automatically and the lighting of the room should mellow; when we are jovial, upbeat music should cheer us on. Furthermore, our VPAs should be able to converse meaningfully, understand our feelings and guide us by extracting information from the Internet. Stop for a moment and ponder: what is it to really understand? How do we make a machine understand our feelings?
This is merely a glimpse of the future of virtual personal assistants, but one thing is certain: they will play a significantly large role in our future. Currently, I’m creating my own virtual personal assistant in Python and will soon be writing about it.
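At its simplest, the dispatch loop at the heart of such an assistant can be sketched in a few lines of Python. This is only an illustration (the intents and replies are invented): production assistants replace these keyword rules with NLP and machine-learned intent classifiers.

```python
import re

# A minimal intent-matching sketch: each regex pattern maps to an action.
INTENTS = {
    r"\b(weather|forecast)\b": lambda: "Fetching today's weather...",
    r"\b(news|headlines)\b":   lambda: "Here are the top headlines...",
    r"\b(book|reserve)\b":     lambda: "Which restaurant shall I book?",
}

def respond(utterance):
    """Return the reply of the first intent whose pattern matches the utterance."""
    for pattern, action in INTENTS.items():
        if re.search(pattern, utterance.lower()):
            return action()
    return "Sorry, I didn't understand that."

print(respond("What's the weather like?"))  # Fetching today's weather...
```

The hard problems discussed above — context, mood, genuine understanding — begin exactly where this keyword matching ends.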
Cyborgs, Humanoids and Robots
Some readers might now be thinking, “Ah, now he’s talking about AI”. This is the typical vision of AI, and it has some truth to it. Personal assistants need not only be virtual; they will soon have bodies. They will aid us in daily mundane tasks, such as driving the kids to school, taking out the trash, playing, helping cook, helping wash and any other task you could think of. Furthermore, they could help the aged with daily activities. Humanoid robots are already emerging today, such as Asimo, Nao, Atlas and Actroid-SIT.
Furthermore, there is a tremendous boom in industrial robots which automate manufacturing. Today they are used for welding, painting, picking and placing, packaging and numerous other tasks. They are in demand because of their precision, speed and endurance.
Apart from these, robotics will soon have applications in military, space exploration, agriculture, medicine, sports, fire fighting, construction and in innumerable other fields. Lastly, we should keep an eye out for nano robots too, for they are an emerging technology.
By 2018, autonomous cars created by companies like Google, Uber and Tesla will be rolling on the streets. These automobiles use machine vision, GPS and odometry, among other techniques. There is a lot of research in this field because, as shown by the graph above, death due to road injuries ranks 9th among causes of death, with 1.3 million people dying per year. Reports say that driverless cars could reduce road fatalities by 90%. Furthermore, in the foreseeable future people will avoid buying cars because automobile services like Uber will be just a tap away, and this will lead to a reduction and better organisation of traffic, which could further reduce fatalities.
Other autonomous vehicles too will soon emerge, like Hyperloop, drones, hovercrafts, trains, planes, ships etc.
AI will cause a boost in business. With the help of predictive analysis, there will be major improvement in stock market prediction, business models, recommender systems and numerous other fields.
Today, Amazon, Facebook, Microsoft and Google, among numerous others, are using AI to analyse consumer behaviour to provide better ads, services and products.
These are merely some of the fields in which AI is going to have a major impact. This graph reveals the future of emerging technologies:
To conclude, AI is currently undergoing a boom and therefore, an AI startup will have high demand and relatively easy funding. And thus, guided by our vision, let us proceed to create our future.
BUILDING BLOCKS OF ARTIFICIAL INTELLIGENCE
We are living through a new technological revolution, a revolution that will transform our lifestyles drastically, a revolution caused by the advent of Artificial Intelligence (AI).
Today, AI is equated with killer robots itching to destroy the human race, an idea bred by Hollywood movies. But AI is more than that. Broadly speaking, AI’s objective is to build intelligent entities, such as machines or software, to facilitate our daily tasks and to bring comfort to our lives. John McCarthy, who coined the term in 1955, defined it as “the science and engineering of making intelligent machines, especially intelligent computer programs.” Numerous other rigorous definitions have been proposed, but a single universally accepted definition of AI is yet to be established.
THE FOUNDATIONS OF AI
It is evident that to build an intelligent machine, we have to first understand intelligence. Therefore, AI is a multidisciplinary field, founded on ideas from numerous other subjects.
The question that bothers me the most is: does the mind create our thoughts, or is it our brain? Put differently, if we could recreate a brain exactly, would it function like any other brain? Would it display consciousness, will and creativity? Further, if consciousness is not the effect of the wiring in our brain, but in fact the cause of our thoughts, how do we create consciousness?
Following this stream of thought, other questions arise: what does it mean to understand something? Is logic inherent? What is knowledge? Does it differ from information?
Since the times of the ancient Greeks, people have puzzled over the functioning of the mind, reason and logic. Aristotle’s theory of syllogism describes the basic process of the rational mind, whereby we deduce conclusions from initial premises. He says:
A deduction is speech (logos) in which, certain things having been supposed, something different from those supposed results of necessity because of their being so. (Prior Analytics I.2, 24b18–20)
This is nothing but the notion of logical consequence or logical implication, which is the basis of mathematics. More information on syllogism is available here.
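Logical consequence can even be checked mechanically. A minimal Python sketch (the function name and encoding are my own): a conclusion follows “of necessity” from some premises if it is true in every model in which all the premises are true.

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Check logical consequence by brute-force truth table: the conclusion
    must hold in every assignment that makes all the premises true."""
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(p(model) for p in premises) and not conclusion(model):
            return False  # found a countermodel
    return True

# Modus ponens: {p -> q, p} |= q
premises = [lambda m: (not m["p"]) or m["q"], lambda m: m["p"]]
print(entails(premises, lambda m: m["q"], ["p", "q"]))  # True
```

The same idea, generalised far beyond truth tables, underlies modern automated theorem provers.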
Syllogisms describe the workings of the mind, but what is the mind? René Descartes offered a theory saying that the mind is a substance whose essence is thought. Further, mind and body are distinct, a theory known as the “mind-body dualism“. Further reading can be done here.
Materialism holds an alternative view, proposing that the mind is merely the result of the interaction of matter, a view that seems rather narrow.
We have discussed the workings and the substance of the mind, but what about consciousness? Is consciousness a substance, or merely a result of the interaction of matter? Sri Aurobindo Ghosh posits that consciousness is the fundamental substance of this universe, and is involved in every material object. Hence, evolution is simply the effect of the emergence of consciousness. His major work, The Life Divine, explains his philosophy thoroughly.
I have merely scratched the surface of the philosophical theories underlying AI, but it is enough to make you aware of its complexities.
Philosophy delves into the realm of ideas. But to make ideas concrete, a formal set of laws, logic and notations is required. And this is where mathematics comes in. Though approaches vary from researcher to researcher, in general these are the topics used:
- Logic: propositional, first-order and fuzzy.
A strong mathematical foundation is a must for AI, for without it, you’ll be floundering in the ocean of jargon and symbols.
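Of the logics listed above, fuzzy logic is probably the least familiar, so a small sketch may help. It replaces true/false with degrees of truth in [0, 1]; the standard (Zadeh) operators are AND = min, OR = max, NOT = 1 − x. The example degrees below are invented.

```python
# Standard Zadeh fuzzy operators over truth degrees in [0, 1].
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1 - a

hot = 0.7    # "the room is hot" is 70% true
humid = 0.4  # "the room is humid" is 40% true
print(f_and(hot, humid))        # 0.4 -- "hot AND humid"
print(f_or(hot, f_not(humid)))  # "hot OR not humid"
```

Fuzzy rules like these drive many real control systems, from washing machines to camera autofocus.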
There are two sides of computer science that need to be developed to create AI: hardware and software.
A basic background in computer architecture is necessary to create a functional intelligent program and to understand its flaws and restrictions. In the future, hardware too can be modified to better fit our aim. What if we created a computer shaped like a brain? Would it perform faster? What tasks would be easier? What would be the restrictions? Could we create artificial neurons?
AI is ultimately a computer program, a piece of software. Therefore, a thorough understanding of data structures, algorithms, computer networks, databases and programming, especially object-oriented programming, is a must.
There are tons of programming languages out there, but currently these are the top ones used for AI / Machine Learning / Data Mining / Data Science / Analytics.
Further information on programming languages can be obtained here.
To create intelligence, we obviously have to first understand how our brain functions. This task is accomplished by neuroscience. Which areas of the brain are at work when we reason, think, imagine, see or hear? How is information stored in the brain? What creates thoughts? What happens when we dream?
I will elaborate on the studies of neuroscience in a later post.
Neuroscience studies the physical functioning of the brain, but what about our behaviour? How do we act? How do we make decisions? How do we reason?
Cognitive psychology studies the mental processes behind these actions, and further attempts to give a theory to the functioning of the brain. Subsequently, taking inspiration from these theories, we can create AI.
Finally, humans have to interact with machines. Therefore, machines have to understand our language and all its underlying nuances. At first glance this task doesn’t seem too difficult, but it is actually quite complex. To understand a sentence, understanding the grammar is not enough; one has to understand the context and the subject matter. How do you make a machine understand the concept of a “cat”? What about an abstract concept such as “love”? You could provide a definition, but would it understand? The difficulty, as you might realise, is not the syntax, but the semantics.
This gives only a glimpse into the fields that are at the foundations of AI, but it is enough to reveal the magnitude of the difficulties ahead. In the following post, I will explore the tasks that AI aims to accomplish.
Milton Keynes, UK, 8 September 2016 – AI software company Celaton today announced the appointment of Andrew Burgess to its Advisory Board.
A management consultant, author and speaker with over 25 years’ experience, Andrew is considered an authority on innovative and disruptive technology, artificial intelligence, robotic process automation and impact sourcing.
He is a former CTO who has run sourcing advisory firms and built automation practices. He has been involved in many major change projects, including strategic development, IT transformation and outsourcing, in a wide range of industries across four continents.
Andrew joins Celaton at an exciting period of growth, not only for the company but for the industry as a whole.
Andrew says of the appointment “It’s an honour to be working with one of the leading vendors in artificial intelligence and cognitive automation. Celaton already has an impressive track record in this market and I look forward to helping them grow and develop further”.
Andrew Anderson, CEO of Celaton, said “I have known Andrew for several years and he never ceases to amaze me with his passion and dedication to sharing his knowledge about AI and Intelligent Automation. It is a great honour that he has agreed to join Celaton and we thoroughly look forward to working with him.”
Software Engineering Institute at Carnegie Mellon University and SparkCognition Collaborate to Advance Cognitive Security
Carnegie Mellon University’s Software Engineering Institute collaborates with AI industry leader, SparkCognition, to build next generation cybersecurity programming guide.
AUSTIN, TX (PRWEB) JULY 25, 2016
The Software Engineering Institute (SEI) at Carnegie Mellon University is collaborating with industry-leading Cognitive Security Analytics company, SparkCognition, to build an automated cognitive cyber security threat remediation tool using SparkCognition’s proprietary technology and IBM Watson.
As part of the collaboration, engineers at SparkCognition will train the research team at the SEI’s CERT Division on how to use IBM Watson to catalogue and make query-able vulnerabilities on the Common Weakness Enumeration (CWE) list and CERT Secure Coding Rules. SparkCognition has already trained IBM Watson on a very large corpus of cybersecurity technical literature, including the Common Vulnerabilities and Exposures (CVE) list, OWASP literature, and many more cyber security databases.
“As software has become essential to all aspects of system capabilities and operations, there has been a dramatic increase in the significance of cybersecurity,” said Mark Sherman, Technical Director for Cyber Security Foundations at the SEI. “The CERT Division focuses its research on cybersecurity challenges in national security, homeland security, and critical infrastructure protection. We seek to develop and broadly transition new technologies, tools, and practices that enable informed trust and confidence in using information and communication technology. SparkCognition provides critical capabilities for this advanced initiative.”
SparkCognition’s technology is capable of harnessing real-time infrastructure data and learning from it continuously, allowing more accurate risk mitigation and prevention policies to intervene and avert disasters. The company’s cybersecurity-centered solution analyzes structured and unstructured data and natural language sources to identify potential cyber threats. What makes the cognitive platform unique is that it can continuously learn from data and derive automated insights to thwart any emerging issue.
“We are looking forward to working with one of the nation’s leading cybersecurity programs,” said Keith Moore, Product Manager of SparkCognition. “The company is building solutions that address cyber risk and resilience, software vulnerability, insider threat, secure coding practices, and other areas. Together, we are leading in new approaches, analysis tools, and training options to improve the practice of cybersecurity in private and public sector organizations, and we’re excited to collaborate with the SEI in pursuit of that mission.”
SparkCognition, Inc. is the world’s first Cognitive Security Analytics company, based in Austin, Texas. The company is successfully building and deploying a Cognitive, data-driven Analytics platform for Clouds, Devices and the Internet of Things in industrial and security markets by applying patent-pending algorithms that deliver out-of-band, symptom-sensitive analytics, insights, and security. SparkCognition was named the 2015 Hottest Start Up in Austin by SXSW and the Greater Austin Chamber of Commerce, was the only US-based company to win Nokia’s 2015 Open Innovation Challenge, was a 2015 Gartner Cool Vendor, and is a 2016 Edison Award Winner. SparkCognition’s Founder and CEO, Amir Husain, is a highly awarded serial entrepreneur and prolific inventor with nearly 50 patents and applications to his name. Amir has been named the top technology entrepreneur in Austin by the Austin Business Journal, is the 2016 Austin Under 40 Award Winner for Technology and Science, and serves as an advisor to the IBM Watson Group and the University of Texas Computer Science Department. For more information on the company, its technology and team, please visit http://www.sparkcognition.com.
In the past year several concepts dear to my heart have become quite popular, namely Artificial Intelligence, Machine Life and Artificial Neural Nets. After quietly toiling on these ideas as a hobbyist since 2007, it feels like a bit of a vindication to see the entire world finally realize how important they are. Along with this rise of Artificial Intelligence within mainstream consciousness has come the inevitable question: will conscious robots rebel against humankind?
Artificial Lawyer caught up with Cian O’Sullivan, founder of Beagle, the automated contract analysis system that is just celebrating a year and a half of operations and landing VW as a client.
We discussed how Beagle came about, why maybe sometimes it’s better not to talk to lawyers about AI and how come the company has one of the world’s largest auto companies as a client, and then some.
Cian O’Sullivan’s web camera is not working when Artificial Lawyer calls for a video conference, so Artificial Lawyer is instead treated to a picture of a soccer pitch in Colombia that the legal tech company founder took on his travels.
The international reference makes sense once you start to talk to O’Sullivan. The Canadian travels a lot. He went to law school in Ireland and studied for the New York Bar exam while he was staying in Bermuda.
As his start-up legal tech company, Beagle.ai, grows …..
To continue reading: https://artificiallawyer.com/2016/09/07/a-founders-story-beagle-goes-global/
Home of Artificial Intelligence information: Resource Directory, News Stories, Videos, Twitter & Forum Streams, Spotlight, Awards, Showcase and Magazine