The AI Times Monthly Newspaper

Curated Monthly News about Artificial Intelligence and Machine Learning

Celebrating Two Years of homeAI.info and the Informed.AI Media Group

Two years ago today, homeAI.info first went live. I am very happy to say that the purpose of the site then is still the core purpose of the Informed.AI group of websites: to support the AI community.

The two-year milestone prompted us to review this mission, and we are pleased to revise our mandate to ensure our core message and purpose are clear and easy to understand.

The past two years have been an amazing journey, adding more functionality to homeAI.info and creating a number of supporting sites for specific parts of the ecosystem. But the most enjoyable part for us has always been meeting people and helping them engage better with the wider AI community.

We have accomplished so many amazing things over the past two years. For example, our Events site recently passed the milestone of 150 listed events, and it continues at pace, with more events listed every single week.

One of the key issues highlighted by the Royal Society report on Machine Learning was the potential skills gap we face as an industry. Obviously there isn't a single solution to this, but we hope that homeAI.info, from a training and education perspective, and Vocation.AI, as a careers portal and jobs board, will help support this key area of development over the coming years.

AI Awards

The AI Awards have been a huge success on a global scale; we have already recognised 27 winners. Having now entered our third year, we have some fantastic plans and announcements coming very soon that I am sure everyone will be very pleased to hear. Celebrating the successes and achievements of this fast-paced industry is hugely important and is at the core of our efforts to support the AI community.

Neurons Network

We continue to grow our Professional Network for AI Practitioners and Researchers, called Neurons. This is a mix of both online and offline interactions, with meetups, forums, user blogs and messaging. The platform allows us to facilitate knowledge sharing between organisations, so we can all benefit from the open developments of this field. We have recently created Meetup Chapters covering several new cities in addition to London, and we are looking to rapidly add more regions and locations over the coming months.

Unique Content

We want to thank everyone for their continued involvement. We are always looking for contributors to submit stories, press releases and thought-leadership articles. Sharing your news and thoughts is part of what this ecosystem is designed for, so please don't hesitate to submit your news on our platform. We are now also creating our own unique stories, articles and papers, which we hope you will enjoy. This exclusive content will also be shared as part of our new weekly newsletter.

Showcase

We will be launching a number of modern-styled events as part of our AI Showcase, including quarterly online live demonstrations of AI products, frameworks and tools, an annual unconference for researchers to better share ideas and receive feedback, and a Showcase Summit. More details are coming soon, but please visit our site and register your interest.

In addition, we have a number of further projects and sites in the planning stage, to be launched in the coming months, which will add new capabilities to our portfolio of sites.

Volunteers Needed

One of the key developments over the last six months has been the building of a team of volunteers to help accelerate our capabilities and deliveries in support of our mission to support the AI community. I want to thank everyone who has already signed up to help us, and to welcome others to volunteer: there is always more to do in this fast-paced environment, and the more volunteers we have, the more we can support the community.

As we add more cities to our Neurons Meetup Chapters, we need volunteers to help organise the local events. We are also looking for AI experts to join our panel of judges for the AI Awards. If you would like to be involved, please contact us to find out more.

We are also forming a number of strategic partnerships that will enable us to achieve a lot more in the coming months. More detailed announcements will be released in due course, but this is a very exciting stage of our expansion.

We Value your Feedback

As always, your feedback is welcomed and encouraged, as we want to make sure we are delivering useful sites and functionality. The best way to do this is to email us at talk@homeAI.info.

Thank you for your continued support and encouragement.

Dr Andy Pardoe
Founder of the Informed.AI Media Group

Our group of websites includes:

http://homeAI.info

http://Neurons.AI

http://Chapters.Neurons.AI

http://Awards.AI

http://Events.AI

http://Showcase.AI

http://Vocation.AI

http://Study.AI

http://Informed.AI

Our social media:

We have Twitter accounts for all of our sites:

@homeAIinfo
@Neurons_AI
@Awards_AI
@Events_AI
@Showcase_AI
@Vocation_AI
@Informed_AI

A.I LOVE YOU – A Play in London 14th to 24th June

AI Love You is an interactive show by Heart to Heart Theatre which explores the relationship between humans and artificial intelligence. Putting the audience at the centre of this ‘choose your own story’ play, AI Love You allows the audience to decide the fate of the characters. Through voting, the audience are asked to make moral decisions that bring our own relationship with technology to the foreground.

Drawing inspiration from philosophers, DeepMind, addiction to the internet and technology, and some very creepy sex robots, Heart to Heart Theatre invite the audience to question what it is that makes them human, and whether a robot could or should have the same rights as them. What will happen when programmes are so good at imitating life that they ask to be treated that way? With AI becoming more and more a part of our lives, this piece of theatre begins to question where the line is.

Tickets from http://www.theatreN16.co.uk

Big Data Strategy (Part III): is your company data-driven?

This article originally appeared on Cyber Tales

If you missed the first two parts, I previously proposed some tips for analyzing corporate data, as well as a data maturity map to understand an organization's stage of data development. Now, in this final article, I want to conclude the mini-series with some final food for thought on big data capabilities in a company context.


I. Where is home for big data capabilities?

First of all, I want to spend a few more words on the organizational home (Pearson and Wegener, 2013) for data analytics. I claimed that the Centre of Excellence is the cutting-edge structure to incorporate and supervise the data functions within a company. Its main task is to coordinate cross-unit activities, which include:

  • Maintaining and upgrading the technological infrastructures;
  • Deciding what data have to be gathered and from which department;
  • Helping with the talents recruitment;
  • Planning the insights generation phase and stating the privacy, compliance, and ethics policies.

However, other forms may exist, and it is essential to know them since sometimes they might fit better into preexisting business models.

Data analytics organizational models

The figure shows different combinations of data analytics independence and business models. It ranges from business units (BUs) that are completely independent from one another, to independent BUs that join efforts on specific projects, to an internal (corporate center) or external (center of excellence) center that coordinates different initiatives.


II. Data Startups vs Data Incumbents

In spite of everything, all the considerations made so far mean different things and provide different insights depending on a firm's peculiarities. In particular, the phase of the company life cycle deeply influences the type of strategy that needs to be implemented.

Although smaller companies often have structural competitive advantages over bigger players, there is no strong correlation between data maturity and a company's life cycle (e.g., some startups are better than big pharma companies at managing their data, and vice versa). However, startups are obviously quicker to advance through data maturity steps, because they are more agile and because of their different organizational scale.

The important aspect I want to highlight here is that startups and incumbents need to look at the data problem through two completely different approaches (although with the same final goal). I call these two approaches the retrospective approach and the prospective approach.

The prospective approach concerns mainly startups, i.e., companies that have not been in business for long and are not (yet) producing a huge amount of data. They will produce and gather a lot of data quite soon, though, so it is extremely important to set an efficient data strategy from the beginning.

The retrospective approach is instead for existing businesses that are overwhelmed by data but do not know how to use them, or that face specific problems (e.g., centralized integration).


The prospective approach (Startups)

A startup is completely free from any predetermined structure, and it can easily establish a strong internal data policy from the beginning by adopting a long-term vision, which would prevent most future data-related issues. This should not be underestimated, and it requires an initial investment of resources and time: if the firm does it well once, it will be rid of a lot of inconveniences later on.

A well-set data policy would indeed guarantee a lean approach for the startup throughout the following stages. Moreover, young companies are often less regulated, both internally (i.e., less internal bureaucracy) and externally (i.e., fewer compliance rules and laws). They also have a different risk appetite, which pushes them to experiment and adopt forefront technologies. Nonetheless, they should always focus on data quality rather than quantity to start with.


The retrospective approach (Incumbents)

Bigger companies instead usually face two main issues:

i) They have piles of data and they do not know what to do with them;

ii) They have the data and a specific purpose in mind, but they cannot even start the project because of poor data quality, inadequate data integration, or shortage of skills.

In the first case, they are in the Primitive stage, meaning that they have data but no clue how to extract any value from them. Since big institutions usually have really tight and demanding job roles, it is sometimes impossible to innovate internally; in other words, they are "too busy to innovate". Some sectors are more affected by this problem than others (banking/fintech, for instance, compared with the biopharma industry).

I believe a good starting point for this issue is hiring a business idea generator: an experienced, high-level individual who becomes a sort of data evangelist and provides valuable insights even without a strong technical computer science background. After that, a proper data scientist is essential.

For the second scenario (they have data but cannot use them), I see mainly two solutions:

i) The firm implements from scratch a new data platform/team/culture;

ii) The firm outsources the analysis/problem.

The first option is, of course, more robust (if it succeeds) and revolutionary for the organization, but also much more expensive. If the firm goes with implementing a new data platform/team/culture from scratch, it needs to consider a simple cost-benefit analysis:

What is the marginal utility of the new data platform/team/culture with respect to the implementation and running costs?

But, above all, never forget that it is usually a single individual (or a small group of people) who makes this decision, under uncertainty and with low odds of success:

"I am investing a ton of money in something that may, but with good probability may not, have a return in five years' time."
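To make that cost-benefit question concrete, here is a minimal Python sketch of the decision as an expected-value calculation. All figures (costs, probability of success, payoff, horizon) are illustrative assumptions, not numbers from this article.

```python
# Hypothetical cost-benefit sketch for "build a data platform/team/culture".
# Every number below is an assumption chosen for illustration only.
annual_running_cost = 500_000      # platform/team running cost, per year
implementation_cost = 2_000_000    # one-off build cost
p_success = 0.3                    # assumed chance the initiative pays off at all
payoff_if_success = 15_000_000     # assumed value realised after the horizon
years = 5

total_cost = implementation_cost + annual_running_cost * years
expected_payoff = p_success * payoff_if_success
expected_marginal_utility = expected_payoff - total_cost

print(f"total cost over {years} years: {total_cost:,}")
print(f"expected payoff:              {expected_payoff:,.0f}")
print(f"expected marginal utility:    {expected_marginal_utility:,.0f}")
```

Even this toy version makes the decision-maker's dilemma visible: the expected marginal utility is highly sensitive to a success probability that nobody knows in advance.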


III. Data Science Outsourcing

This brings us to the second solution: outsourcing.

When it comes to choosing whom to outsource to, universities often represent a preferred avenue for big corporations: universities always need funding, and they need data for running their studies (and publishing their work). They cost far less than startups, and they offer a good pool of brains, time, and willingness to analyze messy datasets.

Startups, instead, are revenue-generating entities and will cost big incumbents more, but they often gather the best minds and talents with good compensation packages, and they offer interesting applied research and datasets that universities cannot always provide.

In both cases, the biggest issues concern data security, confidentiality, and privacy: what data the company actually outsources, how the third parties keep the data secure, how they store them, and how the decision-making process is structured (data-driven vs. HiPPO, i.e., the highest paid person's opinion) are a few of the most common questions to deal with.

Another relatively new and interesting way for big corporations to get some analysis virtually for free, and potentially to select vendors, is meetups and hackathons: window-dressing for the firm, but a good way to scout people and run experimental pilots.


IV. Other Alternatives

There is also a middle way between complete outsourcing and in-house development, called the buy-in mentality, which consists of buying and integrating (horizontally or vertically) anything the company does not develop in-house. It is definitely more costly than the other options, but it solves all the problems related to data privacy and security.

Incubators and accelerators also offer an alternative way to invest less across more companies of interest and to engage several useful subjects without fully buying any company. The disadvantages of this fragmented investment model, however, are that new companies have a high risk of failing (and the 'failing culture' is neither well regarded nor deeply rooted within big organizations), and that companies need to invest in a team dedicated to selecting and supporting the on-boarded ventures.

Finally, it is also possible to design hybrid solutions. An example is the following two-step approach: in the first phase, universities are used to run a pilot, or the first two to three worthy projects, that can drive the business from a Primitive stage to a Bespoke one. The results are then used to persuade management to invest in data analytics and either build a proper internal team or pursue an acquisition strategy.


V. Why do big data projects fail?

Simple answer: a ton of different reasons.

There are, though, some common mistakes made by companies trying to implement data science projects:

  • Lack of business objectives and correct problem framing;
  • Absence of scalability, or project not correctly sized;
  • Absence of C-level or high management sponsorship;
  • Excessive costs and time, especially when people with wrong skill sets are selected (which is more common than you think);
  • Incorrect management of expectations and metrics;
  • Internal barriers (e.g., data silos, poor inter-team communication, infrastructure problems, etc.);
  • Treating the work as a one-time project rather than continuous learning;
  • Data governance, privacy, and protection issues.

Reference

Pearson, T., & Wegener, R. (2013). “Big data: the organizational challenge”. Bain & Company White paper.

Note: the above is an adapted excerpt from my book “Big Data Analytics: A Management Perspective” (Springer, 2016).

— —

Follow me on Medium

A mass-market ARTIFICIAL INTELLIGENCE

A non-computer science researcher in artificial intelligence

Among AI specialists, my profile may surprise: I am an independent non-computer-science researcher, a graduate of a top French business school. All my R&D has been conducted in France since 1986, by channelling computer scientists in the direction I wanted: producing an AI available to everyone. For these reasons, the following discussion is written to be understood by everyone.

I have a business background in sales prospecting. My job was always to win new customers, never to maintain a customer base. I sold computers, software, services, developers, and then artificial intelligence (from 1983). I started as a commercial engineer, then became a branch manager, a regional director, a sales manager for a software company, and finally the founder and CEO of an AI start-up.

The point I want to make with this description of my background is that I have long experience of the business world, and therefore of the AI market, which is usually not the case for researchers and their managers. My clients were my partners; they tested my inventions one by one before buying them. Thanks to this proximity to the real world, I have made many discoveries, well received by the market and sold, which I will briefly describe here.

At 70, I am old enough, and not yet a dotard, to share a useful vision of AI. To be brief, my philosophy as a researcher in computer science boils down to this: I do not care about the power of algorithms, finely-tuned programming, or the rules enacted in AI by great researchers. The only thing I care about is the user's point of view: my AI must work! Optimizations will come later, when I have competitors.

Read my full article on a mass-market ARTIFICIAL INTELLIGENCE

written by Jean-Philippe de Lespinay

Big Data Strategy (Part II): a data maturity map

This article originally appeared on Cyber Tales

As shown in Part I, there is a series of issues related to internal data management policies and approaches. The answers to these problems are not trivial, and we need a framework to approach them.

A Data Stage of Development Structure (DS2) is a maturity model built for this purpose, a roadmap developed to implement a revenue-generating and impactful data strategy. It can be used to assess the current situation of the company and to understand the future steps to undertake to enhance internal big data capabilities.

The following table provides a four-by-four matrix where the increasing stages of evolution are indicated as Primitive, Bespoke, Factory, and Scientific, while the metrics through which they are considered are Culture, Data, Technology, and Talent. The final considerations are drawn in the last row, which concerns the financial impact of a well-set data strategy on the business.

Data Stage of Development Structure (DS2)

Stage one is about raising awareness: the realization that data science could be relevant to the company's business. In this phase, there is no governance structure in place, no pre-existing technology and, above all, no organization-wide buy-in. Tangible projects are the result of individuals' data enthusiasm being channelled into something actionable. The set of skills owned is still rudimentary, and the actual use of data is quite rough. Data are used only to convey basic information to management, so they do not really have any impact on the business. Being at this stage does not mean being inevitably unsuccessful; it simply means that project performance and output are highly variable, contingent, and not sustainable.

The second phase is the reinforcing one: it is effectively an exploration period. The pilot has proved that big data have value, but new competencies, technologies and infrastructures are required, and especially a new data governance, in order to keep track of possible data contagion and of the different actors who enter the data analytics process at different stages. Since management contribution is still very limited, the potential applications are relegated to a single department or a specific function. The methods used, although more advanced than in Phase I, are still highly customized and not replicable.

By contrast, Phase III adopts a more standardized, optimized, and replicable process: access to the data is much broader, the tools are at the forefront, and a proper recruitment process has been set to gather talents and resources. The projects benefit from regular budget allocation, thanks to the high-level commitment of the leadership team.

Step four deals with business transformation: every function is now data-driven, led by agile methodologies (i.e., delivering value incrementally instead of at the end of the production cycle), and the full support of executives is translated into a series of relevant actions. These may encompass the creation of a Centre of Excellence (i.e., a facility made up of top-tier scientists, with the goal of leveraging and fostering research, training and technology development in the field), a high budget and freedom in choosing the scope, or optimized cutting-edge technological and architectural infrastructures, and all of these bring a real impact on the revenue flow.


Particular attention has to be paid to data lakes, repositories that store data in native formats: they are low-cost storage alternatives that support manifold languages. Highly scalable and centrally stored, they allow the company to switch between different platforms without extra costs, and they guarantee a lower likelihood of data loss. Nevertheless, they require metadata management that contextualizes the data, and strict policies have to be established in order to safeguard data quality, analysis, and security. Data have to be correctly stored, studied through the most suitable means, and kept breach-proof. An information life cycle has to be established and followed, taking particular care of timely and efficient archiving, data retention, and testing data for the production environment.


A final consideration has to be made about the cross-stage dimension of "Culture". Each stage is associated with a different kind of analytics, as explained in Davenport (2015). Descriptive analytics concerns what happened; predictive analytics is about future scenarios (sometimes augmented by diagnostic analytics, which also investigates the causes of a certain phenomenon); prescriptive analytics suggests recommendations; and, finally, automated analytics takes action automatically based on the analysis' results.

Data Map

Some of the outcomes presented so far are summarized in the following figure, which shows the relation between management support for the analytics function and the complexity and skills required to excel in data-driven businesses. The horizontal axis shows the level of commitment of the management (high vs. low), while the vertical axis captures the feasibility of the project undertaken, where feasibility is intended as the ratio of project complexity to the capabilities needed to complete it. The intersection between the feasibility of big data analytics and management involvement divides the matrix into four quarters, corresponding to the four types of analytics.

Each circle identifies one of the four stages (numbered in sequence, from I-Primitive to IV-Scientific). The size of each circle indicates its impact on the business (i.e., the larger the circle, the higher the ROI). Finally, a second horizontal axis keeps track of the increasing variety of data used in the different stages, meaning structured, semi-structured, or unstructured data (e.g., IoT, sensors, etc.). The orange diagonal represents what kind of data are used: from closed systems of internal private networks in the bottom-left quadrant, to market/public and external data in the top-right corner.

Big Data Maturity Map

Once the different possibilities and measurements have been identified, it is also useful to see how a company could transition from one level to the next. In the following figure, some recommended procedures are indicated to foster this transition.

Maturity Stage Transitions

In order to move smoothly from the Primitive stage to the Bespoke one, it is necessary to proceed by experiments run by single individuals, who aim to create proofs of concept or pilots that answer a single small question using internal data. If these questions have a high-value impact on the business, they will be acknowledged faster. Try to keep the monetary costs as low as possible (using the cloud, open-source software, etc.), since the project will already be expensive in terms of time and manual effort. At a company level, the problem of data duplication should be addressed.

The transition from Bespoke to Factory instead demands the creation of standard procedures and golden records, and robust project management support. The technologies, tools, and architecture have to be tested, and chosen as if they are implemented or developed to stay: the vision should be medium to long term. It is essential to foster engagement at the senior management level. At a higher level, new sources and types of data have to be promoted, data gaps have to be addressed, and a strategy for platform resiliency should be developed. In particular, the acceptable data loss (Recovery Point Objective) and the expected recovery time for disrupted units (Recovery Time Objective) have to be drawn up.
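As a minimal illustration of those two recovery objectives, the sketch below checks a hypothetical backup policy against RPO/RTO targets; every number is an assumption chosen for illustration, not a recommended value.

```python
# Hypothetical resiliency check: does the current backup/restore setup
# meet the stated RPO and RTO targets? All numbers are illustrative.
backup_interval_hours = 6      # how often data is backed up
restore_time_hours = 10        # measured time to bring a disrupted unit back

rpo_target_hours = 4           # acceptable data loss (RPO)
rto_target_hours = 8           # expected recovery time (RTO)

# Worst-case data loss equals the gap between two consecutive backups.
print(f"RPO met: {backup_interval_hours <= rpo_target_hours}")  # False: back up more often
print(f"RTO met: {restore_time_hours <= rto_target_hours}")     # False: speed up recovery
```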

Finally, to become data experts and leaders and shift to the Scientific level, it is indispensable to focus on details, optimize models and datasets, improve the data discovery process, increase data quality and transferability, and identify a blue ocean strategy to pursue. Data security and privacy are essential, and additional transparency about the data approach should be provided to shareholders. A Centre of Excellence (CoE) and a talent recruitment value chain play a crucial role as well, with the final goal of putting the data science team in charge of driving the business. The CoE is indeed fundamental in order to mitigate the short-term performance goals managers have, but it has to be reintegrated at some point for the sake of organizational scalability. It is now possible to start documenting and keeping track of improvements and ROI.

From the final step on, a process of continuous learning and forefront experimentation is required to maintain leadership and attain respectability in the data community. I have also included a suggested timeline for each step: up to six months for assessing the current situation, doing some research and starting a pilot; up to one year for exploiting a specific project to understand the skills gap, justify higher budget allocations, and plan the team expansion; two to four years to verify complete support from every function and level within the firm; and, finally, at least five years to achieve a fully operational data-driven business. Of course, the time needed by each company varies due to several factors, so the timeline should be treated as highly customizable.

References

Davenport, T. H. (2015). “The rise of automated analytics”. The Wall Street Journal, January 14, 2015. Retrieved October 30, 2015 from http://www.tomdavenport.com/wp-content/uploads/The-Rise-of-Automated-Analytics.pdf.

Note: the above is an adapted excerpt from my book “Big Data Analytics: A Management Perspective” (Springer, 2016).

— —

Follow me on Medium

Big Data Strategy (Part I): tips for analyzing your data

This article originally appeared on Cyber Tales

We saw in a previous post what the common misconceptions in big data analytics are, and how important it is to start looking at data with a goal in mind.

Even if I personally believe that posing the right question is 50% of what a good data scientist should do, there are alternative approaches that can be implemented. The main one, often suggested in particular by non-technical professionals, is the "let the data speak" approach: a sort of magic random data discovery that is supposed to spot valuable insights a human analyst would not notice.

Well, the reality is that this is a highly inefficient method: (random) data mining is resource-consuming and potentially value-destructive. The main reason why data mining is often ineffective is that it is undertaken without any rationale, and this leads to common mistakes such as false positives, over-fitting, neglected spurious relations, sampling biases, causation-correlation reversal, wrong variable inclusion, or flawed model selection (Doornik and Hendry, 2015; Harford, 2014). We should pay specific attention to the causation-correlation problem, since observational data only capture correlation. However, according to Varian (2013), this problem can be addressed through experimentation.
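A quick way to see why unguided mining produces false positives is to correlate a target against many purely random features: some will always look "significant". The following Python sketch uses synthetic data and an arbitrary threshold, purely to illustrate the multiple-comparisons trap.

```python
# Why "let the data speak" mining finds spurious signals: with enough random
# features, some will cross any fixed "interesting" threshold by chance.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_features = 100, 200

# Target and features are all independent noise: every "discovery" is spurious.
y = rng.normal(size=n_samples)
X = rng.normal(size=(n_samples, n_features))

# Correlate each feature with the target.
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])

# |r| > 0.2 can look "interesting" at n=100, yet many noise features exceed it.
hits = np.sum(np.abs(corrs) > 0.2)
print(f"{hits} of {n_features} pure-noise features pass the threshold")
```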

Hence, I think a hybrid approach is necessary. An intelligent data discovery process and exploratory analysis are valuable at the beginning to correctly frame the questions ("we don't know what we don't know" – Carter, 2011). Then, the question has to be addressed from several perspectives and using different methods, which may sometimes even bring unexpected conclusions.


More formally, and in a similar fashion to Doornik and Hendry (2015), I think there are a few relevant steps for analyzing the relationships in huge datasets. The problem formulation, obtained by leveraging theoretical and practical considerations, tries to spot which relationships deserve to be explored further. The identification step then tries to include all the relevant variables and effects to be accounted for, through both (strict) statistical methods and non-quantitative criteria, and verifies the quality and validity of the available data. In the analytical step, all the possible models have to be dynamically and consistently tested with unbiased procedures, and the insights reached through data interpretation have to be fed back to improve (and maybe redesign) the problem formulation (Hendry and Doornik, 2014).

Those aspects can be incorporated into a lean approach, in order to reduce the time, effort, and costs associated with data collection, analysis, technological improvements, and ex-post measuring. The relevance of the framework lies in avoiding the extreme opposite situations, namely collecting all data or none at all. The next figure illustrates the key steps of this lean approach to big data: first of all, the business processes have to be identified, as well as the analytical framework to be used.

These two consecutive stages (business process definition and analytical framework identification) have a feedback loop, and the same is true for the analytical framework identification and the dataset construction. This phase has to consider all types of data, namely data at rest (static and inactively stored in a database); data in motion (transiently stored in temporary memory); and data in use (constantly updated and stored in a database).

The modeling step embeds validation as well, while the process ends with scalability implementation and measurement. A feedback mechanism should prevent internal stasis, feeding the business process with the outcomes of the analysis rather than continuously improving the model without any business response.

Big Data lean deployment approach
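For illustration only, the following Python sketch renders that lean loop as control flow, with the feedback from measurement back into the business process made explicit. The function names and bodies are placeholders I introduce here, not part of the original framework.

```python
# Skeleton of the lean deployment loop described above. Bodies are stubs;
# the point is the control flow: outcomes feed the business process back.
def define_business_process(feedback=None): ...
def identify_analytical_framework(process): ...
def construct_dataset(framework): ...      # data at rest, in motion, in use
def model_and_validate(dataset): ...
def scale_and_measure(model): ...

feedback = None
for iteration in range(3):                 # loop count is illustrative
    process = define_business_process(feedback)
    framework = identify_analytical_framework(process)
    dataset = construct_dataset(framework)
    model = model_and_validate(dataset)
    feedback = scale_and_measure(model)    # outcomes return to the business
```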

This approach is important because it highlights a basic aspect of big data innovation. Even if big data analytics is implemented with the idea of reducing the world's complexity, it actually provides multiple solutions to the same problem, and some of these solutions force us to rethink the question we posed in the first place.


All these considerations are valid both for personal projects and for companies' data management. Working in a corporate context also requires further precautions, such as the creation of a solid internal data analytics procedure.

Internal data management process

Data need to be consistently aggregated from different sources of information and integrated with other systems and platforms; common reporting standards should be created (the so-called master copy), and any information should be validated to assess its accuracy and completeness. Having solid internal data management, together with a well-designed golden record, helps solve the huge issue of stratified entrance: dysfunctional datasets resulting from different people augmenting the dataset at different moments or across different layers.
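As a minimal sketch of that aggregate-validate-master-copy idea (assuming pandas; the sources and field names are hypothetical), the following builds a simple golden record by merging two sources, flagging invalid rows, and keeping the most recently updated entry per customer.

```python
# Toy golden-record construction from two hypothetical sources.
import pandas as pd

crm = pd.DataFrame({
    "customer_id": [1, 2],
    "email": ["a@x.com", "b@x.com"],
    "updated": pd.to_datetime(["2016-01-10", "2016-03-01"]),
})
billing = pd.DataFrame({
    "customer_id": [1, 3],
    "email": ["a@x.com", None],
    "updated": pd.to_datetime(["2016-05-02", "2016-02-20"]),
})

merged = pd.concat([crm, billing], ignore_index=True)

# Validation: flag incomplete records instead of silently keeping them.
invalid = merged[merged["email"].isna()]
valid = merged.dropna(subset=["email"])

# Golden record: one row per customer, most recent update wins.
golden = (valid.sort_values("updated")
               .groupby("customer_id", as_index=False)
               .last())
print(golden)
print(f"{len(invalid)} record(s) failed validation")
```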

The information presented here is not a set of one-size-fits-all solutions, and it should be carefully adapted to different situations, teams, and companies; but it is, in my opinion, a good starting point for thinking about big data processes.

References

Carter, P. (2011). “Big data analytics: Future architectures, Skills and roadmaps for the CIO”. IDC White Paper. Retrieved from http://www.sas.com/resources/asset/BigDataAnalytics-FutureArchitectures-Skills-RoadmapsfortheCIO.pdf.

Doornik, J. A., & Hendry, D. F. (2015). “Statistical model selection with big data”. Cogent Economics & Finance, 3, 1045216.

Harford, T. (2014). “Big data: Are we making a big mistake?” Financial Times. Retrieved from http://www.ft.com/cms/s/2/21a6e7d8-b479-11e3-a09a-00144feabdc0.html#ixzz2xcdlP1zZ.

Hendry, D. F., & Doornik, J. A. (2014). Empirical model discovery and theory evaluation. Cambridge, Mass.: MIT Press.

Varian, H. (2013). “Beyond big data”. NABE annual meeting. San Francisco, CA, September 10th, 2013.

Note: the above is an adapted excerpt from my book "Big Data Analytics: A Management Perspective" (Springer, 2016).

— —

Follow me on Medium

Deep Genetic Learning


This article describes a new way to speed up back-propagation, or even to replace it entirely in deep learning. The method uses genetic algorithms to handle local minima/maxima and flat regions of the loss surface, where gradient descent struggles.
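As a minimal illustration of the general idea (not the article's exact method), the following Python sketch evolves the weights of a tiny neural network with a mutation-and-selection scheme instead of gradient descent; since no gradients are computed, flat or non-differentiable regions pose no special problem. All sizes and rates are arbitrary.

```python
# Evolving neural-network weights with a simple genetic scheme (no crossover,
# for brevity) rather than back-propagation. Purely illustrative parameters.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: fit XOR with one hidden layer of 4 tanh units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
N_WEIGHTS = 2 * 4 + 4 + 4 + 1  # W1 (2x4), b1 (4), W2 (4), b2 (1)

def loss(w):
    W1 = w[:8].reshape(2, 4); b1 = w[8:12]; W2 = w[12:16]; b2 = w[16]
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    return np.mean((out - y) ** 2)

pop = rng.normal(size=(50, N_WEIGHTS))         # random initial population
for gen in range(200):
    fitness = np.array([loss(ind) for ind in pop])
    elite = pop[np.argsort(fitness)[:10]]       # selection: keep the 10 best
    children = elite[rng.integers(0, 10, 40)]   # clone elites...
    children = children + rng.normal(scale=0.1, size=children.shape)  # ...and mutate
    pop = np.vstack([elite, children])

best = pop[np.argmin([loss(ind) for ind in pop])]
print(f"final MSE: {loss(best):.4f}")
```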

Full Article Here

Innovative Model that simulates How the Mind Works can start a new era of Artificial Intelligence

Why can this model initiate a new era of Artificial Intelligence?

(original text…)

  • This model allows quantifying, in a precise way, any fact, matter, phenomenon or thing that has some relevance for the human being. For example: a state of happiness or sadness, a feeling, the expectation or concern about something, the need to relate, the quality of a product or service, the efficiency of a worker, the total management of a business, the condition of the weather, the probability that lightning strikes a particular point of the earth, etc. In short, it is possible to quantify, with mathematical precision, any problem raised. In practical terms, the model offers an efficient universal methodology, representing a drastic reduction in time and money, for the application and development of all those tools or systems oriented to facilitating decision-making (Decision Support Systems or DSS, Game Theory, Decision Theory, Complex Systems, Expert Systems, etc.).
  • The scope of this model is determined by its universal condition, a characteristic that allows its application in any circumstance. It thus constitutes a method, system, protocol or language that matches the universal nature of the mind and creates the Artificial Mind, a concept above Artificial Intelligence, which materializes the possibility of integrating global effort. In practical terms, this means that robots could be configured with the best solutions, à la carte, that exist in the world. For example, to the Russian intelligent robot Virtual Actor (endowed with narrative and emotional intelligence, announced for this year, 2017, by Dr. Alexei Samsonovich) one could easily add the "skills" developed by Google to play Go, and also the capabilities of the Deep Ocean Explorer, manufactured at the German Institute for Research in Artificial Intelligence. This robot with an Artificial Mind could also be an Expert System with knowledge of Game Theory, and orient decision-making in any organization. This is the new era of Artificial Intelligence to which I am referring.
  • On the basis of important scientific references (one law, two theories and one principle), and from a totally innovative approach, my thesis on How the Mind Works conceptually defines five fundamental principles of the functioning of the mind, i.e., of how the brain processes information, and demonstrates each one of them with this model in a practical, clear and categorical way: a definition and demonstration that no science has performed until now. The following are some references to the current interest in how the brain processes information, or How the Mind Works:
    • The website of the American Psychological Association (APA) in its section Brain Science and Cognitive Psychology, says: “Brain science and cognitive psychology is one of the most versatile psychological specialty areas today — and one of the most in demand. All professions have a compelling interest in how the brain works. Educators, curriculum designers, engineers, scientists, judges, public health and safety officials, architects and graphic designers all want to know more about how the brain processes information.”
    • The European Union financed The Human Brain Project (HBP) with 1 billion euros, a project to imitate the human brain with supercomputers, with which it intends to control robots and also to do medical tests (Banks, 2013).
    • The website of Harvard University, in its News & Events section, announces an investment of $28 million in the project "New 'moonshot' effort to understand the brain brings artificial intelligence closer to reality", which seeks, through latest-generation laser microscopes specially built for the project, to record rat brain activity as the animals learn, and then to make extra-fine cuts of their brains to be photographed under the world's first multi-beam scanning electron microscope. The leader of the project, Dr. David Cox, a member of the Center for Brain Science at Harvard University, says: "The scientific value of recording the activity of so many neurons and mapping their connections alone is enormous, but that is only the first half of the project. As we figure out the fundamental principles governing how the brain learns, it's not hard to imagine that we'll eventually be able to design computer systems that can match, or even outperform, humans." (Burrows, 2016).
    • From another perspective, the professor of the Cybernetics Chair at the National Nuclear Research University of Russia (MEPhI), Dr. Alexéi Samsonovich, says: “…a large number of scientists are scouring their expansive heads for the solution. Some investigate from the bottom up, trying to reproduce the structure of the brain step by step, from neurons. I opt for another way: we must penetrate the fundamental principles that manage our thinking and only then look for the possibilities of translating them into concrete models, say, in the same neural networks.” (Made in Russia: Emotional computer will make way for artificial intelligence, 2016).
  • The importance of this model, beyond revealing the mystery of How the Mind Works, consists in allowing the creation of a global network of artificial neurons, which would establish a remarkable advance in the simulation of the complex network of more than 86 billion neurons that interact in the human brain. Dr. Joaquim Fuster, a recognized authority in the world of neuroscience, highlights the importance of the network: "…the network is the key; the neural network, especially the networks of the cerebral cortex, are the base of all knowledge and of all memory; they are formed throughout life, with experience, by the establishment of connections between neurons… neurons that can be grouped in small groups, especially in the primary sensorimotor areas, which can be called modules… the modules are at the base: seeing, touching, hearing… moving; but the consciousness of knowledge and the consciousness of memory is in the network, in the grouping…" (Redes 110: The soul is in the network of the brain – neuroscience, 2011). This phenomenon, or synergy, is summarized by Gestalt psychology in the sentence: the whole is more than the sum of the parts.
  • This model, universal like the mind, in any case fully satisfies the following hypothesis, which is perhaps the main foundation of Cognitive Science (the formal field of study of cognition):
    • The system interacts with symbols, but not with their meaning, and the system (mind) functions correctly when the symbols appropriately represent some aspect of external reality, and the information processing in the system (symbolic computation) leads to a successful solution of the problem presented (Varela, Thompson, and Rosch, 1993, p. 42).

Artificial Intelligence represents a world project, "ideal" and difficult to achieve (that is to say, a utopia), in which any significant advance has a great impact on our daily life. In this sense, this discovery represents, for a newspaper or magazine, technological information of great interest; for academic or scientific institutions, a considerable contribution to their research; and for businessmen or investors, a major market: according to Bank of America Merrill Lynch, the robotics market is currently worth 32,000 million euros and by 2020 is estimated to reach about 142,000 million (Costantini, 2016).

I am fully willing to travel to any part of the world to present my thesis and to demonstrate, in a practical and theoretical way, that the model I propose represents a universal solution for transferring our knowledge to a computer. In other words, it represents the ideal medium to simulate our intelligence, or to endow a machine with Artificial Intelligence.

I invite any person or institution interested in putting this model into practice to coordinate efforts to start the era of the Artificial Mind.

(see graphical example of the solution)

I appreciate any comments, orientation or questions.

We Need Your Help!

HomeAI.info was set up just over 18 months ago by Dr Andy Pardoe. Over the past year and a half, the site has grown both in terms of functionality and visitors. We have always had fantastic encouragement and feedback from the community, which has led us to add more and more to our offering.

Since our initial launch, we have added a number of additional sites and ways of supporting the AI community. These include Events.AI, Awards.AI and, most recently, Neurons.AI.

We have now reached a point in our development where we need to seek help and assistance from others in order to continue to grow and increase the support we offer.

We are a non-profit organisation, and we are looking for people who wish to help support the AI community as volunteers. We feel this is a wonderful opportunity to raise your own profile in the community while also supporting and giving back to an industry and community that has so much potential to be a world leader over the coming years.

What we Need

  1. Senior Leaders, CTOs, CIOs and, most importantly, SMEs in AI and Machine Learning, to form a senior advisory board supporting Andy and the management team on the direction and focus of the initiatives we should drive forward. We will meet a few times a year to set the agenda and focus of the group.
  2. Content Providers, Journalists and Editors to help us increase our own content. Help give us our own voice and opinion. We feel this is really important to help build up our catalog of contributors.
  3. Sales & Marketing. To help us grow, we need to reach out to more companies and individuals to make them aware of the ways we can support them.
  4. Admin. As we grow, we need to keep on top of more and more administration and paperwork.
  5. Operations & Logistics. While our initial focus has been on the online world, we are moving more into the offline, real world in terms of organising meetups and conferences. This requires all sorts of help, both on the day of the event and beforehand.
  6. Regional and Country Leads. We have visitors from all over the world, and ideally we would like a country lead for each to ensure we give focus to each country's specific requirements, and to support our roll-out of real-world events.
  7. Business Development. Help exploring new growth areas. We have a number of new websites we need help to grow, including our Jobs and Careers Portal (Vocation.AI) and our new Professional Networking site Neurons.AI. We also have a few ideas which we would like to roll out in 2017 too.
  8. Technical Support. We run and administer an increasing number of websites, and we would welcome support with these, allowing us to add functionality to our sites more quickly while ensuring a stable platform for our users and visitors.
  9. CEO, CTO and CMO. While Andy Pardoe has performed all of these roles since inception, we would welcome the dedication and leadership that separating them would bring, together with the benefit of strengthening the management team.

Currently none of these roles comes with any remuneration; they are very much part-time voluntary roles. However, we are seeking funding sources that would change that situation in the near future.

Help us grow this initiative and allow us to continue our support of the AI community.

All enquiries should be sent to andy@informed.ai

AI Awards – Become a Judge


We are looking to recruit a team of volunteer judges to help with the nomination selections for our 3rd Annual AI Awards.

This is a great opportunity to raise your profile in the industry, to be seen as a thought leader within the AI community.

For more information please visit Awards.AI/become-a-judge

Celaton delivers machine learning to Symphony’s Digital Ecosystem for end-to-end automation

Milton Keynes, UK – 18th January 2017 – Symphony Ventures, the global leader in enterprise digital transformation leveraging robotic process automation (RPA), artificial intelligence (AI) and robotic BPO (R-BPO), and Celaton, an artificial intelligence software company, today announced a global partnership to offer enterprises an end-to-end solution for implementing and managing automation tools that optimize the execution of core business processes. Under the terms of the partnership, Celaton will join Symphony’s Digital Ecosystem, a platform that enables customers to construct comprehensive automation solutions using best-in-class tools and methodologies.

Celaton’s proprietary solution, inSTREAM™, applies sophisticated algorithms, including artificial intelligence and machine learning, to streamline labour-intensive clerical tasks and decision-making. The software uniquely learns the pattern of unstructured content through the natural consequence of processing and monitoring the actions and decisions of the people involved in the process.

“Celaton positions Symphony for great success in helping organizations optimize the way they manage their business processes,” said David Poole, CEO, Symphony Ventures. “The automation ecosystem is not a continuum of increasingly clever software, but rather a set of distinct tools that complement one another, delivered through Symphony’s best-in-class methodologies. Celaton appreciates, and thrives off, that distinction, and we could not be more thrilled for the opportunities that abound by having their machine learning capabilities join our Digital Ecosystem.”

Symphony’s Digital Ecosystem uses Blue Prism’s RPA software as the framework for its platform to support and exploit a range of automation tools. The addition of Celaton inSTREAM™ extends its capabilities to handle the complex, unstructured content that its clients receive from their customers and suppliers every day via mail, fax, email, social media and other electronic data streams.

“We see a huge opportunity in teaming up with Symphony and joining its Digital Ecosystem,” said Andrew Anderson, CEO, Celaton. “This partnership enables us to bring our machine learning capabilities to Blue Prism’s RPA software as part of a holistic automation capability, and accelerate our global expansion.”

Alastair Bathgate, CEO of Blue Prism, added: “Celaton’s inSTREAM™ technology perfectly complements Blue Prism’s software, creating a comprehensive and non-intrusive solution that will enable organizations to optimize and automate their business processes. We are honoured to count them as a partner in our quest to achieve smarter approaches to the way we work.”

About Symphony Ventures:

Symphony Ventures is a global consulting, implementation and managed services firm passionate about helping clients harness the "Future of Work." Symphony Ventures specializes in robotic process automation (RPA), cognitive automation, and other inspired delivery models to help organizational leadership reduce costs, increase customer experience, repatriate work, and unleash resources to fund growth and shareholder value. Symphony has headquarters in London and offices in San Francisco, Boston and Poland. Founded in 2014, Symphony has been ranked an RPA Service leader by HfS Research, a leading service delivery automation (SDA) focused service provider by Everest Group, and a Cool Vendor by Gartner. The firm is rapidly growing and shaping an industry predicated on work, value and customer experience. For more information, visit http://www.symphonyhq.com/ and follow the company on Twitter at @SymphonyVenture.

About Celaton:

Celaton's intelligent automation technology platform enables customers to achieve competitive advantage by delivering better service, faster, with fewer people. Celaton were the first company to create and apply intelligent automation technology to streamline labour-intensive clerical tasks and decision-making in the processing of unstructured content: the unpredictable stuff that organisations and governments receive from customers, residents, suppliers and staff by email, post, paper, fax and social media every day. Celaton are passionate about machine learning and artificial intelligence, and create and apply it in the 'real world' to improve customer service, compliance and financial performance for their customers. They have invested heart and soul (not to mention over 120 man-years of development) in creating a technology platform that is transforming the way ambitious brands handle unstructured, semi-structured and structured content.

For more information, visit http://www.celaton.com/ and follow the company on Twitter at @Celaton.

Industrial Robots: Are our jobs at risk?


The era of Industry 4.0 has arrived, and it has brought changes. Between 2010 and 2015, the number of robots in operation grew by 98% according to the IFR. Smart factories have a positive effect on manufacturing: they increase productivity and efficiency. Business owners can profit greatly by eliminating human error. But as artificial intelligence improves, more jobs can be carried out by robots. This is why several experts fear the negative social consequences of the expansion of industrial robots and forecast a wave of massive job losses. But is the situation really that bad? Take a look at TradeMachines' new infographic to find out more. There might be another side to it…

By Molly Connell, TradeMachines


AI Predictions for 2017


So, our 2017 predictions for Artificial Intelligence are as follows:

  1. Art and Media Creation – I feel this will continue to build momentum next year, with more advances in the type and quality of content produced. We are already seeing quality coming close to that of humans, and maybe this year, in a number of categories, we will find it difficult to distinguish between AI-generated and human-created work.
  2. Virtual Assistants – We are now seeing agents entering the home (not just on our phones), and integration with IoT devices will continue at pace in 2017. But can they do more than switch lights on and off, perform simple search tasks and order repeat items from our favourite stores?
  3. More Physical Robots and Self-Driving Cars – This is an area that is very close to exploding into the mainstream. Apple are designing a vehicle, and Uber are testing self-driving taxis. Robots that provide home help will be further developed in Japan (where there is a huge need for this). But will we see more robots doing simple tasks in the home and office?
  4. Start-ups Come Out of Stealth – Over the past few years there has been a huge amount of VC funding for start-ups working in AI. We have seen the first such firms already, and many have been bought by the large technology players, but many more start-ups will surface from stealth this year, delivering ever more novel business models and solutions.
  5. More Industries Investigate the Use of AI – We will see many more diverse industries looking to employ the benefits of AI in their own environments. This will really drive AI in business, though it may start with advanced analytics and prediction/recommendation tasks, with robotics and deep learning applications coming into play in 2018.
  6. AI in the Workplace – We will start to see Augmentation (see the Four-A's) in the workplace for a number of different jobs. This will be a gentle introduction to how AI is going to transform the workplace over the next five years. Sectors that consume huge quantities of information, both structured and unstructured, will see augmentation first, including the legal and accounting professions.
  7. More Lives Saved – We will see more applications of analytics and research empowered by AI saving people's lives, both immediately and through diagnosis and medical cures. I hope these applications are well publicised, as this is one of the wonderful benefits of these advanced algorithms.
  8. More Talk about Governance and Ethics – This will continue until we have worldwide agreement on a framework for both key areas, but don't expect that in 2017. While certain countries will move this forward faster than others, it will still be mostly superficial talk with limited progress in 2017.
  9. Further Advances in the Technology – With the huge amount of research happening in universities, large companies and start-ups, we will continue to see advances in the models, algorithms, frameworks and platforms. But the progress will be incremental in 2017, with no major quantum-leap advance.
  10. A Standard Definition for Artificial Intelligence – While the field is technically 60 years old this year, there is still no standard definition that is widely accepted. I plan to help close this one out in 2017; more to follow soon.

Artificial Intelligence App Ada is Changing the Game for Mobile Health

Ada grows to no. 1 medical app in more than 80 countries, including Canada, bringing next-level health care technology to doctors, patients and community health

In case you haven't already downloaded the new artificial intelligence (AI) app that's taking the country by storm (well, 80+ countries to be exact), Ada is changing the way we, as consumers, users and patients, are able to assess and monitor our personal health. Designed to grow smarter as users engage with it, Ada's intelligence amounts to much more than personalized health assessments for individuals: Ada supports doctors in providing more accurate assessments and, through data collection and analysis, can help patients and doctors monitor health situations over time.

So what sets Ada apart from other medical assessment products on the market? She gets smarter with use. Ada intelligently checks symptoms by asking simple and individualized questions without complicated medical jargon, and becomes smarter as she becomes familiar with the user’s medical history. A detailed symptom assessment report is generated by analyzing all the symptom information provided by users, which can then be shared with the user’s doctor.

“While the topic of machine learning and AI comes with some unknowns, in the medical field, we know the future of AI is bright and the possibilities are endless,” said Daniel Nathrath, Ada Health co-founder and Chief Executive Officer. “We’re at the forefront of something special. Ada continues to get smarter with each passing day. At a time when health care resources are limited, Ada can work in concert with doctors to alleviate strain and allow them to focus on their core competencies.”

Developed by a team of medical doctors and scientists, Ada’s AI engine is a representation of where personal and community healthcare is headed. Since Ada’s global launch earlier this fall, the app has already become the no. 1 medical app in the App Store in 80 countries – more than any other iOS app in 2016.

Notable features & benefits for doctors include:

  • Earlier and better health assessment through a sophisticated decision support system.

  • Ada generates detailed symptom assessment reports that users are able to share with their doctors in advance of, or during, office visits.

Notable features & benefits for individuals and community health include:

  • Allows individuals to check almost any symptom by answering simple, personalized questions about their health.

  • Builds and stores an overview of the user’s health situation (e.g. allergies, medications, symptoms) – secure, up to date, and accessible from their pocket.

  • Allows users to track the health of loved ones through a multi-profile management platform – ideal for parents with young children, and adults with aging parents.

  • Makes the most of the user’s time spent in the doctor’s office.

  • Provides access to high-quality health information and care for everyone in the world.

“What’s special about Ada is the level of detail and personalization of each interaction,” said Dr. Claire Novorol, Ada Health Co-founder and Chief Medical Officer. “At each step during an assessment Ada carefully selects follow up questions to gather the information that matters the most. But that’s not all – Ada can also help you to track symptoms and outcomes, which further improves and individualizes the experience over time. This has obvious benefits for those using Ada to assess, understand, monitor and manage their own health. Doctors are excited about it too, as Ada often collects important details that they might have otherwise missed or not had time to ask about.”

For more information, please visit www.ada.com, or engage with the Ada community on Facebook or Instagram.

Disrupting Urban Transport with Artificial Intelligence

Mobile technology, applications and artificial intelligence (AI) are disrupting urban transport. Coupled with the rise of the sharing economy, where consumers prefer to hire or borrow things as they need them rather than invest in outright ownership, there is huge potential for players in the transport space to revolutionise the way we get around.

The challenge is to efficiently integrate different means of transport and manage the massive quantities of data needed to make new types of transport a reality. This data needs to be turned into actionable insights that make services easier to manage and deliver a better user experience. As new models are developed, AI is playing an increasingly important role and is the key to simplifying complex transportation networks.

AI is already supporting the integration and optimisation of new models for transport. Uber has been immensely successful across the globe in disrupting the taxi market by utilising consumer data, Global Positioning Systems (GPS) and AI to tailor its services to users. This is only the beginning for AI in transport, as other businesses begin to converge transportation networks and harness technology and data to innovate further in citywide transport.

A former Uber executive in China has recognised the potential of bike-share schemes and has raised millions of dollars in funding for his bike-share start-up Mobike. The funding allows Mobike to create a bike-sharing network where users can hire bikes via an integrated app. Bikes offer an alternative to taxis and, with a unique model, can be used as a sustainable option to cover short distances.

The challenge is to deliver these new models seamlessly while keeping the management of new schemes simple. AI has a role to play in utilising the available data to remove this growing complexity and deliver real-time visibility and optimisation. It allows extremely large quantities of data to be made accessible and useful, helping people make faster and more precise decisions, and it enables workers to manage and use more data with better results.
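
To make the idea of turning data into actionable insights concrete, here is a minimal sketch of a data-driven recommendation for a bike-share scheme like those discussed above. The station names, capacities and thresholds are invented for illustration, and the greedy heuristic is a toy example, not Stage Intelligence's actual algorithm:

```python
# A toy greedy heuristic: pair overfull stations with underfull ones and
# suggest how many bikes to move. All names and numbers are invented.

# Snapshot of (bikes docked, total docks) per station.
stations = {
    "Riverside": (2, 20),    # nearly empty: riders cannot start trips here
    "Old Town": (19, 20),    # nearly full: riders cannot end trips here
    "Museum": (10, 20),
    "Station Sq": (18, 20),
    "Campus": (3, 24),
}

LOW, HIGH = 0.25, 0.85  # target band for each station's fill ratio


def rebalancing_moves(stations):
    """Return (from, to, n_bikes) suggestions that pull stations into the band."""
    surplus, deficit = [], []
    for name, (bikes, docks) in stations.items():
        fill = bikes / docks
        if fill > HIGH:
            surplus.append([name, bikes - int(HIGH * docks)])
        elif fill < LOW:
            deficit.append([name, int(LOW * docks) - bikes])
    moves = []
    for src in surplus:
        for dst in deficit:
            n = min(src[1], dst[1])
            if n > 0:
                moves.append((src[0], dst[0], n))
                src[1] -= n
                dst[1] -= n
    return moves


for src, dst, n in rebalancing_moves(stations):
    print(f"Move {n} bikes from {src} to {dst}")
```

A production system would forecast demand rather than react to a single snapshot, and would account for travel distances between stations, but the shape of the output – a short list of concrete actions derived from raw data – is the point.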

Simplifying the management of transportation services is vital for growth and success. Using resources efficiently enables operators to maximise the potential of their service and give consumers the best possible experience. Operators need a service that can run smoothly and remain profitable, while users want a service that simply delivers what they need when they need it – access to transport.

AI can be used to innovate and manage transportation systems globally. It will help operators to distribute and maintain their services efficiently, taking the pain out of consumer travel. AI will provide transport operators with data-driven recommendations to overcome complex challenges throughout their service, recommendations that can also inform and justify decisions before implementation.

AI will be at the forefront of this market of immense potential, especially for newcomers. Its capabilities should be recognised and embraced as a smart solution for facilitating and bringing efficiency to the transportation services industry, while improving city life and the health and welfare of citizens.

We are at the very beginning of AI in transport, and it will play a leading role in supporting new models and innovations, as well as in shaping how we experience our cities.

Organisations in the transportation industry should consider how AI can help them to simplify their operations and manage market disruption. New intelligence is changing what transportation can be.

Author: Tom Nutley
Business Development Director
Stage Intelligence
www.stageintelligence.co.uk

Celebrating Eighteen Months of homeAI.info and the Informed.AI Group

Another six months, and a lot of progress to report on.

Our main site homeAI.info remains at the heart of our group: a growing directory of information resources, with an additional category added for fintech during the last period and still more to come. The news area remains very popular and we continue to see more user-submitted stories. We have also continued to add to our spotlight area, and are always looking for more companies, startups and people to profile there.

We have just launched a new dedicated area for students of AI, accessed via the link http://Study.AI, which we see as a major part of our educational offering going forward; over the coming months we will add more resources to this area.

We have launched the 2nd Annual Global AI Achievement Awards, which has an amazing 21 categories, making it the biggest and best Awards for AI. This is the original AI Awards and we hope you will all support this initiative by voting at http://Awards.AI. The Awards are a core part of delivering on our manifesto obligation of supporting the AI community and celebrating the achievements of those working in the field.

To mark the eighteen-month anniversary we are launching our most ambitious website yet. We are calling it Neurons.AI, and it’s a Professional Network for AI Practitioners and Researchers. The focus of this site is to provide a bridge between commercial and academic endeavours in the field of AI. We strongly believe that bringing the two groups together will produce even more amazing developments in the field of AI, Machine Learning and Data Science. The network is like a social media network, but with a significant emphasis on forums and discussions. Neurons.AI also includes an offline element in the form of regular meet-ups. We have an official Press Release for this launch which you can read here.

We are also preparing to launch our AI Showcase quarterly meet-up from Q1 2017; the details can be seen at http://Showcase.AI. The aim is to inform students of AI and Machine Learning about the inner workings of commercial development of Machine Learning applications and systems. This meetup will also be an opportunity for startups to showcase their products.

We continue to develop the careers portal and jobs board at http://Vocation.AI and are actively looking for more companies or agencies wanting to list their job opportunities on our site for free.

As always, without the support of the AI community we are nothing. We continue to get wonderful feedback, and look forward to developing our platform to further support the AI community. We are very excited to make significant progress in 2017. As part of this we are looking to build out our advisory board to help us shape the direction of our future growth, and are exploring ways we can accelerate our growth and rollout in 2017.

Thank you for your continued support and encouragement.

Dr Andy Pardoe
Founder of the Informed.AI Group of Community Websites

Our group of websites includes:

http://homeAI.info

http://Awards.AI

http://Events.AI

http://Showcase.AI

http://Vocation.AI

http://Neurons.AI

http://Study.AI

http://Informed.AI

Our social media:

We have twitter accounts for all of our sites;

@homeAIinfo

@Awards_AI

@Events_AI

@Showcase_AI

@Vocation_AI

@Neurons_AI

Press Release – Neurons.AI Launches – The Network for AI Professionals

Neurons.AI Launches Today – The Network for AI Professionals

A new social network for artificial intelligence professionals called Neurons.AI launches today; it will both operate online and host real-world meet-ups.

Neurons will be the Facebook for AI experts and also provide members with the chance to socialise at regular events, to learn more about the subject and share ideas with others in the field.

Neurons is the brainchild of UK-based Dr Andy Pardoe, who holds a PhD in Artificial Intelligence and is the Founder of Informed.AI, a group of community websites supporting those interested in AI, machine learning and data science.

The network will officially launch in beta mode on the 27th November but is open today for early registrations, with a limited membership for the first six months, to be followed later by open paid subscriptions. Beta members will not pay membership fees for the first year. All membership fees will be used directly for the activities of the Informed.AI group to help promote and support the wider AI community.

Founder, Andy Pardoe, said: ‘I want to build a place where people can talk and share their ideas and experiences about AI and machine learning and allow collaborations between researchers and those working in a commercial setting.’

‘The idea is to have a more dynamic conversation about AI, a place where people can have a voice.’

He added that members would be able to learn more about the latest developments in the AI field, often before anyone else does, given that this will be a forum for experts from industry and academia.

The social dimension will also be front and centre, with an objective to build new connections and make friends in the AI and machine learning world. There will also be opportunities for members and their organisations to make presentations at meet-ups.

Naturally there will be significant networking opportunities; the ability to share and contribute to online forums and articles connected to Neurons; and to participate and also present new ideas at meet-ups.

If you would like to become a member please visit http://Neurons.AI to find out more.


Learn Deep Learning the Hard Way

There are so many articles about learning Deep Learning, but I still decided to write one more. The reason is that I find many of those articles saying the same thing over and over again: the same set of online courses and the same set of books. I think there is a need for a new guide to learning DL for people who are already well versed in traditional ML.

Deep Learning is as much science as it is art. It’s increasingly looking like the most promising candidate among a set of different techniques for one day solving Artificial Intelligence. I’ve met and spoken to a lot of people recently who believe doing deep learning is pretty easy: you only need an open-source library like TensorFlow or Theano and decent data at your disposal, and you are all set. Trust me, it’s not true.

Coming from a science background before venturing into the world of ML, first as an engineer and then as a founder, I think one should seriously dive very deep into a field and appreciate the low-level details before building models in a hackish way. The hackish method is OK for a small personal project, but not if you want to be a good researcher in the field one day, or if you have plans to build a great product for the real world.

We at Artifacia broadly classify all of AI into Visual Understanding and Language Understanding. This is not exactly the best approach in the world, but it helps us organize and execute our projects pretty efficiently. Much of our work is applied in nature, with a small part being basic and long-term, such as Project Turing and Project Button. We expect to publish some of our ongoing work sometime next year.

Even though my co-founder and CTO Vivek primarily looks after technology and research at Artifacia, I continue to spend 20% of my time with the research team to be able to do the right kind of mapping between our technology and product, and between our product and business. Moreover, I like speaking to them and continuing my learning of an area I believe will impact every industry on a scale similar to the Internet, and the Personal Computer before it.

The following is a list of essential reading for anyone who really wants to learn the fundamentals of Deep Learning:

Notes:

1) This is going to be an evolving post and I’ll keep updating it. The latest update added the second and third papers.

2) I’ve also taken inputs from Vivek@Artifacia, who specialises in visual understanding, and Rajarshee@Artifacia, who specialises in language understanding, to compile this list of essential papers.

3) The title of this post is inspired by a popular book series by Zed Shaw. His book Learn Python the Hard Way remains one of the most recommended books for people starting with Python or programming in general.

4) If you’ve already read most of these papers and understood all of it, you should really consider applying to Artifacia!

Can Artificial Intelligence Help Reduce Poverty?

The number one goal among the United Nations’ Sustainable Development Goals for 2030 is to eliminate poverty. Today, around 1 billion people – roughly one seventh of the world’s population – live in extreme poverty, earning less than $1.90 per day. Though studies reveal that global poverty is declining, we are still a long way from our goal.

To eradicate poverty, we first need to know the distribution of poverty across the globe. The following diagram gives a rough estimate.

[Figure: tree map of the global distribution of extreme poverty]

But unfortunately, data availability is poor. Numerous countries have had no survey conducted over the last three decades, and others only a few. More importantly, many African countries have had only a single survey over the last decade, which makes the data inaccurate. Lastly, the surveys don’t yield perfect results. It is therefore evident that a new method has to be conceived to obtain more precise information.

A STUDY FROM SPACE

A team of social and computer scientists at Stanford University in California, led by Marshall Burke, aims to map poverty from space with the help of artificial intelligence (AI). They collected a large number of night-time satellite images of the planet, taken by high-quality cameras. By studying the glow of lights on the starry map using machine learning algorithms, they aim to distinguish poor regions from rich ones, as higher light intensity indicates better development. Unfortunately, it was hard to discern the moderately poor regions from the extremely poor, as the light intensity of the two wasn’t considerably different.


Therefore, they also had to study daytime images and extract key indicators such as distance to the closest urban marketplace, distance from agricultural fields, nearest water sources and other such subtle signs.

They fed the computer large training datasets of images of regions where income per capita was already known. The computer then used neural nets, a machine learning technique, to create links, discover relationships and find patterns. They then verified the accuracy of the algorithm on a validation set and, finally, evaluated it on the test set. They focused on five African countries: Nigeria, Malawi, Rwanda, Tanzania and Uganda. Evidently, this technique doesn’t eradicate poverty by itself, but it provides reliable data to governments and NGOs.
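
For readers curious about that train/validate/test discipline, the sketch below shows it in miniature. This is not the Stanford team’s code: the image tiles and income labels are random placeholders, and the tiny convolutional network stands in for the far larger, pre-trained models used in the actual study.

```python
# A minimal sketch of the workflow described above: train a small CNN to
# predict income per capita from satellite image tiles, tune on a
# validation set, then report error on a held-out test set.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

torch.manual_seed(0)

# Placeholder data: 600 RGB tiles of 64x64 pixels, each with a synthetic
# income label (e.g. dollars/day). Real data would be survey-georeferenced.
tiles = torch.rand(600, 3, 64, 64)
income = torch.rand(600, 1) * 5.0

dataset = TensorDataset(tiles, income)
train_set, val_set, test_set = random_split(dataset, [400, 100, 100])

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 1),  # regression head: predicted income per capita
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def evaluate(split):
    """Mean squared error over one of the data splits."""
    model.eval()
    with torch.no_grad():
        total = sum(loss_fn(model(x), y).item() * len(x)
                    for x, y in DataLoader(split, batch_size=64))
    return total / len(split)

for epoch in range(5):
    model.train()
    for x, y in DataLoader(train_set, batch_size=64, shuffle=True):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    print(f"epoch {epoch}: val MSE = {evaluate(val_set):.3f}")

print(f"test MSE = {evaluate(test_set):.3f}")  # final held-out check
```

In the published work the network was reportedly pre-trained on everyday images and fine-tuned against night-light intensity before being used to predict survey-measured wealth; the held-out validation and test evaluation shown here is the same basic discipline.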

But in all honesty, setting aside the hype about AI, is the new data really going to make a considerable difference? The World Bank may not have reliable information, but it is unlikely that governments are completely unaware of the spread of poverty in their own countries. Though a notable effort, this solution is more sensational than practical. We therefore need another approach, one which strikes the problem directly at its heart.

AI IN EDUCATION

“Education is not a way to escape poverty – It is a way of fighting it.”
– Julius Nyerere, former President of the United Republic of Tanzania

The primary step to alleviating poverty is education. Simply put, if an underprivileged child can receive a decent education, the likelihood of that child breaking out of the cycle of poverty increases. Education therefore plays a crucial role in eradicating poverty.

The major difficulty with educating the poor is the lack of teachers. The reason is evident: helping the poor doesn’t pay, and so there is no incentive for educators.

Therefore, taking inspiration from the Hole in the Wall experiment conducted by Sugata Mitra in 1999, we could bypass the problem. That study revealed that children can educate themselves with only the aid of a basic computer, requiring nearly no adult guidance. This form of education, known as Minimally Invasive Education (MIE), has significantly benefited over 300,000 underprivileged children in India and Africa.


Today, MIE can be substantially enhanced with AI and be made the future of education in the slums. With smart virtual bots installed in the systems, the machines would not only provide information, but could also “teach” the children. No external human guidance would be required, just the systems with the virtual “teachers” installed. Let us briefly look into how this can be achieved.

A GLIMPSE OF THE REQUIREMENTS OF A VIRTUAL “TEACHER”

To interact with human beings, the machines would require advanced Natural Language Processing (NLP) components, such as an automatic speech recogniser (ASR), part-of-speech (POS) tagging, a syntactic/semantic parser, a natural language generator and a text-to-speech (TTS) engine. They should “understand” the language of the specific area, so that children who don’t know English can communicate effortlessly. This would require accurate translation, which again uses advanced NLP techniques.
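
To show how those components could fit together, here is a deliberately tiny, hypothetical sketch of a single tutoring turn. Every function is a stub with an invented name standing in for a real ASR, parsing, generation or TTS model; it illustrates only the shape of the pipeline, not any existing system:

```python
# Hypothetical pipeline for one turn of a virtual "teacher": ASR -> parsing
# -> answer generation -> TTS, with a memory of past turns. All stubs.
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    history: list = field(default_factory=list)  # past (heard, reply) turns

def speech_to_text(audio: bytes, language: str) -> str:
    """Stub ASR: a real system would run a speech recogniser for `language`."""
    return "what makes rain fall"

def understand(text: str) -> dict:
    """Stub parser: tag the utterance with a coarse intent and topic."""
    words = text.split()
    return {"intent": "ask_question", "topic": words[-1] if words else ""}

def generate_answer(meaning: dict, state: DialogueState) -> str:
    """Stub tutor: a real system would draw on lessons in the child's language."""
    prefix = "Good question again!" if state.history else "Good question!"
    return f"{prefix} Let's explore '{meaning['topic']}' together."

def text_to_speech(text: str, language: str) -> bytes:
    """Stub TTS: a real engine would synthesise audio in `language`."""
    return text.encode("utf-8")

def tutoring_turn(audio_in: bytes, language: str, state: DialogueState) -> bytes:
    heard = speech_to_text(audio_in, language)   # ASR
    meaning = understand(heard)                  # parsing / NLU
    reply = generate_answer(meaning, state)      # generation
    state.history.append((heard, reply))         # memory: later turns adapt
    return text_to_speech(reply, language)       # TTS

state = DialogueState()
audio = tutoring_turn(b"<microphone audio>", language="sw", state=state)
print(audio.decode("utf-8"))
```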

Evidently, to create a solid dialogue system we need a huge database, so a centralised server linking all the systems globally might be a solution. But this would require expensive infrastructure, which defeats the point of the endeavour.

Additionally, machine learning algorithms should let the systems learn from past mistakes, so that children find it easier to communicate in the future. Furthermore, the interface should be simple, clean and not cluttered with too many options. The courses could be designed specifically for rural children or could simply be MOOCs; this would depend on the governments and their educational policies.

To conclude, this merely outlines the task ahead; it gives a vision, a first step towards eradicating poverty. The work, the team required and the involvement needed are enormous. The funding required for the research is considerable, but if we, the entire world, can come together for this project, we could succeed. It should be open source, so that anybody can contribute: leading professors of AI and computer science, students, investors, educators, government officials, NGOs…anybody. So are you willing to join hands in this endeavour? Are you willing to help those in need? Does it bother you enough to make a change?

