The AI Times Monthly Newspaper

Curated Monthly News about Artificial Intelligence and Machine Learning

Artificial Intelligence – The New Superpower for Compliance

In our quest for business productivity and cost savings, compliance teams are all too often handed increasing demands to keep the organization out of trouble, yet are not allocated additional budget to achieve this goal.

It typically takes a high-profile violation or an industry-wide regulation like the FCPA to kick-start the implementation of risk management and compliance programs. And even then, there’s resistance due to concerns about the cost of these new programs and the potential for additional bureaucracy, slower decision-making and operational inefficiency. When given the choice, today’s business executive tends to err on the side of speed over process.

Today’s automated ERP systems are designed to streamline information delivery, give decision-makers sufficient information to make informed choices, and then provide electronic decision-making and audit tracking. The best of both worlds: speed and process accuracy, plus compliance.

http://corporatecomplianceinsights.com/artificial-intelligence-new-superpower-business-compliance/?utm_source=appzen

Celebrating the Women Advancing Machine Intelligence in Healthcare

As an all-female company, RE•WORK is a strong advocate for supporting female entrepreneurs and women working towards advancing technology and science.

Following a fantastic dinner in February, RE•WORK will hold the next Women in Machine Intelligence in Healthcare Dinner in London on 12 October to celebrate the women advancing this field.

The event is sponsored by IBM Watson Health and is open to anyone keen to support women progressing the use of machine intelligence in healthcare, medicine and diagnostics. Confirmed attendees include representatives of Bupa, Google DeepMind, King’s College London, Lloyds Online Doctor, Omixy, UCL and Playfair Capital.

Over the course of the dinner, guests will hear from leading female experts in machine intelligence and discuss the impact on healthcare of AI fields including machine learning, deep learning and robotics. Attendees will establish new connections and network with peers including founders, CTOs, data scientists and medical practitioners.
 

Speakers include:

  • Razia Ahamed, Google DeepMind
  • Alice Gao, Deep Genomics
  • Kathy McGroddy Goetz, IBM Watson Health

Only a limited number of tickets remain for this event! To book yours, please visit the event site here.

Check out RE•WORK’s Women in Tech & Science series & see their full events list here for summits and dinners taking place in London, Amsterdam, Boston, San Francisco, New York, Hong Kong and Singapore. 

CorTeX Assembly Language

This is an attempt to change the way a neural network is executed and trained. Instead of accelerating parts of the NN execution, the whole NN problem is converted into a new assembly language that can perform evaluation and back-propagation of any NN. Multiple passes of convolution steps can be combined into a fully parallel pipeline.
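The idea of lowering a network into a linear instruction stream can be illustrated with a toy interpreter. This is a minimal sketch only: the opcode names, register model, and program layout here are invented for illustration and are not the actual CorTeX instruction set.

```python
# Toy "NN as instruction stream" interpreter. Opcodes and register names
# are illustrative inventions, not the real CorTeX assembly language.

def matvec(w, x):
    """Multiply matrix w (list of rows) by vector x."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def run(program, registers):
    """Execute a linear instruction list; each op reads/writes named registers."""
    for op, dst, *src in program:
        if op == "MATVEC":   # dst = W @ x
            registers[dst] = matvec(registers[src[0]], registers[src[1]])
        elif op == "ADD":    # dst = a + b (elementwise)
            registers[dst] = [a + b for a, b in zip(registers[src[0]], registers[src[1]])]
        elif op == "RELU":   # dst = max(0, x) (elementwise)
            registers[dst] = [max(0.0, v) for v in registers[src[0]]]
    return registers

# A one-layer network "compiled" to three instructions.
program = [
    ("MATVEC", "h", "W", "x"),
    ("ADD",    "h", "h", "b"),
    ("RELU",   "y", "h"),
]
regs = {"W": [[1.0, -1.0], [0.5, 0.5]], "b": [0.0, -1.0], "x": [2.0, 1.0]}
out = run(program, regs)["y"]
print(out)  # [1.0, 0.5]
```

Because the whole forward pass is just a flat list of data-independent instructions, a scheduler could in principle fuse or parallelise consecutive steps, which is the pipelining idea the article describes.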

http://www.gizmosdk.com/archives/CorTeX/execution_example.pdf

Celaton, today announced the release of Personalised Response

Milton Keynes, UK, 6 September 2016 – AI software company Celaton today announced the release of Personalised Response, the latest artificially intelligent module for its inSTREAM™ platform.

The Institute of Customer Service recently reported that 46% of customers expect a response within 24 hours if they contact an organisation via email, with over two fifths saying the same for website contact and one third for social media enquiries. With customers demanding faster responses and resolution times, and the bar constantly being raised on service levels, it is now harder than ever for organisations to distinguish themselves as leaders in customer service.

Personalised Response significantly extends the capabilities of inSTREAM by enabling it to present operators with the most appropriate response to send. The proposed response is based on inSTREAM’s understanding of the meaning and intent of each incoming correspondence and the enrichment of data from other data sources. Responses chosen by inSTREAM are presented to operators for validation or for them to make the final decision before submission to customers.

inSTREAM learns through the natural consequence of processing and gains confidence with experience. When inSTREAM is not sure of the most appropriate response, it suggests all possible options for an operator to choose from, and subsequently learns from the actions and decisions they make; this learning helps to continually optimise the process.
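The confidence-gated suggestion loop described above can be sketched in a few lines. This is a hypothetical illustration of the general pattern (suggest when confident, otherwise defer to the operator and learn from their choice), not Celaton's actual implementation; the class and threshold are invented for the example.

```python
# Hypothetical sketch of confidence-gated response suggestion that learns
# from operator decisions. Not Celaton's actual inSTREAM implementation.
from collections import defaultdict

class ResponseSuggester:
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        # counts[intent][response] -> times an operator approved that pairing
        self.counts = defaultdict(lambda: defaultdict(int))

    def suggest(self, intent):
        """Return (best_response, confident?) for an incoming intent."""
        options = self.counts[intent]
        total = sum(options.values())
        if not total:
            return None, False            # nothing learned yet: operator decides
        best, n = max(options.items(), key=lambda kv: kv[1])
        return best, (n / total) >= self.threshold

    def learn(self, intent, chosen_response):
        """Record the operator's validated choice so confidence grows."""
        self.counts[intent][chosen_response] += 1

s = ResponseSuggester()
for _ in range(4):
    s.learn("refund_request", "refund_template_a")
s.learn("refund_request", "refund_template_b")
print(s.suggest("refund_request"))  # ('refund_template_a', True)
```

Each operator validation feeds back into the counts, so the system's confidence for a given intent grows exactly "as a result of experience", as the release puts it.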

It accelerates resolution times, ensuring responses are consistent and appropriate, enabling organisations to deliver better customer service faster with fewer people.

Personalised Response is especially important for large organisations that deal with consumers.

Spotlight – Nikolas Badminton – Futurist

Nikolas Badminton, Futurist

Nikolas Badminton is a world-respected futurist speaker who delivers keynote speeches about the future of work, the sharing economy, and how the world is evolving. Nikolas is based in Vancouver, BC, and speaks across Canada, the UK, Europe, and Asia.

His Artificial Intelligence Keynote can be viewed here https://www.youtube.com/watch?v=x7IbrYFX4Fs and the presentation can be seen here – http://www.slideshare.net/nikolasbadminton/the-future-of-society-the-artificial-intelligence-revolution

We look forward to sharing more from Nikolas in the future!

His website can be found here http://nikolasbadminton.com

Showcase 2016 – Our first conference on the 15th and 16th September

All,

The Showcase 2016 Event is on the 15th and 16th September and still has a few tickets available for purchase.

The 2 day event is hosted by FutureWorld.tech at The Old Truman Brewery, East London’s revolutionary arts and media quarter.

For more details visit Showcase.ai and FutureWorld.tech

Our confirmed speakers include:

  • Peter Morgan, DSP
  • Patrick Levy-Rosenthal, Emoshape
  • Clara Durodié, Independent Director
  • Melanie Warrick, Skymind
  • Parit Patel, IPsoft
  • Andy Pardoe, Informed.AI
  • Laure Andrieux, Aiseedo
  • Alexander Hill, Senesce
  • Dale Lane, IBM Watson

The 2nd AI Achievement Awards – Opens for Voting on Thursday 1st September

All,

It is with great pleasure that we formally announce that the 2nd Annual Global AI Achievement Awards will be open for nomination votes this Thursday 1st Sept.

This is our second year of holding these awards. Last year we had ten award categories; you can see the previous year’s winners on the awards website. This year we have doubled the number of categories to twenty; the full list can be found in the categories section of the website.

We encourage you all to support our initiative and vote for the companies and individuals you feel are contributing the most to the field of artificial intelligence. Support the community and celebrate the amazing work being done by thousands of people and hundreds of companies across the globe.

It’s a public vote open to everyone. Companies that wish to be nominated should inform their customers about the awards and suggest the category in which they would like to be nominated.

There is still time to be a corporate sponsor of the awards. See the sponsors page for more details.

Don’t forget to visit the site http://Awards.AI on or after the 1st Sept to cast your vote.

GeckoSystems, an AI Robotics Co., Signs U.S. Joint Venture Agreement

CONYERS, Ga., August 18, 2016 — GeckoSystems Intl. Corp. (Pink Sheets: GOSY | http://www.GeckoSystems.com) announced today that, after executing NDA, MOU, and LOI agreements with this NYC AI firm, the two companies have now signed a joint venture agreement. For over nineteen years, GeckoSystems has dedicated itself to the development of “AI Mobile Robot Solutions for Safety, Security and Service(tm).”

“We are very pleased to announce our first US JV. We will jointly coordinate our advanced Artificial Intelligence (AI) R&D to achieve higher levels of human safety and sentient verbal interaction for the professional healthcare markets.  We expect not only near term licensing revenues, but also an initial AI+ CareBot(tm) sale. While we have several JV’s in Japan continuing to mature, it is gratifying to have gained demonstrable traction in the US markets.

“One of our primary software and hardware architecture design goals has been for our MSR platforms to be extensible, such that the service life of the primary cost drivers, the mechanicals, would be five or more years (or until actually worn out from use).  Consequently, our hardware architecture is x86 CPU centric, and all our AI savants communicate over a LAN using TCP/IP protocols with relatively simple messaging. This means all systems on the Company’s MSR’s are truly “Internet of Things” (IoT) devices, each having a unique IP address for easy and reliable data communications. Our high level of pre-existing, linchpin, three-legged-milk-stool basic functionality makes our AI+ CareBot easy to upgrade, not only by GeckoSystems but also by third-party developers, such as this advanced NYC AI firm.

“This is the strategic hardware development path that IBM used in setting PC standards that have enabled cost effective use of complex, but upgradeable for a long service life, personal computers for over thirty years now,” observed Martin Spencer, CEO, GeckoSystems Intl. Corp.
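The LAN-based savant architecture described above, independent modules exchanging simple messages over TCP/IP, can be sketched minimally. This is an illustrative example only: the message format, field names, and "obstacle" report are invented for the sketch and are not GeckoSystems' actual protocol.

```python
# Illustrative sketch: two "savant" processes exchanging one simple JSON
# message over TCP/IP on a LAN. Message format and fields are invented.
import json
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def savant_server():
    """Accept one connection, read a JSON message, echo an acknowledgement."""
    conn, _ = srv.accept()
    msg = json.loads(conn.recv(4096).decode())
    conn.sendall(json.dumps({"ack": msg["type"]}).encode())
    conn.close()

t = threading.Thread(target=savant_server)
t.start()

# A second savant (e.g., a navigation module) reports a hypothetical obstacle.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(json.dumps({"type": "obstacle", "bearing_deg": 12.5}).encode())
reply = json.loads(cli.recv(4096).decode())
cli.close()
t.join()
srv.close()
print(reply)  # {'ack': 'obstacle'}
```

Keeping each module behind its own socket address is what lets third parties swap in or upgrade individual savants without touching the rest of the system, the extensibility argument the quote is making.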

NYC has national prominence in the AI development community.  For example, NYC has twenty listed here: http://nycstartups.net/startups/artificial_intelligence  Atlanta, GA, reports only one AI robotics startup, Monsieur, a leader in the automated bartending space. http://monsieur.co/company/

 

Artificial intelligence technologies and applications span:

Big Data, Predictive Analytics, Statistics, Mobile Robots, Social Robots, Companion Robots, Service Robotics, Drones, Self-driving Cars, Driverless Cars, Driver Assisted Cars, Internet of Things (IoT), Smart Homes, UGV’s, UAV’s, USV’s, AGV’s, Forward and/or Backward Chaining Expert Systems, Savants, AI Assistants, Sensor Fusion, Point Clouds, Worst Case Execution Time (WCET is reaction time.) Machine Learning, Chatbots, Cobots, Natural Language Processing (NLP), Subsumption, Embodiment, Emergent, Situational Awareness, Level of Autonomy, etc.

 

An internationally renowned market research firm, Research and Markets, has again named GeckoSystems as one of the key market players in the service robotics industry. The report covers the present scenario and the growth prospects of the Global Mobile Robotics market for the period 2015-2019. Research and Markets stated in their report, that they: “…forecast the Global Mobile Robotics market to grow at a CAGR of nearly sixteen percent over the period 2015-2019.”

 

The report has been prepared based on an in-depth market analysis with inputs from industry experts and covers the Americas, the APAC, and the EMEA regions.  The report is entitled, Global Professional Service Robotics Market 2015-2019.

 

Research and Markets lists the following as the key vendors operating in this market:

Companies mentioned:

AB Electrolux

Blue River Technology

Curexo Technology

Elbit Systems

GeckoSystems

Health Robotics

MAKO Surgical Corp.
“GeckoSystems has been recognized by Research and Markets for several years now, and it is the most comprehensive report of the global service robotics industry to my knowledge. I am pleased that their experienced market researchers are sufficiently astute to accept that small service robot firms, such as GeckoSystems, can nonetheless develop advanced technologies and products as well as, or better than, much larger, multi-billion dollar corporations such as AB Electrolux,” reflected Martin Spencer, CEO, GeckoSystems Intl. Corp.

 

Research and Markets also discusses:

Professional service robots tend to work closely with humans and can be used in a wide range of applications, from surveillance to underwater inspection. They provide convenience and safety, among other benefits, thus creating demand worldwide. Technavio expects the global professional service robotics market to grow at a remarkable rate of nearly 16% during the forecast period. Today, the adoption of robots is on the rise globally as they tend to minimize manual labor and reduce the chances of human error.

 

In the last decade, numerous technological advancements in robotics have made the adoption of robots easy, viable, and beneficial. For instance, there have been many innovations and improvements in the Internet of Things, automation, M2M communications, and the cloud. Modern robot manufacturers are trying to take advantage of these technologies as a communication medium between robots and humans, increasing convenience and enabling the seamless transfer of real-time information within the business.

 

Segmentation of the professional service robotics market by application:

– Defense, rescue, safety, and aerospace application

– Field application

– Logistics application

– Healthcare application

– Others

 

The defense application segment was the largest contributor to the growth of the global professional service robotics market with more than 44% share of the overall shipments in 2014. The demand for UGV and UAV for surveillance and safeguarding lives of personnel from ammunition, landmines, and bombs is expected to drive the demand for robotics.

 

“It is an honor that they recognize the value of the over 100 man-years we have invested in our proprietary AI robotics Intellectual Properties and my full time work for nearly 20 years now.   Our suite of AI mobile robot solutions is well tested, portable, and extensible.  It is a reality that we could partner with any other company on that list and provide them with high-level autonomy for collision free navigation at the lowest possible cost to manufacture.  There is also an opportunity for other cost reductions and enhancement of functionality with other components of our AI solutions,” stated Spencer.

 

In order for any companion (social) robot to be utilitarian for family care, it must be like a “three-legged milk stool” for safe, routine usage.  For any mobile robot to move in close proximity to humans, it must have:

(1) Human quick reflex time to avoid moving and/or unmapped obstacles, (GeckoNav(tm): http://tinyurl.com/le8a39r) (See the importance of WCET discussion below.)

(2) Verbal interaction (GeckoChat(tm): http://tinyurl.com/nnupuw7) for easy user dialogues and/or monologues with a sense of date and time (GeckoScheduler(tm): http://tinyurl.com/kojzgbx), and

(3) The ability to automatically find and follow designated parties (GeckoTrak(tm): http://tinyurl.com/mton9uh), such that verbal interaction can occur routinely, and video and audio monitoring and/or teleconferences with the care receiver occur readily and are uninterrupted.

 

In the US, GeckoSystems projects the available market size in dollars for cost effective, utilitarian, multitasking eldercare social mobile robots in 2017 to be $74.0B, in 2018 to be $77B, in 2019 to be $80B, in 2020 to be $83.3B, and in 2021 to be $86.6B.  With market penetrations of 0.03% in 2017, 0.06% in 2018, 0.22% in 2019, 0.53% in 2020, and 0.81% in 2021, we anticipate CareBot social robot sales from the consumer market alone at levels of $22.0M, $44.0M, $176M, $440.2M, and $704.3M, respectively.
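The projected sales figures above are simply the market size multiplied by the penetration rate for each year. A quick check reproduces them approximately; the small differences from the quoted figures ($22.0M, $44.0M, etc.) suggest the stated penetration percentages are rounded.

```python
# Sanity check: projected sales = market size x penetration rate, per year,
# using the figures quoted in the press release.
markets = {2017: 74.0e9, 2018: 77e9, 2019: 80e9, 2020: 83.3e9, 2021: 86.6e9}
penetration = {2017: 0.0003, 2018: 0.0006, 2019: 0.0022, 2020: 0.0053, 2021: 0.0081}

for year in markets:
    sales = markets[year] * penetration[year]
    print(f"{year}: ${sales / 1e6:.1f}M")
# 2017: $22.2M
# 2018: $46.2M
# 2019: $176.0M
# 2020: $441.5M
# 2021: $701.5M
```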

 

“This first US JV will continue to evolve, such that GeckoSystems enjoys revenues that increase shareholder value. After many years of patience by our current 1300+ stockholders, they can continue to be completely confident that this new, potentially multi-million-dollar JV licensing agreement further substantiates and delineates the reality that GeckoSystems will continue to be rewarded with additional licensing revenues furthering shareholder value,” concluded Spencer.

 

About GeckoSystems:

GeckoSystems has been developing innovative robotic technologies for nineteen years.  It is CEO Martin Spencer’s dream to make people’s lives better through AI robotic technologies.

 

The safety requirement for human quick WCET reflex time in all forms of mobile robots:

In order to understand the importance of GeckoSystems’ breakthrough, proprietary, and exclusive AI software and why another Japanese robotics company desires a business relationship with GeckoSystems, it’s key to acknowledge some basic realities for all forms of automatic, non-human intervention, vehicular locomotion and steering.

  1. Laws of Physics such as Conservation of Energy, inertia, and momentum, limit a vehicle’s ability to stop or maneuver. If, for instance, a car’s braking system design cannot generate enough friction for a given road surface to stop the car in 100 feet after brake application, that’s a real limitation. If a car cannot corner at more than .9g due to a combination of suspension design and road conditions, that, also, is reality. Regardless how talented a NASCAR driver may be, if his race car is inadequate, he’s not going to win races.
  2. At the same time, if a car driver (or pilot) is tired, drugged, distracted, etc. their reflex time becomes too slow to react in a timely fashion to unexpected direction changes of moving obstacles, or the sudden appearance of fixed obstacles. Many car “accidents” result from drunk driving due to reflex time and/or judgment impairment. Average reflex time takes between 150 & 300ms. http://tinyurl.com/nsrx75n
  3. In robotic systems, “human reflex time” is known as Worst Case Execution Time (WCET). Historically, in computer systems engineering, the WCET of a computational task is the maximum length of time the task could take to execute on a specific hardware platform. In big data terms, this is the time to load the data, process it, and output useful distillations, summaries, or common sense insights. GeckoSystems’ basic AI self-guidance navigation system processes 147 megabytes of data per second using low cost, Commercial Off The Shelf (COTS) Single Board Computers (SBC’s).
  4. Highly trained and skilled jet fighter pilots have a reflex time (WCET) of less than 120ms. Their “eye to hand” coordination time is a fundamental criterion for them to be successful jet fighter pilots. The same holds true for all high performance forms of transportation that are sufficiently pushing the limits of the Laws of Physics to require the quickest possible reaction time for safe human control and/or usage.
  5. GeckoSystems’ WCET is less than 100ms, as quick as or quicker than most gifted jet fighter pilots, NASCAR race car drivers, etc., while using low cost COTS SBC’s.
  6. In mobile robotic guidance systems, WCET has three fundamental components:
    1. Sufficient Field of View (FOV) with appropriate granularity, accuracy, and update rate.
    2. Rapid processing of that contextual data such that common sense responses are generated.
    3. Timely physical execution of those common sense responses.
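One simple way to reason about a WCET budget like the sub-100 ms figure above is to time many iterations of the sense-to-decision step and keep the maximum. This is an illustrative sketch with a placeholder workload, not GeckoSystems' navigation code; note that an observed maximum is only a lower bound on the true WCET, which strictly requires static analysis to guarantee.

```python
# Illustrative sketch: empirically estimating worst-case execution time
# (WCET) of a sense -> decide cycle by timing trials and keeping the max.
import time

def process_frame(data):
    """Placeholder for one sensing-to-decision step (e.g., scanning a depth map)."""
    return max(data)

def measure_wcet(task, data, trials=1000):
    """Run the task repeatedly and return the longest observed execution time."""
    worst = 0.0
    for _ in range(trials):
        start = time.perf_counter()
        task(data)
        worst = max(worst, time.perf_counter() - start)
    return worst

frame = list(range(10_000))
wcet = measure_wcet(process_frame, frame)
print(f"observed WCET: {wcet * 1000:.2f} ms, within 100 ms budget: {wcet < 0.100}")
```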

 

——————————————————————————————-

An earlier third party verification of GeckoSystems’ AI centric, human quick sense and avoidance of moving and/or unmapped obstacles by one of their mobile robots can be viewed here: http://t.co/NqqM22TbKN

An overview of GeckoSystems’ progress containing over 700 pictures and 120 videos can be found at http://www.geckosystems.com/timeline/.

These videos illustrate the development of the technology that makes GeckoSystems a world leader in Service Robotics development. Early CareBot prototypes were slower and frequently pivoted in order to avoid a static or dynamic obstacle; later prototypes avoided obstacles without pivoting.   Current CareBots avoid obstacles with a graceful “bicycle smooth” motion.   The latest videos also depict the CareBot’s ability to automatically go faster or slower depending on the amount of clutter (number of obstacles) within its field of view.   This is especially important when avoiding moving obstacles in “loose crowd” situations like a mall or an exhibit area.

In addition to the timeline videos, GeckoSystems has numerous YouTube videos. The most popular of which are the ones showing room-to-room automatic self-navigation of the CareBot through narrow doorways and a hallway of an old 1954 home.  You will see the CareBot slow down when going through the doorways because of their narrow width and then speed up as it goes across the relatively open kitchen area.  There are also videos of the SafePath(tm) wheelchair, which is a migration of the CareBot AI centric navigation system to a standard power wheelchair, and recently developed cost effective depth cameras were used in this recent configuration.  SafePath(tm) navigation is now available to OEM licensees and these videos show the versatility of GeckoSystems’ fully autonomous navigation solution.
GeckoSystems, Star Wars Technology

 

The company has successfully completed an Alpha trial of its CareBot personal assistance robot for the elderly.  It was tested in a home care setting and received enthusiastic support from both caregivers and care receivers.   The company believes that the CareBot will increase the safety and well-being of its elderly charges while decreasing stress on the caregiver and the family.

GeckoSystems is preparing for Beta testing of the CareBot prior to full-scale production and marketing.   CareBot has recently incorporated Microsoft Kinect depth cameras that result in a significant cost reduction.

 

Kinect Enabled Personal Robot video:

http://www.youtube.com/watch?v=kn93BS44Das

Above, the CareBot demonstrates static and dynamic obstacle avoidance as it backs in and out of a narrow and cluttered alley.  There is no joystick control or programmed path; movements are smoother than those achieved using a joystick control.  GeckoNav creates three low levels of obstacle avoidance: reactive, proactive, and contemplative.  Subsumptive AI behavior within GeckoNav enables the CareBot to reach its target destination after engaging in obstacle avoidance.

 

More information on the CareBot personal assistance robot:

http://www.geckosystems.com/markets/CareBot.php

GeckoSystems stock is quoted in the U.S. over-the-counter (OTC) markets under the ticker symbol GOSY.   http://www.otcmarkets.com/stock/GOSY/quote

 

GeckoSystems uses http://www.LinkedIn.com as its primary social media site for investor updates. Here is Spencer’s LinkedIn.com profile:

http://www.linkedin.com/pub/martin-spencer/11/b2a/580

 

Telephone:

Main number: +1 678-413-9236

Fax: +1 678-413-9247

Website:  http://www.geckosystems.com/

Source: GeckoSystems Intl. Corp.

 

Safe Harbor:

Statements regarding financial matters in this press release other than historical facts are “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, Section 21E of the Securities Exchange Act of 1934, and as that term is defined in the Private Securities Litigation Reform Act of 1995. The Company intends that such statements about the Company’s future expectations, including future revenues and earnings, technology efficacy and all other forward-looking statements be subject to the Safe Harbors created thereby. The Company is a development stage firm that continues to be dependent upon outside capital to sustain its existence. Since these statements (future operational results and sales) involve risks and uncertainties and are subject to change at any time, the Company’s actual results may differ materially from expected results.

 

 

 

 

GeckoSystems, an AI Robotics Co., Gains Traction: LOI with NYC AI Firm

CONYERS, Ga., August 11, 2016 — GeckoSystems Intl. Corp. (Pink Sheets: GOSY | http://www.GeckoSystems.com) announced today that, after over two years of negotiations with this advanced AI developer in New York City, additional substantive progress has been made: a Letter of Intent (LOI) has been signed to form their first US joint venture. For over nineteen years, GeckoSystems has dedicated itself to the development of “AI Mobile Robot Solutions for Safety, Security and Service(tm).”

 

“Less than two weeks ago, I met with this Artificial General Intelligence (AGI) firm’s CEO. Our two days of meetings were very cordial, frank and productive. To that end, we immediately effectuated our Safety Clause NDA such that our discussions became of sufficient substance for us to sign our second agreement, an MOU, clearly revealing that both parties believe significant AI synergies appropriate for multiple markets would be garnered by each firm.  Now we have the additional clarity with this third agreement, an LOI, as we continue to gain traction in our pursuit of this multi-million-dollar licensing revenue opportunity,” stated Martin Spencer, CEO, GeckoSystems Intl. Corp.

 

NYC has national prominence in the AI development community.  For example, NYC has twenty listed here: http://nycstartups.net/startups/artificial_intelligence  Atlanta, GA, reports only one AI robotics startup, Monsieur, a leader in the automated bartending space. http://monsieur.co/company/

 

Artificial intelligence technologies and applications span Big Data, Predictive Analytics, Statistics, Mobile Robots, Service Robotics, Drones, Self-driving Cars, Driverless Cars, Driver Assisted Cars, Internet of Things (IoT), Smart Homes, UGV’s, UAV’s, USV’s, AGV’s, Forward and/or Backward Chaining Expert Systems, Savants, AI Assistants, Sensor Fusion, Subsumption, etc.

 

Recently the Company revealed a new AI mobile robot concept for better, lower cost public safety. All of those affiliated with the Company, like most Americans, share the abject horror we are all still trying to process regarding the recent indoor (and outdoor) mass shootings and the murder of dozens of innocent victims.

 

To better manage those 21st century mass shootings, the Company is offering to prototype and deploy the GeckoNED(tm), a new type of mobile security robot that is a Non-violent Enforcement Device with a high level of independent mobile autonomy, sensor rich for enhanced situational awareness, and ease of complete control under tele-operation by designated, vetted public safety personnel.

 

The following was written by Spencer shortly after the Sandy Hook mayhem, but not updated since the Pulse nightclub carnage.

 

Safety for our children is a moral imperative for all enlightened civilizations. The present proliferation of lethal weaponry in the form of readily obtainable semi- and full automatic pistols and rifles has brought increased child safety to nearly blinding visibility that requires new thinking and solutions for this long overdue, poorly addressed need in our culture.

 

Mobile robots could be the most proximate and final deterrent to those who would harm our children in public schools and other venues.  GeckoSystems has named the mobile robot concept that would provide yet another barrier between our children and those immoral individuals intent on doing them significant harm the GeckoNED(tm).  “NED” stands for Non-violent (or non-lethal) Enforcement Device.

 

Fundamentally, the GeckoNED (or “NED”) is a new type of mobile sentry robot that would deter, detect, and contain those who would violently harm our children in their schools. The NED would be a new type of school mascot that could be customized by the children, teachers and staff to be a daily part of their school-time lives.  The NED would be able to automatically patrol all wheelchair-accessible areas in any school without human oversight or intervention, using GeckoSystems’ proven SafePath(tm) mobile robot AI navigation software.

 

What does it do? It deters, detects, and contains to provide better protection:

  1. Marquee deterrent video and audio surveillance systems with fully autonomous self-patrolling in loose crowds, etc.
  2. Quick detection using AI augmented sensor fusion systems with fully autonomous auto-find/seek
  3. Deployable, multiple non-lethal containment systems under direct human control only
  4. Ready mobile detection, protection and containment systems that are fully tele-operable remotely

 

The NED would be a marquee deterrent due to its robust audio and video surveillance systems employing WiFi LAN data communications to connect to the school’s Internet access. The primary, high-resolution pan/tilt zoom video camera and professional quality microphones would be selected such that their features and benefits are appropriate for the expanses to be “sight and sound” monitored in the school.

 

Further enhancing the marquee deterrence, the NED would be available for direct human tele-operation almost instantly when direct human control was appropriate and timely due to a clear and present danger to the children having been identified with a high level of confidence by the NED’s AI enhanced sensor systems. In addition, cell phone and police band communication capability could be included using the voice synthesis ability of GeckoChat(tm).

 

The NED would:

Enable prompt intruder detection using multiple, different sensor systems (sight, sound and smell) AI fused to produce a one plus one equals three synergy.  This counter-intuitive metaphor describes a common benefit of GeckoSystems’ advanced artificial intelligence and sensor fusion competencies.

 

The NED’s AI’s would sensor fuse:

  1. Augmented Vision
    1. These would combine AI software with machine vision in light both visible to the human eye and invisible, such as infra-red (IR) from body heat, heat from fired weapons, etc.
  2. Extended Hearing
    1. Frequency response range widened beyond human hearing, into the ultrasonic using multiple microphones (omni directional and directional) and AI software
  3. Enhanced Smell
    1. Odor detection systems for appropriate gas detection, whether odorless to human sense of smell or not with intelligent inhalation system and AI software
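The "one plus one equals three" synergy of fusing these sensors can be illustrated with a standard probability argument: if each detector independently reports a confidence, the combined probability that at least one fires exceeds what any single sensor achieves. This is a hypothetical sketch of that idea, not GeckoSystems' actual sensor fusion algorithm; the confidence values are invented.

```python
# Hypothetical sketch of sensor fusion as independent-evidence combination:
# P(any detector fires) = 1 - product of miss probabilities.
# Not GeckoSystems' actual algorithm; confidence values are invented.

def fuse(confidences):
    """Combine independent detection probabilities into one fused confidence."""
    miss = 1.0
    for p in confidences:
        miss *= (1.0 - p)   # probability that this sensor misses
    return 1.0 - miss       # probability that at least one sensor detects

vision, hearing, smell = 0.60, 0.50, 0.30
fused = fuse([vision, hearing, smell])
print(round(fused, 3))  # 0.86
```

Here no single sensor exceeds 60% confidence, yet the fused estimate is 86%: the whole really is greater than the sum of its parts, which is the counter-intuitive metaphor the text describes.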

 

Singly, and in concert, the preceding systems would detect unwanted intruders by:

  1. Video surveillance enhanced by AI object recognition machine vision
    1. In both visible and invisible IR light spectrums
  2. Audio surveillance enhanced by AI expert systems
    1. Within and outside the human hearing range, detecting atypical sounds such as gun shots, breaking glass, doors being broken down, and stressed voices or screams from students and/or staff
  3. Odor and odorless gas surveillance
    1. Smoke, carbon monoxide, and natural gas
    2. Potentially, odors from handguns, long guns, and rifles; guns in lockers; and explosives or gun ammunition in lockers

 

The NED would pre-position ready mobile protection that is fully tele-operable remotely when atypical situations arise. It would immediately alert pre-designated parties for human intervention and direct human control of the NED and its various containment systems.

 

  1. The NED’s exterior size would be about thirty (30) inches in diameter and seventy-two (72) inches tall
    1. It cannot be readily disabled by small arms fire, thus affording cover for students and staff when the NED places itself between the intruder and all others.
    2. The NED’s shroud could be a bulletproof covering using a combination of Kevlar, ceramic armor, and/or aluminum plates sufficient for absorbing small arms fire.
  2. Immediate intervention after detection, resulting from a top speed of up to 20 mph in obstacle-free hallways
    1. SafePath technologies, with obstacle avoidance five to six times faster than a person, preclude the NED hitting anything, even when under teleoperation (direct human) control.
  3. Bull horns, sirens, a high power speaker system, and/or other sound projection systems capable of reaching the threshold of pain

 

The NED would have readily deployed, multiple non-lethal containment systems solely under the control of a designated, responsible party such as a “watch commander” at the local police station.

 

The non-violent and/or non-lethal containment capabilities would consist of:

  1. Targeted, high volume water spray
  2. Sleeping gas with directed dispersal
  3. Irritant sprays, such as pepper spray, tear gas, etc., with directed dispersal
  4. Acoustical stunners, flash-bangs, “stun bombs”
  5. Targeted net guns, “projectile nets”
  6. Targeted sticky foam, an extremely tacky material carried in compressed form with a propellant
  7. Targeted electrical stunners (Tasers)

 

In addition to providing children and staff in schools a higher level of safety, the school would now have a new kind of school mascot: a NED. The covering could be painted in school colors and designed like the school mascot, if desired. For example, Huber U. Hunt Elementary School’s mascot is a tiger; its NED could carry a tiger design, with a verbal UX customized to a pleasing dialect for the students. The NED’s battery recharging pads would be located at various desirable sentry positions throughout the school. Each school’s NED would literally be unique in its use and appearance.

 

Resulting from this LOI, the GeckoNED, for example, would benefit from more powerful, analytic, reliable, and comprehensive AI software that is even more situationally aware and autonomous, providing an even higher level of safety for our school children and other “soft targets,” such as movie theatres, night clubs, etc. This is completely congruent with GeckoSystems’ strategic focus.

 

The Company is also negotiating an investment from a Japanese trading company, KISCO Ltd., and those discussions continue under NDA.

 

“This LOI portends well for us and our shareholders. We are definitively on the path to consummate our first domestic joint venture licensing agreement. It comes as no surprise that a highly advanced NYC AI company understands the market potential of our suite of AI mobile robot solutions.

 

“We continue to have numerous ongoing joint venture and/or licensing discussions, not only in Japan, but also in the US, as revealed in this press release. I am also pleased that, as the Service Robotics industry begins to offer real products to eager markets, our capabilities are being recognized. Our 1300+ shareholders can continue to be confident that we expect to sign numerous multi-million-dollar licensing agreements, earning additional licensing revenues to further increase shareholder value and ROI,” concluded Spencer.

 

 

About GeckoSystems:

 

GeckoSystems has been developing innovative robotic technologies for nineteen years.  It is CEO Martin Spencer’s dream to make people’s lives better through AI robotic technologies.

 

The safety requirement for human-quick WCET reflex time in all forms of mobile robots:

 

In order to understand the importance of GeckoSystems’ breakthrough, proprietary, and exclusive AI software, and why another Japanese robotics company desires a business relationship with GeckoSystems, it’s key to acknowledge some basic realities for all forms of automatic vehicular locomotion and steering without human intervention.

 

  1. Laws of Physics, such as Conservation of Energy, inertia, and momentum, limit a vehicle’s ability to stop or maneuver. If, for instance, a car’s braking system cannot generate enough friction on a given road surface to stop the car within 100 feet of brake application, that is a real limitation. If a car cannot corner at more than 0.9g due to a combination of suspension design and road conditions, that, also, is reality. Regardless of how talented a NASCAR driver may be, if his race car is inadequate, he is not going to win races.

 

  1. At the same time, if a car driver (or pilot) is tired, drugged, or distracted, their reflex time becomes too slow to react in a timely fashion to unexpected direction changes of moving obstacles, or to the sudden appearance of fixed obstacles. Many car “accidents” result from drunk driving due to impaired reflex time and/or judgment. Average human reflex time is between 150 and 300ms. http://tinyurl.com/nsrx75n

 

  1. In robotic systems, “human reflex time” is known as Worst Case Execution Time (WCET). Historically, in computer systems engineering, the WCET of a computational task is the maximum length of time the task could take to execute on a specific hardware platform. In big data terms, this is the time to load the data to be processed, process it, and output useful distillations, summaries, or common sense insights. GeckoSystems’ basic AI self-guidance navigation system processes 147 megabytes of data per second using low cost, Commercial Off The Shelf (COTS) Single Board Computers (SBC’s).

 

  1. Highly trained and skilled jet fighter pilots have a reflex time (WCET) of less than 120ms. Their “eye to hand” coordination time is a fundamental criterion for them to be successful jet fighter pilots. The same holds true for all high performance forms of transportation that are sufficiently pushing the limits of the Laws of Physics to require the quickest possible reaction time for safe human control and/or usage.

 

  1. GeckoSystems’ WCET is less than 100ms, as quick as, or quicker than, that of most gifted jet fighter pilots, NASCAR race car drivers, etc., while using low cost COTS SBC’s.

 

  1. In mobile robotic guidance systems, WCET has three fundamental components:
    1. A sufficient Field of View (FOV) with appropriate granularity, accuracy, and update rate.
    2. Rapid processing of that contextual data such that common sense responses are generated.
    3. Timely physical execution of those common sense responses.
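The three WCET components above amount to a sense-plan-act loop that must finish within a fixed budget. The following is a minimal sketch, not GeckoSystems code; the stub stages and the 100 ms figure (taken from the text) simply show how a controller could verify each cycle meets its reflex-time bound.

```python
# A minimal sketch of checking one sense-plan-act cycle against the
# 100 ms reflex-time budget cited above. Timings are measured with a
# monotonic clock; the stage functions here are trivial stand-ins for
# FOV capture, contextual processing, and motor command execution.
import time

WCET_BUDGET_S = 0.100  # 100 ms

def control_cycle(sense, plan, act):
    """Run one sense-plan-act cycle; report whether it met the budget."""
    start = time.perf_counter()
    obstacles = sense()          # component 1: acquire the field of view
    command = plan(obstacles)    # component 2: derive a common sense response
    act(command)                 # component 3: execute it physically
    elapsed = time.perf_counter() - start
    return elapsed <= WCET_BUDGET_S

# Trivial stubs comfortably meet the budget.
met = control_cycle(lambda: [], lambda obs: "hold", lambda cmd: None)
print(met)  # True
```

A real controller would treat a missed budget as a fault, falling back to a safe stop rather than acting on stale data.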

 

——————————————————————————————-

 

In order for any companion robot to be utilitarian for family care, it must be a “three legged milk stool.”

(1) Human quick reflex time to avoid moving and/or unmapped obstacles, (GeckoNav(tm): http://tinyurl.com/le8a39r)

(2) Verbal interaction (GeckoChat(tm): http://tinyurl.com/nnupuw7) with a sense of date and time (GeckoScheduler(tm): http://tinyurl.com/kojzgbx), and

(3) Ability to automatically find and follow designated parties (GeckoTrak(tm): http://tinyurl.com/mton9uh) such that verbal interaction can occur routinely and video and audio monitoring of the care receiver is uninterrupted.

 

An earlier third party verification of GeckoSystems’ AI centric, human quick sense and avoidance of moving and/or unmapped obstacles by one of their mobile robots can be viewed here: http://t.co/NqqM22TbKN

 

An overview of GeckoSystems’ progress containing over 700 pictures and 120 videos can be found at http://www.geckosystems.com/timeline/.

 

These videos illustrate the development of the technology that makes GeckoSystems a world leader in Service Robotics development. Early CareBot prototypes were slower and frequently pivoted in order to avoid a static or dynamic obstacle; later prototypes avoided obstacles without pivoting.   Current CareBots avoid obstacles with a graceful “bicycle smooth” motion.   The latest videos also depict the CareBot’s ability to automatically go faster or slower depending on the amount of clutter (number of obstacles) within its field of view.   This is especially important when avoiding moving obstacles in “loose crowd” situations like a mall or an exhibit area.

 

In addition to the timeline videos, GeckoSystems has numerous YouTube videos. The most popular are the ones showing room-to-room automatic self-navigation of the CareBot through narrow doorways and a hallway of an old 1954 home. You will see the CareBot slow down when going through the doorways because of their narrow width and then speed up as it goes across the relatively open kitchen area. There are also videos of the SafePath(tm) wheelchair, which is a migration of the CareBot AI centric navigation system to a standard power wheelchair; recently developed, cost-effective depth cameras were used in this configuration. SafePath(tm) navigation is now available to OEM licensees, and these videos show the versatility of GeckoSystems’ fully autonomous navigation solution.
GeckoSystems, Star Wars Technology

http://www.youtube.com/watch?v=VYwQBUXXc3g

 

The company has successfully completed an Alpha trial of its CareBot personal assistance robot for the elderly.  It was tested in a home care setting and received enthusiastic support from both caregivers and care receivers.   The company believes that the CareBot will increase the safety and well-being of its elderly charges while decreasing stress on the caregiver and the family.

 

GeckoSystems is preparing for Beta testing of the CareBot prior to full-scale production and marketing.   CareBot has recently incorporated Microsoft Kinect depth cameras that result in a significant cost reduction.

 

Kinect Enabled Personal Robot video:

http://www.youtube.com/watch?v=kn93BS44Das

 

Above, the CareBot demonstrates static and dynamic obstacle avoidance as it backs in and out of a narrow and cluttered alley. There is no joystick control or programmed path; its movements are smoother than those achieved using joystick control. GeckoNav creates three low-level layers of obstacle avoidance: reactive, proactive, and contemplative. Subsumptive AI behavior within GeckoNav enables the CareBot to reach its target destination after engaging in obstacle avoidance.
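Subsumption-style arbitration of the kind described can be sketched in a few lines. This is a hedged illustration, not GeckoNav itself; the layer names follow the text, but the distance thresholds and commands are invented for the example.

```python
# Illustrative subsumption-style arbitration: lower layers handle
# immediate hazards and subsume higher-level goal seeking, mirroring
# the reactive/proactive/contemplative split described above.

def reactive(state):
    # Highest priority: an obstacle inside the stop distance halts motion.
    if state.get("nearest_obstacle_m", float("inf")) < 0.3:
        return "stop"
    return None

def proactive(state):
    # Steer around obstacles detected further out.
    if state.get("nearest_obstacle_m", float("inf")) < 1.5:
        return "swerve"
    return None

def contemplative(state):
    # Default behavior: continue toward the target destination.
    return "advance_to_goal"

LAYERS = [reactive, proactive, contemplative]

def decide(state):
    """The first layer that produces a command subsumes those below it."""
    for layer in LAYERS:
        command = layer(state)
        if command is not None:
            return command

print(decide({"nearest_obstacle_m": 0.2}))  # stop
print(decide({"nearest_obstacle_m": 5.0}))  # advance_to_goal
```

The design choice is that goal seeking never has to reason about emergencies: the reactive layer preempts it whenever safety demands.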

 

More information on the CareBot personal assistance robot:

http://www.geckosystems.com/markets/CareBot.php

 

GeckoSystems stock is quoted in the U.S. over-the-counter (OTC) markets under the ticker symbol GOSY.   http://www.otcmarkets.com/stock/GOSY/quote

 

GeckoSystems uses http://www.LinkedIn.com as its primary social media site for investor updates. Here is Spencer’s LinkedIn.com profile:

http://www.linkedin.com/pub/martin-spencer/11/b2a/580

 

 

Telephone:

Main number: +1 678-413-9236

Fax: +1 678-413-9247

Website:  http://www.geckosystems.com/

Source: GeckoSystems Intl. Corp.

 

Safe Harbor:

 

Statements regarding financial matters in this press release other than historical facts are “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, Section 21E of the Securities Exchange Act of 1934, and as that term is defined in the Private Securities Litigation Reform Act of 1995. The Company intends that such statements about the Company’s future expectations, including future revenues and earnings, technology efficacy and all other forward-looking statements be subject to the Safe Harbors created thereby. The Company is a development stage firm that continues to be dependent upon outside capital to sustain its existence. Since these statements (future operational results and sales) involve risks and uncertainties and are subject to change at any time, the Company’s actual results may differ materially from expected results.

 

 

 

 

SparkCognition Launches DeepArmor, First Ever Cognitive Antivirus Solution

Today at Black Hat 2016, SparkCognition is launching DeepArmor, an AI-powered anti-malware platform that promises to protect networks from many new and never-before-seen cyber security threats. This signifies a major industry advancement: baking advanced artificial intelligence techniques, including neural networks and Natural Language Processing, into antivirus (AV) software. As many as 78% of security professionals no longer trust traditional antivirus because existing solutions cannot keep up with rapidly evolving malware. SparkCognition makes products that identify, analyze, learn, anticipate and adjust to impending and real-time cyber security threats, and the company is exhibiting this week at Black Hat in booth 372.

“Cyber crime is growing beyond our control. According to the Singapore Minister of Home Affairs, Law Shanmugam, an estimated $2 trillion will be lost through cybercrime by 2019,” said Lucas McLane, director of Security Solutions for SparkCognition. “This is a recipe for disaster, and the major reason why both state and federal governments are making cyber security the top priority.”

To combat this growing problem and technological deficiency, SparkCognition has released the industry’s first cognitive antivirus solution, DeepArmor. DeepArmor takes a unique approach to endpoint protection by leveraging neural networks, advanced heuristics, and data science techniques to find and remove malicious files. Instead of looking at static signatures, or even exploding files in a sandbox, DeepArmor looks at the DNA of every file to identify if any components are suspicious or malicious in nature.

“We are using cognitive algorithms to constantly learn new malware behaviors and recognize how polymorphic files may try to attack in the future. This keeps every endpoint safe from malware that leverages domain-generated algorithms, obfuscation, packing, minor code tweaks, and many other modern tools,” explained SparkCognition senior product manager, Keith Moore. “This is a necessary defense against potentially devastating Zero-Day threats, which often confound and evade existing tools.”

DeepArmor is powered by cutting-edge technology that represents a quantum leap beyond techniques used for malware generation or propagation. Drawing on proprietary SparkCognition automated model-building algorithms, DeepArmor starts by looking at every un-scanned file on a user’s desktop or laptop. It breaks each file into thousands of different pieces for initial review. It then elevates initially identified features using an advanced feature derivation algorithm to develop a comprehensive, multi-dimensional view of behaviors, workflows and techniques. All of these individually analyzed components are then run through continuously evolving ensembles of neural networks to find patterns that may be malicious in nature. Because these neural networks are trained on a bevy of threat types, from worms to ransomware, many malevolent patterns are unearthed and called out immediately, even if the file that contains them doesn’t have a known-bad signature.
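The chunk-features-ensemble pipeline described above can be outlined in miniature. SparkCognition's actual models are proprietary; everything below, including the toy feature set and the single stand-in "model", is an assumption made for illustration only.

```python
# Illustrative only (not DeepArmor): split a file into pieces, derive
# features from each piece, and combine suspicion scores from an
# ensemble, flagging the file if any piece looks malicious enough.

def chunk_file(data: bytes, size: int = 256):
    """Break a file into fixed-size pieces for per-piece review."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def features(chunk: bytes):
    # Toy features: chunk length and byte diversity, stand-ins for the
    # "thousands of pieces" and derived behavioral features described.
    return [len(chunk), len(set(chunk))]

def ensemble_score(feats, models):
    # Each "model" is a callable returning a suspicion score in [0, 1];
    # the ensemble simply averages them here.
    scores = [m(feats) for m in models]
    return sum(scores) / len(scores)

def scan(data: bytes, models, threshold: float = 0.5):
    """Flag the file if its most suspicious chunk crosses the threshold."""
    chunk_scores = [ensemble_score(features(c), models)
                    for c in chunk_file(data)]
    return max(chunk_scores) >= threshold

# Toy model: flags chunks with very low byte diversity (e.g. padding).
toy_model = lambda f: 1.0 if f[1] < 4 else 0.0
print(scan(b"\x00" * 1024, [toy_model]))         # True
print(scan(bytes(range(256)) * 4, [toy_model]))  # False
```

The point of the shape, per-piece features fed to an ensemble, is that no single static signature is required to reach a verdict.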

“We have tailored DeepArmor to operate seamlessly behind the scenes on each endpoint, and to only identify real threats without calling out false positives,” added Moore. “This gives any user the freedom to do what they would like without the fear that their computer may become infected.”

DeepArmor is being made available to 1,000 members of SparkCognition’s beta program. To register for a chance to work with DeepArmor, please visit: http://sparkcognition.com/cognitive-approach-anti-malware/ 

About SparkCognition
SparkCognition, Inc., the world’s first Cognitive Security Analytics company, is based in Austin, Texas. The company is successfully building and deploying a cognitive, data-driven analytics platform for Clouds, Devices and the Internet of Things (IoT) industrial and security markets by applying patent-pending algorithms that deliver out of-band, symptom-sensitive analytics, insights and security.

SparkCognition was named the 2015 Hottest Start Up in Austin by SXSW and the Greater Austin Chamber of Commerce. The Company was the only US-based company to win Nokia’s 2015 Open Innovation Challenge. In 2015, it was named a Gartner Cool Vendor, and in 2016 SparkCognition garnered the Frost and Sullivan Technology Convergence Award. Recently, the Edison Awards recognized the company’s cyber security achievements. For more, visit http://sparkcognition.com/

Sophia Genetics unveils SOPHiA, the world’s most advanced collective artificial intelligence for Data-Driven Medicine

Sophia Genetics LOGO

Global Leader in Data Driven Medicine

Sophia Genetics unveils SOPHiA,
the world’s most advanced collective artificial intelligence for Data-Driven Medicine

Logo SOPHiA

  • Sophia Genetics unveils SOPHiA, the world’s most advanced artificial intelligence (AI) for Data-Driven Medicine
  • SOPHiA continuously learns from thousands of patients’ genomic profiles, and experts’ knowledge, to improve patients’ diagnostics and treatments
  • SOPHiA’s revolutionary technology will shortly be made available to the member hospitals of Sophia Genetics’ community with the upcoming new 4.0 version of the Sophia DDM® platform
  • Thanks to SOPHiA, the 170 hospitals already using Sophia DDM® in 28 countries will immediately benefit from better and faster diagnostics for hundreds of patients every day

LAUSANNE, Switzerland – 27 July 2016 – Sophia Genetics, the global leader in Data-Driven Medicine, today unveiled SOPHiA, the world’s most advanced collective artificial intelligence (AI) for Data-Driven Medicine. A state-of-the-art technology, SOPHiA continuously learns from thousands of patients’ genomic profiles and experts’ knowledge to improve patients’ diagnostics and treatments. The unmatched analytical powers of SOPHiA rely on the genomic information pooled on Sophia DDM®, the world’s largest clinical genomics community for molecular diagnostics, gathering to date 170 hospitals from 28 countries.

Today, Sophia Genetics also revealed results proving how SOPHiA managed to obtain a 98% match with expert clinicians’ variant pathogenicity predictions for BRCA genes mutations, which bear a potential risk of susceptibility to breast cancer. To obtain such quality result, the Swiss technological company’s AI considered data from thousands of patients’ genomic tests, building on the information pooled by hospitals in Sophia DDM®, learning how to predict genomic variants pathogenicity almost the same way a clinical expert does, and evolving as more data became available.

An initial 85% match was obtained with 10,000 patients analysed, improving to a 96% match with 20,000 tests and 98% against classifications by expert clinicians. The final results are based on the genomic profiles of 30,000 patients, containing 28,000 unique genomic variants. The variants considered by SOPHiA were identified and sorted by Sophia Genetics’ three proprietary and patented advanced technologies, PEPPER™, MUSKAT™ and MOKA™, ensuring the 99.9% specificity and sensitivity that oncologists, clinicians and medical specialists need to confidently report clinical genomics variants to their patients.
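The "match" percentages above can be read as concordance between the AI's pathogenicity calls and expert classifications. The sketch below is an assumed workflow, not Sophia Genetics code, showing only how such a figure is computed.

```python
# Illustrative sketch: concordance between AI pathogenicity calls and
# expert clinicians' classifications over the same set of variants.

def concordance(ai_calls, expert_calls):
    """Fraction of variants where the AI's call equals the expert's."""
    agree = sum(1 for a, e in zip(ai_calls, expert_calls) if a == e)
    return agree / len(expert_calls)

# Hypothetical calls for five variants.
ai     = ["pathogenic", "benign", "benign", "pathogenic", "benign"]
expert = ["pathogenic", "benign", "pathogenic", "pathogenic", "benign"]
print(concordance(ai, expert))  # 0.8
```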

SOPHiA’s revolutionary technology will soon be available to hospital and clinician members of the Sophia DDM® community. Moving forward, the secure and private pooling of more patients’ genomic profiles on Sophia DDM® will allow SOPHiA to deliver similar advances for 40 other genomic diseases across oncology, hereditary cancers, cardiology, metabolic disorders and paediatrics, and to usher in a true Data-Driven Medicine for patients.

Speaking about this breakthrough for breast cancer diagnostics and treatment, but also for Data-Driven Medicine as a whole, Jurgi Camblong, CEO and co-founder of Sophia Genetics declared “I am proud to announce that Sophia Genetics is the first company with such genomic variant classification power in molecular diagnostics. SOPHiA facilitates clinical interpretation and its artificial intelligence features give medical experts more time to focus on the study of complex cases. Moving forward, we will use this state-of-the-art technology to apply SOPHiA’s predictive power to the other applications supported on our clinical genomics platform Sophia DDM®. We are already participating in better and faster diagnosing 200 patients every day and we expect SOPHiA’s results presented today to dramatically increase this number by allowing clinicians to offer faster and better diagnostics, and patients to benefit from better treatments”.

About Sophia Genetics

Sophia Genetics, a global leader in Data-Driven Medicine, brings together expertise in genetics, bioinformatics, machine-learning and genomic privacy. Based in Switzerland, the company is known for its high medical standards and Swiss precision when it comes to accuracy and quality management. Sophia Genetics offers health professionals who perform clinical genetic testing bioinformatics analysis, quality assurance, and secure banking of patient DNA sequence data generated by NGS. Sophia Genetics does not hold personal information on patients, and the patient data the company does hold is anonymised. Sophia Genetics helps clinical laboratories to reduce the cost, overcome complexity and fulfil quality constraints related to the use of NGS in the clinic. For more information, visit sophiagenetics.com and follow @SophiaGenetics and @JurgiCamblong.

Media contact:

Tarik Dlala
Sophia Genetics
+41 78 822 29 28
tdlala@sophiagenetics.com

British Computer Society Machine Intelligence Competition 2016

******* MACHINE INTELLIGENCE COMPETITION 2016 *******
British Computer Society Machine Intelligence Competition 2016
http://bcs-sgai.org/micomp/

After a three-year gap it is with great pleasure that the British Computer Society Specialist Group on Artificial Intelligence (SGAI) relaunches the BCS Machine Intelligence Competition, as part of the Group’s one-day event Real AI 2016.

The eleventh BCS Machine Intelligence competition for live demonstrations of applications that show ‘progress towards machine intelligence’ will be held on Friday October 7th 2016 at the BCS London Office, First Floor, The Davidson Building, 5 Southampton Street, London. The winner will receive a cash prize plus a trophy.

The prize will be awarded on the basis of a 10-15 minute live demonstration (not a paper, not a technical description). The demonstration can be of either software (e.g. a question-answering system or a speech recognition system) or hardware (e.g. a mobile robot).

Full details of the competition and an online entry form are available on the website. There is no entry fee but competitors will be asked to meet their own costs. The closing date is Friday September 9th 2016. However early entry is strongly advised.

Attendance at the competition is free of charge for those attending Real AI 2016 (http://www.bcs-sgai.org/realai2016/). All those attending will be eligible to vote for the winning entry.

Organisers: Ms. Nadia Abouayoub (BCS SGAI) email: nadia_abou@hotmail.com and Prof. Max Bramer (Chair, SGAI) email: max.bramer@port.ac.uk

Pat Inc, launches private beta Natural Language Understanding (NLU) API.

Pat Inc, launches private beta Natural Language Understanding (NLU) API.

Led by John Ball, CTO and founder; Wibe Wagemans, CEO; and Professor Robert Van Valin, Jr., CSO, Pat integrated the RRG (Role and Reference Grammar) model with a patented neural network to advance a SaaS platform that facilitates understanding the meaning of text- and voice-based A.I. applications.

Just as there is more than one way to skin a cat, there are numerous theories for how to enable artificial intelligence to better understand language. Pat’s approach of combining role and reference grammar (RRG) with a neural network solves the open problems in NLU of Word Sense Disambiguation, context tracking and the otherwise typical, combinatorial explosion.

 

Watch Room – A Short Film where AI meets VR

Watch Room – A Short Film where AI meets VR

Watch Room is a short film about three scientists who believe they’re creating an AI within the safety of virtual reality, until their creation learns it’s at risk of being shut down. It’s a sci-fi thriller with roots in AI and VR; in a sense, think Ex Machina meets Primer.

Watch Room speaks to the promise and perils of AI, in a way that respects its audience and the complexities of the field. The film also explores the intersection of AI and VR, with an eye towards the future as we begin to interact with AI in virtual environments.

With Watch Room, our goal is to contribute to the budding conversation around the promise and perils of Artificial Intelligence research, in a way that respects the complexities involved. As such, we’ve done our best to create a story that touches on everything from simulation theory, to brain emulation, to Roko’s Basilisk… to that most hallowed of science fiction questions: “What makes us human?”

Another goal of ours is to illustrate the possibilities within the realm of virtual reality.

Of course, Watch Room‘s scientific roots drink deeply from rich dramatic soil. On one level, we’re just plain old excited to make a film that’s a joy to watch: smart and twisting in a way that respects the audience and keeps you guessing right up to the end. It’s the film’s narrative merits that will help it break into the mainstream, joining a growing roster of conscientious sci-fi that treats A.I. as seriously as it deserves.

In short, our mission is one of education as well as entertainment. We need your help in bringing this story and its urgent scientific and ethical message to the world. Many thanks for your consideration!

Soon, humans and AI will be indiscernible, especially in VR. Excited to see @WatchRoomMovie come to life. Donate! http://kck.st/29sh6R9

AI Business Landscape Infographics

Research on and capitalization of AI is happening around the planet. Yes, the US has the biggest share, with close to 500 companies working on the progression of AI. However, since the UK, Russia, Canada, Nigeria, Oman and several other countries are home to AI companies, this is by no means an exclusively American innovation.

To provide an overview of the current AI business landscape Appcessories have created this handy infographic.
~~

Bio:
Max Wegner is the Senior Editor at Appcessories.co.uk and a regular contributor with a keen eye for new inventions, always one step ahead when it comes to technology.

Building a Nervous System for OpenStack – Canonical and Skymind

Building a Nervous System for OpenStack

Big Software is a new class of software composed of so many moving pieces that humans, by themselves, cannot design, deploy or operate them. OpenStack, Hadoop and container-based architectures are all byproducts of Big Software. The only way to address this complexity is with automatic, AI-powered analytics.

canonical_demo

Summary

Canonical and Skymind are working together to help System Administrators operate large OpenStack instances. With the growth of cloud computing, the size of data has surpassed human ability to cope with it. In particular, overwhelming amounts of data make it difficult to identify patterns like the signals that precede server failure. Using deep learning, Skymind enables OpenStack to discover patterns automatically, predict server failure and take preventative actions.

Canonical Story

Canonical, the company behind Ubuntu, was founded in March 2004 and launched its Linux distribution six months later. Amazon created AWS, the first public cloud, shortly thereafter, and Canonical worked to make Ubuntu the easiest option for AWS and later public cloud computing platforms.

In 2010, OpenStack was created as the open-source alternative to the public cloud. Quickly, the complexity of deploying and running OpenStack at cloud scale showed that traditional configuration management, which focuses on instances (i.e. machines, servers) rather than running micro-service architectures, was not the right approach. This was the beginning of what Canonical named the Era of Big Software.

Big Software is a class of software made up of so many moving pieces that humans cannot design, deploy and operate alone. It is meant to evoke big data, defined initially as data that cannot be stored on a single machine. OpenStack, Hadoop and container-based architectures are all big software.

The Problem With Big Software

Day 1: Deployment

The first challenge of big software is to create a service model for successful deployment – to find a way to support immediate and successful installations of software on the first day. Canonical has created several tools to streamline this process. Those tools help map software to available resources:

  • MAAS: Metal as a Service which is a provisioning API for bare metal servers.
  • Landscape: Policy and governance tool for large fleets of OS instances.
  • Juju: Service modeling software to model and deploy big software.

Day 2: Operations

Big Software is hard to model and deploy and even harder to operate, which means day 2 operations also need a new approach.

Traditional monitoring and logging tools were designed for operators who only had to oversee data generated by fewer than 100 servers. They would find patterns manually, create SQL queries to catch harmful events, and receive notifications when they needed to act. When NoSQL became available, this improved only marginally, since queries would scale.

But that does not solve the core problem today. With Big Software, there is so much data that a human cannot cope with it, let alone find the patterns of behavior that precede server failure.

AI and the Future of Big Software

This is where AI comes in. Deep learning is the future of day 2 operations. Neural nets can learn from massive amounts of data to find needles in any haystack. Those nets are a tool that vastly extends the power of traditional system administrators, transforming their role.

Initially, neural nets will be a tool to triage logs, surface interesting patterns and predict hardware failure. As humans react to these events and label data (confirming AI predictions), the power to make certain operational decisions will be given to the AI directly: e.g. scale this service in/out, kill this node, move these containers, etc. Finally, as AI learns, self-healing data centers will become standard. AI will eventually be able to modify code to improve and remodel the infrastructure as it discovers better models adapted to the resources at hand.
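The first step named above, surfacing interesting patterns and predicting hardware failure, can be illustrated with a far simpler baseline than a neural net. This is a minimal sketch under stated assumptions (the metric stream and thresholds are invented), not the Canonical/Skymind pipeline.

```python
# A minimal sketch of the day-2 idea: flag a server whose latest metric
# reading drifts far from its recent baseline, as a toy stand-in for
# the deep-learning failure predictors described above.
from statistics import mean, stdev

def anomalous(readings, window=5, z_threshold=3.0):
    """True if the latest reading sits more than z_threshold standard
    deviations from the baseline formed by the readings before it."""
    baseline, latest = readings[:-1], readings[-1]
    if len(baseline) < window:
        return False  # not enough history to judge
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# Hypothetical disk-temperature stream ending in a sudden spike.
print(anomalous([41.0, 42.0, 41.5, 42.2, 41.8, 55.0]))  # True
print(anomalous([41.0, 42.0, 41.5, 42.2, 41.8, 41.9]))  # False
```

A learned model replaces the z-score with patterns mined across many servers and metrics, which is exactly where the labeled human reactions described above feed back in.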

The first generation deep-learning solution looks like this: HDFS + Mesos + Spark + DL4J + Spark Notebook. It is an enablement model, so that anyone can do deep learning, but using Skymind on OpenStack is just the beginning.

Ultimately, Canonical wants every piece of software to be scrutinized and learned in order to build the best architectures and operating tools.

Semantic Folding – From Natural Language Processing to Language Intelligence

Semantic Folding – From Natural Language Processing to Language Intelligence

fingerprints_646x220

Semantic Folding Theory is an attempt to develop an alternative computational approach for the processing of language data. Nearly all current methods of natural language understanding use, in some form or other, statistical models to assess the meaning of text and rely on the use of “brute force” over large quantities of sample data. In contrast, Semantic Folding uses a neuroscience-rooted mechanism of distributional semantics that solves both the “Representational Problem” and the “Semantic Grounding Problem”, both well known to AI researchers since the 1980s.

Francisco De Sousa Webber, co-founder of Cortical.io, has developed the theory of Semantic Folding, which is presented in a recently published white paper. It builds on the Hierarchical Temporal Memory (HTM) theory by Jeff Hawkins and describes the encoding mechanism that converts semantic input data into a valid Sparse Distributed Representation (SDR) format.

Douglas R. Hofstadter’s Analogy as the Core of Cognition also inspired the Semantic Folding approach, which uses similarity as a foundation for intelligence. Hofstadter hypothesizes that the brain makes sense of the world by building, identifying and applying analogies. In order to be compared, all input data must be presented to the neo-cortex as a representation that is suited for the application of a distance measure. Semantic Folding applies this assumption to the computation of natural language: by converting words, sentences and whole texts into a Sparse Distributed Representational format (SDR), their semantic meaning can be directly inferred by their relative distances in the applied semantic space.

After capturing the semantic universe of a reference set of documents by means of a fully unsupervised mechanism, the resulting semantic space is folded into each and every word-representation vector. These word vectors, called semantic fingerprints, are large, sparsely filled binary vectors. Every feature bit in such a vector corresponds directly to a specific semantic feature of the folded-in semantic space, and by this means provides semantic grounding.

The main advantage of the SDR format is that it allows any data items to be compared directly. In fact, it turns out that by applying Boolean operators and a similarity function, even complex Natural Language Processing operations can be implemented in a very simple and efficient way: each operation is executed in a single step and takes the same, constant amount of time. Because of their small size, semantic fingerprints require only about a tenth of the memory usually needed to perform complex NLP operations, which means that execution on modern superscalar CPUs can be orders of magnitude faster. Word-SDRs also offer an elegant way to feed natural language into HTM networks and to build on their predictive modeling capacity to develop truly intelligent applications for sentiment analysis, semantic search or conversational dialogue systems.
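The single-step Boolean comparison described above can be sketched in a few lines of Python. This is an illustrative toy model, not Cortical.io’s implementation: fingerprints are represented as sets of active bit positions, and the positions themselves are made up.

```python
# Toy model of semantic fingerprints as sets of active bit positions
# in a large, sparse binary vector (all positions are hypothetical).

def overlap(fp_a, fp_b):
    """Shared active bits: Boolean AND, then a population count."""
    return len(fp_a & fp_b)

def similarity(fp_a, fp_b):
    """Overlap normalized to [0, 1] by the smaller fingerprint."""
    return overlap(fp_a, fp_b) / min(len(fp_a), len(fp_b))

fp_dog = {3, 17, 42, 99, 256, 1024}
fp_wolf = {3, 17, 42, 512, 1024, 2048}
fp_car = {7, 88, 300, 4096, 8191}

print(similarity(fp_dog, fp_wolf))  # shared bits -> high similarity
print(similarity(fp_dog, fp_car))   # no shared bits -> 0.0
```

Because every comparison reduces to a set intersection (or, on packed bit vectors, a bitwise AND plus popcount), the cost per operation is constant regardless of what the individual bits mean.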

Because of the unique attributes of its underlying technology, Semantic Folding solves a number of well-known NLP challenges:

  • Vocabulary mismatch: text comparisons are inherently semantic, based on the topological representation of its 16,000 semantic features.
  • Language ambiguity: the meaning of text is implicitly disambiguated during the aggregation of its constituent word-fingerprints.
  • Time to market: Semantic Folding is accessible through the Retina API, which offers atomic building blocks for a wide range of NLP solutions. The unsupervised training process enables easy adaptation to specific tasks and domains.
  • Black box effects: with Semantic Fingerprints, every single feature has concrete observable semantics. This unique characteristic enables interactive “debugging” of semantic solutions.
  • Solution scalability: use-case-specific semantic spaces enable scaling of a solution across customers and domains with minimum effort. As the representation of meaning in semantic fingerprints is stable across languages, text in different languages can be compared directly, without translation.
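The disambiguation-by-aggregation idea in the second bullet can also be sketched as a toy (again, not Cortical.io’s mechanism; the bit positions, word senses and the `keep` cutoff are invented): bits supported by several words in a text survive into the text fingerprint, while bits belonging to context-irrelevant word senses fall away.

```python
from collections import Counter

def text_fingerprint(word_fps, keep=4):
    """Aggregate word fingerprints, keeping only the most supported bits."""
    counts = Counter()
    for fp in word_fps:
        counts.update(fp)
    return {bit for bit, _ in counts.most_common(keep)}

word_fps = [
    {1, 2, 3, 10},  # "bank": bits 1-3 for the money sense, bit 10 for the river sense
    {1, 2, 4, 5},   # "loan"
    {2, 3, 5, 6},   # "interest"
]
fp_text = text_fingerprint(word_fps)
print(sorted(fp_text))  # the unsupported river-sense bit 10 drops out
```

In a financial context, the river sense of “bank” receives no support from the neighboring words, so its bits never make it into the aggregated fingerprint, which is the implicit disambiguation described above.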

To learn more about Semantic Folding and its application to Big Data semantics, please visit http://cortical.io, experiment with the Sandbox API or download the Semantic Folding white paper.

Timetable for the 2nd Annual AI Awards

We are pleased to announce the timetable for the key events of the 2nd Annual AI Awards.

The Awards Timetable is:

  • 1st July – Awards Categories Listed
  • 1st September – Open for Nomination Voting
  • 15th January – Voting Closes
  • 1st February – Award Winners Announced

To keep informed about the AI Awards, please visit the website Awards.AI and follow the Twitter account @Awards_AI.

How Artificial Intelligence Can Change Education

At the beginning of 2016, Jill Watson, a bot built on IBM’s Watson platform, began helping graduate students at Georgia Institute of Technology solve problems with their design projects. Responding to questions over email and on forums, Jill had a casual, colloquial tone and was able to offer nuanced, accurate responses within minutes. The bot taught graduate students for five months and none of them realized it. Here are just a few of the artificial intelligence tools and technologies that will shape and define the educational experience of the future.

Duolingo: voice recognition for language learning


Duolingo is the world’s most popular platform for learning a language. The app predicts your word strength, figures out which sentences will best help you practice your weakest words and skills, recommends immersion practice documents (translations) based on your progress, and estimates the quality of a translation in progress.

Plexuss: college comparison and recruitment platform

Plexuss facilitates contact between universities and prospective students, and aims to help students make an informed decision when it comes to choosing the right university. It allows users to take a virtual tour of selected campuses, compare colleges, and chat with universities of their choice. The platform includes a college ranking system, which collates data from trustworthy sources including Forbes, Reuters and Shanghai Ranking. The ranking algorithm compares data using a variety of criteria such as in- and out-of-state tuition, acceptance rates and college endowment funds, as well as more advanced search criteria such as student-to-faculty ratios, SAT score percentiles and environmental sustainability policies. Colleges no longer have to send out expensive and time-consuming recruitment information packs, and are instead able to easily view candidate profiles through the Plexuss website.

Intelligent tutoring system

An intelligent tutoring system (ITS) is a computer system that aims to provide immediate, customized instruction or feedback to learners, usually without intervention from a human teacher. Such systems have been built to help students learn geography, circuits, medical diagnosis, computer programming, mathematics, physics, genetics, chemistry and more. ITSs share the common goal of enabling learning in a meaningful and effective manner by using a variety of computing technologies. The technology is used in both formal education and professional settings, and aims to reduce students’ over-dependency on teachers for quality education. Intelligent tutoring systems can be particularly useful when large groups need to be tutored simultaneously or when much tutoring effort would otherwise be replicated, as in technical training situations such as the training of military recruits or high-school mathematics.

Recognition apps: decode the world with your smartphone

As more schools bring tablets into the classroom, educators are finding that apps are game changers that generate excitement and motivate students. A great example of a recognition app is a rock and mineral identifier, which is full of information for students who are identifying rocks and minerals. If a school doesn’t have access to hands-on materials, such an app can work as a substitute. Some of the most powerful education apps are used for teaching reading and supporting differentiation for students with disabilities (especially those using speech and text recognition).

Woogie: educational companion

Since October 2015, a group of Romanian engineers and programmers has been creating Woogie, a voice-enabled AI device aimed at native English-speaking kids aged between 6 and 12. The plan is for Woogie to pass the MVP phase in the fall of 2016. It will be able to detect, read, process and understand human language, convert text to speech and speech to text, and play radio stations, podcasts and shows appropriate to the user’s age. It will also play music on request or based on learning algorithms. The developers hope that Woogie will help children memorize information from multiple areas through interactivity: it acknowledges the child’s presence in the room and reacts accordingly. It will also control smart-home appliances such as room lights or sound volume, and will keep the child up to date on topics of interest. For example, if the child has a favorite artist, the companion can provide news about that artist.

Learning analytics: educational application

Learning analytics is an educational application of web analytics aimed at learner profiling: a process of gathering and analyzing details of individual student interactions in online learning activities. Students often act as direct consumers of learning analytics, particularly through dashboards that support the development of self-regulated learning and insight into one’s own learning. Learning analytics can also assist students in course selection, providing a broad range of insight into course materials, student engagement and student performance. For example, Degree Compass pairs current students with the courses that best fit their talents and program of study for upcoming semesters. Advisors can use this system to identify the students who are at the highest risk of failure.

Personal trainer: fitness with machine learning

Millions of people exercise without proper form, which reduces the effectiveness of their workouts and increases injury risk. Researchers from Stanford University aim to help exercisers improve their form by giving fitness advice with machine learning. They focus specifically on the free-standing squat, a fundamental, full-body exercise where proper form is crucial. Like a personal trainer, such a system can help people exercise with proper form, increase the effectiveness of their workouts and help them avoid injury.

Viper: plagiarism checking tool

Plagiarism is defined as the use or close imitation of another author’s work while claiming it as your own. To avoid plagiarism, you should always reference correctly according to your institution’s guidelines, and tools such as Viper can help. Viper is fast becoming the plagiarism checker of choice, with over 10 billion resources scanned and an easy interface that highlights potential areas of plagiarism in your work. It is a free, easy-to-use, side-by-side comparison tool.

Automated essay grading

Automated essay grading uses mathematical models that are able to make predictions closely matching those made by human graders. It can be used for essays at an intermediate writing level (7th–10th grade). Given enough human-graded training examples for a writing prompt, the system can automate the grading process for that prompt with fairly good accuracy. Using machine learning to assess human writing can potentially make quality education more accessible. However, the use of AES for high-stakes testing in education has generated significant backlash, with opponents pointing to research that computers cannot yet grade writing accurately and arguing that their use for such purposes promotes teaching writing in reductive ways.
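As a rough illustration of the training idea only (a toy sketch, not a production grading system; the features, essays and grades below are all invented), a linear model can be fitted to surface features of human-graded essays and then used to score a new one:

```python
import numpy as np

def features(essay):
    """Two toy surface features: word count and average word length."""
    words = essay.split()
    return [len(words), sum(len(w) for w in words) / len(words)]

# Invented training set: (essay text, human grade).
train = [
    ("Short essay.", 2.0),
    ("A somewhat longer essay with more elaborate vocabulary choices.", 4.0),
    ("This considerably longer essay develops its argument with varied, "
     "precise vocabulary and sustained, well-structured sentences.", 5.0),
]

# Least-squares fit of grade ~ w0 + w1*word_count + w2*avg_word_len.
X = np.array([[1.0, *features(e)] for e, _ in train])
y = np.array([g for _, g in train])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

new_essay = "A moderately long essay with reasonably varied vocabulary throughout."
pred = float(np.array([1.0, *features(new_essay)]) @ w)
print(round(pred, 2))
```

A real system would use far richer features (syntax, discourse, vocabulary models) and many more graded examples, but the supervised pipeline is the same: extract features, fit to human scores, predict.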

Better reading levels

Measuring the reading difficulty of a particular text is a common and salient problem in the educational world, particularly with respect to new or struggling readers. While common-sense measures exist for canonical texts, assigning an appropriate reading-level metric to new resources remains challenging, and current systems have been widely criticized for misrepresenting the difficulty of texts, which causes frustration for students and educators alike. Better reading-level systems use machine learning to reproduce the results of the Lexile Reading Measure (the most popular metric for reading difficulty) and focus on four features: sentence length, paragraph length, word length and difficulty of vocabulary.
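A minimal sketch of the feature-extraction step (with simplifying assumptions throughout: naive tokenization, a made-up easy-word list, and paragraph length omitted because the toy input is a single paragraph; this is not the Lexile formula):

```python
import re

def reading_features(text, easy_words=frozenset()):
    """Average sentence length, average word length, hard-word ratio."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    avg_sentence_len = len(words) / len(sentences)
    avg_word_len = sum(len(w) for w in words) / len(words)
    hard_ratio = sum(w.lower() not in easy_words for w in words) / len(words)
    return avg_sentence_len, avg_word_len, hard_ratio

EASY = frozenset("the a is cat sat on mat dog ran".split())
feats = reading_features("The cat sat on the mat. The dog ran.", EASY)
print(feats)  # short sentences, short words, all-easy vocabulary
```

Feature vectors like these would then be regressed against texts with known Lexile scores to calibrate a difficulty predictor.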

One-to-one tutoring has long been thought the most effective approach to teaching, but would be too expensive to provide for all students. Artificial intelligence can be used to give children one-to-one tutoring to improve their learning and monitor their well-being. Instead of being examined in traditional ways, children could be assessed more completely by collecting data about their performance over a long period, providing employers and educational institutions with a richer picture of their abilities. AI could radically transform the education system, but it needs more funding and more push from academics and governments.

Author: AI.Business Team

home of Artificial Intelligence information

Resource Directory, News Stories, Videos, Twitter & Forum Streams, Spotlight, Awards, Showcase and Magazine