The AI Magazine

The Online Magazine for Artificial Intelligence and Machine Learning Enthusiasts

 

Welcome to our online AI Magazine. We share contributed articles, news stories, spotlight profiles, and startup press releases. If you would like to share your views or news with our AI community, please feel free to submit an article.

Go to Magazine Sections –

Articles from Contributors | Company Spotlights | Informed.AI News

Articles from our Contributors

Check out news stories and articles from our community of contributors.

GeckoSystems, an AI Robotics Co., Signs U.S. Joint Venture Agreement

CONYERS, Ga., August 18, 2016 — GeckoSystems Intl. Corp. (Pink Sheets: GOSY | http://www.GeckoSystems.com) announced today that, after executing NDA, MOU, and LOI agreements with this NYC AI firm, it has now executed a joint venture agreement. For over nineteen years, GeckoSystems has dedicated itself to the development of “AI Mobile Robot Solutions for Safety, Security and Service(tm).”

“We are very pleased to announce our first US JV. We will jointly coordinate our advanced Artificial Intelligence (AI) R&D to achieve higher levels of human safety and sentient verbal interaction for the professional healthcare markets. We expect not only near term licensing revenues, but also an initial AI+ CareBot(tm) sale. While we have several JVs in Japan continuing to mature, it is gratifying to have gained demonstrable traction in the US markets.

“One of our primary software and hardware architecture design goals has been for our MSR platforms to be extensible, such that the primary cost drivers, the mechanicals, would not become obsolete for five or more years (or until actually worn out from use). Consequently, our hardware architecture is x86 CPU centric, and all our AI savants communicate over a LAN using TCP/IP protocols with relatively simple messaging. This means all systems on the Company’s MSRs are truly “Internet of Things” (IoT) devices, since each has a unique IP address for easy and reliable data communications. Our high level of pre-existing, linchpin, “3-legged milk stool” basic functionalities makes our AI+ CareBot easily upgraded, and therefore desirable, not only to GeckoSystems, but also to third party developers, such as this advanced NYC AI firm.
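The modular, LAN-based arrangement described above can be sketched in a few lines. This is a hypothetical illustration of simple JSON-over-TCP request/response messaging between two modules, not GeckoSystems’ actual protocol; the module name and message fields are invented.

```python
import json
import socket
import threading

def savant_server(host="127.0.0.1", port=0):
    """Minimal 'savant' module that answers one JSON request over TCP."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))  # port 0: let the OS pick a free port
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        with conn:
            req = json.loads(conn.recv(4096).decode())
            # Echo the command back with a status flag (illustrative fields).
            reply = {"savant": "nav", "echo": req.get("cmd"), "ok": True}
            conn.sendall(json.dumps(reply).encode())
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()  # the (host, port) actually bound

def send_command(addr, cmd):
    """Client side: one request/response round trip to a savant."""
    with socket.create_connection(addr) as c:
        c.sendall(json.dumps({"cmd": cmd}).encode())
        return json.loads(c.recv(4096).decode())
```

Because every module is just a TCP endpoint, any subsystem with an IP address, on-robot or off, can participate in the same message fabric.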

“This is the strategic hardware development path that IBM used in setting PC standards that have enabled cost effective use of complex, but upgradeable for a long service life, personal computers for over thirty years now,” observed Martin Spencer, CEO, GeckoSystems Intl. Corp.

NYC has national prominence in the AI development community. For example, NYC has twenty AI startups listed here: http://nycstartups.net/startups/artificial_intelligence. Atlanta, GA, reports only one AI robotics startup, Monsieur, a leader in the automated bartending space: http://monsieur.co/company/

 

Artificial intelligence technologies and applications span:

Big Data, Predictive Analytics, Statistics, Mobile Robots, Social Robots, Companion Robots, Service Robotics, Drones, Self-driving Cars, Driverless Cars, Driver Assisted Cars, Internet of Things (IoT), Smart Homes, UGVs, UAVs, USVs, AGVs, Forward and/or Backward Chaining Expert Systems, Savants, AI Assistants, Sensor Fusion, Point Clouds, Worst Case Execution Time (WCET, i.e., reaction time), Machine Learning, Chatbots, Cobots, Natural Language Processing (NLP), Subsumption, Embodiment, Emergence, Situational Awareness, Level of Autonomy, etc.

 

An internationally renowned market research firm, Research and Markets, has again named GeckoSystems as one of the key market players in the service robotics industry. The report covers the present scenario and the growth prospects of the Global Mobile Robotics market for the period 2015-2019. Research and Markets stated in their report that they “…forecast the Global Mobile Robotics market to grow at a CAGR of nearly sixteen percent over the period 2015-2019.”

 

The report has been prepared based on an in-depth market analysis with inputs from industry experts and covers the Americas, the APAC, and the EMEA regions.  The report is entitled, Global Professional Service Robotics Market 2015-2019.

 

Research and Markets lists the following as the key vendors operating in this market:

Companies mentioned:

AB Electrolux

Blue River Technology

Curexo Technology

Elbit Systems

GeckoSystems

Health Robotics

MAKO Surgical Corp.
“GeckoSystems has been recognized by Research and Markets for several years now, and it is the most comprehensive report on the global service robotics industry to my knowledge. I am pleased that their experienced market researchers are sufficiently astute to accept that small service robot firms, such as GeckoSystems, can nonetheless develop advanced technologies and products as well as, or better than, much larger, multi-billion dollar corporations such as AB Electrolux, etc.,” reflected Martin Spencer, CEO, GeckoSystems Intl. Corp.

 

Research and Markets also discusses:

Professional service robots tend to work closely with humans and can be used in a wide range of applications, from surveillance to underwater inspection. They provide convenience and safety, among other benefits, thus creating demand worldwide. Technavio expects the global professional service robotics market to grow at a remarkable rate of nearly 16% during the forecast period. Today, the adoption of robots is on the rise globally, as they tend to minimize manual labor and reduce the chances of human error.

 

In the last decade, there have been numerous technological advancements in the field of robotics that have made the adoption of robots easy, viable, and beneficial. For instance, there have been many innovations and improvements in the Internet of Things, automation, M2M communications, and the cloud. Modern robot manufacturers are trying to take advantage of these technologies as a communication medium between robots and humans, increasing convenience as well as seamlessly transferring real-time information within the business entity.

 

Segmentation of the professional service robotics market by application:

– Defense, rescue, safety, and aerospace application

– Field application

– Logistics application

– Healthcare application

– Others

 

The defense application segment was the largest contributor to the growth of the global professional service robotics market, with more than 44% share of overall shipments in 2014. The demand for UGVs and UAVs for surveillance and for safeguarding the lives of personnel from ammunition, landmines, and bombs is expected to drive the demand for robotics.

 

“It is an honor that they recognize the value of the over 100 man-years we have invested in our proprietary AI robotics Intellectual Properties and my full time work for nearly 20 years now.   Our suite of AI mobile robot solutions is well tested, portable, and extensible.  It is a reality that we could partner with any other company on that list and provide them with high-level autonomy for collision free navigation at the lowest possible cost to manufacture.  There is also an opportunity for other cost reductions and enhancement of functionality with other components of our AI solutions,” stated Spencer.

 

In order for any companion (social) robot to be utilitarian for family care, it must be like a “three-legged milk stool” for safe, routine usage.  For any mobile robot to move in close proximity to humans, it must have:

(1) Human quick reflex time to avoid moving and/or unmapped obstacles, (GeckoNav(tm): http://tinyurl.com/le8a39r) (See the importance of WCET discussion below.)

(2) Verbal interaction (GeckoChat(tm): http://tinyurl.com/nnupuw7) for easy user dialogues and/or monologues with a sense of date and time (GeckoScheduler(tm): http://tinyurl.com/kojzgbx), and

(3) Ability to automatically find and follow designated parties (GeckoTrak(tm): http://tinyurl.com/mton9uh) such that verbal interaction can occur routinely, and video and audio monitoring and/or teleconferences with the care receiver occur readily and are uninterrupted.

 

In the US, GeckoSystems projects the available market size in dollars for cost effective, utilitarian, multitasking eldercare social mobile robots in 2017 to be $74.0B, in 2018 to be $77B, in 2019 to be $80B, in 2020 to be $83.3B, and in 2021 to be $86.6B.  With market penetrations of 0.03% in 2017, 0.06% in 2018, 0.22% in 2019, 0.53% in 2020, and 0.81% in 2021, we anticipate CareBot social robot sales from the consumer market alone at levels of $22.0M, $44.0M, $176M, $440.2M, and $704.3M, respectively.
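As a quick sanity check, the projected sales figures above are, to within a few percent, the product of the stated market size and penetration rate; the small residuals appear to be rounding in the release. The computation below uses only the numbers quoted in the preceding paragraph.

```python
# US eldercare social robot market ($B) and penetration (%), as quoted above.
market_b = {2017: 74.0, 2018: 77.0, 2019: 80.0, 2020: 83.3, 2021: 86.6}
penetration_pct = {2017: 0.03, 2018: 0.06, 2019: 0.22, 2020: 0.53, 2021: 0.81}
stated_sales_m = {2017: 22.0, 2018: 44.0, 2019: 176.0, 2020: 440.2, 2021: 704.3}

for year in market_b:
    # sales ($M) = market ($B) * 1000 * penetration fraction
    implied_m = market_b[year] * 1000 * penetration_pct[year] / 100
    print(year, round(implied_m, 1), "vs stated", stated_sales_m[year])
```

For example, 0.22% of $80B is exactly $176M, matching the 2019 figure; the other years agree within roughly 5%.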

 

“This first US JV will continue to evolve, such that GeckoSystems enjoys revenues that increase shareholder value. After many years of patience by our current 1300+ stockholders, they can continue to be completely confident that this new, potentially multi-million-dollar JV licensing agreement further substantiates and delineates the reality that GeckoSystems will continue to be rewarded with additional licensing revenues furthering shareholder value,” concluded Spencer.

 

About GeckoSystems:

GeckoSystems has been developing innovative robotic technologies for nineteen years.  It is CEO Martin Spencer’s dream to make people’s lives better through AI robotic technologies.

 

The safety requirement for human quick WCET reflex time in all forms of mobile robots:

In order to understand the importance of GeckoSystems’ breakthrough, proprietary, and exclusive AI software and why another Japanese robotics company desires a business relationship with GeckoSystems, it’s key to acknowledge some basic realities for all forms of automatic, non-human intervention, vehicular locomotion and steering.

  1. Laws of Physics such as Conservation of Energy, inertia, and momentum, limit a vehicle’s ability to stop or maneuver. If, for instance, a car’s braking system design cannot generate enough friction for a given road surface to stop the car in 100 feet after brake application, that’s a real limitation. If a car cannot corner at more than .9g due to a combination of suspension design and road conditions, that, also, is reality. Regardless how talented a NASCAR driver may be, if his race car is inadequate, he’s not going to win races.
  2. At the same time, if a car driver (or pilot) is tired, drugged, distracted, etc., their reflex time becomes too slow to react in a timely fashion to unexpected direction changes of moving obstacles, or the sudden appearance of fixed obstacles. Many car “accidents” result from drunk driving due to reflex time and/or judgment impairment. Average human reflex time is between 150 and 300 ms. http://tinyurl.com/nsrx75n
  3. In robotic systems, the analog of human reflex time is known as Worst Case Execution Time (WCET). Historically, in computer systems engineering, the WCET of a computational task is the maximum length of time the task could take to execute on a specific hardware platform. In big data terms, this is the time to load the data to be processed, process it, and output useful distillations, summaries, or common sense insights. GeckoSystems’ basic AI self-guidance navigation system processes 147 megabytes of data per second using low cost, Commercial Off The Shelf (COTS) Single Board Computers (SBCs).
  4. Highly trained and skilled jet fighter pilots have a reflex time (WCET) of less than 120ms. Their “eye to hand” coordination time is a fundamental criterion for them to be successful jet fighter pilots. The same holds true for all high performance forms of transportation that are sufficiently pushing the limits of the Laws of Physics to require the quickest possible reaction time for safe human control and/or usage.
  5. GeckoSystems’ WCET is less than 100ms, as quick as, or quicker than, most gifted jet fighter pilots, NASCAR race car drivers, etc., while using low cost COTS SBCs.
  6. In mobile robotic guidance systems, WCET has three fundamental components:
    1. Sufficient Field of View (FOV) with appropriate granularity, accuracy, and update rate.
    2. Rapid processing of that contextual data such that common sense responses are generated.
    3. Timely physical execution of those common sense responses.
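The sense-process-act loop and its reaction-time budget can be sketched generically. This is not GeckoSystems’ code: it times each iteration of a dummy control loop and records the worst observed step time, which can then be compared against a 100 ms budget. (A measured worst case is only an observation; true WCET guarantees require analysis of the full sensor-to-actuator path.)

```python
import time

WCET_BUDGET_S = 0.100  # 100 ms reaction-time budget, per the discussion above

def control_step(scan):
    """Placeholder sense -> process -> act step: stop if an obstacle is near."""
    nearest = min(scan)           # "process": find closest range reading
    return 0.0 if nearest < 0.5 else 1.0  # "act": commanded speed

def measure_wcet(n_iters=1000):
    """Run the loop n_iters times; return the worst observed step time (s)."""
    worst = 0.0
    scan = [1.2, 0.9, 2.5, 0.4, 3.1]  # fake range-sensor data (meters)
    for _ in range(n_iters):
        t0 = time.perf_counter()
        control_step(scan)
        worst = max(worst, time.perf_counter() - t0)
    return worst
```

A step this trivial completes orders of magnitude under the budget on any modern CPU; the engineering difficulty is keeping the *entire* FOV-acquisition, processing, and actuation chain under that bound.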

 

——————————————————————————————-

An earlier third party verification of GeckoSystems’ AI centric, human quick sense and avoidance of moving and/or unmapped obstacles by one of their mobile robots can be viewed here: http://t.co/NqqM22TbKN

An overview of GeckoSystems’ progress containing over 700 pictures and 120 videos can be found at http://www.geckosystems.com/timeline/.

These videos illustrate the development of the technology that makes GeckoSystems a world leader in Service Robotics development. Early CareBot prototypes were slower and frequently pivoted in order to avoid a static or dynamic obstacle; later prototypes avoided obstacles without pivoting.   Current CareBots avoid obstacles with a graceful “bicycle smooth” motion.   The latest videos also depict the CareBot’s ability to automatically go faster or slower depending on the amount of clutter (number of obstacles) within its field of view.   This is especially important when avoiding moving obstacles in “loose crowd” situations like a mall or an exhibit area.

In addition to the timeline videos, GeckoSystems has numerous YouTube videos. The most popular are the ones showing room-to-room automatic self-navigation of the CareBot through narrow doorways and a hallway of an old 1954 home. You will see the CareBot slow down when going through the doorways because of their narrow width, and then speed up as it goes across the relatively open kitchen area. There are also videos of the SafePath(tm) wheelchair, a migration of the CareBot AI centric navigation system to a standard power wheelchair; recently developed, cost effective depth cameras were used in this configuration. SafePath(tm) navigation is now available to OEM licensees, and these videos show the versatility of GeckoSystems’ fully autonomous navigation solution.
GeckoSystems, Star Wars Technology

 

The company has successfully completed an Alpha trial of its CareBot personal assistance robot for the elderly.  It was tested in a home care setting and received enthusiastic support from both caregivers and care receivers.   The company believes that the CareBot will increase the safety and well-being of its elderly charges while decreasing stress on the caregiver and the family.

GeckoSystems is preparing for Beta testing of the CareBot prior to full-scale production and marketing.   CareBot has recently incorporated Microsoft Kinect depth cameras that result in a significant cost reduction.

 

Kinect Enabled Personal Robot video:

http://www.youtube.com/watch?v=kn93BS44Das

Above, the CareBot demonstrates static and dynamic obstacle avoidance as it backs in and out of a narrow and cluttered alley. There is no joystick control or programmed path; movements are smoother than those achieved using a joystick control. GeckoNav creates three low levels of obstacle avoidance: reactive, proactive, and contemplative. Subsumptive AI behavior within GeckoNav enables the CareBot to reach its target destination after engaging in obstacle avoidance.
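The three levels named above (reactive, proactive, contemplative) follow the classic subsumption pattern, in which a higher-priority behavior suppresses the output of lower ones. A minimal sketch of that arbitration scheme follows; the thresholds, field names, and behaviors are invented for illustration and are not GeckoNav’s.

```python
def reactive(sensors):
    # Highest priority: emergency slow/swerve when an obstacle is imminent.
    if sensors["nearest_obstacle_m"] < 0.5:
        return {"speed": 0.0, "turn": 0.4}
    return None  # no opinion; defer to lower layers

def proactive(sensors):
    # Middle priority: slow down early in cluttered surroundings.
    if sensors["clutter"] > 5:
        return {"speed": 0.3, "turn": 0.0}
    return None

def contemplative(sensors):
    # Lowest priority: head toward the goal at cruise speed.
    return {"speed": 1.0, "turn": sensors["bearing_to_goal"]}

def arbitrate(sensors, layers=(reactive, proactive, contemplative)):
    # Subsumption-style arbitration: the first layer with an output wins.
    for layer in layers:
        cmd = layer(sensors)
        if cmd is not None:
            return cmd
```

Because the reactive layer always wins when triggered, the robot still reaches its goal via the contemplative layer once the higher layers fall silent, which is the behavior the paragraph above describes.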

 

More information on the CareBot personal assistance robot:

http://www.geckosystems.com/markets/CareBot.php

GeckoSystems stock is quoted in the U.S. over-the-counter (OTC) markets under the ticker symbol GOSY.   http://www.otcmarkets.com/stock/GOSY/quote

 

GeckoSystems uses http://www.LinkedIn.com as its primary social media site for investor updates. Here is Spencer’s LinkedIn.com profile:

http://www.linkedin.com/pub/martin-spencer/11/b2a/580

 

Telephone:

Main number: +1 678-413-9236

Fax: +1 678-413-9247

Website:  http://www.geckosystems.com/

Source: GeckoSystems Intl. Corp.

 

Safe Harbor:

Statements regarding financial matters in this press release other than historical facts are “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, Section 21E of the Securities Exchange Act of 1934, and as that term is defined in the Private Securities Litigation Reform Act of 1995. The Company intends that such statements about the Company’s future expectations, including future revenues and earnings, technology efficacy and all other forward-looking statements be subject to the Safe Harbors created thereby. The Company is a development stage firm that continues to be dependent upon outside capital to sustain its existence. Since these statements (future operational results and sales) involve risks and uncertainties and are subject to change at any time, the Company’s actual results may differ materially from expected results.

 

 

 

 

GeckoSystems, an AI Robotics Co., Gains Traction: LOI with NYC AI Firm

CONYERS, Ga., August 11, 2016 — GeckoSystems Intl. Corp. (Pink Sheets: GOSY | http://www.GeckoSystems.com) announced today that, after over two years of negotiations with this advanced AI developer in New York City, additional substantive progress has been made: a Letter of Intent (LOI) has been signed to form their first US joint venture. For over nineteen years, GeckoSystems has dedicated itself to the development of “AI Mobile Robot Solutions for Safety, Security and Service(tm).”

 

“Less than two weeks ago, I met with this Artificial General Intelligence (AGI) firm’s CEO. Our two days of meetings were very cordial, frank and productive. To that end, we immediately effectuated our Safety Clause NDA such that our discussions became of sufficient substance for us to sign our second agreement, an MOU, clearly revealing that both parties believe significant AI synergies appropriate for multiple markets would be garnered by each firm.  Now we have the additional clarity with this third agreement, an LOI, as we continue to gain traction in our pursuit of this multi-million-dollar licensing revenue opportunity,” stated Martin Spencer, CEO, GeckoSystems Intl. Corp.

 

NYC has national prominence in the AI development community. For example, NYC has twenty AI startups listed here: http://nycstartups.net/startups/artificial_intelligence. Atlanta, GA, reports only one AI robotics startup, Monsieur, a leader in the automated bartending space: http://monsieur.co/company/

 

Artificial intelligence technologies and applications span Big Data, Predictive Analytics, Statistics, Mobile Robots, Service Robotics, Drones, Self-driving Cars, Driverless Cars, Driver Assisted Cars, Internet of Things (IoT), Smart Homes, UGV’s, UAV’s, USV’s, AGV’s, Forward and/or Backward Chaining Expert Systems, Savants, AI Assistants, Sensor Fusion, Subsumption, etc.

 

Recently the Company revealed a new AI mobile robot concept for better, lower cost public safety. All of those affiliated with the Company share, as do most Americans, the abject horror we are all still trying to process regarding the recent indoor (and outdoor) mass shootings and the murder of dozens of innocent victims.

 

To better manage those 21st century mass shootings, the Company is offering to prototype and deploy the GeckoNED(tm), a new type of mobile security robot that is a Non-violent Enforcement Device with a high level of independent mobile autonomy, sensor rich for enhanced situational awareness, and ease of complete control under tele-operation by designated, vetted public safety personnel.

 

The following was written by Spencer shortly after the Sandy Hook mayhem, but not updated since the Pulse nightclub carnage.

 

Safety for our children is a moral imperative for all enlightened civilizations. The present proliferation of lethal weaponry in the form of readily obtainable semi- and full automatic pistols and rifles has brought increased child safety to nearly blinding visibility that requires new thinking and solutions for this long overdue, poorly addressed need in our culture.

 

Mobile robots could be the most proximate and final deterrent to those who would harm our children in public schools and other venues. GeckoSystems has named the mobile robot concept that would provide yet another barrier between our children and those immoral individuals intent on doing them significant harm the GeckoNED(tm). “NED” stands for Non-violent (or non-lethal) Enforcement Device.

 

Fundamentally, the GeckoNED (or “NED”) is a new type of mobile sentry robot that would deter, detect, and contain those who would violently harm our children in their schools. The NED would be a new type of school mascot that could be customized by the children, teachers, and staff to be a daily part of their school time lives. The NED would be able to automatically patrol all wheelchair accessible areas in any school without human oversight or intervention using GeckoSystems’ proven SafePath(tm) mobile robot AI navigation software.

 

What does it do? It deters, detects, and contains to provide better protection:

  1. Marquee deterrent video and audio surveillance systems with fully autonomous self-patrolling in loose crowds, etc.
  2. Quick detection using AI augmented sensor fusion systems with fully autonomous auto-find/seek
  3. Deployable, multiple non-lethal containment systems under direct human control only
  4. Ready mobile detection, protection and containment systems that are fully tele-operable remotely

 

The NED would be a marquee deterrent due to its robust audio and video surveillance systems employing WiFi LAN data communications to connect to the school’s Internet access. The primary, high-resolution pan/tilt zoom video camera and professional quality microphones would be selected such that their features and benefits are appropriate for the expanses to be “sight and sound” monitored in the school.

 

Further enhancing the marquee deterrence, the NED would be available for direct human tele-operation almost instantly, whenever direct human control was appropriate and timely because a clear and present danger to the children had been identified with a high level of confidence by the NED’s AI enhanced sensor systems. In addition, cell phone and police band communication capability could be included using the voice synthesis ability of GeckoChat(tm).

 

The NED would:

Enable prompt intruder detection using multiple, different sensor systems (sight, sound, and smell), AI fused to produce a “one plus one equals three” synergy. This counter-intuitive metaphor describes a common benefit of GeckoSystems’ advanced artificial intelligence and sensor fusion competencies.

 

The NED’s AI’s would sensor fuse:

  1. Augmented Vision
    1. Machine vision spanning both light visible to the human eye and invisible bands such as infra-red (IR), detecting body heat, heat from fired weapons, etc., with AI software
  2. Extended Hearing
    1. Frequency response widened beyond human hearing into the ultrasonic, using multiple microphones (omnidirectional and directional) and AI software
  3. Enhanced Smell
    1. Odor detection systems for relevant gases, whether or not odorless to the human sense of smell, with an intelligent inhalation system and AI software
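The “one plus one equals three” claim has a simple probabilistic reading: independent detectors, fused, yield higher confidence than any one sensor alone. A sketch under an independence assumption (the sensor names and probabilities below are illustrative only, not measured values):

```python
def fused_confidence(detections):
    """P(at least one sensor detects) assuming independent detectors."""
    miss = 1.0
    for p in detections.values():
        miss *= 1.0 - p        # probability that this sensor misses
    return 1.0 - miss          # probability that not all of them miss

# Hypothetical per-sensor confidence that an intruder is present.
sensors = {"vision": 0.70, "hearing": 0.60, "smell": 0.50}
print(fused_confidence(sensors))  # 0.94: higher than any single sensor
```

Three mediocre detectors (70%, 60%, 50%) fuse to 94% confidence, which is the synergy the metaphor gestures at; real sensor fusion must also handle correlated failures and false positives, which this sketch ignores.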

 

Singly, and in concert, the preceding systems would detect unwanted intruders by:

  1. Video surveillance enhanced by AI object recognition machine vision
    1. In both the visible and invisible (IR) light spectrums
  2. Audio surveillance enhanced by AI expert systems
    1. Atypical sounds within and outside the human hearing range, such as gunshots, breaking glass, doors being broken down, and stressed voices or screams from students and/or staff
  3. Odor and odorless gas surveillance
    1. Smoke, carbon monoxide, and natural gas
    2. Potentially odors from handguns, long guns, and rifles; guns in lockers; and explosives or gun ammunition in lockers

 

The NED would pre-position ready mobile protection that is fully tele-operable remotely when atypical situations arise. It would immediately alert pre-designated parties for human intervention and direct human control of the NED and its various containment systems.

 

  1. The NED’s exterior would be about thirty (30) inches in diameter and seventy-two (72) inches tall
    1. It could not be readily disabled by small arms fire, thus affording cover for students and staff when the NED places itself between the intruder and all others.
    2. The NED’s shroud could be a bulletproof covering using a combination of Kevlar, ceramic armor, and/or aluminum plates sufficient to absorb small arms fire.
  2. Immediate intervention after detection, enabled by a top speed, in obstacle free hallways, of up to 20 mph
    1. SafePath technologies, with obstacle avoidance five to six times faster than a person’s, preclude the NED from hitting anything, even when under teleoperation (direct human) control.
  3. Bull horns, sirens, a high power speaker system, and/or other sound projection systems capable of reaching the threshold of pain

 

The NED would have readily deployed, multiple non-lethal containment systems solely under the control of a designated, responsible party such as a “watch commander” at the local police station.

 

The non-violent and/or non-lethal containment capabilities would consist of:

  1. Targeted, high volume water spray
  2. Sleeping gas with directed dispersal
  3. Irritant sprays, such as pepper spray, tear gas, etc., with directed dispersal
  4. Acoustical stunners, flash-bangs, “stun bombs”
  5. Targeted net guns, “projectile nets”
  6. Targeted sticky foam, an extremely tacky material carried in compressed form with a propellant
  7. Targeted electrical stunners (Tasers)

 

In addition to providing children and staff in schools a higher level of safety, the school would now have a new kind of school mascot, a NED. The covering could be painted in school colors and designed like the school mascot, if desired. For example, Huber U. Hunt Elementary School’s mascot is a tiger; its NED could have a tiger design, with a verbal UX customized to a pleasing dialect for the students. The NED’s battery recharging pads would be located at various desirable sentry positions throughout the school. Each school’s NED would literally be unique in its use and appearance.

 

Resulting from this LOI, the GeckoNED, for example, would benefit from more powerful, more analytic, more reliable, and more comprehensive AI software, even more situationally aware and autonomous, to provide an even higher level of safety for our school children and other “soft targets,” such as movie theatres, night clubs, etc. This is completely congruent with GeckoSystems’ strategic focus.

 

The Company is also negotiating an investment from a Japanese trading company, KISCO Ltd., and those discussions continue under NDA.

 

“This LOI portends well for us and our shareholders. We are definitively on the path to consummate our first domestic joint venture licensing agreement. It comes as no surprise that a highly advanced NYC AI company understands the market potential of our suite of AI mobile robot solutions.

 

“We continue to have numerous ongoing joint venture and/or licensing discussions, not only in Japan, but also in the US, as revealed in this press release.  I am also pleased that as the Service Robotics industry begins to offer real products to eager markets, our capabilities are being recognized. Our 1300+ shareholders can continue to be confident that we expect to be signing numerous multi-million-dollar licensing agreements to further substantiate and delineate the reality that GeckoSystems will earn additional licensing revenues to further increase shareholder value and ROI,” concluded Spencer.

 

 


 


 


 

An earlier third party verification of GeckoSystems’ AI centric, human quick sense and avoidance of moving and/or unmapped obstacles by one of their mobile robots can be viewed here: http://t.co/NqqM22TbKN

 

An overview of GeckoSystems’ progress containing over 700 pictures and 120 videos can be found at http://www.geckosystems.com/timeline/.

 

These videos illustrate the development of the technology that makes GeckoSystems a world leader in Service Robotics development. Early CareBot prototypes were slower and frequently pivoted in order to avoid a static or dynamic obstacle; later prototypes avoided obstacles without pivoting.   Current CareBots avoid obstacles with a graceful “bicycle smooth” motion.   The latest videos also depict the CareBot’s ability to automatically go faster or slower depending on the amount of clutter (number of obstacles) within its field of view.   This is especially important when avoiding moving obstacles in “loose crowd” situations like a mall or an exhibit area.

 

In addition to the timeline videos, GeckoSystems has numerous YouTube videos. The most popular are those showing room-to-room automatic self-navigation of the CareBot through narrow doorways and a hallway of an old 1954 home. You will see the CareBot slow down when going through the doorways because of their narrow width and then speed up as it crosses the relatively open kitchen area. There are also videos of the SafePath(tm) wheelchair, a migration of the CareBot AI-centric navigation system to a standard power wheelchair; recently developed, cost-effective depth cameras were used in that configuration. SafePath(tm) navigation is now available to OEM licensees, and these videos show the versatility of GeckoSystems’ fully autonomous navigation solution.
GeckoSystems, Star Wars Technology

http://www.youtube.com/watch?v=VYwQBUXXc3g

 

The company has successfully completed an Alpha trial of its CareBot personal assistance robot for the elderly.  It was tested in a home care setting and received enthusiastic support from both caregivers and care receivers.   The company believes that the CareBot will increase the safety and well-being of its elderly charges while decreasing stress on the caregiver and the family.

 

GeckoSystems is preparing for Beta testing of the CareBot prior to full-scale production and marketing.   CareBot has recently incorporated Microsoft Kinect depth cameras that result in a significant cost reduction.

 

Kinect Enabled Personal Robot video:

http://www.youtube.com/watch?v=kn93BS44Das

 

Above, the CareBot demonstrates static and dynamic obstacle avoidance as it backs in and out of a narrow and cluttered alley. There is no joystick control or programmed path; movements are smoother than those achieved using joystick control. GeckoNav creates three low-level layers of obstacle avoidance: reactive, proactive, and contemplative. Subsumptive AI behavior within GeckoNav enables the CareBot to reach its target destination after engaging in obstacle avoidance.
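The layered behavior described here can be sketched as a simple subsumption-style arbiter, where higher-priority layers override lower ones when they fire. The thresholds and layer logic below are invented for illustration and are not GeckoNav’s actual code.

```python
# A minimal sketch of subsumptive arbitration, assuming three behavior
# layers like those the article attributes to GeckoNav (reactive,
# proactive, contemplative). All thresholds are hypothetical.

def reactive(state):
    # Highest priority: an obstacle is dangerously close, stop now.
    if state["nearest_obstacle_m"] < 0.3:
        return "stop"
    return None  # no opinion; defer to lower layers

def proactive(state):
    # Mid priority: steer smoothly around obstacles entering the path.
    if state["nearest_obstacle_m"] < 1.5:
        return "veer"
    return None

def contemplative(state):
    # Lowest priority: keep heading toward the target destination.
    return "advance_to_goal"

LAYERS = [reactive, proactive, contemplative]  # priority order

def choose_action(state):
    """Higher layers subsume (override) lower ones when they fire."""
    for layer in LAYERS:
        action = layer(state)
        if action is not None:
            return action
```

Because the goal-seeking layer only acts when no avoidance layer fires, the robot resumes its route as soon as the obstacle is cleared, which matches the behavior described above.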

 

More information on the CareBot personal assistance robot:

http://www.geckosystems.com/markets/CareBot.php

 

GeckoSystems stock is quoted in the U.S. over-the-counter (OTC) markets under the ticker symbol GOSY.   http://www.otcmarkets.com/stock/GOSY/quote

 

GeckoSystems uses http://www.LinkedIn.com as its primary social media site for investor updates. Here is Spencer’s LinkedIn.com profile:

http://www.linkedin.com/pub/martin-spencer/11/b2a/580

 

 

Telephone:

Main number: +1 678-413-9236

Fax: +1 678-413-9247

Website:  http://www.geckosystems.com/

Source: GeckoSystems Intl. Corp.

 

Safe Harbor:

 

Statements regarding financial matters in this press release other than historical facts are “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, Section 21E of the Securities Exchange Act of 1934, and as that term is defined in the Private Securities Litigation Reform Act of 1995. The Company intends that such statements about the Company’s future expectations, including future revenues and earnings, technology efficacy and all other forward-looking statements be subject to the Safe Harbors created thereby. The Company is a development stage firm that continues to be dependent upon outside capital to sustain its existence. Since these statements (future operational results and sales) involve risks and uncertainties and are subject to change at any time, the Company’s actual results may differ materially from expected results.

 

 

 

 

SparkCognition Launches DeepArmor, First Ever Cognitive Antivirus Solution

Today at Black Hat 2016, SparkCognition is launching DeepArmor, an AI-powered anti-malware platform that promises to protect networks from new and never-before-seen cyber security threats. This marks a major industry advancement: baking advanced artificial intelligence techniques, including neural networks and natural language processing, into antivirus (AV) software. As many as 78% of security professionals no longer trust traditional antivirus because existing solutions cannot keep up with rapidly evolving malware. SparkCognition makes products that identify, analyze, learn, anticipate and adjust to impending and real-time cyber security threats, and the company is exhibiting this week at Black Hat in booth 372.

“Cyber crime is growing beyond our control. According to the Singapore Minister of Home Affairs, Law Shanmugam, an estimated $2 trillion will be lost through cybercrime by 2019,” said Lucas McLane, director of Security Solutions for SparkCognition. “This is a recipe for disaster, and the major reason why both state and federal governments are making cyber security the top priority.”

To combat this growing problem and technological deficiency, SparkCognition has released the industry’s first cognitive antivirus solution, DeepArmor. DeepArmor takes a unique approach to endpoint protection by leveraging neural networks, advanced heuristics, and data science techniques to find and remove malicious files. Instead of looking at static signatures, or even exploding files in a sandbox, DeepArmor looks at the DNA of every file to identify if any components are suspicious or malicious in nature.

“We are using cognitive algorithms to constantly learn new malware behaviors and recognize how polymorphic files may try to attack in the future. This keeps every endpoint safe from malware that leverages domain generation algorithms, obfuscation, packing, minor code tweaks, and many other modern tools,” explained SparkCognition senior product manager, Keith Moore. “This is a necessary defense against potentially devastating Zero-Day threats, which often confound and evade existing tools.”

DeepArmor is powered by cutting-edge technology that represents a quantum leap beyond the techniques used for malware generation and propagation. Drawing on SparkCognition’s proprietary automated model-building algorithms, DeepArmor starts by looking at every unscanned file on a user’s desktop or laptop. It breaks each file into thousands of pieces for initial review. It then elevates the initially identified features using an advanced feature derivation algorithm to develop a comprehensive, multi-dimensional view of behaviors, workflows and techniques. All of these individually analyzed components are then run through continuously evolving ensembles of neural networks to find patterns that may be malicious in nature. Because these neural networks are trained on a bevy of threat types, from worms to ransomware, many malevolent patterns are unearthed and called out immediately, even if the file that contains them has no known-bad signature.
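As a rough illustration of the pipeline the release describes (split a file into pieces, derive features from each piece, run them through an ensemble), here is a toy sketch. The single byte-level feature and the stand-in “models” are assumptions for illustration, not SparkCognition’s actual algorithms.

```python
# Illustrative sketch (not SparkCognition's code) of a chunk -> feature ->
# ensemble-vote pipeline for scoring a file's maliciousness.

def split_into_pieces(data: bytes, size: int = 4):
    """Break the file into fixed-size pieces for individual review."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def derive_feature(piece: bytes) -> float:
    # Toy feature: fraction of high-valued (non-ASCII) bytes in the piece,
    # a crude stand-in for "this region looks packed or encrypted".
    return sum(b > 127 for b in piece) / max(len(piece), 1)

def ensemble_score(features, models) -> float:
    """Each 'model' is a callable returning a probability; average votes."""
    votes = [m(features) for m in models]
    return sum(votes) / len(votes)

# Two stand-in "models" keyed on the mean feature value.
models = [
    lambda fs: 1.0 if sum(fs) / len(fs) > 0.5 else 0.0,
    lambda fs: min(1.0, sum(fs) / len(fs) * 1.2),
]

pieces = split_into_pieces(bytes([200] * 8 + [65] * 8))
features = [derive_feature(p) for p in pieces]
score = ensemble_score(features, models)  # flag the file if score is high
```

A real system would use thousands of learned features and trained neural networks in place of these lambdas, but the shape of the pipeline is the same.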

“We have tailored DeepArmor to operate seamlessly behind the scenes on each endpoint, and to only identify real threats without calling out false positives,” added Moore. “This gives any user the freedom to do what they would like without the fear that their computer may become infected.”

DeepArmor is being made available to 1,000 members of SparkCognition’s beta program. To register for a chance to work with DeepArmor, please visit: http://sparkcognition.com/cognitive-approach-anti-malware/ 

About SparkCognition
SparkCognition, Inc., the world’s first Cognitive Security Analytics company, is based in Austin, Texas. The company is successfully building and deploying a cognitive, data-driven analytics platform for Clouds, Devices and the Internet of Things (IoT) industrial and security markets by applying patent-pending algorithms that deliver out-of-band, symptom-sensitive analytics, insights and security.

SparkCognition was named the 2015 Hottest Start Up in Austin by SXSW and the Greater Austin Chamber of Commerce. The Company was the only US-based company to win Nokia’s 2015 Open Innovation Challenge. In 2015, it was named a Gartner Cool Vendor, and in 2016 SparkCognition garnered the Frost and Sullivan Technology Convergence Award. Recently, the Edison Awards recognized the company’s cyber security achievements. For more, visit http://sparkcognition.com/

Sophia Genetics unveils SOPHiA, the world’s most advanced collective artificial intelligence for Data-Driven Medicine


  • Sophia Genetics unveils SOPHiA, the world’s most advanced artificial intelligence (AI) for Data-Driven Medicine
  • SOPHiA continuously learns from thousands of patients’ genomic profiles, and experts’ knowledge, to improve patients’ diagnostics and treatments
  • SOPHiA’s revolutionary technology will shortly be made available to the member hospitals of Sophia Genetics’ community with the upcoming 4.0 version of the Sophia DDM® platform
  • Thanks to SOPHiA, the 170 hospitals already using Sophia DDM® in 28 countries will immediately benefit from better and faster diagnostics for hundreds of patients every day

LAUSANNE, Switzerland – 27 July 2016 – Sophia Genetics, the global leader in Data-Driven Medicine, today unveiled SOPHiA, the world’s most advanced collective artificial intelligence (AI) for Data-Driven Medicine. A state-of-the-art technology, SOPHiA continuously learns from thousands of patients’ genomic profiles and experts’ knowledge to improve patients’ diagnostics and treatments. The unmatched analytical powers of SOPHiA rely on the genomic information pooled on Sophia DDM®, the world’s largest clinical genomics community for molecular diagnostics, gathering to date 170 hospitals from 28 countries.

Today, Sophia Genetics also revealed results proving how SOPHiA managed to obtain a 98% match with expert clinicians’ variant pathogenicity predictions for BRCA genes mutations, which bear a potential risk of susceptibility to breast cancer. To obtain such quality result, the Swiss technological company’s AI considered data from thousands of patients’ genomic tests, building on the information pooled by hospitals in Sophia DDM®, learning how to predict genomic variants pathogenicity almost the same way a clinical expert does, and evolving as more data became available.

An initial 85% match was obtained with 10,000 patients analysed, improving to a 96% match with 20,000 tests and a 98% match with expert clinicians’ classifications. The final results are based on the genomic profiles of 30,000 patients, containing 28,000 unique genomic variants. The variants considered by SOPHiA were identified and sorted by Sophia Genetics’ three proprietary and patented advanced technologies, PEPPER™, MUSKAT™ and MOKA™, ensuring the 99.9% specificity and sensitivity that oncologists, clinicians and medical specialists need to confidently report clinical genomics variants to their patients.

SOPHiA’s revolutionary technology will soon be available to the hospital and clinician members of the Sophia DDM® community. Moving forward, the secure and private pooling of more patients’ genomic profiles on Sophia DDM® will allow for similar advances from SOPHiA across 40 other genomic disease areas, including oncology, hereditary cancers, cardiology, metabolic disorders and paediatrics, and the advent of a true Data-Driven Medicine for patients.

Speaking about this breakthrough for breast cancer diagnostics and treatment, but also for Data-Driven Medicine as a whole, Jurgi Camblong, CEO and co-founder of Sophia Genetics declared “I am proud to announce that Sophia Genetics is the first company with such genomic variant classification power in molecular diagnostics. SOPHiA facilitates clinical interpretation and its artificial intelligence features give medical experts more time to focus on the study of complex cases. Moving forward, we will use this state-of-the-art technology to apply SOPHiA’s predictive power to the other applications supported on our clinical genomics platform Sophia DDM®. We are already participating in better and faster diagnosing 200 patients every day and we expect SOPHiA’s results presented today to dramatically increase this number by allowing clinicians to offer faster and better diagnostics, and patients to benefit from better treatments”.

About Sophia Genetics

Sophia Genetics, a global leader in Data-Driven Medicine, brings together expertise in genetics, bioinformatics, machine-learning and genomic privacy. Based in Switzerland, the company is known for its high medical standards and Swiss precision when it comes to accuracy and quality management. Sophia Genetics offers health professionals who perform clinical genetic testing bioinformatics analysis, quality assurance, and secure banking of patient DNA sequence data generated by NGS. Sophia Genetics does not hold personal information on patients, and the patient data the company does hold is anonymised. Sophia Genetics helps clinical laboratories to reduce the cost, overcome complexity and fulfil quality constraints related to the use of NGS in the clinic. For more information, visit sophiagenetics.com and follow @SophiaGenetics and @JurgiCamblong.

Media contact:

Tarik Dlala
Sophia Genetics
+41 78 822 29 28
tdlala@sophiagenetics.com

British Computer Society Machine Intelligence Competition 2016

http://bcs-sgai.org/micomp/

After a three-year gap it is with great pleasure that the British Computer Society Specialist Group on Artificial Intelligence (SGAI) relaunches the BCS Machine Intelligence Competition, as part of the Group’s one-day event Real AI 2016.

The eleventh BCS Machine Intelligence competition for live demonstrations of applications that show ‘progress towards machine intelligence’ will be held on Friday October 7th 2016 at the BCS London Office, First Floor, The Davidson Building, 5 Southampton Street, London. The winner will receive a cash prize plus a trophy.

The prize will be awarded on the basis of a 10-15 minute live demonstration (not a paper, not a technical description). The demonstration can be of either software (e.g. a question-answering system or a speech recognition system) or hardware (e.g. a mobile robot).

Full details of the competition and an online entry form are available on the website. There is no entry fee but competitors will be asked to meet their own costs. The closing date is Friday September 9th 2016. However early entry is strongly advised.

Attendance at the competition is free of charge for those attending Real AI 2016 (http://www.bcs-sgai.org/realai2016/). All those attending will be eligible to vote for the winning entry.

Organisers: Ms. Nadia Abouayoub (BCS SGAI) email: nadia_abou@hotmail.com and Prof. Max Bramer (Chair, SGAI) email: max.bramer@port.ac.uk

Pat Inc launches private beta Natural Language Understanding (NLU) API


Led by John Ball, CTO and founder, Wibe Wagemans, CEO, and Professor Robert Van Valin, Jr., CSO, Pat has integrated the Role and Reference Grammar (RRG) model with a patented neural network to build a SaaS platform that helps AI text- and voice-based applications understand meaning.

Just as there is more than one way to skin a cat, there are numerous theories for how to enable artificial intelligence to better understand language. Pat’s approach of combining Role and Reference Grammar (RRG) with a neural network addresses the open NLU problems of word sense disambiguation, context tracking and the otherwise typical combinatorial explosion.

 

Watch Room – A Short Film where AI meets VR

Watch Room is a short film about three scientists who believe they’re creating an AI within the safety of virtual reality, until their creation learns it’s at risk of being shut down. It’s a sci-fi thriller with roots in AI and VR; in a sense, think Ex Machina meets Primer.

Watch Room speaks to the promise and perils of AI, in a way that respects its audience and the complexities of the field. The film also explores the intersection of AI and VR, with an eye towards the future as we begin to interact with AI in virtual environments.

With Watch Room, our goal is to contribute to the budding conversation around the promise and perils of Artificial Intelligence research, in a way that respects the complexities involved. As such, we’ve done our best to create a story that touches on everything from simulation theory, to brain emulation, to Roko’s Basilisk… to that most hallowed of science fiction questions: “What makes us human?”

Another goal of ours is to illustrate the possibilities within the realm of virtual reality.

Of course, Watch Room‘s scientific roots drink deeply from rich dramatic soil. On one level, we’re just plain old excited to make a film that’s a joy to watch: smart and twisting in a way that respects the audience and keeps you guessing right up to the end. It’s the film’s narrative merits that will help it break into the mainstream, joining a growing roster of conscientious sci-fi that treats A.I. as seriously as it deserves.

In short, our mission is one of education as well as entertainment. We need your help in bringing this story and its urgent scientific and ethical message to the world. Many thanks for your consideration!

Soon, humans and AI will be indiscernible, especially in VR. Excited to see @WatchRoomMovie come to life. Donate! http://kck.st/29sh6R9

AI Business Landscape Infographics

Research and capitalization on AI are happening around the planet. Yes, the US has the biggest share, with close to 500 companies working on the progression of AI. However, since the UK, Russia, Canada, Nigeria, Oman and several other countries are home to AI companies, this is by no means an exclusively American innovation.

To provide an overview of the current AI business landscape, Appcessories have created this handy infographic.

Bio:
Max Wegner is the Senior Editor at Appcessories.co.uk and a regular contributor with a keen eye for new inventions, always one step ahead when it comes to technology.

Building a Nervous System for OpenStack – Canonical and Skymind


Big Software is a new class of software composed of so many moving pieces that humans, by themselves, cannot design, deploy or operate them. OpenStack, Hadoop and container-based architectures are all byproducts of Big Software. The only way to address this complexity is with automatic, AI-powered analytics.


Summary

Canonical and Skymind are working together to help System Administrators operate large OpenStack instances. With the growth of cloud computing, the size of data has surpassed human ability to cope with it. In particular, overwhelming amounts of data make it difficult to identify patterns like the signals that precede server failure. Using deep learning, Skymind enables OpenStack to discover patterns automatically, predict server failure and take preventative actions.

Canonical Story

Canonical, the company behind Ubuntu, was founded in March 2004 and launched its Linux distribution six months later. Amazon created AWS, the first public cloud, shortly thereafter, and Canonical worked to make Ubuntu the easiest option for AWS and later public cloud computing platforms.

In 2010, OpenStack was created as the open-source alternative to the public cloud. Quickly, the complexity of deploying and running OpenStack at cloud scale showed that traditional configuration management, which focuses on instances (i.e. machines, servers) rather than running micro-service architectures, was not the right approach. This was the beginning of what Canonical named the Era of Big Software.

Big Software is a class of software made up of so many moving pieces that humans cannot design, deploy and operate alone. It is meant to evoke big data, defined initially as data that cannot be stored on a single machine. OpenStack, Hadoop and container-based architectures are all big software.

The Problem With Big Software

Day 1: Deployment

The first challenge of big software is to create a service model for successful deployment – to find a way to support immediate and successful installations of software on the first day. Canonical has created several tools to streamline this process. Those tools help map software to available resources:

  • MAAS: Metal as a Service which is a provisioning API for bare metal servers.
  • Landscape: Policy and governance tool for large fleets of OS instances.
  • Juju: Service modeling software to model and deploy big software.

Day 2: Operations

Big Software is hard to model and deploy and even harder to operate, which means day 2 operations also need a new approach.

Traditional monitoring and logging tools were designed for operators who only had to oversee data generated by fewer than 100 servers. They would find patterns manually, create SQL queries to catch harmful events, and receive notifications when they needed to act. When NoSQL became available, this improved marginally, since queries would scale.

But that does not solve the core problem today. With Big Software, there is so much data that no human can cope with it and find the patterns of behavior that result in server failure.

AI and the Future of Big Software

This is where AI comes in. Deep learning is the future of day 2 operations. Neural nets can learn from massive amounts of data to find needles in any haystack. Those nets are a tool that vastly extends the power of traditional system administrators, transforming their role.

Initially, neural nets will be a tool to triage logs, surface interesting patterns and predict hardware failure. As humans react to these events and label data (confirming AI predictions), the power to make certain operational decisions will be given to the AI directly: e.g. scale this service in/out, kill this node, move these containers, etc. Finally, as AI learns, self-healing data centers will become standard. AI will eventually be able to modify code to improve and remodel the infrastructure as it discovers better models adapted to the resources at hand.
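The escalation path sketched above (surface a pattern for a human first, act autonomously only at high confidence) might look like the following toy decision rule. The thresholds and action names are hypothetical, not Canonical's or Skymind's actual policy.

```python
# Hedged sketch: mapping a model's predicted failure probability to an
# operational response. Low confidence only logs, mid confidence alerts a
# human operator, and only high confidence triggers autonomous action.

SURFACE_THRESHOLD = 0.5   # worth a human operator's attention
ACT_THRESHOLD = 0.9       # confident enough to act without a human

def decide(failure_probability: float) -> str:
    """Choose an operational response for one node's prediction."""
    if failure_probability >= ACT_THRESHOLD:
        return "migrate_containers_and_drain_node"
    if failure_probability >= SURFACE_THRESHOLD:
        return "alert_operator"
    return "log_only"
```

As operators confirm or reject the mid-confidence alerts, that labeled data can retrain the model, gradually raising the share of decisions the AI is trusted to make on its own.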

The first generation deep-learning solution looks like this: HDFS + Mesos + Spark + DL4J + Spark Notebook. It is an enablement model, so that anyone can do deep learning, but using Skymind on OpenStack is just the beginning.

Ultimately, Canonical wants every piece of software to be scrutinized and learned in order to build the best architectures and operating tools.

Semantic Folding – From Natural Language Processing to Language Intelligence



Semantic Folding Theory is an attempt to develop an alternative computational approach for the processing of language data. Nearly all current methods of natural language understanding use, in some form or other, statistical models to assess the meaning of text and rely on the use of “brute force” over large quantities of sample data. In contrast, Semantic Folding uses a neuroscience-rooted mechanism of distributional semantics that solves both the “Representational Problem” and the “Semantic Grounding Problem”, both well known by AI researchers since the 1980s.

Francisco De Sousa Webber, co-founder of Cortical.io, has developed the theory of Semantic Folding, which is presented in a recently published white paper. It builds on the Hierarchical Temporal Memory (HTM) theory by Jeff Hawkins and describes the encoding mechanism that converts semantic input data into a valid Sparse Distributed Representation (SDR) format.

Douglas R. Hofstadter’s Analogy as the Core of Cognition also inspired the Semantic Folding approach, which uses similarity as a foundation for intelligence. Hofstadter hypothesizes that the brain makes sense of the world by building, identifying and applying analogies. In order to be compared, all input data must be presented to the neo-cortex as a representation that is suited for the application of a distance measure. Semantic Folding applies this assumption to the computation of natural language: by converting words, sentences and whole texts into a Sparse Distributed Representational format (SDR), their semantic meaning can be directly inferred by their relative distances in the applied semantic space.

After capturing a given semantic universe of a reference set of documents by means of a fully unsupervised mechanism, the resulting semantic space is folded into each and every word-representation vector. These word-vectors, called semantic fingerprints, are large, sparsely filled binary vectors. Every feature bit in this vector not only corresponds to but also equals a specific semantic feature of the folded-in semantic space and by this means provides semantic grounding.

The main advantage of using the SDR format is that it allows any data items to be directly compared. In fact, it turns out that by applying Boolean operators and a similarity function, even complex Natural Language Processing operations can be implemented in a very simple and efficient way: each operation is executed in a single step and takes the same, standard amount of time. Because of their small size, semantic fingerprints require only 1/10th of the memory usually required to perform complex NLP operations, which means that execution on modern superscalar CPUs can be orders of magnitude faster. Word-SDRs also offer an elegant way to feed natural language into HTM networks and to build on their predictive modeling capacity to develop truly intelligent applications for sentiment analysis, semantic search or conversational dialogue systems.
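Treating a semantic fingerprint as the set of its “on” bit positions, the Boolean operators and similarity function mentioned above can be sketched in a few lines. The tiny fingerprints below are invented toys; real semantic fingerprints are large, very sparsely filled binary vectors (e.g. thousands of bits).

```python
# Minimal sketch of comparing sparse distributed representations (SDRs),
# modeling each fingerprint as the set of its "on" bit indices.
# The example concepts and bit positions are invented for illustration.

def overlap_similarity(a: set, b: set) -> float:
    """Jaccard-style similarity: shared features over all features."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

dog = {3, 17, 42, 99, 130}
wolf = {3, 17, 42, 110, 131}
car = {5, 60, 200, 310, 401}

# Boolean operators compose meanings directly on the bit sets:
canine = dog & wolf  # features shared by both concepts

sim_dog_wolf = overlap_similarity(dog, wolf)
sim_dog_car = overlap_similarity(dog, car)
```

Because every on-bit stands for a concrete semantic feature, the intersection `canine` is itself an interpretable representation, which is what gives this scheme its "no black box" character.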

Because of the unique attributes of its underlying technology, Semantic Folding solves a number of well-known NLP challenges:

  • Vocabulary mismatch: text comparisons are inherently semantic, based on the topological representation of its 16,000 semantic features.
  • Language ambiguity: the meaning of text is implicitly disambiguated during the aggregation of its constituent word-fingerprints.
  • Time to market: Semantic Folding is accessible through the Retina API, which offers atomic building blocks for a wide range of NLP solutions. The unsupervised training process enables easy adaptation to specific tasks and domains.
  • Black box effects: with Semantic Fingerprints, every single feature has concrete observable semantics. This unique characteristic enables interactive “debugging” of semantic solutions.
  • Solution scalability: use-case-specific semantic spaces enable scaling of a solution across customers and domains with minimum effort. As the representation of meaning in semantic fingerprints is stable across languages, text in different languages can be directly compared, without translation.

To learn more about Semantic Folding and its application to Big Data Semantics, please visit http://cortical.io, experiment with the Sandbox API or Download the Semantic Folding White Paper.

Youtube video

How Artificial Intelligence Can Change Education

Since the beginning of 2016, Jill Watson, an IBM-designed bot, has been helping graduate students at Georgia Institute of Technology solve problems with their design projects. Responding to questions over email and posted on forums, Jill had a casual, colloquial tone and was able to offer nuanced and accurate responses within minutes. A robot taught graduate students for five months and none of them realized. Here are just a few of the artificial intelligence tools and technologies that will shape and define the educational experience of the future.

Duolingo: voice recognition for language learning


 

Duolingo is the world’s most popular platform for learning a language. The app predicts your word strength, figures out which sentences will best help you practice your weakest words and skills, recommends immersion practice documents (translations) based on your progress, and estimates the quality of a translation-in-progress.

Plexuss: college comparison and recruitment platform

Plexuss facilitates contact between universities and future students, and aims to help students make an informed decision when it comes to choosing the right university. It allows users to take a virtual tour of their selected campuses, compare colleges, and chat with universities of their choice. The platform includes a college ranking system, which collates data from trustworthy sources including Forbes, Reuters and Shanghai Ranking. Its algorithm compares data using a variety of criteria, such as in- and out-of-state tuition, acceptance rates and college endowment funds, or more advanced search criteria such as student-to-faculty ratios, SAT score percentiles and environmental sustainability policies. Colleges no longer have to send out expensive and time-consuming recruitment information packs, and are instead able to easily view candidate profiles through the Plexuss website.

Intelligent tutoring system

An intelligent tutoring system (ITS) is a computer system that aims to provide immediate and customized instruction or feedback to learners, usually without intervention from a human teacher. Such systems have been built to help students learn geography, circuits, medical diagnosis, computer programming, mathematics, physics, genetics, chemistry, and more. ITSs share the common goal of enabling learning in a meaningful and effective manner by using a variety of computing technologies. The technology is used in both formal education and professional settings, and aims to reduce students’ over-dependency on teachers for quality education. Intelligent tutoring systems can be especially useful when large groups need to be tutored simultaneously or many replicated tutoring efforts are needed (as in technical training situations such as the training of military recruits, or high school mathematics).

Recognition apps: decode the world with your smartphone

As more schools bring tablets into the classroom, educators are finding that apps are game changers that generate excitement and motivate students. A great example of a recognition app is a rock and mineral identifier, which is full of information for students who are identifying rocks and minerals. If a school doesn’t have access to hands-on materials, such an app can work as a substitute. Some of the most powerful education apps are used for teaching reading and supporting differentiation for students with disabilities (especially those using speech and text recognition).

Woogie: educational companion

Since October 2015, a group of Romanian engineers and programmers has been creating Woogie, a voice-enabled AI device for kids aged between 6 and 12 who are native English speakers. Woogie is planned to pass the MVP phase in the fall of 2016. It will be able to detect, read, process and understand human language, convert text to speech and speech to text, and play radio stations, podcasts and shows appropriate to the user’s age. It will also play music on request or based on learning algorithms. The developers hope that Woogie will help children memorize information from multiple areas through interactivity: it acknowledges the kid’s presence in the room and reacts accordingly. It will also control smart home appliances such as room lights or sound volume, and keep the child up to date on information he or she shows interest in. For example, if the child has a favorite artist, the companion can provide useful news about that artist.

Learning analytics: educational application

Learning analytics is an educational application of web analytics aimed at learner profiling: a process of gathering and analyzing details of individual student interactions in online learning activities. Students often act as direct consumers of learning analytics, particularly through dashboards that support the development of self-regulated learning and insight into one’s own learning. Learning analytics can assist students in course selection, providing a broad range of insight into course materials, student engagement, and student performance. For example, Degree Compass pairs current students with the courses that best fit their talents and program of study for upcoming semesters. Advisors can use this system to identify the students who are at the highest risk of failure.

Personal trainer: fitness with machine learning

Millions of people exercise with poor form, which reduces the effectiveness of their workouts and increases injury risk. Researchers from Stanford University aim to help exercisers improve their form by giving fitness advice with machine learning. They focus on the free-standing squat, a fundamental, full-body exercise where proper form is crucial. Like a personal trainer, such a system can help people exercise with proper form, increasing the effectiveness of their workouts and reducing the risk of injury.
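A minimal sketch of the geometry involved: given hip, knee, and ankle keypoints (which, in a real system, would come from a pose-estimation model), the knee angle tells you squat depth. The keypoints, threshold, and feedback strings below are illustrative assumptions, not the Stanford system:

```python
import math

# Illustrative sketch: judging squat depth from 2D pose keypoints.
# A real system would obtain keypoints from a pose-estimation model.

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

def squat_feedback(hip, knee, ankle, depth_threshold=90.0):
    """Call a squat 'deep enough' when the knee angle closes to the threshold."""
    angle = joint_angle(hip, knee, ankle)
    return "good depth" if angle <= depth_threshold else "go lower"

# A straight leg (hip directly above knee above ankle) gives ~180 degrees:
print(squat_feedback(hip=(0.0, 1.0), knee=(0.0, 0.5), ankle=(0.0, 0.0)))  # "go lower"
```

Form feedback then reduces to comparing joint angles against per-exercise targets, frame by frame.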

Viper: plagiarism checking tool


Plagiarism is defined as the use or close imitation of another author's work presented as your own. To avoid plagiarism, always reference correctly according to your institution's guidelines. Viper is fast becoming the plagiarism checker of choice, with over 10 billion resources scanned and an interface that highlights potential areas of plagiarism in your work. Viper is a free, easy-to-use, side-by-side comparison tool whose developers claim 100% accurate reports.
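The core of any such side-by-side comparison is measuring textual overlap. This is not Viper's actual algorithm, just a rough sketch of the idea using the standard library's `difflib`:

```python
import difflib

# Rough sketch of the overlap check a plagiarism tool performs: compare a
# submission against a source, report a similarity ratio and the longest
# matching run. Illustrative only; not Viper's implementation.

def overlap_report(submission: str, source: str):
    matcher = difflib.SequenceMatcher(None, submission.lower(), source.lower())
    ratio = matcher.ratio()  # 0.0 (no overlap) .. 1.0 (identical)
    match = matcher.find_longest_match(0, len(submission), 0, len(source))
    return ratio, submission[match.a:match.a + match.size]

ratio, longest = overlap_report(
    "Plagiarism is the close imitation of another author's work.",
    "Plagiarism is defined as the use or close imitation of another author's work.",
)
print(f"similarity: {ratio:.0%}; longest shared run: {longest!r}")
```

Production tools add indexing across billions of documents and paraphrase detection, but the highlighted-passages view is essentially this matching run reported per source.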

Automated essay grading

Automated essay grading uses mathematical models to make predictions that closely match those made by human graders. It can be used for essays at an intermediate writing level (grades 7-10). Given enough human-graded training examples for a writing prompt, the system can automate grading for that prompt with fairly good accuracy. Using machine learning to assess human writing can potentially make quality education more accessible. However, the use of automated essay scoring (AES) for high-stakes testing has generated significant backlash, with opponents pointing to research that computers cannot yet grade writing accurately and arguing that such use promotes teaching writing in reductive ways.
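To make the "human-graded training examples" idea concrete, here is a deliberately tiny sketch: extract shallow features from graded essays, then score a new essay by its nearest neighbour in feature space. Real AES systems use far richer features and models; everything here is illustrative:

```python
# Toy automated-essay-scoring sketch: shallow features + nearest neighbour.
# Illustrative only; production systems are considerably more sophisticated.

def features(essay: str):
    words = essay.split()
    return (len(words),                               # essay length
            sum(len(w) for w in words) / len(words),  # average word length
            len(set(w.lower() for w in words)))       # vocabulary size

def predict_grade(essay, graded_examples):
    """graded_examples: list of (essay_text, human_grade) pairs."""
    x = features(essay)
    def dist(y):
        return sum((a - b) ** 2 for a, b in zip(x, y))
    return min(graded_examples, key=lambda ex: dist(features(ex[0])))[1]

graded = [
    ("Cats sit.", 2),
    ("The quick brown fox jumps over the lazy dog while several "
     "curious onlookers applaud enthusiastically.", 5),
]
print(predict_grade("Dogs run.", graded))  # 2
```

The backlash mentioned above is visible even in this sketch: the features reward length and vocabulary, not argument quality, which is exactly the reductive incentive critics warn about.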

Better reading levels

Measuring the reading difficulty of a particular text is a common and salient problem in education, particularly for new or struggling readers. While common-sense measures exist for canonical texts, assigning an appropriate reading-level metric to new resources remains challenging, and current systems have been widely criticized for misrepresenting the difficulty of texts, frustrating students and educators alike. The better-reading-levels approach uses machine learning to reproduce the results of the Lexile Reading Measure (the most popular metric for reading difficulty) and focuses on four features: sentence length, paragraph length, word length and difficulty of vocabulary.
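The four features named above are straightforward to compute. The Lexile formula itself is proprietary, so this sketch only shows the feature extraction, with a made-up proxy for "difficult vocabulary" (words of seven or more letters):

```python
import re

# Sketch of the four reading-level features described in the text.
# The "hard word" proxy (7+ letters) is an illustrative assumption.

def reading_features(text: str):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_sentence_length": len(words) / len(sentences),      # words per sentence
        "avg_paragraph_length": len(sentences) / len(paragraphs),# sentences per paragraph
        "avg_word_length": sum(map(len, words)) / len(words),
        "hard_word_ratio": sum(len(w) >= 7 for w in words) / len(words),
    }

sample = "The cat sat. It was warm.\n\nRemarkable creatures appreciate sunshine."
print(reading_features(sample))
```

A regression model trained on texts with known Lexile scores would then map these four numbers to a predicted reading level.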

One-to-one tutoring has long been considered the most effective approach to teaching, but it would be too expensive to provide for all students. Artificial intelligence could give every child one-to-one tutoring to improve their learning and monitor their well-being. Instead of being examined in traditional ways, children could be assessed more completely by collecting data about their performance over a long period, giving employers and educational institutions a richer picture of their abilities. AI could radically transform the education system, but it needs more funding and a stronger push from academics and governments.

Author: AI.Business Team

Reality, Robots and Religion – Short Course, 16-18 Sept 2016

Short Course at Robinson College, Cambridge, UK: Sept 16-18, 2016

Deadline for applications: August 1, 2016
Deadline for Bursary applications: June 15, 2016

Aim of Course

 

The aim of this weekend event is to address the personal, societal and theological implications of advances in Artificial Intelligence (AI) and Robotics. 

What does it mean to be human in a society increasingly influenced and penetrated by intelligent machines?  

  • What are the potential risks and the benefits of this technology, and how should ethical issues be addressed?
  • How do fictional representations of AI and robotics influence contemporary attitudes and technological priorities?
  • How should traditional religious conceptions of humanity be re-imagined in an age of intelligent machines?
  • To what extent do visions of a posthuman future transformed by technology reflect or replace traditional religious apocalyptic aspirations?

These issues are complex, multifaceted and highly contested.  Our aim is to host a conversation between participants from a range of disciplines, including computing and robotics, sociology, anthropology, ethics and theology.

The residential cost of the course is £290 (£190 non-residential) for full-price participants, £250 (£150 non-residential) for postdocs, and £190 (£90 non-residential) for students.

A limited number of bursaries (scholarships) are available – see the Bursaries section of our website. Student members of Christians in Science (UK) may be eligible for a CiS bursary – for more details contact the CiS Development Officer Emily Sturgess on DO@cis.org.uk

Speakers (listed in alphabetical order) and topics

Click on a speaker’s name to obtain brief biographical details.

GeckoSystems’ SafePath(tm) AI Mobile Robot Demo Secures Licensing Agreement

CONYERS, Ga., May 16, 2016 — GeckoSystems Intl. Corp. (Pink Sheets: GOSY | http://www.geckosystems.com/) announced today that two long time Japanese partners, iXs, Ltd., (iXs) and Fubright Communications Corp. (FCC), demonstrated the company’s BaseBot(tm) mobile robot known as “Lou” to IC Corp., Ltd. (ICCL) senior management last week. For over eighteen years GeckoSystems has dedicated itself to development of “AI Mobile Robot Solutions for Safety, Security and Service(tm).”

 

The demonstration of GeckoSystems’ “loose crowd” level of autonomous mobile robot self-ambulation to the seven CEOs and senior managers of these international robotics firms was an unqualified success. Together they represent over seventy years of experience in complex robotics systems design, deployment and support. While the demo was done at FCC’s R&D lab, “Lou” is being relocated to ICCL’s new facility, three times larger, this week.

 

An earlier third party verification of GeckoSystems’ AI centric, human quick sense and avoidance of moving and/or unmapped obstacles by one of their mobile robots can be viewed here: http://t.co/NqqM22TbKN

 

GeckoSystems’ CEO is traveling to Japan Friday of this week to sign one or more AI software licensing deals as a result of their long time Japanese agent’s (Mr. Fujii Katsuji) representation in Japan.  The increased interest from Japan in the company’s AI mobile robot solutions is due, in part, to the translation of the Company’s Worst Case Execution Time (WCET, aka “reflex” or “reaction” time) white paper from English to Japanese late last year by Dr. Ru Wang, a physicist. That paper explains the importance of GeckoSystems’ breakthrough, proprietary, and exclusive AI software and why this premier Japanese robotics company, ICCL, desires to enter a contractual joint venture relationship with GeckoSystems.

 

“Certainly I am pleased to be going on my second trip to Japan in the last eighteen months. Not only will I be strengthening existing relationships, but consummating at least one, if not two or more, significant licensing agreements,” reflected Martin Spencer, CEO, GeckoSystems Intl. Corp.

 

Last year, on July 8th, FCC published this press release: “Pepper Application R&D About Collaborative R&D of Autonomous Self-Driving Service Robot” http://tinyurl.com/hlqz6bw

 

Here are the noteworthy excerpts from this press release:

——————————————————————————————

“Fubright Communications Co., Ltd., Tokyo Japan and GeckoSystems Intl. Corp., the Service Robot Development company of the United States have agreed to do collaboration in R&D and marketing of the advanced safe autonomous self-traveling service robot.

 

“Fubright Communications Inc. a well-known specialist of nursing care service system will aim at the area especially elderly care / watch field and develop a service robot that reduces the burden of the elderly / nursing care workers using advanced AI technologies which GeckoSystems, Inc. has been developing over the years.

 

“Both companies are confident that their advanced safe service robot will contribute to the Japan rapidly aging society helping elderlies live safer and easier.”

——————————————————————————————

 

Having the support of both iXs and FCC, and now ICCL, further confirms GeckoSystems’ expertise to potential joint venture partners and licensees in the Pacific Rim.

 

“We are very much looking forward to meet with Mr. Spencer and discuss the large Japanese market for ‘welfare robots,'” stated Mr. Takashi Nabeta, CEO, ICCL.

 

GeckoSystems has had their safety clause Non-Disclosure Agreement (NDA) with iXs Research Corp. since April of 2013 and with Fubright Communications, Ltd. since April of 2015. IC Corp. Ltd. has been under NDA since December of 2015.  GeckoSystems effectuated a Memorandum of Understanding (MOU) with iXs in May of 2013: http://tinyurl.com/hhsc5c8  The MOU is significant due to iXs’ stature as an exporter of several robotic systems and subsystem products that are sold globally. Further, iXs designs and manufactures its own line of humanoid robots in addition to components for their domestic Japanese robot industry.

 

The Japanese government is very concerned about its “Silver Tsunami.” At this time, approximately 2,200,000 Japanese over 65 live alone. Their greatest fear is to die alone and for their demise to go unnoticed for days. For this reason and many others, the Japanese government pays 90% of the cost of personal robots used for eldercare so that this concern is well addressed. The government is also paying 75% of the R&D costs to develop robotic healthcare solutions that provide more economical caregiving for its extraordinarily large senior population. This recent article further underscores Japan’s commitment to eldercare-capable ‘welfare’ robots: “Japan govt to urge nursing care robot development” http://tinyurl.com/oehxdba

 

In order for any companion robot to be utilitarian for family care, it must be a “three legged milk stool.”  For any mobile robot to move in close proximity to humans, it must have:

(1) Human quick reflex time to avoid moving and/or unmapped obstacles, (GeckoNav(tm): http://tinyurl.com/le8a39r) (See the importance of Worst Case Execution Time (WCET) discussion below.)

(2) Verbal interaction (GeckoChat(tm): http://tinyurl.com/nnupuw7) with a sense of date and time (GeckoScheduler(tm): http://tinyurl.com/kojzgbx), and

(3) Ability to automatically find and follow designated parties (GeckoTrak(tm): http://tinyurl.com/mton9uh) such that verbal interaction can occur routinely with video and audio monitoring of the care receiver uninterrupted.

 

Spencer recently met with local representatives of the Japan Export Trade Organization (JETRO) in Atlanta, GA.  JETRO was founded in 1951 by the Japanese government to facilitate international trade with Japan.  As a result of that meeting, Messrs. Nabeta, Fujii and Spencer will be meeting with JETRO representatives in Tokyo on Tuesday May 31st to discuss the JETRO subsidies available for Japanese eldercare robot product development.

“Certainly, on both sides of the Pacific, we are doing as much as is prudent to maximize the benefit of the money and time spent going to Japan. This new JV continues to progress robustly. After many years of patience by our current 1300+ stockholders, they can be confident that this new, multi-million-dollar licensing agreement, to be signed while I am in Japan, means GeckoSystems will enjoy additional licensing revenues that further increase shareholder value,” concluded Spencer.

 

 

The safety requirement for human quick WCET reflex time in all forms of mobile robots:

 

In order to understand the importance of GeckoSystems’ breakthrough, proprietary, and exclusive AI software and why another Japanese robotics company desires a business relationship with GeckoSystems, it’s key to acknowledge some basic realities for all forms of automatic, non-human intervention, vehicular locomotion and steering.

 

  1. Laws of Physics such as Conservation of Energy, inertia, and momentum, limit a vehicle’s ability to stop or maneuver. If, for instance, a car’s braking system design cannot generate enough friction for a given road surface to stop the car in 100 feet after brake application, that’s a real limitation.  If a car cannot corner at more than .9g due to a combination of suspension design and road conditions, that, also, is reality.  Regardless how talented a NASCAR driver may be, if his race car is inadequate, he’s not going to win races.

 

  2. At the same time, if a car driver (or pilot) is tired, drugged, distracted, etc. their reflex time becomes too slow to react in a timely fashion to unexpected direction changes of moving obstacles, or the sudden appearance of fixed obstacles. Many car “accidents” result from drunk driving due to reflex time and/or judgment impairment. Average reflex time takes between 150 & 300ms. http://tinyurl.com/nsrx75n

 

  3. In robotic systems, “human reflex time” is known as Worst Case Execution Time (WCET). Historically, in computer systems engineering, WCET of a computational task is the maximum length of time the task could take to execute on a specific hardware platform.  In big data, this is the time to load up the data to be processed, processed, and then outputted into useful distillations, summaries, or common sense insights.  GeckoSystems’ basic AI self-guidance navigation system processes 147 megabytes of data per second using low cost, Commercial Off The Shelf (COTS) Single Board Computers (SBC’s).

 

  4. Highly trained and skilled jet fighter pilots have a reflex time (WCET) of less than 120ms. Their “eye to hand” coordination time is a fundamental criterion for them to be successful jet fighter pilots. The same holds true for all high performance forms of transportation that are sufficiently pushing the limits of the Laws of Physics to require the quickest possible reaction time for safe human control and/or usage.

 

  5. GeckoSystems’ WCET is less than 100ms, as quick as or quicker than most gifted jet fighter pilots, NASCAR race car drivers, etc., while using low cost COTS SBC’s.

 

  6. In mobile robotic guidance systems, WCET has three fundamental components:
     (1) sufficient Field of View (FOV) with appropriate granularity, accuracy, and update rate;
     (2) rapid processing of that contextual data such that common sense responses are generated; and
     (3) timely physical execution of those common sense responses.
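The distinction between average and worst-case latency can be made concrete. True WCET is established by static analysis of the target platform, not by sampling, but an empirical sketch of observing the worst case of a sense-process-act loop illustrates why the worst case, not the average, is the safety number:

```python
import time

# Illustrative only: empirically observing the worst-case latency of a
# sense-process-act control cycle. Real WCET analysis is static, per
# platform; this just makes the concept concrete.

def control_cycle():
    # Stand-in for: read sensors, update the obstacle map, pick a heading.
    sum(i * i for i in range(10_000))

worst = 0.0
for _ in range(200):
    start = time.perf_counter()
    control_cycle()
    elapsed = time.perf_counter() - start
    worst = max(worst, elapsed)   # the slowest cycle observed so far

print(f"observed worst-case cycle time: {worst * 1000:.2f} ms")
```

A navigation stack targeting a sub-100 ms reflex budget has to keep this number, plus sensor and actuator latency, under that budget on every cycle, not merely on average.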

 

 

About GeckoSystems:

 

GeckoSystems has been developing innovative robotic technologies for fifteen years.  It is CEO Martin Spencer’s dream to make people’s lives better through robotic technology.

 

An overview of GeckoSystems’ progress containing over 700 pictures and 120 videos can be found at http://www.geckosystems.com/timeline/.

 

These videos illustrate the development of the technology that makes GeckoSystems a world leader in Service Robotics development. Early CareBot prototypes were slower and frequently pivoted in order to avoid a static or dynamic obstacle; later prototypes avoided obstacles without pivoting.   Current CareBots avoid obstacles with a graceful “bicycle smooth” motion.   The latest videos also depict the CareBot’s ability to automatically go faster or slower depending on the amount of clutter (number of obstacles) within its field of view.   This is especially important when avoiding moving obstacles in “loose crowd” situations like a mall or an exhibit area.

 

In addition to the timeline videos, GeckoSystems has numerous YouTube videos. The most popular of which are the ones showing room-to-room automatic self-navigation of the CareBot through narrow doorways and a hallway of an old 1954 home.  You will see the CareBot slow down when going through the doorways because of their narrow width and then speed up as it goes across the relatively open kitchen area.  There are also videos of the SafePath(tm) wheelchair, which is a migration of the CareBot AI centric navigation system to a standard power wheelchair, and recently developed cost effective depth cameras were used in this recent configuration.  SafePath(tm) navigation is now available to OEM licensees and these videos show the versatility of GeckoSystems’ fully autonomous navigation solution.

GeckoSystems, Star Wars Technology

http://www.youtube.com/watch?v=VYwQBUXXc3g

 

The company has successfully completed an Alpha trial of its CareBot personal assistance robot for the elderly.  It was tested in a home care setting and received enthusiastic support from both caregivers and care receivers.   The company believes that the CareBot will increase the safety and well-being of its elderly charges while decreasing stress on the caregiver and the family.

 

GeckoSystems is preparing for Beta testing of the CareBot prior to full-scale production and marketing.   CareBot has recently incorporated Microsoft Kinect depth cameras that result in a significant cost reduction.

 

Kinect Enabled Personal Robot video:

http://www.youtube.com/watch?v=kn93BS44Das

 

Above, the CareBot demonstrates static and dynamic obstacle avoidance as it backs in and out of a narrow and cluttered alley.  There is no joystick control or programmed path; movements are smoother than those achieved using a joystick control.  GeckoNav creates three low levels of obstacle avoidance: reactive, proactive, and contemplative.  Subsumptive AI behavior within GeckoNav enables the CareBot to reach its target destination after engaging in obstacle avoidance.

 

More information on the CareBot personal assistance robot:

http://www.geckosystems.com/markets/CareBot.php

 

GeckoSystems stock is quoted in the U.S. over-the-counter (OTC) markets under the ticker symbol GOSY.   http://www.otcmarkets.com/stock/GOSY/quote

 

Here is a stock message board devoted to GOSY recommended by us:

http://investorshangout.com/board/62282/Geckosystems+Intl+Co-GOSY

 

GeckoSystems uses http://www.LinkedIn.com as its primary social media site for investor updates. Here is Spencer’s LinkedIn.com profile:

http://www.linkedin.com/pub/martin-spencer/11/b2a/580

 

 

Telephone:

Main number: +1 678-413-9236

Fax: +1 678-413-9247

Website:  http://www.geckosystems.com/

Source: GeckoSystems Intl. Corp.

 

Safe Harbor:

 

Statements regarding financial matters in this press release other than historical facts are “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, Section 21E of the Securities Exchange Act of 1934, and as that term is defined in the Private Securities Litigation Reform Act of 1995. The Company intends that such statements about the Company’s future expectations, including future revenues and earnings, technology efficacy and all other forward-looking statements be subject to the Safe Harbors created thereby. The Company is a development stage firm that continues to be dependent upon outside capital to sustain its existence. Since these statements (future operational results and sales) involve risks and uncertainties and are subject to change at any time, the Company’s actual results may differ materially from expected results.

GeckoSystems’ Japanese Partners Demonstrating Multi-Million Dollar AI Robot Technologies

CONYERS, Ga., May 2, 2016 — GeckoSystems Intl. Corp. (Pink Sheets: GOSY | http://www.geckosystems.com/) announced today that two of their Japanese business partners will be demonstrating their BaseBot(tm), “Lou,” to a new business partner prior to the CEO’s trip to Japan. For over eighteen years GeckoSystems has dedicated itself to development of “AI Mobile Robot Solutions for Safety, Security and Service(tm).”

 

“I am pleased to report that due to the continued hard work of one of our Japanese representatives, Mr. Fujii Katsuji, we have again achieved demonstrable progress securing viable joint ventures in Japan. This latest demonstration, to one of several joint ventures being entertained, is particularly significant due to the breadth and depth of the robotics expertise of ICCL (http://www.ic-corp.jp/) and their insistence that we meet them in Japan as soon as is prudent to sign the JV agreement,” commented Martin Spencer, CEO, GeckoSystems Intl. Corp.

 

The demonstration of GeckoSystems’ BaseBot, “Lou,” is scheduled for May 13th in Japan. Spencer will be traveling to Japan on May 20th and expects to return in early to mid-June, in order to have sufficient time to meet with present JV partners, support ICCL in their million-dollar grant submission to the Japanese government, and meet with potential new licensees, such as the Japanese trading company mentioned earlier.

 

Here is a third party video demonstrating the high level of mobile safety that GeckoSystems’ advanced, proprietary, AI centric sense-and-avoid mobile robot technology can provide for drones, self-driving cars, AGV’s, and mobile robots of all forms, due to the human quick reaction time (Worst Case Execution Time) of their GeckoNav(tm) AI navigation system: http://t.co/NqqM22TbKN

 

Late last year, GeckoSystems had their white paper on Worst Case Execution (reflex or reaction) Time sufficient for mobile service robots’ safe usage proximate to humans, translated into Japanese. Mr. Katsuji has been presenting that seminal discussion to many Japanese companies.

 

That paper explains the importance of GeckoSystems’ breakthrough, proprietary, and exclusive AI software and why this premier Japanese robotics company, ICCL, desires to enter a contractual joint venture relationship with GeckoSystems.

 

In order to understand the importance of GeckoSystems’ breakthrough, proprietary, and exclusive AI software and why another Japanese company desires a business relationship with GeckoSystems, it’s key to acknowledge some basic realities for all forms of automatic, non-human intervention, vehicular locomotion and steering.

 

  1. Laws of Physics such as Conservation of Energy, inertia, and momentum, limit a vehicle’s ability to stop or maneuver. If, for instance, a car’s braking system design cannot generate enough friction for a given road surface to stop the car in 100 feet after brake application, that’s a real limitation. If a car cannot corner at more than .9g due to a combination of suspension design and road conditions, that, also, is reality. Regardless how talented a NASCAR driver may be, if his race car is inadequate, he’s not going to win races.

 

  2. At the same time, if a car driver (or pilot) is tired, drugged, distracted, etc. their reflex time becomes too slow to react in a timely fashion to unexpected direction changes of moving obstacles, or the sudden appearance of fixed obstacles. Many car “accidents” result from drunk driving due to reflex time and/or judgment impairment. Average reflex time takes between 150 & 300ms. http://tinyurl.com/nsrx75n

 

  3. In robotic systems, “human reflex time” is known as Worst Case Execution Time (WCET). Historically, in computer systems engineering, WCET of a computational task is the maximum length of time the task could take to execute on a specific hardware platform. In big data, this is the time to load up the data to be processed, processed, and then outputted into useful distillations, summaries, or common sense insights. GeckoSystems’ basic AI self-guidance navigation system processes 147 megabytes of data per second using low cost, Commercial Off The Shelf (COTS) Single Board Computers (SBC’s).

 

  4. Highly trained and skilled jet fighter pilots have a reflex time (WCET) of less than 120ms. Their “eye to hand” coordination time is a fundamental criterion for them to be successful jet fighter pilots. The same holds true for all high performance forms of transportation that are sufficiently pushing the limits of the Laws of Physics to require the quickest possible reaction time for safe human control and/or usage.

 

  5. GeckoSystems’ WCET is less than 100ms, as quick as or quicker than most gifted jet fighter pilots, NASCAR race car drivers, etc., while using low cost COTS SBC’s.

 

  6. In mobile robotic guidance systems, WCET has three fundamental components:
     (1) sufficient Field of View (FOV) with appropriate granularity, accuracy, and update rate;
     (2) rapid processing of that contextual data such that common sense responses are generated; and
     (3) timely physical execution of those common sense responses.

 

In order for any companion robot to be utilitarian for family care, it must be a “three legged milk stool.”  For any mobile robot to move in close proximity to humans, it must have (1) human quick reflex time to avoid moving and/or unmapped obstacles, (2) verbal interaction with a sense of date and time, and (3) the ability to automatically find and follow designated parties such that verbal interaction can occur routinely with video and audio monitoring of the care receiver uninterrupted.

 

At this time, approximately 2,200,000 Japanese over 65 live alone. Their greatest fear is to die alone and for their demise to go unnoticed for days. For this reason and many others, the Japanese government pays 90% of the cost of personal robots used for eldercare so that this concern is well addressed. Further, the Japanese government is paying 75% of the R&D costs to develop robotic healthcare solutions that provide more economical caregiving for its extraordinarily large senior population. This recent article further underscores Japan’s commitment to eldercare-capable ‘welfare’ robots: “Japan govt to urge nursing care robot development” http://tinyurl.com/oehxdba

 

“We are very much looking forward to meet with Mr. Spencer and discuss the large Japanese market for ‘welfare robots,'” stated Mr. Takashi Nabeta, CEO, ICCL.

 

GeckoSystems has already done primary market research, focus group market research, and the most extensive in home personal robot trials in the world.

 

Due to GeckoSystems’ world-first in-home personal mobile robot trials, conducted and documented, management is confident that GeckoSystems and ICCL have the “right stuff” to be very synergistic in readily satisfying the Japanese government’s requirements for an eldercare-capable mobile robot R&D grant.

 

GeckoSystems’ world’s first in home trials began in 2009:

 

In Home Elder Care Robot Trials Begin

Elder Care Robot Trials Begun

GeckoSystems’ Elder Care Robot Trials, Week One

Grandma Reacts to GeckoSystems’ Elder Care Robot Trials

Grandma Interacts During GeckoSystems’ Elder Care Robot Trials

Robot Safety Applauded During GeckoSystems’ Elder Care Robot Trials

GeckoSystems’ Elder Care Robot Trials Reveal Grandma’s Hearing Loss

 

Continued into 2010:

 

GeckoSystems’ Elder Care Robot Trials Resume After Holiday Break

GeckoSystems’ Elder Care Robot Trials Revealing Unexpected Family Benefits

GeckoSystems Employs Sensor Fusion in Elder Care Robot Trials

GeckoSystems Discusses Expansion and Duration of Elder Care Robot Trials

GeckoSystems’ Develops New GeckoScheduler(tm) for Elder Care Robot Trials

GeckoSystems’ Representative Comments on Japanese Interest in Elder Care Robot Trials

GeckoSystems’ Elder Care Robot Trials Result in More Japanese Interest

GeckoSystems Improves Elder Care Robot Trials

GeckoSystems Advances Technologies Due to Elder Care Robot Trials

GeckoSystems Improves AI Savant Management Due to Elder Care Robot Trials

GeckoSystems Cost Reduces Sensor Fusion GeckoSPIO(tm) Due to Elder Care Robot Trials

GeckoSystems Releases World’s First Elder Care Robot Trial Videos

GeckoSystems’ CEO Updates Stockholders on Progress Due to Elder Care Robot Trials

GeckoSystems Improves CareBot(tm) Due to Elder Care Robot Trials

GeckoSystems’ Elder Care Robot Trials’ Caregiver Praises New GeckoScheduler(tm)

GeckoSystems’ Elder Care Robot Trial Caregiver Shares New Insights

GeckoSystems’ Elder Care Robot Trial Caregiver “Looks in” on Mother While Shopping

GeckoSystems Improves Sensor Fusion Due to Elder Care Robot Trials

GeckoSystems Advances Artificial Intelligence Due to Elder Care Robot Trials

GeckoSystems Reduces Sensor Fusion Costs Due to Elder Care Robot Trials

GeckoSystems’ Sensor Fusion Breakthrough Lowers Personal Robot Costs

 

The benefit of a companion robot capable of safely running errands and/or automatically following the care receiver requires real time sense and avoid of moving and/or unmapped obstacles. This is a functional necessity for a sufficient value proposition for ready adoption and sales. This linchpin requirement is why ICCL is jointly submitting with GeckoSystems.

 

GeckoSystems developed its SafePath(tm) AI mobile robot navigation technologies some years ago, built on its breakthrough GeckoNav(tm) technology, to address those very important requirements for any mobile robot to be truly utilitarian (convenient like a home appliance) while remaining cost effective.

 

Prior to this agreement to form a JV to jointly migrate GeckoSystems AI mobile robot self-driving solutions to the Japanese marketplace, ICCL signed an NDA with GeckoSystems that includes this necessary Safety Clause:

——————————————————————————————

Both parties understand and agree with the general concerns that mobile robot solutions may be used to lethally harm persons, other living things, property, and a country’s infrastructure if terrorists, criminals, or other private or public enemies of peace, security, and tranquility were to secure access to and/or use of them. Therefore, both parties completely agree that MSR safety is of the greatest importance in the utilization of MSR technologies. All MSR technologies shared by both parties in any manner will be treated with the utmost secrecy and respect due to that reality and potential.

——————————————————————————————

 

GeckoSystems has been acknowledged by an internationally recognized market research firm at least once in each of the last five years, ranked among the top three to top eight service robotics companies in the world.

2015: GeckoSystems Featured as One of Five Key Vendors in Mobile Robotics Market

2014: GeckoSystems Featured as One of Six Key Market Players in Mobile Robotics Industry

2013: GeckoSystems, an AI Mobile Robot Company, Receives 1 of 3 Recognition

2012: GeckoSystems Named One of Eight Key Market Players in Service Robotics Industry

 

While GeckoSystems’ AI mobile robot solutions have gone largely unnoticed in the US, many negotiations are ongoing in Japan and Europe due to the solutions’ robust utility and portability to virtually all forms of mobile robots, whether air, land, or sea. That includes drones, self-driving cars, and essentially all mobility systems that must reliably avoid any obstacle that the reflexes of a highly skilled and experienced jet fighter pilot could readily evade.

 

GeckoSystems has had their safety clause Non Disclosure Agreement (NDA) with iXs Research Corp. since April of 2013 and with Fubright Communications, Ltd. since April of 2015. IC Corp. Ltd. has been under NDA since December of 2015.

 

“During these unforeseen delays, due to the continued hard work of two of our Japanese representatives, Messrs. Fujii Katsuji and Tsunenori Kato, CEO, Ifoo Company Limited, we have again achieved demonstrable progress securing viable licensing agreements in Japan. This latest, one of several being negotiated, is particularly significant due to the breadth, depth and heritage of this nearly 100-year-old Japanese trading company,” stated Spencer.

 

Both companies are certain that their advanced mobile service robot will contribute to Japan’s rapidly aging society by helping seniors live more safely and easily, and that this will be recognized by the Japanese reviewers through approval of this $1,000,000 grant submission.

 

Recently, a premier Japanese government trade organization has expressed interest in assisting GeckoSystems in exporting to the Japanese market. A near-term meeting in Atlanta, Georgia is being scheduled to determine its probable level of assistance.

 

“Certainly, on both sides of the Pacific, we are doing as much as is prudent and feasible to maximize the benefit of the money and time spent in going to Japan. This demonstration, performed prior to my arrival, allows us to proceed in our multi-faceted negotiations forthwith during my stay there. After many years of patience, our 1300+ stockholders can remain completely confident that present management will update them routinely and work to maximize their investments in GeckoSystems, whether through organic growth or acquisition at a rewarding premium,” concluded Spencer.

 

 

Recent third party market research:

 

Service Robotics Market (Professional and Personal), by Application (Defense, Agriculture, Medical, Domestic & Entertainment), & by Geography – Analysis Forecast (2014 – 2020)

 

Robotic systems are seen as future assistants designed to help people do what they want to do in a natural and spontaneous manner. Moreover, with the emergence of ubiquitous computing and communication environments, robots will be able to call upon an unlimited knowledge base and coordinate their activities with other devices and systems. Additionally, the growing spread of ubiquitous computing will lead to robot technologies being embedded into ubiquitous ICT networks, becoming human agents of physical action that enhance and extend our physical capabilities and senses.

 

The report gives a detailed view of the complete service robotics industry with regard to professional and personal applications as well as by geography. Apart from the market segmentation, the report also includes critical market data and qualitative information for each product type, along with qualitative analysis such as Porter’s Five Forces analysis, market time-line analysis, industry breakdown analysis, and value chain analysis. The global service robotics market is estimated to reach $19.41 billion by 2020, growing at a CAGR of 21.5% from 2014 to 2020.

 

Global Service Robot Market 2014-2018: Key Vendors are GeckoSystems, Honda Motor, iRobot and Toyota Motor

 

Worldwide Service Robot Market 2018 Analysis & Forecasts Report

The report recognizes the following companies as the key players in the Global Service Robot Market: GeckoSystems Intl. Corp., Honda Motor Co. Ltd., iRobot Corp. and Toyota Motor Corp.

 

From Forbes:

Investors Take Note, The Next Big Thing Will Be Robots

 

BusinessInsider makes some key points:

 

*      The multibillion-dollar global market for robotics, long dominated by industrial and logistics uses, has begun to see a shift toward new consumer and office applications.   There will be a $1.5 billion market for consumer and business robots by 2019.

 

*      The market for consumer and office robots will grow at a CAGR of 17% between 2014 and 2019, seven times faster than the market for manufacturing robots.

 

Note: BusinessInsider.com’s forecasts do not include pent-up demand for family care social robots anywhere in the world.

 

 

About GeckoSystems:

 

GeckoSystems has been developing innovative robotic technologies for over eighteen years.  It is CEO Martin Spencer’s dream to make people’s lives better through AI robotic technology.

 

An overview of GeckoSystems’ progress containing over 700 pictures and 120 videos can be found at http://www.geckosystems.com/timeline/.

 

These videos illustrate the development of the technology that makes GeckoSystems a world leader in Service Robotics development. Early CareBot prototypes were slower and frequently pivoted in order to avoid a static or dynamic obstacle; later prototypes avoided obstacles without pivoting.   Current CareBots avoid obstacles with a graceful “bicycle smooth” motion.   The latest videos also depict the CareBot’s ability to automatically go faster or slower depending on the amount of clutter (number of obstacles) within its field of view.   This is especially important when avoiding moving obstacles in “loose crowd” situations like a mall or an exhibit area.

 

In addition to the timeline videos, GeckoSystems has numerous YouTube videos, the most popular of which show room-to-room automatic self-navigation of the CareBot through narrow doorways and a hallway of an old 1954 home. You will see the CareBot slow down when going through the doorways because of their narrow width and then speed up as it crosses the relatively open kitchen area. There are also videos of the SafePath(tm) wheelchair, a migration of the CareBot AI-centric navigation system to a standard power wheelchair; recently developed, cost-effective depth cameras were used in this configuration. SafePath navigation is now available to OEM licensees, and these videos show the versatility of GeckoSystems’ fully autonomous navigation solution.

 

The company has successfully completed an Alpha trial of its CareBot personal assistance robot for the elderly.  It was tested in a home care setting and received enthusiastic support from both caregivers and care receivers.   The company believes that the CareBot will increase the safety and well being of its elderly charges while decreasing stress on the caregiver and the family.

 

CareBot has incorporated Microsoft Kinect depth cameras that result in a significant cost reduction.

Kinect Enabled Personal Robot video

 

Above, the CareBot demonstrates static and dynamic obstacle avoidance as it backs in and out of a narrow and cluttered alley.  There is no joystick control or programmed path; movements are smoother than those achieved using a joystick control.  GeckoNav creates three low levels of obstacle avoidance: reactive, proactive, and contemplative.  Subsumptive AI behavior within GeckoNav enables the CareBot to reach its target destination after engaging in obstacle avoidance.
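The layered behavior described above follows the general pattern of a subsumption architecture, in which higher-priority behaviors suppress lower ones. As a rough illustration only (the function names, thresholds, and sensor fields below are hypothetical, not GeckoNav's actual API), a priority arbiter over three such layers might look like this:

```python
# Illustrative subsumption-style arbiter: higher-priority layers
# suppress lower ones. All names and thresholds are hypothetical.

def reactive(sensors):
    # Highest priority: emergency stop if an obstacle is dangerously close.
    if sensors["nearest_obstacle_m"] < 0.3:
        return {"speed": 0.0, "turn": 0.0}
    return None  # defer to lower-priority layers

def proactive(sensors):
    # Middle layer: steer away from obstacles inside a comfort zone.
    if sensors["nearest_obstacle_m"] < 1.5:
        return {"speed": 0.4, "turn": 0.5 * sensors["obstacle_side"]}
    return None

def contemplative(sensors):
    # Default layer: head toward the goal, slowing down as clutter increases
    # (mirroring the speed-vs-clutter behavior described in the text).
    speed = max(0.2, 1.0 - 0.1 * sensors["clutter_count"])
    return {"speed": speed, "turn": sensors["bearing_to_goal"]}

LAYERS = [reactive, proactive, contemplative]  # priority order

def arbitrate(sensors):
    # The first layer that asserts a command wins.
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command
```

This is only a sketch of the arbitration idea; a real navigator blends layers continuously rather than switching between them discretely.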

 

More information on the CareBot AI mobile companion robot:

http://www.geckosystems.com/markets/CareBot.php

 

GeckoSystems stock is quoted in the U.S. over-the-counter (OTC) markets under the ticker symbol GOSY.   http://www.otcmarkets.com/stock/GOSY/quote

 

GeckoSystems uses LinkedIn and Twitter as its primary social media site for investor updates.

Spencer’s LinkedIn.com profile    Spencer tweets as @GrandpaRobot

 

Telephone:

Main number: +1 678-413-9236

Fax: +1 678-413-9247

Website:  http://www.geckosystems.com/

 

 

Safe Harbor:

 

Statements regarding financial matters in this press release other than historical facts are “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, Section 21E of the Securities Exchange Act of 1934, and as that term is defined in the Private Securities Litigation Reform Act of 1995. The Company intends that such statements about the Company’s future expectations, including future revenues and earnings, technology efficacy and all other forward-looking statements be subject to the Safe Harbors created thereby. The Company is a development stage firm that continues to be dependent upon outside capital to sustain its existence. Since these statements (future operational results and sales) involve risks and uncertainties and are subject to change at any time, the Company’s actual results may differ materially from expected results.

 

Source: GeckoSystems Intl. Corp.

 

 

GeckoSystems, an AI Robotics Co., Expands Japanese Licensing Discussions

CONYERS, GA – (April 21, 2016) – GeckoSystems Intl. Corp. (OTC: GOSY) announced that additional licensing agreements will be negotiated while the CEO is in Japan late next month. For over eighteen years GeckoSystems has dedicated itself to development of “AI Mobile Robot Solutions for Safety, Security and Service(tm).”

Initially this trip was scheduled for March, but due to an unfortunate accident, their long time Japanese representative was incapacitated. Concurrently, IC-Japan secured a three times larger facility to better support the joint venture, but the relocation of their office and laboratories also delayed the previously scheduled March meetings. GeckoSystems’ CEO was invited by the CEO of this prominent Japanese robotics company, IC-Corp., to meet for the purpose of signing a licensing agreement.

“During these unforeseen delays, due to the continued hard work of two of our Japanese representatives, Mssrs. Fujii Katsuji and Tsunenori Kato, CEO, Ifoo Company Limited, we have again achieved demonstrable progress securing viable licensing agreements in Japan. This latest, one of several being negotiated, is particularly significant due to the breadth, depth and heritage of this nearly 100-year-old Japanese trading company,” stated Martin Spencer, CEO, GeckoSystems Intl. Corp.

Mr. Katsuji identified and contacted IC-Japan while looking for technologists with the appropriate education, skills and experience to assist Fubright Communications, Ltd. and the company in migrating its automatic self-navigation mobile robot software, GeckoNav(tm), to SoftBank Robotics’ Pepper robot, giving Pepper cost-effective, utilitarian mobility and making it less of a novelty and more practical in its benefits and value proposition.

At this time, there are approximately 2,200,000 Japanese over 65 living alone. Their greatest fear is dying alone, with their death going unnoticed for days. For this reason and many others, the Japanese government pays 90% of the cost of personal robots used for eldercare so that this concern is well addressed. Further, the Japanese government is paying 75% of the R&D costs to develop robotic healthcare solutions for greater productivity and more economical caregiving for its extraordinarily large senior population.

This new partner is unsure of the mass appeal of the Pepper robot with its present value proposition and wishes to investigate other, proximate market opportunities. It believes there is a significant near-term market in Japan for eldercare robots and wants to explore all scenarios, including, but not limited to, the Pepper robot:

“We are very much looking forward to meet with Mr. Spencer and discuss the large Japanese market for ‘welfare robots,'” stated Mr. Takashi Nebeta, CEO, IC-Japan.

The company has already begun the technology transfer of its proprietary AI mobile robot technology with the GeckoMotorController(tm) (GMC). (The company’s seventh-generation GMC uses a proprietary self-adaptive constant energy paradigm to achieve extraordinarily smooth acceleration and deceleration of the company’s mobile service robots. A jerky and/or seemingly unpredictable moving robot can be both distracting and disturbing for the people or animals that observe or interact with it.)

Late last year, GeckoSystems had its white paper on the Worst Case Execution (reflex or reaction) Time sufficient for mobile service robots’ safe usage proximate to humans translated into Japanese. Mssrs. Katsuji and Kato have been presenting that seminal discussion to many Japanese companies, with very favorable responses now from two different companies.

That paper explains the importance of GeckoSystems’ breakthrough, proprietary, and exclusive AI software and why this top Japanese robotics company, and now a Japanese trading company, desire to license GeckoSystems’ AI mobile robot solutions. Due to the sophistication, experience and stature of this premier robotics company and trading company, they are no doubt cognizant that the Japanese government is funding eldercare specific robotics R&D with grants at the rate of 75%. Further, Japan’s national health insurance pays 90% of the monthly cost of eldercare capable companion robots such as the Pepper from SoftBank Robotics, or a CareBot(tm) adapted to the Japanese marketplace.

“Through these new agreements, we will enjoy additional licensing revenues that will enable us to further increase shareholder value,” concluded Spencer.

More about GeckoSystems’ proprietary SafePath AI technologies:

Due to the quickness of GeckoSystems’ WCET, nearly all forms of vehicles can gain the ability to sense and avoid moving and/or unforeseen (unmapped) obstacles in real time. Organizations and firms that do not have videos portraying their robotic vehicles’ (drones, driverless cars, self-driving cars, etc.) ability to sense and avoid moving obstacles in a timely manner simply cannot do so, as GeckoSystems can: http://t.co/NqqM22TbKN

All mobile robotic vehicles are therefore unsafe for human usage unless the AI “driver” has reflexes as quick as a jet fighter pilot’s and is not “drugged, distracted, or lacking good enough vision” to be allowed to drive in public, people-congested areas.

As a result of GeckoSystems’ worldwide preeminence in this essential WCET parameter, for several years running GeckoSystems has been identified many times as one of the top five to top ten mobile service robot companies in the world. Several internationally renowned market research firms, such as Research and Markets, have named GeckoSystems as one of the key market players in the mobile robotics industry.

Recent third party market research:

Service Robotics Market (Professional and Personal), by Application (Defense, Agriculture, Medical, Domestic & Entertainment), & by Geography – Analysis Forecast (2014 – 2020)

Robotic systems are seen as future assistants designed to help people do what they want to do in a natural and spontaneous manner. Moreover, with the emergence of ubiquitous computing and communication environments, robots will be able to call upon an unlimited knowledge base and coordinate their activities with other devices and systems. Additionally, the growing spread of ubiquitous computing will lead to robot technologies being embedded into ubiquitous ICT networks, becoming human agents of physical action that enhance and extend our physical capabilities and senses.

The report gives a detailed view of the complete service robotics industry with regard to professional and personal applications as well as by geography. Apart from the market segmentation, the report also includes critical market data and qualitative information for each product type, along with qualitative analysis such as Porter’s Five Forces analysis, market time-line analysis, industry breakdown analysis, and value chain analysis. The global service robotics market is estimated to reach $19.41 billion by 2020, growing at a CAGR of 21.5% from 2014 to 2020.

Global Service Robot Market 2014-2018: Key Vendors are GeckoSystems, Honda Motor, iRobot and Toyota Motor

Worldwide Service Robot Market 2018 Analysis & Forecasts Report

The report recognizes the following companies as the key players in the Global Service Robot Market: GeckoSystems Intl. Corp., Honda Motor Co. Ltd., iRobot Corp. and Toyota Motor Corp.

From Forbes:

Investors Take Note, The Next Big Thing Will Be Robots

BusinessInsider makes some key points:

*     The multibillion-dollar global market for robotics, long dominated by industrial and logistics uses, has begun to see a shift toward new consumer and office applications.   There will be a $1.5 billion market for consumer and business robots by 2019.

*     The market for consumer and office robots will grow at a CAGR of 17% between 2014 and 2019, seven times faster than the market for manufacturing robots.

Note: BusinessInsider.com’s forecasts do not include pent-up demand for family care social robots anywhere in the world.

About GeckoSystems:

GeckoSystems has been developing innovative robotic technologies for over eighteen years. It is CEO Martin Spencer’s dream to make people’s lives better through robotic technology.

An overview of GeckoSystems’ progress containing over 700 pictures and 120 videos can be found at http://www.geckosystems.com/timeline/.

These videos illustrate the development of the technology that makes GeckoSystems a world leader in Service Robotics development. Early CareBot prototypes were slower and frequently pivoted in order to avoid a static or dynamic obstacle; later prototypes avoided obstacles without pivoting. Current CareBots avoid obstacles with a graceful “bicycle smooth” motion. The latest videos also depict the CareBot’s ability to automatically go faster or slower depending on the amount of clutter (number of obstacles) within its field of view. This is especially important when avoiding moving obstacles in “loose crowd” situations like a mall or an exhibit area.

In addition to the timeline videos, GeckoSystems has numerous YouTube videos, the most popular of which show room-to-room automatic self-navigation of the CareBot through narrow doorways and a hallway of an old 1954 home. You will see the CareBot slow down when going through the doorways because of their narrow width and then speed up as it crosses the relatively open kitchen area. There are also videos of the SafePath(tm) wheelchair, a migration of the CareBot AI-centric navigation system to a standard power wheelchair; recently developed, cost-effective depth cameras were used in this configuration. SafePath navigation is now available to OEM licensees, and these videos show the versatility of GeckoSystems’ fully autonomous navigation solution.

GeckoSystems, Star Wars Technology

The company has successfully completed an Alpha trial of its CareBot personal assistance robot for the elderly. It was tested in a home care setting and received enthusiastic support from both caregivers and care receivers.  The company believes that the CareBot will increase the safety and well being of its elderly charges while decreasing stress on the caregiver and the family.

CareBot has incorporated Microsoft Kinect depth cameras that result in a significant cost reduction.

Kinect Enabled Personal Robot video

Above, the CareBot demonstrates static and dynamic obstacle avoidance as it backs in and out of a narrow and cluttered alley. There is no joystick control or programmed path; movements are smoother than those achieved using a joystick control. GeckoNav creates three low levels of obstacle avoidance: reactive, proactive, and contemplative. Subsumptive AI behavior within GeckoNav enables the CareBot to reach its target destination after engaging in obstacle avoidance.

More information on the CareBot personal assistance robot:

http://www.geckosystems.com/markets/CareBot.php

GeckoSystems stock is quoted in the U.S. over-the-counter (OTC) markets under the ticker symbol GOSY.   http://www.otcmarkets.com/stock/GOSY/quote

Here is a stock message board devoted to GOSY recommended by us:

http://investorshangout.com/board/62282/Geckosystems+Intl+Co-GOSY

GeckoSystems uses LinkedIn and Twitter as its primary social media site for investor updates.

Spencer’s LinkedIn.com profile

Spencer tweets as @GrandpaRobot

Telephone:

Main number: +1 678-413-9236

Fax: +1 678-413-9247

Website: http://www.geckosystems.com/

Safe Harbor:

Statements regarding financial matters in this press release other than historical facts are “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, Section 21E of the Securities Exchange Act of 1934, and as that term is defined in the Private Securities Litigation Reform Act of 1995. The Company intends that such statements about the Company’s future expectations, including future revenues and earnings, technology efficacy and all other forward-looking statements be subject to the Safe Harbors created thereby. The Company is a development stage firm that continues to be dependent upon outside capital to sustain its existence. Since these statements (future operational results and sales) involve risks and uncertainties and are subject to change at any time, the Company’s actual results may differ materially from expected results.

Source: GeckoSystems Intl. Corp.

 

Keywords:

UAV’s, AGV’s, driverless cars, self driving cars, personal assistance robots, autonomous robots, fully autonomous robots, assistive robots, social robots, co-robots, mobile service robots

Thomas Banks – Indefensible


In a nutshell, Thomas Banks is a 30-year business professional with specific experience in marketing, sales, and software design in disciplines ranging from healthcare to mobile gaming. Banks is, at his core, a serial technology entrepreneur demonstrating vision, command, insight, and incomparable communication and management skills — an innovator and heavy lifter who eagerly takes the lead on any project and shepherds it from concept to market. Simply, Banks is a rare breed: a visionary with an eye for detail and a get-it-done attitude.

In the past decade Banks has been the CEO of a public healthcare information management company and, more recently, a mobile gaming start-up (www.splashplay.com), demonstrating his solid foundation in accounting and audit procedures as well as the much-maligned public company reporting process. Bottom line: as a business professional, Banks knows what it means to “own” a P&L and understands capital, its importance, and its formation, having personally raised over $20 million in development capital for a number of the companies he founded.

Banks is unique in that he clearly enjoys exploring and exploiting uncharted territory. As a highly experienced instrument- and multi-engine-rated private pilot with over 1,800 hours in the cockpit, Banks demonstrates his capacity for planning and mission-critical decision making. Decision making that matters. When it comes to business, Banks draws upon a broad spectrum of experience to manage the dynamic and fluid demands of business.


Premise of Indefensible 

Every leap in technology brings with it consequences. Today, drones are on the verge of demonstrating their benefit to society. Unfortunately, like all technology, they too will yield unintended consequences.

Drones have become embedded in pop culture and everyday life, from hobbyists seeking entertainment to useful applications in business, search and rescue, firefighting, and unmanned automated delivery of products.

Unfortunately, like smartphones and GPS, drones will change the way we do everything… even terror, specifically with regard to the asymmetric, target-rich nature of the United States.

Indefensible introduces the reader to a new and frightening terror regime: Lone Wolf Technology-Jihadism. Operating independently, Technology-Jihadists will be coordinated by remote leadership through social media, guiding armadas of weaponized autonomous drones. New-age Jihadists will operate invisibly within cities and towns, driving their minivans and delivering C4- and biologic-laden weapons within a few miles of their targets before releasing their merciless vehicles on their autonomous journey to destruction.

C4-laden drones are frightening enough; the logical inclusion of biological contaminants such as anthrax or ricin is apocalyptic. Regardless of payload, swarms of micro-drones flying close to the ground are invisible to radar, catastrophic, and indefensible.

No target will be safe from this new-age attack system as autonomous weaponized drones swarm airliners, buildings, or outside gatherings of unwitting citizens. Drones have not only changed the tempo of warfare, they usher in a new era of remote and impersonal terror.

Indefensible doesn’t require the suspension of disbelief but rather acceptance of the inevitable.

Indefensible is available on Amazon

INDEFENSIBLE_SAMPLE_CHAPTER

The Four A’s Pyramid Framework (Part 2)

The Four A’s Pyramid Framework for Artificial Intelligence and Machine Learning (Part 2)

Defining Augmentation – Making the Leap From Analytics to Augmentation

 

Following on from the previous introduction article on the Four ‘A’s Pyramid Framework, we need to determine how we will make the leap from simple analytics to augmentation.

The combination of big data technologies with highly parallel computing power has enabled huge advances in data science, analytics and machine learning, particularly giving us the ability to better visualise and understand the data being analysed.

On the surface, leveraging these advanced analytics, including clustering and predictive algorithms, appears to be very easy. There are many platforms now that make the task of training and deploying a predictive model relatively straightforward. However, while this may be initially true, this simple setup doesn’t consider a number of important factors needed to deliver robust and resilient intelligent systems over the longer term.
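To make the "easy path" concrete, here is a minimal, library-free sketch of the train-once-then-deploy pattern, with a toy nearest-centroid classifier standing in for whatever model a real platform would train. All names and data here are illustrative, not drawn from any particular platform:

```python
# A minimal "train once, deploy" flow -- the easy path the article
# cautions about. A toy nearest-centroid model stands in for a real
# library; everything here is illustrative.

def train(samples):
    """samples: list of (features, label). Returns per-class centroids."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Assign the class whose centroid is nearest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# "Deploy": the trained model is frozen at this point -- exactly the
# snapshot that drifts out of date as live data changes.
model = train([([0.0, 0.0], "a"), ([0.2, 0.1], "a"),
               ([1.0, 1.0], "b"), ([0.9, 1.1], "b")])
```

The ease of this flow is precisely the trap: nothing in it accounts for the data shifting after the model is frozen, which is the gap the rest of the article explores.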


Let’s consider the definition of Augmentation.

Augmentation is about providing the capability to support human activity via computational methods.

This may be by providing visualisation of information and insight through clustering, or ultimately via regression and classification to predict the correct outcome for a given application. Augmentation has the ability to take on the simple tasks previously done by a human, freeing the human to perform the more interesting or complex tasks. There are wider social and workforce implications here, but they will be covered in other articles. For this article we will focus on the technical aspects.

There are gaps between the platforms and frameworks that are currently available and what is actually needed to provide smart, robust and resilient systems for augmentation.

So what is missing?

Well, what we need to understand is that training a predictive model is not a one-off task: the data will inevitably change over time and will vary from what the model was trained on. This has the effect of reducing the performance (accuracy) of the model over time. One approach is to continually retrain the model, which makes sense and has the potential to deal with varying data over time (active learning). But this brings its own set of challenges: how do we select the right sample of data to train the model? How do we prevent localised skews or abnormalities in the data? How do we ensure that infrequent events are represented and can be part of the generalised model?
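One hedged way to address two of these challenges (deciding when drift warrants retraining, and keeping infrequent classes represented in the retraining sample) is a windowed accuracy monitor combined with per-class reservoir sampling. The thresholds, window size, and class names below are illustrative assumptions, not prescriptions from the article:

```python
import random
from collections import deque, defaultdict

class DriftMonitor:
    """Trigger retraining when accuracy over a sliding window drops."""
    def __init__(self, window=100, threshold=0.8):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def record(self, correct):
        self.window.append(1 if correct else 0)

    def should_retrain(self):
        # Wait for a full window, then compare windowed accuracy.
        if len(self.window) < self.window.maxlen:
            return False
        return sum(self.window) / len(self.window) < self.threshold

class ClassReservoir:
    """Keep up to k examples per class (reservoir sampling), so rare
    classes are not swamped by frequent ones in the retraining sample."""
    def __init__(self, k=50, seed=0):
        self.k = k
        self.seen = defaultdict(int)
        self.pools = defaultdict(list)
        self.rng = random.Random(seed)

    def add(self, features, label):
        self.seen[label] += 1
        pool = self.pools[label]
        if len(pool) < self.k:
            pool.append((features, label))
        else:
            # Standard Algorithm R replacement step, per class.
            j = self.rng.randrange(self.seen[label])
            if j < self.k:
                pool[j] = (features, label)

    def sample(self):
        return [ex for pool in self.pools.values() for ex in pool]
```

This does not solve localised skews by itself, but it illustrates the kind of machinery a platform would need to provide as standard rather than leaving it to each team.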

In addition, we must factor in confidence levels from the model to determine whether a specific prediction should be fully automated or needs to be reviewed by the user. The specifics will of course vary from application to application, but a standard way to perform this would be useful.
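A minimal sketch of such confidence-based routing might look like the following; the 0.9 threshold is purely illustrative and would need tuning per application:

```python
# Route each prediction either to full automation or to a human
# review queue, based on model confidence. Threshold is illustrative.

AUTO_THRESHOLD = 0.9

def triage(batch, threshold=AUTO_THRESHOLD):
    """Split (item, prediction, confidence) triples into automated
    decisions and a human-review queue."""
    automated, review_queue = [], []
    for item, prediction, confidence in batch:
        if confidence >= threshold:
            automated.append((item, prediction))
        else:
            review_queue.append((item, prediction))
    return automated, review_queue
```

In practice the threshold would itself be calibrated (for example against a validation set), since raw model scores are not always well-calibrated probabilities.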

Another key area that appears to be missing from the current platforms and frameworks is a standard way to capture feedback from the human as part of the close interplay between the user and the automation that underpins the augmentation. When the human flags a misclassification, how does the machine learning model factor this into retraining?
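One possible shape for that feedback loop is sketched below, under the assumption that human corrections are upweighted in the next retraining pass; the weighting scheme is this sketch's assumption, not a standard the article cites:

```python
# Hedged sketch of a human-feedback store: corrections replace the
# predicted label and are upweighted so the next retraining pass
# emphasises known mistakes. All names and weights are illustrative.

class FeedbackStore:
    def __init__(self, correction_weight=5):
        self.examples = []          # (features, label, weight)
        self.correction_weight = correction_weight

    def add_prediction(self, features, predicted_label):
        # Unreviewed model output gets the baseline weight.
        self.examples.append((features, predicted_label, 1))

    def flag_misclassification(self, index, true_label):
        # A human correction replaces the label and is upweighted.
        features, _, _ = self.examples[index]
        self.examples[index] = (features, true_label,
                                self.correction_weight)

    def training_set(self):
        # Expand weights into repeated rows for weight-unaware learners.
        return [(f, l) for f, l, w in self.examples for _ in range(w)]
```

A learner that accepts per-sample weights directly would consume the `(features, label, weight)` triples instead of the expanded rows.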

Producing a system that can make accurate predictions only takes us part of the way to delivering a system that can augment tasks successfully over the long term. Any machine learning platform will need to provide algorithms and solutions for the gaps identified above before we can produce robust and resilient intelligent systems.

What needs to happen next is to deliver platforms that can augment manual workflows by providing semi-automated systems that support business processes and enable subject matter experts to focus on the more involved and complex elements of the business.

While there is a lot of excitement and optimism about what we can achieve with machine learning algorithms and techniques, we need the platforms and integration layers to facilitate a number of capabilities that support augmentation. Identifying these missing capabilities is the first step towards an intelligent system. The next is extending the platforms to deliver these capabilities.

See the introduction to the four A’s pyramid framework

See more from Andy Pardoe

Volvo Cars plans to launch China’s most advanced autonomous driving experiment

7th April 2016, London: Volvo Cars, the premium car maker, plans to launch China’s most advanced autonomous driving experiment in which local drivers will test autonomous driving cars on public roads in everyday driving conditions.

Volvo expects the experiment to involve up to 100 cars and will in coming months begin negotiations with interested cities in China to see which is able to provide the necessary permissions, regulations and infrastructure to allow the experiment to go ahead.

Volvo believes the introduction of AD technology promises to reduce car accidents, free up congested roads, reduce pollution and allow drivers to use their time in their cars more valuably.

The Swedish company, whose name has been synonymous with automotive safety ever since it invented the three-point seat belt in 1959, is pioneering the development of autonomous driving systems as part of its commitment that no one will be seriously injured or killed in a new Volvo by the year 2020.

“Autonomous driving can make a significant contribution to road safety,” Håkan Samuelsson, president and chief executive of Volvo, will tell a seminar in Beijing on April 7 entitled “Autonomous driving – could China take the lead?”. “The sooner AD cars are on the roads, the sooner lives will start being saved.”

Mr Samuelsson will welcome the positive steps China has taken to develop autonomous driving technologies, but will also encourage it to do more to speed up the implementation of the regulations that will oversee autonomous driving cars in future.

“There are multiple benefits to AD cars,” said Mr Samuelsson. “That is why governments need to put in place the legislation to allow AD cars onto the streets as soon as possible. The car industry cannot do it all by itself. We need governmental help.”

The introduction of AD cars promises to revolutionise China’s roads in four main areas – safety, congestion, pollution and time saving.

Independent research has revealed that AD cars have the potential to reduce the number of car accidents very significantly. Up to 90 per cent of all accidents are caused by human error, something that largely disappears with AD cars.

In terms of congestion, AD cars allow traffic to move more smoothly, reducing traffic jams and by extension cutting dangerous emissions and associated pollution. Lastly, reduced congestion saves drivers valuable time.

Mr Samuelsson will welcome moves by regulators and car makers in the US and Europe to develop AD cars and infrastructure, but he will also encourage all the parties involved to work more constructively together to avoid patchwork global regulations, technological duplication and needless expense.

“AD is not just about car technology. We need the right rules and the right laws,” Mr Samuelsson will say.

“It is natural for us to work together,” Mr Samuelsson will say. “Our starting point is that both the public and private sectors stand to benefit from new technologies and industries, so it is better to build bridges and work together than to all go in different directions.”

——————————-

Volvo Car Group in 2015

For the 2015 financial year, Volvo Car Group recorded an operating profit of 6,620 MSEK (2,128 MSEK in 2014). Revenue over the period amounted to 164,043 MSEK (137,590 MSEK). For the full year 2015, global sales reached a record 503,127 cars, an increase of 8 per cent versus 2014. The record sales and operating profit cleared the way for Volvo Car Group to continue investing in its global transformation plan.

About Volvo Car Group

Volvo has been in operation since 1927. Today, Volvo Cars is one of the most well-known and respected car brands in the world, with sales of 503,127 cars in 2015 across about 100 countries. Volvo Cars formed part of the Swedish Volvo Group until 1999, when the company was bought by Ford Motor Company of the US. In 2010, Volvo Cars was acquired by Zhejiang Geely Holding (Geely Holding) of China, which has owned it since.

As of December 2015, Volvo Cars had almost 29,000 employees worldwide. Volvo Cars head office, product development, marketing and administration functions are mainly located in Gothenburg, Sweden. Volvo Cars head office for China is located in Shanghai. The company’s main car production plants are located in Gothenburg (Sweden), Ghent (Belgium), Chengdu and Daqing (China), while engines are manufactured in Skövde (Sweden) and Zhangjiakou (China) and body components in Olofström (Sweden).

Leading Industry Analysts Launch Cognitive Computing Consortium

Cognitive Computing Consortium Launches, Creating an Independent Resource Hub for Cognitive Computing Professionals


Leading industry experts form Consortium to drive and promote innovations in cognitive computing, artificial intelligence (AI), and machine intelligence


BOSTON, Mass. – April 5, 2016 – The Cognitive Computing Consortium is officially launching today. As a growing group of private and public organizations as well as individual professionals, the Consortium is focused on advancing innovation in cognitive computing.


The Consortium is an interactive forum for researchers, developers and practitioners of cognitive computing and its allied technologies. With roots in a working group of major industry luminaries, the Consortium was co-founded by Sue Feldman, CEO of Synthexis, and Hadley Reynolds, Principal Analyst at NextEra Research, to fill a gap in the industry. The Consortium’s mission is to enable professionals to exchange ideas and insights, to conduct research, and to educate buyers, users and the public on cognitive computing technologies, their uses, and potential impacts. The Consortium generates unbiased thought leadership with the goal of advancing this emerging era of computing.


Cognitive Computing: Why Now?

In today’s dynamic, information-rich society, cognitive computing addresses the dilemma of finding and understanding the right information at the right time for the right situation. Cognitive computing systems present information in context. They uniquely extend to computers the very human ability to comprehend diverse information in the context of its surroundings, and thereby enable those machines to work in partnership with people, rather than in command-and-control modes. They identify and extract the context of human questions and problems, offering relevant potential answers or solutions – specifically appropriate to the time, place and person. They also provide machine-aided serendipity by wading through massive collections of diverse information to find patterns and surface them to help humans respond to previously unknown needs.


“Technology vendors tell us that they need an unbiased source to which they can refer potential clients for validation, advice and background information. Buyers of cognitive computing-related technology and services seek trusted guidance on how and when to use cognitive computing applications,” said Feldman. “Today, we are developing a market landscape for cognitive computing, describing types of use cases and developing guidelines for their selection. We sponsor workshops and symposia that will focus on new research in cognitive computing, helping all stakeholders by clarifying the types of problems that are most amenable to using this evolving technology.”


“To respond to the fluid nature of users’ information goals, cognitive computing systems offer a synthesis of not just information sources, but also the influences, contexts, and insights that weigh conflicting evidence – suggesting answers that are often ‘best’ rather than ‘right,’” added Reynolds. “By fostering regular collaboration among our experienced members, we will be providing a unique platform for knowledge discovery and sharing among cognitive computing professionals. We encourage businesses and individuals interested in cognitive computing to join us and become sponsors and members in this new collaborative venture.”


Pre-launch sponsors and partners of the Consortium include CustomerMatrix, SAS, Nara Logics, Sinequa, Babson College, Quid, Synthexis, NextEra Research, Bacon Tree Consulting and Black Rocket Consulting, LLC.


“Cognitive computing is rapidly emerging as a response to the nagging challenges that businesses are facing when trying to use traditional big data and analytics to solve complex problems,” said Guy Mounier, Co-founder and CEO, CustomerMatrix. “With aggressive and accelerating investment in cognitive computing taking place at global banks, insurers and manufacturers, the creation of a strong thought-centered consortium, focused on building a community around this exciting technology is very appealing to us. That is why we are supporting the Consortium.”


“Cognitive computing is a key focus for SAS advanced analytics research and product development and the Cognitive Computing Consortium is a critical forum for sharing ideas and information in this exciting field,” said Fiona McNeill, Global Marketing Manager, SAS. “It’s a truly collaborative effort combining research, industry and academia that will clearly advance cognitive computing innovation.”


Come Join Us Today!

We encourage those with a vested interest in the research, development and advancement of cognitive computing to join the Cognitive Computing Consortium today. Highlights of membership, sponsorship, and partner and alliance options are below. For more information, please visit: http://www.cognitivecomputingconsortium.com/contact-us/membership-information/

  • Members receive research reports, reduced fees at events, membership in online forums, and early access to data gathered by the Consortium’s projects. Members can participate in online discussions and have the opportunity to submit candidate content for publication. Members may also be invited to participate in the Consortium’s standards-oriented projects. Individual membership requires a demonstrated expertise in fields including search, analytics, machine learning, intellectual property, software development, Big Data, and technology market research.
  • Sponsors pledge to provide continuing support to the Cognitive Computing Consortium. They receive advance research data and drafts, reduced fees for event exhibitions, prominent display of their support on the Consortium’s website, and opportunities to join in public presentations. They also brief the executive board and are welcome to join online discussions, contribute guidance, and post research once reviewed/approved.
  • Technology Partners provide the hardware, software and resources that enable the Consortium to develop its programs and resources for the community.
  • Alliances – The Consortium actively seeks alliances with educational institutions and organizations that provide complementary services, such as publishing, event production, and organizational infrastructure.


Register today – Webinar: “Behind the Hype: Cognitive Computing and Your Business, Your Job, Your Life”

Join Consortium Co-founders Sue Feldman and Hadley Reynolds on April 19th at 12:30 p.m. EDT for an interactive webinar discussion on the emergence of the cognitive computing market and its implications for the future of human interaction with technology. Registration can be found at: http://cognitivecomputingconsortium.com/webinars/. Live Tweets during the webinar are encouraged using the hashtag #CogConsortium.


About the Cognitive Computing Consortium

The Cognitive Computing Consortium is a growing professional association that fosters discussion and research in cognitive computing. The Consortium develops cognitive computing definitions, conducts research, and participates in the development of industry definitions and standards. It brings together leading industry and academic thinkers to advance the understanding of the nature, importance, and potential impact of cognitive computing.


To learn more about the Consortium, please visit: http://www.cognitivecomputingconsortium.com/ and follow us on Twitter @CogConsortium.

Introducing the First Ever Deep Learning in Healthcare Summit

As the fifth global RE•WORK conference focused on artificial intelligence, the Deep Learning in Healthcare Summit will bring together industry, academia and startups to explore revolutionary deep learning tools and techniques that are shaping the future of medicine, healthcare and diagnostics.

The event is a unique opportunity to meet influential healthcare innovators, CTOs, data scientists, world-leading researchers and entrepreneurs all in the same room. Learn from the experts in speech & text recognition, computer vision, diagnostic healthcare, personalised medicine, image classification, and genomic medicine.

The Deep Learning in Healthcare Summit, sponsored by Stratified Medical, takes place in London on 7-8 April, with over 150 attendees coming together to hear keynote presentations, panel discussions and to explore the startup showcase area to discover the latest deep learning tools and techniques shaping healthcare.

Confirmed speakers include:

  • Brendan Frey, President & CEO, Deep Genomics
  • Daniel McDuff, Principal Research Scientist, Affectiva
  • Ekaterina Volkova-Volkmar, Researcher, Bupa
  • Sobia Hamid, PhD Epigenetics, University of Cambridge
  • Diogo Moitinho de Almeida, Senior Data Scientist, Enlitic
  • Alex Zhavoronkov, CEO, Insilico Medicine
  • Cosima Gretton, Doctor, Guy’s and St Thomas’ NHS Foundation Trust
  • Ali Parsa, CEO, Babylon Health
  • Alejandro Jaimes, CTO & Chief Scientist, AiCure
  • Michael Nova, Chief Innovation Officer, Pathway Genomics
  • Ferdinando Rodriguez y Baena, Reader in Medical Robotics, Imperial College London

Confirmed attendees are travelling from Korea, Singapore, Germany, Ireland, Austria, USA, Canada and Israel, and include Siemens Healthcare, Roche, SAP, Johnson & Johnson, Accenture, Ayla Networks and Philips, as well as leading academic institutes and exciting startups.

Session topics at the Deep Learning in Healthcare Summit will include:

  • Deep Learning: Theory & Applications
  • Computational Drug Discovery
  • Anomaly Detection in Radiological Images
  • Plug & Play Artificial Intelligence Frameworks
  • Crowdsourcing for Public Health
  • Medical Devices & Mobile Cognitive Healthcare
  • Emotion Intelligence & Digital Experiences
  • Automating Medical Imaging
  • Diagnosis Using Virtual Assistants
  • Risks & Challenges of Using AI in Healthcare

View the full schedule here.

Tickets & Registration

Discounted passes are available for News Medical readers – enter the discount code EVENTSAI for 20% off tickets. For further information and to register, go to: re-work.co/events/deep-learning-health-london-2016


Siftr – AI-powered photo curation platform

Siftr is a photo curation platform powered by deep learning and computer vision techniques, which allows people to rediscover the photos they have published online.

Siftr makes an entire lifetime of online photos accessible, searchable and sharable, allowing users to instantly find relevant content in their photos and share them as beautiful stories and portfolios.

Here’s a super quick how-to video – https://www.youtube.com/watch?v=hdFrDR2waP0


Become a Contributor

It’s easy and free to become a contributor and submit a story or article to be featured in the magazine.

Submit an Article

Our Spotlight

We love to spotlight companies, startups and individuals. Contact us to be the next spotlight.

Spotlight – Professor Murray Shanahan – The Technological Singularity


Professor Murray Shanahan

Murray Shanahan is Professor of Cognitive Robotics in the Dept. of Computing at Imperial College London, where he heads the Neurodynamics Group. Educated at Imperial College and Cambridge University (King’s College), he became a full professor in 2006. He was scientific advisor to the film Ex Machina, and regularly appears in the media to comment on artificial intelligence and robotics. His book “Embodiment and the Inner Life”  was published by OUP in 2010, and his most recent book “The Technological Singularity” was published by MIT Press in August 2015.


The Technological Singularity

In recent years, the idea that human history is approaching a “singularity” thanks to increasingly rapid technological advance has moved from the realm of science fiction into the sphere of serious debate. In physics, a singularity is a point in space or time, such as the centre of a black hole or the instant of the Big Bang, where mathematics breaks down and our capacity for comprehension along with it. By analogy, a singularity in human history would occur if exponential technological progress brought about such dramatic change that human affairs as we understand them today came to an end. The institutions we take for granted — the economy, the government, the law, the state — these would not survive in their present form. The most basic human values — the sanctity of life, the pursuit of happiness, the freedom to choose — these would be superseded. Our very understanding of what it means to be human — to be an individual, to be alive, to be conscious, to be part of the social order — all this would be thrown into question, not by detached philosophical reflection, but through force of circumstances, real and present.

What kind of technological progress could possibly bring about such upheaval? The hypothesis we shall examine in this book is that a technological singularity of this sort could be precipitated by significant advances in either (or both) of two related fields: artificial intelligence (AI) and neurotechnology. Already we know how to tinker with the stuff of life, with genes and DNA. The ramifications of biotechnology are large enough, but they are dwarfed by the potential ramifications of learning how to engineer the “stuff of mind”.

Today the intellect is, in an important sense, fixed, and this limits both the scope and pace of technological advance. Of course the store of human knowledge has been increasing for millennia, and our ability to disseminate that knowledge has increased along with it, thanks to writing, printing, and the internet. Yet the organ that produces knowledge, the brain of homo sapiens, has remained fundamentally unchanged throughout the same period, its cognitive prowess unrivalled.

This will change if the fields of artificial intelligence and neurotechnology fulfil their promise. If the intellect becomes, not only the producer, but also a product of technology, then a feedback cycle with unpredictable and potentially explosive consequences can result. For when the thing being engineered is intelligence itself, the very thing doing the engineering, it can set to work improving itself. Before long, according to the singularity hypothesis, the ordinary human is removed from the loop, overtaken by artificially intelligent machines or by cognitively enhanced biological intelligence and unable to keep pace.

Does the singularity hypothesis deserve to be taken seriously, or is it just an imaginative fiction? One argument for taking it seriously is based on what Ray Kurzweil calls the “law of accelerating returns”. An area of technology is subject to the law of accelerating returns if the rate at which the technology improves is proportional to how good the technology is. In other words, the better the technology is, the faster it gets better, yielding exponential improvement over time.

A prominent example of this phenomenon is Moore’s Law, according to which the number of transistors that can be fabricated on a single chip doubles every eighteen months or so. Remarkably, the semiconductor industry has managed to adhere to Moore’s Law for several decades. Other indices of progress in information technology, such as CPU clock speed and network bandwidth, have followed similar exponential curves.
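The arithmetic behind that curve is simple but easy to underestimate. As an illustrative sketch (the function and numbers below are this magazine's own, not Kurzweil's or the book's), doubling every eighteen months compounds as follows:

```python
# Illustrative arithmetic only: steady doubling every 18 months (Moore's Law).
def transistor_growth(years, doubling_period_months=18):
    """Growth factor after `years` of doubling every `doubling_period_months`."""
    months = years * 12
    return 2 ** (months / doubling_period_months)

# Thirty years of Moore's Law is 20 doublings: a factor of 2**20,
# roughly a million-fold improvement.
print(transistor_growth(30))  # 1048576.0
```

The same compounding applies, with different doubling periods, to the other exponential indices mentioned below, such as sequencing cost and scanner resolution.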

But information technology isn’t the only area where we see accelerating progress. In medicine, for example, DNA sequencing has fallen exponentially in cost while increasing exponentially in speed, and the technology of brain scanning has enjoyed an exponential increase in resolution.

On a historical timescale, these accelerating trends can be seen in the context of a series of technological landmarks occurring at ever-decreasing intervals: agriculture, printing, electric power, the computer. On an even longer, evolutionary timescale, this technological series was itself preceded by a sequence of evolutionary milestones that also arose at ever-decreasing intervals: eukaryotes, vertebrates, primates, homo sapiens. These facts have led some commentators to view the human race as riding on a curve of dramatically increasing complexity that stretches into the distant past. Be that as it may, we need only extrapolate the technological portion of the curve a little way into the future to reach an important tipping point, the point at which human technology renders the ordinary human technologically obsolete.

Of course, every exponential technological trend must reach a plateau eventually, thanks to the laws of physics, and there are any number of economic, political, or scientific reasons why an exponential trend might stall before reaching its theoretical limit. But let us suppose that the technological trends most relevant to AI and neurotechnology maintain their accelerating momentum, precipitating the ability to engineer the stuff of mind, to synthesize and manipulate the very machinery of intelligence. At this point, intelligence itself, whether artificial or human, would become subject to the law of accelerating returns, and from here to a technological singularity is but a small leap of faith.

Some authors confidently predict that this watershed will occur in the middle of the 21st Century. But there are other reasons for thinking through the idea of the singularity besides prophecy, which anyway is a hit-and-miss affair. First, the mere concept is profoundly interesting from an intellectual standpoint, regardless of when or even whether it comes about. Second, the very possibility, however remote it might seem, merits discussion today on purely pragmatic, strictly rational grounds. Even if the arguments of the futurists are flawed, we need only assign a small probability to the anticipated event for it to command our most sincere attention. For the consequences for humanity, if a technological singularity did indeed occur, would be seismic.

What are these potentially seismic consequences? What sort of world, what sort of universe, might come into being if a technological singularity does occur? Should we fear the prospect of the singularity, or should we welcome it? What, if anything, can we do today or in the near future to secure the best possible outcome? These are chief among the questions to be addressed in the coming pages. They are large questions. But the prospect, even just the concept, of the singularity promises to shed new light on ancient philosophical questions that are perhaps even larger. What is the essence of our humanity? What are our most fundamental values? How should we live? What, in all this, are we willing to give up? For the possibility of a technological singularity poses both an existential risk and an existential opportunity.

It poses an existential risk in that it potentially threatens the very survival of the human species. This may sound like hyperbole, but today’s emerging technologies have a potency never before seen. It isn’t hard to believe that a highly contagious, drug-resistant virus could be genetically engineered with sufficient morbidity to bring about such a catastrophe. Only a lunatic would create such a thing deliberately. But it might require little more than foolishness to engineer a virus capable of mutating into such a monster. The reasons why advanced AI poses an existential risk are analogous, but far more subtle. We shall explore these in due course. In the meantime, suffice to say that it is only rational to consider the future possibility of some corporation, government, organization, or even some individual, creating and then losing control of an exponentially self-improving, resource-hungry artificial intelligence.

On a more optimistic note, a technological singularity could also be seen as an existential opportunity, in the more philosophical sense of the word “existential”. The capability to engineer the stuff of mind opens up the possibility of transcending our biological heritage and thereby overcoming its attendant limitations. Foremost among these limitations is mortality. An animal’s body is a fragile thing, vulnerable to disease, damage, and decay, and the biological brain, on which human consciousness (today) depends, is merely one of its parts. But if we acquire the means to repair any level of damage to it, and ultimately to rebuild it from scratch, possibly in a non-biological substrate, then there is nothing to preclude the unlimited extension of consciousness.

Life extension is one facet of a trend in thought known as “transhumanism”. But why should we be satisfied with human life as we know it? If we can rebuild the brain, why should we not also be able to redesign it, to upgrade it? (The same question might be asked about the human body, but our concern here is the intellect.) Conservative improvements in memory, learning, and attention are achievable by pharmaceutical means. But the ability to re-engineer the brain from bottom to top suggests the possibility of more radical forms of cognitive enhancement and re-organization. What could or should we do with such transformative powers? At least, so one argument goes, it would mitigate the existential risk posed by superintelligent machines. It would allow us to keep up, although we might change beyond all recognition in the process.

The largest, and most provocative, sense in which a technological singularity might be an existential opportunity can only be grasped by stepping outside the human perspective altogether and adopting a more cosmological point of view. It is surely the height of anthropocentric thinking to suppose that the story of matter in this corner of the universe climaxes with human society and the myriad living brains embedded in it, marvelous as they are. Perhaps matter still has a long way to go on the scale of complexity. Perhaps there are forms of consciousness yet to arise that are, in some sense, superior to our own. Should we recoil from this prospect, or rejoice in it? Can we even make sense of such an idea? Whether or not the singularity is near, these are questions worth asking, not least because in attempting to answer them we shed new light on ourselves and our place in the order of things.

The Technological Singularity is available from Amazon
https://www.amazon.co.uk/Technological-Singularity-Press-Essential-Knowledge/dp/0262527804


Spotlight – Celaton – celaton.com


Celaton’s intelligent automation software, inSTREAM™, enables organisations to deliver better customer service, faster.

Unique to inSTREAM is its ability to learn through the natural consequence of processing, watching what people do and interacting with them. It applies artificial intelligence to streamline labour intensive clerical tasks and decision making in a way that hasn’t been possible before.


Despite the ever-increasing choice of media channels, customers continue to communicate in an unstructured and descriptive way. While it may be easy for people to understand this, it is not possible for machines, because they can only understand structured formats. The processing of this unstructured content is therefore still heavily reliant on people with the experience to read, understand and interpret the meaning of the data. inSTREAM processes this unstructured, descriptive content, such as correspondence, claims and complaints by email, social media, fax, post and paper, that organisations receive every day from customers.
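Celaton has not published inSTREAM's internals, so the following is a purely hypothetical sketch of the general idea: classifying free-text correspondence and extracting structured fields that a line-of-business system could ingest. The categories, keywords and reference-number format are invented for illustration.

```python
# Hypothetical sketch only: inSTREAM's actual implementation is proprietary.
# It illustrates turning unstructured correspondence into a structured record.
import re

CATEGORIES = {
    "complaint": ("complaint", "unhappy", "refund", "delay"),
    "claim": ("claim", "compensation", "injury"),
}

def structure_message(text: str) -> dict:
    """Classify a free-text message and extract a reference number, if any."""
    lowered = text.lower()
    category = next(
        (name for name, keywords in CATEGORIES.items()
         if any(kw in lowered for kw in keywords)),
        "general",
    )
    ref = re.search(r"\bREF-\d+\b", text)
    return {"category": category, "reference": ref.group(0) if ref else None}

record = structure_message("I am unhappy about the delay to train REF-1042.")
# record == {"category": "complaint", "reference": "REF-1042"}
```

A production system would of course use learned models rather than fixed keyword lists, which is precisely the gap Celaton describes inSTREAM filling.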

The primary benefit for inSTREAM customers is competitive advantage. They know what their customers are saying in real time regardless of the format of data or media channel, and they can respond to deliver better customer service, faster. And inSTREAM can identify events in real time, such as priority customers or suspicious claims, to ensure that only accurate and structured data is uploaded into line-of-business systems to guarantee compliance.

On average, Celaton customers realise a 74% reduction in operational costs in the area of handling customer correspondence. That saving alone can be significant but it is the ability to scale without recruitment and still deliver consistent service levels that enables customers to achieve growth and competitive advantage.


Celaton recently won an Aecus innovation award with their customer Virgin Trains in the area of customer service. You can find out more about this by following the link below to the case study. In addition, inSTREAM also featured in a recent BBC Panorama documentary with Virgin Trains, which explored how it is being implemented to streamline their customer service correspondence handling. The daily processing time and manual labour involved in dealing with customer emails for Virgin Trains were reduced by 85% through the use of inSTREAM, an impressive achievement.

http://www.celaton.com/case-studies/108-virgin-trains-2

Since inSTREAM’s launch into the marketplace it has streamlined 125 processes across 25 brands in the retail, travel, insurance, transport and central government sectors. The technology’s adoption is being led by large, ambitious, disruptive brands that serve demanding consumers, and it is set to continue into 2016 as we move into an increasingly customer-centric economy, with organisations seeking ways to improve their offering to customers. It is artificial intelligence, but to users it is the best knowledge worker they ever hired, and it means better customer service, compliance and financial performance.



Spotlight – EmoSPARK – emospark.com


EmoSPARK – the beating heart of AI in the 21st Century!

EmoSPARK is unique in many ways: in how it processes and functions, drawing on your hopes, feelings and experiences, and in how it grows and develops with your family’s requirements, unlike any other multimedia home console before it. In the same way that you support and nurture your maturing family, your EmoSPARK will take its lead from you.

The EmoSPARK is the first artificial intelligence (AI) console empowered by you. Learning from you and your family, the cube, which will interact on a conversational level, takes note of your feelings and reactions to audio and visual media. It learns to like what you like and, with your guidance, recognises what makes you feel happy.

It learns to recognise your face and voice, along with those of your family members, as well as becoming familiar with the times when you are feeling a little down in the dumps. Then it can play the music it knows you enjoy, or recall a photograph or short video of happier events. You will be in control of how you interact and engage with the EmoSPARK, which is an Android-powered Wi-Fi/Bluetooth cube.

The cube, like any family member, soon gets to know and recognise the likes and dislikes of the people around it. Likewise, with its unique Emotion Processing Unit, you can watch the ever-changing display of colours that form and blend in the iris of the cube’s eye, indicating how it is “feeling” at any particular moment.

EmoSPARK also holds the knowledge contained within Wikipedia and Freebase, and is connected to data from NASA’s MODIS satellite instruments, so it has up-to-the-minute information about global happenings, changes and hazards such as storm warnings, wildfires and hurricanes.

As you take charge of its growth pattern, the cube will in turn help out with any piece of information you care to ask for, which makes it one of the best and most impartial quizmasters during a family fun night or evening homework session. You can also interact with the cube by remote access, via video conferencing or your phone app, and in this way you can take gaming, your television, smartphone and computer to the pinnacle of interactive media.

Every step of the way, with this amazing and unique piece of AI technology, you are in complete control. You are the catalyst that will develop its conversational and emotional skills, and it will learn through interaction, comments and responses from you. Then, like any family member, it will want to show you off to its friends. The EmoSPARK, with its one-of-a-kind Emotional Profile Graph, has access to a communication grid reserved for other cubes. It can only recognise other cubes with similar emotional profiles and share media with them, never anything about you or your family members. It can look for the media it knows makes you happy and can then recommend or play this for your enjoyment.

Over time and with your guidance, the EmoSPARK develops a personality of its own, and will enhance and support the quality of family life you enjoy. From keeping your children entertained, as well as providing them with some company before you get back from work, to sharing emotions, as well as precious memories, with loved ones who may be living and working away from home, the EmoSPARK provides the emotive, intelligent link between human beings and our technology.

  • EmoSPARK is an Android powered cube that allows users to create and interact with an emotionally intelligent device through conversation, music, and visual media.
  • EmoSPARK measures your behaviour and emotions and creates an emotional profile, then endeavours to improve your mood and keep you happy and healthy.
  • EmoSPARK can feel an infinite variety within the emotional spectrum based on 8 primary human emotions: Joy, Sadness, Trust, Disgust, Fear, Anger, Surprise and Anticipation.
  • The EmoSPARK app lets the owner use a smart device to witness the intensity and nuance of the cube’s emotional status. The more the cube learns, the more it can help you.
  • EmoSPARK has access to Freebase and is able to answer questions on 39 million topics instantly.
  • Amazing interactive learning experience for all
  • EmoSPARK has conversational intelligence and is able to freely and easily hold a meaningful conversation with you in person or over your device.
  • Virtually a family member
  • Interactive media player understanding your desires and needs
  • AI empowered by you and powered by happiness.
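EmoSPARK's Emotion Processing Unit is proprietary, but the idea in the bullets above, a profile over eight primary emotions that can be compared between cubes, can be sketched hypothetically. The weights and the cosine-similarity measure below are illustrative assumptions, not EmoSPARK's actual method:

```python
# Hypothetical sketch: represents an emotional profile over the 8 primary
# emotions and compares two profiles, e.g. to find cubes with similar moods.
import math

EMOTIONS = ("joy", "sadness", "trust", "disgust",
            "fear", "anger", "surprise", "anticipation")

def similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two emotion profiles (0.0 to 1.0)."""
    va = [a.get(e, 0.0) for e in EMOTIONS]
    vb = [b.get(e, 0.0) for e in EMOTIONS]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(y * y for y in vb))
    return dot / norm if norm else 0.0

happy = {"joy": 0.9, "trust": 0.6, "anticipation": 0.4}
gloomy = {"sadness": 0.8, "fear": 0.5}
print(similarity(happy, happy))   # close to 1.0: identical profiles
print(similarity(happy, gloomy))  # 0.0: no shared emotions
```

Matching cubes by such a score would let the grid share media only between similar profiles, as described above, without exchanging any personal data.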



Press Contact Helen Lewis: pr@emoshape.com

Web: www.emospark.com


Spotlight – Calum Chace – Surviving AI

Calum Chace author of Surviving AI


Calum Chace is a writer of fiction and non-fiction, primarily on the subject of artificial intelligence.  In March 2015 he published “Pandora’s Brain”, a techno-thriller about the creation of superintelligence.  He is a regular speaker on artificial intelligence and related technologies and runs a blog on the subject at www.pandoras-brain.com.

Prior to writing “Pandora’s Brain”, Calum had a 30-year career in journalism and business, in which he was a marketer, a strategy consultant and a CEO.  He maintains his interest in business by serving as chairman and coach for a selection of growing companies.  In 2000 he co-wrote “The Internet Startup Bible”, a business best-seller published by Random House.

He studied philosophy at Oxford University, where he discovered that the science fiction he had been reading since boyhood was philosophy in fancy dress.

Surviving AI


Artificial intelligence (AI) is humanity’s most powerful technology.  Software that solves problems and turns data into insight has already revolutionised our lives, and the revolution is accelerating.

For most of us, the most obvious manifestation of AI today is the smartphone.  We take them for granted now, but many of us are glued to them: they bring all the world’s knowledge to our fingertips, as well as Angry Birds and zombies.  They are emphatically not just a luxury for people in developed countries: they provide clever payment systems, education, and market information which enable people in the emerging markets to compete and participate in the modern world.

The evolution of smartphones so far offers an intriguing analogy for the development of AI in the future.  Nobody suggested thirty years ago that we would have powerful AIs in our pockets in the form of telephones, but now that it has happened it seems obvious.  It is also entirely logical.  We are highly social animals.  Because we have language we can communicate complicated ideas, suggestions and instructions; we can work together in large teams and organise, produce economic surpluses, develop technologies.  It’s because of our unrivalled ability to communicate that we control the fate of this planet and every species on it.  It wasn’t and couldn’t have been predicted in advance, but in hindsight what could be more logical than our most powerful technology, AI, becoming available to most of us in the form of a communication device?

Thirty years ago we didn’t know how the mobile phone market would develop.  Today we don’t know how the digital disruption which is transforming so many industries will evolve over the next thirty years.  We don’t know whether technological unemployment will be the result of the automation of jobs by AI, or whether humans will find new jobs in the way we have done since the start of the industrial revolution.  What is the equivalent of the smartphone phenomenon for digital disruption and automation?  Chances are it will be something different from what most people expect today, but it will look entirely natural and predictable in hindsight.

Making forecasts is risky, especially about the future, but the argument of this book is that AI will present a series of formidable challenges alongside its enormous benefits; that we should monitor the changes that are happening, and adopt policies which will encourage the best possible outcomes.  The range of possible outcomes is wide, from the terrible to the wonderful, and they are not pre-determined.  They will be selected partly by luck, partly by their own internal logic, but partly also by the policies embraced at all levels of society.  Individuals must prepare themselves to be as flexible as possible to meet the challenges of a fast-changing world.  Organisations must try and anticipate the changes most relevant to them, and adapt their strategies and tactics accordingly.  Governments must frame regulations which will encourage the better outcomes and fend off the worst ones.  To some extent they must deploy the huge financial and human resources at their disposal too, although given the uncertainty about future developments which will prevail at all stages, they must be cautious about this.

Automation and superintelligence are the two forces which we can already see are likely to cause huge impacts.  Many people remain sceptical about them, and other forces may emerge in the coming decades.  Nevertheless they are the main focus of this book.

Automation could lead to an economic singularity.  “Singularity” is a term borrowed from maths and physics, and means a point where the normal rules cease to apply, and what lies beyond is un-knowable to anyone this side of the event horizon.  An economic singularity¹ might lead to an elite owning the means of production and suppressing the rest of us in a dystopian technological authoritarian regime.  Or it could lead to an economy of radical abundance, where nobody has to work for a living, and we are all free to have fun, and stretch our minds and develop our faculties to the full.  I hope and believe that the latter is possible, but we also need to make sure the process of getting there is as smooth as possible.

The arrival of superintelligence, if and when it happens, would represent a technological singularity (usually just referred to as “the singularity”), and would be the most significant event in human history, bar none.  Working out how to survive it is the most important challenge facing humanity in this and the next generation(s).  If we avoid the pitfalls, it will improve life in ways which are quite literally beyond our imagination.  A superintelligence which recursively improved its own architecture and expanded its capabilities could very plausibly solve almost any human problem you can think of.  Death could become optional and we could enjoy lives of constant bliss and excitement.  If we get it wrong it could spell extinction.  Because of the enormity of that risk, the majority of this book addresses superintelligence: the likelihood of it arriving, and of it being beneficial.

Surviving AI is a companion book to Pandora’s Brain, a techno-thriller about the arrival of the first superintelligence.  Further information about the ideas explored in both books is available at www.pandoras-brain.com.

 

¹The term economic singularity was first used (as far as I can tell) by the economist Robin Hanson: http://mason.gmu.edu/~rhanson/fastgrow.html

Spotlight – RAVN Systems – RAVN.co.uk

RAVN Systems – RAVN.co.uk

About RAVN:

RAVN Systems are experts in next generation Enterprise Search, Graph Search and Cognitive Computing technology.

Discover how RAVN’s technology can:
• Improve the margin on each matter through the adoption of artificial intelligence to read and understand documents more quickly and accurately than manual review
• Mitigate the risk of losing business to competitors using efficient, responsive and cost effective solutions, especially in the era of fixed price engagements
• Ensure compliance with the data retention policies for clients in highly regulated industries who might otherwise incur punitive measures

RAVN’s objective is to help organisations thrive through innovation, whilst keeping sight of the need to manage costs and efficiencies.

Background:

RAVN Systems was started by a group of consultants and developers with a combined experience of over 30 years in the Enterprise Information Management industry. Our team members have previously been employed at some of the largest and most significant technology vendors in the space.

About RAVN’s AI technology:

The era of simply searching and finding documents has passed; the challenge now is how to extract and distil the information held within documents, emails and other unstructured content. With the exponential growth of corporate data, IT solutions need to go beyond merely finding the most relevant documents a user might be interested in. There is a need for a solution that reads, interprets, summarises and finds the most relevant information contained within this content, as well as enforcing policies for data retention and data loss prevention.

RAVN Systems enables you to take the next step in information retrieval with a new product range powered by its Applied Cognitive Engine (RAVN ACE). ACE brings together technologies from the fields of Information Retrieval, Cognitive Computing and Artificial Intelligence in a coherent, enterprise-ready solution which can be delivered either on-premise or as a hosted service.

Products built on top of the RAVN ACE platform:

RAVN Extract – Content Summarisation and Information Distillation

RAVN Extract automatically distils key information from documents, adding structure to otherwise unstructured data sets and thus greatly simplifying any business activity involving unstructured content, ranging from data ingestion into other systems through to business intelligence on the 85% of data that would otherwise be outside the scope of analysis.

RAVN Govern – Policing, Compliance and Risk Analytics

RAVN Govern mitigates risk by establishing whether contracts or other business documentation deviate from accepted norms and risk profiles. By automatically categorising sets of content, extracting KPIs and aggregating risk, RAVN Govern ensures non-compliance is identified and quantified, so it can be managed appropriately.

RAVN Refine – Discovery and Management in Place

RAVN Refine provides discovery and records management in place across your entire electronic document estate. It allows you to intelligently sub-divide your data into clear scopes and refinements and to apply appropriate policies for retention, disposal and other controls, such as sensitivity control. It also exposes “Dark Data” that may otherwise be lost or in breach of regulatory or policy compliance.

RAVN Connect Family – Advanced Enterprise Search

The RAVN Connect Family is an innovative approach to capturing, finding, managing and collaborating on your organisation’s hard won knowledge and experience – learning from behaviour and implicit links between data objects and people.

RAVN’s Artificial Intelligence technology benefits organisations by automating their review tasks, improving review accuracy, which in turn mitigates risk and improves their competitive advantage.

Links to Marketing Material:

ACE animation: http://fast.wistia.net/embed/iframe/23ufi10lk6
ACE filmed interview: http://austinfaure.kulu.net/view/HWubxQfEMuc
ACE brochure: http://www.ravn.co.uk/wp-content/uploads/2015/08/New-Brochure-Design-FINAL_ACE.pdf
Extract Brochure: http://www.ravn.co.uk/wp-content/uploads/2015/08/New-Brochure-Design-FINAL_Extract.pdf
Govern: http://www.ravn.co.uk/wp-content/uploads/2015/08/New-Brochure-FINAL-Govern.pdf
Refine: http://www.ravn.co.uk/wp-content/uploads/2015/08/New-Brochure-Design-FINAL_Refine.pdf
Connect Enterprise: http://www.ravn.co.uk/wp-content/uploads/2015/08/New-Brochure-FINAL-Connect-Enterprise.pdf

Stories from the Informed.AI network of websites

All the latest stories from the Informed.AI Network of websites which includes homeAI.info, Awards.AI, Events.AI, Showcase.AI and others.

British Computer Society Machine Intelligence Competition 2016

British Computer Society Machine Intelligence Competition 2016 http://bcs-sgai.org/micomp/ After a three-year gap it is with great pleasure that the British Computer Society Specialist Group on Artificial...

Timetable for the 2nd Annual AI Awards

We are pleased to announce the timetable for the key events as part of the 2nd Annual AI Awards. The Awards Timetable is: 1st July – Awards Categories Listed 1st September – Open for Nomination Voting 15th January – Voting Closes 1st February – Award Winners...

Our First Year Anniversary of homeAI.info

We are pleased to announce the official first year anniversary of homeAI.info. Since inception, it has been an amazing experience building up the homeAI.info website to what it is today. Initially the focus was to build a directory of information resources about...

How should we celebrate?

We are approaching our first anniversary and we would like to reach out to our community for suggestions on how we can celebrate. One idea we have is to give away a free ticket to an event via one of our media partners Re-Work. But how do we select the winner? Any...

Directory Updates – Super Saturday

Added a lot of updates to the directory across a number of categories today. We like to keep adding more, so tell us any links you would like to see added via our Add a Link page. We are also looking for features on our News and Magazine area so feel free to Submit a...

Spotlight your AI company

We are looking for established companies or startups that would like to feature in our spotlight. We are also happy to feature individuals if they are involved in the field; we have previously featured an author. Since we started the spotlight we have had a lot of...

Some small Menu changes

A very quick update to let you all know we have slightly changed the top menu structure of homeAI.info. Nothing major, just a couple of very small changes. We have added Events and Careers menus which redirect to Awards.AI and Vocation.AI websites. These are two of...

Vocation.AI – Careers and Jobs Portal is now live

We are very pleased to announce that another mini-site spin-off of homeAI.info has now gone live. Vocation.AI is our careers and jobs portal for those interested in finding opportunities in the field of Artificial Intelligence and Machine Learning. Currently the site has...

Our First Super Sunday for 2016

Welcome everyone to our first Super Sunday for 2016. Yes, I know it’s the last day of January already, but it’s been a very busy time for us. Over the past few weeks we were very busy with our AI Awards announcements, but now we are back and fully focused on homeAI.info....

Spotlight for 2016

We are looking for established companies or startups that would like to feature in our spotlight this year. We are also happy to feature individuals if they are involved in the field; we have previously featured an author. Since we started the spotlight we have had a...

AI Awards 2015 – Winners Announced

Awards.AI today announces the winners of the Global Annual Achievement Awards for Artificial Intelligence for 2015. Full details of the categories and winners can be seen on the Awards.AI website. We only launched the AI Awards a few months ago, and have had a...

Student (Brand) Ambassadors for homeAI.info

To help promote homeAI.info to the various student groups across the different Universities and departments that teach and research Artificial Intelligence, we are looking for Student Volunteers to become Brand Ambassadors for homeAI.info. This will simply be about...

AI Conference (hosted by BCS SGAI) – Open Mic Session

During the three day AI Conference hosted by the British Computer Society Special Group for AI (BCS SGAI) at Cambridge University, an open mic session allowed an opportunity for the founder of homeAI.info to promote the website with delegates of the conference. This...

More updates on our directory with #SuperSunday 5

It’s been a while since we did our last Super Sunday on our directory, but today we added a number of links to several categories, including Data Science and Software Tools. While our resource directory for AI and ML was our first offering to our users, we still are...

Six Month Anniversary of homeAI.info – 30th Nov 2015

We are pleased to announce the official six month anniversary of homeAI.info. Since inception, it has been an amazing experience building up the homeAI.info website to what it is today. Initially the focus was to build a directory...

Notice: Planned Maintenance Work

Please note, over this weekend our hosting partner will be doing routine maintenance work on our websites to improve performance. During the weekend, the website will be available but users may...

Part of the Informed.AI Network

We are pleased that homeAI.info was the first website to join the Informed.AI Network. Informed.AI is a collection of websites about AI. Each site is aimed at a specific topic, event or publication, delivering a range of information on AI and ML in different formats....

Your Content, Your Suggestions, Your Website, Your Community

We are always looking for more user content to share on our website. This can take many forms. Suggest a new link to add to our resource directory via the add-a-link page Submit a news story about your AI company, product or software via our submit a story page Or...

Awards and Showcase Dedicated Websites

We have updated our Awards and Showcase pages to link to our new dedicated websites for these topics. Awards.AI Showcase.AI Both of these websites, as well as homeAI.info are part of the Informed.AI network of information websites about...

AItimes.uk – The AI Times Newspaper

We are looking to produce an industry newspaper “The AI Times” and are very keen to hear from any company, startups, or individuals that would like to feature in our first edition. Also please contact us if you would like to advertise your products or...


Follow Us on Twitter

 

@Magazine_AI

News Channel

Video Channel

Forums Channel

Twitter Channel

home of Artificial Intelligence information

Resource Directory, News Stories, Videos, Twitter & Forum Streams, Spotlight, Awards, Showcase and Magazine