Saturday, March 30, 2019

Boston Dynamics AI

President of Boston Dynamics Marc Raibert and moderator Brian Heater speak onstage during TechCrunch Disrupt at Pier 48 in San Francisco, California. Photograph: Steve Jennings.
Boston Dynamics shows off its nimble warehouse robot.

If artificial intelligence will inevitably be a central part of how we interact with machines and services in the future, then learning more about who is developing these new AI systems, and what they are making, is essential to understanding and helping shape that world.

Robot-maker Boston Dynamics has shown off its latest futuristic creation: a robot that can effortlessly lift and transport packages with more agility than human handlers of goods and cargo. The American firm had previously revealed the Handle robot jumping around. Now a new video has emerged showing the technology in a more practical working environment, carrying boxes weighing about 11lb (5kg), though it is capable of taking three times that.

Handle is seen working autonomously, quickly and carefully stacking boxes in a neat row onto pallets, providing a glimpse of how warehouse technology could be revolutionized in the future. It uses a vision system to track marked pallets for navigation and to locate individual boxes that can then be grasped and placed elsewhere. Boston Dynamics is known for showing the world a new level of robotics, with human-like robots capable of parkour and a dog-like four-legged robot among its creations.

Friday, March 1, 2019

AI Augmented Government

  Emerging cognitive technologies have the potential to revolutionize the public sector
Many government agencies are already capturing the potential of artificial intelligence technologies, using them to relieve, replace, and augment humans in completing job-related tasks. For many people, artificial intelligence (AI) conjures images of humanoid robots and talking computers straight out of a science fiction film. But the cognitive and automation technologies behind AI could fundamentally transform the way public-sector employees work—eliminating some jobs, redesigning countless others, and even creating entirely new professions within the government.

AI-based technologies include machine learning, computer vision, speech recognition, natural language processing, and robotics; they are powerful, scalable, and improving at an exponential rate. Developers are working on implementing AI solutions in everything from self-driving cars to swarms of autonomous drones, from “intelligent” robots to stunningly accurate speech translation. AI could eventually revolutionize every facet of government operations. For instance, U.S. Citizenship and Immigration Services has created a virtual assistant, EMMA, that can respond accurately to questions posed in plain human language.

Cognitive technologies are already having a profound impact on government work, with more dramatic effects to come. AI-based applications could potentially reduce backlogs, cut costs, overcome resource constraints, free workers from mundane tasks, improve the accuracy of projections, inject intelligence into scores of processes and systems, and handle many tasks humans can’t easily do on their own, such as predicting fraudulent transactions, identifying criminal suspects via facial recognition, and sifting millions of documents in real time for the most relevant content. The potential is vast. Few technologies promise to help an organization increase speed, enhance quality, and reduce costs at the same time, but cognitive technologies offer that tantalizing possibility.

AI presents governments with new choices about how to get work done, with some work fully automated, some divided among people and machines, and some performed by people but enhanced by machines. In this study, we offer a roadmap for government leaders seeking to understand this emerging landscape. We’ll describe key cognitive technologies, demonstrate their potential for the government, outline some promising choices, and illustrate how government leaders can determine the best near-term opportunities.

In May 2017, Congress established the bipartisan Congressional Artificial Intelligence Caucus, and members have since introduced numerous pieces of AI legislation. More recently, the administration launched the American AI Initiative through a February 2019 executive order, and the Department of Defense released its own strategy on how to incorporate AI into national security. As government use of AI evolves, agency leaders will look for pathways to capitalize on opportunities, and the workforce will need new technical and social skills to succeed in AI-augmented workplaces.  At the lower end of the scale, automating tasks regularly performed by computers could free up 266 million U.S. federal government working hours annually, potentially saving $9.6 billion. At the higher end, as many as 1.1 billion working hours could be freed up every year over the course of the next five to seven years, saving $37 billion, as estimated in a recent Deloitte Consulting LLP report on AI-augmented government.

A report produced by the IBM Center for the Business of Government and the Partnership for Public Service addresses how government can best harness AI's potential to transform public-sector operations, services, and skill sets. The report draws on insights from a series of roundtables with government leaders to explore pressing issues surrounding AI, share best practices for addressing solvable challenges, and work toward an implementation roadmap for government to maximize the benefits of AI. More specifically, it finds that AI could enable agencies to fulfill their numerous roles efficiently and effectively by reducing or eliminating repetitive tasks, revealing new insights from data, driving better decision-making, improving customer service, and enhancing agencies' ability to achieve their missions. AI could help employees focus on core issues related to their agencies' missions and spend fewer hours on administrative duties.

A “Three Vs” framework can help government agencies assess their best opportunities for investing limited resources in AI technologies. The framework enables decision-makers to gauge the extent to which AI may be viable in the near future, whether there is value in assigning specific tasks to machines, and whether AI applications are vital to tasks involving information mining and analysis.

Viable. Some tasks that require human or near-human levels of speech recognition or vision can now be performed automatically or semi-automatically using technology. Examples include initial telephone customer contacts and the processing of handwritten forms. Cognitive technologies, meanwhile, can make predictions based on large quantities of unstructured data, identify fraud patterns and clues buried in financial information, and spot trails behind public health crises. 

Valuable. Just because tasks can be automated doesn’t mean they should be. Some manual functions are already performed efficiently and competently and are not necessarily attractive candidates for automation. However, it makes sense to automate functions that can be easily monitored—and thus turned over to machines—or those involving massive volumes of information. Such tasks might include determining program eligibility, processing invoices, or tabulating tax data. Moreover, professionals frequently perform responsibilities that may not actually require their expertise, so AI could free up their time for higher-value tasks. Accountants, for instance, may analyze hundreds of contracts looking for patterns and anomalies—likely relying more on reading than accounting skills. AI technologies could take over the process of scanning and extracting contract terms. In fact, cognitive technology in the legal field can find relevant documents for discovery faster and more thoroughly than lawyers can.

Vital. Processing high volumes of certain business transactions in government, such as those requiring a high degree of human attention and analysis, may not be achievable without the support of cognitive technologies. For example, with the help of optical character recognition, one Georgia agency processes 40,000 campaign finance disclosure forms per month, many of them handwritten. Machine learning could be critical to numerous government functions, from fraud detection to cybersecurity. A learning system that can respond to ever-changing threats by learning from past experience and external modeling may be the best defense against adversaries ranging from rogue states to cybercriminals.
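As a toy illustration of how an agency might operationalize this screen, the sketch below scores candidate tasks on each "V" and flags those worth piloting. The 0-5 scale, the threshold, and the example tasks are all invented for illustration; they are not part of the framework described above.

```python
# Minimal sketch of a "Three Vs" screen for candidate AI tasks.
# Scale, threshold, and task names are assumptions, not the report's.

def assess_task(name, viable, valuable, vital):
    """Score a task on each 'V' (0-5) and flag strong AI pilot candidates."""
    scores = {"viable": viable, "valuable": valuable, "vital": vital}
    # A task is worth piloting only if it clears every dimension.
    candidate = all(score >= 3 for score in scores.values())
    return {"task": name, **scores, "pilot_candidate": candidate}

tasks = [
    assess_task("Process handwritten forms", viable=5, valuable=4, vital=4),
    assess_task("Draft policy strategy", viable=1, valuable=2, vital=1),
]
for t in tasks:
    print(t["task"], "->", "pilot" if t["pilot_candidate"] else "skip")
```

Requiring every dimension to clear the bar mirrors the report's logic: a task that is viable but not valuable, or valuable but not yet viable, is a poor first investment.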

Four Ways to Deploy AI in Government

Relieve. This approach lets technology take over mundane tasks, allowing employees to focus on higher-value work. In the U.K., one central agency automated the most tedious aspect of its call center work—opening case numbers. The agency estimates this reduced handling times by 40 percent and processing costs by 80 percent.

Split up. Automation technologies can be applied to a specific job or task, leaving humans to complete the rest and perhaps only supervise the application's work. At the United Nations, for example, language-translation software creates live transcripts of assembly meetings for spectators, while human translators revise them later for publication. In addition, the White House and U.S. Citizenship and Immigration Services have designed chatbots to answer basic online questions while leaving more complicated queries to humans.

Replace. In this model, AI completely replaces an entire function or job once performed by humans. The best opportunities involve repetitive tasks. For instance, the U.S. Postal Service uses handwriting-recognition technology to sort mail by ZIP code, work that once belonged to postal clerks; some machines can process up to 18,000 pieces of mail an hour, far surpassing any human sorter.
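The routing step that follows handwriting recognition can be sketched in a few lines: once a ZIP code has been read, the first digit determines the regional bin, and anything the recognizer could not read cleanly falls back to a human. The bin mapping and the fallback rule here are invented for illustration, not the Postal Service's actual scheme.

```python
# Toy sketch of ZIP-based mail routing after handwriting recognition.
# Bin mapping and fallback rule are assumptions for illustration.

BINS = {"0": "Northeast", "3": "Southeast", "9": "West"}

def route(zip_code):
    """Route a recognized 5-digit ZIP to a regional bin by its first digit."""
    if len(zip_code) != 5 or not zip_code.isdigit():
        return "manual review"  # unreadable result falls back to a human
    return BINS.get(zip_code[0], "other regions")

print(route("02134"))  # Northeast
print(route("9O210"))  # manual review (letter O misread for zero)
```

The fallback branch is where the "replace" model quietly keeps a human in the loop: automation handles the clean majority, and the exceptions are escalated rather than guessed at.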

Augment. The human workforce and its skills can be combined with AI technologies to achieve faster and better results. When technology is designed to augment, humans remain in the driver’s seat. An example is IBM’s Watson for Oncology, which uses cognitive technology to recommend individual patient treatment plans to physicians, citing evidence and a confidence score for each recommendation to help doctors make more fully informed decisions.

Government employees will need new skills to succeed in an AI-enabled world. As AI becomes more ubiquitous in everyday business, government workplaces should emphasize technical expertise, digital skills, and data literacy.
  • The report recommends that agencies secure sufficient funding for AI projects and provide government employees with extensive, ongoing training in technology, digital skills, and data analysis so they can succeed in an AI workplace.
  • Stakeholders should work with relevant agencies and academic institutions to establish an AI talent team, similar to the U.S. Digital Service, governed by rules that make it easy to hire top AI talent from the private sector for time-limited stints in government, helping departments and agencies that need AI expertise.
AI will likely fundamentally transform how government works, and the changes may come sooner than many expect. As cognitive and automation technologies advance in power and capability, government agencies can bring more creativity to strategic workforce planning and work design, and leaders can work together to analyze the interplay of talent, technology, and design to propose a path forward for AI in government.

Other contributors to this article: Claude Yusti, William D. Eggers, David Schatsky, Dr. Peter Viechnicki, Tatiana Sokolova, and Alayna Kennedy from IBM, and Peter Kamoscai, E.A Nambili Samuel, and Katie Malague from the Partnership for Public Service.

Sunday, February 24, 2019

Artificial Intelligence In Warfare

     Artificial Intelligence (AI) is becoming a critical part of modern warfare and defense measures
While artificial intelligence is often hyped as a business savior and derided as a job killer, the question of AI ethics also arises whenever people discuss military technology, particularly in the wake of the Project Maven furore at Google, a project run under the US Department of Defense and its Joint Artificial Intelligence Center. AI has broadened the scope of modern applications to include war machines. It is because of the competencies the technology offers that scientists have started applying AI in the defense sector to patch up the limitations of human beings.

Military adoption of AI is not a distant prospect. The reality is that AI is already a growing part of the modern military strategy of many countries; NATO members and others such as China and Russia have increasingly embraced it for national defense and security. Just this month, the Pentagon released a memo that calls for the rapid adoption of AI in all aspects of the military and asked for the collaborative help of big tech firms.

Earlier in the year, the US sought clearer ethical guidelines for the use of AI. Dana Deasy, CIO at the US Department of Defense, told the press: “We must adopt AI to maintain our strategic position and prevail on future battlefields.” Oracle, IBM, Google, and SAP have all indicated interest in working on future Department of Defense AI projects. When people think of military AI, they may first think of the ‘killer robots’ or autonomous weapons that many have warned about. While AI weapons are a stark reality, many deployments involve more mundane uses such as automated diagnostics, defensive cybersecurity, and hardware-maintenance assistance.

The contentious use of facial recognition by US immigration authority ICE can also be considered a deployment of AI in an increasingly militarised landscape. The uses of AI in defense are plentiful. Antony Edwards is COO of Eggplant, a provider of continuous intelligent test-automation services with some clients in the defense space. These services are used by NASA to ensure all the systems in the Orion spacecraft's digital cockpit behave correctly. “That these instruments are showing the correct information, and entering information into the instrument has the correct effect, is clearly critical to mission success,” Edwards explained. The Federal Aviation Administration also uses Eggplant to ensure its digital displays are correct: i.e., if an aircraft comes into the monitored airspace, it shows on the appropriate screen in the appropriate way.

How should AI be approached? 

According to an Electronic Frontier Foundation (EFF) white paper geared towards militaries, there are certain things that can be done to approach AI in a thoughtful way. These include supporting civilian leadership of AI research, supporting international agreements and institutions on the issues, focusing on predictability and robustness, encouraging open research and dialogue between nations, and placing a higher priority on defensive cybersecurity measures. Looking at ethical codes, some legal experts argue that ethics themselves are too subjective to govern the use of AI, according to the MIT Technology Review.

AI applications

With giant leaps in AI and robotics, drones, and hacking toolkits aimed at national defense systems, such capabilities are no longer limited to sci-fi movies. The applications of AI in the military environment are advancing rapidly with every passing day.
  • 1. Military drones for surveillance: The popularity of military drones has skyrocketed in recent years. Drone technology has come a long way since its inception and now powers unmanned aerial vehicles that carry out tasks ranging from inspecting terrain to conducting remote flights. Military units across the world are employing drones to: channel remote communication, both video and audio, to ground troops and military bases; track enemy movement and conduct reconnaissance in unknown areas of a war zone; assist with mitigation after a war by searching for lost or injured soldiers and giving recovery insights for a terrain; and aid with operations like peacekeeping and border surveillance.
  • 2. Robot soldiers for combat: While drones help guard aerial zones, robots can be deployed on land to assist soldiers in ground operations. These highly functional, intelligent robots, designed with strategic goals in mind, add a cutting edge to technology in the defense sector. With advancements in machine learning and robot building, scientists have succeeded in building bipedal humanoid robots to execute a variety of search-and-rescue operations, as well as to assist soldiers during combat. Robot fleets function like soldier units and carry out coordinated armed activities using multiple techniques. They are self-reliant, adaptable, and equipped with their own fault-tolerant systems, all of which contribute to their ability to make and execute decisions swiftly and competently.
  • 3. Intelligent Management: While military tactics are being continuously improved, there also needs to be an improvement in the way information is analyzed in the army bases. The data collected by drones and robots, while on the war field, needs to be structured and grouped in an organized manner to make the information insightful. Satellite imagery, terrain information, and data from multiple sensors can be used to create situational awareness by applying deep learning, statistical analysis, and probabilistic algorithms to such data. 
  • 4. Cybersecurity: With a lot of military sites being digitized, it is necessary to secure the information stored on these web portals. AI comes to the rescue by offering cybersecurity options as a response to the malware, phishing, and brute force attacks on data centers and government websites.
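The brute-force detection mentioned in the last point can be caricatured very simply: count failed logins per source address and flag the outliers. Real defensive systems learn baselines rather than using a fixed cutoff; the threshold and log format below are assumptions for illustration.

```python
# Toy brute-force detector: flag source IPs whose failed-login count
# exceeds a fixed threshold. Threshold and input format are assumptions.

from collections import Counter

def flag_brute_force(failed_logins, threshold=5):
    """failed_logins: list of source IPs, one entry per failed attempt."""
    counts = Counter(failed_logins)
    return sorted(ip for ip, n in counts.items() if n > threshold)

attempts = ["10.0.0.5"] * 8 + ["10.0.0.7"] * 2 + ["10.0.0.9"] * 6
print(flag_brute_force(attempts))  # ['10.0.0.5', '10.0.0.9']
```

The AI-based systems described above replace the hard-coded threshold with a model of normal traffic, which is what lets them respond to attacks that a static rule would miss.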
Human rights issues

Many leading human rights organizations argue that the use of weapons such as armed drones will lead to an increase in civilian deaths and unlawful killings. Others are concerned that unregulated AI will lead to an international arms race. This is a concern for many who are not convinced that AI, as it exists now, should be deployed in certain circumstances, due to vulnerabilities and a lack of knowledge of the weaknesses in certain models.
AI expert David Gunning spoke about the issues: “We don’t want there to be a military arms race on creating the most vicious AI system around … But, to some extent, I’m not sure how you avoid it. “Like any technology arms race, as soon as our enemies use it, we don’t want to be left behind. We certainly don’t want to be surprised.” Edwards believes that more awareness of AI among software acquirers is an important element when it comes to using it in these contexts. “AI breaks many of the assumptions that people make about software and its potential negative impacts, so anyone acquiring a product that includes AI must understand what that AI is doing, how it works, and how it is going to impact the behavior of the software.

 “They must also understand what safety mechanisms have been built in to protect against errant algorithms.”

AI ethics can be unclear

Luca De Ambroggi is senior research director of AI at IHS Markit, with decades of experience in AI and machine learning. He says that when it comes to military projects, ethics “can get very muddy”. He added: “AI ethics are generally complex at a global level precisely because we share different cultures and have different values.

 AI usage will remain with the human operator for now, as it is still intended to aid humans at a tactical and command level. “For this reason, it is vital a code is developed and adhered to. However, we must continue to research the benefits and pitfalls of widespread AI application and implementation within military usage, to further inform the ethics of AI.”

Who makes the call?

Principal technology strategist at Quest, Colin Truran, got to the core of the issue when it comes to AI ethics in a general sense: “The current overarching conundrum surrounding AI ethics is really in who decides what is ‘ethical’.

AI is developing in a global economy, and there is a high likelihood of data exchange between multiple AI solutions.” Ultimately, these are ethical quandaries that will likely take years to answer, if such a feat is even possible. As the EFF notes, the next several years will be a critical period in determining how militaries use AI: “The present moment is pivotal: in the next few years either the defense community will figure out how to contribute to the complex problem of building safe and controllable AI systems …”

In January 2019, the head of U.S. Army acquisitions said that allowing artificial intelligence to control some weapons systems may be the only way to defeat enemy weapons. The U.S. military has embraced AI, arguing that America cannot compete against potential adversaries such as Russia and China without the futuristic technology. Concern over placing machines in charge of deadly weapons has prompted military officials to adopt a conservative approach to AI, one that involves a human in the decision-making process for the use of deadly force.

Bruce Jette, assistant secretary of the Army for Acquisitions, Logistics and Technology (ASAALT), said it may not be wise to put too many restrictions on AI teamed with weapons systems. "People worry about whether an AI system is controlling the weapon, and there are some constraints on what we are allowed to do with AI," he said at a Jan. 10 Defense Writers Group breakfast in Washington, D.C. There are a number of public organizations that have gotten together and said, "We don't want to have AI tied to weapons," Jette explained. 

The problem with this policy is that it may hinder the Army's ability to use AI to increase reaction time in weapon systems, he said. "Time is a weapon," Jette said. "If I can't get AI involved with being able to properly manage weapons systems and firing sequences then, in the long run, I lose the time deal. "Let's say you fire a bunch of artillery at me, and I can shoot those rounds down, and you require a man in the loop for every one of the shots," he said. "There are not enough men to put in the loop to get the job done fast enough."

 Jette's office is working with the newly formed Army Futures Command (AFC) to find a clearer path forward for AI on the battlefield. AFC, which is responsible for developing Army requirements for artificial intelligence, has established a center for AI at Carnegie Mellon University, and Jette added that ASAALT will establish a "managerial approach" to AI for the service.

Saturday, February 23, 2019

Can AI Replace Your Manager?

                 As AI capabilities grow some managerial positions are at risk for total automation. 
In 2018, Amazon abandoned development of a smart AI recruiting tool. Up until it was scrapped, the algorithm was considered state of the art. As a learning machine, the AI was fed ten years' worth of resumes to help identify patterns in successful hires; the only issue was that the industry had been predominantly male-dominated. The end result was a biased, sexist machine that favored male applicants over female applicants, even going so far as to filter out female names and applicants listing all-women colleges. The AI excelled at identifying patterns, matching applicants to positions, and organizing piles of resumes to make suggestions. But hiring is another story. Machines don’t have the emotional capacity, the human touch, to get a personal feel for a potential hire; that is best left to talented and intuitive hiring managers.

What we learn from the failed Amazon project is not the failing of AI itself but how well it can work alongside a human manager. By picking up the slack of paperwork, AI lets managers focus on the candidates themselves. It also shows how AI can manage data on a scale that no person ever could, though the lack of human touch poses its own issues, and opportunities.
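The failure mode behind Amazon's tool can be reproduced in miniature: a model trained on historically skewed hiring outcomes learns to penalize words correlated with the underrepresented group. The toy word-frequency scorer below (the data and tokens are invented, and this is nothing like Amazon's actual system) shows how that happens mechanically.

```python
# Toy demonstration of bias learned from skewed training data.
# Resumes and tokens are invented for illustration.

from collections import Counter

def train(resumes_hired, resumes_rejected):
    """Weight each word by how much more often it appears in hired resumes."""
    hired = Counter(w for r in resumes_hired for w in r.split())
    rejected = Counter(w for r in resumes_rejected for w in r.split())
    vocab = set(hired) | set(rejected)
    return {w: hired[w] - rejected[w] for w in vocab}

# Toy history: past hires skew male, so gendered tokens become proxies.
hired = ["captain chess club", "captain debate team"]
rejected = ["captain womens chess club"]
weights = train(hired, rejected)

score = lambda resume: sum(weights.get(w, 0) for w in resume.split())
# "womens" only ever appeared in rejected resumes, so it drags scores down.
print(score("womens debate team") < score("debate team"))  # True
```

The model never sees a column labeled "gender"; it simply rediscovers the historical pattern in the text, which is exactly why auditing training data matters more than auditing the algorithm.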

Today, payroll managers and paymasters face a 96% chance of automation, going against many predictions that it would only be service industry positions and other predictable tasks that could see an overhaul of this kind. Gathering data, analyzing said data, and spitting out solutions is what this kind of AI does best and it seems to be creeping into management roles. 

Ask any smart manager and they’ll be the first to tell you how valuable humanity is when it comes to being a leader. Yet more and more office managers and project managers find themselves inundated with repeatable, even mindless tasks. From ordering new office supplies to coordinating with the weekly cleaning crew, sometimes a manager is left with little time to actually manage. Wherever we go, tech seems to pick up our slack. Taking over the menial, time-consuming, and repeatable tasks of daily work operations saves us time and valuable energy.

From a manager’s perspective, this absolutely changes the game. But making this kind of digital switch isn’t always easy, especially for small businesses with limited staff, resources, and funds. Around one in five small-business leaders think the process of selecting and implementing new tech is just not worth the hassle, and more than one in three think they don’t even need more tech. Yet when it comes down to brass tacks, four in five small businesses believe they could benefit from better tech, as long as it’s the right kind. Business leaders should ask themselves some key questions: What daily responsibilities consume too much time, and what gets overlooked when it’s crunch time? What could operations benefit from most: time-saving tech, data-processing automation, maybe even mobile-access support?

Daily office operations and the responsibilities of office managers get an auxiliary boost from programs like Managed By Q. Ordering office supplies and scheduling maintenance and cleaning can all be done with the touch of a button. Furthermore, built-in hiring algorithms to find receptionists, assistants, and other staff can easily be reached through the iPad hub the service runs on. Today, more than half of small businesses use some form of tech to help with the hiring process.

Project management gets streamlined and simplified as well, as the AI platform iCEO takes on big projects and shrinks them down into easily achievable, scalable pieces. By a project's deadline, iCEO is capable of generating a research report of up to 124 pages. Other AI programs, from hiring to scheduling, are available, but there really is no one-size-fits-all AI bot for business. Ready to find the perfect fit for your business and get back to doing what matters? Let this infographic be a guide to AI management tools, their capabilities and weaknesses, and how to find the most sustainable and scalable option for any business.

Infographic: Can AI replace your manager?

Wednesday, January 30, 2019

AI and Genomic Medicines

 An intermarriage of Biomedical Imaging Technology with Artificial Intelligence. Photo/iStock.
AI genomic medicine aims to develop higher-quality drugs by using deep-learning techniques to find related or contrasting patterns in genomic and medical data

The next blockbuster drug could be developed with help from machine-learning techniques rapidly emerging from AI research aimed at enhancing pharmacology labs. The utility of artificial intelligence (AI) has been explored in a multitude of industries (transportation, communication, security), with healthcare being a core focus of AI research in the 21st century. Healthcare is a complex industry, and the AI needed for each of its components varies: artificial intelligence in healthcare must address pharmaceutical needs from the distinct perspectives of the general practitioner, the patient, the regulatory authority, and the health-management system. Pharmaceutical AI projects are at the forefront, and most focus on two areas: patient-care platforms, specifically for diagnostics, and therapeutics (drug discovery).

What exactly is artificial intelligence, and how can we use it?

Whatever the phrase “artificial intelligence” conjures, the technology is unquestionably doing more than most of us realize, for both good and bad. It’s already being deployed in health care and warfare; it’s helping people make music, write books, and manage conferences; it’s scrutinizing your resume, judging your creditworthiness, and tweaking the photos you take on your phone or share on social media. In short, it’s making decisions that affect your life whether you like it or not. In simple terms, AI is an algorithm, in the form of software, hardware, or an application, that can utilize massive amounts of data for multiple tasks. Machines have an advantage over the human brain: they can store, access, and process practically limitless volumes of information and apply it quickly to a predefined task.

AI and Drug Discovery: The process of bringing a drug to market can take years or even longer. It costs billions and can even ruin a company that fails in late-stage trials after pouring in so much investment. Artificial intelligence and the autonomic-computing concept have become more and more important in addressing these issues, which shows that AI is increasingly the future of drug discovery. The commercial market demands faster and better drug discovery as well as delivery.

Artificial Intelligence (AI) in the Pharmaceutical industry and its future innovations.  
Perhaps the most obvious application of artificial intelligence in pharma is using its ability to quickly ‘read’ vast amounts of scientific data: research published in journals, as well as patient records and tissue/blood samples, and to use patterns in the data to form scientific hypotheses that can direct pharma companies’ drug development. The speed of AI in these processes allows companies to develop drugs based on biological markers, with greater accuracy, rather than the scattergun approach of chemical screening. In this way, companies can zero in on the particular indications a drug is most likely to treat successfully.
The ever-rising costs of drug research and development, the frustratingly long time spent bringing novel drugs to market, and the high rate of failure in the process all need to be tackled.
Boston-based biotech Berg’s Niven Narain says the company’s AI platform, Interrogative Biology, allows researchers to examine 14 trillion data points in a single tissue sample. Narain states that artificial intelligence will halve the time (and potentially the cost) of development, and Berg hopes soon to bring its candidate BPM31510 to market. Similarly, IBM’s Watson supercomputer is currently conducting AI-based trials in which it scans mutation data from the tumors of 20 brain-cancer patients. This is something that would usually take human scientists several weeks or months to analyze, but Watson can do the same in a matter of minutes.

Through machine learning, Watson gets the process done better and faster. Ultimately, the screening process could become fast enough to analyze the entire genome of each patient’s individual cancer and tailor treatments to its specific mutations, if they exist. If not, there will be a company interested in putting that right. In the UK, the University of Manchester’s AI platform, known as Eve, can screen more than 10,000 compounds in a day, matching them to likely targets. Again, through machine learning and hypothesis testing, Eve recognizes why ‘she’ has succeeded, and so gets faster the more screening she performs.

Lower drug pricing

Cheaper drug development should enable cheaper prices. Drug pricing is a hugely controversial issue in the industry nowadays, and the reputations of pharma companies are suffering as a result of massive price hikes. Pharma companies will often justify such increases by citing the huge costs of research and development, so if such costs can be significantly reduced, as Narain suggests AI can achieve, they may no longer be able to use this justification, and prices should (in theory) fall.
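The virtual-screening loop that platforms like Eve run can be caricatured in a few lines: score every compound in a library against a target with a predictive model, then promote only the top hits to wet-lab testing. The scoring function below is a deliberately crude stand-in (a feature-overlap heuristic with invented compound names), not any real platform's trained model.

```python
# Toy virtual-screening loop: rank a compound library by a predicted
# activity score and keep the top hits. All data here is invented.

def predicted_activity(compound, target):
    """Stand-in for a learned activity model; real systems use trained ML."""
    # Crude heuristic: overlap between compound features and target site.
    return len(set(compound["features"]) & set(target["site"]))

def screen(compounds, target, top_k=2):
    """Rank the library by predicted activity and keep the top hits."""
    ranked = sorted(compounds, key=lambda c: predicted_activity(c, target),
                    reverse=True)
    return [c["name"] for c in ranked[:top_k]]

library = [
    {"name": "cmpd-001", "features": ["amine", "ring", "polar"]},
    {"name": "cmpd-002", "features": ["halide"]},
    {"name": "cmpd-003", "features": ["ring", "polar"]},
]
target = {"site": ["ring", "polar", "amine"]}
print(screen(library, target))  # ['cmpd-001', 'cmpd-003']
```

The economics described above come from this funnel shape: the model is cheap to run per compound, so tens of thousands can be scored in a day while only a handful proceed to expensive laboratory assays.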

AI can help pharma companies not only in discovery but right up to approval, and even in the general running of the business. After a promising candidate is discovered, AI could be used to design more effective clinical trials and to analyse the resulting data more quickly. Even business decisions may be handed over to supercomputers. Consider the huge number of mergers and acquisitions already tendered: AI could analyse the potential synergies from merging particular companies, allowing them to decide whether a combination is worthwhile. If it is, AI can then help make decisions on integrating the R&D departments, for example, and could also have a hand in the digital sales and marketing process.

In 2015 Eularis released a cloud-based marketing analytics platform for the pharma industry, backed by cutting-edge algorithms and the same machine learning capability used in Waymo, Google’s driverless car project. These types of AI applications can learn input/output relationships, effectively mimic them, and apply them to new products. The application of AI in pharma is in its infancy, and it could take two decades to reach its full potential. However, a technical revolution that could change the way drugs are brought to market already appears to be under way behind the scenes in medical labs, which is good news for pharma companies and patients alike. Wherever there is data to be analyzed or a business decision to be made, the betting is that the AIs of the future will challenge any current pharma executive to do it better and faster.

The Massachusetts Institute of Technology (MIT) has formed the “Machine Learning for Pharmaceutical Discovery and Synthesis Consortium.” The group brings together the pharmaceutical and biotechnology industries and the departments of chemical engineering, chemistry, and computer science at MIT. The goal of the collaboration is to develop software for automating small-molecule discovery and synthesis. Pharma companies currently involved in the consortium include:
  • Amgen
  • BASF 
  • Bayer 
  • Lilly
  • Novartis
  • Pfizer
  • Sunovion
IBM Research’s Watson for Drug Discovery is a cognitive platform with natural language processing trained on the life sciences domain. This AI-based approach analyses massive databases more comprehensively and faster than simple search tools or unaided research teams.

Dr. Robert Bowser, a steering hand in IBM Watson Drug Discovery research.
Deep Genomics, a Canadian company that uses machine learning to trace potential genetic causes of disease, has announced that it’s getting into drug development. It joins a growing list of AI companies betting that their techniques can help produce powerful new drugs by finding subtle signals in huge quantities of genomic data. Deep Genomics was founded by Brendan Frey, a professor at the University of Toronto who specializes in both machine learning and genomic medicine. His company uses deep learning, or very large neural networks, to analyze genomic data. Identifying one or more genes responsible for a disease can help researchers develop a drug that addresses the behavior of the faulty genes. The company will focus, at first, on early-stage development of drugs for Mendelian disorders, inherited diseases that result from a single genetic mutation. These diseases are estimated to affect 350 million people worldwide.

The paradigm shift toward AI in medicine and drug development is driven partly by the emergence of powerful new algorithms and partly by cost-effective new ways of sequencing whole genomes, reading out an entire DNA genome at once. “There’s an opening of a new era of data-rich, information-based medicine,” Frey says. “There’s a lot of different kinds of data you can obtain today in a short period. And the best technology we have for dealing with large amounts of data is machine learning and artificial intelligence.” Deep learning has emerged in recent years as a very powerful way to find abstract patterns using large amounts of training data. It has proved especially valuable for speech recognition and for classification (see “10 Breakthrough Technologies 2013: Deep Learning”). The approach is now rapidly finding new uses in other fields, where it offers a way to spot signs of disease in medical images or to predict disease from a patient’s medical record.

Frey, who trained as a computer scientist and studied at the University of Toronto under Geoffrey Hinton, a key figure in the development of deep learning, says Deep Genomics will seek to partner with a pharma company on drug development. But he adds that the company offers key expertise. “There’s going to be this really massive shake-up of pharmaceuticals,” Frey says. “In five years or so, the pharmaceutical companies that are going to be successful are going to have a culture of using these AI tools.”  The company has published work showing how deep learning can help identify patterns in DNA that might contribute to diseases such as spinal muscular atrophy and nonpolyposis colorectal cancer.
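As an illustration of how deep learning finds patterns in DNA: the first convolutional layer of a genomic network behaves much like a position-weight-matrix scan, sliding a small weight matrix over a one-hot-encoded sequence and firing where a motif appears. The motif and weights below are invented for the example; Deep Genomics’ actual models are far larger.

```python
# One-hot encode a DNA sequence: each base becomes a 4-element vector.
BASES = "ACGT"

def one_hot(seq):
    return [[1.0 if base == b else 0.0 for b in BASES] for base in seq]

# A single "filter": a 3-position weight matrix (rows = positions,
# columns = A, C, G, T). This one responds strongly to the motif "TAG".
motif_filter = [
    [0.0, 0.0, 0.0, 1.0],  # T
    [1.0, 0.0, 0.0, 0.0],  # A
    [0.0, 0.0, 1.0, 0.0],  # G
]

def scan(seq, filt):
    """Slide the filter along the sequence, like a 1-D convolution."""
    x = one_hot(seq)
    k = len(filt)
    return [sum(x[i + j][b] * filt[j][b] for j in range(k) for b in range(4))
            for i in range(len(x) - k + 1)]

scores = scan("ACTAGC", motif_filter)
print(scores)  # → [0.0, 0.0, 3.0, 0.0] — peak where "TAG" occurs
```

A trained network learns thousands of such filters from data rather than having them written by hand, and stacks further layers on top to combine motifs into higher-level patterns.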

Stephan Sanders, an assistant professor at UCSF School of Medicine in San Francisco who also specializes in using genomics and bioinformatics to study disease, says deep learning could help with drug development by finding patterns in sparse pathology data combined with large genomic data sets. “We have vast amounts of data; three billion data points per individual,” Sanders says. “What we have less of is the other end: clean data of phenotypes or outcomes.”

Several other companies are seeking to apply machine learning to drug development. These include BenevolentAI, a British AI company, and Calico, a subsidiary of Alphabet. Dr. Ken Mulvany, the founder of BenevolentAI, says his company is focused on diseases of inflammation and neurodegeneration and on rare cancers. Its AI project aims to tap into largely unused research data. “Developing medicines is still a very lengthy, risky, and expensive process with high rates of attrition,” Mulvany says. “[But] there is an enormous amount of untapped data located in pharma R&D organizations without any plans to develop it.”

Argumentative Assumptions: The world’s leading AI experts and developers stress that AI should be used for tedious and monotonous tasks, under human supervision. Humans are generally well suited, with natural wisdom and emotional intelligence, to “do no harm”; our weakness as human beings is that we lack the recall and the speed needed to access the volumes of data stored in our brains and correlate them with a multitude of options or events. AI has the potential to be the yin to a human’s yang, providing a check and balance.

The core differences between the capabilities of a human and those of artificial intelligence could be incorporated into a new hybrid model within the pharmaceutical industry, whereby AI assists humans in carrying out daily tasks with efficiency and expertise. AI may have the “intelligence” to excel at critical yet repetitive, tedious tasks such as identifying a new therapy, while humans apply the “wisdom” required to balance efficacy against adverse events.


Saturday, January 5, 2019

How Does AI Boost Airport Security And Speed Up Operations?

Heathrow security relies on AI to improve throughput across all aspects of airport operations.

Over the last two decades, airports worldwide have significantly ramped up security in response to emerging threats. Meanwhile, rising passenger expectations have put pressure on major transport hubs to bolster throughput, cut queues and make the journey from entrance to departure gate as seamless as possible. How can these two objectives be squared? For a number of governments and aviation hubs around the world, artificial intelligence could be the answer. Earlier this year, the UK Government invested £1.8m into the development of new AI systems to boost security and alleviate wait times across some of the country’s busiest airports. The US Transportation Security Administration has recently introduced new computed tomography (CT) scanners, which use AI to help target threats, at Los Angeles International Airport, John F. Kennedy and Phoenix airports. AI is popping up across the entire aviation spectrum, from self-service check-in robots to facial recognition checks at customs. An online YouGov poll found that around 68% of UK-based passengers would welcome more AI solutions at airports. But when it comes to speeding up the cautious process of airport security, could it be an effective solution?

The artificial intelligence system at Heathrow Airport maintains security. Photo: Evolv
Security scanners and machine learning: Crucially, AI systems improve as more and more information is fed into them. In the case of airport security, machine learning can be used to analyse data and identify threats faster than a human could. Items that previously needed to be scanned separately, such as laptops, can be kept in passenger luggage as they pass through security checkpoints. “AI enables us to do things today that we couldn’t do even five years ago,” says Evolv Technology CEO Michael Ellenbogen. “It enables us to train the computer in ways that we couldn’t before. You throw a lot of data at it and you use that data to train a model to recognise objects or signals of interest.”

Biometric ID management is a mainstay of facial and voice recognition. Photo: Evolv
In addition to checkpoints, AI could sharpen up security at the landside area of airports. The Evolv Edge system uses a combination of camera, facial recognition and millimetre-wave technologies to scan people walking through a portable security gate. Machine learning techniques are used to automatically analyse data for threats, including explosives and firearms, while ignoring non-dangerous items – for example keys and belt buckles – users may be carrying. According to Evolv, up to 900 people can pass through the security gate in an hour, making it significantly faster than conventional X-ray scanners. Edge has been deployed to screen employees at Oakland International Airport in the US, and is set to be launched at another unnamed major international airport in the country to scan passengers at landside. Ellenbogen says the industry has been training computers to pull out threat information for decades, but up until the last five years, ‘conventional computer vision techniques’ have had limited functionality when it comes to analysing images. However, recent breakthroughs in neural networks – frameworks for machine learning algorithms that power AI – and high-capacity computer chips have allowed AI systems to flourish. “In the security environment, we can put systems in the field with a certain level of capability, and continuously collect data from those systems,” says Ellenbogen. “We can then use that data to further train our algorithms, which makes them that much smarter.”
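To make the “threats versus benign items” idea concrete, here is a toy sketch of the final scoring step such a system might use: a logistic model over a few scan-derived features flags likely threats and ignores items like belt buckles. The features, weights and threshold are invented, and a real system would learn them from labelled scan data (typically with deep networks rather than a hand-set linear model):

```python
import math

# Weights a toy model might have learned from labelled scans
# (all feature names and values here are invented).
weights = {"metal_mass": 1.8, "density": 2.1, "shape_elongation": 0.9}
bias = -4.0

def threat_score(features):
    """Logistic probability that an item is a threat."""
    z = bias + sum(weights[k] * features[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

items = {
    "belt buckle": {"metal_mass": 0.3, "density": 0.8, "shape_elongation": 0.2},
    "handgun":     {"metal_mass": 1.5, "density": 1.9, "shape_elongation": 1.2},
}

# Flag only items whose threat probability clears the threshold.
flagged = [name for name, f in items.items() if threat_score(f) > 0.5]
print(flagged)  # → ['handgun']
```

The continuous-learning loop Ellenbogen describes amounts to retraining such a model as fielded systems stream back new labelled examples.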
One increasingly visible security concept, which goes hand in glove with AI, is biometrics. Earlier this year, tech specialist SITA reported that 77% of airports were planning major programmes in biometric ID management over the next five years. A mainstay in this field is facial recognition, which is already being used to scan passengers as they pass through customs at a number of major airports. At the time of writing, Hartsfield-Jackson Airport is in the process of launching the first biometric terminal in the US. Willing participants can use facial recognition scanners at self-service kiosks, TSA checkpoints and boarding gates. Fingerprinting, facial recognition and retinal scans are expected to be used increasingly for security purposes at airports. Meanwhile, tests are ongoing in behavioural biometrics. Researchers at the UK’s University of Manchester recently developed an AI system able to measure an individual’s gait, or walking pattern, when they step on a pressure pad. “Each human has approximately 24 different factors and movements when walking, resulting in every individual person having a unique, singular walking pattern,” said Omar Costilla Reyes, a researcher from Manchester’s School of Electrical and Electronic Engineering, in a university press release.
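The matching step of a gait-biometric system can be sketched simply: reduce each walk to a feature vector and identify the closest enrolled profile. The three-feature vectors below are invented for illustration; Manchester’s system reportedly uses around 24 such factors:

```python
import math

def gait_distance(a, b):
    """Euclidean distance between two gait feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical enrolled profiles: [stride time, step length, heel pressure]
enrolled = {
    "alice": [0.62, 1.10, 0.33],
    "bob":   [0.80, 0.95, 0.41],
}

# A fresh pressure-pad reading, reduced to the same three features.
sample = [0.63, 1.08, 0.35]

# Identify the walker as the nearest enrolled profile.
best = min(enrolled, key=lambda person: gait_distance(enrolled[person], sample))
print(best)  # → alice
```

A production system would also reject samples whose distance to every profile exceeds a threshold, rather than always returning the nearest match.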

Elsewhere, the recently launched, EU-funded iBorderCtrl project involves the trial of an AI programme to speed up border crossings. The solution consists of a virtual border guard asking passengers questions such as “What’s in your suitcase?”, while a webcam analyses their facial expressions. If the passenger is deemed to be lying, further biometric information is taken before they are passed on to a human officer for review. A key concern over the use of these technologies is accuracy, given that previous studies have identified unintentional biases in such systems. Early testing of iBorderCtrl showed a success rate of only 76%, but one of the project’s coordinators told New Scientist that this could be bumped up to 85%. “In my opinion, the jury is still out on the basic science behind detecting abnormal behaviour as an indication of mal-intent, and using cameras and AI to do that,” says Ellenbogen.

AI at airports: When it comes to boosting security throughput, the question remains whether investment in AI security scanners will be worth it. The TSA has come under fire for previously failed investments in scanner technology: a 2015 Politico article revealed that the organisation had spent $160m on body scanners, many of which had missed airport security threats during undercover testing. Another issue has come from privacy advocates, who remain concerned about the accuracy of biometrics and the potential misuse of information collected. “Face recognition is a tool, and like any other tool, when used in the right way it can be used to great effect; if used in the wrong way, it can be inappropriate,” says Ellenbogen. “If our customers have a watch list, we can put that in the system so the security folks know ahead of time if somebody’s a person of concern. That’s different from trying to identify everyone who’s walking through.” There are challenges ahead for AI in the airport security space, but a clear appetite for the technology has been established. Ellenbogen says that the technology is being refined, and that as it improves further, systems will become more cost-effective.

Where human staff are concerned, they will always be ‘part of the loop’ to deal with edge cases, such as someone leaving a gun in a bag. “The better the neural networks and the chips get, the smarter the system becomes and there’s more and more power available, which allows the neural networks to have more layers and have more intelligence in them, so it’s kind of a self-perpetuating cycle,” he says. “The better the systems are at focusing the human effort on areas of real concern, the smoother the entire process will go. If 99.99% of people, bags and cargo are automatically cleared by really smart systems and we’re focusing our human effort on the very small percentage of potential threats and those edge cases, then the entire process is going to get smoother for everybody.”


