NexBotix, the Robotic Process Automation (RPA) service, has officially launched in the UK. Its managed dashboard solution is applied to specific business objectives, ensuring that only the right processes are automated and that ROI can be delivered in as little as 30 days.

NexBotix delivers a low-cost solution for finance and accounting, HR, IT, and governance and compliance departments across banking, financial services, insurance, automotive, logistics, legal, retail and local government. Unlike anything else currently on the market, the platform can be deployed into existing IT infrastructure in just 14 days.

NexBotix uses today’s leading technology from major vendors such as Microsoft, Google, IBM Watson, Automation Anywhere, NICE, UiPath and ABBYY, alongside its own NexBots. The platform gives businesses the ability to scale their operations up and down according to demand and helps teams focus on higher-value tasks, all while driving down cost. The key to the multi-vendor approach is the NexAnalytics capability, which helps companies gain complete control of their digital workforce and ensures that the business-case ROI is delivered as specified.

Chris Porter, CEO of NexBotix, says: “The ‘plug, play, and managed’ element of our technology means that there’s minimal disruption to existing operations, and with no code to manage, it doesn’t require users to be tech-savvy to operate it. With some of the more established players in the market, there’s typically a three-month consultation period before any integration can begin, so is it any wonder that enterprises are becoming disillusioned with the actual impact automation can have? We’re so confident in our technology and team that we offer customers a guarantee of receiving ROI within three to nine months; though in many cases we’ve seen this happen within as little as a few weeks.”

“With NexBotix, it’s less about removing the human element and more about humans working alongside process automation to arrive at the best possible outcome, both in terms of efficiency and profitability. Where most businesses fail with AI implementation is that they lack the foundations intrinsic to its success. Where NexBotix differs is that we put a specific business situation first and build around that.”

The platform is managed by a team of experts within NexBotix, removing the need for companies to maintain dedicated technical resource, and the service can deliver quantifiable benefits within 30 days of implementation. In one case, NexBotix helped the sales department of a 3,000-employee customer service organisation achieve an ROI of 802% and payback within four weeks.

NexBotix has been spun out from Camwood Ltd, which has over 20 years of experience and a proven portfolio of products and services across intelligent automation. Most notably, Camwood sold AppDNA to Citrix in 2011 for $91.3m.

Mauro Guillén, Zandman Professor of International Management, The Wharton School, University of Pennsylvania, USA

Srikar Reddy, Managing Director and Chief Executive Officer, Sonata Software Limited and Sonata Information Technology Limited

Artificial intelligence (AI) relies on big data and machine learning for myriad applications, from autonomous vehicles to algorithmic trading, and from clinical decision support systems to data mining. The availability of large amounts of data is essential to the development of AI. But the scandal over the use of personal and social data by Facebook and Cambridge Analytica has brought ethical considerations to the fore. And it’s just the beginning. As AI applications require ever greater amounts of data to help machines learn and perform tasks hitherto reserved for humans, companies are facing increasing public scrutiny, at least in some parts of the world. Tesla and Uber have scaled down their efforts to develop autonomous vehicles in the wake of widely reported accidents. How do we ensure the ethical and responsible use of AI? How do we bring more awareness about such responsibility, in the absence of a global standard on AI?

The ethical standards for assessing AI and its associated technologies are still in their infancy. Companies need to initiate internal discussion as well as external debate with their key stakeholders about how to avoid being caught up in difficult situations.

Consider the difference between deontological and teleological ethical standards. The former focuses on the intention and the means, while the latter on the ends and outcomes. For instance, in the case of autonomous vehicles, the end of an error-free transportation system that is also efficient and friendly towards the environment might be enough to justify large-scale data collection about driving under different conditions and also, experimentation based on AI applications.

By contrast, clinical interventions and especially medical trials are hard to justify on teleological grounds. Given the horrific history of medical experimentation on unsuspecting human subjects, companies and AI researchers alike would be wise to employ a deontological approach that judges the ethics of their activities on the basis of the intention and the means rather than the ends.

Another useful yardstick is the so-called golden rule of ethics, which invites you to treat others in the way you would like to be treated. The difficulty in applying this principle to the burgeoning field of AI lies in the gulf separating the billions of people whose data are being accumulated and analyzed from the billions of potential beneficiaries. The data simply aggregates in ways that make the direct application of the golden rule largely irrelevant.

Consider one last set of ethical standards: cultural relativism versus universalism. The former invites us to evaluate practices through the lens of the values and norms of a given culture, while the latter urges everyone to live up to a mutually agreed standard. This comparison helps explain, for example, the current clash between the European conception of data privacy and the American one, which is shaping the global competitive landscape for companies such as Google and Facebook, among many others. Emerging markets such as China and India have for years proposed to let cultural relativism be the guiding principle, as they feel it gives them an edge, especially by avoiding unnecessary regulations that might slow their development as technological powerhouses.

Ethical standards are likely to become as important in shaping global competition as technological standards have been since the 1980s. Given the stakes and the thirst for data that AI involves, companies will need to ask very tough questions about every detail of what they do to get ahead. In the course of the work we are doing with our global clients, we are looking at the role of ethics in implementing AI. The way industry and society address these issues will be crucial to the adoption of AI in the digital world.

However, for AI to deliver on its promise, it will require predictability and trust. These two are interrelated. Predictable treatment of the complex issues that AI throws up, such as accountability and permitted uses of data, will encourage investment in and use of AI. Similarly, progress with AI requires consumers to trust the technology, its impact on them, and how it uses their data. Predictable and transparent treatment facilitates this trust.

Intelligent machines are enabling high-level cognitive processes such as thinking, perceiving, learning, problem-solving and decision-making. AI presents opportunities to complement and supplement human intelligence and enrich the way industry and governments operate.

However, the possibility of creating cognitive machines with AI raises multiple ethical issues that need careful consideration. What are the implications of a cognitive machine making independent decisions? Should it even be allowed? How do we hold such machines accountable for outcomes? Do we need to control, regulate and monitor their learning?

A robust legal framework will be needed to deal with those issues too complex or fast-changing to be addressed adequately by legislation. But the political and legal process alone will not be enough. For trust to flourish, an ethical code will be equally important.

The government should encourage discussion around the ethics of AI, and ensure all relevant parties are involved. Bringing together the private sector, consumer groups and academia would allow the development of an ethical code that keeps up with technological, social and political developments.

Government efforts should complement the many existing initiatives to research and discuss the ethics of AI, including those at the Alan Turing Institute, the Leverhulme Centre for the Future of Intelligence, the World Economic Forum Centre for the Fourth Industrial Revolution, the Royal Society, and the Partnership on Artificial Intelligence to Benefit People and Society.

But these opportunities come with associated ethical challenges:

Decision-making and liability: As AI use increases, it will become more difficult to apportion responsibility for decisions. If mistakes are made which cause harm, who should bear the risk?

Transparency: When complex machine learning systems are used to make significant decisions, it may be difficult to unpick the causes behind a specific course of action. Clear explanations for machine reasoning are necessary to determine accountability.

Bias: Machine learning systems can entrench existing bias in decision-making systems. Care must be taken to ensure that AI evolves to be non-discriminatory; a simple illustration of how such bias can be checked follows this list.

Human values: Without programming, AI systems have no default values or “common sense”. The British Standards Institute BS 8611 standard on the “ethical design and application of robots and robotic systems” provides some useful guidance: “Robots should not be designed solely or primarily to kill or harm humans. Humans, not robots, are the responsible agents; it should be possible to find out who is responsible for any robot and its behaviour.”

Data protection and IP: The potential of AI is rooted in access to large data sets. What happens when an AI system is trained on one data set, then applies learnings to a new data set?
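
To make the transparency and bias challenges above concrete, the following is a minimal sketch in Python, not taken from the original article, using synthetic data and scikit-learn. It assumes a hypothetical lending scenario with illustrative feature names: a linear model whose coefficients can be read as an explanation for its decisions, and a demographic-parity check that compares approval rates across a protected group.

```python
# A minimal, hypothetical sketch of two checks discussed above:
# reading a model's reasoning from its coefficients (transparency)
# and measuring group-level outcome disparity (bias).
# All data and feature names here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: income, credit-history length, and a protected
# group attribute (0/1) that should not drive the decision.
income = rng.normal(50, 15, n)
history = rng.normal(10, 4, n)
group = rng.integers(0, 2, n)

# Simulated historical decisions, partly skewed by group membership,
# standing in for the "existing bias" a model can entrench.
score = 0.05 * income + 0.3 * history - 1.5 * group
labels = (score + rng.normal(0, 1, n) > np.median(score)).astype(int)

X = np.column_stack([income, history, group])
model = LogisticRegression(max_iter=1000).fit(X, labels)

# Transparency: in a linear model, each coefficient shows how a feature
# pushes the decision, so a specific outcome can be unpicked and explained.
for name, coef in zip(["income", "history", "group"], model.coef_[0]):
    print(f"{name:>8}: {coef:+.3f}")

# Bias: demographic parity compares approval rates across groups; a large
# gap signals the model has learned, and will entrench, the historical skew.
approved = model.predict(X)
gap = abs(approved[group == 0].mean() - approved[group == 1].mean())
print(f"approval-rate gap between groups: {gap:.3f}")
```

Dedicated fairness and explainability tooling goes much further than this, but even a simple check of this kind can surface a disparity, and make an individual decision explainable, before a system is deployed.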

Responsible AI demands attention to moral principles and values, so that fundamental human ethics are not compromised. There have been several recent allegations of businesses exploiting AI unethically. However, Amazon, Google, Facebook, IBM and Microsoft have established a non-profit partnership to formulate best practice on artificial intelligence technologies, advance the public’s understanding, and serve as an open platform for discussion about artificial intelligence.

Peltarion, a leading AI innovator and creator of an operational deep learning platform, today announced the findings of a survey of AI decision-makers examining what they see as the impact of the skills shortage and how it can be overcome. The research, ‘AI Decision-Makers Report: The human factor behind deep learning’, presents the findings of a survey of 350 IT leaders in the UK and Nordics with direct responsibility for shepherding AI at companies with more than 1,000 employees.

The report finds that many AI decision-makers are concerned about the business impact of the deep learning skills shortage. 84% of respondents said their company leaders worry about the business risks of not investing in deep learning, with 83% saying that a lack of deep learning skills is already impacting their ability to compete in the market. These companies are focusing squarely on recruiting data scientists (71% of AI decision-makers are actively recruiting to plug the deep learning skills gap), and the shortage is already impacting their ability to progress with AI projects:

  • Almost half (49%) say the skills shortage is causing delays to projects
  • 44% believe the need for specialist skills is a major barrier to further investment in deep learning
  • However, almost half (45%) say they are struggling to hire because they don’t have a mature AI program already in place

“This report shows that companies can’t afford to wait for data science talent to come to them to progress their AI projects. The fact is, many organisations are already starting to lose their competitive edge by waiting for specialised data scientists. The current approach, which relies on hiring an isolated team of data scientists to work on deep learning projects, is delaying projects and putting strain on the talent companies do have,” explains Luka Crnkovic-Friis, Co-Founder and CEO at Peltarion. “In order to solve the deep learning skills gap, we need to make use of transferable talent that can be found right under companies’ noses. Deep learning will only reach its true potential if we get more people from different areas of the business using it, taking pressure off data scientists and allowing projects to progress.”

Less than half (48%) of respondents said they currently employ data scientists who can create deep learning models, compared to 94% who have data scientists who can create other machine learning models. This shortage is having a direct impact on teams: 93% of AI decision-makers say their data scientists are overworked to some extent because they believe there is no one else who can share the workload. However, with the right tools, others can make a serious impact on AI projects.

“Organisations need to move projects forward by bringing on existing domain experts and investing in tools that will help them input into AI projects. This will reduce the strain on data scientists and lower deep learning’s barrier to entry,” concludes Crnkovic-Friis. “We need to make deep learning more affordable and accessible to all by reducing its complexity. By operationalising deep learning to make it more scalable, affordable and understandable, organisations can put themselves on the fast track and use deep learning to optimise processes, create new products and add direct value to the business.”