I recently had the pleasure of attending a roundtable discussion at the RSA on the future of "AI and Automation" in business, particularly its impact on so-called "low-skilled" workers. Many ideas were discussed under the Chatham House Rule, and a few of my own didn't fit into the discussion.

A great deal was discussed, but here are a couple of highlights that I noted down:

  • The definition of Artificial Intelligence itself is tricky, and mischaracterising Machine Learning as Artificial Intelligence can raise expectations beyond what is possible or necessary.
  • "Switching costs" can be huge: with technology developing so quickly, organisations can overlook the resource cost of moving from one technology requirement to the next when the next move is already on the horizon.

The overall discussion prompted a few thoughts of my own. Some of these points I made; others weren't appropriate to the conversation or didn't fit the timescale. Deciding which is which is left as an exercise for the reader:

  • A largely artificial workforce, combined with the ever-present threat of ransomware, leads to some very interesting criminal possibilities.
  • With many employees being replaced by automated processes, or actual robots, there is a correspondingly massive increase in the attack surface that any organisation presents to an attacker. It will be much easier to affect or disrupt an organisation when so many more of its resources are on a network and interact directly with many of the existing security systems.
  • With regard to fraud detection, an intriguing difference is that the increasing use of Machine Learning makes the success or failure of fraudulent activity far more predictable than it is with human analysts. Will it therefore be possible for particularly smart organised crime groups to test their efforts in a safe environment before trying them out?
  • Alternatively, will it be possible to steal and/or download fraud detection methodologies and test them offline for weaknesses, so that organised crime can then guarantee the success of real-world efforts against known procedures?
  • It feels like too much of a "cyberpunk" idea, but if artificial intelligence is used by more and more organisations to detect fraudulent activity, or for other assessments that benefit the profitability of a business - e.g. determining insurance premiums - can criminal organisations use their own AI technology to determine how to bypass the AI of the organisations they're attempting to defraud?
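To make the "offline testing" idea concrete, here is a minimal sketch of what probing a stolen fraud-scoring model might look like. Everything in it is hypothetical and deliberately simplistic - the feature names, weights, threshold, and greedy search are my own invented stand-ins, not any real fraud system - but it shows why possession of the model turns evasion into a simple offline search problem:

```python
def fraud_score(txn):
    """Hypothetical linear fraud model: higher score = more suspicious.
    The features and weights are invented for illustration only."""
    weights = {"amount": 0.004, "foreign": 1.0, "night": 0.5, "new_payee": 1.5}
    return sum(weights[k] * txn[k] for k in weights)

THRESHOLD = 3.0  # transactions scoring at or above this are flagged

def evade(txn, step=50):
    """Greedily shrink the transaction amount until the (stolen) model
    no longer flags it. All of this runs offline, against the attacker's
    own copy of the model, never touching the victim's systems."""
    txn = dict(txn)
    while fraud_score(txn) >= THRESHOLD and txn["amount"] > 0:
        txn["amount"] -= step  # e.g. split the fraud into smaller payments
    return txn

original = {"amount": 900, "foreign": 1, "night": 1, "new_payee": 0}
tuned = evade(original)
print(fraud_score(original) >= THRESHOLD)  # True: the naive attempt is flagged
print(fraud_score(tuned) < THRESHOLD)      # True: the tuned attempt slips under
```

A real system would be a far more complex model, but the principle scales: once the scoring function can be queried without consequence, an attacker can iterate until success is guaranteed against the known procedure.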

If you're reading this and thinking that these ideas have been covered before, and that I should read a particular book, article, or author, then please do list them in the comments section below.