From the Front Lines: What’s Happening With AI Today (Part 2)

By Dr. David Bray

Back in March, I posted a video in which Dr. Fredrik Bruhn, CEO of Unibap, and I talked about what was happening on the front lines of AI regarding computer vision and robotic manufacturing. As a follow-up to that video, I’d like to share a link to an article that Bob Gourley, co-founder of OODA and publisher of CTOvision, and I wrote on “Risk Management of AI and Algorithms for Organizations, Societies and Our Digital Future”.

Without diving into the full article (I’ve posted the initial intro and a link to the full article below), the main takeaway is that while there may be great potential with AI, including Machine Learning and Artificial Neural Networks, some caution also is advised. These technologies may incorporate subtle biases or errors that can have serious consequences for individuals and communities. Both policymakers and providers of services using these technologies should take the time to become informed about their strengths, weaknesses, and risks. A lot of what we strive to do with the People-Centered Internet coalition is help community groups, governance boards, and impact-focused leaders consider more meaningful and uplifting approaches to how we choose to use new technologies to benefit both individuals and communities.

1. Risk Management of AI and Algorithms for Organizations, Societies and Our Digital Future

Bob: AI can contribute to mitigating risks in organizations of all sizes. For smaller businesses that will not have their own data experts to field AI solutions, the most likely contribution of AI to risk mitigation will come from selecting security products with AI built in. For example, the old-fashioned anti-virus of years ago that people would put on their desktops has been modernized into next-generation anti-virus and anti-malware solutions that leverage machine learning techniques to detect malicious code. Solutions like these are being used by businesses of all sizes today. The traditional vendors, like Symantec and McAfee, have all improved their products to leverage smarter algorithms, as have many newer firms like Cylance.

Larger organizations can make use of their own data in unique ways by doing things like fielding their own enterprise data hub. That’s where you put all of your data together using a machine-learning platform capability like Cloudera’s foundational data hub, and then run machine learning on top of that yourself. Now, that requires resources, which is why I say that’s for the larger businesses. But once that’s done, you can find evidence of fraud or indications of hacking and malware much faster using machine learning and AI techniques. Many cloud-based risk mitigation capabilities also leverage AI. For example, the threat intelligence provider Recorded Future uses advanced algorithms to surface the most critical information to bring to a firm’s attention. Overall I want to make the point that organizations of all sizes can now benefit from the defensive protections of artificial intelligence.

David: Bob is spot-on that what is happening is the “democratization of AI techniques”: these capabilities are now available even to small companies and startups that previously could not have fielded them without sufficient resources. He also is right about the scaling question. The additional lens I would like to add is thinking about how AI can be used both for what an organization presents externally to the world and for what it does internally. For example, how can you use AI to assess, on an ongoing basis, whether your website or your mobile applications contain risk vulnerabilities?

Threats are always changing. That’s why having the ability to use continuous services to scan what you’re presenting externally as a potential attack surface will be an advantage for companies large and small.

The other lens is to look for abnormal patterns that may be happening internal to your organization. Risk arises from the combination of humans and technologies. Smaller companies can obtain new tools through software as a service, while bigger companies can use boutique tools to look for patterns of life. These tools try to establish the normal patterns of life in your organization, so that if something shows up that doesn’t match those patterns, it’s enough to raise a flag. The overarching goal is to use AI to improve the security and resilience of the organization, both in how it presents itself externally and in how it works internally.
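To make the “patterns of life” idea concrete, here is a minimal sketch of that kind of baseline-and-flag check. It is purely illustrative (the commercial tools described above use far richer models): the login counts and the three-standard-deviation threshold are assumptions for the example, not anything from the article.

```python
import statistics

# Hypothetical baseline: hourly login counts for one internal account
# observed during a "normal" week (illustrative numbers only).
baseline_logins = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 5, 3, 4, 6, 5]

baseline_mean = statistics.mean(baseline_logins)
baseline_stdev = statistics.stdev(baseline_logins)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    from the baseline mean -- a crude 'pattern of life' check."""
    return abs(observed - baseline_mean) > threshold * baseline_stdev

print(is_anomalous(5))   # a typical hour -> False
print(is_anomalous(40))  # a sudden burst of logins -> True
```

The point is the workflow, not the statistics: establish what normal looks like, then raise a flag for a human to review when something falls outside it.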

David: You can think of artificial intelligence as being like a five-year-old who, exposed to enough language data, learns to say, “I’m going to run to school today.” And when you ask the five-year-old, “Well, why did you say it that way as opposed to, ‘To school today I’m going to run,’” which sounds kind of awkward, the five-year-old is going to say, “Well, it’s just because I never heard it said that way.”

The same thing is true for this current third wave of AI, which includes using artificial neural network techniques to provide security and resilience for an organization. It’s looking for things that fit patterns or that fall outside of patterns. It’s not discerning whether the patterns, or the things outside of the patterns, are ethically correct.

[… the full interviews of our discussions are available at …] On 24 June, we co-chaired and spoke at a morning summit focused on AI Governance, Big Data, and Ethics followed by AI Security and Resilience discussions that same afternoon.

2. Going Faster Together: Advancing AI Governance, Big Data, and Ethics Across Sectors and Nations

As another quick update from the front lines of AI, recently Dr. Caryl Bryzmialkiewicz, Assistant Inspector General and Chief Data Officer at the Department of Health and Human Services; Dr. Anthony Scriffignano, Senior Vice President & Chief Data Scientist, Dun & Bradstreet; and I had a chance to do a webinar in advance of the planned 24 June summit.

The webinar was moderated by Bill Valdez, President of the Senior Executives Association. I’ve embedded a preview video below with a link to the full 30-minute discussion.

Coming soon will be a Part 3 update with Dr. Fredrik Bruhn again, as well as colleague Derry Goberdhansingh, founder and CEO of Harper-Paige, on the long-term AI trends they see already presenting themselves today.

This article was first published here.

If you liked this post, you can follow us on Twitter and Facebook for more updates.
