Generative AI – The big questions you need to ask & answer right now.

Fast track to exceptional productivity, fun novelty fad or fearsome sign of what’s to come?

One thing’s for certain – Generative AI is here to stay & it’s time to start defining your answers to some of the big questions.

From the promise of self-driving cars to personalised recommendations on streaming platforms, AI has already begun to shape our ideas about what the future may hold. But whilst we’ve all been focusing on how our social media feeds get curated for us or shaking our heads at the fearful sight of acrobatic robots in YouTube videos, Generative AI tools like ChatGPT have very quickly delivered powerful AI capabilities literally right to our fingertips.

In cybersecurity, we talk about not if, but when… and the evolution of artificial intelligence is now at that same tipping point. We may already be approaching the point of no return.

2023 was widely predicted to be the inflection point in the development of AI. So far, this has indeed proved to be the case, with rapid-fire releases of Generative AI tools like OpenAI’s GPT-4 and Google’s Bard, not to mention the fast-approaching general release of Microsoft Copilot.

If AI is already a not if, but when proposition, then when – if not now – are businesses going to ask the big questions about the application and implications of Generative AI for our personal, business and national data, security and safety, and start defining their own answers?

Let’s explore the fast, the fun and the fearsome of Generative AI right now, along with our tips on how you can start to build a platform on which to fail safely and succeed securely on your AI journey.

How did we get here so fast?

Social media commentators love to track the “success” of any new platform by the time it takes to reach 100 million monthly users. Not too long ago we were amazed that newcomer Instagram took only 2 years and 6 months to hit this milestone, compared to the seemingly ubiquitous Twitter, which took a whole 5 years and 5 months. Compare both of these powerhouse platforms with ChatGPT, though, and there is no comparison – ChatGPT took a mere TWO months to attract 100 million monthly users.

How did we get here so fast? The best answer: a combination of human curiosity, easy access, and the lure and wonder of something that seems to make a whole range of our “thinking” tasks easier.

What is Generative AI?

Well, to answer that question, let’s use Generative AI itself – in this case, OpenAI’s ChatGPT. It’s surprisingly easy and fun (if also a little intimidating…).

Prompt: “What is Generative AI?”

ChatGPT: Generative AI, short for Generative Artificial Intelligence, refers to a class of artificial intelligence systems that have the capability to generate content, such as text, images, audio, or other data, in a way that is often indistinguishable from human-created content. These AI systems leverage complex machine learning models to understand patterns and structures in existing data and then use that understanding to create new data.

Ok great, but alongside AI we often hear the terms machine learning and large language models bandied about… so let’s try it again…

Prompt: “What is the connection between machine learning and large language models?”

ChatGPT: Machine learning is a subfield of artificial intelligence (AI) focused on creating algorithms and models that can learn from data and make predictions based on that data. It involves training models to recognize patterns, relationships and structures in data, enabling them to generalise and perform tasks without explicit programming.

Large language models are a specific type of machine learning model designed to work with natural language data, such as text. They use deep learning architectures, often based on neural networks, to process and generate human-like text.
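
If you’re curious what “using” a large language model looks like under the hood, the same kinds of models can also be reached programmatically. Below is a minimal sketch using OpenAI’s Python SDK – an assumption on our part, as is the model name, which changes over time – just to show how little code sits between a prompt and a generated answer.

```python
# Minimal sketch: sending a prompt to a large language model via
# OpenAI's Python SDK (pip install openai). Assumes your API key is
# set in the OPENAI_API_KEY environment variable; the model name is
# illustrative and changes over time.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumption: substitute any current chat model
    messages=[{"role": "user", "content": "What is Generative AI?"}],
)

# The generated answer comes back as ordinary text.
print(response.choices[0].message.content)
```

Note that this is exactly the kind of easy access discussed above – and every prompt you send this way is data leaving your environment.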

Useful, right? Sure, but read on to find out where and why the buyer definitely needs to beware!

What could be so bad about a tool that does the heavy thinking for you?

Nothing, in essence. But just like your flat-pack IKEA furniture, things are a lot easier and safer if you’ve actually found and read the instructions, so you know how to secure that bookcase to the wall, right? With Generative AI right now, there are no safety warnings or guidelines. It’s an iterative, fast-evolving, self-learning platform that has already attracted all kinds of people, groups and even nation states using it for good, bad and everything in between.

Without deep diving into the technical or ethical considerations, as businesses and organisations with a responsibility to our clients, our staff and the broader community, we need to understand where the data we provide to any Generative AI platform goes and how it is used to generate outputs for ourselves and others. By its very nature, machine-learning-driven AI is designed to feed the data you provide – in fact, all our data – back into the platform so the AI can continue to learn and refine its capabilities and output.

  • First, consider how that data is stored, for how long and by whom, as well as who has access to use or even manipulate that data on publicly accessible platforms like ChatGPT. These are all-important questions we should be considering before readily supplying anything personal, sensitive or business-related to a Generative AI platform.
  • Next, think about what capacity, processes and skills you have in place to provide the human governance and oversight required to monitor for erroneous, malicious or antisocial inputs or outputs that may introduce bias contrary to your aims and values. Are you able to QA the outputs to ensure you are not inadvertently relying on incorrect or biased information, or even generating fake news yourself?

To ChatGPT or not to ChatGPT? That is the question…

You might be thinking, well if it’s here to stay, we won’t have a choice, right? Or that if you just don’t use it at all, then surely no harm will come?

Unfortunately, it’s just not that simple. Just as Shadow IT can quickly become a security and compliance nightmare for organisations, the much larger, looming problem of Shadow AI is the cool, fun, but slightly nasty teenage cousin that’s rocked up to the family BBQ.

Shadow IT finds its way into your business when you don’t provide the right solutions for your staff’s roles and tasks, so they simply find alternatives to get the job done. This is how organisations end up with business files sitting in personal Dropbox accounts with little to no security, or with company passwords in personal password vaults – impossible to manage, access or properly secure.

Shadow AI exacerbates this problem in two further ways.

  • Firstly, without a company directive on your position on the use of Generative AI – what’s ok and what’s not – your people will make their own decisions about what’s acceptable (sensitive or not) to feed into a Generative AI tool to get their job done faster. They probably haven’t done their due diligence on how that data is actually used, processed, shared or handled, and you may find after the fact that you’ve exposed your data in ways that break your security and compliance obligations.
  • Secondly, there is the problem of the AI that sits behind the AI. Large Language Models are a subset of machine learning, which in turn is a subset of AI – and any Generative AI tool is powered and operated by many other flavours of AI tools and models under the hood. Not knowing how your data is handled in this broader ecosystem presents its own security and compliance challenge.

How to get started with setting some Generative AI guardrails

  1. Start having the Generative AI conversation now. Look at what Generative AI does, ask your team if they are using it, and promote general awareness of the issues raised in this article.
    • Why not share this article with your team as a first step?
  2. Talk to your management team and your board about getting out ahead of the curve by setting some boundaries around what’s ok and not ok, so that your people can test the limits of what Generative AI can do for them safely and with oversight.
  3. Do the due diligence on data handling and the ecosystem of tools used. You may find it safer and more appropriate to concentrate on in-ecosystem Generative AI tools, like Microsoft Copilot, to keep your data contained within known boundaries. Talk to your Managed IT Service partner for tips on what to look out for, or on how some of the in-ecosystem tools are already beginning to integrate with your existing productivity applications.
  4. Draft and communicate guidelines, rules and best practices that reflect your risk appetite for Generative AI, and what’s ok and not ok for your organisation and your data – one simple technical guardrail is sketched below.
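
To make point 4 concrete, here’s one deliberately simple, illustrative guardrail: a Python sketch that redacts obvious sensitive patterns – the email and phone-number patterns are our own hypothetical examples – before a prompt ever leaves your environment for a Generative AI tool.

```python
import re

# Hypothetical patterns for two common kinds of sensitive data.
# A real redaction policy would need far broader coverage
# (names, account numbers, client identifiers, and so on).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace anything matching a known sensitive pattern before the
    text is sent to an external Generative AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com, mobile +61 400 123 456."
print(redact(prompt))
# -> Draft a reply to [REDACTED EMAIL], mobile [REDACTED PHONE].
```

A guardrail like this won’t catch everything – which is exactly why the human governance and oversight discussed earlier still matters – but it’s a useful, low-cost first boundary to put between your data and any external AI platform.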

If you’d like to find out more about Microsoft 365 or Microsoft Copilot, or to start a discussion on Generative AI in the context of your business, reach out to book a time with one of our IT Strategy Consultants.