Security: Say How, Not No
Key Points
- Security teams should focus on “how” to enable safe adoption of new technology rather than simply saying “no,” because outright denial pushes risky behavior underground where it can’t be monitored.
- Acting as a “brake” that controls speed—like high‑performance car brakes that allow fast driving without crashing—makes security an enabler that supports calculated risk and business agility.
- When security becomes the “department of no,” users inevitably find work‑arounds (the “how”), leading to unmanaged, insecure practices that expose the organization to greater risk.
- A concrete example is BYOD: employees bypass security controls by using personal devices and remote‑access tools, introducing unvetted software and viruses into the corporate network.
- To stay effective, security must collaborate with the business, providing controlled pathways for innovation instead of acting as a constant inhibitor.
Sections
- Security Should Enable, Not Block - Security teams need to answer “how” instead of “no,” acting as controlled brakes that guide safe innovation and keep the organization in the loop rather than driving risky behavior underground.
- User Bypasses Forbidden Wireless Policy - The speaker explains how a corporate ban on Wi‑Fi prompted a user to install an unsecured access point, creating a vulnerable network entry, and argues that providing a managed, encrypted hotspot would have mitigated the risk.
- Shadow AI’s Hidden Breach Costs - The speaker explains that shadow AI can add roughly $670,000 to the already $10 million average U.S. data‑breach cost and recommends assessing risks and offering alternatives instead of simply denying AI usage.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=U9Ckc3MecvA](https://www.youtube.com/watch?v=U9Ckc3MecvA) · **Duration:** 00:16:29
Timestamps: [00:00:00](https://www.youtube.com/watch?v=U9Ckc3MecvA&t=0s) Security Should Enable, Not Block · [00:05:51](https://www.youtube.com/watch?v=U9Ckc3MecvA&t=351s) User Bypasses Forbidden Wireless Policy · [00:12:56](https://www.youtube.com/watch?v=U9Ckc3MecvA&t=776s) Shadow AI’s Hidden Breach Costs
Security teams, listen up. Don't say no, say how. Because when you say no, your users say
how, and you are not going to like their answer. Look, I get it. I'm a security guy myself. I know
how risky new tech can be, but I believe it's better to get out in front of this stuff rather
than stick your head in the sand to pretend that when you say no, that's what actually happens,
because it isn't. When you say no, you just drive the behavior underground where you can't monitor
or control it. You're essentially out of the loop at that point. It's a lot better to get on board
the train and have a say in where it's going, than to stand in front of it and just yell stop!
Another way to look at it is through this analogy that I've used in some other videos: why do you
put brakes on a car? So you can stop? No. So you can go really fast. Don't believe me? How fast
would you go in a car that had no brakes or that had really bad brakes? In fact, it turns out the
fastest cars in the world have the best brakes. They have to or otherwise they just crash into
the wall. So, that's the way security should be done: it should be an enabler. It allows the
organization to take calculated risk and do things that they otherwise wouldn't be able to do.
It shouldn't be like a parking brake that's just on all the time and saying no all the time. That
becomes an inhibitor to the business. And if you remain an inhibitor to the business, the business
will go around you. So it's better to be the brakes on the high-performance car so that you
are part of the solution and not part of the problem. Those who fail to learn from history are
destined to repeat it. So what happens? What does history tell us happens when the security
department becomes the department of no? Where they just, for everything that you come up with,
they give you a reason why you can't do that. Well, what happens is the user says how. Let me give you
some examples of how users said how when the security department said no. And you're not going
to like these answers if you're in security. So, let's start with something that we're still
dealing with today, but this one has historically been an issue for quite some time: bring
your own device. So an employee has at home, let's say, a desktop system, a laptop, something
like that. And they decide they're just going to either bring that into the office and connect in,
or maybe they're on a trip, maybe they're on vacation, they still want to get some work done, so
they're going to use that, or they've just got it at home and want to access some files from the
office. So, they just go ahead and do this. And how would they do it? Well, they could use some remote
control software to connect into the corporate network. And now you've got a system which is
not secured. In other words, the kids may be playing games on this thing as well. And it's got
viruses and all kinds of other stuff running on it. And now it's got access into the system. So,
that's what happens. If the company says your system is too insecure, just don't connect. Don't
use your system. Do not bring your own device; we don't allow that. Well, the employee just
figures out how to do it and does it anyway. Now you end up with an insecure system on your
network anyway. So I say, when it comes to BYOD, there are really only two types of organizations: those
who have a good BYOD program, where they know which devices are coming in and
they've put the security controls in place, and those who haven't. But there's not a third group
that doesn't have BYOD. You might outlaw it, but your employees figured out the how and you've got
bring your own device in your environment. Another example, and this one goes back a little further
to when we first started getting mobile phones. A lot of companies would say mobile devices are not
secure enough for you to have your corporate email downloaded into this thing. So we are not
going to allow that; not secure enough. Again, the security department said no. How does the user say
how? Well, you're not going to like this answer either, because what they decided to do in some of
these cases is say, okay, if you won't let me download my corporate email to my phone, which is
where it's really convenient for me to access it when I'm in between meetings and stuff like that,
well, then I'll tell you what I'll do. I will forward from the corporate email server over to a
public email server. Use your favorite personal email service as an example here. And
then from there I'll be able to download to here. So now what you've done, again, in this case,
instead of providing a how mechanism that would have allowed the corporate email to come down to
this device and allow it, instead, the user went with taking the sensitive information, putting it
into a public server, which we have no control and visibility over, and now they're getting it down
here anyway. Again, the organization said no; the user figured out how. And this is much worse
than what it would have been if we had crafted a solution for them in the first place. Let's take a
look at another example. How about bring your own wireless? Well,
we take wireless access pretty much for granted these days because it feels like
everywhere we go, we can walk into a building and just expect there to be a Wi-Fi hotspot. Well, it
hasn't always been the case. In the early days of wireless technology coming out, a lot of companies
didn't deploy it. Why would they not deploy it? Well, their view of it was, we're going to take all
these workstations that we have, and we're going to put them on a LAN of some sort, a local area
network, and we're going to connect all these things up in a hardwired situation. With
that, I can't have a situation where someone is sitting out in the parking lot and
sniffing our network traffic, because that's a risk with wireless. Wireless goes through the air.
So, it's not bounded just to the building itself. Unless you've got some sort of shielding in the
building, and most people don't have that. So, the companies looked at wireless and said these
wireless access points, those are too risky. We are not providing wireless access. So, again, the
security department said no. What did the user say? How. I know how to do that. I can go to my local
electronics store, computer store, whatever. I can buy one of these relatively cheap access points and that
will give me wireless access. And now we have an unsecured access point that has direct connection
into our network, and they just run it off a port that's sitting right there in their office.
So again, the organization, the security department said no; the user said how. The user now puts out
an insecure version of this. What would have been better, again, is if the organization had said, you
know what, we'll give you a secured wireless hotspot and we'll use the right kind of
cryptography and all of those kinds of things to make sure that this is a protected connection.
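A managed hotspot like the one described could be configured along these lines with `hostapd`. This is only a sketch under assumptions the speaker doesn't make: the interface name, SSID, and RADIUS server details are illustrative placeholders, and the talk doesn't name a specific tool.

```ini
# hostapd.conf — sketch of a managed, encrypted corporate access point
# (interface, SSID, and RADIUS values below are hypothetical placeholders)
interface=wlan0
ssid=corp-secure              # hypothetical corporate SSID
wpa=2                         # WPA2 (RSN), not an open or WEP network
wpa_key_mgmt=WPA-EAP          # 802.1X/EAP: per-user authentication
rsn_pairwise=CCMP             # AES-CCMP encryption, not TKIP
ieee8021x=1                   # enable 802.1X port access control
auth_server_addr=10.0.0.5     # hypothetical RADIUS server
auth_server_port=1812
auth_server_shared_secret=changeme
```

The point of the config is the contrast: a cheap rogue access point plugged into an office port typically ships open or with weak defaults, while the managed version authenticates each user against a directory and encrypts traffic in the air.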
Saying no led to a worse situation with the how. Let's go back even further in the way back
machine, and I remember these days: bring your own internet. The internet has not always been
ubiquitous, with everyone having it all the time. In the early days of the internet,
if you wanted to get access, a company might say, okay, this is your workstation right here, and you
have access to our internal network. So that is the intranet.
And we'll give you that level of access. That was not uncommon. But here's this big, bad,
scary external internet. And in this case, companies said, hey, we don't know
what's out there. You shouldn't be on there; just stay away from it. So, in saying no, guess what
the user said? How. How might they do that? Well, these devices here, if that was a laptop, back in
those days, if you wanted to access systems while you were traveling and things like that, they
would have a modem port in there. So there was a modem built into it. You could plug it into an
analog line in your hotel room or whatever and then dial out. So, what if somebody just did that
with the analog line that might still be in their office in those days? Not so common now, but it was
then. So then what happens is a user is connected to the internal network and
simultaneously to the external internet that we were trying to avoid because we thought it was
too risky. Now this workstation has essentially become a router between the two. So the
company didn't want that to happen for sure. That's a worst-case scenario. What they should
have done, instead of saying no, is say how. And the way they could say how is
to say we're going to put the right controls on your system. And some companies would put maybe a
proxy server you had to log in through, or we're going to put a firewall here that's going to do
other types of mitigation. We'll put intrusion detection systems, we'll do the kind of monitoring
that's necessary and educate the users so that they can go on the internet and be safe. We take
all that stuff for granted now. But initially, security department said no to bring your own
device, no to bring your own wireless, no to bring your own internet. And now, eventually, those have
become the norm, because every one of these things, the user figured out how and it resulted in a
massive failure. Okay, now let's move a little closer to the current times and things that we're
dealing with really on a regular basis. The next turn of this crank was essentially
bring your own cloud, where now I can basically say, you know, there are cloud providers
that are out there. It's not expensive. There are apps that are out here in this space. There are
file sharing services out here in this space. If I want to send you a really big file and I don't
want to send it through email because I maybe want to share it with lots of people, well, then I
just upload this to a file sharing system in the cloud, and then all the people that come along
that I want to have access to it can get it. Along with all the people maybe that I didn't intend to
have access will also be able to come and get it. So, that's a big risk. Again, when the cloud first
came out, what did the organization's security department say? Don't use that. No, because we don't
control all of this. This thing is risky and I don't know what's going to happen with it. So, if
we don't provide users a method to do file sharing, then they're just going to figure out how
to do it and they're going to use the cloud anyway because you can outlaw it, but that doesn't
make it stop happening. If people have mobile devices, then it doesn't matter what rules
you're putting in your firewall. That's not stopping those mobile devices from getting out. So,
people went ahead and used these things anyway. What would have been a better solution in this
case? Instead of saying no, say, you know what, we actually have contracted with this other cloud
provider where we have vetted their security and they have a file sharing app, and we want
everybody to use this one. Or, if there's another particular app that we think our employees are
going to use, have them use the approved, authorized version and have them avoid the
version that hasn't been vetted. That's an example of saying how, not just saying no. Now, if we move this
into the future a little more, to where we are right now, the big thing is bring
your own AI. And I've worked with companies that say, you know what, those public chatbots, we know
that they leak data because when you tell them the information, that information then can be used
to train their models and all kinds of bad stuff can happen from this, potentially. So, we're going
to outlaw that. You cannot use these AI systems that are sitting
around out here. It's just too risky for you to be doing this kind of stuff. So, we'll say
no to that. Well, again, you can block it at the firewall. I've got a mobile device, I'll just
access it from that. So, saying no drives the behavior below ground where the security
department now has no control. What would be a better option in this case? The better option
would be to say, you know what, we're going to pick an AI provider and we're going to use their
service because we've vetted what the security is of it. Better still, we're going to build our own
in-house version of this. Now, when I say build, that doesn't mean you have to go out and do all
the model development and design and tuning and all that kind of business, but you could use a
platform from a cloud provider and run it on your own private instance, either in your environment,
on premise, or in a private tenant in a cloud system that you have vetted the security on, and
then tell your employees: do not use that one, use this one. Now they have an option.
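The "use this one, not that one" guidance can be enforced mechanically, for example at an egress proxy. Here is a minimal Python sketch under assumptions the speaker doesn't make: the approved host name is a hypothetical placeholder, and the talk doesn't prescribe any particular enforcement mechanism.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the vetted, company-approved AI endpoints.
# Anything else is treated as an unapproved public chatbot and blocked.
APPROVED_AI_HOSTS = {"ai.internal.example.com"}

def is_approved_ai_endpoint(url: str) -> bool:
    """Return True only if the request targets a vetted AI service."""
    host = urlparse(url).hostname or ""
    return host.lower() in APPROVED_AI_HOSTS

# A proxy using this check would allow the first request and block the second.
print(is_approved_ai_endpoint("https://ai.internal.example.com/v1/chat"))  # True
print(is_approved_ai_endpoint("https://public-chatbot.example/chat"))      # False
```

The design choice mirrors the talk: the control doesn't say a blanket no to AI traffic; it routes users to the sanctioned option while keeping everything else visible and blockable.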
You have said how and not just said no. According to the 2025 IBM
Cost of a Data Breach report, the average cost of a data breach in the US was greater than
10 million dollars. That's a big chunk of change. And if shadow AI was involved ... What's shadow AI? Well, that
basically means the user went ahead and created their own AI. They downloaded their own models,
they ran it in a cloud instance and then maybe put in some of your data. Shadow AI increased the
cost of a data breach by more than 670,000 dollars. So, these are the things
that are costing companies real money. Driving AI behaviors below ground actually ends
up costing you more, because now you have the shadows to deal with. So, all of that stuff just
adds salt to the self-inflicted wound of saying no. So, let's take a look at how you can say how
without giving away the whole store in the process. So, some of the things that you can do:
start off by assessing the risk. Understand what it is that we're actually facing here, rather
than just superstition and fear, uncertainty and doubt, that kind of thing. That big scary
internet, that terrible AI, those infected devices that users
have in their homes ... Go understand what's actually there, and consider the alternatives in the
risk calculation. If we don't provide them something, then they will find their own way. And even though
you thought that internet was scary back in the day, it's a lot scarier if users end up making
your system into a router that's connecting your network in ways that you didn't intend. So, assess the
full picture of the risk. And a lot of organizations tend not to do that. Then, look for
alternatives. See, are there ways that we can do it? Maybe we don't want you to use that particular AI
because there are too many risks with it, but here's another one that you can use instead. Maybe
we don't want you doing file sharing just in general, because now we have a problem with shadow data,
data that's sitting around in all sorts of places that we have no visibility and no control
over, and therefore it's not encrypted, it's potentially leaking out to the world. Find
alternatives. Don't use this file sharing service, use this one that we've approved. Don't use this
AI, use this one that we built for you, or one that we have contracted with. And we understand and
know that it will do what we intend it to do. Another thing we can do is train our users. Make
sure they understand what the risks are, because in many cases they don't. They just look at the
bright, shining object of the new technology and say, isn't this cool? Let me do that. Well, they need to
understand that in some cases, what they're doing is putting not only the company at risk, but
downstream, their jobs at risk. So these are existential threats for everyone involved. And
then, we need to do discovery. We need to discover all the cases where we may have shadow
AI, where we have shadow data, where we have people that are bringing their own devices that have not
been secured. We've got to do a good job of making sure we understand what the full threat space
looks like. So, ultimately, by doing this kind of stuff, you get out in front of the risk and keep
everything above board where you can monitor and control it. In other words, don't say no, say
how. Every chance you get.
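The discovery step described above can be sketched in code. This is only an illustration under assumptions not in the talk: the flagged domains are hypothetical placeholders, and a real deployment would draw its list from a CASB or threat-intelligence feed and read real proxy or DNS logs.

```python
from urllib.parse import urlparse

# Hypothetical public AI services to flag as shadow AI.
SHADOW_AI_HOSTS = {"public-chatbot.example", "free-llm.example"}

def find_shadow_ai(proxy_log_lines):
    """Scan proxy log lines of the form '<user> <url>' and report shadow-AI use."""
    hits = []
    for line in proxy_log_lines:
        user, _, url = line.partition(" ")
        host = (urlparse(url).hostname or "").lower()
        if host in SHADOW_AI_HOSTS:
            hits.append((user, host))
    return hits

log = [
    "alice https://public-chatbot.example/chat",
    "bob https://ai.internal.example.com/v1/chat",  # approved endpoint, not flagged
]
print(find_shadow_ai(log))  # [('alice', 'public-chatbot.example')]
```

Running a scan like this regularly is one concrete way to keep the behavior above board, where it can be monitored, rather than discovering shadow AI only in a breach report.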