TL;DR:
Programmatic ads can scale your reach or tank your reputation if brand safety fails. We’ve seen numerous brands dragged into scandals where their ads landed on extremist, illegal, or NSFW content. With GARM gone, brand safety is now a decentralized game: every advertiser, DSP, and publisher has to protect itself.
The risks go beyond ugly adjacencies: bots, domain spoofing, and malvertising silently eat budgets and trust. The good news is that you can fight back. Layered defenses (pre-bid filters, accredited fraud detection, post-bid verification, contextual AI, transparent supply paths, and trusted DSP partnerships like Epom + Pixalate + Lotame) keep your ads visible to real people in safe, relevant contexts.
Nestlé ads on extremist and child-exploitation videos? A Coca-Cola placement under a viral white supremacist post? Global supply-side platforms leading advertisers to hardcore adult material? That’s not random gibberish; that’s what happens when brand safety in programmatic doesn’t work.
Despite how revolutionary programmatic advertising has been, it still has its share of problems. Today, we won’t discuss broken supply chains or the complexities of video ads. Instead, let’s dive into the problem that unites all of ad tech: brand safety in advertising and programmatic ad fraud.
In this piece, we’ll cover what counts as a threat to brand safety, the loudest cases of its violation, additional risks to programmatic campaigns, and, most importantly, what you can do to avoid all of them.
What Is Brand Safety in Programmatic?
Let’s start with the basic definitions.
Brand safety in programmatic refers to the policies, tools, and practices that ensure a brand’s digital ads do not appear in places that could harm the brand’s reputation.
Okay, but what is the point?
According to the Interactive Advertising Bureau (IAB), brand safety measures in online advertising aim to guarantee a campaign “will not appear next to any content that is illegal (e.g., drug-related content) or dangerous (e.g., pornography or violence)”, thereby protecting the brand’s image.
Who Defines Brand Safety Measures?
Early on, brands and their agencies set basic guidelines (e.g., blacklists of pornographic or profanity-heavy sites). Industry bodies then stepped in to standardize definitions of inappropriate content.
Organizations like the 4A’s and the World Federation of Advertisers (WFA) (through its now-former Global Alliance for Responsible Media, GARM) established common categories of harmful content and a “brand safety floor.”
These categories, such as adult content, hate speech, extreme violence, piracy, and so on, delineate what content is off-limits for any advertising. Verification tech then refined these definitions and built algorithms to assure brands that such content is detected automatically.
The rules aren’t universal, obviously, as brand advertising, brand values, and brand messaging differ widely across the industry. You won’t see a casino ad in your TikTok feed, but you’ll easily see four of them while browsing NSFW content during work hours.
That’s why the paradigm has evolved into two distinct definitions: “brand safety” and “brand suitability.”
- Brand Safety is a broad, industry-wide standard, a baseline of universally unsafe content that virtually all brands want to avoid (for example, no reputable brand wants their ad next to a video promoting terrorism or a site distributing malware).
- Brand Suitability strategy goes a step further, tailoring the content adjacency to each brand’s specific values and risk tolerance. As Andriy Liulko, CSO at Epom, put it:
“Brand safety today is table stakes: it’s about keeping your ads away from toxic content. Brand suitability is different: it’s about putting your advertising efforts in the right context. Safety protects your ad spend, suitability makes it relevant. One keeps you out of trouble, the other makes sure your dollars actually work.”
In other words, beyond avoiding the worst-case scenarios, advertisers now calibrate suitability criteria unique to them. For example, a luxury fashion brand might deem gossip tabloid sites or low-brow memes as unsuitable (even if they aren’t “unsafe” per se), whereas an edgy entertainment brand might be comfortable appearing alongside pop culture gossip but not on political news.
Still not convinced of the importance of brand safety?
Watch how a simple fix to security gave a second life to Epom’s partner
Discover here

Why Does Centralized Regulation Hardly Ensure Brand Safety?
Our attentive readers are already asking themselves, “Isn’t GARM dead?”

Yeah, it is. GARM (a coalition under WFA formed in 2019) was instrumental in setting a global framework for brand safety and suitability, defining 11 content categories and risk levels. Its Safety Floor identified content that should never be monetized (like child abuse, graphic self-harm, hardcore pornography), and provided suitability tiers for sensitive topics (like tragedy, firearms, political discourse) that brands could choose to avoid or approach with caution.
What happened then?
The landscape shifted in 2024, when GARM was discontinued amid controversy. Long story short, Elon Musk’s X sued the WFA and GARM participants, alleging an “advertiser boycott” conspiracy, and the WFA shut GARM down shortly afterward to avoid further “distraction” and resource drain.
How Did Brand Safety Change? [Loudest Controversies in Digital Advertising]
With GARM gone, brand safety governance is becoming more decentralized. Responsibility for safe online advertising now falls on individual platforms, ad tech companies, and brands. Advertisers decide for themselves how to protect their ad revenue and brand identity, and how to deal with ad fraud and programmatic scams.
We would typically blame Mr. Musk for once again lobbying his own interests, but that’s not the whole story. You see, while GARM worked in theory, it was not a panacea for the industry. The loudest brand safety issues and related scandals either happened while GARM was alive and well or were too global for it to prevent.
These include the cases we opened with: Nestlé ads running against exploitative videos, Coca-Cola placements next to extremist posts, and supply-side platforms steering advertisers toward hardcore adult content.
The point is this: each scandal cost the industry consumer trust, but each one also pushed it to fix its glaring issues.
No matter how slow ad tech and traditional advertising move, brands didn’t just sit still; they learned from their mistakes and got better. The thing to understand is that the advertisers and media agencies that came out on top of each crisis did it by themselves.
Basically, no one will help you if you can’t help yourself. That’s why the steps to improving brand safety we suggest below all rely on your own active measures.
What Are the Other Brand Safety Risks and How to Deal with Them?
Before we get to the main course, there are a few other issues you’ll want to pay attention to as well.
Pro tip #1: Use automated brand safety scoring.
This anti-fraud tool assigns brand safety risk scores to domains and apps based on historical data, content category, and ad visibility. This lets you automatically avoid placements that could hurt your brand or performance.
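To make the idea concrete, here is a minimal, purely illustrative sketch of how such a score could be assembled from the signals above (historical invalid traffic, viewability, content category). The weights, thresholds, and field names are hypothetical and not any vendor’s actual model.

```python
# Illustrative only: a toy brand safety score combining the signals mentioned above.
# Weights, thresholds, and field names are hypothetical, not any vendor's real model.

RISKY_CATEGORIES = {"adult", "piracy", "hate_speech", "violence"}

def brand_safety_score(domain_stats: dict) -> float:
    """Return a 0-100 risk score (higher = riskier) for a domain or app."""
    ivt_rate = domain_stats.get("historical_ivt_rate", 0.0)   # share of invalid traffic
    viewability = domain_stats.get("avg_viewability", 1.0)    # share of viewable impressions
    category = domain_stats.get("content_category", "unknown")

    score = 0.0
    score += ivt_rate * 60                     # fraud history dominates the score
    score += (1.0 - viewability) * 25          # poor viewability adds risk
    score += 15 if category in RISKY_CATEGORIES else 0
    return round(min(score, 100.0), 1)

def should_block(domain_stats: dict, threshold: float = 40.0) -> bool:
    return brand_safety_score(domain_stats) >= threshold

# Example: a piracy site with 30% invalid traffic and 50% viewability gets blocked.
print(should_block({"historical_ivt_rate": 0.3, "avg_viewability": 0.5, "content_category": "piracy"}))
```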
For example, Epom has collaborated with Pixalate, the leading provider of this capability. By processing over 2 trillion ad events every month, Pixalate identifies new fraud schemes in real time. Such proactive monitoring ensures Epom filters out bad traffic before bids are placed, not after the damage is done.
Security is not the first priority for most DSPs, and that’s where Epom changes the game. Discover how our collaboration with Pixalate and Lotame focuses on brand safety in programmatic.
Level up your ad fraud prevention tools with Epom, Lotame, and Pixalate
Learn how

How to Ensure Brand Safety in Programmatic Advertising [5 Methods]
The first thing to understand is that brand safety protection requires a multi-layered approach. Here are the key strategies, technologies, and tools used today:
Method #1: Pre-Bid Filters and Contextual Targeting
One of the first lines of defense is to stop risky impressions before the ad is ever served (pre-bid). DSPs (demand-side platforms) and exchanges allow buyers to set filters so that bids are only placed on inventory that meets certain safety criteria.
- Keyword and Category Filters:
Basically, you exclude specific content categories with keyword blacklisting. You can upload lists of blocked keywords (e.g., “porn,” “shooting,” “drugs”) or use industry category exclusions (like IAB content categories).
However, this has its limits: keywords alone can’t grasp context. That’s where contextual (who would’ve guessed) targeting comes in handy. In this case, DSPs analyze the full context of the page, and you can, for example, ask to “bid only on pages about sports or travel, and never on pages about tragedy.” A minimal sketch of such a pre-bid filter follows this list.
- Site Whitelists/Blacklists:
This is pretty much the same, but for specific websites. The benefits of whitelisting are clear: you work from a compiled list of trusted publishers. The cons: limited scale and constant maintenance. Blacklists work on a grander scale; for instance, an agency holding company might maintain a blacklist of thousands of sites known for piracy, hate content, or high fraud rates and apply it across all client campaigns.
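As promised above, here is a minimal sketch of a pre-bid filter combining keyword blacklisting, IAB category exclusions, and an optional domain whitelist. The lists and bid-request field names are placeholders for illustration, not any real DSP’s schema.

```python
# A minimal pre-bid filter sketch: block bids on pages whose keywords or IAB category
# hit a blocklist, and (optionally) require an allowlisted domain.
# All lists and bid-request fields below are illustrative placeholders.

BLOCKED_KEYWORDS = {"porn", "shooting", "drugs"}
BLOCKED_IAB_CATEGORIES = {"IAB25", "IAB26"}   # non-standard and illegal content groups
DOMAIN_WHITELIST = {"trustednews.com", "sportsdaily.com"}

def passes_pre_bid_filter(bid_request: dict, use_whitelist: bool = False) -> bool:
    domain = bid_request.get("site_domain", "")
    page_keywords = {kw.lower() for kw in bid_request.get("page_keywords", [])}
    categories = set(bid_request.get("iab_categories", []))

    if use_whitelist and domain not in DOMAIN_WHITELIST:
        return False                          # only bid on pre-approved publishers
    if page_keywords & BLOCKED_KEYWORDS:
        return False                          # page mentions a blocked keyword
    if categories & BLOCKED_IAB_CATEGORIES:
        return False                          # page is classified into a blocked category
    return True

# Example: a clean sports page (IAB17 = Sports) passes the filter.
print(passes_pre_bid_filter({"site_domain": "sportsdaily.com",
                             "page_keywords": ["football", "travel"],
                             "iab_categories": ["IAB17"]}))
```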
Method #2: Ad Fraud Detection and Traffic Quality Verification
Alongside content adjacency, ad fraud prevention is just as important. Bots, fake sites, and other invalid traffic are dangers you can’t afford to ignore. Key elements of protection include:
- MRC-Accredited Measurement:
Choose tools accredited by the Media Rating Council (MRC) for IVT detection across channels, including CTV, which has become a fraud hotspot.
- Post-Bid Monitoring & Viewability:
Not all fraud is caught pre-bid, so verification tags (IAS, DV, Pixalate) track impressions after delivery. These tags report whether ads were viewable, served to humans, or flagged as invalid automated placements.
- Supply Chain Transparency:
Industry standards like ads.txt, app-ads.txt, and sellers.json let buyers confirm if inventory is legitimately offered. By prioritizing direct supply paths and private marketplaces (PMPs), advertisers minimize spoofing, fake clicks, and arbitrage risk.
Since tools like ads.txt became common, large-scale scams (e.g. Methbot) are far harder to pull off.
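For illustration, here is a rough sketch of how a buyer could check ads.txt programmatically: fetch a publisher’s file and confirm that a given (ad system, seller account ID) pair is actually listed. It assumes the `requests` library and the public ads.txt line format; the domains and IDs in the usage example are made up.

```python
# Illustrative ads.txt check: fetch a publisher's /ads.txt and confirm that the seller
# offering its inventory is actually authorized. Lines follow the public ads.txt format:
# ad system domain, publisher account ID, DIRECT/RESELLER, optional certification ID.
import requests

def fetch_ads_txt(publisher_domain: str) -> list[tuple[str, str, str]]:
    url = f"https://{publisher_domain}/ads.txt"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    entries = []
    for line in resp.text.splitlines():
        line = line.split("#", 1)[0].strip()      # drop comments and whitespace
        if not line or "=" in line:               # skip blanks and variables like CONTACT=
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            entries.append((fields[0].lower(), fields[1], fields[2].upper()))
    return entries

def is_authorized(publisher_domain: str, ad_system: str, seller_account_id: str) -> bool:
    """True if the (ad system, account ID) pair appears in the publisher's ads.txt."""
    return any(system == ad_system.lower() and account == seller_account_id
               for system, account, _ in fetch_ads_txt(publisher_domain))

# Example with hypothetical IDs: is this SSP seat allowed to sell example-news.com?
# print(is_authorized("example-news.com", "ssp.example", "pub-12345"))
```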
Method #3: Consider Brand Safety Verification Services
Verification vendors like Moat by Oracle and Zefr offer programmatic brand safety solutions that do most of the heavy lifting for you.
These services typically operate by analyzing each impression (or a sampled set of impressions) and reporting on:
- Was the ad served on a brand-safe page?
- Was the content category as expected?
- Was any potentially harmful content detected on the page?
- Were there fraud signals (e.g., pre-bid IVT segments) or viewability risks?
The value here is that these verifiers are independent and have high expertise. They create standardized “brand safety ratings” for pages or videos and allow brands to customize what should be blocked or flagged.
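As a rough sketch of how a buyer might act on such per-impression reports, the snippet below rolls verification records up by domain and flags domains that repeatedly fail safety or IVT checks. Field names and thresholds are invented for illustration, not any vendor’s actual schema.

```python
# Illustrative post-bid roll-up: aggregate per-impression verification records into
# per-domain stats and flag domains that repeatedly fail safety or fraud checks.
# Field names and thresholds are invented, not a specific vendor's schema.
from collections import defaultdict

def flag_domains(verification_records: list[dict],
                 max_unsafe_rate: float = 0.02,
                 max_ivt_rate: float = 0.05) -> set[str]:
    totals = defaultdict(lambda: {"impressions": 0, "unsafe": 0, "ivt": 0})
    for rec in verification_records:
        stats = totals[rec["domain"]]
        stats["impressions"] += 1
        stats["unsafe"] += int(not rec.get("brand_safe", True))
        stats["ivt"] += int(rec.get("invalid_traffic", False))

    flagged = set()
    for domain, stats in totals.items():
        n = stats["impressions"]
        if stats["unsafe"] / n > max_unsafe_rate or stats["ivt"] / n > max_ivt_rate:
            flagged.add(domain)          # candidate for the campaign blocklist
    return flagged

records = [
    {"domain": "goodnews.com", "brand_safe": True, "invalid_traffic": False},
    {"domain": "shadyclicks.net", "brand_safe": False, "invalid_traffic": True},
    {"domain": "shadyclicks.net", "brand_safe": True, "invalid_traffic": True},
]
print(flag_domains(records))   # {'shadyclicks.net'}
```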
However, it’s important to note that verification tools are not 100% perfect. As seen in the 2025 Adalytics investigation, even top vendors missed some extremely problematic placements. The reason is the cat-and-mouse nature of the problem: bad actors keep finding ways to evade detection.
Method #4: AI and Machine Learning for Content Analysis
Okay, what if blocklists, keyword filters, and even dedicated agencies don't work? Now it’s AI’s turn.
With programmatic scale, brands now use machine learning models to analyze text, audio, and video in real time, identifying both unsafe and contextually unsuitable placements.
Take Natural Language Processing (NLP), for example. It parses articles and transcripts, distinguishing “a gun shot at a concert” from “a photo shoot in Milan.”
Sentiment analysis gauges whether content is neutral, positive, or negative, allowing advertisers to avoid potentially harmful or emotional ad slots.
AI’s adaptive learning also means models update as new slang, memes, or harmful trends emerge, keeping filters aligned with reality. In short, AI shifts brand safety from crude blocking to context-aware decision-making at scale.
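As a rough illustration of this kind of context-aware classification (not how any specific verification vendor actually does it), the sketch below uses an off-the-shelf zero-shot model from the Hugging Face transformers library to label page text against a handful of made-up suitability categories.

```python
# Rough illustration of context-aware classification with an off-the-shelf zero-shot
# model (Hugging Face transformers). Real verification vendors use proprietary models;
# the labels and the unsuitability threshold here are arbitrary.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

SUITABILITY_LABELS = ["violence or tragedy", "adult content", "sports", "travel", "fashion"]
UNSUITABLE = {"violence or tragedy", "adult content"}

def is_contextually_suitable(page_text: str, threshold: float = 0.5) -> bool:
    result = classifier(page_text, candidate_labels=SUITABILITY_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    return not (top_label in UNSUITABLE and top_score >= threshold)

# "Photo shoot in Milan" should classify as fashion/travel rather than violence,
# even though a naive keyword filter might block it for the word "shoot".
print(is_contextually_suitable("Behind the scenes of a photo shoot at Milan fashion week."))
```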
That's definitely the future of programmatic ecosystem protection, but what about now?
Method #5: Publisher Controls, DSP Transparency, and Partnerships
No matter how much money you pour into tech or how much time you spend on block lists, it's the transparent supply-demand relationships that ensure your placements reach the right audience.
Ad security isn’t only an advertiser’s job; brand safety for publishers and media buying platforms plays a big role. Publishers that know the game increasingly tag and classify their content, suppressing ads on sensitive pages to protect advertisers and ensure ads are placed within relevant content.
On the other side, DSPs also help by exposing domain lists, supply paths, and risk scores, letting their users audit and block problematic sources proactively.
At this point, transparency is a baseline expectation. Buyers want log-level data to trace where impressions ran and how much each path cost. The industry’s push for supply path optimization (SPO) means advertisers prioritize clean, direct routes over opaque reseller chains. This doesn’t just improve ROI — it reduces the chance of ads leaking onto disallowed or fraudulent inventory.
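As a simple illustration of what SPO analysis on log-level data can look like, the sketch below groups win records by supply path and compares hop count and average cost, so long, opaque reseller chains stand out. The log fields are hypothetical, not any specific DSP’s schema.

```python
# Illustrative supply path optimization (SPO) check: group log-level win records by
# supply path, then compare reseller depth and average cost so long chains can be
# deprioritized. Field names are hypothetical, not a specific DSP's log schema.
from collections import defaultdict

def summarize_supply_paths(win_logs: list[dict]) -> list[dict]:
    grouped = defaultdict(lambda: {"impressions": 0, "spend": 0.0, "hops": 0})
    for log in win_logs:
        path = " > ".join(log["supply_path"])     # e.g. publisher > SSP > reseller
        grouped[path]["impressions"] += 1
        grouped[path]["spend"] += log["price"]
        grouped[path]["hops"] = len(log["supply_path"])

    summary = []
    for path, g in grouped.items():
        summary.append({
            "path": path,
            "hops": g["hops"],
            "avg_cost": round(g["spend"] / g["impressions"], 4),
            "direct": g["hops"] <= 2,             # publisher plus one SSP
        })
    # Shortest, cheapest paths first: the candidates worth prioritizing.
    return sorted(summary, key=lambda s: (s["hops"], s["avg_cost"]))

logs = [
    {"supply_path": ["sportsdaily.com", "ssp-a"], "price": 0.004},
    {"supply_path": ["sportsdaily.com", "ssp-b", "reseller-x", "reseller-y"], "price": 0.006},
]
for row in summarize_supply_paths(logs):
    print(row)
```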
Finally, partnerships matter. DSPs that integrate fraud detection and contextual intelligence (Lotame, IAS, DV) give advertisers out-of-the-box protection without complex setups. For sensitive sectors like iGaming, crypto, or pharma, where regulations and scrutiny are higher, these integrations provide both compliance and confidence.
Platforms that offer built-in, always-on brand safety are the ones advertisers increasingly trust, since they reduce risk without sacrificing campaign scale.
And when it comes to safe digital advertising, Epom is the name of the game. Our white-label DSP’s security is constantly evolving to accommodate all the chaos of the advertising industry.
Every unsafe impression is one too many. Don’t risk your brand safety!
Try Epom’s fraud-free DSP now

FAQ
- What is a brand safety strategy in programmatic advertising?
A brand safety strategy is the set of rules, filters, and tech you apply to make sure your ads don’t land next to content that could potentially harm your reputation. It’s about more than blocking the worst stuff; it’s also about making your campaigns brand suitable for your audience.
- Why do ad placements matter so much?
Because where you place ads defines how people perceive your brand. Even if you’ve secured the top ad slot on a site, if that site is toxic, you lose. That’s why media agencies obsess over contextual ads and suitability controls: to keep brands out of trouble.
- How does ad fraud sneak in?
Fraudsters use tricks like ad stacking or bot schemes that generate fake clicks mimicking human behavior. On paper, performance looks great, but no real users ever see the ads. This is why demand-side platforms (DSPs) must integrate fraud detection to protect budgets.
- Does this include Google Ads?
Yes. Programmatic advertising makes use of Google Ads, DSPs, and exchanges, and the same risks apply everywhere: if you don’t enforce safety controls, you can place ads in contexts that undermine your brand.