Advertiser Content Understanding via LLMs for Google Ads Safety
Format | Journal Article
---|---
Language | English
Published | 09.09.2024
Summary: Ads Content Safety at Google requires classifying billions of ads against Google Ads content policies. Consistent and accurate policy enforcement matters for both advertiser experience and user safety, and it is a challenging problem, so improving it offers significant value to advertisers and users. Inconsistent enforcement increases policy friction and worsens the experience of good advertisers, while bad advertisers exploit the inconsistency by creating many similar ads in the hope that some will slip through our defenses. This study proposes a method that uses Large Language Models (LLMs) to understand an advertiser's intent with respect to content policy violations. We focus on identifying good advertisers to reduce content over-flagging and improve advertiser experience, though the approach can easily be extended to classifying bad advertisers as well. We generate an advertiser content profile from multiple signals: their ads, domains, targeting information, and so on. We then use an LLM to classify the advertiser content profile, relying also on any knowledge the LLM has of the advertiser, their products, or their brand, to judge whether they are likely to violate a given policy. After minimal prompt tuning, our method reached 95% accuracy on a small test set.
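The summary describes a two-step approach: aggregate an advertiser's signals into a single text profile, then prompt an LLM to judge policy-violation likelihood. A minimal sketch of that flow follows; the signal names, profile format, and prompt wording are illustrative assumptions, not the authors' actual pipeline, and the LLM call itself is left abstract.

```python
# Hypothetical sketch of the profile-then-classify approach from the summary.
# Field names ("domains", "targeting", ...) and the prompt text are assumptions.

def build_content_profile(advertiser: dict) -> str:
    """Aggregate multiple advertiser signals (ads, domains, targeting info)
    into one text profile for the LLM, as the summary describes."""
    lines = [f"Advertiser: {advertiser['name']}"]
    lines.append("Domains: " + ", ".join(advertiser.get("domains", [])))
    lines.append("Targeting: " + ", ".join(advertiser.get("targeting", [])))
    for i, ad_text in enumerate(advertiser.get("ads", []), start=1):
        lines.append(f"Ad {i}: {ad_text}")
    return "\n".join(lines)

def build_classification_prompt(profile: str, policy: str) -> str:
    """Prompt asking the LLM whether the advertiser is likely to violate a
    given policy, drawing on any prior knowledge it has of the advertiser."""
    return (
        f"You are reviewing an advertiser against the Google Ads policy: {policy}.\n"
        "Using the profile below and any knowledge you have of this advertiser, "
        "their products, or their brand, answer LIKELY or UNLIKELY to violate "
        "the policy, with a one-sentence reason.\n\n"
        f"{profile}"
    )

# Toy advertiser with made-up signals, for illustration only.
advertiser = {
    "name": "Example Outdoor Gear Co.",
    "domains": ["example-gear.com"],
    "targeting": ["US", "hiking enthusiasts"],
    "ads": ["50% off tents this weekend", "Free shipping on backpacks"],
}
prompt = build_classification_prompt(
    build_content_profile(advertiser), policy="Dangerous Products"
)
print(prompt)  # This string would be sent to the LLM for classification.
```

In practice the prompt would be sent to an LLM and the LIKELY/UNLIKELY answer parsed from its response; the summary's "minimal prompt tuning" would amount to iterating on the instruction text above.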
DOI: 10.48550/arxiv.2409.15343