Automated Fascism: Backdoor Insights from … Skateboarding

Kent Dahlgren
Feb 9, 2022

I learned a lot about local “politics” as a skateboard advocate, beginning in the mid-1980s; skateboarding had been my primary hobby since high school.

[Image: The 17-year-old victim of a June 8, 2021 beating by four Glendale police officers inside a Dick’s Sporting Goods store, who suffered bruises and injuries to the head. The boy was kicked in the head because he matched the description of a suspect.]

As it turns out, the issues weren’t really just about skateboard advocacy; the challenges we faced ultimately had more to do with what I call economic apartheid.

Apartheid, in its broader dictionary sense, is defined as enforced segregation on grounds other than race.

Of course there are other forms of apartheid (political, social, religious, etc.), but for now let’s focus upon the economic form and consider the implications of additional layers as this article works through some realistic scenarios.

As skaters, we wanted a safe and legal place to skate, because the most common cause of skateboarding deaths is being hit by an automobile, and collisions with automobiles are also the most common reason skateboarders are hospitalized.

Additionally, skateboarding is widely criminalized, and frankly skaters would like an opportunity to get exercise without being chased by cops and private security.

One might assume that it would be relatively simple to persuade policymakers and members of the community that a safe and legal place to skate is a good idea. After all, we’re talking about skateboarders, right? Just a bunch of kids.

Some quick demographics:

About one in six American youths rides a skateboard; about a third of them skate every day. The average age is about 14, but most start skateboarding at about nine and stop when they enter adulthood.

Alas, it proved a significant challenge to rally the community and persuade the “powers that be,” and their aligned resistance to a safe alternative was a truly sobering and radicalizing experience, because the system revealed it has very little interest in those who can’t afford to be in public spaces.

Ergo, my insistence that we live within a society that qualifies as an economic apartheid.

Anti-skateboarding (and anti-houseless) ordinances do a decent job of illustrating how far municipalities will go to keep “economic undesirables” out of public spaces, and these exclusionary policies can get weirdly political in a hurry, but again: let’s stick with economic for the time being.

Let’s allow Frank Keel (former spokesman for Philadelphia Mayor John Street) to explain what’s really behind anti-skateboarding ordinances:

“…there is not one economic impact study nor any economic evidence that these skateboarding folks bring any money into city coffers.”

And there you have it.

Why seek context from the former spokesman for Philadelphia’s mayor? Because the mayor’s office sought to criminalize skateboarding on behalf of its true constituents: businesses.

Within our culture, public spaces are reserved to those who can afford to be there, and “loitering” is the crime of existing without being a consumer.

Ergo, my belief that we live within an economic apartheid.

As expected, municipalities (and corporate entities that manage public spaces such as open-air malls) define anti-skateboarding (and anti-houseless) policies to protect their customers/businesses, and the slope gets very slippery in a hurry.

Imagine an open-air mall finding it advantageous to ban anyone wearing a “Make America Great Again” hat, under the pretext that these people are predisposed towards violence and/or conflict.

Just hold onto that thought for a second, because it’s not that far-fetched.

Let’s consider facial-recognition vendor Odin, which provides their service to law enforcement to identify those who are houseless. Odin lists “poor hygiene (use street as a restroom),” “reduction of property values,” and “degradation of a city’s culture” as problems related to homelessness.

Consider that municipalities are willing to invest millions of dollars in facial-recognition technology to identify and remove houseless persons before they will invest in alternatives to houselessness. The same is true of skateboarding and other “deviant” activities that run counter to our society’s desire to optimize living conditions for those who can afford to be present.

Let’s consider a scenario:

Let’s say you’ve been a homeowner in a city (such as Austin, Texas) for decades, but as the city grows the tax burden becomes heavier, and before long you are priced out of your home, and end up moving somewhere more affordable.

One weekend you come back to town to visit your grandchildren, and you go to The Domain (a high end open-air “mall” in North Austin), and you find yourself encouraged by security to move along.

Why?

Private security says it’s because the children aren’t allowed to play in the small fountain unless the parents have patronized at least one local vendor, and maybe you don’t want to spend $7 for an ice cream cone.

Maybe you can’t afford it?

That’s a simple example, but the example extrapolates in a dystopian manner, and quickly.

Allow me to provide some background.

Prior to my ejection from (and effective blacklisting within) the information security / anti-fraud industries, I was responsible for machine learning and “device fingerprinting” at a company called iovation (now owned by TransUnion), which actively monitors about seven billion devices, including all Amazon transactions, all Citibank transactions, and those of thousands of other vendors.

And this gets very weird. For example: if your profile has been reported on dating sites, porn sites, or social media platforms, it’s actually less likely you’ll get ad hoc work through Upwork, or credit transactions approved through Citibank, Chase, or Amazon.

This is done through the use of a third-party “consortium,” allowing vendors to crowd-source their anti-fraud work, which means that one vendor’s “tagging” of a device (such as a phone) causes it to be assessed as higher risk by another vendor.

These thousands of vendors shared this data within a vast graph-based database containing billions of transactions, from thousands of vendors, spanning many years.

These data served as a rich playground for certain anti-fraud algorithms, often more commonly referred to as “AI.”
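To make the mechanics concrete, here’s a minimal sketch of how such a consortium works in principle. The class, tags, and weights below are my own illustrative inventions, not iovation’s actual schema:

```python
# Minimal sketch of a cross-vendor "consortium": one vendor's fraud tag on a
# device raises the risk score every other member sees for that same device.
# All names, tags, and weights are hypothetical illustrations.

from collections import defaultdict

class Consortium:
    def __init__(self):
        # device_id -> list of (vendor, tag) evidence reports
        self.evidence = defaultdict(list)

    def report(self, vendor: str, device_id: str, tag: str):
        """A member vendor tags a device (e.g., 'chargeback', 'fake_profile')."""
        self.evidence[device_id].append((vendor, tag))

    def risk_score(self, device_id: str) -> float:
        """Crude shared score: every tag from any member adds risk."""
        weights = {"chargeback": 0.5, "fake_profile": 0.3, "tos_violation": 0.2}
        score = sum(weights.get(tag, 0.1) for _, tag in self.evidence[device_id])
        return min(score, 1.0)

consortium = Consortium()
consortium.report("dating-site.example", "device-123", "fake_profile")

# A completely different vendor now sees elevated risk for the same phone:
print(consortium.risk_score("device-123"))  # 0.3, before any purchase attempt
```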

In 2017, just months prior to my weirdly-contrived departure from the anti-fraud industry, I was asked to speak at ISC West on the use of predictive analytics in physical security: technology that would allow retailers to automatically assess the creditworthiness of people walking through open-air shopping malls, so the poor ones can be encouraged to leave by private security. The basis of economic apartheid.

Again, hold this thought, because I’ll provide more in a moment.

Just prior to my tenure in the anti-fraud industry, I worked within the adjacent “information security” industry, where I managed and designed security infrastructure used by almost every critical sector in the modern world: finance, banking, insurance, retail, ecommerce, energy, manufacturing, nuclear.

As I mentioned, in 2017 I was asked to speak at ISC West, an annual global summit for access control systems. There’s a lot of cop stuff there, a lot of AI tech, but it’s mostly about secure access control.

My presentation pertained to how predictive analytics is being adopted by those interested in physical security — kind of a crossing of a technological chasm, if you will, between predictive AI and mall security.

So let us return to our scenario at the outdoor mall.

Let’s assume you like to shop at Nordstrom, which is a moderately high-end vendor. They have physical stores and an on-line presence.

Nordstrom (online) already uses predictive analytics to assess the risk that someone will engage in fraud, through a variety of technologies that enable them to accept a phone or a computer as a proxy for a person.

There are anti-fraud “policies” in effect that determine which transactions are allowed, which are blocked, and which are automatically redirected to manual review, based upon a variety of measurements and factors.
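As a rough illustration, such a policy might look something like the sketch below; the thresholds and field names are hypothetical, not any vendor’s actual rules:

```python
# Hypothetical sketch of an anti-fraud "policy": route each transaction to
# allow, review, or block based on a device risk score and simple factors.

def evaluate_transaction(risk_score: float, amount: float, new_device: bool) -> str:
    if risk_score > 0.8:
        return "block"                      # high-risk device: reject outright
    if risk_score > 0.4 or (new_device and amount > 500):
        return "review"                     # queue for a human analyst
    return "allow"

print(evaluate_transaction(risk_score=0.3, amount=49.99, new_device=False))  # allow
print(evaluate_transaction(risk_score=0.5, amount=49.99, new_device=False))  # review
```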

I know this because I was responsible for these technologies, which is why I was asked to speak at ISC West in the first place.

Each device (such as a computer, a tablet, or a phone) is harvested for thousands of very intimate details, and the device itself serves as a proxy for a person. In other words: your phone represents you in the automated assessment of your fitness as a customer.

There are thousands of automation scripts that assess each device for relative risk, and the monitoring gets very creepy.
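To give a flavor of what’s being harvested, here is a hypothetical sliver of such a fingerprint; real systems collect far more, and every field and value below is an illustrative assumption, not iovation’s actual schema:

```python
# A hypothetical sliver of a device fingerprint: signals a page or app can
# read without asking.

import hashlib

fingerprint = {
    "user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 15_2 ...)",
    "screen": (390, 844),
    "timezone": "America/Chicago",
    "language": "en-US",
    "fonts_hash": "a91c...",      # hash of installed fonts
    "canvas_hash": "7f3e...",     # hash of a rendered canvas image
    "battery_level": 0.17,        # yes, battery level is often readable
    "ip_network": "203.0.113.0/24",
}

# Hashing only the stable fields yields an identifier that survives cookie
# deletion, which is how a device comes to stand in for a person.
stable_keys = ("user_agent", "screen", "timezone", "language", "fonts_hash", "canvas_hash")
stable = [(k, fingerprint[k]) for k in stable_keys]
device_id = hashlib.sha256(repr(stable).encode()).hexdigest()[:16]
print(device_id)
```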

For example, many bank applications automatically assess your physical location, so if you visit a different coffee shop on your drive to work one morning, the bank might assess your attempted credit card transaction as potentially fraudulent.

There are thousands of these kinds of automated scripts, and they offer a surprising insight into how much the system is biased against “bad customers” (the poor).

Did you know that low battery life (phone) is statistically associated with higher rates of fraud?

In other words: if your phone has a low battery, AND, if you try to buy something on-line, greater scrutiny is placed upon your attempted transaction. The transaction may be reviewed, or even blocked, depending upon your “risk score.”
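A toy version of how such weak signals might be combined, with signals and weights invented purely for illustration:

```python
# Invented signals and weights: how weak, creepy observations might be
# combined into the opaque risk score described here.

signals = {
    # signal name:            (fired?, weight)
    "battery_below_20pct":    (True,  0.10),  # correlated with fraud, per the text
    "unfamiliar_location":    (True,  0.25),  # the new coffee shop on the commute
    "purchase_at_3am":        (False, 0.15),
    "device_recently_reset":  (False, 0.30),
}

risk = sum(weight for fired, weight in signals.values() if fired)
print(f"risk score: {risk:.2f}")  # 0.35, enough to trigger extra scrutiny
```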

By the way, the “risk score” is not your credit score, and you have no way to see your score. You are not entitled to review and challenge this score.

Because the device (your phone, for example) serves as a proxy to a person, these vendors claim they are satisfying the letter of regulatory requirements regarding privacy, and this is why iovation has over 20 years of transaction history, without any requirement to provide transparency into their system archive.

This “risk score” is maintained by one of a handful of anti-fraud providers whose services are used across thousands of vendors; as I said, the one I worked for was iovation, now owned by TransUnion, and member vendors share risk scores and other information across a backchannel called a “consortium.”

So, if your account was flagged by someone on a dating site, your risk score increases on Amazon.com, or Nordstrom, or thousands of other vendors (including social media platforms).

Here’s another example:

If you use a wifi network that’s been used for accessing the Tor network, your device is now given a higher risk score, even if you have no direct connection to the person who used Tor.

Or, maybe you use your friend’s wifi network, and he’s an activist. Suddenly, due to what’s called “link associations,” your device is now associated with a risk-taking profile.
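Here’s a minimal sketch of how risk might propagate across such link associations; the graph structure and decay factor are my own assumptions:

```python
# Sketch of "link association": devices and networks form a graph, and risk
# leaks across edges. Using a friend's wifi links your device to every tag
# on that network. All values here are hypothetical.

from collections import defaultdict

links = defaultdict(set)       # node -> connected nodes
base_risk = defaultdict(float)

def connect(a, b):
    links[a].add(b)
    links[b].add(a)

connect("your-phone", "friends-wifi")
connect("activists-laptop", "friends-wifi")
base_risk["activists-laptop"] = 0.9   # e.g., Tor use, ToS flags

def associated_risk(node, decay=0.5):
    """Your own risk plus a decayed share of your neighbors' risk."""
    one_hop = max((base_risk[n] for n in links[node]), default=0.0)
    two_hop = max((base_risk[m] for n in links[node] for m in links[n]), default=0.0)
    return max(base_risk[node], decay * one_hop, decay * decay * two_hop)

print(associated_risk("your-phone"))  # 0.225: risky by association alone
```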

There are thousands of such measures, which is how anti-fraud vendors come to boast such high success rates while also delivering egregiously high “false positive” rates, which result in slowed or even canceled transactions.
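The arithmetic behind that false-positive claim is worth seeing for yourself. At a low fraud base rate, even an accurate classifier flags mostly innocent people; the numbers here are illustrative, not any vendor’s published metrics:

```python
# Illustrative base-rate arithmetic: a "95% accurate" fraud model still
# flags overwhelmingly honest transactions when fraud is rare.

transactions = 1_000_000
fraud_rate = 0.001          # 1 in 1,000 transactions is actually fraud
sensitivity = 0.95          # the model catches 95% of real fraud
false_positive_rate = 0.02  # and wrongly flags 2% of honest traffic

fraud = transactions * fraud_rate                                # 1,000
caught = fraud * sensitivity                                     # 950
wrongly_flagged = (transactions - fraud) * false_positive_rate   # 19,980

share_innocent = wrongly_flagged / (wrongly_flagged + caught)
print(f"{share_innocent:.0%} of flagged transactions were honest")  # ~95%
```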

What if your friend’s wifi network is considered “risky” because his activism has resulted in a high number of reported violations of a social media platform’s terms of service?

Suddenly, your device is considered risky, and because your device serves as a proxy to you, you are now considered risky.

So when you try and use your debit or credit card and the transaction is blocked? I just described what likely happens behind the scenes.

So. Let’s consider how this might look in the not-too-distant future.

What happens when the egregiously high false positive rates of on-line anti-fraud meet physical security in an open-air mall, such as the Domain?

What happens if you are approached by private security because they have been informed that an unfavorable potential customer is loitering in a common area reserved for good customers?

Economic apartheid, potentially magnified by certain political biases within some social media platforms.

In 2017, the attendees of ISC West were sent to Las Vegas on behalf of their employers (often a retail vendor, an institution, or even law enforcement within a municipality). They wanted to know what options might be available to help “AI” (often just machine learning) create more intelligent solutions, to use the parlance of marketing.

Returning to my example about the open-air mall, it’s therefore not impossible to imagine walking into a mall and being challenged almost immediately by mall security, because the anti-fraud team was informed that your device (and, by proxy, you) has been detected near a vendor that flags it as high risk.

Before you dismiss this as fantasy, consider that a huge number of vendors automatically track your location, multiple times per minute, using various geolocation measurements.

The Bank of America mobile app uses a technology its internal team created and later spun off as a separate company; that technology is now used by thousands of vendors.

It uses various signals to automatically detect whether you’ve deviated from your normal travel patterns, because deviations from the norm are assessed as possible indicators that your device has been stolen, which is why a transaction is more likely to be challenged if you’re visiting a new coffee shop on your way to work.
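I can’t speak to Bank of America’s actual implementation, but the general idea of a travel-pattern deviation check looks something like this; the distance math is standard, while the threshold and locations are illustrative assumptions:

```python
# Sketch of "deviation from normal travel patterns": flag a transaction when
# the device is far from every location in its history.

from math import radians, sin, cos, asin, sqrt

def km_between(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

usual_spots = [(30.2672, -97.7431), (30.4021, -97.7265)]  # e.g., home and office in Austin

def location_is_unusual(current, history=usual_spots, threshold_km=2.0):
    return all(km_between(current, spot) > threshold_km for spot in history)

new_coffee_shop = (30.3072, -97.7560)
print(location_is_unusual(new_coffee_shop))  # True -> transaction gets extra scrutiny
```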

How broadly are these systems used?

As shared, the company I worked for regularly monitored seven billion devices, more than half of them mobile.

Creepy, huh? Well, it gets worse:

I should not have to tell you how dystopian this becomes when enforced by police who have become militarized.

Did you know that if local and state law enforcement in the United States was its own country, theirs would be the third-largest military on earth, with almost twice the resources of the entire Russian military?

I’m not just talking about equipment; it’s also combat-ready software. More on this in a moment.

Let’s assume a scenario.

Let’s assume you’re a veteran, and are retired. You lived in Austin for decades, but decided to sell your family home and move north to make every dollar count.

You come into town to visit your grandchildren and take them to visit the small fountain at the Domain, and within minutes are challenged by mall security, who starts asking you questions you don’t feel like answering. You get a little surly, and they call the police.

And increasingly, the police are using “battle tested” software originally designed in the context of warfare, nowadays sold as “pre-crime” solutions.

Did you know that police departments have been purchasing systems such as those sold by Palantir Technologies? These systems were created to automatically detect evidence of insurgencies via social media indicators, and have been touted as highly successful in detecting and destroying terrorist cells.

For those who enjoy war movies, what I just talked about may sound cool, but when the same technology is deployed in cities, the policies end up looking a little weird.

For example, let’s say that you’re the kind of person who had previously joined Alex Jones in a protest at the Texas Capitol building, and although you aren’t aware of this, the police system has you identified as a possible threat, which gives them permission to be a little more aggressive than usual in dealing with your disruption at the Domain.

In other words: by getting surly with the private mall cops, you earn a visit from policemen with their army guns, and the whole thing was triggered because you’re tagged as a less-than-favorable customer (thanks to automated fraud detection that uses a device as a proxy for a person).

Sounds awful, doesn’t it?

Loitering is the crime of existing without being a consumer.

And:

Apartheid is enforced segregation on various grounds, inclusive but not limited to race.

Anti-loitering ordinances (“NO SKATEBOARDING”) are great examples of economic apartheid, as are anti-camping ordinances.

When economic apartheid is enforced by police power, it’s fascism.

Quoting the dictionary definition, which has remained consistent for the past 80 years:

Fascism is a form of far-right, authoritarian ultra-nationalism characterized by dictatorial power, forcible suppression of opposition, and strong regimentation of society and of the economy.

Consider me a canary in a coal mine on this issue, if you will; if someone like me is assessed as a bad guy, there’s a real problem.

What can you do about it?

First, you can become a little better educated about how much of this information you allow these vendors to use against you, under the auspices of protecting you. The Electronic Frontier Foundation does a good job of providing free utilities, such as Privacy Badger, which automatically blocks many attempts to harvest personal information from your device. Another alternative is to use more privacy-centric tools, such as those recommended by the EFF and similar organizations.

Second, you can press your technology vendors to do a better job of protecting your privacy in the first place. In April 2021, Apple enabled iPhone users to choose which apps are allowed to track their behavior across other apps, and Facebook announced that this would result in $10 billion in lost revenue.

Finally, you can be more proactive in your activism, ensuring our communities do not become places that are only for “good customers,” because the slope gets slippery in a hurry, and it won’t take long before you end up on the other side of the rope, within “public spaces” that are exclusively reserved for those who can afford to be there.

You can lobby your representative to support legislation that affords you better access and visibility into how “anti-fraud” and “security” products are using your information against you, in support of “good customers.”

--

Kent Dahlgren

Product management fix-it guy. World-famous people skills. Extremely small hands. (edit) marketing lady says I’m also supposed to say “CEO of software company”