u/EmergencyUpstairs309

▲ 1 r/DontThinkForMe+1 crossposts

Knock knock, TikTok — I want to see, edit, or delete my Algorithm.

As a UX designer, I'm always trying to protect the user. We live in a world run by algorithms. Every time we swipe, like, or click, an invisible algorithm shapes our digital experience. TikTok, for example, curates a perfectly tailored feed based on what it thinks we want. But wouldn’t it be nice if we could actually see how these algorithms work? Better yet, what if we could edit them? Or, in some cases, hit delete?


This isn’t just a fantasy. It’s about taking control of how we’re influenced, from TikTok to any other platform using algorithms to nudge us toward certain content or products.

1. Algorithms Control What You See (And You Don’t Even Know It)

TikTok’s algorithm watches everything: how long you watch a video, what you comment on, and what you like. Then it predicts what you’ll want next. Great, right? Sure, until it overreaches. One day you’re into home decor; the next, it’s all DIY hacks you’ve never shown an interest in.

The issue? We only see the output of this process, not how it works. Wouldn’t it be wonderful to understand why the app decided you suddenly needed 45 cat videos in a row? Algorithm transparency would reveal why it thinks you’re into certain topics, letting you decide if it’s accurate or not.

2. Editing Your Algorithm: Let’s Get Hands-On

Here’s the dream: we don’t just see the algorithm; we can edit it. If TikTok pegs you as a fan of cooking videos just because you looked up one recipe, you should be able to say, “Nah, I’m not a chef,” and remove it from your profile.


3. Delete the Algorithm: Start Fresh

Sometimes, you don’t want to tweak; you want to hit the big red “reset” button. Delete your algorithm. Imagine a fresh start on TikTok. No more assumptions based on that one weird rabbit hole you fell down last year.

This is perfect for people whose feeds don’t reflect their current interests. Why should your digital profile be based on outdated preferences? You would gain control over your online persona by resetting your preferences.
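The see-edit-delete idea is easy to sketch in code. Below is a toy Python model of a user-ownable interest profile; every name in it is hypothetical, and real recommender systems are vastly more complex than a dictionary of weights. The point is the user-facing contract, not the implementation.

```python
# Toy model of a user-ownable recommendation profile: see, edit, delete.
# Hypothetical names throughout; no platform's real API is implied.

class InterestProfile:
    def __init__(self):
        self.weights = {}  # topic -> inferred interest score

    def observe(self, topic, watch_seconds):
        # The platform's side: every interaction nudges a weight upward.
        self.weights[topic] = self.weights.get(topic, 0.0) + watch_seconds / 60.0

    def show(self):
        # "See": expose the inferred profile, strongest interests first.
        return sorted(self.weights.items(), key=lambda kv: -kv[1])

    def remove(self, topic):
        # "Edit": the user says, "Nah, I'm not a chef."
        self.weights.pop(topic, None)

    def reset(self):
        # "Delete": the big red reset button.
        self.weights.clear()

profile = InterestProfile()
profile.observe("cooking", 30)       # one recipe lookup...
profile.observe("home decor", 240)
profile.remove("cooking")            # ...shouldn't brand you a chef
print(profile.show())                # [('home decor', 4.0)]
```

Notice that `show`, `remove`, and `reset` are the entire interface being argued for here: transparency, editing, and deletion as first-class user actions.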

4. It’s Not Just TikTok: Expand It to Every Platform

TikTok isn’t the only one using algorithms to nudge your preferences. Instagram, YouTube, Netflix, and Amazon all rely on them. The same ideas apply: you should be able to see how they work, edit them, and delete them when they no longer serve you.


5. The Challenges: Ethical Design Without Losing Control

Of course, with great power comes great responsibility. What if people, given too much control, start reinforcing bad habits? Think of someone with a gambling addiction who tweaks their algorithm to show more betting ads.

Platforms also don’t want people gaming their systems. Transparency is good, but we have to balance that with ethical design, ensuring people can’t use algorithm control to exploit the system.

Conclusion: Time to Take Control

The push to see, edit, or delete our algorithms is about more than control — it’s about ownership. We should have a voice in shaping our digital selves. If we can correct our credit reports, we should be able to do the same with algorithms that define our online lives.

Imagine a world where we’re not passive consumers but active participants in the algorithms that influence us. Sounds like a better future, right? Let’s start by opening the algorithmic black box.

— — —

Several U.S. Senators have been involved in discussions around TikTok and digital privacy, focusing on data security, privacy concerns, and potential foreign influence. Two have been especially prominent.

  1. Senator Mark Warner (D-VA): Warner, the chair of the Senate Intelligence Committee, has been a leading voice on issues related to technology, privacy, and national security. He has raised concerns about TikTok’s data practices and its connections to the Chinese government through its parent company, ByteDance. Warner has introduced or supported legislation aimed at enhancing cybersecurity and protecting user data, including efforts to regulate how foreign-owned apps handle American data.
  2. Senator Josh Hawley (R-MO): Hawley has been very vocal about his concerns with TikTok, focusing on its ties to China and its potential risks to national security and children’s privacy. He has introduced several pieces of legislation aimed at banning TikTok on government devices and has advocated for stricter regulations on digital platforms that collect and use personal data.

Both Warner and Hawley have been at the forefront of legislative efforts related to TikTok and broader digital privacy concerns, making them key figures in this ongoing debate.

reddit.com
u/EmergencyUpstairs309 — 2 days ago
▲ 3 r/IndustrialDesign+1 crossposts

The UX of Robotics - The Door Test

How to tell whether a humanoid robot is ready to live in your home or is just an expensive toy.

The Door Test: can a robot live in my home?

I love the idea of robots. At the moment though, I think there’s far too much “Wizard of Oz” marketing around consumer robots.

Great work is being done in non-humanoid robotics, with machines running tirelessly in “lights-out” factories. But humanoid robotics is a uniquely complex field. Watching Elon Musk “dance” with his robots is pure Wizard of Oz spectacle. Robotics is full of demo theatrics. Seeing robots do backflips and run (in pre-selected videos, often the takes where the robot didn’t face-plant) is at best an omission of the truth.

Standing up and opening doors in uncontrolled, varied home environments is a much harder generalization problem than choreographed stunts in a known space.

The simple things in life, like standing up from a chair or opening a door, are incredibly complex. Standing up from a chair requires a delicate balancing act among a multitude of factors: the height and type of the chair (armrests? a swivel base?), the angle and weight of the body.

Now imagine you have a robot in your home. Your house has many doors. We humans go in and out of rooms effortlessly, but doors are complex things in themselves. Here is a list of what a robot must consider when going through a door.

  • Is the door locked?
  • Does the door open outward or inward?
  • Which side of the door is the knob on, left or right?
  • What type of knob is it — twist, pull, push? Does the robot have the hand dexterity to do this?
  • Is it open already, or ajar?
  • Does the door open and close on its own, or need to be pushed?
  • Is it heavy or light?
  • Is it a sliding door, a double door, a garage door, or maybe even a swinging door? What about revolving doors?
  • Is there anyone coming through the other way?
  • Do you close it after you?
  • Is there a door threshold to trip over?
  • Are there steps going up to or down from the door?
  • Do you give the robot a set of keys?
  • Will the robot know which key to use for which door?
  • If the door was locked, will the robot lock it again behind it? If it was unlocked, will it leave it that way?
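To make the scale of that checklist concrete, here is a minimal Python sketch of the state a robot would have to infer, per door, before acting, and the action sequence that falls out of it. The names and the planner are hypothetical; no real robotics stack is assumed, and a real system would face the far harder problem of perceiving these attributes in the first place.

```python
from dataclasses import dataclass
from enum import Enum

class Handle(Enum):
    TWIST = "twist"
    PULL = "pull"
    PUSH = "push"
    NONE = "none"   # automatic door, or no handle at all

@dataclass
class DoorState:
    # Each field answers one question from the checklist.
    locked: bool = False
    opens_inward: bool = True
    handle: Handle = Handle.TWIST
    self_closing: bool = False
    ajar: bool = False
    threshold: bool = False

def plan(door: DoorState) -> list:
    # Turn the inferred state into a minimal action sequence.
    steps = []
    if door.locked:
        steps.append("select key and unlock")
    if not door.ajar and door.handle is not Handle.NONE:
        steps.append(f"operate {door.handle.value} handle")
    steps.append("pull door" if door.opens_inward else "push door")
    if door.threshold:
        steps.append("step over threshold")
    steps.append("pass through")
    if not door.self_closing:
        steps.append("close door")
    if door.locked:
        steps.append("relock door")  # restore the state it was found in
    return steps

print(plan(DoorState(locked=True, threshold=True)))
```

Even this toy version branches six ways before the robot has touched anything, and it assumes the perception problem (is it locked? which handle?) is already solved.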

If you are to welcome a robot into your house, you don’t want to spend all your time opening and closing doors for it. I tried to find a video of a robot opening a door, but I couldn’t find a single convincing demo in an unconstrained home setting, only highly staged lab demos. Point me to one if you know of something.

So, in the spirit of the Turing Test, I'm proposing the Door Test: a simple, everyday feat.

The Door Test: A humanoid robot should be able to approach an unfamiliar household door, infer how it works, open it safely, pass through, and close it appropriately (leaving it locked or unlocked as it found it) without human assistance or per-door programming.

I’m not opening the door of my house to any robot until it can do it itself.

reddit.com
u/EmergencyUpstairs309 — 5 days ago
▲ 0 r/DontThinkForMe+1 crossposts

How a database default set by someone who'd never met a user shaped the working lives of senior executives at a Fortune 500 company — and what it tells us about AI design

I was brought in to fix a UX problem at a Fortune 500 company.

The problem was a single input field.

Senior managers — people with decades of experience and significant organisational authority — were visibly frustrated with a data entry form they used every day. The source of their frustration was a fifty-character minimum on one particular field. You could not submit the form without typing at least fifty characters, regardless of whether fifty characters were appropriate for what you needed to say.

The workarounds were creative. People were padding entries with spaces. Adding meaningless qualifiers. Typing the same phrase twice. A small masterpiece of human adaptability in the face of arbitrary constraint. There was talk, at senior level, of building an entirely separate application just to route around this one field.

I asked why the minimum existed.

Nobody knew.

This is the part of the story I find most interesting.

Not the frustration. Not the workarounds. The fact that nobody knew.

The field connected to a database. The database had a fifty-character minimum on that column. The minimum had been set during the database installation. The installation had been performed by an intern. The intern had set the minimum at fifty characters because that was what he had learned at school was good practice for keeping a database efficient.

He was applying a principle he had been taught, to a context he didn't fully understand, affecting users he would never meet.

Nobody had questioned it because the field had a minimum, and fields have minimums for reasons, and querying every technical decision in an enterprise system is not how organisations function. The assumption was that someone had decided this. The reality was that a default had decided it — a default set by someone who was thinking about database efficiency and not thinking about users at all.

The fix took an afternoon. Remove the minimum. Done.

The workarounds stopped. The separate application was never built. Several senior managers had a noticeably better Tuesday.

I've been a UX designer for thirty years. I've worked with banks, telecoms, healthcare companies, universities. And I can tell you that the intern's default is not an unusual story. It is, in various forms, one of the most common stories in enterprise software.

The specific details change. The structure is always the same.

A decision gets made — often by someone junior, often quickly, often without any visibility into how it will affect the people who eventually encounter it. That decision becomes a default. The default becomes infrastructure. The infrastructure becomes invisible. Years pass. The person who made the decision has left. Nobody remembers why it exists. And somewhere in the organisation, real people with real jobs are spending real time working around something that should have taken an afternoon to fix.

This is what I mean when I say that defaults are policy decisions.

Not metaphorically. Literally. When you set a default — in a form field, in a software setting, in an AI system — you are making a choice on behalf of every person who will ever encounter that interface and not change it. Which, as research consistently shows, is most of them. Most people accept defaults. Most people don't adjust settings. Most people work within whatever constraints the system presents, assuming those constraints are there for a reason.

Sometimes they are. Often they aren't. Sometimes they're there because an intern learned something at school.
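Here is a minimal sketch of how a default becomes policy in code. The names are hypothetical, and the column constraint from the story is rendered here as an application-level validator, but the mechanism is the same: one constant, set far from any user, silently inherited by every caller.

```python
# A default becomes policy: every caller that doesn't override it
# accepts a decision it never saw being made. Hypothetical names.

DEFAULT_MIN_CHARS = 50  # the intern's "good practice", frozen at install time

def validate_entry(text: str, min_chars: int = DEFAULT_MIN_CHARS) -> bool:
    # Every form that calls this without overriding min_chars inherits
    # the policy decision without ever seeing it made.
    return len(text) >= min_chars

validate_entry("Shipped on schedule.")              # False: forces padding
validate_entry("Shipped on schedule." + " " * 40)   # True: the workaround
validate_entry("Shipped on schedule.", min_chars=0) # True: the afternoon fix
```

The fix really is one line, which is the unsettling part: the constraint cost years of workarounds not because it was hard to change, but because nobody could see where it came from.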

This matters more than ever right now, because AI systems are default machines.

Every AI-powered product you use has made thousands of decisions before you arrive. What to show you. What to filter out. How much to automate. What to assume about your preferences. What level of confidence to present its outputs with. These are all defaults. They were all set by someone. And most users will never change them — not because they don't care, but because the default is presented as the normal, expected, recommended state.

Which means the people who set those defaults are, effectively, setting policy for everyone who uses the product.

This is not a small responsibility. It is, in fact, one of the most consequential design decisions in any AI-powered system. And it is almost never discussed as such.

When we talk about AI ethics, we talk about bias in training data, about algorithmic accountability, about the rights of people affected by automated decisions. These are important conversations. But somewhere upstream of all of them is a more mundane question that doesn't get nearly enough attention:

Who set the default? What were they optimising for? And did they ever meet a user?

The intern, for what it's worth, was probably perfectly competent. He was doing his job. He applied what he knew to the task in front of him. The problem was not his competence. The problem was the gap between his context — database efficiency — and the context of the people who would eventually live inside the system he was configuring.

That gap is where most UX problems live. Not in malice. Not in incompetence. In the distance between the person making the decision and the person affected by it.

Closing that gap is the whole job.

Andy Grogan is a UX designer with thirty years of experience inside some of the world's largest organisations. His book Don't Make Me Think, But Don't Think For Me: The Joys and Horrors of AI Design is out June 2025.


u/EmergencyUpstairs309 — 10 days ago
▲ 0 r/UX_Design+1 crossposts

I remember studying graphic design at the Rietveld Academie in Amsterdam in the 1980s. I was designing a poster (it was a period when theater posters and LP album covers were the fashionable briefs) when a visiting design professor from Germany walked into the classroom. He asked me what theory of design I was applying. I had wanted to become a designer so I could design: be creative, follow my own thought process, jump around in my head in a random flow of free association. That, to me, was pure design. I told him I was free-associating, and he said, “Aha! Design theory: 47.”

My first reaction was sadness. What he had just catalogued as “design theory 47” was, to me, the sole approach to design. It was the driving force behind my desire to become a designer. Every other form of designing came across (maybe arrogantly) as methods and processes for designers who lacked talent, systems that merely required someone to input data and operate the controls.

But was this ego? What if the processes did produce better designs? As I progressed in my career as a professional designer, I discovered aspects of design work that I had not previously considered, but were now integral to my profession.

Firstly, the sheer volume and tempo of designing at a professional level require a specific process. Otherwise, the designer will burn out quickly.

The designer needs to produce consistent styles and results, not results at whim and fancy. A client chooses a designer for what that designer has already done, not for what they hope he might do this time.

The designer needs to be accountable to the business paying for the work, able to show that the design is based on data and not on risky assumptions.

Personal taste is subjective; the designer may create a masterpiece that others find incoherent.

I fully committed to my career as a professional designer, using processes, data, metrics, research, surveys, A/B testing, best practices, and any necessary resources.

However, a part of me treasures the primary reason I pursued this career path.

I want to have FUN!

I often make alternative designs based on gut feeling and whimsy to show my clients that there are aspects of design that could help make their products unique.

And as AI becomes a part of all our professional lives, Design Theory 47 might just save us.

reddit.com