I write about technology at theluddite.org

  • 1 Post
  • 9 Comments
Joined 2 years ago
Cake day: June 7th, 2023

  • No need to apologize for length with me basically ever!

    I was thinking of it the way you did in your second paragraph, but even more stripped down: The algorithm has N content buckets to choose from, and once it chooses, success is measured by how much of the video the user watched. For simplicity, the user's only choices are to keep watching or to log off. For small N, I think that @kersplomp@programming.dev is right that it's the multi-armed bandit problem, if we assume that user preferences are static. If we add the complexity that users prefer familiar things, which I think is pretty fair, so that a user is more likely to keep watching from a bucket they already know, then I'd expect exploration to get heavily disincentivized and exploitation to become much more favorable, probably with some pretty weird behavior along the way (there's a little simulation sketch at the end of this comment). What I like about this is that, with only a small deviation from a classic problem, it would help explain what you also explain, which is getting stuck in corners.

    Once you allow user choice beyond consume/log off, I think your way of thinking about it, as a turn-based game, is exactly right, and your point about bin refinement is great; I hadn't thought of that.
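
    To make that concrete, here's a minimal sketch of that stripped-down model. To be clear, everything in it is invented for illustration: the epsilon-greedy learner, the FAMILIARITY_BOOST parameter, and the Gaussian noise are all assumptions to make the dynamics visible, not a claim about how any real recommender works.

    ```python
    import random

    N_BUCKETS = 5             # N content buckets the algorithm chooses from
    EPSILON = 0.1             # exploration rate for the epsilon-greedy learner
    FAMILIARITY_BOOST = 0.05  # invented: each prior watch makes a bucket more appealing

    base_appeal = [random.random() for _ in range(N_BUCKETS)]  # static user preferences
    counts = [0] * N_BUCKETS       # times each bucket was recommended
    values = [0.0] * N_BUCKETS     # running mean watch-fraction per bucket
    familiarity = [0] * N_BUCKETS  # how often the user has watched each bucket

    def watch_fraction(bucket):
        """Reward: how much of the video the user watched, in [0, 1].
        Familiar buckets get watched longer."""
        appeal = base_appeal[bucket] + FAMILIARITY_BOOST * familiarity[bucket]
        return min(1.0, max(0.0, random.gauss(appeal, 0.1)))

    for step in range(10_000):
        if random.random() < EPSILON:   # explore a random bucket
            bucket = random.randrange(N_BUCKETS)
        else:                           # exploit the best-looking bucket so far
            bucket = max(range(N_BUCKETS), key=lambda b: values[b])
        reward = watch_fraction(bucket)
        familiarity[bucket] += 1        # the user grows more familiar with it
        counts[bucket] += 1
        values[bucket] += (reward - values[bucket]) / counts[bucket]

    print("recommendations per bucket:", counts)
    ```

    Running this, the counts pile up almost entirely in whichever bucket gets exploited early, because familiarity keeps raising its reward; set FAMILIARITY_BOOST to 0 and it behaves like the classic static-preference bandit again.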




  • Thanks!

    I feel enlightened now that you called out the self-reinforcing nature of the algorithms. It makes sense that an RL agent solving the bandit problem would create its own bubbles out of laziness.

    You're totally right that it's like a multi-armed bandit problem, but maybe with so many possibilities that searching is prohibitively expensive, since the space of options is much bigger than the rate at which humans can consume content. In another way, though, it's dissimilar: the agent's reward depends on its past choices (people watch more of what they're recommended). It would be really interesting to know if anyone has modeled a multi-armed bandit problem with this kind of self-dependency (there's a toy version at the end of this comment). I bet that, in that case, the exploration behavior is pretty chaotic. @abucci@buc.ci this seems like something you might just know off the top of your head!

    Maybe we can take advantage of that laziness to incept critical thinking back into social media, or at least have it eat itself.

    If you have any ideas for how to turn social media against itself, I'd love to hear them. I worked on this post for an unusually long time, for a lot of reasons, but one of them was trying to think of a counter-strategy. I came up with nothing, though!
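
    For what it's worth, here's a toy version of that self-dependent bandit. All of it is made up for illustration: the drift parameter (how much each recommendation nudges the user's future reward), the epsilon-greedy agent, and the option count are assumptions, not a model of any real platform. The point is just that with far more options than the user can ever sample, plus reward that feeds back on past choices, each run settles into a different, arbitrary bubble:

    ```python
    import random

    def run(seed, n_options=100_000, steps=10_000, drift=0.02, epsilon=0.1):
        """One user's session: far more options than they could ever sample,
        and every recommendation makes that option a bit more rewarding."""
        rng = random.Random(seed)
        appeal = [rng.random() for _ in range(n_options)]  # latent appeal per option
        values, counts = {}, {}
        for _ in range(steps):
            if not values or rng.random() < epsilon:  # explore a random option
                option = rng.randrange(n_options)
            else:                                     # exploit the best estimate so far
                option = max(values, key=values.get)
            reward = min(1.0, appeal[option])
            appeal[option] = min(1.0, appeal[option] + drift)  # self-dependency:
            # being recommended makes the user like this option more next time
            counts[option] = counts.get(option, 0) + 1
            old = values.get(option, 0.0)
            values[option] = old + (reward - old) / counts[option]
        top = max(counts, key=counts.get)
        return top, counts[top] / steps

    for seed in range(5):
        top, share = run(seed)
        print(f"seed {seed}: settled on option {top}, {share:.0%} of all watches")
    ```

    Each seed locks onto a different option purely because of which ones it happened to try first, which is the "bubbles out of laziness" behavior in miniature.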





  • I'd say that's mostly right, but it's less about opportunities and more about design. To return to the example of the factory: let's say that there was a communist revolution and the workers now own the factory. The machine still has them facing away from each other. If they want to face each other, they'll have to rebuild it. The values of the old system are literally, physically present in the machine.

    So it’s not that you can do different things with a technology based on your values, but that different values produce technology differently. This actually limits future possibilities. Those workers physically cannot face each other on that machine, even if they want to use it that way. The past’s values are frozen in that machine.


  • No problem!

    Technology is constrained by the rules of the physical world, but that is an underconstraint.

    Example: Let's say that there's a factory, and the factory has a machine that makes whatever. The machine takes two people to operate. The thing needs to get made, so that limits the number of possible designs, but there are still many open questions, like, for example: should the workers face each other or face away from each other? The boss might make them face away from each other so that they don't chat and get distracted. If the workers got to choose, they'd prefer to face each other to make the work more pleasant. In this way, the values of society are embedded in the design of the machine itself.

    "I struggle with the idea that a tool (like a computer) is bad because it is too general purpose. Society, and hence people and their values, define how the tool is used. Would you elaborate on that? I'd like to understand the idea."

    I love computers! It's not that they're bad, but that, because they're so general purpose, more cultural values get embedded in them. As in the example above, there are decisions that aren't determined by the goal you're trying to accomplish; because computers are so much more open-ended than physical robots, there are more decisions like that, and even more leeway in how they get decided.

    I agree with you that good/evil is not a productive way to think about it, just like I don't think neutrality is right either. Instead, I think that our technology contains within it a reflection of who got to make those many design decisions, like which direction the workers should face. These decisions accumulate. I personally think that capitalism sucks, so technology under capitalism, after a few hundred years, also sucks, since that technology contains within it hundreds of years of capitalist decision-making.


  • I didn't find the article particularly insightful, but I don't like your way of thinking about tech either. Technology and society make each other. Obviously, technology choices like mass transit vs. cars shape our lives in ways that the pens example doesn't help us explain. Similarly, society shapes the way that we make technology. Technology is constrained by the rules of the physical world, but that is an underconstraint. The leftover space (i.e., the vast majority of design decisions) is where we embed social values into the technology. To return to the example of mass transit vs. cars: these obviously have different values embedded within them, which then go on to shape the world that we build around them.

    This way of thinking helps explain why computer technology specifically is so awful: computers are shockingly general purpose in a way that has no parallel in physical products. That means the underconstraining is more pronounced, so social values have an even more outsized say in how software gets made. This is why every other software product is just the pure manifestation of capitalism in a way that a robotic arm could never be.

    Edit: this argument is adapted from Andrew Feenberg's "Transforming Technology".