So, I was watching Black Mirror's “Hated in the Nation” (spoilers ahead). I'd say it has been my favorite episode of season 3 so far. I found it lower-key and more realistic (and hence scarier) than the previous episodes in this third season.
I liked the moral lesson about the limits and consequences of publicly shaming people on social media (CaraAnchoa would be the latest local example). Do we really mean harm to anyone? How would we react if that one person got murdered, or killed themselves, unable to cope with the social pressure?
The bee attacks are a clear reference to Hitchcock's “The Birds”, only more effective and with an explanation behind them. Their relentlessness reminded me of Terminator's T-1000.
My only objection to the plot is that I was expecting the bee attacks to be a “malfunction” of their autonomous decision-making software. In fact, the first thing I thought of was Asimov's Three Laws of Robotics.
The laws go as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws seem straightforward enough, but Asimov wrote innumerable stories exploring how they can be interpreted and how conflicts between them could be resolved.
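The conflict-resolution idea behind the laws can be sketched as a strict priority ordering: an action that violates the First Law is always worse than one that only violates the Second, and so on. Here is a toy Python illustration of that ordering; the action format and flag names are my own invention, not anything from Asimov or the episode:

```python
def choose_action(candidates):
    """Pick the candidate action that violates the lowest-priority law.

    Each candidate is a dict of boolean flags. Sorting by the tuple
    (harms_human, disobeys_order, endangers_self) gives a lexicographic
    ordering: a First Law violation outweighs any Second Law violation,
    which outweighs any Third Law violation.
    """
    return min(
        candidates,
        key=lambda a: (
            a.get("harms_human", False),     # First Law
            a.get("disobeys_order", False),  # Second Law
            a.get("endangers_self", False),  # Third Law
        ),
    )

# A robot ordered to stand by while a human is in danger: shielding the
# human violates only the Third Law, so it wins over disobedience (Second)
# and over letting harm happen through action (First).
actions = [
    {"name": "shield human", "endangers_self": True},
    {"name": "follow order", "harms_human": True},
    {"name": "stand by", "disobeys_order": True},
]
print(choose_action(actions)["name"])  # → shield human
```

Most of Asimov's plots live exactly in the cracks of this scheme: cases where the flags themselves are ambiguous, which no amount of tidy ordering can fix.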
I was disappointed to find out that Charlie Brooker resorted to a “bad guy” taking over the system in order to trigger the plot. I would much rather have had the bees themselves somehow conclude that killing that widely hated human fulfilled the First Law, without conflicting with the Second. Not that Asimov's laws of robotics have much bearing on real AI development (as explained here), but it would have made for a far more interesting story, I think.