Radiolab dating bot

This part starts 20 minutes in and is 20 minutes long.

The first third of the episode is about the Turing test.

I seem to misunderstand a lot more words than the people around me.

I'm the living version of that Benny Hill gag. Regardless, to retain much I either have to take notes or totally lock my attention onto the speaker (which most people find creepy).

Audio for learning is like getting a fixed dataset as a linked list; a text transcript is like getting it as a vector. Every time someone pauses, I tend to tune out of what they're saying.

Here's a negative path that I hope doesn't occur: we build toys that are more lifelike, communicative, and designed to evoke empathy, and some children just become desensitized to empathy triggers.

On 2x, I have to pay attention or I'll miss something.

Listening to audio is significantly slower than reading, and often by the time the speakers get to describing the results, a lot of time has passed since they described the context.
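The linked-list-versus-vector analogy from earlier can be made concrete. This is a minimal sketch (the words and class names are invented for illustration): reaching position i in a sequential stream means traversing everything before it, while an indexed transcript supports jumping and searching directly.

```python
# Sketch: audio as a linked list (sequential access only)
# vs. a transcript as a vector (cheap random access).

class Node:
    """One word of 'audio': you can only move forward, one step at a time."""
    def __init__(self, word, nxt=None):
        self.word = word
        self.next = nxt

def seek(head, i):
    """Reaching word i means 'playing through' the i words before it: O(i)."""
    node = head
    for _ in range(i):
        node = node.next
    return node.word

words = "the quick brown fox jumps over the lazy dog".split()

# Build the linked list (the audio stream), front to back.
head = None
for w in reversed(words):
    head = Node(w, head)

transcript = words  # a plain list: O(1) indexing, trivial to skim and search

assert seek(head, 4) == transcript[4] == "jumps"
assert "fox" in transcript  # searching a transcript is one expression
```

Both structures hold the same data; only the access pattern differs, which is the whole point of the analogy.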

There's zero benefit to the audio-only format, and by losing efficient random access you can no longer skim, skip, or search for things. I wonder if there's any research to prove or disprove something like that?

Everyone uses the milder version of dehumanizing language, reduction to abstraction [1], because it's more efficient to consider groups of people and it removes emotions from decisions.

For example, when considering latency you might say: "The service is experiencing 3 seconds of latency at the 99th percentile with periodic timeouts."
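For readers unfamiliar with the jargon, here is a hedged sketch of what "3 seconds at the 99th percentile" means: 99% of requests finish faster than that value, and the slowest 1% do not. The sample values and the nearest-rank method here are assumptions for illustration, not a description of any particular service.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the value below which roughly p% of samples fall."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

# 98 fast requests, plus a slow one and a timeout-like outlier (hypothetical data).
latencies_ms = [100] * 98 + [3_000, 30_000]

assert percentile(latencies_ms, 50) == 100    # the typical request is fine
assert percentile(latencies_ms, 99) == 3_000  # but p99 is 3 seconds
```

Note how the summary statistic is exactly the kind of abstraction the paragraph above describes: the 30-second timeout that stopped one real person's transaction disappears into "p99 = 3s".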

You wouldn't say "Alice wasn't able to buy medicine because the transaction failed."

[1] As I understand it, this has to do with who was asking for a thing and what they wanted.

The upper class spoke French, and when they asked for boeuf, they wanted meat, not a cow.

