One of the nice perks of $Dayjob is that they celebrate their employees. The first Monday of every month we get giant cakes in all the lunchrooms to celebrate the work-aversaries of people who were hired in that particular month. Last week, we had the birthday celebration for $Dayjobbers who were born in July, August, and September. Where the work-aversary cake is for everyone, the birthday one is for birthday peeps only. The birthday celebration features cupcakes and a game of Ask Me Anything with the E-level people who attend.
This month, Big Bossman and the head of Engineering were there. Over the course of the hour, the conversation turned to automation and the loss of jobs that will result from artificial intelligence. We are educational tech, so part of the conversation was about the role we can play in the re-training effort, but Big Bossman said something that I’ve been thinking about ever since. He said that artificial intelligence can do repeatable tasks that meet a binary pass/fail benchmark, but you’ll always need humans to add a layer of judgement.
A while ago, I did an escape room as a team building event. We got the door to open, but it wasn’t the solution that EscapeWorx had in their script. We MacGyvered it. They wrote an addendum to their script for the solution we found. According to the flow of the original script, we would have failed the test. Yet, the door swung open, so we passed. The point of the exercise was to complete a circuit, which we did, so full points for ingenuity, but lost points for expected method. Or something. If there was only one path to success by following steps 1 through 10, AI would not have permitted us to succeed. This is not a perfect analogy, since we got out without the intervention of the EscapeWorx Overlords, but the point remains. We beat the system, so full points to us.
I’m also presently helping develop a benchmark for cadet attendance that will qualify them for a Very Expensive (to us, but free to them) trip in May. On the surface, we thought we had a SooperComplex algorithm: take the maximum number of events a cadet *could* have attended, subtract the things they were excused from, and the resulting number gave us an approximation of their participation percentage. Except for the part where Band kids had twice as many obligatory events when you count the band practices and performances they’ve done. And if you’re a kid who has just joined, (perhaps) 3 weeks of regular parade gets you 100%. That’s where the judgement comes in. Surely, having to be at 50 things and missing 5 of them should still indicate a very solid commitment, right?
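For the curious, the SooperComplex algorithm boils down to something like this. A minimal sketch only; the function and variable names are mine, not the real implementation, and the real one surely has more moving parts:

```python
def participation_pct(possible_events: int, excused: int, attended: int) -> float:
    """Attendance as a percentage of the events a cadet could have been at,
    after removing the ones they were excused from."""
    denominator = possible_events - excused
    if denominator <= 0:
        return 0.0  # nothing they could have attended; judgement call territory
    return 100.0 * attended / denominator

# The committed cadet: 50 obligations, missed 5 of them.
veteran = participation_pct(possible_events=50, excused=0, attended=45)   # 90.0

# The brand-new kid: 3 weeks of regular parade, made all 3.
newbie = participation_pct(possible_events=3, excused=0, attended=3)      # 100.0
```

And there's the problem in two lines: the formula cheerfully ranks the three-week newbie above the cadet who showed up 45 times, which is exactly the gap the Hooman judgement has to fill.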
I like the SooperComplex algorithm because it removes some of the bias that might occur with the optics of presence. But on its own, it’s not a sufficient gauge for our purposes. Still, if the Hoomans need to provide some kind of judgement here, how does one quantify that? What exactly does “good judgement” mean?
I’ve been listening for when people say things like “good judge of character” or advise their children (or competitive a cappella group) to “make good choices”. There’s a personal moral compass that determines good choices, but my compass and Child’s compass (Child being the person whom I advise to make good choices most often) may differ on any given decision point on any given day. Which makes them fluid, which perhaps is the crux of it.
The intersection of Good Judgement and a repeatable set of criteria isn’t easily navigable. At present, I’m finding the road made more precarious by potholes filled with whitehotfury. I’d be glad to hand some of the responsibility to some dispassionate automaton that could point to easy answers for me, but apparently that’s only good until you need to use Good Judgement.
It’s like the song that never ends.
Call Elon Musk. When the Rise of the Machines actually happens, I’mma need way more cupcakes.