I've been happy with my 2016 HTPC, but the situation has changed, largely because of something I mentioned in passing back in November:
The Xbox One and PS4 are effectively plain old PCs, built on:
The golden age of x86 gaming is well upon us. That's why the future of PC gaming is looking brighter every day. We can see it coming true in the solid GPU and idle power improvements in Skylake, riding the inevitable wave of x86 becoming the dominant kind of (non-mobile, anyway) gaming for the foreseeable future.
And then, the bombshell. It is all but announced that Sony will be upgrading the PS4 this year, no more than three years after it was first introduced … just like you would upgrade a PC.
Sony may be tight-lipped for now, but it's looking increasingly likely that the company will release an updated version of the PlayStation 4 later this year. So far, the rumoured console has gone under the moniker PS4K or PS4.5, but a new report from gaming site GiantBomb suggests that the codename for the console is "NEO," and it even provides hardware specs for the PlayStation 4's improved CPU, GPU, and higher bandwidth memory.
In PC enthusiast parlance, you might say Sony just slotted in a new video card, a faster CPU, and slightly higher speed RAM.
This is old hat for PCs, but to release a new, faster model that is perfectly backwards compatible is almost unprecedented in the console world. I have to wonder if this is partially due to the intense performance pressure of VR, but whatever the reason, I applaud Sony for taking this step. It's a giant leap towards consoles being more like PCs, and another sign that the golden age of x86 is really and truly here.
I hate to break this to PS4 enthusiasts, but as big an upgrade as that is – and it really is – it's still nowhere near enough power to drive modern games at 4k. Nvidia's latest and greatest GTX 1080 can only sometimes manage 30fps at 4k. The increase in required GPU power when going from 1080p to 4k is so vast that even the PC "cost is no object" folks who will happily pay $600 for a video card and $1000 for the rest of their box have some difficulty getting there today. Stuffing all that into a $299 box for the masses is going to take quite a few more years.
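A quick back-of-the-envelope calculation makes that gap concrete: 4k (3840×2160) pushes four times the pixels of 1080p (1920×1080) per frame, so to a first approximation the GPU has four times the shading work to do.

```python
# Pixel-count arithmetic behind the 1080p -> 4k GPU gap.
w1080, h1080 = 1920, 1080
w4k, h4k = 3840, 2160

pixels_1080p = w1080 * h1080   # 2,073,600 pixels per frame
pixels_4k = w4k * h4k          # 8,294,400 pixels per frame

ratio = pixels_4k / pixels_1080p
print(ratio)  # 4.0 -- roughly 4x the shading work, every single frame
```

And that's before considering higher-quality settings or the 90fps floor that VR demands.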
Still, I like the idea of the PS4 Neo so much that I'm considering buying it myself. I strongly support this sea change in console upgradeability, even though I swore I'd stick with the Xbox One this generation. To be honest, my Xbox One has been a disappointment to me. I bought the "Elite" edition because it had a hybrid 1TB drive, and then added a 512GB USB 3.0 SSD to the thing and painstakingly moved all my games over to that, and it is still appallingly slow to boot, to log in, to page through the UI, to load games. It's also noisy under load and sounds like a broken down air conditioner even when in low power, background mode. The Xbox One experience is way too often drudgery and random errors instead of the gaming fun it's supposed to be. Although I do unabashedly love the new controller, I feel like the Xbox One is, overall, a worse gaming experience than the Xbox 360 was. And that's sad.
Or maybe I'm just spoiled by PC performance, and the relatively crippled flavor of PC you get in these $399 console boxes. If all evidence points to the golden age of x86 being upon us, why not double down on x86 in the living room? Heck, while I'm at it … why not triple down?
This, my friends, is what tripling down on x86 in the living room looks like.
It's Intel's latest Skull Canyon NUC. What does that acronym stand for? Too embarrassing to explain. Let's just pretend it means "tiny awesome x86 PC". What's significant about this box is it contains the first on-die GPU Intel has ever shipped that can legitimately be considered console class.
It's not cheap at $699, but this tiny box bristles with cutting edge x86 tech:
All impressive, but the most remarkable items are the GPU and the Thunderbolt 3 port. Putting together an HTPC that can kick an Xbox One's butt as a gaming box is now as simple as adding these three items together:
Ok, fine, it's a cool $1,080 plus tax compared to $399 for one of those console x86 boxes. But did I mention it has skulls on it? Skulls!
The CPU and disk performance on offer here are hilariously far beyond what's available on current consoles:
Disk performance of the two internal PCIe 3.0 4x M.2 slots, assuming you choose a proper NVMe drive as you should, is measured in not megabytes per second but gigabytes per second. Meanwhile consoles lumber on with, at best, hybrid drives.
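To see what that order-of-magnitude difference means in practice, here's a rough sketch of load times for a hypothetical 40 GB game install. The drive speeds are illustrative assumptions (roughly 100 MB/s sustained for a console hybrid drive, 2 GB/s for a decent NVMe drive), not benchmarks:

```python
# Back-of-the-envelope sequential read times for a hypothetical
# 40 GB game install. Speeds below are illustrative assumptions.
game_size_mb = 40 * 1024   # 40 GB in MB

hybrid_mb_per_s = 100      # assumed console hybrid drive
nvme_mb_per_s = 2000       # assumed NVMe M.2 SSD

hybrid_seconds = game_size_mb / hybrid_mb_per_s
nvme_seconds = game_size_mb / nvme_mb_per_s

print(round(hybrid_seconds))  # ~410 seconds
print(round(nvme_seconds))    # ~20 seconds
```

Real-world load times are dominated by more than raw sequential reads, but the twenty-fold gap is the point.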
But most importantly, its GPU performance is on par with current consoles. NUC blog measured 41fps average in Battlefield 4 at 1080p and medium settings. Digging through old benchmarks, I find plenty of pages where a Radeon 78xx or 77xx series video card, the closest analog to what's in the Xbox One and PS4, achieves a similar result in Battlefield 4:
I personally benchmarked GRID 2 at 720p (high detail) on all three of the last HTPC models I owned:

                               Max   Min   Avg
  i3-4130T, HD 4400             32    21    27
  i3-6100T, HD 530              50    32    39
  i7-6770HQ, Iris Pro 580       96    59    78
When I up the resolution to 1080p, I get 59fps average, 38 min, 71 max. Checking with Notebookcheck's exhaustive benchmark database, that is closest to the AMD R7 250, a rebranded Radeon 7770.
What we have here is legitimately the first on-die GPU that can compete with a low-end discrete video card from AMD or Nvidia. Granted, an older one, one you could buy for about $80 today, but one that is certainly equivalent to what's in the Xbox One and PS4 right now. This is a real first for Intel, and it probably won't be the last time, considering that on-die GPU performance increases have massively outpaced CPU performance increases for the last 5 years.
As for power usage, I was pleasantly surprised to measure that this box idles at 15w at the Windows Desktop doing nothing, and drops to 13w when the display sleeps. Considering the best idle numbers I've measured are from the Scooter Computer at 7w and my previous HTPC build at 10w, that's not bad at all! Under full game load, it's more like 70 to 80 watts, and in typical light use, 20 to 30 watts. It's the idle number that matters the most, as that represents the typical state of the box. And compared to the 75 watts a console uses even when idling at the dashboard, it's no contest.
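To put those idle numbers in household terms, here's a quick sketch of the annual electricity cost difference. The 20 idle hours per day and the $0.12/kWh rate are illustrative assumptions, not measurements:

```python
# Rough annual electricity cost of an always-on box, by idle wattage.
# Assumes 20 idle hours/day and $0.12 per kWh -- both illustrative.
idle_hours_per_year = 20 * 365   # 7300 hours
rate_per_kwh = 0.12

def annual_cost(watts):
    kwh = watts * idle_hours_per_year / 1000
    return kwh * rate_per_kwh

print(round(annual_cost(15), 2))  # NUC idling at 15w  -> about $13/year
print(round(annual_cost(75), 2))  # console idling at 75w -> about $66/year
```

Not a fortune either way, but a 5× difference in standby draw adds up across the years a living-room box typically lives.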
Of course, 4k video playback is no problem, though 10-bit 4K video may be a stretch. If that's not enough — if you dream bigger than medium detail 1080p gameplay — the presence of a Thunderbolt 3 port on this little box means you can, at considerable expense, use any external GPU of your choice.
That's the Razer Core external graphics dock, and it's $499 all by itself, but it opens up an entire world of upgrading your GPU to whatever the heck you want, as long as your x86 computer has a Thunderbolt 3 port. And it really works! In fact, here's a video of it working live with this exact configuration:
Few games are meaningfully CPU limited today; the disk and CPU performance of this Skull Canyon NUC is already far ahead of current x86 consoles, even the PS4 Neo that's about to be introduced. So being able to replace the one piece that needs to be the most replaceable is huge. Down the road you can add the latest, greatest GPU model whenever you want, just by plugging it in!
The only downside of using such a small box as my HTPC is that my two 2.5" 2TB media drives become external USB 3.0 enclosures, and I am limited by the 4 USB ports. So it's a little … cable-y in there. But I've come to terms with that, and its tiny size is an acceptable tradeoff for all the cable and dongle overhead.
I still remember how shocked I was when Apple switched to x86 back in 2005. I was also surprised to discover just how thoroughly both the PS4 and Xbox One embraced x86 in 2013. Add in the current furor over VR, plus the PS4 Neo opening new console upgrade paths, and the future of x86 as a gaming platform is rapidly approaching supernova.
If you want to experience what console gaming will be like in 10 years, invest in a Skull Canyon NUC and an external Thunderbolt 3 graphics dock today. If we are in a golden age of x86 gaming, this configuration is its logical endpoint.
Our kids – ages 4, 4, and 7 – have reached the point where taking longer trips with them is possible without everyone losing what's left of their sanity in the process. But we still have the same problem on multi-hour trips, whether in a car or on a plane: how do we bring enough stuff to keep the kids entertained without carting 5 pounds of books and equipment along, per person? And if we agree, like most parents, that the iPad is the general answer to this question, how do I get enough local media downloaded and installed on each of their iPads before the trip starts? And do I need 128GB iPads, because those are kind of expensive?
We clearly have a media sharing problem. I asked on Twitter and quite a number of people recommended the HooToo HT-TM05 TripMate Titan at $40. I took their advice, and they were right – this little device is amazing!
The value of the last two points is debatable depending on your situation, but the utility of the first two is huge! Plus the large built-in battery means it can act as a self-powered WiFi hotspot for 10+ hours. All this for only forty bucks!
It's a very simple device. It has exactly one button on the top:
Once you get yours, just hold down the button to power it on, let it fully boot, and connect to the new TripMateSith WiFi network. As to why it's called that, I suspect it has to do with the color scheme of the device and this guy.
I am guessing licensing issues forced them to pick the 'real' name of TripMate Titan, but wirelessly, it's known as TripMateSith-XXXX. Connect to that. The default password is 11111111 (that's eight ones).
Once connected, navigate to 10.10.10.254 in your browser. Username is admin, no password.
This interface is totally smartphone compatible, for the record, but I recommend you do this from a desktop or laptop since we need to upgrade the firmware immediately. As received, the device has firmware 2.000.022 and you'll definitely want to upgrade to the latest firmware right away:
For this particular use, so we can attach the storage, leave it attached forever, and kinda-sorta pretend it is all one device, I recommend a tiny $32 128GB USB 3.0 drive. It's not a barn-burner, but it's fast enough for its diminutive size.
In the past, I've recommended very fast USB 3.0 drives, but I think that time is coming to an end. If you need something larger than 128GB, you could carry a USB 3.0 enclosure with a traditional inexpensive 2.5" HD, but the combination of travel and spinning hard drives makes me nervous. Not to mention the extra power consumption. Instead, I recommend one of the new, budget compact M.2 SSDs in a USB 3.0 enclosure:
I discovered this brand of Phison controller based budget M.2 SSDs when I bought the Scooter Computers, and they are surprisingly great performers for the price, particularly if you stick with the newest Phison S10 controller. They run absolute circles around large USB flash drives in performance! The larger the drive, the more you need to care about this – say, when you need to quickly copy a bunch of new media for the kids before you go catch that plane.

Settings and WiFi
Let's continue setting up our HooToo Tripmate Titan. In the web interface, under Settings, Network Settings, these are the essentials:
In Host Name, first set the device name to something short and friendly. You will be typing this later on every device you attach to it. I used mully and sully for mine.
In Wi-Fi and LAN
There's more here, if you want to bridge wired or wirelessly, but this will get you started.

Windows
Connect to the HooToo's WiFi network, then type in the name of the device (mine's called sully) in Explorer or the File Run dialog, prefixed by \\.
The default user accounts are admin and guest with no passwords, unless you set one up. Admin lets you write files; guest does not.
Once you connect you'll see the default file share for the USB device and can begin browsing the files at UsbDisk1_Volume1.
I use the File Explorer app for iOS, though I am sure there are plenty of other alternatives. It's $5, and I have it installed on all my iOS devices.
Connect to the HooToo's WiFi network, then add a new Windows type share via the menu on the left. (I'm not sure if other share types work – they might – but that one definitely does.) Enter the name of the device here and the account admin with no password. If you forget to enter account info, you'll be prompted on connect.
Once set up, this connection will be automatically saved for future use. And once you connect, you can browse the single available file share at UsbDisk1_Volume1 and play back any files.
Be careful, though, as media files you open here will use the default iOS player – you may need a third party media player if the file has complex audio streams (DTS, for example) or unusual video codecs.

Caveats
For some reason, with a USB 3.0 flash drive attached, the battery slowly drains even when powered off. So you'll want to remove any flash drive when the HooToo is powered off for extended periods. I have no idea why this happens, but I was definitely able to reproduce the behavior. Kind of annoying since my whole goal was to have "one" device, but oh well.
This isn't a fancy, glitzy Plex based system; it's a basic filesystem browser. Devices that have previously connected to this WiFi network will automatically connect to it when no other WiFi networks are available – say, when you're in a van driving to Legoland, or on a plane flying to visit your grandparents. You will still have to train people to open the File Explorer app and look for the right device name, or create a desktop link to the proper share.
But in my book, simple is good. The HooToo HT-TM05 TripMate plus a small 128GB flash drive is an easy, flexible way to wirelessly share large media files across a ton of devices for less than 75 bucks total, and it comes with a large, convenient rechargeable battery.
I think one of these will live, with its charger cable and a flash drive chock full of awesome media, permanently inside our van for the kids. Remember, no matter where you go, there your … files … are.
Since I started working on Discourse, I've spent a lot of time thinking about how software can encourage and nudge people to be more empathetic online. That's why it's troubling to read articles like this one:
My brother's 32nd birthday is today. It's an especially emotional day for his family because he's not alive for it.
He died of a heroin overdose last February. This year is even harder than the last. I started weeping at midnight and eventually cried myself to sleep. Today's symptoms include explosions of sporadic sobbing and an insurmountable feeling of emptiness. My mom posted a gut-wrenching comment on my brother's Facebook page about the unfairness of it all. Her baby should be here, not gone. "Where is the God that is making us all so sad?" she asked.
In response, someone – a stranger/(I assume) another human being – commented with one word: "Junkie."
The interaction may seem a bit strange and out of context until you realize that this is the Facebook page of a person who was somewhat famous, who produced the excellent show Parks and Recreation. Not that this forgives the behavior in any way, of course, but it does explain why strangers would wander by and make observations.
There is deep truth in the old idea that people are able to say these things because they are looking at a screen full of words, not directly at the face of the person they're about to say a terrible thing to. That one level of abstraction the Internet allows, typing, which is so immensely powerful in so many other contexts …
… has some crippling emotional consequences.
As an exercise in empathy, try to imagine saying some of the terrible things people typed to each other online to a real person sitting directly in front of you. Or don't imagine, and just watch this video.
I challenge you to watch the entirety of that video. I couldn't do it. This is the second time I've tried, and I had to turn it off not even 2 minutes in because I couldn't take it any more.
It's no coincidence that these are comments directed at women. Over the last few years I have come to understand how, as a straight white man, I have the privilege of being immune from most of this kind of treatment. But others are not so fortunate. The Guardian analyzed 70 million comments and found that online abuse is heaped disproportionately on women, people of color, and people of different sexual orientation.
And avalanches happen easily online. Anonymity disinhibits people, making some of them more likely to be abusive. Mobs can form quickly: once one abusive comment is posted, others will often pile in, competing to see who can be the most cruel. This abuse can move across platforms at great speed – from Twitter, to Facebook, to blogposts – and it can be viewed on multiple devices – the desktop at work, the mobile phone at home. To the person targeted, it can feel like the perpetrator is everywhere: at home, in the office, on the bus, in the street.
I've only had a little taste of this treatment, once. The sense of being "under siege" – a constant barrage of vitriol and judgment pouring your way every day, every hour – was palpable. It was not pleasant. It absolutely affected my state of mind. Someone remarked in the comments that ultimately it did not matter, because as a white man I could walk away from the whole situation any time. And they were right. I began to appreciate what it would feel like when you can't walk away, when this harassment follows you around everywhere you go online, and you never really know when the next incident will occur, or exactly what shape it will take.
Imagine the feeling of being constantly on edge like that, every day. What happens to your state of mind when walking away isn't an option? It gave me great pause.
I admired the way Stephanie Wittels Wachs actually engaged with the person who left that awful comment. This is a man who has two children of his own, and should be no stranger to the kind of pain involved in a child's death. And yet he felt the need to post the word "Junkie" in reply to a mother's anguish over losing her child to drug addiction.
Isn't this what empathy is? Putting myself in someone else's shoes with the knowledge and awareness that I, too, am human and, therefore, susceptible to this tragedy or any number of tragedies along the way?
Most would simply delete the comment, block the user, and walk away. Totally defensible. But she didn't. She took the time and effort to attempt to understand this person who abused her mother – to reach him, to connect, to demonstrate the very empathy this man appears incapable of.
Consider the related story of Lenny Pozner, who lost a child at Sandy Hook and became the target of groups who believe the event was a hoax. He, too, selflessly devotes much of his time to refuting and countering these bizarre claims.
Tracy's alleged harassment was hardly the first, Pozner said. There's a whole network of people who believe the media reported a mass shooting that never happened, he said, that the tragedy was an elaborate hoax designed to increase support for gun control. Pozner said he gets ugly comments often on social media, such as, "Eventually you'll be tried for your crimes of treason against the people," "… I won't be satisfied until the caksets are opened…" and "How much money did you get for faking all of this?"
It's easy to practice empathy when you limit it to people that are easy to empathize with – the downtrodden, the undeserving victims. But it is another matter entirely to empathize with those that hate, harangue, and intentionally make other people's lives miserable. If you can do this, you are a far better person than me. I struggle with it. But my hat is off to you. There's no better way to teach empathy than to practice it, in the most difficult situations.
In individual cases, reaching out and really trying to empathize with people you disagree with or dislike can work, even people who happen to be lifelong members of hate organizations, as in the remarkable story of Megan Phelps-Roper:
As a member of the Westboro Baptist Church, in Topeka, Kansas, Phelps-Roper believed that AIDS was a curse sent by God. She believed that all manner of other tragedies – war, natural disaster, mass shootings – were warnings from God to a doomed nation, and that it was her duty to spread the news of His righteous judgments. To protest the increasing acceptance of homosexuality in America, the Westboro Baptist Church picketed the funerals of gay men who died of AIDS and of soldiers killed in Iraq and Afghanistan. Members held signs with slogans like "GOD HATES FAGS" and "THANK GOD FOR DEAD SOLDIERS," and the outrage that their efforts attracted had turned the small church, which had fewer than a hundred members, into a global symbol of hatred.
Perhaps one of the greatest failings of the Internet is the breakdown in cost of emotional labor.
First we'll reframe the problem: the real issue is not Problem Child's opinions – he can have whatever opinions he wants. The issue is that he's doing zero emotional labor – he's not thinking about his audience or his effect on people at all. (Possibly, he's just really bad at modeling other people's responses – the outcome is the same whether he lacks the will or lacks the skill.) But to be a good community member, he needs to consider his audience.
True empathy means reaching out and engaging in a loving way with everyone, even those that are hurtful, hateful, or spiteful. But on the Internet, can you do it every day, multiple times a day, across hundreds of people? Is this a reasonable thing to ask of someone? Is it even possible, short of sainthood?
The question remains: why would people post such hateful things in the first place? Why reply "Junkie" to a mother's anguish? Why ask the father of a murdered child to publicly prove his child's death was not a hoax? Why tweet "Thank God for AIDS!"?
Unfortunately, I think I know the answer to this question, and you're not going to like it.
I don't like it. I don't want it. But I know.
I have laid some heavy stuff on you in this post, and for that, I apologize. I think the weight of what I'm trying to communicate here requires it. I have to warn you that the next article I'm about to link is far heavier than anything I have posted above, maybe the heaviest thing I've ever posted. It's about the legal quandary presented in the tragic cases of children who died because their parents accidentally left them strapped into car seats, and it won a much-deserved Pulitzer. It is also one of the most harrowing things I have ever read.
Ed Hickling believes he knows why. Hickling is a clinical psychologist from Albany, N.Y., who has studied the effects of fatal auto accidents on the drivers who survive them. He says these people are often judged with disproportionate harshness by the public, even when it was clearly an accident, and even when it was indisputably not their fault.
Humans, Hickling said, have a fundamental need to create and maintain a narrative for their lives in which the universe is not implacable and heartless, that terrible things do not happen at random, and that catastrophe can be avoided if you are vigilant and responsible.
In hyperthermia cases, he believes, the parents are demonized for much the same reasons. "We are vulnerable, but we don't want to be reminded of that. We want to believe that the world is understandable and controllable and unthreatening, that if we follow the rules, we'll be okay. So, when this kind of thing happens to other people, we need to put them in a different category from us. We don't want to resemble them, and the fact that we might is too terrifying to deal with. So, they have to be monsters."
This man left the junkie comment because he is afraid. He is afraid his own children could become drug addicts. He is afraid his children, through no fault of his, through no fault of anyone at all, could die at 30. When presented with real, tangible evidence of the pain and grief a mother feels at the drug related death of her own child, and the reality that it could happen to anyone, it became so overwhelming that it was too much for him to bear.
Those "Sandy Hook Truthers" harass the father of a victim because they are afraid. They are afraid their own children could be viciously gunned down in cold blood any day of the week, bullets tearing their way through the bodies of the teachers standing in front of them, desperately trying to protect them from being murdered. They can't do anything to protect their children from this, and in fact there's nothing any of us can do to protect our children from being murdered at random, at school any day of the week, at the whim of any mentally unstable individual with access to an assault rifle. That's the harsh reality.
When faced with the abyss of pain and grief that parents feel over the loss of their children, due to utter random chance in a world they can't control, they could never control, maybe none of us can ever control, the overwhelming sense of existential dread is simply too much to bear. So they have to be monsters. They must be.
And we will fight these monsters, tooth and nail, raging in our hatred, so we can forget our pain, at least for a while.
After Lyn Balfour's acquittal, this comment appeared on the Charlottesville News Web site:
"If she had too many things on her mind then she should have kept her legs closed and not had any kids. They should lock her in a car during a hot day and see what happens."
I imagine the suffering that these parents are already going through, reading these words that another human being typed to them, just typed, and something breaks inside me. I can't process it. But rather than pitting ourselves against each other out of fear, recognize that the monster who posted this terrible thing is me. It's you. It's all of us.
The weight of seeing through the fear and beyond the monster to simply discover yourself is often too terrible for many people to bear. In a world of heavy things, it's the heaviest there is.
You know what's universally regarded as un-fun by most programmers? Writing assembly language code.
As Steve McConnell said back in 1994:
Programmers working with high-level languages achieve better productivity and quality than those working with lower-level languages. Languages such as C++, Java, Smalltalk, and Visual Basic have been credited with improving productivity, reliability, simplicity, and comprehensibility by factors of 5 to 15 over low-level languages such as assembly and C. You save time when you don't need to have an awards ceremony every time a C statement does what it's supposed to.
Assembly is a language where, for performance reasons, every individual command is communicated in excruciatingly low-level detail directly to the CPU. As we've gone from fast CPUs, to faster CPUs, to multiple absurdly fast CPU cores on the same die, to "gee, we kinda stopped caring about CPU performance altogether five years ago", there hasn't been much need for the kind of hand-tuned performance you get from assembly. Sure, there are the occasional heroics, and they are amazing, but in terms of Getting Stuff Done, assembly has been well off the radar of mainstream programming for probably twenty years now, and for good reason.
So who in their right mind would take up tedious assembly programming today? Yeah, nobody. But wait! What if I told you your Uncle Randy had just died and left behind this mysterious old computer, the TIS-100?
And what if I also told you the only way to figure out what that TIS-100 computer was used for – and what good old Uncle Randy was up to – was to read a (blessedly short 14 page) photocopied reference manual and fix its corrupted boot sequence … using assembly language?
Well now, by God, it's time to learn us some assembly and get to the bottom of this mystery, isn't it? As its creator notes, this is the assembly language programming game you never asked for!
I was surprised to discover my co-founder Robin Ward liked TIS-100 so much that he not only played the game (presumably to completion) but wrote a TIS-100 emulator in C. This is apparently the kind of thing he does for fun, in his free time, when he's not already working full time with us programming Discourse. Programmers gotta … program.
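To get a feel for what a TIS-100 node emulator involves, here's a toy sketch in Python – emphatically not Robin's emulator, and with deliberately simplified, assumed semantics: a single node with an accumulator and backup register, a blocking UP input, and a DOWN output.

```python
# Toy interpreter for a single TIS-100-style node. Real TIS-100 has
# multiple nodes passing values between ports; this sketch simplifies
# to one node reading from UP and writing to DOWN.

def run(program, inputs):
    """Execute `program` repeatedly (nodes loop forever), halting
    when the input stream runs dry."""
    acc, bak = 0, 0
    inp = iter(inputs)
    out = []
    pc = 0
    try:
        while True:
            op, *args = program[pc].split()
            if op == "MOV":
                src, dst = args
                if src == "UP":
                    val = next(inp)       # blocks on input; halts when empty
                elif src == "ACC":
                    val = acc
                else:
                    val = int(src)        # immediate value
                if dst == "ACC":
                    acc = val
                else:                     # "DOWN": emit to the output stream
                    out.append(val)
            elif op == "ADD":
                acc += acc if args[0] == "ACC" else int(args[0])
            elif op == "SUB":
                acc -= int(args[0])
            elif op == "SAV":
                bak = acc
            elif op == "SWP":
                acc, bak = bak, acc
            elif op == "NEG":
                acc = -acc
            pc = (pc + 1) % len(program)  # wrap around, like a real node
    except StopIteration:
        return out

# A node that doubles every value it reads from above:
doubler = ["MOV UP ACC", "ADD ACC", "MOV ACC DOWN"]
print(run(doubler, [1, 2, 3]))  # [2, 4, 6]
```

Even this stripped-down version conveys the game's core tension: every node is tiny, every instruction is primitive, and anything interesting must be composed from almost nothing.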
Of course there's a long history of programming games. What makes TIS-100 unique is the way it fetishizes assembly programming, while most programming games ease you in with general concepts and simpler abstractions. But even "simple" programming games can be quite difficult. Consider one of my favorites on the Apple II, Rocky's Boots, and its sequel, Robot Odyssey. I loved this game, but in true programming fashion it was so difficult that finishing it in any meaningful sense was basically impossible:
Let me say: Any kid who completes this game while still a kid (I know only one, who also is one of the smartest programmers I've ever met) is guaranteed a career as a software engineer. Hell, any adult who can complete this game should go into engineering. Robot Odyssey is the hardest damn "educational" game ever made. It is also a stunning technical achievement, and one of the most innovative games of the Apple IIe era.
Visionary, absurdly difficult games such as this gain cult followings. It is the game I remember most from my childhood. It is the game I love (and despise) the most, because it was the hardest, the most complex, the most challenging. The world it presented was like being exposed to Plato's forms, a secret, nonphysical realm of pure ideas and logic. The challenge of the game – and it was one serious challenge – was to understand that other world. Programmer Thomas Foote had just started college when he picked up the game: "I swore to myself," he told me, "that as God is my witness, I would finish this game before I finished college. I managed to do it, but just barely."
I was happy dinking around with a few robots that did a few things, got stuck, and moved on to other games. I got a little turned off by the way it treated programming as electrical engineering; messing around with a ton of AND, OR, and NOT gates was just not my jam. I was already cutting my teeth on BASIC by that point, and I sensed a level of mastery was necessary here that I probably didn't have and I wasn't sure I even wanted.
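In fairness to Robot Odyssey's approach, gate wiring maps directly onto ideas every programmer eventually learns: gates are just functions, and "wiring" is composition. As a hypothetical sketch (the game wires physical gates on screen, not code), here's XOR built purely from AND, OR, and NOT:

```python
# Robot Odyssey-style logic as plain functions: wiring AND, OR, and
# NOT gates together is just function composition.
AND = lambda a, b: a and b
OR  = lambda a, b: a or b
NOT = lambda a: not a

# XOR built only from the three primitive gates, much as you would
# on the game's circuit board:
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (False, True):
    for b in (False, True):
        print(a, b, XOR(a, b))
```

The game made you do this with on-screen wires and chips, which is exactly why it felt more like electrical engineering than programming.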
I'll take a COBOL code listing over that monstrosity any day of the week. Perhaps Robot Odyssey was so hard because, in the end, it was a bare metal CPU programming simulation, like TIS-100.
A more gentle example of a modern programming game is Tomorrow Corporation's excellent Human Resource Machine.
It has exactly the irreverent sense of humor you'd expect from the studio that built World of Goo and Little Inferno, both excellent and highly recommendable games in their own right. If you've ever wanted to find out if someone is truly interested in programming, recommend this game to them and see. It starts with only 2 instructions and slowly widens to include 11. Corporate drudgery has never been so … er, fun?
I'm thinking about this because I believe there's a strong connection between programming games and being a talented software engineer. It's that essential sense of play, the idea that you're experimenting with this stuff because you enjoy it, and you bend it to your will out of the sheer joy of creation more than anything else. As I once said:
Joel implied that good programmers love programming so much they'd do it for no pay at all. I won't go quite that far, but I will note that the best programmers I've known have all had a lifelong passion for what they do. There's no way a minor economic blip would ever convince them they should do anything else. No way. No how.
I'd rather sit a potential hire in front of Human Resource Machine and time how long it takes them to work through a few levels than have them solve FizzBuzz for me on a whiteboard. Is this interview about demonstrating competency in a certain technical skill that's worth a certain amount of money, or showing me how you can improvise and have fun?
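For reference, here's the whole of the FizzBuzz whiteboard exercise – print the numbers 1 to 100, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for both:

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence from 1 to n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:        # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```

It tells you whether a candidate can write a loop and a conditional. It tells you nothing about whether they enjoy any of this.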
That's why I was so excited when Patrick, Thomas, and Erin founded Starfighter.
If you want to know how competent a programmer is, give them a real-ish simulation of a real-ish system to hack against and experiment with – and see how far they get. In security parlance, this is known as a CTF, as popularized by Defcon. But it's rarely extended to programming, until now. Their first simulation is StockFighter.
Participants are given:
Participants are asked to:
This is a seriously next level hiring strategy, far beyond anything else I've seen out there. It's so next level that to be honest, I got really jealous reading about it, because I've felt for a long time that Stack Overflow should be doing yearly programming game events exactly like this, with special one-time badges obtainable only by completing certain levels on that particular year. Stack Overflow is already a sort of game, but people would go nuts for a yearly programming game event. Absolutely bonkers.
I know we've talked about giving lip service to the idea of hiring the best, but if that's really what you want to do, the best programmers I've ever known have excelled at exactly the situation that Starfighter simulates — live troubleshooting and reverse engineering of an existing system, even to the point of finding rare exploits.
Consider the dedication of this participant who built a complete wireless trading device for StockFighter. Was it necessary? Was it practical? No. It's the programming game we never asked for. But here we are, regardless.
An arbitrary programming game, particularly one that goes to great lengths to simulate a fictional system, is a wonderful expression of the inherent joy in playing and experimenting with code. If I could find them, I'd gladly hire a dozen people just like that any day, and set them loose on our very real programming project.[advertisement] At Stack Overflow, we put developers first. We already help you find answers to your tough coding questions; now let us help you find your next job.
In 2006, after visiting the Computer History Museum's exhibit on Chess, I opined:
We may have reached an inflection point. The problem space of chess is so astonishingly large that incremental increases in hardware speed and algorithms are unlikely to result in meaningful gains from here on out.
So. About that. Turns out I was kinda … totally completely wrong. The number of possible moves, or "problem space", of Chess is indeed astonishingly large, estimated to be 10^50:
Deep Blue was interesting because it forecast a particular kind of future, a future where specialized hardware enabled brute force attack of the enormous chess problem space, as its purpose built chess hardware outperformed general purpose CPUs of the day by many orders of magnitude. How many orders of magnitude? In the heady days of 1997, Deep Blue could evaluate 200 million chess positions per second. And that was enough to defeat Kasparov, the highest ever ranked human player – until 2014 at least. Even though one of its best moves was the result of a bug.
In 2006, about ten years later, according to the Fritz Chess benchmark, my PC could evaluate only 4.5 million chess positions per second.
Today, about twenty years later, that very same benchmark says my PC can evaluate a mere 17.2 million chess positions per second.
Ten years, four times faster. Not bad! Part of that is that I went from dual to quad core, and these chess calculations scale almost linearly with the number of cores. An eight core CPU, no longer particularly exotic, could probably achieve ~28 million on this benchmark today.
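The back-of-the-envelope math behind that estimate looks like this. The ~0.8 efficiency factor is my own guess to account for imperfect core scaling, chosen to be consistent with the ~28 million figure:

```python
# Extrapolating the Fritz Chess benchmark from 4 cores to 8 cores.
quad_core_rate = 17.2e6           # positions/sec measured on a quad core
per_core = quad_core_rate / 4     # rough per-core throughput

# Assume 8 cores at ~80% scaling efficiency (an assumption, not a measurement).
eight_core_estimate = per_core * 8 * 0.8

print(round(eight_core_estimate / 1e6, 1), "million positions/sec")
```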
I am not sure the scaling is exactly linear, but it's fair to say that even now, twenty years later, a modern 8 core CPU is still about an order of magnitude slower at the brute force task of evaluating chess positions than what Deep Blue's specialized chess hardware achieved in 1997.
But here's the thing: none of that speedy brute forcing matters today. Greatly improved chess programs running on mere handheld devices can perform beyond grandmaster level.
In 2009 a chess engine running on slower hardware, a 528 MHz HTC Touch HD mobile phone running Pocket Fritz 4 reached the grandmaster level – it won a category 6 tournament with a performance rating of 2898. Pocket Fritz 4 searches fewer than 20,000 positions per second. This is in contrast to supercomputers such as Deep Blue that searched 200 million positions per second.
As far as chess goes, despite what I so optimistically thought in 2006, it's been game over for humans for quite a few years now. The best computer chess programs, vastly more efficient than Deep Blue, combined with modern CPUs which are now finally within an order of magnitude of what Deep Blue's specialized chess hardware could deliver, play at levels way beyond what humans can achieve.
Chess: ruined forever. Thanks, computers. You jerks.
Despite this resounding defeat, there was still hope for humans in the game of Go. The number of possible moves, or "problem space", of Go is estimated to be 10^170:
Remember that Chess had a mere fifty zeroes there? Go has more possible moves than there are atoms in the universe.
Wrap your face around that one.
Deep Blue was a statement about the inevitability of eventually being able to brute force your way around a difficult problem with the constant wind of Moore's Law at your back. If Chess is the quintessential European game, Go is the quintessential Asian game. Go requires a completely different strategy. Go means wrestling with a problem that is essentially impossible for computers to solve in any traditional way.
A simple material evaluation for chess works well – each type of piece is given a value, and each player receives a score depending on his/her remaining pieces. The player with the higher score is deemed to be 'winning' at that stage of the game.
However, Chess programmers innocently asking Go players for an evaluation function would be met with disbelief! No such simple evaluation exists. Since there is only a single type of piece, only the number of stones each player has on the board could be used for a simple material heuristic, and there is almost no discernible correlation between the number of stones on the board and what the end result of the game will be.
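The simple material evaluation described above can be sketched in a few lines. The piece values are the standard textbook ones; the position representation is my own simplification for illustration:

```python
# Conventional chess piece values (king excluded -- it can't be captured).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(pieces):
    """pieces: list of piece letters remaining for one player."""
    return sum(PIECE_VALUES[p] for p in pieces)

white = ["P"] * 8 + ["N", "N", "B", "B", "R", "R", "Q"]  # full starting material
black = ["P"] * 6 + ["N", "B", "R", "Q"]  # down 2 pawns, a knight, a bishop, a rook

print(material_score(white), material_score(black))  # 39 26
```

In Go, with only one type of stone, no analogous function exists – which is exactly why the problem resisted computers for so long.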
Analysis of a problem this hard, with brute force completely off the table, is colloquially called "AI", though that term is a bit of a stretch to me. I prefer to think of it as building systems that can learn from experience, aka machine learning. Here's a talk which covers DeepMind learning to play classic Atari 2600 videogames. (Jump to the 10 minute mark to see what I mean.)
As impressive as this is – and it truly is – bear in mind that games as simple as Pac-Man still remain far beyond the grasp of DeepMind. But what happens when you point a system like that at the game of Go?
DeepMind built a system, AlphaGo, designed to see how far they could get with those approaches in the game of Go. AlphaGo recently played one of the best Go players in the world, Lee Sedol, and defeated him in a stunning 4-1 display. Being the optimist that I am, I guessed that DeepMind would win one or two games, but a near total rout like this? Incredible. In the space of just 20 years, computers went from barely beating the best humans at Chess, with a problem space of 10^50, to definitively beating the best humans at Go, with a problem space of 10^170. How did this happen?
Well, a few things happened, but one unsung hero in this transformation is the humble video card, or GPU.
Consider this breakdown of the cost of floating point operations over time, measured in dollars per gigaflop:

1961  $8,300,000,000
1984  $42,780,000
1997  $42,000
2000  $1,300
2003  $100
2007  $52
2011  $1.80
2012  $0.73
2013  $0.22
2015  $0.08
What's not clear in this table is that after 2007, all the big advances in FLOPS came from gaming video cards designed for high speed real time 3D rendering, and as an incredibly beneficial side effect, they also turn out to be crazily fast at machine learning tasks.
The Google Brain project had just achieved amazing results – it learned to recognize cats and people by watching movies on YouTube. But it required 2,000 CPUs in servers powered and cooled in one of Google's giant data centers. Few have computers of this scale. Enter NVIDIA and the GPU. Bryan Catanzaro in NVIDIA Research teamed with Andrew Ng's team at Stanford to use GPUs for deep learning. As it turned out, 12 NVIDIA GPUs could deliver the deep-learning performance of 2,000 CPUs.
Let's consider a related case of highly parallel computation. How much faster is a GPU at password hashing?

Radeon 7970     8213.6 M c/s
6-core AMD CPU    52.9 M c/s
Only 155 times faster right out of the gate. No big deal. On top of that, CPU performance has largely stalled in the last decade. While more and more cores are placed on each die, which is great when the problems are parallelizable – as they definitely are in this case – the actual performance improvement of any individual core over the last 5 to 10 years is rather modest.
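That "155 times" figure comes straight from the two rates above:

```python
# GPU vs CPU password hashing rates from the table above.
gpu_rate = 8213.6e6   # hashes per second, Radeon 7970
cpu_rate = 52.9e6     # hashes per second, 6-core AMD CPU

speedup = gpu_rate / cpu_rate
print(round(speedup))  # 155
```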
But GPUs are still doubling in performance every few years. Consider password hash cracking expressed in the rate of hashes per second:

GTX 295     2009   25k
GTX 690     2012   54k
GTX 780 Ti  2013  100k
GTX 980 Ti  2015  240k
The latter video card is the one in my machine right now. It's likely the next major revision from Nvidia, due later this year, will double these rates again.
(While I'm at it, I'd like to emphasize how much it sucks to be an 8 character password in today's world. If your password is only 8 characters, that's perilously close to no password at all. That's also why your password is (probably) too damn short. In fact, we just raised the minimum allowed password length on Discourse to 10 characters, because annoying password complexity rules are much less effective in reality than simply requiring longer passwords.)
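To see why two extra characters matter so much, note that the keyspace grows exponentially with length. Assuming a 62-character alphabet (a-z, A-Z, 0-9) purely for illustration:

```python
# Keyspace (number of possible passwords) for a given length,
# assuming 62 possible characters per position.
ALPHABET = 62

def keyspace(length):
    return ALPHABET ** length

# Going from 8 to 10 characters multiplies the attacker's work by 62^2.
print(keyspace(10) // keyspace(8))  # 3844
```

Every character you add multiplies the cracking time by the alphabet size; two characters buys you a factor of nearly four thousand.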
Distributed AlphaGo used 1202 CPUs and 176 GPUs. While that doesn't sound like much, consider that as we've seen, each GPU can be up to 150 times faster at processing these kinds of highly parallel datasets — so those 176 GPUs were the equivalent of adding ~26,400 CPUs to the task. Or more!
Even if you don't care about video games, they happen to have a profound accidental impact on machine learning improvements. Every time you see a new video card release, don't think "slightly nicer looking games", think "wow, hash cracking and AI just got 2× faster … again!"
I'm certainly not making the same mistake I did when looking at Chess in 2006. (And in my defense, I totally did not see the era of GPUs as essential machine learning aids coming, even though I am a gamer.) If AlphaGo was intimidating today, having soundly beaten the best human Go player in the world, it'll be no contest after a few more years of GPUs doubling and redoubling their speeds again.
AlphaGo, broadly speaking, is the culmination of two very important trends in computing:
Huge increases in parallel processing power driven by consumer GPUs and videogames, which started in 2007. So if you're a gamer, congratulations! You're part of the problem-slash-solution.
We're beginning to build sophisticated (and combined) algorithmic approaches for entirely new problem spaces that are far too vast to even begin being solved by brute force methods alone. And these approaches clearly work, insofar as they mastered one of the hardest games in the world, one that many thought humans would never be defeated in.
Great. Another game ruined forever by computers. Jerks.
Based on our experience with Chess, and now Go, we know that computers will continue to beat us at virtually every game we play, in the same way that dolphins will always swim faster than we do. But what if that very same human mind was capable of not only building the dolphin, but continually refining it until they arrived at the world's fastest minnow? Where Deep Blue was the more or less inevitable end result of brute force computation, AlphaGo is the beginning of a whole new era of sophisticated problem solving against far more enormous problems. AlphaGo's victory is not a defeat of the human mind, but its greatest triumph.
(If you'd like to learn more about the powerful intersection of sophisticated machine learning algorithms and your GPU, read this excellent summary of AlphaGo and then download the DeepMind Atari learner and try it yourself.)[advertisement] Find a better job the Stack Overflow way - what you need when you need it, no spam, and no scams.