This is the second installment of a case study on my first AI project. I plan to share what we are working on, what is going well, what is sucking at the moment – everything – as it happens.
Our team inception story involves rainbows and kittens. Because, of course… In August 2016, I presented at Centric’s TED-inspired tech conference, Camp I/O, on the data and technology behind talent acquisition. Recruiting is often seen as a “soft” part of the business.
But to many people’s surprise, we do leverage some pretty hard methods to attract and vet candidates. And because I am a sucker for corny taglines and obnoxious imagery, I titled the presentation “Recruiting Isn’t Just Rainbows and Kittens.”
During the presentation, I shared some of the data analysis techniques we employed for hiring, as well as the third-party software we used to increase diversity hiring. The latter, Textio, uses machine learning to identify which job posting terminology attracts (or repels) female candidates. Because we had great success with the software, I became intrigued by the intersection of machine learning and talent management.
So, for the finale of my presentation, I presented a vague concept that had been rattling around in my head – could we use machine learning to staff better?
I was completely surprised, and humbled, to later learn that my colleagues had voted “Rainbows and Kittens” the second best presentation of the day. Somewhat ironically, I won the ubiquitous AI wundertoy, an Amazon Echo.
More importantly, however, I was able to meet some other AI-obsessed folks from within our company as a result of the presentation. They were in different places on their personal machine learning journeys – from “Kaggle regulars” to “yet to write a single line of Python.” But they all had the enthusiasm to learn, and the gullibility to explore my hare-brained idea. So I signed them up on the spot.
And thus, thanks in part to rainbows and kittens, the Smart Staffing project team was formed.
Next Up: Building the Prototype
I promise to delve into our technology choices, offshoring, project management, and other fun stuff in future installments. For now, I will leave you a brain dump of the lunacy swirling in my head:
- Good Thing #1: We had been in a scrappy innovation phase since August 2016. At the beginning of April, we became an official project, charge code and all. It is exciting to see the vague concept presented at Camp I/O finally become something sort of real.
- Good Thing #2: We had a solid conversation with the Centric India team. I will explain another day why we chose to offshore some of the project, but it was certainly not a spur-of-the-moment thing. I feel more confident about that decision after our meeting.
What’s keeping me up at night
There are still some things to iron out about the platform and data interfaces. Our prototype was not developed on Azure, but for multiple reasons we are using it for the real deal, so we have some details to figure out. We are planning a conversation with the Microsoft gurus this week; hopefully that will bring some clarity.
What’s making me feel smarter
A few of my colleagues crowd-shamed me into taking Andrew Ng’s Machine Learning class on Coursera. They hooked me with, “It’s only math.” True to their word, there was a lot of math. Thankfully, a hundred years ago I spent a decent amount of time studying linear algebra and data modeling, and that came back pretty easily. As they say, just like riding a bike.
What they forgot to mention was that there was also coding homework. Writing code did not come back so easily. That, unfortunately, was more like the first time riding a bike after taking off the training wheels. Crash and burn. Repeatedly. But I muddled through and actually passed the class. Now my eyes don’t glaze over when the team throws out terms like “logistic regression” and “backpropagation.”
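For anyone who has not taken the class, logistic regression is less scary than it sounds: fit a weighted sum of your inputs, squash it through a sigmoid to get a probability, and nudge the weights downhill on the error. Here is a minimal sketch in Python with NumPy (the toy data and numbers are mine for illustration, not from the course, which uses Octave/MATLAB):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data (made up): hours studied -> pass (1) / fail (0)
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# Prepend an intercept column of ones
Xb = np.hstack([np.ones((len(X), 1)), X])
theta = np.zeros(Xb.shape[1])

# Batch gradient descent on the logistic (cross-entropy) loss
alpha = 0.5
for _ in range(5000):
    h = sigmoid(Xb @ theta)           # predicted probabilities
    grad = Xb.T @ (h - y) / len(y)    # gradient of the loss
    theta -= alpha * grad

# A student who studied 5 hours should come out above 0.5
print(sigmoid(np.array([1.0, 5.0]) @ theta))
```

That is the whole trick; the neural networks later in the course mostly stack layers of the same sigmoid-and-weights idea, with backpropagation doing the gradient bookkeeping.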
What’s making me feel dumb
I can’t pronounce “Azure,” and I feel like an idiot every time I slaughter the pronunciation. It is some sort of weird speech tic. I also cannot pronounce many words ending in –ton, like Dayton.
Thankfully, there are not that many important words that end with “ton,” so over the years it has been easy to find pronounceable replacements or just avoid them altogether. Seriously, how often does one visit Dayton anyway? “Azure” is another story. Since Azure is an important part of this project, I really do not think I can get away with saying the tongue-twister “Microsoft’s cloud platform that we are using” in lieu of the actual word.
So, right now I have to concentrate extra hard to get the word out right. Who knew that speaking would be the hardest part of machine learning?
Come back next month for more. My hope is by sharing our project’s small victories and painful bruises, you will be encouraged to tackle a project that scares the sh?! out of you too.