In part 1 of a series, I chronicle the struggles and successes of a real-time case study on Artificial Intelligence. Join my journey.
The thing I am most excited about right now professionally is also the one thing that is scaring the sh?! out of me. I am embarking on my first Artificial Intelligence project.
More about that in a second. But first, let us segue to why I hate case studies. Post-mortem case studies are dry, sterile and devoid of emotion. And most importantly, they leave me with a massive inferiority complex. How did these people do these huge, gnarly projects and saunter away afterward completely unscathed? Their tidy bullet lists of “Positive Outcomes” and “Lessons Learned” make me want to hurl. And cry.
My projects always seem to involve blood, sweat, and tears. Why don’t their projects have sleepless nights, scope creep battles, and painful screw-ups? Am I just a crappy project manager?
Undoubtedly, their projects had all of those hiccups. All projects do. Glossy, sanitized case studies rarely tell the true story.
So, as a public service to all my fellow insecure project managers out there, I am going to do a real-time case study of my first AI project. I plan to share what we are working on, what is going well, what is sucking at the moment – everything – as it happens.
My hope is by sharing my project’s small victories and painful bruises, you will be encouraged to tackle a project that scares the sh?! out of you too. Insecure people unite!
So, about the project…
A little over a year ago, Centric launched, for lack of better terminology, an innovation incubator. Any employee, no matter where they live on the food chain, could conceive and share product and process improvement ideas.
Like-minded people would come together around those ideas and develop a Minimum Viable Product. Once the MVP was in hand, the team could Shark Tank the idea in front of leadership.
If approved, the project would be funded, official project code and all.
Here was my initial idea concept:
In a professional services organization, the vast majority of expenses are tied to non-billable staff hours (“bench time”). By reducing bench time, companies can increase profitability.
Therefore, it is preferable to staff new projects with benched resources versus hiring new employees. Additionally, there is an associated hiring cost for a new employee (in the consulting industry, this cost averages $4300/hire).
The traditional staffing scenario relies heavily on what we will call "Boolean" decision making. A Resource Manager will match the particulars of an upcoming project (such as start date, bill rate, and location) against a database of available resources. The search query can be run by a computer or, just as often, informally in the Resource Manager's head.
Either way, a typical search would be something like this:
List employees that are available for a new project June 1, have a cost rate of less than $50 per hour and are classified as an Organizational Change Management Senior Consultant.
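To make the "Boolean" label concrete, here is a toy sketch of that query in Python. The records, field names, and values are all made up for illustration; the point is the all-or-nothing AND logic:

```python
from datetime import date

# Hypothetical resource records -- field names and people are illustrative only.
resources = [
    {"name": "A. Patel", "available": date(2019, 6, 1), "cost_rate": 45,
     "title": "Organizational Change Management Senior Consultant"},
    {"name": "B. Jones", "available": date(2019, 5, 15), "cost_rate": 62,
     "title": "Java Developer"},
]

def boolean_match(resources, start, max_rate, title):
    """All-or-nothing AND filter: a resource either meets every criterion or is excluded."""
    return [r for r in resources
            if r["available"] <= start
            and r["cost_rate"] < max_rate
            and r["title"] == title]

matches = boolean_match(resources, date(2019, 6, 1), 50,
                        "Organizational Change Management Senior Consultant")
```

Notice there is no notion of "close enough" here. A candidate at $51 per hour, or one free on June 3, simply vanishes from the results.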
There are several challenges with this model:
- Scalability – As an organization grows, it is difficult to have deep knowledge of all team members’ skills and preferences. This is particularly difficult to do if staffing is managed via spreadsheets. However, even if a database is employed, a search query for “employees with Java experience” could yield hundreds, if not thousands, of results. There is just too much information for a human to synthesize in a thoughtful manner.
- Adjustable Parameters – Rarely are the particulars of a project (such as start date) rigid. Parameters adjust regularly based on client and staffing conditions. Staffing is much more gray than it is black and white.
- Bias – Despite our best intentions, our human nature causes us to make biased decisions. In staffing, that may mean a tendency to gravitate towards resources with whom we are already familiar instead of a more qualified stranger.
- Data Scarcity – As the mantra goes, garbage in, garbage out. Manually updating skills databases is a difficult, and never-ending, task. Thus, data is often incomplete or even missing. Without good data, decision making is less effective.
Organizations like Netflix, Amazon, and Spotify use recommendation engines to curate content for an individual user. By providing personalized recommendations, these sites increase customer satisfaction, which ultimately drives revenue. In the simplest terms, they observe, record, and learn user behavior, which allows them to continually tune their recommendation algorithms. These algorithms, obviously, are much more complex than a simple AND-OR statement.
It can thus be inferred that these companies feel that the recommendation process is superior to basic Boolean matching. So why is the staffing of people, which can be even more nuanced than product or content, still relying on traditional matching methodology? Can we become smarter at professional services staffing by using some of the same strategies, such as machine learning, that e-commerce companies incorporate?
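To show the difference in spirit (this is emphatically not our algorithm — we haven't built anything yet — just a toy sketch with made-up weights), a recommender *scores* every candidate instead of filtering them out:

```python
# Toy contrast to the Boolean filter: rather than excluding a resource outright,
# score each one on how closely it fits. Weights and fields are invented for
# illustration; a real system would learn them from data.
def fit_score(resource, target):
    score = 0.0
    score += 0.4 * (1.0 if resource["skill"] == target["skill"] else 0.0)
    # Softer criteria: a higher rate or a later start reduces the score
    # instead of eliminating the candidate.
    score += 0.3 * max(0.0, 1.0 - abs(resource["rate"] - target["rate"]) / target["rate"])
    score += 0.3 * max(0.0, 1.0 - resource["days_late"] / 30)
    return score

candidates = [
    {"name": "A", "skill": "OCM", "rate": 45, "days_late": 0},
    {"name": "B", "skill": "OCM", "rate": 55, "days_late": 5},
    {"name": "C", "skill": "Java", "rate": 40, "days_late": 0},
]
target = {"skill": "OCM", "rate": 50}

ranked = sorted(candidates, key=lambda r: fit_score(r, target), reverse=True)
```

Candidate B is over budget and starts late, yet still ranks second rather than disappearing — the kind of gray-area judgment a Resource Manager makes in their head, made explicit.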
OK, that write-up makes me sound way more intelligent than I really am. (Insecurity Alert!) What the concept boils down to is: Can we use machine learning to better predict the right person for consulting assignments?
Getting Funded and Getting Started
Long story short, we got funded and are ready to officially kick off the project. I am borderline obsessed with AI right now and so excited to be working on a machine learning project. But the project also involves a new-to-me technology, a dual-shore team and a healthy dose of ambiguity. So that is scaring the sh?! out of me.
In the next episode of this real-time case study, I will bring you up to speed on how I conned some people smarter than me (Insecurity Alert!) into coming along for the ride, and on our scrappy first flails at seeing this concept come to life.
From there, I promise regular updates on the ups and downs. Will you join me on this exciting/scary journey?