Three entry points for big data initiatives

The TDWI Big Data Maturity Model and Assessment is set to launch November 20th.  Krish Krishnan and I have been working on this for a while, and we're very excited about it.  There are two parts to the Big Data Maturity Model and Assessment tool. The first is the TDWI Big Data Maturity Model Guide, which walks you through the stages of maturity for big data initiatives and provides examples and characteristics of companies at each stage. In each stage, we look across the dimensions that are necessary for maturity: organizational issues, infrastructure, data management, analytics, and governance.

The second piece is the assessment tool. It asks respondents a series of about 75 questions across the organization, infrastructure, data management, analytics, and governance dimensions. Once complete, the respondent receives a score in each dimension, along with expectations and best practices for moving forward. A unique feature of the assessment is that respondents can see how their scores compare against their peers, by both industry and company size.
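To make the mechanics concrete, here is a minimal sketch of how that kind of dimension scoring and peer comparison might work. The five dimensions come from the model, but the 1-to-5 answer scale, the sample answers, and the peer averages are illustrative assumptions, not the actual TDWI scoring logic.

```python
# Minimal sketch of dimension scoring and peer comparison for a maturity
# assessment. The dimensions come from the post; the answer scale, sample
# answers, and peer averages are hypothetical.
from statistics import mean

# The five dimensions named in the post.
DIMENSIONS = ["organization", "infrastructure", "data management",
              "analytics", "governance"]

def score_dimensions(answers):
    """Average the 1-5 answers within each dimension."""
    return {dim: round(mean(answers[dim]), 2) for dim in DIMENSIONS}

def compare_to_peers(scores, peer_averages):
    """Positive values mean the respondent is ahead of the peer group."""
    return {dim: round(scores[dim] - peer_averages[dim], 2) for dim in DIMENSIONS}

if __name__ == "__main__":
    # Hypothetical respondent answers (1 = nascent, 5 = mature).
    answers = {
        "organization":    [3, 2, 3],
        "infrastructure":  [4, 4, 3],
        "data management": [2, 3, 2],
        "analytics":       [3, 3, 4],
        "governance":      [1, 2, 2],
    }
    # Hypothetical peer averages for the respondent's industry and company size.
    peers = {"organization": 2.8, "infrastructure": 3.1, "data management": 2.5,
             "analytics": 3.0, "governance": 2.2}

    scores = score_dimensions(answers)
    print("Scores:", scores)
    print("Versus peers:", compare_to_peers(scores, peers))
```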

We urge you to take the assessment and see where you land relative to your peers in your big data efforts.  It's also important to note that we view this assessment as evolutionary: we know that many companies are in the early stages of their big data journey, so you can come back and take the assessment more than once. In addition, we will be adding best practices as we learn more about what companies are doing to succeed in their big data efforts.

In the course of our research for the model, Krish and I spoke to numerous companies embarking on big data initiatives.  A number of patterns emerged regarding how companies get started.  Here are a few of them:

  1. Large volumes of structured data are already being analyzed in the company.  Some companies have amassed large volumes (i.e., terabytes) of structured data that they store in a data warehouse or in some sort of appliance, often on-premises.  They feel that their BI infrastructure is pretty solid.  Typically, the BI effort is departmental in scope.  Some of these companies are already performing more advanced kinds of analysis, such as predictive analytics, on the data, often to understand their customers.  The vision for big data is about augmenting the data they have with other forms of data (often text or geospatial data) to gain more insight.
  2. A specific need for big data. Some companies start a big data effort, almost from scratch, because of a specific business need.  For instance, a wireless provider might be interested in monitoring the network and predicting where failures will occur.  An insurance company might be interested in telematics data in order to determine pricing for certain kinds of drivers.  A marketing department might be interested in analyzing social media data to determine brand reputation or as part of a marketing campaign. Typically these efforts are departmental in scope and are not part of a wider enterprise big data ecosystem.
  3. Building the business on big data.  We spoke to many e-businesses that were building their business models on big data.  While these companies might be somewhat advanced in terms of infrastructure to support big data, they were often still working on the analytics related to the service and typically did not have any form of governance in place.

Big Data’s Future/Big Data’s Past

I just listened to an interesting IBM Google+ hangout about big data called Visions of Big Data's Future.  You can watch it here.  There were some great experts on the line, including James Kobielus (IBM), Thomas Deutsch (IBM), and Edd Dumbill (Silicon Valley Data Science).

The moderator, David Pittman, asked a fantastic question: "What's taking longer than you expect in big data?"  It brought me back to 1992 (OK, I'm dating myself), when I used to work at AT&T Bell Laboratories.  At that time, I was working in what might today be called an analytics Center of Excellence.  The group was composed of all kinds of quantitative scientists (economists, statisticians, physicists) as well as computer scientists and other IT-oriented people. I think the group was called something like the Marketing Models, Systems, and Analysis department.

I had been working with members of Bell Labs Research to take some of the machine learning algorithms they were developing and apply them to our marketing data for analytics like churn analysis.  At that time, I proposed the formation of a group that would consist of market analysts and developers working together with researchers and some computer scientists.  The idea was to provide continuous innovation around analysis.  I found the proposal today (I'm still sneezing from the dust).  Here is a sentence from it:

[Image: "big data" excerpt from the 1992 proposal]

Managing and analyzing large amounts of data?  At that point we were even thinking about call detail records.  The proposal goes on to say, "Specifically, the group will utilize two software technologies that will help to extract knowledge from databases: data mining and data archeology."  The data archeology piece referred to:

[Image: data discovery excerpt from the 1992 proposal]

This exploration of the data is similar to what is termed discovery today.  Here's a link to the paper that came out of this work.  Interestingly, around this time I also remember going to talk to some people who were developing NLP algorithms for analyzing text.  I remember thinking that the "why" behind customers churning could be found in those call center notes.

I thought about this when I heard the moderator's question, not because the group I was proposing would have been ahead of its time (let's face it, AT&T was way ahead of its time with its analytics Center of Excellence in the first place), but because it has taken so long to get from there to here, and we're not even here or there yet.

Five Trends in Predictive Analytics

Predictive analytics, a technology that has been around for decades, has gotten a lot of attention over the past few years, and for good reason.  Companies understand that looking in the rear-view mirror is not enough to remain competitive in the current economy.  Today, adoption of predictive analytics is increasing for a number of reasons, including a better understanding of the value of the technology, the availability of compute power, and the expanding toolset to make it happen. In fact, in a recent TDWI survey at our Chicago World Conference earlier this month, more than 50% of the respondents said that they planned to use predictive analytics in their organizations over the next three years. The techniques for predictive analytics are being used on traditional data sets as well as on big data.

Here are five trends that I’m seeing in predictive analytics:

  • Ease of use.  Whereas in the past, statisticians used some sort of scripting language to build a predictive model, vendors are now making their software easier to use.  This includes hiding the complexity of the model building process and the data preparation process behind the user interface (a minimal sketch of the steps being abstracted away appears after this list).  This is not an entirely new trend, but it is worth mentioning because it opens up predictive analytics to a wider audience, such as marketing.  For example, vendors such as Pitney Bowes, Pegasystems, and KXEN provide solutions targeted to marketing professionals with ease of use as a primary feature.  The caveat here, of course, is that marketers still need the skills and judgment to make sure the software is used properly.
  • For more trends: http://tdwi.org/blogs/fern-halper/list/ferns-blog.aspx
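To give a sense of what that UI-driven ease of use is abstracting away, here is a minimal sketch of the data preparation and model-building steps behind a simple predictive model. It uses scikit-learn with synthetic, hypothetical customer features and a made-up churn flag; it illustrates the general workflow, not any particular vendor's product.

```python
# A minimal sketch of the data preparation and model-building steps that
# "ease of use" tools hide behind a UI. All data here is synthetic and
# hypothetical; this is illustrative, not any vendor's workflow.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical customer features: monthly spend, support calls, tenure (months).
X = np.column_stack([
    rng.normal(60, 20, 1000),   # monthly spend
    rng.poisson(2, 1000),       # support calls
    rng.integers(1, 60, 1000),  # tenure in months
])
# Hypothetical churn flag, loosely tied to many support calls and short tenure.
y = ((X[:, 1] > 3) & (X[:, 2] < 24)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Data preparation (scaling) and model building chained into one pipeline.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
model.fit(X_train, y_train)
print("Holdout accuracy:", round(model.score(X_test, y_test), 3))
```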

Closing the loop in customer experience management: When it doesn’t work

Last week I had the unfortunate experience of trying to deal with American Airlines regarding some travel arrangements via its AAdvantage help desk.  I literally spent hours on the phone trying to get to the right person.  I won't bore you with the details of my experience; however, I did want to talk about how American used social media analytics in an active way, and where it came up short.

By now, many people are aware that companies are not only using social media analytics to understand what is being said about their brand; they are also using it to actively engage with a customer when there is a problem.  This typically involves some sort of automatic classification of the problem, automatic routing to the right person, and suggested responses to the customer.
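Here is a minimal sketch of that classify/route/suggest loop. The keyword rules, team names, and response templates are invented for illustration; real systems rely on trained text classifiers and CRM lookups rather than hard-coded keywords.

```python
# Minimal sketch of the classify / route / suggest loop described above.
# Keyword rules, team names, and templates are hypothetical.

ROUTING = {
    "refund":    ("billing team",
                  "We're sorry about the charge. Can you DM us your record locator?"),
    "delay":     ("operations team",
                  "Apologies for the delay. DM us your flight number and we'll check status."),
    "complaint": ("customer relations",
                  "We're sorry to hear that. Please DM your contact details so we can follow up."),
}

KEYWORDS = {
    "refund":    ["credit", "refund", "charge"],
    "delay":     ["delayed", "cancelled", "late"],
    "complaint": ["hours on the phone", "terrible", "worst"],
}

def classify(tweet: str) -> str:
    """Assign the tweet to the first category whose keywords it contains."""
    text = tweet.lower()
    for label, words in KEYWORDS.items():
        if any(w in text for w in words):
            return label
    return "complaint"  # default bucket

def handle(tweet: str) -> tuple[str, str]:
    """Return (team to route to, suggested response)."""
    return ROUTING[classify(tweet)]

if __name__ == "__main__":
    team, reply = handle("Spent hours on the phone and still no credit from the airline")
    print("Route to:", team)
    print("Suggested reply:", reply)
```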

The good news was that when I tweeted about American Airlines, I actually got a response back from them.  Here's my first tweet and the response:

[Image: first Twitter exchange with American Airlines]

So far, not bad.  Here’s the next round of tweet/response:

[Image: Twitter exchange with American Airlines, round 2]

Well, this was not what I wanted to hear, since it only partially addressed my issue.  If I just wanted an apology, I would not have bothered to tweet about a credit.  I would have preferred a follow-up email (if they had a way to link my information together) or at least contact information where I could get more help.  American Airlines wasn't helping me; they were whining.

So then I tweeted the following:

[Image: Twitter exchange with American Airlines, round 3]

I gave up after this response. Frankly, it almost sounded sarcastic.  Should I have said, "Not on Twitter, send me an email contact"?  I'm sending a letter to Craig Kreeger instead, explaining my dissatisfaction. Maybe I'll send it snail mail…

My point is that if you're going to engage your customers online via the channel that they used in the first place, make it count.  This exchange simply annoyed me.  Maybe Twitter wasn't the best channel for customer service, but it is the one that I used, since no one was answering the phone and the American site wouldn't let me perform the function I wanted.  I'm not saying it's easy to engage adequately via Twitter.  To do this properly would have involved more finely tuned text analytics to understand what I was actually talking about, as well as a way to integrate all of my data together to understand me as a customer (i.e., my loyalty information, recent trips, etc.).  Maybe the customer service reps were tired after last month's outage debacle at American, when thousands of passengers were stranded.
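As a rough illustration of that second piece, here is a minimal sketch of linking a social handle to a customer profile so a response can reflect loyalty status and recent trips. The handle, profile fields, and values are all hypothetical.

```python
# Minimal sketch of linking a social handle to a customer profile so an
# agent's response can reflect loyalty status and recent trips.
# The handle, profile fields, and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    loyalty_tier: str
    recent_trips: list[str] = field(default_factory=list)
    open_cases: int = 0

# Hypothetical CRM keyed by social handle (in practice this link has to be
# established through opt-in or account matching).
CRM = {
    "@frequent_flyer": CustomerProfile(loyalty_tier="Gold",
                                       recent_trips=["BOS-ORD", "ORD-BOS"],
                                       open_cases=1),
}

def context_for(handle: str) -> str:
    """Summarize what the agent should know before replying."""
    profile = CRM.get(handle)
    if profile is None:
        return "No linked account; ask the customer to DM their details."
    return (f"{profile.loyalty_tier} member, {len(profile.recent_trips)} recent trips, "
            f"{profile.open_cases} open case(s); route to a senior agent.")

print(context_for("@frequent_flyer"))
```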

Two Big Data Resources Worth Exploring

It's a good day.  Our new book, Big Data for Dummies, is being released today, and I'm busy working on a Big Data Analytics maturity model at TDWI with Krish Krishnan.  Krish, a faculty member at TDWI, is presenting some of the model at the TDWI World Conference: Big Data Tipping Point, taking place during the first week of May (see sidebar).  I would encourage you to attend, even if you aren't that far along in your big data deployments.  TDWI has terrific courses in all aspects of information management, and we understand that most companies will need to leverage their existing infrastructure to support big data initiatives.  In fact, the title of this World Conference is "Preparing for the Practical Realities of Big Data."  Check it out.

Back to the book.  Here’s a look at the Introduction!  Enjoy!

 

Two Weeks and Counting to Big Data for Dummies

I am excited to announce that I'm a co-author of Big Data for Dummies, which will be released in mid-April 2013.  Here's the synopsis from Wiley:

Find the right big data solution for your business or organization

Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you’ll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You’ll learn what it is, why it matters, and how to choose and implement solutions that work.

  • Effectively managing big data is an issue of growing importance to businesses, not-for-profit organizations, government, and IT professionals
  • Authors are experts in information management, big data, and a variety of solutions
  • Explains big data in detail and discusses how to select and implement a solution, security concerns to consider, data storage and presentation issues, analytics, and much more
  • Provides essential information in a no-nonsense, easy-to-understand style that is empowering

 

Big Data For Dummies cuts through the confusion and helps you take charge of big data solutions for your organization.

Five Challenges for Text Analytics

While text analytics is considered a "must have" technology by the majority of companies that use it, challenges abound.  So I've learned from the many companies I've talked to as I prepare Hurwitz & Associates' Victory Index for Text Analytics, a tool that assesses not just the technical capability of the technology but also its ability to provide tangible value to the business (look for the results of the Victory Index in about a month). Here are the top five: http://bit.ly/Tuk8DB.  Interestingly, most of them have nothing to do with the technology itself.
