Four ways to illustrate the value of predictive analytics

My new (and first!) TDWI Best Practices Report was published a few weeks ago. It is called Predictive Analytics for Business Advantage. In it, I use the results from an online survey together with some qualitative interviews to discuss the state of predictive analytics, where it is going, and some best practices to get there. You can find the report here. The Webinar on the topic can be found here.

There were many great questions during the Webinar and I’m sorry I didn’t get to answer them all. Interestingly, many of the questions were not about the technology; rather, they were about how to convince the organization (and the senior executives) of the value of predictive analytics. This jibes with what I saw in my research. For instance, “lack of understanding of predictive analytics” was cited as a key challenge for the discipline. Additionally, when we asked the question, “Where would you like to see improvements in your predictive analytics deployment?”, 70% of all respondents answered “education.” It’s not just about education regarding the technology. As one respondent said, “There is a lack of understanding of the business potential” for predictive analytics, as well.

Some of the questions from the audience during the Webinar echoed this sentiment. For instance, people asked, “How do I convince senior execs to utilize predictive analytics?” and “What’s the simple way to drive predictive analytics to senior executives?” and “How do we get key leaders to sponsor predictive analytics?”

There is really no silver bullet, but here are some ways to get started:

  • Cite research: One way is to point to published studies that quantify the value. For instance, in the Best Practices Report, 45% of the respondents who were currently using predictive analytics actually measured top- or bottom-line impact or both (see Figure 7 in the report). That’s pretty impressive. There are other studies out there as well. For instance, academic studies (e.g., Brynjolfsson et al., 2011) point to the relationship between using data to make decisions and improved corporate performance. Industry studies by companies such as IBM suggest the same. Vendors also publish case studies, typically by industry, that highlight the value from certain technologies. These can all be useful fodder.
  • Do a proof of concept: However, these studies can’t really stand alone. Many of the end users I spoke to about predictive analytics pointed to doing some sort of proof of concept or proof of value project. These are generally small-scale projects with high business impact. The key is that there is a way to evaluate the impact of the project so you can show measurable results to your organization. As one respondent put it, “Limit what you do but make sure it has an impact.” Think through those metrics as you’re planning the proof of concept. Additionally, someone in the organization is going to have to become the communicator/evangelist who gets people excited rather than fearful of the technology. One person told me that he made appointments with executives to talk to them about predictive analytics and show them what it could do.
  • BI foundation: Typically, organizations that are doing predictive analytics have some sort of solid BI infrastructure in place. They can build on that.  For instance, one end user told me about how he built out trust and relationships by first establishing a solid BI foundation  and making people comfortable with that and then introducing predictive analytics. Additionally, success breeds success. I’ve seen this countless times with various “new” technologies. Once one part of the organization sees something that works, they want it too. It grows from there. 
  • Grow it by acting on it: As one survey respondent put it, “Analytics is not a magic pill if the business process is not set up.” That means in order to grow and sustain an analytics effort, you need to be able to act on the analytics. Analytics in a vacuum doesn’t get you anywhere. So, another way to show value is to make it part of a business process. That means getting a number of people in the organization involved too.

The bottom line is that it is a rare company that introduces predictive analytics and, behold, succeeds quickly right out of the gate. Are there examples? Sure. Is it the norm? Not really. Is predictive analytics still worth doing? Absolutely!

Do you have any suggestions about how to get executives and other members of your organization to value predictive analytics? Please let me know. And please visit the TDWI site for more information on predictive analytics and to download the report.

(Note: This blog post first appeared on my TDWI blog.)

Three entry points for big data initiatives

The TDWI Big Data Maturity Model and Assessment is set to launch November 20th. Krish Krishnan and I have been working on this for a while, and we’re very excited about it. There are two parts to the Big Data Maturity Model and Assessment tool. The first is the TDWI Big Data Maturity Model Guide, which walks you through the stages of maturity for big data initiatives and provides examples and characteristics of companies at different stages of maturity. In each of these stages, we look across various dimensions that are necessary for maturity. These include organizational issues, infrastructure, data management, analytics, and governance.

The second piece is the assessment tool. The tool allows respondents to answer a series of about 75 questions across the organization, infrastructure, data management, analytics, and governance dimensions. Once complete, the respondent receives a score in each dimension as well as some expectations and best practices for moving forward. A unique feature of the assessment is that respondents can see how their scores compare against their peers’, by both industry and company size.
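To make the dimension-scoring idea concrete, here is a minimal sketch of how per-dimension scores might be computed. The dimension names mirror the model, but the questions, the 1–5 answer scale, and the sample responses are purely illustrative; this is not the actual TDWI assessment logic.

```python
# Hypothetical sketch of dimension scoring; not the actual TDWI assessment.
from statistics import mean

DIMENSIONS = ["organization", "infrastructure", "data management",
              "analytics", "governance"]

def score_assessment(responses):
    """responses: dict mapping each dimension to a list of answers on a 1-5 scale."""
    return {dim: round(mean(responses.get(dim, [0])), 2) for dim in DIMENSIONS}

# Example: a respondent strong on infrastructure, weaker on governance.
sample = {
    "organization":    [3, 4, 3, 2],
    "infrastructure":  [4, 5, 4, 4],
    "data management": [3, 3, 2, 3],
    "analytics":       [2, 3, 3, 2],
    "governance":      [1, 2, 1, 2],
}
print(score_assessment(sample))
```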

We urge you to take the assessment and see where you land relative to your peers regarding your big data efforts. It’s also important to note that we view this assessment as evolutionary. We know that many companies are in the early stages of their big data journey, so you can come back and take the assessment more than once. In addition, we will be adding best practices as we learn more about what companies are doing to succeed in their big data efforts.

In the course of our research for the model, Krish and I spoke to numerous companies embarking on big data.  There were a number of patterns that emerged regarding how companies get started in their big data efforts.   Here are a few of them:

  1. Large volumes of structured data are already being analyzed in the company. Some companies have amassed large volumes (e.g., terabytes) of structured data that they are storing in their data warehouse or in some sort of appliance, often on-premises. They feel that their BI infrastructure is pretty solid. Typically, the BI effort is departmental in scope. Some of these companies are already performing more advanced kinds of analysis, such as predictive analytics, on the data. Often, they are doing this to understand their customers. The vision for big data is about augmenting the data they have with other forms of data (often text or geospatial data) to gain more insight.
  2. A specific need for big data. Some companies start a big data effort, almost from scratch, because of a specific business need. For instance, a wireless provider might be interested in monitoring the network and then predicting where failures will occur. An insurance company might be interested in telematics data in order to determine pricing for certain kinds of drivers. A marketing department might be interested in analyzing social media data to determine brand reputation or as part of a marketing campaign. Typically these efforts are departmental in scope and are not part of a wider enterprise big data ecosystem.
  3. Building the business on big data. We spoke to many e-businesses that were building their business model on big data. While these companies might be somewhat advanced in terms of infrastructure to support big data, they were often still working on the analytics related to the service and typically did not have any form of governance in place.

Deathtrap: Overlearning in Predictive Analytics

I have been gathering survey data for the TDWI Predictive Analytics Best Practices Report and am now in the data analysis phase. It turns out (not surprisingly) that one of the biggest barriers to adoption of predictive analytics is understanding how the technology works. Education is definitely needed as more advanced forms of analytics move out to less experienced users.

On the education front, I coincidentally had the pleasure of speaking to Eric Siegel recently about his book, “Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die” (www.thepredictionbook.com). Eric Siegel is well known in analytics circles. For those who haven’t read the book, it is a good read. It is business focused, with some great examples of how predictive analytics is being used today.

Eric and I focused our discussion on one of the more technical chapters in the book, which addresses the problem known as overfitting (aka overlearning), an important concept in predictive analytics. Overfitting occurs when a model describes the noise or random error rather than the underlying relationship. In other words, it occurs when your model fits the training data a little too well. As Eric put it, “Not understanding overfitting in predictive analytics is like driving a car without learning where the brake pedal is.”

While all predictive modeling methods can overlearn, a decision tree is a good technique for intuitively seeing where overlearning can happen.  The decision tree is one of the most popular types of predictive analytics techniques used today.  This is because it is relatively easy to understand – even by the non-statistician – and ease of use is a top priority among end-users and vendors alike.

Here’s a simplified example of a decision tree. Let’s say that you’re a financial institution trying to understand the characteristics of customers who leave (i.e., defect or cancel). This means that your target variable has two values: leave (yes) and don’t leave (no). After (hopefully) visualizing or running some descriptive stats to get a sense of the data, and understanding the question being asked, the company feeds what’s called a training set of data into a decision tree program. The training set is a subset of the overall data set in terms of number of observations. In this case it might consist of attributes like demographic and personal information about the customer, size of monthly deposits, how long the customer has been with the bank, how long the customer has used online banking, how often they contact the call center, and so on.

Here’s what might come out:

[Figure: example decision tree for bank customer churn]

The first node of the decision tree is total deposit/month. This decision tree is saying that if a customer deposits >$4K per month and has been using online bill pay for more than two years, they are not likely to leave (there would be probabilities associated with this). However, if they have used online banking for <2 years and contacted the call center X times, there may be a different outcome. This makes sense intuitively. A customer who has been with the bank a long time and is already doing a lot of online bill paying might not want to leave. Conversely, a customer who isn’t making many deposits and who has made a lot of calls to the call center might be having trouble with the online bill pay. You can see that the tree could branch down and down, each branch with a different probability of an outcome, either yes or no.
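For readers who want to see what building such a tree looks like in practice, here is a minimal sketch using Python and scikit-learn. The column names, values, and churn label are hypothetical stand-ins for the attributes described above; neither the report nor the example in this post is tied to any particular tool.

```python
# Minimal sketch of fitting a churn decision tree with scikit-learn.
# The columns, values, and "left_bank" label below are hypothetical, not real bank data.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

train = pd.DataFrame({
    "monthly_deposit":      [5200, 800, 4500, 1200, 300, 6100],
    "years_online_banking": [3.0, 0.5, 2.5, 1.0, 0.2, 4.0],
    "call_center_contacts": [0, 5, 1, 4, 7, 0],
    "left_bank":            ["no", "yes", "no", "yes", "yes", "no"],
})

features = ["monthly_deposit", "years_online_banking", "call_center_contacts"]
X_train, y_train = train[features], train["left_bank"]

# A small max_depth keeps the tree readable and limits its ability to memorize noise.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X_train, y_train)

# Each leaf carries a probability of leaving for customers matching its rules.
print(tree.predict_proba(X_train))
```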

Now, here’s the point about overfitting. You can imagine that this decision tree could branch out bigger and bigger to a point where it accounts for every case in the training data, including the noisy ones. For instance, a rule with a 97% probability might read, “If a customer deposits more than $4K a month, has used online bill pay for more than 2 years, lives in ZYX, and is taller than 6 feet, then they will leave.” As Eric states in his book, “Overlearning is the pitfall of mistaking noise for information, assuming too much about what has been shown in the data.” If you give the decision tree enough variables, there are going to be spurious predictions.

The way to detect the potential pitfall of overlearning is to apply a set of test data to the model. The test data set is a “hold-out” sample. The idea is to see how well the rules perform with this new data. In the example above, there is a high probability that the spurious rule won’t pan out in the test set.

In practice, some software packages will do this work for you. They will automatically hold out the test sample before supplying you with the results and show you how the model performs on that test data. However, not all do, so it is important to understand this principle. If you validate your model using hold-out data, then overfitting does not have to be a problem.
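Here is a minimal sketch of that hold-out principle. It uses synthetic data (so it runs on its own) rather than the bank example, and it lets one tree grow with no depth limit; the gap between training accuracy and hold-out accuracy is the tell-tale sign of overlearning.

```python
# Sketch of using a hold-out set to detect overfitting. Synthetic data stands in
# for the churn attributes; the point is the train-versus-test accuracy gap.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

deep_tree = DecisionTreeClassifier(random_state=0)   # no depth limit: free to memorize noise
deep_tree.fit(X_train, y_train)

print("training accuracy:", accuracy_score(y_train, deep_tree.predict(X_train)))
print("hold-out accuracy:", accuracy_score(y_test, deep_tree.predict(X_test)))
# A large gap between these two numbers is the classic sign of overlearning.
```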

I want to mention one other point here about noisy data. With all of the discussion in the media about big data, there has been a lot said about people being misled by noisy big data. As Eric notes, “If you check 500K variables you’ll have bad luck eventually – you’ll find something spurious.” However, chances are that this kind of misleading noise comes from an individual correlation, not a model. There is a big difference. People tend to equate predictive analytics with big data analytics. The two are not synonymous.
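A quick, self-contained illustration of Eric’s point: generate a few thousand purely random variables, correlate each one with a random outcome, and some will look meaningful by chance alone. The row and variable counts below are arbitrary; the point is the spurious correlations, not the specific numbers.

```python
# Demonstration of spurious correlations from checking many random variables.
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_vars = 200, 5000

outcome = rng.normal(size=n_rows)                 # a random "target"
noise_vars = rng.normal(size=(n_rows, n_vars))    # variables with no real relationship

# Correlation of each random variable with the outcome
corrs = np.array([np.corrcoef(noise_vars[:, j], outcome)[0, 1]
                  for j in range(n_vars)])
print("strongest spurious correlation:", round(np.abs(corrs).max(), 3))
print("variables with |r| > 0.2:", int((np.abs(corrs) > 0.2).sum()))
```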

Are there issues with any technique?  Of course.  That’s why education is so important.  However, there is a great deal to be gained from predictive analytics models, as more and more companies are discovering.

For more on the results of the Predictive Analytics BPR, see my TDWI blog:

Five Trends in Predictive Analytics

Predictive analytics, a technology that has been around for decades, has gotten a lot of attention over the past few years, and for good reason. Companies understand that looking in the rear-view mirror is not enough to remain competitive in the current economy. Today, adoption of predictive analytics is increasing for a number of reasons, including a better understanding of the value of the technology, the availability of compute power, and the expanding toolset to make it happen. In fact, in a recent TDWI survey at our Chicago World Conference earlier this month, more than 50% of the respondents said that they planned to use predictive analytics in their organization over the next three years. The techniques for predictive analytics are being used on both traditional data sets and big data.

Here are five trends that I’m seeing in predictive analytics:

  • Ease of use. Whereas in the past, statisticians used some sort of scripting language to build a predictive model, vendors are now making their software easier to use. This includes hiding the complexity of the model building process and the data preparation process via the user interface. This is not an entirely new trend, but it is worth mentioning because it opens up predictive analytics to a wider audience, such as marketing. For example, vendors such as Pitney Bowes, Pegasystems, and KXEN provide solutions targeted at marketing professionals with ease of use as a primary feature. The caveat here, of course, is that marketers still need the skills and judgment to make sure the software is used properly.
  • For more trends: http://tdwi.org/blogs/fern-halper/list/ferns-blog.aspx

Closing the loop in customer experience management: When it doesn’t work

Last week I had the unfortunate experience of trying to deal with American Airlines regarding some travel arrangements via its AAdvantage help desk. I literally spent hours on the phone trying to get to the right person. I won’t bore you with the details of my experience; however, I did want to talk about how American used social media analytics in an active way – and where it came up short.

By now, many people are aware that companies are not only using social media analytics to understand what is being said about their brand; they are also using it to actively engage with a customer when there is a problem. This typically involves some sort of automatic classification of the problem, automatic routing to the right person, and suggested responses to the customer.
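As a rough illustration of that classify-route-suggest loop, here is a toy sketch. The categories, keywords, queue names, and canned responses are entirely made up; real systems rely on text analytics rather than keyword matching, and this is not how American Airlines (or any specific vendor) actually implements it.

```python
# Toy sketch of classifying an incoming social post, routing it, and suggesting a reply.
ROUTING = {
    "refund":  ("billing_team", "Apologize and ask for the ticket number."),
    "delay":   ("ops_team",     "Apologize and share rebooking options."),
    "website": ("digital_team", "Acknowledge the issue and offer a workaround."),
}

def classify_and_route(post: str) -> dict:
    text = post.lower()
    for keyword, (queue, suggestion) in ROUTING.items():
        if keyword in text:
            return {"category": keyword, "queue": queue,
                    "suggested_response": suggestion}
    return {"category": "other", "queue": "general_support",
            "suggested_response": "Ask for more detail via direct message."}

print(classify_and_route("Still waiting on my refund and the website keeps erroring out"))
```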

The good news was that when I tweeted about American Airlines I actually got a response back from them.  Here’s my first tweet and response:

[Image: first Twitter exchange with American Airlines]

So far, not bad.  Here’s the next round of tweet/response:

[Image: second Twitter exchange with American Airlines]

Well, this was not what I wanted to hear, since it only partially addressed my issue. If I just wanted an apology, I would not have bothered to tweet about a credit. I would have preferred a follow-up email (if they had a way to link my information together) or at least the contact information where I could get more help. American Airlines wasn’t helping me; it was whining.

So then I tweeted the following:

[Image: third Twitter exchange with American Airlines]

I gave up after this response. Frankly, it almost sounded sarcastic. Should I have said, “Not on Twitter – send me an email contact”? I’m sending a letter to Craig Kreeger instead, explaining my dissatisfaction. Maybe I’ll send it snail mail…

My point is that if you’re going to engage your customers online via the channel that they used in the first place, make it count. This exchange simply annoyed me. Maybe Twitter wasn’t the best channel for customer service, but it is the one I used, since no one was answering the phone and the American site wouldn’t let me perform the function I wanted to do. I’m not saying it’s easy to engage adequately via Twitter. To do this properly would have involved more finely tuned text analytics to understand what I was actually talking about, as well as a way to integrate all of my data together to understand me as a customer (i.e., my loyalty information, recent trips, etc.). Maybe the customer service reps were just tired after last month’s outage debacle at American, when thousands of passengers were stranded.

Two Big Data Resources Worth Exploring

It’s a good day. Our new book, Big Data for Dummies, is being released today, and I’m busy working on a Big Data Analytics maturity model at TDWI with Krish Krishnan. Krish, a faculty member at TDWI, is presenting some of the model at the TDWI World Conference: Big Data Tipping Point, taking place during the first week of May. I would encourage people to attend, even if you aren’t that far along in your big data deployments. TDWI has terrific courses in all aspects of information management, and we understand that most companies will need to leverage their existing infrastructure to support big data initiatives. In fact, the title of this World Conference is “Preparing for the Practical Realities of Big Data.” Check it out.

Back to the book.  Here’s a look at the Introduction!  Enjoy!

 

Two Weeks and Counting to Big Data for Dummies

I am excited to announce I’m a co-author of Big Data for Dummies which will be released in mid-April 2013.  Here’s the synopsis from Wiley:

Find the right big data solution for your business or organization

Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you’ll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You’ll learn what it is, why it matters, and how to choose and implement solutions that work.

  • Effectively managing big data is an issue of growing importance to businesses, not-for-profit organizations, government, and IT professionals
  • Authors are experts in information management, big data, and a variety of solutions
  • Explains big data in detail and discusses how to select and implement a solution, security concerns to consider, data storage and presentation issues, analytics, and much more
  • Provides essential information in a no-nonsense, easy-to-understand style that is empowering

 

Big Data For Dummies cuts through the confusion and helps you take charge of big data solutions for your organization.

Four Vendor Views on Big Data and Big Data Analytics: IBM

Next in my discussion of big data providers is IBM. Big data plays right into IBM’s portfolio of solutions in the information management space. It also dovetails very nicely with the company’s Smarter Planet strategy. Smarter Planet holds the vision of the world as a more interconnected, instrumented, and intelligent place. IBM’s Smarter Cities and Smarter Industries are all part of its solutions portfolio. Being successful in this type of environment requires companies to place a new emphasis on big data and big data analytics.

Here’s a quick look at how IBM is positioning around big data, some of its product offerings, and use cases for big data analytics.

IBM

According to IBM, big data has three characteristics: volume, velocity, and variety. IBM is talking about large volumes of both structured and unstructured data. This can include audio and video together with text and traditional structured data, and it can be gathered and analyzed in real time.

IBM has both hardware and software products to support both big data and big data analytics.  These products include:

  • InfoSphere Streams – a platform that can be used to perform deep analysis of massive volumes of relational and non-relational data types with sub-millisecond response times. Cognos Real-time Monitoring can also be used with InfoSphere Streams for dashboarding capabilities.
  • InfoSphere BigInsights – a product that consists of IBM research technologies on top of open source Apache Hadoop. BigInsights provides core installation, development tools, web-based UIs, connectors for integration, integrated text analytics, and BigSheets for end-user visualization.
  • IBM Netezza – a high-capacity appliance that allows companies to analyze petabytes of data in minutes.
  • Cognos Consumer Insights – leverages BigInsights and text analytics capabilities to perform social media sentiment analysis.
  • IBM SPSS – IBM’s predictive and advanced analytics platform that can read data from various data sources, such as Netezza, and be integrated with InfoSphere Streams to perform advanced analysis.
  • IBM Content Analytics – uses text analytics to analyze unstructured data. This can sit on top of InfoSphere BigInsights.

At the Information on Demand (IOD) conference a few months ago, IBM and its customers presented many use cases around big data and big data analytics. Here is what some of the early adopters are doing:

  • Engineering:  Analyzing hourly wind data, radiation, heat and 78 other attributes to determine where to locate the next wind power plant.
  • Business:
    • Analyzing social media data, for example to understand what fans are saying about a sports game in real time.
    • Analyzing customer activity at a zoo to understand guest spending habits, likes and dislikes.
  • Analyzing healthcare data:
    • Analyzing streams of data from medical devices in neonatal units.
    • Healthcare predictive analytics: one hospital is using a product called Content and Predictive Analytics to help limit early hospital discharges that would result in re-admittance to the hospital.

IBM is working with its clients and prospects to implement big data initiatives. These initiatives generally involve a services component, given the range of product offerings IBM has in the space and the newness of the market. IBM is making significant investments in tools, integrated analytic accelerators, and solution accelerators to reduce the time and cost of deploying these kinds of solutions.

At IBM, big data is about “the art of the possible.” According to the company, price points on products that may have been too expensive five years ago are coming down. IBM is a good example of a vendor that is both working with customers to push the envelope in terms of what is possible with big data and, at the same time, educating the market about big data. The company believes that big data can change the way companies do business. It’s still early in the game, but IBM has a well-articulated vision around big data. And the solutions its clients discussed were big, bold, and very exciting. The company is certainly a leader in this space.

Four Vendor Views on Big Data and Big Data Analytics, Part 2: SAS

Next up in my discussion on big data providers is SAS.  What’s interesting about SAS is that, in many ways, big data analytics is really just an evolution for the company.  One of the company’s goals has always been to support complex analytical problem solving.  It is well respected by its customers for its ability to analyze data at scale.  It is also well regarded for its ETL capabilities.  SAS has had parallel processing capabilities for quite some time.  Recently, the company has been pushing analytics into databases and appliances.  So, in many ways big data is an extension of what SAS has been doing for quite a while.

At SAS, big data goes hand in hand with big data analytics. The company is focused on analyzing big data to make decisions. SAS defines big data as follows: “When volume, velocity and variety of data exceeds an organization’s storage or compute capacity for accurate and timely decision-making.” However, SAS also includes another attribute when discussing big data: relevance to the analysis. In other words, big data analytics is not simply about analyzing large volumes of disparate data types in real time. It is also about helping companies analyze relevant data.

SAS can support several different big data analytics scenarios.  It can deal with complete datasets.   It can also deal with situations where it is not technically feasible to utilize an entire big data set or where the entire set is not relevant to the analysis.  In fact, SAS supports what it terms a “stream it, store it, score it” paradigm to deal with big data relevance.   It likens this to an email spam filter that determines what emails are relevant for a person.  Only appropriate emails go to the person to be read.  Likewise, only relevant data for a particular kind of analysis might be analyzed using SAS statistical and data mining technologies.

The specific solutions that support the “stream it, store it, score it” model include:

  • Data reduction of very large data volumes using stream processing (a minimal sketch of this pattern appears after this list). This occurs at the data preparation stage. SAS Information Management capabilities are leveraged to interface with various data sources that can be streamed into the platform and filtered based on analytical models built from what it terms “organizational knowledge” using products like SAS Enterprise Miner, SAS Text Miner, and SAS Social Network Analytics. SAS Information Management (SAS DI Studio and DI Server, which include DataFlux capabilities) provides the high-speed filtering and data enrichment (with additional metadata that is used to build more indices, making the downstream analytics process more efficient). In other words, it utilizes analytics and data management to prioritize, categorize, and normalize data while it is determining relevance. This means that massive amounts of data do not have to be stored in an appliance or data warehouse.
  • SAS High Performance Computing (HPC). SAS HPC includes a combination of grid, in-memory, and in-database technologies. It is appliance-ready software built on specifically configured hardware from SAS database partners. In addition to the technology, SAS provides pre-packaged solutions that use the in-memory architecture approach.
  • SAS Business Analytics. SAS offerings include a combination of reporting, BI, and other advanced analytics functionality (including text analytics, forecasting, operations research, model management, and deployment) using some of the same tools (SAS Enterprise Miner, etc.) as listed above. SAS also includes support for mobile devices.

Of course, this same set of products can be used to handle a complete data set.
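To make the “stream it, store it, score it” idea concrete, here is a toy sketch of the pattern in Python. The scoring function is a stand-in for a relevance model built in tools like SAS Enterprise Miner; nothing below is SAS code or an actual SAS interface.

```python
# Toy sketch of "stream it, score it, store it": score each incoming record
# against a relevance model and persist only what clears a threshold.
def relevance_score(record: dict) -> float:
    """Stand-in for a trained relevance model (not a real SAS model)."""
    return 0.9 if record.get("mentions_brand") else 0.1

def stream_score_store(stream, threshold=0.5):
    stored = []
    for record in stream:
        if relevance_score(record) >= threshold:
            stored.append(record)          # keep only relevant records downstream
    return stored

incoming = [{"id": 1, "mentions_brand": True},
            {"id": 2, "mentions_brand": False},
            {"id": 3, "mentions_brand": True}]
print(stream_score_store(incoming))
```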

Additionally, SAS supports a Hadoop implementation to enable its customers to push data into Hadoop and be able to manage it.  SAS analytics software can be used to run against Hadoop for analysis.  The company is working to utilize SAS within Hadoop so that data does not have to be brought out to SAS software.

SAS has utilized its software to help clients solve big data problems in a number of areas including:

  • Retail: analyzing data in real time at checkout to determine store coupons at big-box stores; markdown optimization at point of sale; assortment planning.
  • Finance: scoring transactional data in real time for credit card fraud prevention and detection; risk modeling, e.g., moving from a single loan risk model to running multiple models against a complete, segmented data set.
  • Customer intelligence: using social media information and social network analysis.

For example, one large U.S. insurance company is scoring over 600,000 records per second on a multi node parallel set of processors.

A differentiator of the SAS approach is that, because the company has been growing its big data capabilities over time, all of the technologies are delivered or supported on a common framework or platform. While newer vendors may try to downplay SAS by saying that its technology has been around for thirty years, why is that a bad thing? This has given the company time to grow its analytics arsenal and to put together a cohesive solution that is architected so that the pieces work together. Some of the newer big data analytics vendors don’t have nearly the analytics capability of SAS. Experience matters. Enough said for now.

Next Up:  IBM

SAP moves to social media analysis with NetBase partnership

Today, SAP and NetBase announced that SAP will resell NetBase solutions as the SAP® Social Media Analytics application by NetBase.

What does this mean?  According to the announcement:

“SAP Social Media Analytics is a cloud-based solution that is able to process more than 95 million social media posts per day. It uses an advanced natural language processing (NLP) engine to read and categorize each one of these posts according to the opinions, emotions and behaviors that the market is expressing.”

NetBase is a SaaS social media insight and analytics platform that contains one year of social media data.  This data consists of blogs, tweets, newsfeeds, and other Web content.  NetBase combines deep Natural Language Processing (NLP) analytics with a content aggregation service and a reporting capability.  The product provides analysis around likes/dislikes, emotions, reasons why, and behaviors. For example, whereas some social media services might interpret the sentence, “Listerine kills germs because it hurts” as either a negative or neutral statement, the NetBase technology uses a semantic data model to understand not only that this is a positive statement, but also the reason it is positive.

The platform is currently used by hundreds of corporate customers, and was developed in partnership with five of the top 10 consumer packaged goods companies, including Coca-Cola and Kraft.  I have used NetBase for competitive intelligence, most notably when I was putting together the Victory Index for Predictive Analytics.  The platform is quite robust and easy to use.

The idea is that an end user could do his or her social media analysis in the NetBase solution and then, using an API provided with the solution, export data into SAP Business Objects to further analyze it. Here are a few screenshots I pulled from the company’s online demo that illustrate this:

Step 1:  In this simple example, say an end-user is trying to understand the buzz around a specific product (in this case a PC product).  He or she utilizes the NetBase system to understand some of the key opinions, passions, and sentiment regarding this brand.

Step 2:  Once the end user has done some analysis, he or she can export the results of the analysis to SAP Business Objects.  The illustration below shows the kind of data that is exported.  In this case, there is information about attributes and emotions about the product.  These values also have a sentiment indicator associated with them.

 

This data can then be visually displayed and analyzed in SAP Business Objects. In the example below, the insights are displayed on an iPad.

In addition to simply displaying information in SAP Business Objects, the plan moving forward is to be able to operationalize this data through workflows that are part of an enterprise business process. I imagine that SAP HANA will enter the picture at some point, too.

I am glad to see that SAP is partnering with NetBase on this solution.  It is a good time for SAP to incorporate social media analysis into its products.  As social media analysis becomes more mainstream, SAP customers are, no doubt, asking for a solution that can work with SAP products.  While SAP bought Inxight, a text analytics vendor, a number of years ago, it does not have the social media historical data or the SaaS business around it.  This partnership seems like a good solution in the short term.  I will be interested to learn more about how SAP will incorporate social media analysis into enterprise workflows.   Certainly NetBase will benefit from the huge SAP installed base.  I suspect that SAP customers will be intrigued by this new partnership.
