Friday 28 November 2014

The Stockholm Papers on Self-Service BI

I had the pleasure of moderating a peculiar kind of brainstorm session in Stockholm called speed geeking, a process which will remain unrevealed to those who weren't present. Never mind the "how"; let's talk about the "what". The "what" is a set of interesting replies to the three theme questions on Self-Service Business Intelligence (SSBI).

The three theme questions discussed were:

  • Why do you use SSBI?
  • What are the major problems encountered?
  • Will IT become obsolete? (a more challenging version of "How will SSBI affect IT's role")

Why organisations use SSBI

The problems SSBI can solve are low BI service levels, eliciting better requirements for the data warehouse (as users will see the gaps in the available data) and providing a workaround for slow DWH/BI development tracks. But the majority of answers pointed towards opening up new opportunities.
The number one reason for SSBI is time to market: support faster decision making, tap the organisation's creativity, enhance flexibility, innovation, exploration... it's all there. Some teams came up with deep thoughts about organisational development: "Distributes fact-based decision making" was a very sharp one, as was "getting the right information to the right person at the right moment", although both motivations will need to be managed carefully. SSBI is not simply a matter of opening up the data warehouse (or other data lakes) to everybody; the paradox is that the more users are empowered, the more governance and data management are needed.

Conclusion: as always, two approaches to this question emerge: either it solves a problem or it creates an opportunity. My advice is to look for opportunities if you want a concept, a technology or a new business process to last. Problem solvers will limit the new technology from the problem perspective, which is a form of typecasting the technology, whereas opportunity seekers will keep exploring the new possibilities of a technology.

What are the major problems encountered?

SSBI is a walk in the park for neither IT nor the business users.
Data quality management, as well as the related management of semantics and governance of master data, are the principal bumps in the road. Lack of training is also high on the radar, as are performance, security and integrity. So far, no real surprises. But strangely enough, an issue like "usability" appeared. You'd think this would be the main reason for developing an SSBI platform, but apparently it is also the main showstopper.

Conclusion: in this mixed audience of IT and business professionals I noticed few defensive strategies. "Yes, there are problems, but they can be solved" was the general mood I felt in the room. Maybe this is one of the reasons why Sweden is one of the most innovative societies in the world?

How will SSBI affect IT's role

There was a general consensus among the IT and business professionals: IT will evolve into a new role when SSBI is introduced. They will develop a new ecosystem, optimise the infrastructure for SSBI and act as an enabler to advance the use of SSBI.
Other interesting suggestions pointed towards new IT profiles emerging in this ecosystem, like data scientists, integrators of quality data and managers of business logic from both internal and external systems. In short, the borders between IT and the business users will become vaguer over time. But one remark was a bit less hopeful: one group concluded that the CIO is still far away from the business perspective.

Maybe that's because many CIOs come from the technology curriculum, and there are still organisations out there that do not consider ICT a strategic asset. Every day I count myself lucky that I worked in a mail order company early in my career. The strategic role of ICT was never questioned there, and it was no surprise that the CIO of my company became the CEO, as customer data, service level data, financial data and HRM data were considered the core assets.

Sunday 19 October 2014

Defining Business Analysis for Big Data

Introduction to an enhanced methodology in business analysis

Automating the Value Chain

In the beginning of the Information Era, there was business analysis for application development. Waterfall methods, Rapid Application Development, Agile methods... all were based on delivering a functioning piece of information technology that supports a well-defined business process. There are clear signs of an evolution in the application development area.
Core operations like manufacturing and logistics led the automation of human tasks, and the IT department was called the "EDP department". Some readers will need to look up that abbreviation; I can spare them the time: Electronic Data Processing indicated clearly that the main challenge was managing the data from these primary processes.
Information as a business process support becomes an enabler of (new) business processes

This schema gives a few hints on the progress made in the automation of business processes. The core operations came first: finance, logistics and manufacturing, which evolved into Enterprise Resource Planning (ERP). Later, sales, marketing and after-sales service evolved into customer relationship management, which in turn extended into Enterprise Relationship Management (ERM), incorporating employee relationship management and partner relationship management. Finally, ERP and ERM merged into massive systems claiming to be the source of all data. The increase in productivity and processing power of the infrastructure enabled an information layer that binds all these business processes and interacts with the outside world via standardized protocols (EDI, web services based on SOAP or REST).
The common denominator of these developments: crisp business analysis was needed to enable accurate system designs that meet the business needs.

The "Information is the New Oil Era"

Already in the mid-nineties, Mike Saylor, the visionary founder and CEO of MicroStrategy, stated that information is the new oil. Twenty years later, Peter Sondergaard from Gartner repeated his dictum and added "and analytics is the combustion engine". A whole new discipline, already announced since the 1950s, emerged: Business Intelligence (BI): connecting all available relevant data sources to come up with meaningful information and insights to improve corporate performance dramatically.
The metaphor remains powerful in its simplicity: drill for information in the data and fuel your organization’s growth with better decision making.
Yet, the consequences of this new discipline for the business analysis practice remained unnoticed by most business analysts, project managers and project sponsors. The majority was still using the methods from the application development era. I admit that in the late nineties I also used concepts from waterfall project management and approached the products of a BI development track as an application where requirements gathering would do the trick. But it soon became clear to me that asking for requirements from a person who has only an embryonic idea of what he wants is not the optimum way: in 90 % of the cases, the client changes the requirements after seeing the results of the initial ones. That's when I started collecting data and empirical evidence on which business analysis approach leads to success.

So when I published my book "Business Analysis for Business Intelligence" in October 2012, I was convinced everybody would agree this new approach is what we need to develop successful BI projects. The International Institute of Business Analysis's (IIBA) Body of Knowledge has increased its attention to BI, but the mainstream community is still unaware of the consequences for their practice. And now I want to discuss a new layer of paradigms, methods, tricks and tips on top of this one? Why face the risk of leaving even more readers and customers behind? I guess I need to take Luther's pose at the Diet of Worms in 1521: "Here I stand, I can do no other." So call me a heretic, see if I care.
Luther at the Diet of Worms in 1521

The new, enhanced approach to business analysis for business intelligence in a nutshell deals with bridging three gaps. The first gap is the one between the strategy process and the information needed to develop, monitor and adjust the intended strategic options.
The second gap is about the mismatch between the needed and the available information and the third gap is about the available information and the way data are registered, stored and maintained in the organization.
Now, with the advent of Big Data, new challenges impose themselves on our business analysis practice.

Business Analysis for Big Data: the New Challenges

But before I discuss a few challenges, let me refer to my definition of Big Data as described in the article "What is Really Big About Big Data". In short: volume, variety and velocity are relative to technological developments. In the eighties, 20 megabytes was Big Data; today, 100 terabytes isn't a shocker. Variety has always been around, and velocity is also relative to processing, I/O and storage speeds, which have evolved. No, the real discriminating factor is volatility: answering the pressing question of which data you need to consider persistent, both on a semantic and on a physical storage level. The clue is partly to be found in the practice of data mining itself: a model evolves dynamically over time, due to new data with better added value and/or because of a decay in value of the existing data collection strategy.
Ninety percent of "classic" Business Intelligence is about "What we know we need to know". With the advent of Big Data, the shift towards "What we don't know we need to know" will increase. I can imagine that in the long run the majority of value creation will come from this part.
From “What we know we need to know” to
“What we don’t know we need to know”
is the major challenge in Business Analysis for Big Data
Another challenge is managing scalability. Your business analysis may come up with a nice case for tapping certain data streams which deliver promising results within a small scope, but if the investment can't be depreciated on a broader base, you are stopped dead in your tracks. That's why the innovation adage "Fail early and fail cheap" should lead all your analytical endeavours in the Big Data sphere. Some of you may say: "If you expect to fail, why invest in this Big Data thing?" The simple answer is: "Because you can't afford not to invest and miss out on opportunities." Like any groundbreaking technology at the beginning of its life cycle, the gambling factor is large, but the winnings are also high. As the technology matures, both the winning chances and the prize money diminish. Failing early and cheap is more difficult than it sounds. This is where a good analytical strategy, defined in a business analysis process, can mitigate the risks of failing in an expensive way.
Business Analysis for Big Data is about finding scalable analytical solutions, early and cheap.
So make sure you can work in an agile way as I have described in my article on BA4BI and deliver value in two to three weeks of development. Big Data needs small increments.
Data sources pose the next challenge. Since they are mostly delivered by external providers, you don't control the format or the terms and conditions of use... In short, it is hard if not impossible to come up with an SLA between you and the data provider. The next data-related challenge is getting your priorities right. Is user-generated content like reviews on Yelp or posts on Disqus more relevant than blog posts or tweets? What about the other side of the Big Data coin, like Open Data sources, process data or IoT data? And to finish it off: nothing is easier than copying, duplicating or reproducing data, which can be a source of bias.
Data generates data and may degenerate the analytics
Some activist groups get an unrealistic level of attention and most social media use algorithms to publish selected posts to their audience. This filtering causes spikes in occurrences and this in turn may compromise the analytics. And of course, the opposite is also true: finding the dark number, i.e. things people talk about without being prominent on the Web may need massive amounts of data and longitudinal studies before you notice a pattern in the data. Like a fire brigade, you need to find the peat-moor fire before the firestorm starts.
The architectural challenge is also one to take into account. Because of the massive amount of data and their volatility, which cannot always be foreseen, the architectural challenges are bigger than in "regular" Business Intelligence.
Data volatility drives architectural decisions
There are quite a few processing decisions to make, and their architectural consequences greatly impact the budget and the strategic responsiveness of the organization. In a following article I will go into more detail, but for now, this picture of a simplified Big Data processing scheme gives you a clue.

Big Data Architecture Options

Enabling Business Analysis for Big Data

We are at the beginning of a new analytical technology cycle and therefore, classical innovation management advice is to be heeded.
You need to have a business sponsor with sufficient clout, supporting the evangelization efforts  and experiments with the new technologies and data sources.
Allow for failures but make sure they are not fatal: "fail fast and cheap". Reward the people who stick out their necks and commit themselves to new use cases. Make sure these use cases connect with the business needs; if they don't, forward them to your local university. They might like to do fundamental research.
If the experiments show some value and can be considered as a proof of concept, your organization can learn and develop further in this direction.
The next phase is about integration:

  •  integrate Big Data analytics in the BI portfolio
  • integrate Big Data analytics in the BI architecture
  • integrate Big Data analytical competences in your BI team
  • integrate it with the strategy process
  • integrate it in the organizational culture
  • deal with ethical and privacy issues 
  • link the Big Data analytical practice with existing performance management systems.

And on a personal note, please, please be aware that the business analysis effort for Big Data analytics is not business as usual.

What is the Added Value of Business Analysis for Big Data?

This is a pertinent question formulated by one of the reviewers of this article. “It depends” is the best possible answer.
The Efficiency Mode
It depends on the basic strategic drive of the organization. If management is in an efficiency drive, they will skip the analysis part and start experimenting as quickly as possible. On the upside: this can save time and deliver spontaneous insights. But the downside is that this non-directed trial-and-error approach can provoke undesirable side effects. What if the trials aren't "deep" and "wide" enough and the experiment is killed too early? By "deep" I mean the sample size and the time frame of the captured data, and by "wide" the number of attributes and the number of investigated links with corporate performance measures.
The Strategy Management Mode
If management is actively devising new strategies, looking for opportunities and new ways of doing business rather than only looking for cost cutting, then Business Analysis for Big Data can deliver true value.
It will help you to detect leading indicators for potential changes in market trends, consumer behavior, production deficiencies, lags and gaps in communication and advertising positioning, fraud and crime prevention etc…
Today, the Big Data era is like 1492 in Sevilla, when Columbus set out to find an alternative route to India. He went far beyond the known borders of the world, didn't quite reach India, but he certainly changed many paradigms and assumptions about the then known world. And isn't that the essence of what leaders do?

Monday 26 May 2014

Elections’ Epilogue: What Have We Learned?

First the good news: a MAD of 1.41 Gets the Bronze Medal of All Polls!

The results from the Flemish Parliament elections with all votes counted are:

Party | Results (source: Het Nieuwsblad) | SAM's forecast
Christian democrats (CD&V) | 20,48 % | 18,70 %
Green (Groen) | 8,7 % | 8,75 %
Flemish nationalists (N-VA) | 31,88 % | 30,32 %
Liberal democrats (Open VLD) | 14,15 % | 13,70 %
Social democrats (SP-A) | 13,99 % | 13,27 %

Table 1. Results for the Flemish Parliament compared to our forecast

And below is the comparative table of all polls compared to this result, with the Mean Absolute Deviation (MAD) expressing the level of variability in the forecasts. A MAD of zero means a perfect prediction. In this case, with the highest score of almost 32 % and the lowest of almost 6 % in only six observations, anything under 1.5 is quite alright.
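Since MAD drives the whole comparison, here is a minimal sketch of the calculation, using SAM's forecast and the official results. The VB result of 5,92 % is an assumption taken from the published official count; it is not listed in Table 1 above:

```python
# Mean Absolute Deviation of SAM's forecast against the official results.
# All figures are percentages; the VB result (5.92) is an assumption taken
# from the published official count, the rest comes from the tables above.
actual   = {"CD&V": 20.48, "Groen": 8.70, "N-VA": 31.88,
            "Open VLD": 14.15, "SP-A": 13.99, "VB": 5.92}
forecast = {"CD&V": 18.70, "Groen": 8.75, "N-VA": 30.32,
            "Open VLD": 13.70, "SP-A": 13.27, "VB": 9.80}

mad = sum(abs(actual[p] - forecast[p]) for p in actual) / len(actual)
print(round(mad, 2))  # → 1.41
```

Note how the single large VB/N-VA error dominates the average: without it, the MAD would be well under 1.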

Table 2. Comparison of all opinion polls for the Flemish Parliament and our prediction based on Twitter analytics by SAM.

Compared to 16 other opinion polls published by various national media, our little SAM (Social Analytics and Monitoring) did quite alright on a shoestring budget: in only 5.7 man-days we came up with a result competing with mega concerns in market research.
The Mean Absolute Deviation covers up one serious flaw in our forecast: the giant shift of voters from VB (the nationalist anti-Islam party) to N-VA (the Flemish nationalist party). This led to an underestimation of the N-VA result and an overestimation of the VB result. Although the model estimated the correct direction of the shift, it underestimated its proportion.
Had we used more data, we might have caught that shift and ended even higher!


Social media analytics goes a step further than the social media reporting most tools nowadays offer. With our little SAM, built on the Data2Action platform, we have sufficiently proven that forecasting on the basis of correct judgment of sentiment in even a single source like Twitter can produce relevant results in marketing, sales, operations and finance. Compared to politics, these disciplines deliver far more predictable data, as they can combine external sources like social media with customer, production, logistics and financial data. And the social media actors and opinion leaders certainly produce less bias in these areas than is the case in political statements. All this can be done on a continuous basis, supporting day-to-day management in communication, supply chain, sales, etc.
If you want to know more about Data2Action, the platform that made this possible, drop me a line: contact@linguafrancaconsulting.eu 

Get ready for fact based decision making 
on all levels of your organisation

Saturday 24 May 2014

The Last Mile in the Belgian Elections (VII)

The Flemish Parliament’s Predictions

Scope management is important if you are on a tight budget and your sole objective is to prove that social media analytics is a journey into the future. That is why we concentrated on Flanders, the northern part of Belgium. (Yet the outcome of the elections for the Flemish parliament will determine events on the Belgian level: if the N-VA wins significantly, they can impose some of their radical methods to get Belgium out of the economic slump, which is not much appreciated in the French-speaking south.) In commercial terms, this last week of analytics would have cost the client 5.7 man-days of work. Compare this to the cost of an opinion poll, and there is a valid add-on available for opinion polls, as the Twitter analytics can be done on a continuous basis. A poll is a photograph of the situation while social media analytics show the movie.

 A poll is a photograph of the situation while social media analytics show the movie.

From Share-of-Voice to Predictions

It’s been a busy week. Interpreting tweets is not a simple task as we illustrated in the previous blog posts. And today, the challenge gets even bigger. To predict the election outcome in the northern, Dutch speaking part of Belgium on the basis of sentiment analysis related to topics is like base-jumping knowing that not one, but six guys have packed your parachute. These six guys are totally biased. Here are their names, in alphabetical order, in case you might think I am biased:

Dutch name | Name used in this blog post
CD&V (Christen Democratisch en Vlaams) | Christian democrats
Groen | Green (the ecologist party)
N-VA (Nieuw-Vlaamse Alliantie) | Flemish nationalists
Open VLD (Open Vlaamse Liberalen en Democraten) | Liberal democrats
SP-A (Socialistische Partij Anders) | Social democrats
VB (Vlaams Belang) | Nationalist & Anti-Islam party

Table 1. Translation of the original Dutch party names

From the opinion polls, the consensus is that the Flemish nationalists can obtain a result over 30 %, although the latest poll showed a downward trend break, and that the nationalist anti-Islam party will lose further and become smaller than the Green party. In our analysis we didn't include the extreme left-wing party PVDA, for the simple reason that they were almost non-existent on Twitter and the confusion with the Dutch social democrats created a tedious filtering job, which is fine if you get a budget for it. But since this was not the case, we skipped them, as well as any other exotic outsider. Together with the blank and invalid votes they may account for an important percentage which will show itself at the end of the math exercises. But the objective of this blog post is to examine the possibilities of approximating the market shares with the share of voice on Twitter, detect the mechanics of possible anomalies and report on the user experience, as we explained at the outset of this Last Mile series of posts.

If we take the raw share-of-voice data over more than 43.000 tweets, we see some remarkable deviations from the consensus.
Party | Share of voice on Twitter
Christian democrats | 21,3 %
Green (the ecologist party) | 8,8 %
Flemish nationalists | 27,9 %
Liberal democrats | 13,6 %
Social democrats | 12,8 %
Nationalist & Anti-Islam party | 11,3 %
Void, blank, mini parties | 4,3 %

Table 2. Percentage share of voice on Twitter per Flemish party

It is common practice nowadays to combine the results of multiple models instead of using just one. This is better not only in statistics; Nobel prize winner Kahneman has shown it clearly in his work. In this case we combine this model with other, independent models to come to a final one.
We use the opinion polls to derive the covariance matrix.
Table 3. The covariance matrix with the shifts in market shares 
This allows us to see things such as: if one party's share grows, at which party's expense does it grow? In the case of the Flemish nationalists, it grows at the cost of the Liberal democrats and the nationalist anti-Islam party, but it wins fewer followers from the Christian and the social democrats. The behaviour of Green and the nationalist anti-Islam party during the opinion polls was very volatile, which partly explains the spurious correlations with other parties.

Graph 1 Overview of all opinion poll results: the evolution of the market shares in different opinion polls over time.

Comparing the different opinion polls, from different research organisations, on different samples is simply not possible. But if you combine all numbers in a mathematical model, you can smooth out a large part of these differences and create a central tendency.
To combine the different models, we use a derivation of the Black-Litterman model used in finance. We violate some assumptions, such as general market equilibrium, which we replace by a totally different concept: opinion polls. However, the elegance of this approach allows us to take into account opinions, the confidence in those opinions and complex interdependencies between the parties. The mathematical gain is worth the sacrifice of the theoretical underpinning.
This is based on a variant of the Black-Litterman model: μ = Π + τΣPᵀ(Ω + τPΣPᵀ)⁻¹(Q − PΠ)
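For the mathematically inclined, here is a sketch of that posterior update in NumPy. This is the generic textbook Black-Litterman formula, not the actual Data2Action implementation, and the prior, covariance and view figures below are illustrative assumptions, only loosely based on the poll and share-of-voice numbers above:

```python
import numpy as np

def bl_posterior(prior, sigma, P, Q, omega, tau=0.05):
    """Posterior mean: mu = Pi + tau*Sigma*P'(Omega + tau*P*Sigma*P')^-1 (Q - P*Pi)."""
    ts = tau * sigma
    return prior + ts @ P.T @ np.linalg.inv(omega + P @ ts @ P.T) @ (Q - P @ prior)

# Toy example with three parties: the prior is the central tendency of the
# polls, the "views" Q are the Twitter shares of voice, and omega encodes
# how little confidence we place in those views.
prior = np.array([31.0, 18.0, 14.0])   # poll central tendency, in %
sigma = np.diag([4.0, 2.0, 1.5])       # covariance derived from the polls (toy values)
P     = np.eye(3)                      # one absolute view per party
Q     = np.array([27.9, 21.3, 13.6])   # Twitter share of voice, in %
omega = np.diag([2.0, 2.0, 2.0])       # view uncertainty

posterior = bl_posterior(prior, sigma, P, Q, omega)
```

The posterior shares end up between the poll prior and the Twitter views, pulled towards the latter in proportion to the confidence placed in them.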

And the Final Results Are…

Party | Central tendency of all opinion polls | Data2Action's prediction
Christian democrats | 18 % | 18,7 %
Green (the ecologist party) | 8,7 % | 8,8 %
Flemish nationalists | 31 % | 30,3 %
Liberal democrats | 14 % | 13,7 %
Social democrats | 13,3 % | 13,3 %
Nationalist & Anti-Islam party | 9,4 % | 9,8 %
Other (blank, mini parties, …) | 5,6 % | 5,4 %
Total | 100 % | 100 %

Table 4. Prediction of the results of the votes for the Flemish Parliament 

Now let’s cross our fingers and hope we produced some relevant results.

In the Epilogue, next week, we will evaluate the entire process. Stay tuned!

Friday 23 May 2014

The Last Mile in the Belgian Elections (VI)

Are Twitter People Nice People?

The answer is: "It depends". In this article I draw up a taxonomy of tweets from the last week of the Belgian elections. Based on over 35.000 tweets, we can be pretty sure this is a representative sample. You can consider this article an introduction to tomorrow's headline: the last election poll, based on Twitter analytics.

A picture says more than a thousand tweets

The taxonomy of the Twitter community

So here it is. The majority of tweets are negative. When you encounter positive tweets, they are either from somebody who wants to market something (in the case of the elections, him- or herself or a candidate he or she supports) or from somebody who is forwarding a link with a positive comment.
There is a correlation between the level of negativity about a subject and the political party related to the subject. From a political point of view, the polarisation between the Walloon socialist party and the Flemish nationalist party is clearly visible on Twitter.
Even today, on the day of the funeral of a well-respected politician of the older generation, former Belgian prime minister Jean-Luc Dehaene, the majority of tweets were negative. Tweets linking him to the Christian democrat trade union's financial scandal around Dexia outnumbered the pious "RIP JLD" variants six to one.
So how do you derive popularity and even arrive at some predictive value from a bunch of negative tweets?  That, my dear blog readers, will be examined tomorrow in the final article. 

Thursday 22 May 2014

The Last Mile in the Belgian Elections (V)

Why Sentiment Measures Alone Are Not Enough

In the process of developing Social Analytics and Monitoring, we learnt something most interesting about sentiment analysis. Before we created Data2Action as a platform for data mining and developed SAM (Social Analytics and Monitoring), we studied many approaches.
Many of these just produced numbers to express sentiment towards a brand, a person, a concept or a company, to name a few.
Isolated Sentiment Analysis is Meaningless
This can be too superficial to produce meaningful analytic results, so we recreated social constructs that match concepts. Analysing the sentiment of a construct element in the context of a topic is not a trivial task. But at least it approaches human judgement, and it can be trained to increase precision and relevance.
Today, I am not going to amaze you with Big Numbers but I’ll show you some examples of how we approach sentiment analysis with SAM.
Let’s take a few tweets about the N-VA party and examine how they are scored:
The ultimate horror for companies and a torpedo for our welfare state: an anti N-VA coalition with the ecologist party
Another point where N-VA does not represent the Flemish people
From a one-dimensional point of view, both tweets are negative for N-VA, but the first is in fact meant as a positive, pro-N-VA statement.
Let us look at this more complex tweet:
Vande Lanotte opens up the coalition for the Green Party, wrong move as the voters already consider N-VA strong enough.
The first part of the sentence, "Vande Lanotte opens up the coalition for the Green Party", can be considered positive for Vande Lanotte and his socialist party SP-A. But the second part is negative. This shows the importance of parsing the sentence correctly and attributing scores as a function of viewpoints.
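To make the viewpoint idea concrete, here is a deliberately naive toy scorer: it splits a tweet into clauses at commas, scores each clause with a tiny hand-made word list, and attributes the clause polarity to the parties or persons mentioned in that clause. This is an illustration only, not SAM's actual algorithm, and the two-word lexicon is an assumption chosen to fit this one example:

```python
# Toy clause-level, viewpoint-aware sentiment scoring.
POSITIVE = {"opens"}   # illustrative micro-lexicon, not a real one
NEGATIVE = {"wrong"}

def clause_scores(tweet, targets):
    """Attribute each clause's polarity to the targets mentioned in it."""
    scores = {t: 0 for t in targets}
    for clause in tweet.split(","):
        words = clause.lower().split()
        polarity = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        for t in targets:
            if t.lower() in clause.lower():
                scores[t] += polarity
    return scores

tweet = ("Vande Lanotte opens up the coalition for the Green Party, "
         "wrong move as the voters already consider N-VA strong enough.")
print(clause_scores(tweet, ["Vande Lanotte", "Green Party", "N-VA"]))
# → {'Vande Lanotte': 1, 'Green Party': 1, 'N-VA': -1}
```

A single tweet-level score would have collapsed this into one number; the clause split preserves who the sentiment is actually about.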

Wednesday 21 May 2014

The Last Mile in the Belgian Elections (IV)

How Topic Detection Leads to Share-of-voice Analysis

It was a full day of events on Twitter. Time to make an inventory of the principal topics and the buzz created on the social network in the Dutch speaking community in the north of Belgium.
First, the figures: 10.605 tweets were analysed, of which 5.754 referred to an external link (i.e. a news site or another external web site like a blog, a photo album, etc.).
As the Flemish nationalist party leader Mr. Dewever from N-VA (the New Flemish Alliance in English) launched his appeal to the French speaking community today, we focused on the tweets about, to and from this party.
A mere 282 tweets were deemed relevant for topic analysis. And here’s the first striking number: of these 282 tweets only 16 contained a reactive response. 
Tweets that provoked a reactive response are almost nonexistent

Some 49 topics grouped several media sources and publications of all sorts. We will discuss three of them to illustrate how the relationship between topic, retweets, Klout score and added content makes some tweets more relevant than others. These are the three topics:

  • Dewever addresses the French speaking community via Twitter
  • Christian Democrat De Clerck falsely accuses N-VA of using fascist symbols in an advertisement
  • YouTube movie from N-VA is ridiculed by the broad community

Dewever addresses the French speaking community via Twitter

This topic is divided into a moderately positive headline and two neutral ones. The positive one, Bart Dewever to the French-speaking community: "Give N-VA a chance", generates a total Klout score of 188, of which the Flemish TV station VRT takes the biggest chunk with 158.
The neutral headline "Dewever puts the struggle between N-VA and the French-speaking socialist party at the centre of the discussion" generates a Klout score of only 98.
The other neutral headline, "N-VA president Bart Dewever addresses the French-speaking community directly", delivers a higher Klout score of 140, partly because one of N-VA's members of parliament promoted the link to the news medium.
All in all, with a total Klout score of 426, this topic does not cause great ripples, especially not compared to a mere anecdote, which is the second topic.

Christian Democrat De Clerck falsely accuses N-VA of using fascist symbols in an advertisement

On the left, the swastika hoax commented on by the Christian democrat; on the right, the original ad showing a labyrinth

Felix De Clerck, son of the former Christian democrat minister of Justice Stefaan De Clerck, reacted to a hoax and was chastised for doing so. With a Klout score of 967, this caused a bigger stir, although its political relevance is a lot smaller than that of Dewever's speech... Emotions can play a role, even in simple and neutral retweets.

YouTube movie from N-VA is ridiculed by the broad community

Another day's high was reached with an amateurish YouTube movie: a parody of a famous Flemish detective series, meant to highlight the major issues of the campaign. This product of the candidates in West Flanders, including the Flemish minister of Interior Affairs Geert Bourgeois, generated a total Klout score of 778 in tweets and retweets with negative or sarcastic comments.
Yet an adjacent topic, about a cameraman from Bruges who is surprised by minister Bourgeois' enthusiasm, generates a moderately positive Klout score of 123.

Three topics out of 49 generate 20.6 % of the total Klout score!

This illustrates perfectly how the Twitter community selects and reinforces topics that carry an emotional value: the YouTube movie and the De Clerck hoax generated a share of voice of almost 17 % of the tweets.

Forgive me for reducing the geographical scope to Flanders, the political scope to just one party and the tweets to only three topics, because this blog does not intend to present the full enchilada. I hope we have demonstrated with today's contribution that topics, and the way they are perceived and handled, can vary greatly in impact and cannot be entirely reduced to numbers. In other words, the human interpreter will deliver added value for quite some time.

Tuesday 20 May 2014

The Last Mile in the Belgian Elections (III)

Awesome Numbers... Big Data Volumes

Wow, the first results are awesome. Well, er, the first calculations at least are amazing.

  • 8.500 tweets per 15 seconds means roughly 1.5 billion tweets per month if you extrapolate in a very rudimentary way...
  • At 2 KB per tweet, that is about 2.8 terabytes of input data per month, following the same reasoning. Still, it is quite impressive for a small country like Belgium, where Twitter adoption is not on par with the Nordic countries.
  • If you use 55 kilobytes for a model vector of 1.000 features, you generate 77 terabytes of information per month.
  • And 55 KB is a small vector: a normal feature vector of one million features generates 77 petabytes of information per month.
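For those who want to retrace the extrapolation, here is a back-of-the-envelope sketch. A 30-day month and decimal units are assumptions; the post's own figures are rounded from roughly 1.4 billion tweets, which is why they come out slightly lower:

```python
# Rudimentary extrapolation from the measured intake rate to monthly volumes.
tweets_per_second = 8500 / 15
tweets_per_month = tweets_per_second * 86400 * 30    # about 1.47 billion tweets

input_tb  = tweets_per_month * 2e3  / 1e12   # 2 KB per tweet   -> about 2.9 TB
vector_tb = tweets_per_month * 55e3 / 1e12   # 55 KB per vector -> about 81 TB

# A vector of one million features is 1,000 times the 1,000-feature one,
# so the terabytes above simply become petabytes.
vector_pb = vector_tb
```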

And wading through this sea of data you expect us to come up with results that matter?
We did it.

Male versus female tweets in Belgian Elections
Gender analysis of tweets in the Belgian elections n = 4977 tweets

Today we checked the gender differences.

The Belgian male Twitter species is clearly more interested in politics than the female variant: only 22 % of the past 24 hours' tweets were of female signature; the remaining 78 % were of male origin.
This is not because Belgian women are less present on Twitter: 48 % of tweets come from female sources against 52 % from male ones.
Analysing the first training results for irony and sarcasm also shows a male bias: the majority of the sarcastic tweets, 95 out of 115, were male. Only 50 were detected by the data mining algorithms, so we still have some training to do.
More news tomorrow!

Monday 19 May 2014

The Last Mile in the Belgian Elections (II)

Getting Started

I promised to report on my activities in social analytics. For this report, I will try to wear the shoes of a novice user and report, without holding anything back, on this emerging discipline. I explicitly use the word "emerging" as it has all the hallmarks of one: technology enthusiasts will have no problem overlooking the quirks that prevent an easy end-to-end "next-next-next" solution. Because there is no user-friendly wizard that can guide you through selecting the sources, setting up the target, creating the filters and optimising the analytics for cost, sample size, relevance and validity checks, I will have to go through the entire process in an iterative and sometimes trial-and-error way.
This is how massive amounts of data enter the file system
Over the weekend and today I have been mostly busy doing just that. Tweet intakes ranged from taking in 8.500 Belgian tweets in 15 seconds and doing the filtering locally on our in-memory database, to pushing all filters to the source system and getting 115 tweets in an hour. But finally, we got to an optimal query result, and the Belgian model can be trained. The first training we will set up is detecting sarcasm and irony. With properly developed and tested algorithms we hope for 70 % accuracy in finding tweets that express exactly the opposite sentiment of what the words say. Tweets like "well done, a**hole" are easy to detect, but it's the ones without the reference to that important part of the human digestive system that are a little harder.
The cleaned output is ready for the presentation layer
Conclusion of this weekend and today: don't start social analytics like any other data mining or statistical project, because taming social media data is an order of magnitude harder than crunching the numbers in stats.

Let’s all cross our fingers and hope we can come up with some relevant results tomorrow.