Building Your Big Data Team in 2015 – Top 5 Pieces of Real-World Advice

January 27, 2015

There’s plenty of advice out there on building a big data team, from industry analysts, experts, and leading publications. But we wanted to see how this advice plays out in real life, so we talked to the real-world big data mavericks – those who’ve faced the challenge of gaining true business value from big data and succeeded. They shared insights into how they made it happen and the advice they’d give to those ready to take the plunge. (Scroll to the bottom to meet our mavericks.)

1. Clearly define your business goal, and don’t be afraid to start small.
“When you work with big data, you have to know first what you’re going to do with that data.” – Marc Hayem, VP of Platform Transformation, RichRelevance

It may seem obvious, but this step is often overlooked. Whether you’re a data-driven company whose entire business model revolves around crunching big data, or a manufacturer looking to optimize operational efficiency using machine data, you need to be clear about the challenge you’re trying to tackle with big data. Skip this step, and you risk ending up with inappropriate technologies, a lack of executive support, and an ill-prepared team. Saad Khalid, a product manager at Paytronix, echoes the advice about starting small:

“Starting small to get into big data can be useful, because you can get lost in a lot of technical jargon: Hadoop, Hive, MapReduce. My advice to people considering big data as a project would be to take it slow, have a smaller project in mind where you can actually think about the questions that you want to answer and achieve results… Have a team that is dedicated to that goal and those results. Start slow, then grow big, and then scale your project.” – Saad Khalid

Andrew Robbins is CEO of Paytronix, a company that helps restaurants build brand loyalty and gain rich, big data-driven insights into their customers’ behavior for better sales and marketing. The questions that big data could answer for them were endless – but in the end, zeroing in on one small, simple question – “Who had breakfast for dinner?” – helped them define the scope of their entire project:

“For us, we sat around and thought of so many ideas, and it became so big, and we boiled it down to a single question, and it was: who had breakfast for dinner? That question seems kind of simple. The ‘who’ is pretty complicated. Who are the people? Can you give me the collection, and what are they like? What are their demographics? The ‘had breakfast’ – what does that mean? You’ve got to get into the details of a check. Is it scrambled eggs? …All of those pieces led to a simple thing that we could all shoot for, and that was our minimal viable product. You can get to it quicker, and then the team goes, ‘Aha. That’s success.’” – Andrew Robbins
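
To make that scoping exercise concrete, here is a minimal, purely hypothetical sketch of how “who had breakfast for dinner?” might be expressed as a Hive query (Paytronix’s team discusses Hive later in this post). None of the table or column names below come from Paytronix – they are illustrative only:

    -- Hypothetical HiveQL sketch of "who had breakfast for dinner?"
    -- All table and column names are illustrative, not a real schema.
    SELECT g.guest_id,
           g.age_bracket,
           g.home_zip
    FROM checks c
    JOIN check_items i ON i.check_id = c.check_id
    JOIN guests g ON g.guest_id = c.guest_id
    WHERE hour(c.opened_at) >= 17            -- the check was opened at dinner time...
      AND i.menu_category = 'BREAKFAST'      -- ...but the item ordered is a breakfast dish
    GROUP BY g.guest_id, g.age_bracket, g.home_zip;

Even this toy version forces the questions Andrew raises – what counts as dinner time, and which menu items count as breakfast – which is exactly the level of detail a minimal viable product pins down.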

Finally, as you define your scope, make sure the project has a measurable return tied to your business goals. Because big data projects can be complex, people need to be motivated to work through the challenges, and that happens when your project impacts the business in a demonstrable way. Marc Hayem is VP of Platform Transformation at RichRelevance, a company that helps retailers provide personalized recommendations to shoppers.

“I think the important thing when you get into big data is to be able to prove the value rapidly, which is to really pick the right problem and demonstrate very rapidly that you can find solutions to that problem. That you can create value around that problem… If you have identified something that will give you a competitive advantage and the technology is applied right, then the payoff can be monumental.” – Marc Hayem

2. Choose your technologies carefully, based on the challenge you’re trying to address and your organizational culture.
“Pick the tools that work and ignore all the religion that’s out there.” – Andrew Robbins, CEO, Paytronix

You should only start to investigate technologies once you’ve defined your problem. Many of the big data leaders we spoke to acknowledged that the big data technology ecosystem can be complex, and cautioned against being driven by the current frenzy to adopt a particularly hot technology. Their advice is unanimous: start with one problem, start small, and work backwards from there in picking your technologies. Always pick the tools that solve the problem at hand and that increase your team’s productivity rather than create obstacles. Andrew Robbins discussed how heated the debate can be:

“I think one of the things that surprised me the most was just how fragmented the tool sets are. It really seems like the wild west of different components, and how religious people are about which component you’re using… ‘If you’re using Hive, you must be crazy. You must use Impala. Anybody who is not using Impala is just… that doesn’t make sense.’ Pick the tools that work and ignore all the religion that’s out there. Be practical. Pick the tools that work. You can always switch them out in the future if you need to.” – Andrew Robbins

Marc Hayem shares his perspective on what makes a good fit:

“Evaluating the tools can be overwhelming. There are new tools that come out constantly. There is a tendency to always look at the next shiny thing that comes out and think it will solve even more magical problems. At some point, you have to settle. You have to choose your tools – the tools that you’re comfortable with, and, more importantly, that you have the staff for. This is basically your tool set. That’s what you’re going to use. With this ecosystem of open source tools, there is definitely a tendency to constantly go after the next big thing. It’s something that you have to fight a little bit. We have used a lot of open source software… Essentially, we believe that when you use open source solutions there is a community behind those tools. The tools get better over time, very, very rapidly.” – Marc Hayem

Marc’s comments illustrate that in evaluating technologies, vendors, and platforms, it’s important to consider what’s a good fit for your organization based on shared values like transparency and innovation. Paytronix’s head of technology, Stefan Kochi, also believes this is an important factor:

“Once we decided to implement a big data solution, we started looking at different providers, different vendors. The initial guiding principles were the ones that we use for other decisions we have made, such as: they have to feel like an extended part of our company… Some of the things we look for are – what is the technology based on? Open source versus proprietary? How easy is it for them to innovate? Innovation is critical. Do they serve things that we need? We have some guiding principles that we apply in general: the transparency of the company, how easy it is to communicate with, and how solid and mature the product is. Pentaho was an attractive option early on. They use open source technologies, and that was very attractive to us. Paytronix uses a lot of open source technologies, so right there you have a connection with the approach that Pentaho has taken.” – Stefan Kochi

3. Identify key players on a cross-functional team

While in some cases a big data implementation can be done by one person or a very small team, the general consensus is that a dedicated, cross-functional team will ensure success. This is critical to ensuring that business needs are understood and that data is successfully prepared and made accessible to meet those needs. So what roles are needed? We asked our big data leaders and our internal big data services team to comment on what is working, and compiled the results. While structures vary from organization to organization, here are some key roles to consider.

  • Executive Sponsor – This senior-level person understands the business needs, rallies support, and funds the solution. Andrew Robbins is an example:

“Paytronix is full of bright, curious, empathetic people. I wasn’t the star of this… we have a really bright engineer who is at the forefront of thinking about [big data], and I probably just provided some air cover so that we’re safe to go after it and be successful.” – Andrew Robbins

  • Business User – This individual defines and prioritizes the business requirements and then translates them into high-level technical requirements.

“My favorite part about what I do currently is gathering requirements and actually really thinking about what our next product’s going to be. What our next feature’s going to be. Talking to our clients, and talking to my internal clients, which is the rest of the team here. Really starting to think about a new feature, a new product, gathering those requirements, and thinking about design. I love working with the engineering team, really trying to think about how to approach problems in several different manners, and really trying to come up with a creative solution so our clients can benefit from it.” – Saad Khalid

  • Subject matter expert – Especially important in non-technical industries, where the gap between a data developer and the business user can be very large, this person knows the business intimately.
  • Data scientist – This individual understands the data and can extract information from it to meet the business requirements. The data scientist ideally has domain knowledge, a background in statistical analysis, and a basic understanding of computer science.

“As I mentioned earlier, we have hundreds of algorithms that basically constantly try to decide what is best for our customer. You have to be able to build those algorithms. You have to understand the mathematics behind them. You have to understand the technologies. You also need very good data scientists. You need people who understand very well the mathematics behind the predictive modeling that takes place in personalization.” – Marc Hayem

  • Data Engineer/Software Engineer – This individual has a software engineering background and experience in developing software for distributed or multi-threaded applications. This person is typically a server-side Java developer who can implement ETL at scale using various big data technologies. Someone with experience in statistics and machine learning is a plus.

“Paytronix has a small engineering group. We’re not a large firm, but we’re fortunate to have a very talented engineering team. Those engineers who have done a lot of the existing development of the product are also able to explore and go from an idea and a concept to a real product… There is a lot to manage when it comes to big data. We have a dedicated team that looks after our structure and architecture. There is an architect who oversees big data, and we also have two software developers. You need to have a dedicated team to take care of this structure. It is extremely important.” – Saad Khalid

  • Data journalist – We’re hearing more and more about the data journalist – someone who looks at the data from a storytelling perspective. Forbes even predicts that storytelling will be the hot new job in big data analytics in 2015. This person serves as the link between the data and its larger audience, making it understandable to those consuming it.
  • Platform/Systems Architect – This is a senior technical architect responsible for designing the entire end-to-end solution that meets the business requirements, for both short-term deliverables and long-term needs. Typically this person has a software engineering background in large-scale clustering/distributed processing systems and is responsible for technology selection and the implementation process. The architect defines the big data blueprint, or architectural model, that an organization will implement.

“Another lesson that Paytronix has learned is the importance of building a working model first. You can get caught up in the big picture, being very strategic, but you have to build the working model first. If you have a billion transactions that you want to ETL, you should probably ETL a thousand. You get an idea of how the systems are working with a thousand transactions. Another important thing that we learned is that you have to be very focused on system integration, and an architect should always be present as you connect. Systems talking to each other is like building many bridges. You have people focus on each bridge, but someone needs to oversee all the bridges together.” – Stefan Kochi
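
As a minimal sketch of Stefan’s “ETL a thousand before a billion” advice – written here in HiveQL with purely hypothetical table and column names – you might stage a small sample and run the same transform against it first:

    -- Hypothetical sketch: validate the transform on ~1,000 rows before
    -- spending cluster time on a billion. All names are illustrative.
    CREATE TABLE transactions_sample AS
    SELECT * FROM raw_transactions
    LIMIT 1000;

    -- Run the same cleansing/conversion logic on the sample to shake out
    -- type errors, bad dates, and join mismatches early.
    INSERT OVERWRITE TABLE transactions_clean
    SELECT transaction_id,
           cast(amount AS decimal(10,2)) AS amount,
           to_date(created_at)           AS txn_date
    FROM transactions_sample
    WHERE amount IS NOT NULL;

If the sample run looks right, pointing the second statement at the full source table is the only change needed.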

  • IT/Operations manager – This person operationalizes, deploys, manages, and monitors the systems. They should understand Hadoop and big data well enough to deploy successfully across systems and scale to hundreds or thousands of servers, instead of just a few. Yug Muppala, a software engineer at RichRelevance, points out the critical nature of this role:

“We at RichRelevance have a really good operations team that keeps our servers up and running all the time. That is really important – they make the cluster available to us and keep the cluster healthy.” – Yug Muppala

4. Be creative to make the most of your human and technology resources
“Instead of searching for the mythical people, we would take people we know and create a team that could be successful.” – Andrew Robbins, Paytronix

While the list above provides general guidelines for a big data team, it’s only a starting point. There’s a well-known meme that looking for the perfect data scientist – someone who combines analytics, business savvy, development skills, and mathematics – is like looking for a unicorn: they don’t exist. Companies who’ve successfully launched big data initiatives haven’t used unicorns – they’ve been innovative and clever in how they resource their projects and leverage their teams. Andrew Robbins acknowledges this:

“When you make the move to big data, what are you concerned about? What we’re concerned about at Paytronix – and probably the biggest one – is: can you be successful? Then you go back from that and you say, ‘Where are the people? What people are going to implement this solution? Is it internal people, or are we going to go hire people?’ Then people talk about data scientists. Have you seen a data scientist? Do you live next to one? Can you find them on the street? I think one of the things that made us successful at Paytronix was to say that, instead of searching for the mythical, to us a data scientist is a function, not a person. Data science might include a strategist, an analyst, and an engineer. Between them, they can satisfy the need for data science.” – Andrew Robbins

Creative thinking and innovative technologies offer other ways to remove the need for unicorns. There are many emerging technologies that help minimize the dependence on coding and other hard-to-find skillsets – for smaller companies that can’t afford data scientists, these technologies are attractive options. Yug Muppala, a software engineer at RichRelevance, talks about why they use Hive:

“Hive is very easy for anyone with SQL knowledge to start writing queries against the Hadoop cluster. That’s a big advantage. Not many people have knowledge around Pig scripts and things like that, and most of our data science team is very comfortable with writing SQL queries. Hive gives them that advantage, so they can just go write queries themselves instead of having to wait for someone else to write the extraction for them.” – Yug Muppala
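
Yug’s point is that a Hive query is just SQL. As a purely illustrative sketch – the table and column names below are hypothetical – an analyst who knows SQL could run something like this directly against the cluster, with no Pig or MapReduce code:

    -- Illustrative HiveQL an analyst could write directly against the cluster.
    -- clickstream, dt, and product_id are hypothetical names.
    SELECT product_id,
           count(*) AS views
    FROM clickstream
    WHERE dt = '2015-01-26'          -- prune to one day's partition
    GROUP BY product_id
    ORDER BY views DESC
    LIMIT 20;

Behind the scenes, Hive compiles the query into distributed jobs – exactly the extraction work the data science team would otherwise wait on an engineer to write.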

Pentaho’s own visual interface helps here by reducing the amount of code needed to join data, cutting the time Paytronix spent on this task from two weeks to a mere hour and a half:

“We have some data in our transactional database and we have some data in Hadoop. Joining these two together was a hassle before, and Pentaho helped us solve this problem… It’s a simple step within Pentaho… We don’t have to write a lot of code, which we were doing before; it’s a simple process of dragging and dropping steps to connect these different data sources.” – Yug Muppala

5. Look to the future
Last – as you look ahead to building a team in 2015, there are a few things to keep in mind:

  1. Consider the cloud. More and more companies are running all or part of their big data environment in the cloud, and the cloud is becoming more mature and secure. Look for team members with experience in the cloud, in addition to those who have dealt with data governance and compliance issues.
  2. Consider self-service analytics. Whether the end user is a customer or an internal user, you’ll need to consider how to make the insights created from your big data environment available for consumption both inside and outside your firewall. How will you deliver high-quality, governed data to end users for analysis? Will you embed analytics in customer-facing software, or perhaps within an enterprise application?
  3. Consider the profile of people willing to tackle these big data challenges. In addition to experience with the relevant technologies, look for people who embrace and learn from the challenge that big data provides. Marc Hayem says, “The people I’ve worked with are very much start-up people. They are a little bit more adventurous than your average IT person.”

Meet the Mavericks:

Andrew Robbins, CEO, Paytronix
Learn more about Andrew’s journey with big data here.

Marc Hayem, VP of Platform Transformation, RichRelevance
Learn more about Marc’s journey with big data here.

Saad Khalid, Product Manager, Paytronix

Stefan Kochi, Head of Technology, Paytronix

Yug Muppala, Software Engineer, RichRelevance


Pentaho’s first-mover status in the big data space

February 8, 2012

I am very excited to share the news that Pentaho is cited as the only ‘Strong Performer’ in The Forrester Wave™: Enterprise Hadoop Solutions, Q1 2012 (February 2012).

The best way to summarize Pentaho’s position in the market is straight from the report by James G. Kobielus, Senior Analyst, Forrester Research:

“Pentaho is a strong performer with an impressive Hadoop data integration tool. Among data integration vendors that have added Hadoop functionality to their products over the past year, it has the richest functionality and the most extensive integration with open source Apache Hadoop.”

We believe that the inclusion of the Pentaho Kettle data integration product in the first Forrester Wave: Enterprise Hadoop Solutions report is a strong testimonial to Pentaho’s first-mover status in this space. The scores awarded to Pentaho are a testament to the fact that Pentaho is helping companies operationalize big data by addressing critical pain points associated with Hadoop data integration and easy analysis of big data.

I encourage you to access the full report and see why Pentaho was named a strong performer and how we stack up against other vendors.

Richard

About the report
The Forrester Wave™: Enterprise Hadoop Solutions, Q1 2012 report included 13 enterprise Hadoop solution providers in its assessment. The firm examined past research and user needs assessments and conducted vendor and expert interviews in order to formulate a comprehensive set of 15 evaluation criteria, which Forrester grouped into three high-level buckets: current offering, strategy, and market presence.


Analyst Insights: How Pentaho BI 4 is changing the game for SMB companies

July 7, 2011

Recently I had the pleasure of speaking with Albert Pang, a veteran of the application market space with more than 25 years of experience and the founder of Apps Run The World. Our conversation centered on Pentaho BI 4 and the use of business intelligence by the casual business user – something that in the past has been impossible due to the highly technical nature of business intelligence deployments.

In his recent blog post, Business Analytics for the Masses, Pang describes this problem and how it has impacted the SMB market. He explains how highly technical BI tools have been a roadblock for SMB companies that do not have intensive IT resources to allocate to BI projects – something that independent BI vendors in the market (Pentaho, Tableau, and Qliktech) have recognized and leveraged as an opportunity to improve the BI market.

Pang goes further by saying “Pentaho, which has made its name in the open-source BI marketplace, introduced Pentaho BI 4 Enterprise Edition in June. One of the key targets is non-technical business users who can use the new product to create highly formatted, interactive reports with zero training or involvement from the IT department.”

Part of our mission at Pentaho is to make it easy and cost-effective for SMB companies to reap the same benefits historically obtainable only by blue-chip companies. With Pentaho BI 4, we’ve extended this mission to encompass casual business users too. As this “new wave of BI” becomes more pervasive, more users than ever before will look to harness BI for their everyday business needs. Pentaho is already seeing this effect with customers including Loma Linda University Healthcare, SpecSavers, Centro, and others.

I encourage you to read the blog post, Business Analytics for the Masses by Albert Pang, and let us know your thoughts.

Farnaz Erfan
Product Marketing
Pentaho Corporation


You can run but you can’t hide

May 18, 2010

Social media has changed the whole customer-vendor relationship. If our customers are not happy, they have hundreds of ways to tell a lot of people. Vendors can run, but they can’t hide.

Howard Dresner gets it. He tapped into the social media world to gather feedback from a sampling of over 450 business intelligence users about their real-world experiences with their vendors, and compiled it in the new market study by Dresner Advisory Services, The Wisdom of Crowds Business Intelligence Market Study™. Here are a few of the highlights:

  • Pentaho, in the Emerging Vendor category, was ranked against six other vendors and received the top ranking in value for the price paid and quality of consulting services.
  • Pentaho is the clear leader in open source BI (OSBI), ranking above other OSBI vendors in every category.
  • Pentaho received the second highest marks overall among BI vendors.
  • 100 percent of responding customers recommended Pentaho.

Red Hat’s slogan comes to mind in this situation: “Truth Happens.” With the growth of social media, you cannot hide if you have bad support or a crappy product – social media puts the power back into the users’ hands.

You can’t just say you are the best or largest – you have to be it. It is easy to throw around meaningless statistics about having “over 100,000 community members” when those 100,000 merely visited a website or forge and registered to get community editions or view a forum. What is worse is when over 75% of these “community members” visited that site only once and never returned. Making these grossly misleading claims is bad for commercial open source in general and horrible for the specific vendors.

Beyond the Wisdom of Crowds report, I wanted to point out a few other independent surveys that get at the truth.

Open Source Solutions: Managing, Analyzing and Delivering Business Information
BeyeNETWORK Research Report by Mark Madsen

The End of Enterprise Software: Open Source Finds an Opening
By Wayne Eckerson, Director, TDWI Research

If you are interested in what your peers are saying about Pentaho in The Wisdom of Crowds Business Intelligence Market Study™, the survey is available for download at: http://www.pentaho.com/wisdom_of_crowds/

Richard

