
How to Turn Your Data into a Compelling Story


At Looker, we get excited about data. How many people voted in the last presidential election? Who were those people? Who gets the most screen time in Game of Thrones? What can data tell us about building our Fantasy Football draft?

But as data people, we also know that data on its own isn’t always enough to incite action.

Humans think in stories.

It’s true. As a species, we’re hardwired to relate thoughts through images and stories. For example, if you tell someone: “Quarterbacks are reliable players in Fantasy Football, Running Backs are not” they will find it interesting, sure... but what actual value is there in that information? They need to feel a connection to the data in order to apply it to themselves.

However, if you say: “Data shows that NFL Quarterbacks have been strategically protected from injuries by their organizations over the past 8 seasons, making them the most reliable picks for Fantasy performance. Running Backs, on the other hand, get hurt pretty frequently and are therefore extremely unreliable, with only one player making a repeat appearance at the top spot in the last 8 years.”

Now that is something they’re going to remember when they’re putting together their Fantasy draft.

If you can learn to present your data through a compelling story, you’ll have a sure-fire recipe for success.

To help you better understand, consider this point made by Tomasz Tunguz in his book, Winning with Data:

“In its simplest form, a story is a connection between cause and effect. Peter tricks the villagers too many times with a false “Wolf” cry. For his transgression, he suffers the loss of his flock. When Aesop wrote his fables, he inculcated in Greek children a sense of cause-and-effect relationships that would serve them all their lives. Like those morals, the implications of data are best conveyed through stories.”
- Tomasz Tunguz, Winning with Data

Data Stories at JOIN 2017

This year we’ve lined up 12 stellar Data Stories presented by Looker customers Buzzfeed, Autodesk, Twilio, Blue Apron, GitHub and more! We’ve asked them to stand up at JOIN - Looker’s annual data conference - and tell stories around their data.

These sessions are extremely popular with JOIN attendees, and it’s always interesting to hear their stories about how they became data-centric, chased down data, found value in data, put data to creative uses, and more.


Here are the top tips we’ve shared with our presenters to create compelling data stories:

  1. Test your story idea. Once you’ve come up with an idea, we suggest you write it down in one or two sentences and then ask yourself - is my idea or story new? Is my idea interesting? Is it factual and realistic? Can the call to action be executed? What is your thesis? Does it matter?

  2. Create a beginning, a middle and an end. Most stories involve a problem or a conflict and a resolution. Resolutions usually teach a lesson or prove a point, in other words they have a thesis that the whole story leads up to proving.

  3. Plan the structure of your talk. We love how TEDx breaks down their recommended presentation structure.

    1. Make your audience care by using a relatable example or intriguing idea.
    2. Explain your idea clearly and with conviction.
    3. Describe your evidence.
    4. Provide a call to action - the how and why of implementing your idea.
    5. Reveal the new reality - how the lives of your audience members will be affected if they act on your idea.
  4. Keep your slides simple. There’s only one rule for your slides: keep it simple. Stick to a few words or an image per slide, so that each one passes what Nancy Duarte calls “the glance test.”

  5. Practice. Just do it, and do it a few times for different people. Ask someone to time you and call you out when you are moving around too much. Be open to and accepting of feedback. Remember:

    1. Your presentation begins the moment you walk towards the stage, so practice a confident walk.
    2. Use pauses instead of filler words like “um, ah, so, like, you know.”
    3. Generally you will speak faster to an audience than you do during a practice session. Be mindful of pace in your practice runs.
  6. Have fun! This is the most important advice. The audience will feed off of your energy. Be yourself and keep things lively.

“Data alone is not powerful enough to inspire people to act or change. It must be interwoven with passion and emotion and conviction. By becoming great storytellers with data, we can change the way our businesses operate.”
- Tomasz Tunguz, Winning with Data

Need more inspiration? Attend the Data Stories sessions at JOIN 2017 and watch your peers present. We’ll also post recordings of our Data Stories post-JOIN for everyone to learn from and share!


Looker and Databricks Partner to Bring Data Scientists and Business Users Together


We are very excited today as we announce a partnership between Databricks and Looker. We have seen customers using these products together to provide an easy and intuitive way for business users to visualize and discover the powerful analytics results of Apache Spark. Using Looker and Databricks, you can experience the following benefits:

  • Easy to Use – Make analysts productive instantly through easy to use visualizations

  • Fastest User Adoption – Enable widespread use of analytics throughout your organization through fast user adoption

  • Process More Data Faster - Use the fastest implementation of Spark, provided by Databricks

  • Answer Your Toughest Questions - Run the most complex analytics problems, providing answers to your toughest questions. (See this benchmarking study)

  • Bigger Insights, More Intuitively - Ease of use on your toughest problems makes bigger insights more intuitive

  • Game Changing Insights – Make complex analytical answers available to more users to drive cross-company insights that change the game

Spark is the fastest analytics processing engine available. Databricks provides the Unified Analytics Platform, built on Spark by the inventors of Spark. It is performance tuned to run 7-10 times faster than vanilla Spark, plus it has advanced security and content connectors. The Unified Analytics Platform provides a collaborative platform for data scientists, data engineers, and business users. It enables data scientists to create and run analytical models, including machine learning and artificial intelligence. It enables data engineers to run scheduled projects to extract, join and clean up data. Its performance and flexibility make ETL one of Databricks’ most popular use cases.

When you combine the analytics power of Databricks with the intuitive, user friendly interface of Looker, you provide a way for business users throughout your organization to use analytics to realize deeper insights and make better business decisions.

You can learn more about how customers are using Looker and Databricks together in this webinar, entitled “5 Keys to Build Machine Learning and Visualization Into Your Application.” John Huang, engineering and data analytics lead at Handshake, will show how he used Looker and Databricks to build a recommendation engine into the Handshake application, incorporating machine learning and visualization without an army of data scientists and programmers.

To try Databricks, go to https://databricks.com/try-databricks
To try Looker, go to https://looker.com/free-trial

Looker 5: Finally, a Data Platform for Everyone


When we launched Looker 4 last year, we were making a big bet on the data platform. That meant dramatically improving our developer experience, unveiling a new way to find content, and introducing the ability to send Looker data just about anywhere. Back then, Looker 4 was all about possibility.

Looker 5 is about accessibility.

Looker 5 takes the power of our data platform–custom visuals, custom applications, custom workflows–and makes them so easy to build and deploy that just about anyone can do it.

That means it’s simple for everyone to integrate Looker into their workflow, whether it’s asking Looker questions through Slack or changing the priority of a ticket in Github through Looker.

There’s something for everyone to get excited about in this release.

Looker 5, the Data Platform


Looker 5 makes building on the Looker data platform easier than ever before.

Looker 5 gives customers even more value from their existing data by applying any of the brand new Looker block types:

  • Viz Blocks, which allow any Looker customer to add custom visualizations to their instance, whether they’re Looker-provided Viz Blocks or custom ones they build themselves

  • Data Blocks, which let customers enrich their analyses by joining in pre-modeled public datasets like weather data or demographic data

  • Data Tools, which allow you to create curated experiences for end-users (e.g. a web analytics tool like Google Analytics or a retention analysis tool), with simple, straightforward, interactive user interfaces

And integrating Looker into all your workflows has never been easier, with our new Action Hub, which gives Looker users one place to connect Looker to outside services. Want to send your data from Looker to Amazon S3, Google Cloud Platform, Dropbox or Box? Want to update Github and Zendesk tickets right from within Looker? Want to send data to a colleague on their phone?

Now it’s all possible, with just the click of a button.

Looker 5, the BI Tool


But don’t worry, we’ve also added a raft of new features to Looker’s BI Tool to enable Looker Explorers and Developers to use Looker exactly how they need to.

Analysts who write raw SQL will love our improvements to SQL Runner, which allows them to visualize results from written SQL. Visualizations are helpful ways to understand and communicate data, and this will allow more technical users to do that quickly and directly from SQL.

We’ve also added Data Merge, a way to combine queries of disparate data sources by pointing and clicking (even when the queries run on different databases). We don’t want Looker users to be limited by data siloed across different databases.

There are a host of other improvements to the BI Tool included in Looker 5--dozens of new statistical functions, git branching to improve developer workflows, and improved dashboard features--all aimed at making Looker’s BI Tool a pleasure to use.

Looker 5, More Than One Tool in the Toolbox

This release is the first time that we’re looking beyond BI by Looker, and towards more specific and powerful ways for companies to get value from their data.

Applications by Looker are powerful new end-to-end solutions for users to dig into department-specific use cases.

Built on the Looker Data Platform, Applications are built directly from your data sources, and come with pre-built dashboard workflows and integrations, enabling you not only to rapidly get up to speed analyzing your data, but also to take action on it right from the tool itself.

If you’re a marketing professional using the Google Stack, you’ll want to take a look at Marketing Analytics by Looker, an end-to-end solution that leverages Google’s Digital Marketing stack, the Google BigQuery Data Transfer Service, and Looker’s purpose-built blocks. Digital Marketers can manage ad spend, monitor referrals and understand marketing attribution across all their marketing channels, all from a single user-friendly application.

We’ll also release applications for Operations professionals with IT Ops Analytics by Looker, and for product professionals with Event Analytics by Looker.


Empowering a New Kind of Worker

Today’s business challenges demand a new kind of worker, pioneering a new kind of workflow.

Take a look at job postings on LinkedIn, and you’ll notice that it isn’t just “analyst” jobs that require a proficiency in data. Data is a core part of every marketer’s toolkit, every sales leader’s job, every customer success team’s work, every logistics operation.

And that means that data isn’t just something you can rely on an analyst to do for you anymore. Data fluency is now so critical that it can’t be separated from the core function of so many jobs.

That’s why we’re so focused on the Looker platform and so excited about Looker 5 making it easy for everyone to get value from data.

Want to get a live walkthrough on some of our favorite Looker 5 features? Register to attend a live webinar hosted by Colin Zima, our VP of Product.

Want to learn how some of Looker 5’s features work specifically for your business? Sign up for a free personal demo of Looker 5 here.

Cohort Analysis: 3 Creative Applications


A cohort is defined as a group of individuals all sharing a common characteristic over time. As such, Cohort Analysis refers to the analytical pattern of comparing different cohorts to understand how they differ.

What does it do?

Cohort Analysis can help inform decisions in marketing optimization, customer success outreach, user experience design, and so much more.

With so many keen minds within the Looker community, we often hear about some pretty creative applications of cohort analysis. We thought we’d share a few of our favorites:

Finding ROI on Brand Marketing Radio Campaigns

Simply Business, a UK based online insurance broker, used location and time-based cohort analysis to determine if radio advertising was delivering on its promise to grow business.

Even as opportunities for online marketing seem to multiply overnight, offline marketing channels continue to be excellent drivers of new customer engagement. Radio advertising, for example, remains an affordable choice for both brand-building and direct response campaigns—even for the most innovative Internet businesses. But how does a modern business, whose marketing efforts are built on ROI, apply this online marketing mindset to traditional offline tools?

Simply Business ran an experiment to see if they could attribute ROI to their radio efforts.

They designed a regionalized campaign using a sophisticated data pipeline and data mining tools that would enable them to identify which website visitors had probably arrived as a result of a recent radio ad.

The company could then establish a cohort of those visitors and track them, all the way through the purchase funnel, with tremendous clarity - and, in turn, calculate a reasonable attribution of new revenue to the radio initiative.


Dialed in to the Data

Unlike other channels, such as paid search, which typically acquire customers much further along in their purchase consideration, the company expected radio to be more about driving awareness and therefore to have a longer payback period. However, by observing the behavior of the radio audience segments over time, they were able to validate that brand nurturing does eventually drive positive ROI directly—an insight which spells out the business case for further investment in radio advertising.

Read more about their story here.

Applying Medical Trial Survival Analysis to Subscription eCommerce

Harry’s, a subscription-based razor delivery service, was trying to decide which product or product bundle they should promote to first time subscribers. To do this, they dug into their data to see which products or product bundles were purchased by their highest-valued customers.

While this seems fairly run of the mill, Harry’s user experience results in a more complex user lifecycle, so a normal grouping of the longest-subscribed users or some other method of finding the “best” customers just wouldn’t be enough. So they got creative and incorporated Kaplan-Meier analysis into their deep dive.

The Kaplan-Meier estimator is a statistic used to calculate the survival function from lifetime data. In medical research, it is often used to measure the fraction of patients living for a certain amount of time after treatment. In marketing, it can be used to analyze the number of customers still purchasing after a defined period of time. Using this method of analysis, Harry’s could compare cohorts of users throughout their customer journey to see what the overall experience is for any one cohort at any one stage in the journey, regardless of whether the customer is still subscribed today.
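For reference, the Kaplan-Meier estimator of the survival function has a simple product form. In subscription terms the “event” is a cancellation: at each time t_i where at least one cancellation occurs, d_i is the number of subscribers who cancelled at t_i and n_i is the number still subscribed (and still observable) just before t_i:

S(t) = \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right)

Because each factor only counts customers who are still observable at that point in time, recently acquired cohorts can be compared fairly against older ones, even though their full lifetimes haven’t been observed yet.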

To research which product or bundle they should promote to first-time subscribers, they looked at their current subscribers. First they cohorted current and past subscribers over time, and then lumped them into groups based on the items they purchased first so they could compare the groups.

Looking at the results, it suddenly became clear that users who bought only the razors - as opposed to those who purchased razor and shave cream bundles - on their first purchase were, in the long run, more valuable as customers, as they were more likely to “survive” over time.


Watch Harry’s data team tell their story at JOIN 2016.



Bonus: Try Looker on your own data today! Check out this Discourse post which provides details on how to implement this into your model right away.

Zeroing in on an Issue with a Product Update

When Venmo released a new update in June of 2014, their Support Team noticed an increase in tickets generated from users experiencing confusion around payments being issued instead of charges, as intended.

They shared this information with the Product Team to investigate whether this increase was just a coincidence or whether there was actually a problem with the latest release.

Digging into their user behavior data, the Product Team set out to determine if the increase in tickets was a result of the update, but they soon faced a problem: they didn’t have data on user intention. They only had data on user activity.

Inspiration in a Support Ticket

When requesting assistance, one user had shared screenshots of how they remedied their accidental charge by immediately requesting double that amount from their friend, both repaying what they had sent and getting the money they had originally intended to request.

This seemed like an intuitive action that other users might take to get back the money they accidentally sent and was something they tracked in the app.

So, in Looker, they made a new cohort of users who paid a friend and then immediately requested 2x that amount back. The update in question had only rolled out on iOS, so they compared the instances of this happening among iPhone users to those on Android and Web.
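To make the logic concrete, here is a minimal pandas sketch of how such a cohort could be flagged. The table and column names are hypothetical (this is not Venmo’s actual schema): it marks users whose payment is followed within a few minutes by a request for exactly double the amount, then counts those users by platform.

import pandas as pd

# Hypothetical transaction log: one row per payment or charge request.
txns = pd.DataFrame({
    "user_id":   [1, 1, 2, 3, 3],
    "action":    ["pay", "charge", "pay", "pay", "charge"],
    "amount":    [20.0, 40.0, 15.0, 10.0, 12.0],
    "platform":  ["ios", "ios", "android", "ios", "ios"],
    "created_at": pd.to_datetime([
        "2014-06-10 12:00", "2014-06-10 12:02",
        "2014-06-11 09:00",
        "2014-06-12 18:30", "2014-06-12 19:45",
    ]),
})

txns = txns.sort_values(["user_id", "created_at"])

# Pair each row with the same user's next action.
nxt = txns.groupby("user_id").shift(-1)

# Flag payments followed within 5 minutes by a request for exactly 2x the amount.
accidental = txns[
    (txns["action"] == "pay")
    & (nxt["action"] == "charge")
    & (nxt["amount"] == 2 * txns["amount"])
    & (nxt["created_at"] - txns["created_at"] <= pd.Timedelta(minutes=5))
]

# Compare how often the pattern shows up on each platform.
print(accidental.groupby("platform")["user_id"].nunique())

In Looker itself, the same logic would typically be expressed in the model (for example as a derived table), so the cohort can be reused and compared over time.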

Visualizing this data made the answer to their question very clear:


No, this support ticket increase was not a coincidence, and they had a hypothesis about what was causing this issue: the new update changed the default action from “pay or charge” to simply “pay” with a secondary option to “charge.” The intention of this change was to make it even easier to pay their friends, but perhaps they’d made it too easy.

The Venmo team got to work and released a new version that returned to “pay or charge” less than a month later, and by monitoring this same cohort comparison, they were able to see instances of this mistaken payment return to normal levels.

Listen to Venmo tell this story now.



Getting Started with your Data

Cohort Analysis is a simple yet powerful tool that every analyst can employ. Understanding user groups and their behavior isn’t limited to a single group or person within an organization, so with access to data and some creative thinking, anyone can derive value from comparing different user cohorts.

If you want to try this on your own data and already have Looker, check out the following Looker Blocks to help you get started:

If you don’t have Looker yet, check out this site from our partner Stitch, which explains how to do this in SQL and Excel.

Oh, and request a demo today, so you can see how Looker makes Cohort Analysis quick, easy, and repeatable.

Taking “The Big Picture” To Your Display Screen


Data visualization and digital signage are a match made in heaven. When combined, the two concepts bring “Big Picture” data visualization to a targeted audience anywhere, at anytime. This is why a new partnership between UCView and Looker is an exciting new step in innovating signage technology.

UCView is a digital signage software company that has been pushing the envelope in this technology field since 2006. We created a versatile signage software solution for a multitude of industries including hospitality, governmental institutions, educational establishments, corporate offices, retail, transportation agencies and even industrial lines. With our system clients can create, schedule and manage entire visual content playlists on many screens, in multiple locations, all at once.

Now, with the Looker Plugin API in UCView’s CMS, our mutual customers can directly integrate data visualization from the Looker platform into their digital signage playlists.

This provides a great tool for data analysis, motivation and fostering relevant discussion. Relevant statistical data can now be quickly displayed on screens in places such as corporate board meetings, college campuses, metro stations, government offices, and more. Data trends can be quickly shared and analyzed with the goal of informing or even motivating your audience using an attractive visual platform.


Data visualization combined with a dynamic medium of digital signage means that you can take ‘big picture’ data and make it resonate with your audience. For example, a company can display their monthly sales performance statistics along with training videos, announcements and other live data - all visualized nicely on a single screen from different sources.

Beyond Traditional Signage

UCView even goes beyond the regular applications of digital signage by allowing its customers to send signage playlists to the mobile devices of relevant users. UCView’s customers are not just limited to large displays when it comes to disseminating data to their audience. Things like training courses, promotional announcements and now Looker data can be viewed easily on workers’ mobile devices no matter where they are.


As UCView continues to provide industry-leading digital signage software and features, we are always in search of the best possible solutions for our customers, and we are happy to welcome Looker as our newest partner in advancing our customers to the next level.

If you’re interested in learning more, check out a free demo of Looker and UCView!

The Rise of The Instrumented Workplace


Last year at JOIN, Looker’s annual data conference, I talked about the three waves of BI.

The first wave started 30 years ago with monolithic stacks that were reliable but inflexible. You could only get answers to specific questions, but you could trust those answers. It was also the era of the data rich and the data poor, where those without access to data starved as they waited for someone to finally have time to help them.

That frustration led to the second wave: a revolution toward self-service tools where users grabbed whatever data they could lay their hands on and threw it into data cleansing, blending, and visualization tools for analysis. But that revolution came at a cost. Scattered tools created a mess of silos that didn’t talk to each other, and you lost the ability to speak in a common language with anyone else at your company. Enter data chaos.

Today, we’re in the third wave of business intelligence: the era of the data platform. In this new era, databases are fast so data can stay where it is, and users can begin to access whatever data they need, from one central source of truth that provides a shared language for the entire company. One platform, one tool…no more waiting, no more hunger for data, and no more chaos.

From What to Why

At this year’s JOIN conference we went deeper on the “Why”.

Why does this new wave matter?

Because this new wave, the third wave, is allowing companies to finally create data culture.

“Data culture” is a term we hear a lot. It describes the idea of everyone in an organization using data to make informed decisions rather than guesses based on a hunch or loud opinion. I remember talking to our first customers and they were telling us “Looker is changing our culture, we’re building a real culture based on data.” I turned to Lloyd, Founder and CTO of Looker, and I said, “We can never tell anyone that. People are going to think we’re crazy.”

But our customers kept telling us about this culture shift, and sure enough, we’ve actually seen it become real.

It’s becoming real because data is changing. It’s shifting from something that is used by pockets of the company to become something that is relied on throughout the organization. Data is finally driving conversations and decisions everywhere.

So what changed? How has the third wave changed cultures?

In hindsight, we can see that data bottlenecks and data chaos were holding people back. It’s like Maslow’s hierarchy of needs: if you’re hungry you can’t focus on these higher value needs like self actualization.

If your people are hungry for data, then you aren’t fully actualizing your data’s potential in your organization. But what happens when the hunger for data is sated? It then becomes second nature to start asking more questions and expect to see data behind every decision. Then your culture changes.

When you put data in the hands of every person, not just the CEO or the CFO, you allow anyone to ask and answer their own questions with confidence. Then - and only then - will you see this new data culture.

The Instrumented Workplace

What’s even more interesting is that there is a new workforce emerging that is hungry for data and for this data-first environment. If the 1990s were about the Knowledge Worker, today is about the Instrumented Workplace, where people across the organization are plugged into all the data that drives the business. Whether it be Slack or Gainsight or Salesforce, Instrumented Workplaces are constantly connected. And this new workforce wants to use data to look for new efficiencies and new competitive opportunities to push the company to the next level. They are informing what they should be working on today by what the data is telling them, not by a hunch or by a directive from a HiPPO (Highest Paid Person’s Opinion).

These are people with titles like Growth Hacker, Customer Success Manager, and DevOps Engineer - all roles that we rarely saw even four or five years ago.

And the common denominator between all of this is data, which is running back and forth through all of these applications and workflows.

The Data Leader

A data culture doesn’t happen on its own. This kind of change requires cultural leaders.

We see across our customers that the data people are becoming those leaders. These leaders are bringing data to meetings and using it to make and support their decisions. They’re responding to the questions of others with data that will not only answer the question but also inspire others to dig deeper, asking new questions and finding new answers and patterns. Being a data leader is not a title you see on a resume, it’s a way of working and inspiring those around you to work.

Piwik PRO Marketing Suite + Looker = better, quicker, deeper business insights


Analyzing and measuring the behavior of website visitors is crucial for any business with a digital presence. This is also what marketing tools were designed for. They provide you with a way to inspect your users’ activity - you can see where they come from, where they exit your page, how much time they spend reading your content, which actions they perform, and much more. Then you can act on the gathered data and improve the performance of your websites, apps, and campaigns by making them more appealing for your customers.

With Piwik PRO - a marketing suite available in both the cloud and on-premises environments - on top of useful functionalities, you also retain full control over your data. The information you collect is never shared with a third party, and you can apply additional security measures to your data. All this allows you to collect valuable business insights while staying in compliance with the most stringent data privacy laws around the globe.

Because of its approach to data privacy, Piwik PRO has become the tool of choice among numerous financial and governmental institutions, including Accenture, the Government of Canada, and the European Commission.

However, when it comes to business technologies, we know that two heads are always better than one. That’s why we partnered up with Looker to create a convenient solution that lets our users squeeze even more out of their data - a Block integrating Piwik PRO web analytics data with high-quality business intelligence.

See what new improvements the integration brings:


Real-time analysis

For marketers, time is a valuable and very limited resource. You need to get things done as quickly as possible. Fortunately, you don’t need to wait around for data archival in order to create relevant reports. The integration gives Piwik PRO access to real-time reporting. Looker easily adapts to your particular data set and automatically presents the desired information.


Combine dimensions and segments

Integrating Piwik PRO with Looker allows you to combine dimensions and segments with no limit on the number of metrics you can track. You’ll be able to create and share visualizations that tell a complete story with your data, free from virtually any technical limitations.

Hassle-free customization

Custom Reporting with Looker doesn't require extensive technical knowledge. Anyone can create new visualizations and explore their Piwik PRO data. All you need to do is create reports that compile only those metrics you know to be beneficial to your organization. At the same time, experienced analysts can go deeper and easily create more complex reports to share with the whole team.

Want to get started with Piwik PRO and Looker?

We hope that the use cases presented above shed some light on exactly how combining the power of data from Piwik PRO and the user-friendly interface of Looker gives you deeper insights into your user’s behavior to make better and quicker business decisions. But don’t just take our word for it - click here to give it a spin!

And if you want to learn more about Piwik PRO, just visit our website.

Fantasy Football: A Guide for Streaming Defenses


If you’re streaming fantasy football defenses -- selecting a new defense week-by-week -- finding the right defense to start can be difficult.

While the projections are usually a decent starting point for picking a defense, I wanted to analyze data from the 2016 season to see if any additional insights or patterns could be gleaned, such as home vs. away games, time of the games and the day of the week the game is played.

First, let’s take a quick look at the distribution of scoring for defensive fantasy points during the 2016 season:


Looking at the cumulative probability and the number of games in which defenses earned a particular score, the data shows that about half of the time (specifically, 51.4%) you can expect to earn 6 points from your defense.

Defenses are more likely to fall short of 6 points than they are to exceed it: 44.4% of the time defenses earned 5 points or fewer, and 40.8% of the time defenses earned over 7 points. The huge skew towards the upside is thanks in large part to Kansas City’s 35-point day against the Jets in 2016.

When we look at the 2015 and 2016 seasons for projections vs outperformance, it looks like we have some pretty reliable adjustments we can make to Yahoo’s defensive expectations:


The trend shows that the higher your projection, the more likely it is that your team will undershoot that projection. In other words, don’t hold your breath on actually getting those 13+ points projected for your defense.

Digging further into projected scoring, let’s try to figure out how much we should reset our expectations:


In the above visualization, the colored lines represent a group of defenses which were projected to earn a certain number of points (ex. the bright blue line represents defenses expected to earn 10+ points during a game). Since this is a cumulative distribution, the defenses under the 50% line are underscoring their projection.

We see a similar pattern here: the teams with the highest point projection (ex. the bright blue line representing 10+ projected points) tend to underperform by 3-4 points.

As you go farther right along the x-axis we see the opposite for teams with the lowest projected points. The red line represents teams projected to earn less than 4 points, and we can see that these teams tend to earn slightly more points than projected.

The more extreme the projection, the more likely it is to be off. It looks like we can expect an extra point on the lowest estimates and a 3+ point regression from the highest estimates, so don’t rely on your defense to carry (or save) the day.

Now that we’ve looked at what to expect from defenses, let’s try to gain some actionable advice for selecting a weekly defense. Below, we look at 2016 defense fantasy performance by game day:


Looking at the weekly breakdown, Monday night defenses consistently underperform. On average, they are below projections by almost 2 points (-1.73 to be exact).

By comparison, defenses playing on Thursday or Sunday marginally overperform, by 0.50 and 0.05 points respectively.

So, if you are trying to decide between two similarly ranked/projected defenses and one of the teams has a Monday night game, our advice would be to NOT select them for the week and instead pick a team playing on Thursday or Sunday. While it’s a small difference, I think many of us have been in situations where an extra 1-2 points would have made the difference.

Another factor to take into account when selecting a weekly defense is whether they are playing a home or away game:


Looking at the data for the 2016 season, defenses playing at home outscored their fantasy projection by 0.33 points on average. On the other hand, defenses playing away underscored their fantasy projection by 0.45 points on average. Again, if you have a good defense this should not change whether or not you start them. However, if you are streaming defenses this is one more factor to consider.

Taking into account game night and whether or not a team is playing on the road, be careful with away teams on Monday night!

Finally, let’s take a look at how the time that the game is played may or may not impact defense fantasy points. Specifically, let’s compare prime-time games to other game times and see what we get:


In the visualization above we are comparing defense fantasy performance for games played during prime time to games not played during prime time during the 2016 season. Here we see that defenses playing during prime time tend to slightly underperform their fantasy projections, whereas defenses who are not playing during prime time are more likely to score above their projections (although, still not by much).

To bring all the defense data findings together….

The majority of the time (51.4%), defenses earned 6 points during the 2016 season. Defenses which are projected to earn top points (10 or more) tend to score about 3 fewer points than expected, while the lowest projected defenses (less than 4 points) tend to do slightly better than expected. Defenses playing on Monday nights, away, and/or during prime time also tend to do slightly worse than projected.

Again, if you have a great defense, you’re probably better off playing them no matter their game day or time. However, if (like me) you’re streaming defenses every week to see who has the most favorable match up, these data-driven tips from the 2016 season may just save you a few points...and a win or two.


Our Top Picks for Denodo’s DataFest


It was so great to see so many Looker customers and partners under one roof at JOIN 2017 in September. It gave us a real opportunity to learn about the new tools and technologies our customers are using. It also revealed the broad range of innovation advancing across our partner network. From data collection, to analytics, to business decision making, companies are striving to bring agility to each step of the business intelligence process.

Next week, Looker is pleased to be sponsoring Denodo DataFest, the 2nd annual user conference hosted by our data-friends at Denodo.

Things get underway in New York City on October 19th and 20th before rolling into London on October 26th. Both events will be streamed live for those who can’t attend in person.

Sessions at DataFest will explore the ways some of today’s most innovative companies are using Data Virtualization as part of their data stack to become more agile and ultimately drive more business value out of their analytics.

Don’t miss this opportunity to learn how this flexible and agile approach to data integration technology can help you to stitch together the widest range of data sources - in real-time - to create a comprehensive data fabric.

I’m looking forward to...

Day 1 of Datafest NYC will feature a discussion around self-service BI that you won’t want to miss.

Customers are using Denodo and Looker to accelerate critical self-service data initiatives. To shed light on this, my co-Looker and VP of Customer Success, Geoff Guerdat, will sit on a panel of industry veterans to discuss the importance of self-service data access as a modern business driver.

Personally, I’m looking forward to the Boot Camp being held on Day 2 of the New York event.

The Data Virtualization Architect Boot Camp will provide a deep dive into common data virtualization architectural patterns as well as performance optimization, data services governance, scalability and operation.

This promises to be the most valuable and tangible experience of the event, exploring how data virtualization is solving important, real-life business problems.

Whether in New York, London or streaming online, we hope you’ll attend.

See you there!

Highlights from JOIN 2017: The Feedback Wall


If you were among the lucky attendees at the recent JOIN 2017 conference in San Francisco you probably caught the announcement of Looker 5 and all the new product features.

If you missed it, you can find an excellent overview of the market’s direction and Looker’s upcoming plans in Frank Bien’s latest blog.

This was my first time attending JOIN, and I was most impressed by the open, informative dialog between the guests and Looker staff regarding the future direction of the Looker data platform. Nowhere was that more apparent than at the “Product Wall” where attendees were invited to vote on new features and capabilities they’d like to see added to upcoming releases of Looker.

Amazingly, nearly 1,000 individual feature suggestions and votes were submitted!

Looker Blocks

Highlighted on the Product Wall this year were a series of Looker Blocks, each focused on various topics and use cases. The Looker team was interested in gauging user interest in these types of Blocks and measuring what attendees considered most important.

If you’re not familiar with Blocks, you really should check out the Block Directory for details. You’ll find that you can leverage pre-built segments of Looker functionality to powerfully accelerate your analytics.

Crowd Favorites

After two days of voting, a few of the highlights included:

Google Sheets Actions. Looker Actions allow users to perform tasks via pre-defined API calls to nearly any other application, all from a single Looker interface. A frequent request at JOIN was for Looker to perform common tasks such as reformatting cells or changing cell values in Google Sheets, from within the Looker interface - and without having to open Sheets directly. Sheets integration overall remains popular with the Looker community, and Looker works well with Sheets already, both as a data source and a way to export results. If you’d like to learn more about the ways Looker works with Google Sheets, one of our experts has compiled a list.

Most requested source block: Jira. An overwhelming majority of attendees requested Looker build a LookML model for Atlassian Jira. In particular, requesters were asking for help in tracking software development metrics in a simple dashboard. Several were already using the power of the Looker data platform to do this analysis on their own, but having a Source Block would be very helpful. Interestingly enough, the Looker team is already working on Jira integration, so look for this in future releases.

Data Blocks with international/global data sets. Data Blocks, coming soon to the Blocks Directory, are pre-modeled external data sets available to Looker users. Reflecting the increasingly global nature of the Looker community, commonly requested Data Blocks included international data sources. Among the suggestions were global demographics, international financial data, EU and Asian data sources, and a number of others. Of course, the flexibility of the Looker platform allows data teams to import data from existing public data sources today, but if there is a set of data that’s widely requested, the Data Blocks team will determine when (and if) it’s appropriate to add that data as a Block.

On a more personal note, I was lucky enough to assist the attendees with the voting. This gave me the opportunity to chat with them as they reviewed the board and provided their feedback.

Thanks again to everyone who took the time to help Looker understand your needs. With your input we can continue to build you a better data platform into the future!

PS: Don’t forget to check out the all new features in Looker 5!

4 Ways to Level Up Your Customer Support Analytics with Intercom, Looker Blocks, and Blendo


One of the few constants of doing business is that Customer Support encompasses everything you do over the course of your business relationship.

From the sales pitch to contract signing and beyond, Customer Support provides the tools necessary to get the customer established, answers any follow-up questions, offers help, and ultimately delivers value.

But how do we know if our Customer Support efforts are successful?

Traditionally, Support Analytics have lived in the support tools themselves, siloed from the rest of the data obtained about the customer lifecycle, and have offered limited analytical functionality.

Today’s Business Requires Modern Tools

Today, tools like Blendo give you one-click data integration to many cloud services like Zendesk, Intercom, Stripe and Xero to easily sync with your data warehouse. This enables you to explore this data in Looker and easily make it available to everyone it would benefit across your organization.

Intercom is one of the data connectors offered by Blendo, which allows companies to seamlessly sync customer support data into their data warehouse with a schema optimized for analysis.

We created an Intercom Analytics Looker Block to make it easy for any company to turn data from Intercom into actionable results.

While Intercom provides some analytics about the performance of your support team, with Blendo and Looker Blocks you can delve deeper into the data, generate more insights and also use it as the basis for the creation of reports, customized exactly for your needs.

The Intercom Analytics Looker Block allows you to:

Use historical chat volume to optimize resource allocation
By analyzing your historical chat volume, you can identify if and when your customers’ behavior exhibits seasonality. In this way you can easily predict that you will need more agents available in certain periods to address the additional demand for customer support.


Get individual agent and team performance at a glance
People often use Intercom admins as customer support “Agents”. In this way you can monitor conversation distribution among Intercom Admins or Teams, spot an admin’s outstanding performance, or identify teams that have more work than they can handle.


Identify the performance of your customer support team by comparing response or resolution times and customers’ wait time. This analysis can help you to assess the performance of each agent and the team as a whole to inform training, growth and team management programs.


Understand your customer interactions across the globe
Want to track conversations by geography? Monitor the number of conversations created or closed per month? When you have global reach as a company, it’s important to address the different time zones in which your customers operate, and ensure that you’ll always have agents available to service their requests.


Gain deeper insight into your customer’s experience with your support team and beyond
With your Intercom data in your data warehouse along with the rest of your customer information, you can create a full picture of the customer experience and lifecycle. See how common issues relate to churn, how chat response time affects NPS, or how chat interaction relates to product adoption.

With all of your data in one place, the possibilities are endless.

Try Looker + Blendo

Access the Blendo Looker Blocks by reaching out to your assigned Looker analyst, or request a Looker demo and trial.

To learn more and start syncing your Intercom data with your data warehouse, visit Blendo and create an account today.

4 Easy Ways to Embed Analytics with Looker


The customers have spoken and, as featured in the 2017 G2 Crowd Report, have selected Looker as the Leader in the Embedded Analytics space. So, what exactly does embedding with Looker look like?

Powered by Looker

Powered by Looker, our embedded analytics platform, is all about accessing data within your daily workflow. That could mean public static reports on your website, a private authenticated dashboard in your day-to-day tools, or a full embedded experience within your product for your customers and suppliers.

The great thing about all these different use cases is that they all leverage the Looker platform without having you reinvent the wheel. Our powerful modern data modeling layer makes it easy to define your metrics or reports just once for all your different users. Also, your stakeholders can make use of Looker’s reliable scheduling, alerting, and data delivery capabilities to do more with their data.

How can you start embedding Looker?

Public Embedding

Start by embedding Looks publicly on any website. Just grab a public iframe URL from a saved report and simply add it to any page! This could be something as uncomplicated as adding a Looker report with a full list of customers to your public website to showcase your company growth. We make it easy for you to access our visualization library to present your data anywhere.


Internal Embedding

Dealing with sensitive or internal data? Bring Looker reports and dashboards into the tools your employees use the most. Looker manages user authentication and makes sure that users access only the content that’s relevant to them.

For example, you can bring in pipeline estimates for sales reps right in their Salesforce account as a dashboard or put together HR payroll and benefits data within your HR tool for all employees. Salesforce and Confluence are among many of the applications into which our customers have embedded Looker Dashboards.

Creating Custom Applications

Bring Looker elements into your customer (or internal) portal or application through Single Sign On. Select the most appropriate parts of our platform and easily make the data available in one of the following ways:

  • Use our RESTful API to pull modeled data into your application. You have complete customizability over the look and feel of the data and how you can display information (a minimal sketch of this option follows this list).
  • Embed Looker Visualizations in the form of Dashboards and Reports. Make use of our Visualization Library to create a dynamic experience for your data consumers.
  • Expose Looker’s Explore page. Bring self service analytics to your stakeholders and allow them to run and save Ad-hoc reports, schedule email alerts, and send result sets to a number of external applications.
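
As a rough illustration of the API option above, here is a minimal Python sketch using the requests library. The instance URL, credentials, and Look ID are placeholders, and the login and run-look endpoint paths assume Looker API 3.x conventions, so check the API documentation for your own version.

import requests

BASE_URL = "https://yourcompany.looker.com:19999/api/3.1"  # placeholder instance URL
CLIENT_ID = "your_api3_client_id"          # placeholder API3 credentials
CLIENT_SECRET = "your_api3_client_secret"

# Exchange API credentials for a short-lived access token.
login = requests.post(
    BASE_URL + "/login",
    data={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET},
)
login.raise_for_status()
token = login.json()["access_token"]

# Run a saved Look (the ID 42 is a placeholder) and fetch its results as JSON,
# then render or merge the rows however your application needs.
resp = requests.get(
    BASE_URL + "/looks/42/run/json",
    headers={"Authorization": "token " + token},
)
resp.raise_for_status()

for row in resp.json():
    print(row)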


White Label the Looker Platform

Love everything about Looker and want to share the benefits? Provide your customers with an OEMed version of the Looker application and allow them to make full use of Looker.


See Looker in Action.
Request a Looker demo today.

Already a Looker Customer?
Reach out to CSM@looker.com if you’re interested in learning more about embedding data with Looker.

How Poshmark Powers Customer Service with Looker and Nexla


Poshmark is the largest social marketplace for fashion where anyone can buy, sell and share their style with others. Poshmark’s mission is to make shopping simple and fun by connecting people around a shared love of fashion, while empowering entrepreneurs to become the next generation of retailers. Recognized as the go-to shopping destination for millennials, Poshmark’s community of over three million Seller Stylists help shoppers discover the perfect look from over 25 million items and 5,000 brands.

Because community is so critical to the business’s success, Poshmark strives to provide the best possible customer support. Proactive monitoring of support KPIs is key to this effort. An understanding of long term trends is also important to appropriately staff the support team.

The Pain of Data in Silos

Previously, support data existed in the Desk.com portal, but it was siloed from Poshmark’s core business data in Looker. To access this data, the team manually updated Excel sheets to extract the data, which led to significant hours wasted per week across all agents and created the possibility of human error in the manual work.

Poshmark needed to enable access to support KPIs through Looker so that all executive and business stakeholders would have single-point access to all of the core business KPIs. Specifically, Poshmark’s executive team needed access to check on the number of cases, how many are pending, resolution time, and first response time. Given Poshmark’s focus on providing world-class community support, this information needed to be easily accessible on-the-go via phone. Combining support data with the rest of Poshmark’s customer data could further improve the experience for Poshers.

They knew that this system would not scale and that Poshmark needed a new solution that would allow them to access their support data along with the rest of their core KPI’s that already lived in Looker.

The Magic of One Source of Truth

The combination of Nexla and Looker was uniquely suited to help Poshmark achieve its objectives. Nexla made it easy to integrate and monitor customer support data via the Desk.com API, and then send that data to Poshmark’s Redshift database. Integration took less than a day and allowed the data engineering team to continue to focus on other priorities. Once the data was flowing, Poshmark was able to create Looker dashboards and analyses to provide the executive team and other business stakeholders access to critical support data through their core BI platform.

“Prioritization is always a challenge at a growing company. It can be hard to complete integrations in the time we would like,” said Barkha Saxena, VP Analytics at Poshmark. With Nexla, they were able to integrate the API in a few hours, instead of days or weeks. No data engineers were disturbed during the integration of this API. “I was happy to find a software solution to solve the problem. It allows us to scale without disrupting anything else,” Saxena said.

With one API integrated and the data flowing, Poshmark sees many uses for the Nexla platform. The monitoring and alerting features ensure the analytics team is always aware of any data breakages. Poshmark plans to use Nexla for more API integrations so they can “set it and forget it” and never worry about gaps in historical data again.

A Data-Driven Future

Now that Nexla connects the data sources Poshmark wants to analyze, the team can more easily build out advanced customer service analytics in Looker. Armed with access to raw event-level Desk.com data, the analytics team plans to work on many initiatives such as:

  • Estimating the value of customer service by measuring the changes in customer LTV as a result of customer touch points

  • Analyzing support data by different dimensions, such as user and order tags, to identify opportunities to continue to provide the highest level of service to their community

  • Using historical support ticket data to anticipate trends and appropriately staff their team

This new access to support data from Nexla in Looker will allow the Poshmark team to stop wasting time manually updating spreadsheets, and instead continue to invest in their special Poshmark community experience.

Learn more about Nexla here, and if you’re new to Looker, request a demo today.

Optimizing BigQuery + GCP with Looker Blocks from Datatonic


Google BigQuery is an extremely powerful analytics data warehouse. It allows you to run terabyte-scale queries, getting results in seconds, and gives you the benefits of a fully managed solution. BigQuery and other Google Cloud Platform (GCP) services offer “pay as you go” pricing. Whilst this leads to greater flexibility and can keep costs low relative to other providers, it also means that keeping track of all of your costs across your business can get a bit tricky. To help simplify this, we’ve taken logs and billing data from GCP, and surfaced analysis through Looker to give everyone a detailed view of your usage and associated costs.

GCP and BigQuery Monitoring

Stackdriver Logging offers a managed service to track cost and usage of BigQuery across your entire GCP environment. With Stackdriver Logging, you can export these logs to BigQuery for more detailed analysis and reporting. We've included set-up instructions for this at the bottom of this post.


While these logs tend to have a slightly complicated structure - utilising nested and repeated fields in order to fully utilise the power of BigQuery - with the right tools, we can use these logs to get detailed information about BigQuery usage and costs across your enterprise.

To make analysing BigQuery audit data easy, we’ve built a Looker Block to model the logs allowing you to analyse the logs in a simple way, whilst utilising the underlying power of Google BigQuery.


With the Looker Block, you can now easily track BigQuery billing and monitor performance. When combined with cross-project log exports, it becomes a powerful way to manage costs across your business.

The block allows you to:

  • Analyse which queries are using the most resources or running the longest
  • Monitor your active user base
  • Track which users are running the most expensive queries
  • Set up alerts when a user submits expensive queries, allowing you full flexibility to control costs


Using our easy-to-plug-in Looker Block, you can now go away and analyse your BigQuery logs, and export the data anywhere.

GCP Billing

In addition to our BigQuery Monitoring Block, we’ve also just released a block for analysing GCP Billing exports. The Google Cloud console allows you to extract monthly reports on your billing account already, but using our Block gives you much more flexible analysis, more granularity, and the ability to combine this data with other sources.

You can use this Block to easily track GCP spend across your many projects, across services, and across labels applied to your GCP resources. As the majority of GCP products are pay as you go services, this provides a simple way to analyse spend, monitor resource usage, and even get a projection of your total spend this month based on current usage levels.
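
If you want to sanity-check the underlying export outside of Looker, a query along these lines sums the current month’s spend by project and service. This is just a sketch using the google-cloud-bigquery Python client: the project, dataset, and table names are placeholders, and the column names assume the standard billing export schema, so adjust them to match the columns your export table actually contains.

from google.cloud import bigquery

client = bigquery.Client()  # uses your default GCP credentials and project

# Placeholder table reference: swap in your own billing export table.
QUERY = """
SELECT
  project.id          AS project_id,
  service.description AS service,
  ROUND(SUM(cost), 2) AS total_cost
FROM `my-billing-project.billing.gcp_billing_export`
WHERE usage_start_time >= TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), MONTH)
GROUP BY project_id, service
ORDER BY total_cost DESC
"""

for row in client.query(QUERY).result():
    print(row.project_id, row.service, row.total_cost)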


Try the Looker + GCP Blocks

Access the Datatonic Looker Blocks by reaching out to your assigned Looker analyst, or request a Looker demo and trial.

This is just the starting point for analysing your entire GCP environment. Here at Datatonic, we’ve worked with many companies to help them monitor and optimise their GCP environment. We have consultants who are experts across the GCP data stack, who can help employ best practices to enable you to save costs, and get the most out of the GCP services. Visit Datatonic to find out more about how we can optimise your analytics stack, or get you started on GCP.




Setting up our Looker Block

Let's run through the steps, in both Google Cloud Platform and Looker, to set up the logging exports and the Looker Block.

Google Cloud Platform Setup

Create a BigQuery dataset for the billing and BigQuery audit logs. Go to the Google Cloud Platform console, and select BigQuery, or go to https://bigquery.cloud.google.com/. Click the drop down next to the project name and select Create New Dataset, set a location and click OK.

Optional: We recommend setting up a new GCP Project, purely for this purpose.

Setting up the Billing Export

To set up a billing export to BigQuery, do the following:

  1. Go to the Google Cloud Platform console and select Billing
  2. Choose the appropriate billing account (if you have more than one) using Manage billing accounts
  3. Click Billing Export > BigQuery export
  4. Select the Project and Dataset you created earlier
  5. Click Enable BigQuery export

Billing data will now be exported to your dataset at regular intervals. The Billing export table is date partitioned, and will incur a small data storage charge.

Setting up BigQuery audit logs export

To set up the BigQuery log export do the following in a project that contains BigQuery:

  1. Go to the Google Cloud Platform console and select Stackdriver Logging
  2. Click Exports and then Create Export
  3. Add a Sink Name and select Custom Destination as the Sink Service. The Sink Destination should be set to bigquery.googleapis.com/projects/<project-name>/datasets/<dataset-name>, adding the project and dataset names you created earlier.
  4. Click Create Sink

If you get a permission error, that is perfectly normal. It happens when the project you are exporting to is different from the project in which you set up the logging export: in that case, the service account that writes the logs into the BigQuery dataset you created does not yet have permission to do so. Follow the steps below to complete the setup:

  1. Go to BigQuery in the project the logs are exported to and click on the dropdown next to the dataset you have chosen. Click Share Dataset
  2. Get the name of the service account by going to Stackdriver Logging in the project where you set up the logging export, then Exports, and copy the Writer Identity
  3. Add this Writer Identity into the Share Dataset window in BigQuery from Step 1
  4. Give the account Can edit access, and click Add, and then Save Changes

The BigQuery audit log export should now be set up. The table will be updated throughout the day. The BigQuery audit log table is date sharded rather than date partitioned.

If you have more than one project using BigQuery, repeat the steps above. All logs from different projects will be added to the same table, allowing easy querying across projects.

Using the Google Cloud SDK

Alternatively, if you have the Google Cloud SDK installed, you can set up the BigQuery logging export using the following command (make sure you are in the project you want to set up the logging for by running gcloud config set project <project-name>):

gcloud beta logging sinks create <sink_name> \
  bigquery.googleapis.com/projects/<project-name>/datasets/<dataset-name> \
  --log-filter='resource.type="bigquery_resource"'

Looker Configuration

This block requires almost no configuration once added to your Looker instance. You only need to point it at your billing export table and your BigQuery connection:

  1. Go to BigQuery and copy the name of the billing export table; its name will start with gcp_billing_export_
  2. In Looker, go to the view file gcp_billing_export
  3. Replace the table name in the FROM statement of the derived table with your billing export table name
  4. Create a new Database Connection in Looker to connect to the BigQuery dataset: follow the steps here to create a service account in GCP and add a new connection to Looker, making sure you use BigQuery standard SQL
  5. Change the connection name in the Looker model files to the connection name you chose in Step 4

You should now be ready to start monitoring your BigQuery and GCP usage.

Customer Spotlight: Building a Data Driven Culture at Disqus


When I first started my career in analytics, Data Analyst was declared the “Sexiest Job of the 21st Century”. Buzz around “big data” was on the rise as the most successful companies were increasing their investments in data and striving to foster data-driven cultures. This led to high demand for data analysts and scientists who were needed to sift through that data in search of value. However, due to the lack of widespread data accessibility and transparency within companies, analyst work too often becomes frustrating and monotonous: one-off requests for stakeholders, data validation, running similar queries over and over, and co-workers competing for resources and prioritization. These inefficiencies keep companies from getting the most out of their analysts and make achieving the benefits of a data-driven culture more difficult.

Here at Disqus, we’ve steadily evolved how we leverage data to optimize our business. We’ve built our business and strategy around understanding the data signals that the market relays to us on a regular basis and being able to act quickly to take advantage of opportunities.

Disqus adopted Looker in 2015 as our primary BI tool. It immediately became instrumental in helping us operate our business more efficiently. Our colleagues were no longer at the mercy of analysts to pull data or perform simple analyses. Operational burdens for the analytics team were greatly reduced and productivity increased overall. Teams were making more informed decisions and we had widespread alignment across the company.

An internal survey we conducted earlier this year found that 94% of our employees utilize some form of data or analytics resource to do their job, which is truly representative of a data-driven culture. Obviously, this didn’t happen overnight. The change was gradual, sometimes painfully so, and not without challenges. However, with persistence, the support of our leadership team, and a devoted data and analytics team, we were able to get our co-workers to care about data as much as we do.

Here are some best practices that we learned along the way:

Be transparent about KPIs and progress towards goals

Make sure everyone on the team understands overall company KPIs and why they matter. At Disqus, we talk about KPIs openly and frequently. Something that we put into practice years ago is a weekly KPI meeting run by the analytics team. While the meeting started as an executive summary for the leadership team, it is now open to the entire company. Analysts walk through metrics, pacing, and progress towards goals, explain if and why changes occurred, and flag any follow-ups or action items needed. Other attendees are encouraged to be curious about the data and ask questions, add context, or call out specific projects that they think may have shifted KPIs (bug fixes, feature releases, new partnerships, etc.). This promotes team accountability and lets individuals see how their contributions have played a role in moving company metrics. In return, the analytics team gets feedback and insight into which metrics are critical and which ones need iteration, along with key product-driven insights that drive engagement around KPIs.

“Branding” key metrics for the entire company

There was a time at Disqus when asking two different people to define a metric would likely produce two different answers. An inventory metric for the Sales team might be different from the inventory metric for Marketing, and an engineer might not even know what an inventory metric is. As a way to alleviate confusion and prevent poor assumptions, we cataloged all key terms, measures, and metrics in a data dictionary. Every entry in the data dictionary has four key elements:

  1. An internal definition: what this measures and why it is important for tracking success
  2. An external definition: how this metric might be interpreted by 3rd parties
  3. A technical definition: how the metric is calculated and where it lives in our database
  4. A practical example, usually a link to a Looker report that the company is already familiar with via the KPI meetings

Metrics are often driven by context. The purpose of the data dictionary is not necessarily to be dogmatic. We know that metrics and definitions can change over time as a business does. The purpose is to remove ambiguity from metrics and rectify different interpretations. It is a great resource for all employees so they can understand differences between metrics. Additionally, it serves as a great learning device, especially for those who might be less quantitative or technically savvy. Your team won’t care about data unless they understand how to read and interpret it. The data dictionary is a tool they need to do just that.

Make metrics easily accessible

Not everyone at Disqus necessarily needs to look at data every day to do their job, but they should. Find a way to make KPIs a part of everyone’s daily routine--like checking the weather.

For the most important topline metrics, we utilize Looker’s scheduled email reporting to send updates to the entire team first thing every morning. For more detail on topline metrics or other project-specific metrics, we create easy-to-remember URLs that redirect to important Looker dashboards, so that anyone in the company can track the progress of any project or product at any time. We have also released an FAQ that contains links to important Looks and dashboards, based on high-frequency requests for data. Part of building a data-driven culture is creating a team that wants to seek out data. Make it easy for them to find.

Define success metrics for all new projects

Anytime a new project is kicked off, we identify one or more metrics to track success. We also set expectations for how success in this project will move company KPIs. Prior to kick-off, the project team takes inventory of everything that is measurable and then distinguishes actionable metrics from potentially distracting ones--a distracting metric being something that is measurable, and might be interesting, but is not necessarily actionable. Opening these discussions from the get-go and defining project goals sets us up for later success. When we go into the project with a goal in mind and measure against that goal throughout, we are able to make deliberate decisions and adapt quickly. Whether the metrics result in expected outcomes or not, we have a clear signal of what actions should be taken. Do we stay the course or change strategies? Do we need to allocate more resources? Maybe it’s not clear on the surface, so we commit to digging into the data more. Whatever it may be, it forces a decision to be made.

Teach them to fish.

In reality, most of our team at Disqus does not need an analyst for most ad-hoc requests. They’re perfectly capable of looking at the data and drawing their own conclusions as long as they can trust they’re looking at the right data. Enter Looker. It allowed us analysts to set up a playground where end-users have a self-service way to load and explore data. And while we might expect “if you build it, they will come” to apply here, it certainly does not happen overnight. Honestly, in the beginning, our team was conditioned to rely on analysts for data pulls and analysis. Even though this really powerful analytics tool was available to them, end-users weren’t quick to jump into the deep end. So, we committed to providing them with the tools and training to help them understand how to work with data and to bolster their confidence to perform their own analysis. When someone requests a data pull, sit down with them and walk them through building the report from scratch. And don’t just have them look over your shoulder while you do it--seriously, make them do it. With time, they will become more proficient, gain more confidence, ask more questions, and most importantly, have the ability to find the answers themselves.

So, now that we have a company of data-literate individuals, did we just work ourselves out of a job? Actually, analysts and the data team are needed more than ever. Since so many of us at Disqus rely on data on a daily basis, we have devoted more resources to improving data infrastructure and performance. Now that we’re all experts at utilizing data, we’re able to push forward on more interesting and innovative projects that drive further growth.


How Kiva Creates Opportunity with Data


When you think of “startups”, you probably don’t imagine a homemade baked goods shop in Paraguay or a goat farm operating inside a refugee camp in Jericho. However, these small entrepreneurial pursuits are only a portion of the businesses that Kiva’s platform helps fund around the world.

To date, Kiva, a non-profit microfinance organization that connects lenders with low-income entrepreneurs and students all over the world, has lent more than $1 billion through its platform.

By challenging assumptions about what exactly it means to be an entrepreneur, Kiva is changing the way the tech and financial industries think about microfinance lending.

We recently took a road trip up to Kiva’s San Francisco headquarters to speak with people on the Data Analytics and Development teams, who shared the inventive ways that they use data to create real change in communities around the globe.



Building a Base

Before Looker, the Kiva team had to rely on siloed, disparate data stores that made analytics time-consuming for analysts and nearly impossible for non-technical users.

The analytics department, which had one full-time member (manager Rob Schoenbeck), brought in Looker to speed up reporting for the entire company.

“Instead of having to reinvent the wheel every time somebody wants to see a specific data point in a particular place, I can just define it once in the model and put it in a whole bunch of reports,” Rob explained. “It can be repurposed over and over again with very little effort on my part, which really helps with data quality and data management.”

Soon after adopting Looker, Kiva turned to Snowflake, a cloud-based MPP data warehouse, for storing their vast supply of data.

“We found Snowflake to be pretty attractive because it’s compatible with Looker and doesn’t require a lot of overhead maintenance or management,” continued Rob. “Being able to use Looker and Snowflake together lifted a huge burden off the data team and allowed us to look at the data in a clearer, more complete, and more statistically sound way than ever before.”

Rob began teaching analysts and business users in different departments how to create dashboards and reports using their new data system. “Now that they're on Looker, they can just create a dashboard or a scheduled report and they no longer have to go through this super manual process just to take the pulse of what's going on in their business processes.”

The data team now supports Looker for over 50 technical and business users at Kiva and hopes to further expand the number of data users throughout the entire company.

Kiva Labs

For the team at Kiva, insight into their data doesn’t mean increased revenue and profits; it means impacting individuals throughout the world in realistic and meaningful ways. With self-service, reliable access to data, Kiva can better meet the needs of the communities they serve.

Bennett Grassano, Kiva’s VP of Strategic Development, told us about a new initiative called Kiva Labs that turns Kiva team members’ insights into powerful forces of change.

Kiva Labs provides crowd-sourced, risk-tolerant capital to accelerate new ideas around the world. “Kiva Labs is all about using crowdfunding to drive innovation in microfinance,” explains Bennett. “To do this, Kiva Labs uses a lot of data.”

One project launched as part of Kiva Labs is the World Refugee Fund, which focuses on helping refugees in countries like Lebanon, Jordan, and Turkey who are often denied loans by typical microfinance institutions because they’re considered “high-risk” due to limited or inaccessible credit history, few fixed assets for collateral, and higher flight risk.

While the classification of refugees as “high risk” is not unfounded, Kiva found that a large number of individuals and foundations wanted to help, so the team set out to challenge assumptions and bring these groups together to aid refugees in a meaningful and sustainable way.

So they set up the World Refugee Fund, which raises money to match the loans made by lenders on Kiva’s platform and lays the groundwork for a revolving fund worth over $9 million.

“With Looker, we were able to slice data in different ways and pull together the data relevant to our work with refugees and identify how they’ve been successful,” explains Bennett. “We were then able to serve that up to both practitioners who are looking for better ways to serve refugees, as well as foundations, corporations, individuals who are looking for a way to support them.”

Only by leveraging key data on the financial needs and potential of refugees was Kiva able to launch this dynamic fund. In addition to meeting the immediate needs of refugees as they restructure their lives, the fund helps refugees build credit and establish solid financial records that will support them for years to come.

“We learned that we could lean into creating a platform for innovation while giving our users something that they’re already looking for, and that was really powerful,” Bennett explained. “When you can harness the curiosity of the people closest to the work, focus on innovation, manage risk, and also create a great user experience, that’s how we’ll maximize the impact we can have on the world.”

To learn more about how Kiva is changing lives with data, check out their full story.

Agile and Bulletproof: An Approach to Mature Embedded Analytics


Our mission at frame.ai is to make it easy to build better relationships with your customers (internal and external) using Slack. As a small team of startup vets, we often need to work quickly and independently, so as Head of Data, I’ve typically built and configured the services I need myself.

Our ability to make quick changes was recently put to the test. Responding to an increasingly common customer need, I was able to design, prototype, and safely deploy a new analytics dashboard to a large subset of Frame’s customers in eight hours. This new dashboard wasn’t just a nice-to-have, either: one of our partners made critical budget decisions that week based on the visibility it provided. Our agility made the difference for them.

Now, I’ll readily admit that although I have a strong data science background, I’m a middling engineer at best. So how’d I manage to pull off a non-trivial feature release with new data models, new visuals, and customer-specific functionality that didn’t risk any of our SLAs?

The answer is Looker, along with a custom deployment framework that leverages its code-based approach to analytics. Looker’s architecture made it possible to programmatically automate big parts of our analytics, and that’s made a huge difference for us as we grow.

In this post I’ll walk through our design, and how you can use the same approach to iterate rapidly and safely for your customers wherever they are. The system, which we call the Frame Analytics Build System (FABS), is a combination of Python, Jinja2, and Git automation.


(1) Anonymized example of customer analytics dashboards from Frame.

Frame for Slack is highly configurable, allowing every team to align Frame’s conversation enhancements with their existing tools and processes. Configurability and easy iteration are built into the engine that handles our operational logic, and I wanted our analytics products to have all of those same great qualities.

Having built customer-facing reporting products before, I knew they can seem straightforward but often require several iterations to get right. Traditionally, I would have assembled a team with UX, design, front-end, and back-end engineering skillsets to build a reporting webapp for our customers, but this approach is both slow and resource-intensive.

In search of an agile alternative, I turned to Looker’s embedded analytics capabilities. I knew if I could leverage Looker’s data modeling, permissioning, and visualization capabilities in a way that provided the kinds of production guarantees we needed for our customers, we would be able to move exceptionally fast in bringing new analytics products to market.

I needed a way to guarantee the following:

  • High availability and uptime for every customer
  • Security through per-customer data isolation and granular permissioning
  • Manageable customization per customer based on feature configuration
  • Deployment of updates to one or thousands of customers easily and with low risk
  • Validation and error-checking for every deployment
  • Rapid design and development of new analytic products
  • Data consistency through a single view of data across all customers and internally

Looker doesn’t have all of these features out of the box, but because it exposes all data models and visual artifacts in code (LookML), adding the missing pieces was easy. And because the Looker API can render these artifacts, it’s straightforward to build automated tooling around them. Enter: FABS.


(2) Anonymized example of Frame’s embedded Operational Dashboard

FABS takes a customer configuration file and a set of core Looker view files and renders the full set of LookML files required to fully specify a reporting product for a customer (view, model, and dashboard files). The final dashboards are then embedded in our management console and made available to Frame’s enterprise customers (example shown above). Importantly, all core views are versioned when referenced in a deploy, so ALL files that define a single customer’s reporting are effectively immutable. You can see the resulting LookML structure diagrammed for two customers below:


(3) Example FABS Looker architecture

Core views define the “baseline” for our analytics features in this hub and spoke model, and each configuration file defines the transformations and extensions required to create a single tailored spoke. By separating definition and deployment, we decouple customer applications from each other as well as from any previous versions of themselves. Since rendering the final configuration to any spoke is programmatic, it becomes trivial to specify and (re)generate an arbitrary number of them.

There are a few pretty magical things happening above.

  1. Frame’s internal data exploration and dashboards all reference the most up-to-date view of the core data model, allowing modeling and product development at maximum speed.
  2. Internal and App views all utilize LookML’s extends feature to provide an extensible data interface to each application, allowing us to override any dimension, measure, or dashboard with customizations.
  3. Embedded users only have access to their own data through explicit model level restriction and database query isolation.
  4. Each deploy produces an immutable data model branch for each customer app on top of Looker’s native Git versioning, leaving each app unaffected by the others and by internal work (diagrammed below).


(4) FABS viewed through the lens of version control

Mechanically, FABS is a mix of Python and Jinja2 templates. We specify high-level model, view, and dashboard configurations using YAML, defining overrides of any dimension, measure, or dashboard as needed. You can see a toy example below:


(5) A toy example of a single customer config YAML

In the above example, we customize how a customer name is presented in reports by overriding the display name and provide custom drill downs for customers in the orders view. Additionally, we define the required joins for the model and include a “Customer Retention” dashboard from our reports library (also YAML) to be deployed.

Once we’ve used FABS to generate the appropriate LookML files, we push them to a development branch of a Looker Git repository. A simple refresh of the Looker developer GUI detects the remote update to that development branch, prompting us to pull the recently deployed LookML updates into Looker’s developer mode. Here we can run LookML and content validation, and spot-check any updated dashboards for correctness before a final customer-facing deploy.


(6) Rendering from a config YAML to LookML files
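
To give a feel for what this rendering step might look like, here is a minimal sketch in Python using Jinja2 and PyYAML. It is an illustration only: the file layout, the YAML keys, and the function name are assumptions for the example, not Frame’s actual FABS code.

import pathlib

import yaml
from jinja2 import Environment, FileSystemLoader

def render_customer_lookml(config_path, template_dir, out_dir):
    """Render per-customer LookML files from a YAML config and Jinja2 templates."""
    config = yaml.safe_load(pathlib.Path(config_path).read_text())
    env = Environment(loader=FileSystemLoader(template_dir))
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    # Each template (e.g. orders.view.lkml.j2) extends a core view and applies
    # the per-customer overrides declared in the config file.
    for view in config.get("views", []):
        template = env.get_template(view["name"] + ".view.lkml.j2")
        rendered = template.render(customer=config["customer"],
                                   overrides=view.get("overrides", {}))
        out_file = out / (config["customer"] + "_" + view["name"] + ".view.lkml")
        out_file.write_text(rendered)

# Example: render the LookML "spoke" for a single, hypothetical customer config.
render_customer_lookml("configs/acme.yaml", "templates", "build/acme")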

Taken all together, FABS and Looker allow Frame to provide our customers a high-quality analytics product in a way that is scalable for us and exceptionally responsive and tailored for them. While we are using this system to deploy for external customers, one could easily imagine using it to deploy for internal customers at large organizations.

Analytic reporting is just one of the many data problems we are solving here at Frame. If you are excited about building or using cutting-edge conversational AI, please reach out to us at contact@frame.ai or, even better, install our Slack app and DM us directly!

Analysts, It’s Time to Focus on Analytics


I’ve got some bad news. If you’re an analyst, you’re not being well-served by your existing tools. And at Looker we understand your pain.

But I’ve also got some good news: today Looker announced the availability of new features that are going to make your life as an analyst much easier.

Now with Looker, you can:

  • Combine your business data with new data sources more easily than ever with Looker’s Data Blocks
  • Collaborate on your business logic with Looker’s simple, powerful integration with Git
  • Use Looker to build exactly the tool your company needs with Looker’s Action Hub

Sounds pretty good doesn’t it?

But before we get to the good news, let’s take a look at three challenges I faced when I was an analyst -- I think you might be facing these challenges, too. I like to think of these as the proof that analysts need better tools.

1. Write and maintain SQL queries.

One of the most basic job requirements for any data analyst is a fluency in SQL. Why? Because if you’re like me you’re going to be writing SQL a lot. And you’ll most likely be writing variations on the same SQL over and over and over again.

For most analysts, writing SQL from scratch is just a fact of life. Because sadly, the really impressive SQL you write (y’know, the queries you’ll want to save) will be so hard to parse in a few weeks that you’re pretty much better off building the query from scratch.

I know that workflow because I’ve been there myself.

2. Optimize how fast those queries run over and over.

Writing correct queries is a good start, but sometimes the first thing you write doesn’t return fast enough. So another core task of analysts is examining database usage to improve performance.

And if your data volume is growing rapidly (which, of course it is), you also need to constantly think about upsizing and redistributing data across nodes. I know your pain all too well.

3. Answer any and all questions from business users

Those two responsibilities are just preliminary steps to doing the thing that you (and I) were actually hired to do -- analyzing data and delivering those insights to the rest of your company.

If you’re like me, that’s the thing that made you want to become an analyst in the first place. Too bad, then, that the previous two tasks are going to take up 75% of your time. So even though your business users could use your help interpreting and analyzing data, you rarely have time.

Isn’t it strange that the task that you were originally hired to do (and that you WANT to do) is being crowded out by two responsibilities that you seem to repeat over and over again?

Does it feel like something’s broken? It is.

And the thing that’s broken is your workflow. But it’s not your fault. And Looker is here to help.

What if I told you Looker can dramatically reduce the time you spend writing and optimizing queries?

Most business intelligence tools on the market have focused on making it easier to extract data from the data warehouse or easier to manage SQL queries. Both approaches come from a world where data warehouses needed protecting from your business users.

Today’s data warehouses are ridiculously fast and they’re incredibly cheap. So you need a tool that takes advantage of them, right? That’s Looker.

Looker works to free analysts like you from the tedious work of writing ad-hoc SQL queries. And we do that by making it easy to codify your knowledge in a data platform that’s shared across your company.

The benefits of this approach are straightforward and easy to capture. You get to build on top of a centralized data model that contains all of your business logic. And that model is used to serve your users by generating the SQL queries they need to answer their own questions--without you doing it.

Now you can focus on what you actually want to do--game-changing analysis that can drive your business forward.

What’s more, with Looker you can leverage work others have already done in the form of blocks of business logic. So, for example, data experts at Looker have pre-modeled public data and made it accessible to you so you don’t have to reinvent the wheel. We call these Data Blocks.

Check out Looker’s Data Blocks to see how we’ve made it easier than ever to deliver new data sources like data from the US Census or weather data to your business users without the need to develop the logic yourself.

Speaking of leveraging others’ work… we can help there, too

If building on your own work is hard in your current workflow, collaborating with others is near impossible. This is particularly frustrating because software engineers work on far more complicated code all the time, and they collaborate constantly.

How do they do it? Version control.

Looker leads the industry again, finally bringing advanced version control to modern data analysis. Now, instead of working on queries in your own silo, you’re building on your data like a software engineering team.

What’s more, Looker’s new shared Git Branches make it easier than ever to collaborate with other LookML developers on the same code.

So what are you waiting for?

We’ve worked with a lot of analysts at Looker. And if the problems I described above sound familiar, you’re not alone. I get it.

But I know from personal experience that after adopting Looker, you’ll be able to envision a whole different work life. (I was a Looker customer for a lot longer than I’ve worked here.)

And fundamentally, that’s why I came to Looker. Because giving analysts back the time they need to construct innovative solutions that help their businesses grow is hugely rewarding. And as we fully embrace Looker’s ecosystem, the innovation we’re seeing only grows. Looker’s Action Hub is a huge step forward on that. Now, analysts can easily build tools for their colleagues that reach outside of Looker, and integrate directly with the other tools people are using every day.

Want to make it easy for your teams to send data to Slack? Check out our Slack action. Want to make it easy to email or text a specific group of users through Looker? You’re going to love our SendGrid and Twilio actions.

If all this sounds interesting, let us show it to you in action. You can request time with our team and sign up for a Looker demo to check out the analyst experience in Looker for yourself. Or maybe you want to learn more about Looker’s Action Hub and data actions. Sign up for our LookInside webinar, where Looker’s product team will walk through some popular workflows available through Looker actions.

We’ve worked hard to make it easier for analysts to collaborate, experiment, and build insights together on our platform. We can’t wait to see what you build next.

Git More Out of Your Data Model: Announcing Git Branching in Looker


At Looker, we place a high priority on making LookML development easy, efficient, and effective for analysts.

This idea - along with a strong support of coding best practices for collaboration - is grounded in Looker’s powerful integration with Git.

With the launch of Looker 5, this integration has grown even more powerful with the introduction of Git Branches in Looker.

Looker and Git

Developed by software engineer Linus Torvalds in 2005, Git was intended to help manage large code bases with multiple collaborators and has seen worldwide adoption since then.

Git at its core is version control, which is an absolute requirement for all development as it enables tasks that are critical to writing efficient, performant code, such as:

  • Test uncommitted code in a development sandbox without affecting production code.
  • Allow others to view and refine uncommitted code before it’s pushed to production.
  • Manage multiple separate additions to the code base before pushing to production.

LookML centralizes SQL logic in one place and allows analysts to collaborate on a single codebase, a workflow much closer to that of a software developer’s and one that is perfectly suited for a tool like Git.

Before Looker 5, Looker’s version control capabilities gave analysts the ability to test out new additions to explores and LookML dashboards before making those changes available to everyone else.

But it was a clunky process to view uncommitted code that other developers were working on, and there wasn’t really a way for a single developer to manage multiple additions to the model simultaneously.

So, we’ve made it possible to create shared branches in Looker.

What’s in a branch?

Think of a branch as an entirely separate copy of your codebase that’s still connected to the version of the codebase that’s functioning as the production code for your users. A branch allows you to develop and experiment freely without fear that your code will affect other users. It’s your own private sandbox.

That branch is still connected to the master code however, so when developers are ready, they can “push their code to production”, which merges changes on their branch with the master branch.

Prior to Looker 5.0, the only branches that could be created in Looker were branches for individual developers. Developers in Looker would work on their own private Git branch whenever they were in development mode. Looker would automatically create branches for LookML developers, and they were accessible to only that user (the only way to view another person’s branch would be to sudo as that user).

Enter Shared Branches in Looker

Shared branches in Looker change all of that. Now, developers in Looker can create shared branches that can be edited and modified by other LookML developers.

This is a big deal because this feature finally allows analysts in Looker to collaborate on the same enhancements to their data model. Now, if I’m working on something and want input from my team, I can get help from another LookML developer easily because they can just check out and modify the branch I created.

This, of course, doesn’t mean that your private developer sandbox goes away. LookML developers can still develop on their own personal branch. Other developers will be able to view (but not modify) that branch. If another developer wants to modify code on a user’s private branch, they can always create a new branch from that user’s personal branch.

We believe this will allow analysts to organize and collaborate in new and productive ways, and hopefully make life a little easier for analysts, as well.

Want to learn more about what makes Looker the perfect platform for version control? Learn what makes building a data model on Looker so powerful.

Our Five Favorite Reads from 2017


As 2017 comes to a close, we’re looking back at some of our favorite reads from the past year. From personal stories to practical guides, these articles cover the topics most on our minds, as well as our customers’.

Here are a few of the Looker team’s favorite (and most shared) stories from 2017:

From Burning Millions to Turning Profitable in Seven Months — How HotelTonight Did It
This compelling story by Sam Shank, CEO and co-founder of HotelTonight, describes how the HotelTonight team completely revamped their business approach over a span of just a few months. Shank’s practical, candid advice makes this one of our most read and widely shared articles of the year.

Five Building Blocks of a Data-driven Culture
This TechCrunch article by WeWork’s Carl Anderson and Michael Li argues that data should be made available to and used by everyone in an organization, not just data scientists and analysts. Their “building blocks” lay the groundwork for anyone building or honing a data-driven culture.

How to avoid big data project failures: Your 5-step guide
Building the technical solution is only the beginning of the challenge of bringing new tools to an organization. This article from TechRepublic is chock-full of practical advice to help combat the real business challenges of implementing a new data solution, touching everything from justifying business value early to finding balance in timing.

Five Keys to Leading in the Age of Analytics
Data is changing the way we run our organizations, and this article from Data Center Knowledge covers key technologies and strategies every leader should consider, and how they’re shaping business objectives and cultures.

'Big Data' Is No Longer Enough: It's Now All About 'Fast Data'
In this story from Entrepreneur, Tx Zhou shares three practical tips for taking the next step in the data evolution: actually making big data usable in the modern organization.

Did we miss anything?

Send us your favorite reads from 2017 on Twitter: @Lookerdata!

Happy New Year! The Looker Team
