

Rather Dashing Dashboards

Looker dashboards are a key tool for telling interesting stories with data. Recently, we took the opportunity to make dashboards easier to visually consume by making a few simple and considered design changes.

Business Pulse Dashboard

Reduced Cognitive Load

Simple, clear presentation is critical for truly understanding data, so we’ve simplified the visual presentation of dashboards considerably. This allows people to focus on the data itself instead of on the ancillary information.

Brand Overview Dashboard

Side-by-side comparison of old vs new dashboard layouts.

You can see that the new dashboard layout makes much better use of space by reducing the amount of visual noise on each tile. We accomplished this by cleaning up repetitive information, standardizing font styles, and only surfacing controls when they are required.

We were also able to make several improvements to single value tiles to make them much more consumable at a glance. Since we’ve relocated some elements that took up a lot of space (like the tile footer), we’re able to increase the size of single-value text and tile titles.

This results in a dashboard with a drastically increased amount of perceived white space, which feels a lot less cramped. Even better, the new dashboards can actually fit more data in the same amount of space.

Business Pulse 2 Dashboard

Don’t Repeat Yourself

Dashboards previously showed the time each tile was last run in the footer of that tile. Most often, though, all tiles on a dashboard will run at the same time in the same time zone. All that extra information causes your brain to do a lot of unnecessary work before it gets to think about the actual data.

In order to remove more visual weight, we decided to focus on the differences rather than the similarities. So, we removed the repetitive run-time information and instead show it only on tiles that ran at a different time (or in a different time zone).

Brand by Cost Dashboard

Tiles that differ from the default time zone are highlighted for easy reference with an icon and a tooltip that explains the details.

Single Value Tiles

We also added some simple, powerful functionality to single value tiles. First, we cleaned them up and standardized their font sizing, weight, and color. Not only does this have a cleaner visual appearance, but having disparate sizes or colors could cause people to infer meaning where there isn’t any. We also moved the tile titles beneath the value, which places the context of the number’s meaning right where it’s needed.

We also wanted to tackle typography, and in particular make sure dashboards always pick a consistent, readable font size. This is especially tricky, because the data can really be any length. For example, perhaps you have a tile that shows the city that sent you the most web traffic today. Today it could be Santa Fe, but tomorrow your traffic could all be from the Welsh village of Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch. You want your dashboard to display nicely in either case (or anywhere in between).

You also want to be able to put this information in a tile of any size on your dashboard, arranged nicely amongst your other tiles.

To this end, in the newly designed dashboards, we’ve rethought how sizing should work. Now, when the length of the value present in the tile changes, or you resize a dashboard tile to customize your layout, the other single value tiles increase or decrease their font sizes in tandem. This makes certain that you don’t have one tile with huge type and another with small type, which looks incongruous.

Resizing Tiles Business Pulse

“Show Comparison” to another piece of data

You can now add a comparison value to dashboard tiles to provide even more context. Several options are available to style the comparative value including the ability to show a progress bar, an up-or-down change, or just a simple value. You can put any value into the comparison, including any dimension, measure, or table calculation. This makes it really easy to compare two values, or to have a dynamically updating progress bar that reflects a current goal.

First Purchasers

Context Is King

In order to help add even more context to your dashboards, we’ve added a new component that allows users to add various types of explanatory text to their dashboards. You can add and edit title, subtitle, and body text and place it anywhere on the dashboard.

This makes it really simple to divide a dashboard into sections, or even to tell a multi-paragraph story about your data right alongside it.


Moving and Sizing Tiles

We’ve made it much easier to move and size tiles on your dashboards thanks to a completely rewritten layout engine. The new engine is much faster and much smarter about how tiles interact when moving and resizing. Tiles will move out of the way to make space more easily, and they won’t disrupt the layout as you move past them. It’s also much easier to move elements around while scrolling through the dashboard, like to quickly move something from the top to the bottom.

Moving and Sizing Tiles

Embed Improvements

The new dashboard changes are also all available when embedding. The embed options have been simplified to make it easier than ever to match the color scheme you’re after.

The End of the Beginning

Dashboards are hugely important to us at Looker – they allow you to curate a data experience however you want – whether it’s a polished overview of critical metrics for your business or a robust operational tool.

We’re really excited to bring you this set of new features – and we couldn’t have done it without the incredible feedback from our community of Looker customers. We want to hear what you think about our new, much more dashing, dashboards. So, feel free to send us a note with your ideas and comments!

But we’re just getting started. We think it’s really important to make dashboards as easy as possible to create, consume, and edit. We have a lot of really exciting features and improvements we’re working on across the entire product. We can’t wait to show you what’s next!


You're Not Normal


One of the trickiest things about being human is that you only get to see the world from one perspective. This is relevant not just for the reasons that philosophers love to ponder, but also because it makes putting yourself in someone else’s shoes really hard.

And it’s not getting easier. As we’ve gained tools to reach larger and more diverse audiences, putting yourself in their shoes has simultaneously gotten much harder and much more critical.

After all, if you’re only fashioning tools for members of your village, you understand intuitively how they’ll be used. But when your potential audience is global—living in contexts and locations you can’t even dream of—you’re far from an average user.

And yet, while potential audiences are growing, the world each of us inhabits online has only shrunk, thanks to what my friend Eli Pariser calls the filter bubble.

When I was writing emails for MoveOn.org, it was easy to assume our 7 million members were following every legislative development as closely as I was. Except that was, of course, nuts, since following politics was my job, not theirs. They had their own lives to attend to.

I knew this intellectually, but even so, resisting the urge to think of myself as normal was really hard. Whenever I logged onto Facebook or the blogs I followed, I was surrounded by a (relatively tiny) group of people just like me. And even though I knew it was only a few hundred people, it didn’t feel tiny. It felt like my whole world.

So how can you counter this innate, very human belief in your own normality?

Empathy. And data.

First, always keep your audience in mind. At MoveOn, I’d think about writing to my mom’s friends. They weren’t representative of all MoveOn members, but they were a lot closer to average than I was. And when I recognized that my intended audience’s experiences were simply too far from my own for me to imagine, I’d ask for help from others who were more representative, either personally if I knew them, or by survey if I didn’t.

Then, and this is critical, don’t just assume that your guess was right. Even when we try to think about the world through others’ eyes, we’re generally not very good at it. Your guess is only a hypothesis. It’s data that proves your hypothesis right or wrong.

And as I learned, asking the people around you if your hypothesis is right is no substitute for real data. It’s no accident that those people surround you, and they’re probably no more representative of your intended audience than you are.

Because while the internet demands that we enlarge the scope of our empathy enormously, it’s also made data to aid us readily available and highly actionable. While you have to imagine yourself in an ever-wider range of shoes, you have better tools than ever to make sure they fit.

So, if you’re an engineer and you think users will suffer through a less-than-smooth onboarding process, I encourage you to check the data about your sign-up funnel. I think you’ll realize very quickly that your willingness to stick it out isn’t normal.

If you’re a writer and you’re sure that readers are loving your 3000-word essay because they clicked on its headline, see how many made it to the bottom of the page. You may love #longform, but, chances are, you’re not normal.

If you’re in marketing or sales and are sure it’s time to vary your message—that prospects must be as sick of your lines as you are—survey and see if it has even penetrated the consciousness of the average user. You may be sick of it, but you’re not normal.

Saying that you need to empathize with your audience doesn’t mean compromising your vision. Your vision is a big part of why you’re not normal, after all. But it does mean that your vision alone won’t suffice.

The goal, as MoveOn’s co-founders Wes Boyd and Joan Blades put it, should be “Strong Vision, Big Ears.” The goal should be to embrace the fact that you’re not normal, while remembering that great software is an act of empathy. Empathy, that is, and data.

3 Key Considerations for Embedded Analytics


You’ve built a great product that resonates with the market. Your customers are enthusiastic, and word of mouth is overwhelmingly positive.

The in-house picture is cloudier. Your end users can’t run their own queries, which means they have to wait for someone else to generate the information they need. Because your solution requires moving data out of your database, analysis is complicated and time-consuming, placing your data analysts in an uncomfortable position: just by doing their jobs, they’re creating a bottleneck. Your engineers would love to help, but wouldn’t focusing on your core offering be a better use of their time?

You’re now considering adding value to your customers by giving back the data you’ve collected through a new analytics offering, and you wonder whether to build the tool yourselves or buy an existing tool.

Enter Powered By Looker, an analytics solution that allows you to provide data access to your customers in precisely the way they want, from a single data point to a fully white-labeled version of Looker. Our customers led us to this solution—they realized how valuable it would be not only to analyze data internally, but also to provide insights to their own customers through Looker. As a result, we extended Looker’s capabilities beyond the walls of their organizations.

Since launching Powered By Looker, we’ve received excellent feedback and suggestions. We just added the ability to customize the look and feel of embedded dashboards, and we plan to implement many other features, such as allowing end users to save their own reports. With so many vendors offering solutions that incorporate amazing data science, visualization, marketing analytics, and more, differentiating among them can be challenging. Why do our customers love our embedded analytics solution more than others in the oversaturated BI market? Looker is an ideal choice because it helps you:

1) Stop Losing Engineering Focus

Your engineers should be focused on making your product better, not on scrambling to create a new BI product outside of your core competency. In addition to the initial development of a new product, there will be customer support needs and ongoing maintenance, requiring an increasing amount of effort and time as more customers sign up. In companies that try to build an analytics tool in-house, we’ve seen that a) the project draws engineering focus away from core offerings and b) the tool inevitably gets stale very quickly once engineering resources move elsewhere.

Providing you with the best BI experience is Looker’s core focus—our product (and your experience) gets better with every release. All you have to do is model the data and choose what you’d like to expose and how, a process made even easier if you’re already using Looker internally.

2) Stop Moving Your Data

Unlike most BI tools, Looker works entirely in your database. You don’t have to move your data out of the relational format it’s in, nor will you have to give your data up to Looker. This is great, especially if your team is using an MPP database (such as Amazon Redshift, Google BigQuery, or HP Vertica), which is blazing fast and easy to scale with increasing data volume. All questions are answered using your database, so you can take advantage of the investment your team has made in your infrastructure. And because source data doesn’t move out of your network, you enjoy an added security benefit.

3) Stop Querying for Others

In the traditional analytics process, a person who has a question asks an analyst for answers. The analyst has to identify the data, think about how to measure the results, write a query (and probably find some mistakes), then finally get back to the requester. Then comes the follow-up question, which is often just the first of many.

Instead of this inefficient process, Looker enables end users to query information on their own—without any understanding of programming or data—by translating their questions into SQL via LookML. LookML is a business logic layer that abstracts SQL queries, making it easier to define and maintain attributes and metrics. This not only enables end users to get the insight they need (and more), but also guarantees consistent and reliable calculations across users.
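To make that concrete, here is a minimal LookML sketch (the table and field names are hypothetical): an analyst defines dimensions and measures once, and Looker generates the SQL whenever anyone asks a question of them.

```
# A hypothetical view file: fields are defined once, in one place.
view: orders {
  sql_table_name: public.orders ;;

  dimension: status {
    type: string
    sql: ${TABLE}.status ;;
  }

  dimension_group: created {
    type: time
    timeframes: [date, week, month, year]
    sql: ${TABLE}.created_at ;;
  }

  measure: order_count {
    type: count
  }

  measure: total_revenue {
    type: sum
    sql: ${TABLE}.amount ;;
    value_format_name: usd
  }
}
```

When a business user picks “Created Month” and “Total Revenue” in the Explore UI, Looker writes the grouped SQL for them, so every user gets the same definition of revenue.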

Here Are a Few Examples in Practice

CampusLogic

CampusLogic purchased Looker in late February 2016 and went live with its product about two weeks later. Through Looker, CampusLogic powers its CampusMetrics service, a cloud-based student financial aid analytics platform.


Urban Airship

Urban Airship embeds Looker to enable their customers to uncover user-level insights about their mobile app audience. Urban Airship Insight combines performance dashboards, funnel reporting, cohort analysis, customer segmentation and ad-hoc exploration in Looker to help customers understand the “why” behind ROI.


Google Cloud

Google worked with Looker to embed analytics that convey the power of BigQuery. Try it yourself! Explore public data sets through an embedded Looker experience.


What Happens Next

You’ve launched a terrific new product, but your work’s not done. You need an analytics solution that respects your engineering team’s time and talent, keeps data in your database, and empowers end users to create their own queries. You need Looker.

If you want more information about embedding an analytics solution in your web page or application, please reach out. We’d love to help!

Catching the Third Wave


Mary Meeker of Kleiner Perkins released her annual Internet Trends report this morning, and I thought about going surfing for sure (I don’t surf - but maybe now’s the time?). If you don’t know her, Mary is a bit of an Internet legend. When I was at Google back in the early 2000s, Jonathan Rosenberg always sent around her report, requiring that everyone read and understand her analysis. It was great advice that forced me and my teammates to stick our heads up above the day-to-day and think about what is happening in technology and what it means for the world. This year I was excited to see that data is part of her focus - specifically Data as a Platform. Every day we talk to Looker customers who tell us how they have fundamentally changed their way of doing business with the Looker Data Platform. And now, this massive shift has stepped onto a bigger stage.

The three waves she talks about are something we see every day. Data is either stuck in a bottleneck behind monolithic systems or is spread across a company stuffed into business applications and bolt-on BI tools. The Third Wave is the change our customers are experiencing with cool innovations like the Lookerbot in Slack and our Looker Data Apps where data becomes integrated into the very fabric of everyone’s day-to-day work.

Evolution of the Data Platform

We were also thrilled that she singled us out as one of the new breakout players pushing this third wave forward - showing how we’re moving from the chaos and complexity of the old way of analyzing data to the new data platform where data is integrated into every function and every workstream.

Evolution of the Data Platform

In reality, this trend is happening because of the innovative and unique ways customers like Warby Parker, Kohler, Avant, and BuzzFeed have turned Looker into a central decision-making platform, woven into every action and decision of their business. Every day we are thankful to have such smart and creative customers who are driving forward a new way of doing business and becoming the next wave of powerhouse companies.

A Confession


Hi. My name is Daniel, and I love SQL. Ever since I first started learning how to write queries on MySQL a decade ago, I’ve loved the discipline that SQL imposed on me. It forced me to think through my analytic questions in a, well, structured way.

And using SQL, I could answer just about any question I could come up with. It’s an immensely satisfying and empowering feeling, and it’s made me better at every job I’ve held. With SQL and the right data, whenever I had to make a decision, I could make it better using data.

Having professed my love for SQL, I should also point out that I’m fully aware of its warts. SQL is, for all intents and purposes, a write-only language. Even when I’m looking at a query I wrote, if it’s more than a couple weeks old, it’s generally easier to start from scratch than try to figure out what the heck I was doing.

And collaboration is a nightmare, since making sense of others’ SQL is even worse than reading my own. And that’s without even getting into the lack of versioning or difficulty of organizing large amounts of code. Or even the stupid, simple stuff, like how hard it is to pivot results or do a median calculation in most dialects.

But we analysts have reconciled ourselves to SQL’s drawbacks, because it’s great at what it’s built for: letting us quickly and directly get at the data we’re looking for. Plus, it’s not going anywhere. The growing popularity of NoSQL datastores notwithstanding, SQL has a 40-year track record and, along with C, runs the world.

So, when I first saw a demo of Looker three years ago, the first thing that struck me was that, unlike most analytic tools, Looker doesn’t try to hide the fact that it’s using SQL. The SQL that Looker generates and sends off to your database is always right there for examination. And what’s more, unlike most machine-generated SQL, it’s quite readable!

Naturally, I wanted to know how Looker managed that, and the answer was (and is) LookML.

Unlike some proprietary analytics languages, LookML doesn’t try to reinvent the wheel. It’s written by people who know and love SQL, so LookML aims to keep all the power and flexibility of SQL, but to smooth off some of its rough edges.

The result is a pleasure to use. LookML:

  • breaks your SQL down into bite-sized chunks so that future-you can actually make sense of how now-you defined a particular measure or dimension.
  • encourages collaboration, because others can make sense of what you wrote at a glance.
  • lets you define the way tables relate to each other once, and then uses that information to construct joins properly everywhere.
  • allows you to specify formatting for measures and dimensions right in the code, so you don’t have to export data to Excel to format numbers as dollars or percents or decimals.
  • makes it as simple to do pivots and running totals and complex case statements as it is to do sums and averages.
  • ensures that no one forgets that one critical predicate that you always have to include if you want accurate results. (For me, it was always specifying that order status was 'completed'; the sketch after this list shows that pattern.)
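Here is that sketch, hedged and with hypothetical table and field names: the join logic and the critical predicate are written exactly once in the model, and every query inherits them.

```
# Hypothetical model file: define the relationship and the
# always-required predicate once; every query inherits them.
explore: orders {
  # No one can forget the critical filter again:
  sql_always_where: ${orders.status} = 'completed' ;;

  join: users {
    type: left_outer
    sql_on: ${orders.user_id} = ${users.id} ;;
    relationship: many_to_one
  }
}
```

Every query built on this explore now joins users correctly and only counts completed orders, no matter who writes it.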

Fundamentally, LookML isn’t a mess of unreadable SQL files. It’s real code. Code that’s organized and version-controlled and extensible.

Is there a learning curve? Sure, there is with anything worth learning. But if you already know and love SQL, LookML will feel like home. You can learn the basics in a few minutes, and in an hour you’ll be using LookML to build complicated queries that would have been a huge pain to write in raw SQL.

As you get comfortable with LookML, you’ll find that instead of spending time trying to find that query you saved in untitled43.sql last month or looking up the differences between Postgres 8.x and 9.x, you’ll rely on LookML to handle those details so you can focus on the interesting stuff.

And that’s how programming languages get better, not by throwing out the old, but by abstracting away the low-level concerns so developers can focus on building cool stuff. LookML does exactly that for SQL. It’s not a replacement; it’s an evolutionary step, and if you want to stop worrying about the silly stuff and focus on doing great analyses, I think it’s one you’ll love.

Unlocking Census Voting Data with Looker and BigQuery


Government-collected datasets contain a wealth of information about our world—everything from which roads have the most potholes to who votes. And in recent years, governments and other public agencies have made huge strides in opening datasets to the public.

Projects like Data.gov, the NYC Open Data Portal, and Data.gov.uk catalog tens of thousands of fascinating datasets that anyone can download and explore. But even though we’ve come a long way, the promise of these datasets—to shed light on how government is performing, on where discrimination persists, on how to increase civic participation—is far from fulfilled.

The data is more accessible than it’s ever been, but for the vast majority of citizens, having to download a huge CSV to their computer or having to map latitudes and longitudes manually is such a barrier to entry that the data might as well be locked away in private repositories.

That’s why we’ve partnered with Google Cloud Platform to take some of those fascinating datasets and make them available for querying with Looker on Google BigQuery. That takes data from being nominally accessible, to actually explorable for anyone with an internet connection.

Of all the public datasets, one of the richest is the one collected by the U.S. Census Bureau. Many people don’t know this, but the Census does far more than just the decennial survey. It also includes surveys like the Current Population Survey and the American Community Survey that ask a sample of Americans literally hundreds of questions about their lives.

Since it’s an election year, we thought a nice dataset to start with would be the Current Population Survey’s Voting and Registration Supplement, which is collected in November after every election and goes back to 1994. The problem is, if you go to download this data, you’re presented with a data file that looks like this:

DataFerrett Census Data Sample

Not very user-friendly. To make sense of this, you need to consult the codebook, which is different for every survey and survey year and looks something like this:

DataFerrett Census Data Sample

Needless to say, without very specialized tools, the process of extracting meaning from this data is usually quite onerous. The data isn’t super tall—about 1.5 million rows—but each row is more than 300 columns wide. And since each respondent is given a different weight in the survey, making sense of the data is no easy task.

Luckily, giving meaning to data is exactly what Looker is good at. And with the power of Google’s BigQuery behind the scenes, we can slice and dice the data in millions of ways and get answers in seconds.

To transform the 60,000+ lines of codebooks into something useful, we’ve written a couple of Python scripts to rewrite the information in LookML (Looker’s modeling language). A few hundred lines of scripting transforms the codebooks from an impenetrable mess into code that Looker deploys to let you query the data directly in BigQuery, no special knowledge required.
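To give a flavor of the generated model, here is an illustrative fragment (treat the variable name and codes as hypothetical, written in the style of CPS codebooks): the script turns a numeric survey code into a labeled LookML dimension.

```
# Illustrative script output: the codebook's numeric answer codes
# become a human-readable dimension anyone can query.
dimension: voted {
  case: {
    when: {
      sql: ${TABLE}.PES1 = 1 ;;
      label: "Voted"
    }
    when: {
      sql: ${TABLE}.PES1 = 2 ;;
      label: "Did not vote"
    }
    else: "No response"
  }
}
```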

And once we deploy Looker, what do we see?

We see that the percentage of voters who are white, non-Hispanic, and haven’t taken any college courses has been shrinking steadily, going from 34% in 1994 to just 20% in 2014.

This is both because Americans are getting better educated over time and because the country is becoming more diverse.

We see that Baby Boomers made up the largest share of voters in 2012. And although there were far more Millennials eligible to vote than members of the Silent Generation, there were actually more voters from the Silent Generation. We’ll see how that looks in 2016 soon.

We can also easily see trends unrelated to voting, like the housing bubble. We see that California, Florida, and Nevada, three states that were devastated by the housing bubble, saw big increases in population and homeowners from 2000 to 2006. But from 2006 to 2012, their population increases slowed and huge numbers of homeowners left or lost their houses.

But states like Texas, Georgia and North Carolina, which weathered the housing bubble relatively well, saw increases in homeowners from 2000 to 2006 and from 2006 to 2012 (albeit at a slower rate). These states have continued to see strong growth in both overall population and homeowners.

These analyses only scratch the surface of what we can discover when we have an easy way to explore the Census’ incredibly rich datasets. We’ve added a bunch more shareable facts that we’ve discovered at CensusLooker. And over the next weeks and months, we’ll be releasing additional analyses and datasets that we hope you’ll use as jumping off points for your own analyses.

But if you don’t want to wait, next week, I’ll be doing a Reddit AMA where you can ask me to look up any question that the Census can tell us about. I’ll provide answers live on Thursday, July 28 at 1 p.m. ET.

P.S. Here are a bunch of other questions that can easily be answered from this data set:

By state, what percentage of workers work for the government? (Washington, D.C. is actually #6, not #1.)

How many men vs. women have served in the Armed Forces since 2001? (Overwhelmingly male, but more gender-balanced than previous periods)

Which state has the smallest percentage of white, non-Hispanic residents? (Think tropical.)

Does when you immigrated predict your chance of making >$60,000? (It’s not a perfect correlation by any means, but earlier immigrants do seem to make more.)

How many teenagers are in the U.S.? (That’s a lot of teenage angst.)

Which Metropolitan Statistical Areas have the highest percentages of government workers? (Anyone know what’s in Gainesville, FL?)

Do naturalized citizens earn more than foreign-born, non-citizens? (In general, yes, though the difference is maybe less pronounced than you’d think.)

And that’s just a taste. If you have questions of your own, come to my Reddit AMA on July 28.

Flexible Visualization Editing


For years, Looker has been providing the best interface for querying your data.

Today, Looker is excited to be releasing an improved interface. Ideal for editing charts and creating beautiful visualizations, the new interface builds upon the existing platform to support more powerful features previously unavailable to most users.

A New Focus on Series

For this update, we’ve made some improvements to help streamline your process. Column, Bar, Scatter, Line, and Area charts now utilize a new Series tab, allowing easy configuration of series-specific options, as well as an improved overall layout of visualization editor options.


Series represent the groups of data that underpin a visualization. By focusing on configuring each series, we’ve improved the usability of the chart editor significantly and exposed some powerful new functionality, providing more granular control over series’ colors, labels, and layer types.

Making Color Pop

You can still edit the overall color scheme for a chart using the color picker at the top of the Series tab, but now editing an individual series’ color is easier than ever. When changes are made - either via the color palette picker or via a series’ individual color picker - they are immediately reflected in the Series tab and the chart.


Series Label

Updating a specific series’ label is now incredibly easy, even if that series is derived from a query containing a complex pivot, such as a tiered field. You can also toggle the full field name for all the series that do not have a custom label applied by using the Show Full Field Name option.


Simplify the Creation of Complex Charts

Finally, using a series’ Type picker, mixing chart types is now amazingly simple. For example, you can easily create charts that mix column and line types, to better express the meaning in your data.


The Stage is Set

These new improvements not only make editing a chart simpler and expose much richer functionality, they also provide a foundation for future visualization editing improvements. We’re passionate about expanding these interaction changes to other chart types, and committed to making Looker the best platform for exploring your data and using vibrant visualizations to tell the whole story.

We can’t wait to see what you come up with!

Looker + Stitch: Combine and derive immediate value from all your disparate data


Looker is excited to be a launch partner for Stitch (formerly RJMetrics Pipeline).

A fully managed service, Stitch allows you to extract data from various sources and load it into a central warehouse that you connect directly to Looker.

Centralize Your Data with Stitch & Looker

Looker delivers a next-gen BI platform that marries scalability and governance with agility and self-service. Companies can point Looker to any database and use its modeling layer and web interface to make raw, granular data accessible and explorable to the entire enterprise.

As data centralization is core to Looker’s philosophy, we rely on our ecosystem partners to offer a complete data stack by pairing best-of-breed solutions.

By using a tool like Stitch together with Looker and a fast analytical database, our customers routinely go from having data locked away in disparate sources to having a single source of truth accessible to every line of business in a few days, if not hours. Stitch has out-of-the-box connectors for databases like MySQL, Postgres, and MongoDB; SaaS applications like Salesforce, NetSuite, and Facebook Ads; as well as a RESTful API that lets you push raw data from any source through the pipeline. Stitch currently loads data into Redshift, with plans to add Postgres and BigQuery support in the next few months.

Looker has been partnering with Stitch since its beta release last year. Companies like Sprig, InVision, and BeenVerified all rely on Redshift, Looker, and Stitch to power their analytics efforts.

Looker Blocks for Stitch Data Sources

Looker Blocks make it easy for companies to quickly deploy expertly built, tailored solutions specific to each business unit or data source. They are also a great way for partners like Stitch to make the data they are replicating into a data warehouse immediately actionable.

Stitch has built three Looker Blocks so far - for Salesforce, Zendesk and Facebook Ads sources. These leverage Stitch’s expert knowledge of their API calls and resulting database schema, which they’ve built for simplicity and optimal performance. Each Block is then fully reviewed by Looker analysts for modeling efficiency and subject matter expertise.

The resulting Blocks offer development simplicity and unbeatable time-to-value from disparate data sources to a full suite of analyses and dashboards that jumpstart your data exploration and operations.

Example analyses employed by Looker customers:

Sales Rep and Customer Health Dashboard using Salesforce combined with Zendesk data (note the filter on a specific Sales Rep)



Sales Performance and Pipeline Projections, using Salesforce data



Facebook Advertising Performance Insights using Facebook Ads Data


Each Block is powerful on its own, but becomes exponentially more so when you join the data from disparate sources into the same analysis. By combining CRM data with your transaction database, payments and advertising data, you start to get a holistic view into your customers and unlock actionable next steps to make them more profitable.

Try Stitch & Looker

Refer to the Looker Discourse site for more details on Looker Blocks. To access the Source Blocks, reach out to your assigned Looker analyst, or request a Looker demo and trial.

To learn more and start centralizing your data, visit Stitch and create an account today.


Forrester Vendor Landscape - Insights Platform


Every day, we work with innovative companies that are integrating data into their operations in ways that are transforming their business both competitively and culturally. So it’s easy for us to acknowledge that our industry - Business Intelligence, Big Data, Visualization and Data Management - is going through a massive transformation. Mary Meeker did a brilliant job surveying what has been happening in her annual Internet Trends report and called it the Third Wave of the evolution of the Data Platform.

Now Brian Hopkins of Forrester Research has taken up the big job of trying to identify, understand, and categorize the ecosystem. Taking both a step back and a deep dive, he has produced an extensive report called the Vendor Landscape: Insights Platform, Q3 2016.

Hopkins categorizes Looker as a Business Insights Platform which he defines as a platform that “Provide(s) a tightly integrated set of tools to help business users work with data to explore it, analyze it, and operationalize insights.”

That seems about right. We see our customers integrating Looker so deeply into their processes that it no longer looks like a traditional BI tool - something a small number of people check and run reports from on a regular basis - but instead becomes part of the day-to-day work of the company. Definitely a Data Platform or Organization-wide Analytics Platform, as Mary Meeker categorizes us, and perhaps even more so a Business Insights Platform, as Hopkins defines us.

As an example of where Looker can bring value to a company, Hopkins says:

“Your sales organization likely recognizes that its account managers need deeper insight while they are meeting with customers in the field. The insight team supporting sales enablement may find it easy to push insights from a business insight platform like Looker Data Science, which can also host data applications that deploy insights into customer relationship management (CRM) solutions.”

Our customers - especially Warby Parker, HubSpot, and Dollar Shave Club - would definitely agree.

Building A Census Data Application In No Time Flat


If anyone is wondering what kind of interactive data applications you can build when you have a powerful data platform like Looker available, today we’ve got a great, easy example.

Today we’re launching “Explore the Census”. It’s a really lean example of how you can turn an interesting analysis into an engaging, polished data story that’s Powered by Looker.

We downloaded and groomed a bunch of fascinating data from the U.S. Census, dug into what it could tell us about home ownership from 1994 to 2014, and visualized the results. We found some fascinating facts, but wanted to present them in a way that anybody could understand and interact with.

With just a little bit of web design, a jQuery plugin, and Powered by Looker’s embedding, we were able to turn some disconnected analyses into a whole data story that makes it easy for anyone to zoom in on their state to explore the data.

You might be thinking, “Ok, but I’ve seen things like this before, what makes this one special?”

Creating custom data applications is usually an arduous process, with lots of data preparation, custom data visualization, and little reusability. Explore The Census was done — from analysis to design to deployment — with less than a week’s work.

And now that we’ve got the first one wired up, this design is completely reusable. Adding additional stories, which we’ll be doing over the coming weeks and months, is a few hours of work.

So if you want to make your analyses more engaging and more accessible — to turn interesting findings into compelling data stories and engaging data applications — we think this is a great place to start.

The ROI of Looker, Part 1: Scaling and Cost-Savings


We have a saying here at Looker: “We’re in the business of getting people promoted”.

By that, we don’t only mean getting the person who buys Looker promoted for making a great decision. We mean getting every Looker user promoted as well. With broad access to reliable data, and the ability to perform their own analyses to make better business decisions, every employee can perform their job better. We’ve heard from tons of users who were promoted thanks to insights they uncovered through Looker, and nothing makes us happier.

One way many customers like to show the impact of Looker on a business is through the lens of ROI. We’ve seen this so frequently, we wanted to share the logic with our larger customer base and the public.

As with any software, the ROI of Looker can be difficult to fully quantify. Simple calculations on cost-savings account for some of the returns, but fail to capture the far-reaching and more nuanced returns of improved business processes. How can you accurately measure the financial impact of operationalizing data, or of competitive advantages? It’s not easy.

For part 1 in this blog series, we’re going to look at the ROI through the lens of FTE savings as a company scales. In part 2, we’ll get into some customer examples that demonstrate the returns of those improved business processes achieved through use of Looker.

“Before, the work that two of us are doing on our data analytics team would probably have taken 5 or 6 different people. With Looker, it’s a few clicks away.”
- Sandeep Kamath, Senior Manager of Analytics, ShopRunner

We’ve often described the typical data breadline experience most companies face with inflexible data pipelines and tools. In this all-too-common scenario, business users seek answers to business questions from their analysts, who add that request to a long queue of questions from other users. Because of the bottleneck here, analysts typically end up reusing old queries and business users end up relying on the same periodic reports and vanity metrics. Business users are never given the ability to ask and answer new questions for themselves.

As a business grows, the number of users who need data, the amount of data, and the complexity of analysis all grow as well. This means the business needs to hire more data analysts to service those needs with more manual SQL. Costs are increasing, just to keep pace with the growth of the queue, but even so, business users are still facing the same bottleneck problem.


With Looker, each analyst’s work is far more leveraged, so the user-to-analyst ratio is far higher. That means that the number of analysts, and the associated costs, grows much more slowly. Looker translates an intuitive UI into optimized SQL queries to be executed across your entire database, down to row-level data, so any user can self-serve with reliable data. Analysts define the business logic once, in Looker’s modeling layer, and every user leverages those definitions.

This modeling layer, LookML, is the critical component that saves the need for additional analyst resources as a company scales.

“The department has 1,000 people, there’s very few data analysts in this new model for us [with Looker]. The analysts are required for the insights into the data, but for just querying data, the people can do it themselves.”
- Suresh Duddi, VP Engineering, Yahoo

Once an analyst has defined the business logic, users have infinite flexibility to query the data. And when the underlying data or schema inevitably changes, all the business logic can be adjusted with just a few lines of code, rather than breaking all existing reports and dashboards.
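As a quick illustration (with hypothetical table and column names), suppose the warehouse team renames a column: a one-line edit to the model keeps every dependent report working.

```
# Before the schema change, the dimension pointed at the old column:
#   dimension: status { sql: ${TABLE}.order_status ;; }
# After the rename, one line changes, and every Look and dashboard
# that uses "status" keeps working untouched.
dimension: status {
  type: string
  sql: ${TABLE}.order_state ;;
}
```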

We often see companies that are growing quickly turn to online SQL visualization tools. These tools are definitely an improvement over basic SQL editors, and make it easier to create visualizations and dashboards, but there’s no leverage. Every query still has to be written by a data analyst, so even if the executives have nicer dashboards, anyone who doesn’t speak SQL is still totally dependent on a data analyst.

That’s the fundamental difference between a true modeling layer and reusable SQL queries: software scales far better than analysts writing every query by hand.

“There was a lot of wasted time and energy when you think about starting from square one every time. Now with Looker, someone’s already done that for you, and agreed on this is the best way to look at this data and you can get right into making decisions with data.”
- Robert Olsen, Director of Data & Analytics, DigitalOcean

With salaries for experienced data analysts surpassing $100k/year in many cities, Looker easily pays for itself in short order. We regularly hear from companies that were hiring a new analyst for every 20-30 total new hires but, thanks to Looker, have been able to meet the analytic needs of hundreds of employees without hiring any new analysts.

Some of our customers have great examples of these savings in practice. I’ll go into detail on several of these customer stories in part two of this blog series.

“Looker has definitely helped us in keeping overhead down for the analysts in the company. It’s allowed us to focus on working in the teams, and on doing the forward-looking analysis, rather than working on maintaining our data infrastructure.”
- Erik Johanssen, Data Analyst, TransferWise

If you’re interested in projecting the ROI of Looker for your specific company, or in learning more about how Looker can save you costs, please reach out and we’d be happy to chat!

Campaign to Cash Looker Block - Monitor and Measure your Marketing Investments


From Google Analytics to Marketo to Salesforce, modern marketing organizations utilize multiple tools to execute and track campaigns. With so many different data sources, it can be difficult to get a holistic view of marketing’s influence on opportunity conversion and revenue generation. To solve for this, analysts often manually blend data from all these systems to understand how to optimize their marketing mix and improve their return on investment. This manual reconciliation takes a significant amount of time, is prone to errors, and inherently prohibits real-time analysis.

Built by industry experts on a foundation of years of experience in marketing and sales analytics across many industries, Datasticians’ Campaign to Cash Looker Block addresses common problems in marketing analytics, such as contribution to revenue, leads-to-pipeline-to-revenue conversion, and campaign performance against targets. Since it operates on the Looker platform, with all business logic residing in a centralized modeling layer, there’s never a need for manual reconciliation, and all analytics are surfaced in real time.

The Looker Block for Campaign to Cash Analytics works with data from marketing systems (Eloqua, Marketo), sales data (Salesforce), ERP (Oracle/NetSuite), and website/social marketing systems, and ties them together into a single central warehouse (Redshift, Snowflake, RDBMS, Google BigQuery, etc.) to allow users to gain insight from a blended data spectrum.


Below is a sampling of the analysis provided:


Campaign to Cash Analytics includes pre-built dashboards to track marketing campaign contribution to sales pipeline and revenue, conversion rates per campaign/offering, performance analysis against other like campaigns, and cost-per-lead and cost-per-customer analysis. Each is customizable to client needs, and the list keeps expanding.

Dashboards include but are not limited to:

  • Campaign ROI
  • Campaign/Offering Conversion
  • Trend Analysis
  • Ability to drill to details from key metrics

Build Your Own Ad-hoc Reports on Standardized Metrics


The Campaign to Cash Looker Block empowers users to get quick answers to their marketing questions at the speed of thought, independently and without relying on external teams. Using ad-hoc data sources comprised of standardized critical marketing metrics, users can build their own custom reports tailored to their specific needs and marketing processes.

This allows for quicker user adoption and lower total cost of ownership, and gives users the tools to answer their own questions and own their data journey.

If you’d like to try the Campaign to Cash Block, reach out to a Looker representative now! They’d be happy to help.

See what Looker can do for your organization here.

BigQuery Standard SQL + Looker


Last year, we were excited to announce our support for Google BigQuery. At Looker, we love learning about advancements in database technology because our architecture fundamentally depends on it. The more flexible and powerful databases become, the more flexible and powerful Looker is.

We were particularly psyched to explore BigQuery because it’s been powering Google’s internal data culture for more than a decade. When Google introduced BigQuery to the public, the first thing that grabbed us was its unique multi-tenant setup, which means that your database essentially never runs out of storage and never gets slow. This is ideal for Looker, as our LookML modeling language is dependent on querying data sources directly.

Another key tenet of Looker’s philosophy is making data available to everyone in an organization, allowing everyone to be their own analyst. BigQuery closely aligns with this philosophy as well, and demonstrates it by making data management easy and accessible for any user. Since BigQuery is a fully managed service, there’s no need for a database administrator and there’s no infrastructure to maintain. Uploading data is trivial and sharing data is as easy as sharing in Google Docs.

At its start, BigQuery was a bit of an “Ugly Duckling”…

Because it was initially built to support Google-sized data, BigQuery was developed to be extremely powerful, but unfortunately not without a few quirks. Features that made BigQuery work with really large datasets - and it worked well - weren’t exactly standard. The SQL used by BigQuery wasn’t entirely familiar, and perhaps a bit “ugly” when compared to the elegance of the SQL standard.

We partnered with Google Cloud to fully support BigQuery anyway, and we’ve seen a number of customers deploy BigQuery and Looker together with much success. But we also assumed that the great team behind BigQuery was only getting started, and that maybe we’d get to see the original “ugly duckling” transform into something a bit more reflective of its incredible architecture under the hood.

Today we’re excited to see that Google hasn’t disappointed.

BigQuery’s transformation from “Ugly Duckling” into a “Beautiful Swan”

Google recently announced BigQuery support for Standard SQL. This means that users can get the same power from BigQuery, but with a much lower barrier to entry. Transitioning to BigQuery from existing data workflows and systems is now much easier, since the SQL is consistent with other databases.

But that’s not the only new thing BigQuery is rolling out. They’ve also announced:

  • Timezone Support: Ensuring time consistency in reporting
  • Partitioned Table Support: Reducing query complexity and improving query performance
  • Improved Query Optimizer: Including support for complex joins
  • Predicate Pushdown: Allowing for more efficient joining
  • Arrays and Structures: Making BigQuery a leader among relational datastores in querying nested and repeating structures in tables (what many think of as NoSQL or unstructured data). This implementation works great with BigQuery’s scanning architecture.

What does this mean for Looker’s integration with BigQuery?

In short: Looker and BigQuery work better together than ever before.

We’ve worked with Google Cloud Platform to make sure that Looker is ready with full dialect support for BigQuery’s new Standard SQL (in Looker 3.56). In addition, Looker will continue to support the legacy dialect to help customers transition seamlessly to Standard SQL.

Standard SQL will make writing LookML for BigQuery easier and more flexible. And just like with any other SQL dialect, Looker lets you leverage all the power of BigQuery directly in the modeling layer. Data is never abstracted into another engine for transformation.

Additionally, LookML natively supports BigQuery’s repeating nested objects as if they were joined tables. This means that modeling and querying nested objects is now as easy as querying standard columnar tables.
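For instance (a sketch with hypothetical view and field names), a repeated nested field can be exposed by unnesting it in an explore join, after which it behaves like any other joined view:

```
# Sketch: surface a repeated nested field as a joinable view
# using BigQuery's UNNEST in the join's SQL.
explore: persons {
  join: persons__citizenships {
    view_label: "Citizenships"
    sql: LEFT JOIN UNNEST(${persons.citizenships}) AS persons__citizenships ;;
    relationship: one_to_many
  }
}
```

Users querying the explore never see the UNNEST; they just pick fields from “Citizenships” as if it were a normal table.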

With a database as big and infinitely scalable as BigQuery, the queries are never slow. Since BigQuery prices by GB scanned, Looker shows the cost of all queries before they are run so you can keep track of your spending and identify areas where your data model needs further optimization.

Get started with Looker and BigQuery

Want to learn more about Looker and BigQuery? Check out our whitepaper and request a demo!

Article 0


Parse.ly recently introduced their Data Pipeline product. Building on the analytics infrastructure expertise they’ve developed by processing over 50 billion user events per month, Parse.ly is now making its fully-managed pipeline available as a service for developers. Specifically, their open source recipes for streaming data compatible with Redshift and BigQuery provide an easy way to get started with Looker. Andrew Montalenti, CTO of Parse.ly, details the joint solution.

Making Analytics Work for Content

Parse.ly is an analytics platform designed to make it easy for anyone to access and understand their digital audiences. Whether you need real-time or historical insights about your content, the Parse.ly dashboards and APIs help teams monitor, promote, and improve based on data.

Thousands of typically data-averse editors, marketers, and content creators have adopted the intuitive dashboards because we’ve removed the complexity and jargon around digital analytics. Plus, we’ve helped product teams rapidly develop features like content recommendations to drive higher on-site engagement through a simple API.


The real-time overview screens within Parse.ly’s web and mobile dashboards.

Parse.ly has over 170 customers, including media companies like TechCrunch, Slate, and Mashable, and brands like Artsy, Ben & Jerry’s and more.

Questions No Vendor Dashboard Can Answer

With our dashboard covering the basic questions for the entire organization, our customers became more sophisticated and started asking questions specific to their business. So, we introduced Data Pipeline: a new service from Parse.ly that provides clean and enriched raw event data collected from your sites and apps via a fully-managed service.

We handle data collection on infrastructure that has already been scaled for over 700 top-traffic websites and 500 million monthly unique visitors. We also take care of enriching the raw events with useful information like geolocation and device categorizations, leaving you with something immediately ready for analysis.

When Parse.ly decided to branch out from our core dashboard offering to provide raw data access, we knew we wanted to partner with a business intelligence platform that shared our company’s philosophy of “analytics for everyone”. Looker completely embraces this philosophy and was an easy choice as a launch partner for Data Pipeline.

parse.ly data flow

  • Are you a media company or digital publisher?

If you answered “yes”, you now have your ideal data source for Looker: a clean, enriched raw data source for metrics like unique visitors, pageviews, sessions, time spent, video starts, video watch time, and more. Just integrate Parse.ly via our standard JavaScript tracker and SDKs, and the data will simply flow.

  • Are you a B2B or B2C marketer?

As more marketers invest in content marketing, getting data on content/audience engagement has become key to understanding and improving their strategy and proving their value.

If you’re a B2B or B2C company that has been investing heavily in your website, knowledge base, online resources, public documentation, and blog, and you want to unify audience data from all of these sources to get a complete picture of your content marketing efforts, Parse.ly can help. You simply follow our standard integration instructions, and data will flow.

Getting the Data Pipeline to work with Looker

Event data is delivered to you in raw form via a fully-managed AWS Kinesis Stream (for real-time data) and AWS S3 Bucket (for historical). From there, you have two great options to load it into a Looker-compatible SQL database, while fully controlling your extract, transform and load (ETL) process:

  • Near-Real-Time Bulk Loads: Run a cron job or similar to issue a Redshift COPY command from your S3 bucket or a BigQuery load command from the same. This will get data into your warehouse with latencies as low as 15 minutes from the time of data arrival.

  • Real-Time Streaming Writes: Spin up a long-lived process in your favorite language (we recommend Python) that consumes data from your Kinesis stream and does streaming writes to either a Kinesis Firehose Stream configured to point to your Redshift instance or to BigQuery’s streaming write API. This provides the fastest latencies possible; for Google, this can be sub-minute latency.

Our product has standard schemas for our event records, and we have these defined using Redshift and BigQuery DDL as well. But even better — our partnership with Looker means we’ve built a Looker Block atop this standard schema, meaning that most of the basic LookML modeling work is done for you already. You don’t have to spend time deciphering attributes or learning how to properly structure a query to count unique visitors or sessions; you can just explore your data and start getting answers.
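For instance, a Block over a raw-events schema might define visitor and session counts along these lines (a sketch; the actual Block’s table and field names may differ):

```
# Sketch of Block-style measures over a raw pageview events table.
view: parsely_events {
  sql_table_name: parsely.rawdata ;;

  dimension: visitor_id {
    sql: ${TABLE}.visitor_site_id ;;
  }

  dimension: session_id {
    sql: ${TABLE}.session_id ;;
  }

  measure: unique_visitors {
    type: count_distinct
    sql: ${visitor_id} ;;
  }

  measure: sessions {
    type: count_distinct
    sql: ${session_id} ;;
  }
}
```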

parse.ly data flow

Above is an example Looker dashboard built from Parse.ly’s Data Pipeline and the Looker Block for Parse.ly. The customer receives the streaming data (via Kinesis/S3) and loads it into Amazon Redshift or Google BigQuery, while maintaining full control over the ETL process. Using the Parse.ly Block, Looker queries the standard column names and types that are common to Parse.ly raw events. This provides an easy starting point for your exploration, from which the LookML model can be further developed to support analyses unique to your company.

Want to get started with Parse.ly and Looker?

Refer to the Looker Discourse site for more details on the Looker Block for Parse.ly. To access the Block, reach out to your assigned Looker analyst, or request a Looker demo and trial.

To learn more and start using the Parse.ly Data Pipeline, sign up for an account today.

Introducing Looker 4


We are so excited to introduce Looker 4, fulfilling our vision of Looker being a complete data platform. It’s a major step forward.

With Looker 4, you not only have an amazing data exploration tool, but also powerful mechanisms to deliver data anywhere it needs to go. What’s more, Looker 4 introduces the most advanced suite of tools to help get you started yet.

Thanks to LookML, Looker’s centralized modeling layer, building data applications and services has never been easier. In Looker 4, LookML is coming into its own as a true language—one that allows you to completely describe your data in business terms so humans and machines can ask and answer complex questions simply.

We think Looker 4 is a quantum leap forward for data analysts, explorers, and consumers. And we think that by radically simplifying the data platform, Looker gives application developers and data scientists the tools they need to stop wrangling raw data and instead access curated data through Looker.

For Data Analysts

At its core, Looker gives you a language to describe your data in a complete and flexible way. In Looker 4, we’ve made that process even easier.

Our new Integrated Development Environment knows LookML and is there to help you every step of the way. The IDE suggests syntax as you type, provides context-sensitive help and documentation, catches errors in real time, and even completes the names of dimensions and measures you’ve already defined elsewhere.

In short, we think it’s a first-class IDE. And because Looker works off an abstraction of SQL, we think Looker 4 marks an important step forward in the world of data transformation—it’s the first IDE designed specifically for data analysis.

For Data Explorers

Facility with data is the core value of Looker. With Looker’s Explore page, we’ve made it even easier to get at your data in intuitive ways. Looker 4 adds the ability to fill in missing values in a known set (think missing dates or integers), solving a common SQL complaint and making forecasting dead simple, all right in the tool. Looker 4 also lets you drill into visualizations, picking the best possible visualization for you right in a drill overlay.

A big part of dealing with data is getting it to the right place. Looker 4 delivers unlimited data sets directly to Amazon’s S3, SFTP, or any web server through HTTP. Combined with IFTTT or Zapier, Looker allows you to deliver data just about anywhere.

And Looker 4 adds to Looker’s already impressive scheduler. You can now filter what gets delivered to whom, so each piece of data delivered is customized to the recipient. That lets you send the right dashboard to the right account rep, or the right report to the right store manager.

And of course we made great improvements in our visualizations. We’ve simplified the way you edit visualizations while giving you even more control. We built in map layers for zip codes, counties, and states, added automatic computation of trendlines, and added the ability to unpin the y-axis from zero and to reverse the x-axis. Visualizations tell your story, and these pictures should be worth a thousand words.

Data For Everyone

Once your organization starts exploring data, the number of views into your data—our Looks and Dashboards—starts to grow exponentially. And pretty quickly, it’s easy to lose track of which view of data is the official one. We recently introduced the ability to group users and give them different permissions to help address that problem.

But Looker 4 goes beyond that. With Content Discovery, Looker 4 adds search, favorites, and even recommendations to help you find the insights that your co-workers are finding most interesting. As Looker takes your organization from information scarcity to insight abundance, we think these features will prove critical.

Data Everywhere

On top of all that goodness in the Looker BI tool, Looker 4 also introduces a full, RESTful API. This gives application developers all the tools they need to leverage the work their data team has done to curate data in Looker. No more need to maintain parallel data systems. And no more need to hack things—Looker’s API lets you do everything you can do from inside the BI tool, and more, all programmatically.
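
As a rough illustration, here is a hedged sketch in Python using the requests library against the API 3.0 endpoints: authenticate with API credentials, then run a saved Look and fetch its results as JSON. The base URL, Look ID, and credentials shown are hypothetical.

# Authenticate against the Looker API and run a saved Look programmatically.
# Replace the base URL, credentials, and Look ID with your own.
import requests

BASE = "https://mycompany.looker.com:19999/api/3.0"

def get_token(client_id, client_secret):
    # Exchange API credentials for a short-lived access token.
    resp = requests.post(
        BASE + "/login",
        params={"client_id": client_id, "client_secret": client_secret},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def run_look(token, look_id, result_format="json"):
    # Run the Look and return its result set in the requested format.
    resp = requests.get(
        BASE + "/looks/%d/run/%s" % (look_id, result_format),
        headers={"Authorization": "token " + token},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = get_token("my_client_id", "my_client_secret")
    print(run_look(token, 42))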

In addition to making it easier to get data out of Looker, we’re also making it easy to act on data right from within Looker with Looker Actions. Looker Actions let you edit Salesforce records or send emails with SendGrid, all from Looker’s UI. You can connect from Looker’s UI and push data into just about anything.

Sorry, that’s a lot. We know. But that’s only because there’s so much packed into Looker 4. We can’t wait for you to see it. Shipping starts next week.


How to chat with Data


looker / slack logos



Big news: Lookerbot, our chatbot, is live in Slack’s App Directory. We launched Lookerbot in April and have watched it spread like wildfire among our customers. Since we last wrote about it, hundreds of organizations have come to use the tool daily. For companies like Buffer, Jet.com, and Docker, Lookerbot lets them Slack their data, making it easier for everyone in the company to be data-driven. Lookerbot’s inclusion in Slack’s App Directory means more people can now find it and see what it can do.

The success of Lookerbot shows how important it is becoming to access data wherever you need it: in a Slack conversation, on a warehouse floor, or powering an application. To make informed decisions based on data, that data can’t just be buried deep in some dashboard. This is what we are most excited about here at Looker: shifting the paradigm away from what is traditionally thought of as BI to a world where data-driven insights are at the fingertips of any user. Data can be inserted into conversations so decision-making happens more naturally and effectively than ever before. This announcement means we will not only continue adding functionality to Lookerbot, but also make it a top priority to give application and data developers the tools to innovate and bring data to teams in new ways. You can expect more awesome things like Data Actions as we continue to unlock the power of the Looker Platform.


“This is what we are most excited about here at Looker: shifting the paradigm away from what is traditionally thought of as BI to a world where data-driven insights are at the fingertips of any user.”


While bots are hot in the press and new ways to engage with data are exciting, we like to be transparent, and I know a lot of people still question whether chatbots add real business value. Colin, our head of product, put it simply in a recent webinar he did with Slack and HubSpot: “there are some things bots are great at and some things that they are not great at”. Getting immediate answers to data-based questions is one of the things they’re great at.

Chatbots provide an ideal interface for querying data because you can ask directly for something that you already know you need. This one-dimensional dialogue reduces the noise and confusion that can arise with other tools and more complicated interfaces. Lookerbot is particularly useful because you can query anything from your database. Unlike many chatbots, which come with a limited set of precanned commands, Lookerbot lets anyone build a custom command to ask and answer any question. With Lookerbot, you effectively give every user in your organization the power to execute their own custom queries and answer any question for themselves, all from within Slack. Powerful stuff.

Give your team the flexibility and power of direct data access, right at their fingertips, at any time. Try Lookerbot for your company today!


Lookerbot is available to existing Looker customers only.

Request a Demo


Segment + BigQuery Launch


We’re excited to be a launch partner for Segment’s support for Google BigQuery. Both technologies have proven incredibly powerful for our customers, and the joint Looker-Segment-BigQuery solution encapsulates a best-of-breed analytics stack for any data-driven business, from small businesses to enterprises.

Segment Makes it Easy to Centralize All Your Data


At Looker, we believe in using all the data possible to drive decision-making, and Segment has continued to make it incredibly easy to centralize data. Once Segment made application event data collected through their numerous integrations easily available in a data warehouse, we partnered with them to provide a joint solution leveraging Looker Blocks. Since Segment uses a consistent schema structure across deployments, the Looker Blocks for Segment are modeled around that expected structure, providing immediate time-to-value. Using Looker Blocks with data collected by Segment, it’s easy to expose insights into critical metrics such as visit duration, high-traffic content, DAU, and more.

Last April, we enthusiastically supported Segment’s launch of Sources, which allows you to extract customer data out of other tools and cloud services, and analyze it in your data warehouse. With Segment Sources like Salesforce, Zendesk, Facebook Ads, and Google AdWords combined with event data from your application, it’s easy to build out a comprehensive view of your business in one place. Over 100 Looker customers are already using Segment and Looker today, including Hotel Tonight, Bonobos, Docker and InVision.

Google BigQuery Provides a Fully Managed, Powerful Data Warehouse


Similar to Segment, we’ve supported Google BigQuery through incredible growth over the duration of our partnership. Since Looker pushes down all analytical workloads to the underlying database, we’ve always loved BigQuery’s architecture. Its unique multi-tenant setup ensures that your database essentially never runs out of compute power and never gets slow, which makes it ideal for companies with fast-growing data. BigQuery is also a fully managed service, which makes it great for any organization that wants to start deploying analytics company-wide quickly and easily.

Google recently announced BigQuery’s support for Standard SQL, meaning it’s now even easier to get started, especially when transitioning technology and resources from existing data workloads. Alongside this announcement, Looker announced support for BigQuery’s Standard SQL dialect. We’ve been working directly with Google to ensure the best possible experience. As a result, Looker is the only BI tool that can take advantage of every function in BigQuery in its modeling layer, effectively making the power of BigQuery available to anyone in your business.

Looker Makes Segment and BigQuery Accessible to All Your Users


Based on our relationships with both partners, we were thrilled to hear that Segment and Google were going to offer their services together. You can now use Segment to centralize data across your application and disparate channels into Google BigQuery, which provides fast time-to-query in a powerful, fully managed service. Looker provides the ideal data platform for this joint solution, leveraging the power of BigQuery directly to enable data exploration and visualization for all data captured by Segment. In conjunction with this launch, we’ve also created a Looker Block for Segment and BigQuery so it’s even easier to get started.

Get started with Looker, Segment, and BigQuery


Refer to the Looker Blocks Directory for more details on Segment Looker Blocks or request a Looker demo and trial. To get started with Segment and BigQuery, visit Segment and create an account today.

How Data Actions make our Jobs Easier


Data Actions


As I’ve previously written, the ability to analyze our Salesforce.com data with Looker was an absolute game-changer for our Sales Development department. So much so that our entire workflow now revolves around using Looker to track our KPIs and to inform, and then refine, our business and measurement processes.

In addition to all the metrics tracking and ad-hoc querying we do each day, we’ve also come to rely on Looker for identifying and correcting data errors. Now, with Looker Data Actions, our process for correcting these errors has become even more streamlined.

Here’s one quick example:

To ensure that leads are routed to the proper Salesperson, it’s critical that the Number of Employees field is entered correctly in Salesforce. Our team has a Looker report that returns Lead records that are missing this information. Before Looker Data Actions, here’s how it looked:

Data Actions


What you see there is custom HTML that links each row to the Lead Record in Salesforce; a sketch of the LookML behind such a link follows below. This meant that to update the data, the SDR would need to click each link, wait for the tab to load, scroll down to the Number of Employees field, fill in the value, save the Lead with the new information, then close the tab and navigate back to Looker to do the next one. While this process technically worked, it required us to jump between tools constantly.
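
For context, this kind of link is typically built with LookML’s html parameter. Here is a hypothetical sketch; the Salesforce instance URL and field names are illustrative, not our actual model.

# Hypothetical sketch: render the Lead ID as a link into Salesforce using
# the html parameter and a Liquid {{ value }} variable.
dimension: lead_id {
  sql: ${TABLE}.lead_id ;;
  html: <a href="https://na1.salesforce.com/{{ value }}" target="_blank">Open Lead {{ value }}</a> ;;
}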

With Looker Data Actions, here’s how the same report looks:

Data Actions


Now the SDR clicks the ellipsis next to the field, and a form is displayed where they can update the relevant fields. The SDR fills in the Number of Employees in the dialog box and submits it. And that’s it.

Data Actions


The Lead Record auto-saves in Salesforce with the new information, without the SDR ever having to open new tabs or leave their Looker report. Much more streamlined, much more efficient. Here’s the final table, after about one minute of updates with Looker Actions:

Data Actions


While this particular example may not seem monumental, improving enough processes this way saves individuals time each day and saves the team hours each week. Imagine each contributor on our team saving those five minutes, hundreds of times each month. We’re effectively increasing our FTE count, entirely through efficiencies gained in Looker. Additionally, this allows our SDRs to spend more time following up with their leads, which means they can book more meetings. More meetings at the top of the funnel means more deals at the bottom.

This is just the beginning of our exploration with Data Actions - we can’t wait to implement more Actions using other applications in our tech stack like SendGrid, Slack, and countless others.


Curious to learn more about Looker Data Actions?

Request a Demo


My Bets for 2017


Lloyd Tabb


2016 was an amazing year for data and, incidentally, also for Looker. I’m excited to see what 2017 has in store. I’ve been known to place the occasional wager, so, based on everything we saw in 2016, here are my bets for 2017…

  1. Moore’s Law holds true for databases
  2. SQL will have another extraordinary year
  3. The data lake will find purpose

Moore’s Law holds true for databases.

Per Moore’s law, CPUs are always getting faster and cheaper. Of late, databases have been following the same pattern.

In 2013, Amazon changed the game when it introduced Redshift, a massively parallel processing database that allowed companies to store and analyze all their data for a reasonable price. Since then, however, companies that saw products like Redshift as datastores with effectively limitless capacity have hit a wall. They have hundreds of terabytes or even petabytes of data and are stuck between paying more for the speed they’d become accustomed to and waiting five minutes for a query to return.

Enter (or re-enter) Moore’s law. Redshift has become the industry standard for cloud MPP databases, and we don’t see that changing anytime soon. That said, my prediction for 2017 is that on-demand MPP databases like Google BigQuery and Snowflake will see a huge uptick in popularity. On-demand databases charge pennies for storage, allowing companies to store data without worrying about cost. When users want to run queries or pull data, the database spins up the hardware it needs and gets the job done in seconds. These databases are fast and scalable, and we expect to see a lot of companies using them in 2017.

SQL will have another extraordinary year.

SQL has been around for decades, but from the late ’90s to the mid-2000s it went out of style as people explored NoSQL and Hadoop alternatives. SQL, however, has come back with a vengeance. The renaissance of SQL has been beautiful to behold, and I don’t think it’s anywhere near its peak yet.

The innovations are blowing everyone’s minds. With BigQuery, Google has created a product that is essentially infinitely scalable (the original goal of Hadoop) AND practical for analytics (the original goal of relational databases).

Additionally, Google recently announced that the new version, BigQuery Standard SQL, is fully ANSI-compliant. Prior to this release, BigQuery’s Legacy SQL was peculiar and so presented a steep learning curve. BigQuery’s implementation of Standard SQL is amazing, with really advanced features like arrays, structs, and user-defined functions that can be written in both SQL and JavaScript.
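
For a taste of what Standard SQL makes possible, here is a small sketch using the google-cloud-bigquery Python client to run a query built on an inline ARRAY and UNNEST. It assumes a Google Cloud project and credentials are already configured in your environment.

# Run a Standard SQL query that unnests an inline array and aggregates it.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT fruit, COUNT(*) AS n
    FROM UNNEST(['apple', 'pear', 'apple']) AS fruit
    GROUP BY fruit
    ORDER BY n DESC
"""

# client.query() starts the job; result() blocks until rows are available.
for row in client.query(query).result():
    print(row.fruit, row.n)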

SQL engines for Hadoop have continued to gain traction. Products like SparkSQL and Presto are popping up in enterprises and as cloud services because they allow companies to leverage their existing Hadoop clusters and cloud storage for speedy analytics. What’s not to love?

To top it all off, companies like Snowflake, and now Amazon with Athena, are building giant SQL data engines that query directly against S3 buckets, a source that was previously accessible only through command-line tools.


2016 was the best year SQL has ever had.
I’m betting 2017 will be even better.


My final bet for 2017 is that the data lake will finally be useful.

Companies have been collecting data for a while, so the data lake is well-stocked with fish. But the people who needed data most couldn’t generally find the right fish.

I support the notion of a data lake: dumping all your raw data into one data warehouse. But it doesn’t work if you don’t have a way to make that data cohesive when you query it. There have been great innovations by companies like Segment, Fivetran, and Stitch, which make moving data into the lake easier. Modeling data is the final step that brings it all together and helps some of the best companies in the world see their business through data.

Companies like Docker, Amazon Prime Now, and BuzzFeed are using all their data to create comprehensive views of their customers and their businesses. When these final steps are added, the data lake can finally be a powerful way to get all your data into the hands of every decision-maker and make companies more successful.

So there you have it… my predictions for 2017. Let’s revisit this in December of 2017 and see how things shook out. Can’t wait till then? Make Looker your 2017 bet.


Request a Demo


Operationalize Your Data with Looker Data Actions and Segment


Leveraging Segment Sources in your centralized data warehouse provides a complete view of each customer. Combining customer event data from mobile and web with data from cloud sources like Salesforce, Zendesk, and Sendgrid gives you actionable insights for sales, customer success, product, and other areas of your business. Using Looker and Looker Blocks on top of the data Segment collects, you can quickly build a centralized data model in Looker with visualization and exploration capabilities, powering everyone’s decision-making with data.

Looker recently launched Data Actions, which is yet another way you can interact with Segment data on the Looker platform. This feature lets you use your external tools to take action without ever leaving Looker. Segment customers can take advantage of Data Actions in the Sources they’re already using today, such as updating a record in Salesforce, triggering an email in SendGrid, or assigning a ticket in Zendesk – all directly from Looker.

Let’s walk through a real-life example, where we’ll tackle modeling and joining data from Segment’s Salesforce Source with Segment customer event data in Looker, and then create a Data Action to trigger an email in SendGrid directly from the Look we’ve created.

Modeling Segment Sources in Looker

Looker’s LookML modeling layer allows you to model data and expose it to your organization without having to move the data from its source. Looker supports all Segment Warehouses endpoints: Postgres, Amazon Redshift, and, most recently, Google BigQuery. Let’s assume we’ve synced Salesforce as a Segment Source in addition to collecting customer event data with Segment’s browser library. We’ll also consider pageviews as an indicator of engagement on our application. We can join our pages table to our Salesforce accounts table by creating the following explore in Looker. We’ll also bring in some contact information to use in the SendGrid Action later.

explore: pages {
  join: groups {
    type: left_outer
    sql_on: ${pages.user_id} = ${groups.user_id} ;;
    relationship: many_to_one
  }

  join: accounts {
    type: left_outer
    sql_on: ${accounts.external_id} = ${groups.group_id} ;;
    relationship: many_to_one
  }

  # Bring in Salesforce contacts through the account_contact_role join table
  # so the SendGrid Action later has an email address to work with.
  join: account_contact_role {
    type: left_outer
    sql_on: ${accounts.id} = ${account_contact_role.account_id} ;;
    relationship: one_to_many
  }

  join: contact {
    type: left_outer
    sql_on: ${contact.id} = ${account_contact_role.contact_id} ;;
    relationship: one_to_many
  }
}

By joining our pageview data to our Salesforce data, we can get engagement insights at the account level that could indicate propensity to churn. For pages, we’ll define measures in our LookML pages view file to help us calculate week over week change in pageview count:

view: pages {
  sql_table_name: segment.pages ;;

  measure: count_last_week {
    type: count
    filters: {
      field: received_date
      value: "last week"
    }
  }

  measure: count_2_weeks_ago {
    type: count
    filters: {
      field: received_date
      value: "2 weeks ago for 2 weeks"
    }
  }

  measure: week_over_week_change {
    type: number
    # The ::float cast and NULLIF guard against integer division and
    # divide-by-zero; ::float is Redshift/Postgres syntax (BigQuery would
    # use CAST(... AS FLOAT64)).
    sql: (${count_last_week} - ${count_2_weeks_ago})::float / NULLIF(${count_2_weeks_ago}, 0) ;;
  }
}

Assuming we’ve already defined a dimension for account names in the accounts view file, we’ll select Account Name, Pages Count Last Week, and Pages Week Over Week Change in Looker’s Explore section. (We also added some conditional formatting to Week Over Week Change to easily identify at-risk accounts!)

Data Actions

We now have a list of accounts that have shown a recent decrease in activity. We’d point our account management team to this Look so they can take action to better engage those accounts.

Closing the Loop: Adding Data Actions

Our account management team could take this list of at-risk customers and take action in external applications like Salesforce or email. But wouldn’t it be easier if they could take action directly from Looker? Let’s add a Data Action for email in the view file for our contacts table. Using the Looker documentation for Data Actions and SendGrid’s Web API docs, we can construct the action in LookML to send an email to the Contact displayed in Looker; a sketch follows below.
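
Here is a hedged sketch of what that action might look like in the contact view file. The label, endpoint URL, and form fields are illustrative rather than a drop-in configuration; SendGrid’s v3 mail endpoint expects a particular JSON payload, so in practice you might route the action through a small relay service that reshapes the request.

# Hypothetical Data Action on the contact's email dimension. Clicking the
# field's menu in a Look surfaces a form built from the form_param entries,
# and submitting POSTs the values to the url below.
view: contact {
  dimension: email {
    sql: ${TABLE}.email ;;

    action: {
      label: "Send Check-in Email"
      url: "https://example.com/sendgrid-relay"

      param: {
        name: "to_address"
        value: "{{ value }}"
      }

      form_param: {
        name: "subject"
        type: string
        default: "Checking in"
      }

      form_param: {
        name: "body"
        type: textarea
        default: "We noticed a drop in your usage last week. How can we help?"
      }
    }
  }
}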

Suppose we want to reach out to Bubble Guru, an account that showed a 74% decrease in usage. We can filter Account Name on Bubble Guru and add in Contact Email to get a list of emails associated with the account. Now, when we click the menu next to each email, we see the option to send a Check-in Email.

Data Actions

When we click this option, we’ll see a modal window where we can compose an email directly from Looker using SendGrid.

Data Actions

Solely using Looker and Segment, we’ve been able to attribute usage data to our accounts, model that data for actionable insights, and take action directly from Looker. We talk to companies all the time about eliminating “data breadlines” and about how we can help companies break down data silos and empower business users to get insights from their data. Data Actions is the next step in building a data-driven culture: we can enable teams across an organization to simplify their workflows in Looker with data from Segment, no context switching required.

If you’re not already exploring your data with Looker, we’d love to hear from you!
