
How To Use Predictive Analytics and Forecasting To Save Your Company Money


Predictive analytics and forecasting can save your company considerable amounts of money, especially when it comes to sales forecasting. For B2B companies, accurate sales forecasts can be the competitive advantage that keeps the business running smoothly, while inaccurate ones can be quite costly. By using forecasting analytics that leverage the latest technology and are grounded in data, businesses can significantly reduce costs.

Using Forecasting Analytics With Your Data

To help you get started, here are the steps you should take when setting up a predictive analytics and forecasting model for your sales pipeline:

  1. Align with your business operations team on the outcome you’d like to optimize for; in this case, closing deals.
  2. Define the datasets to use as inputs to the model.
  3. Centralize all the relevant data in a single place.
  4. Define a training data set that includes ample successful and unsuccessful outcomes.
  5. Define a small testing data set to evaluate the accuracy of the model you create. Make sure the testing set is different from the training set.
  6. Apply machine learning, statistical, clustering, and other predictive analytics methods.
  7. Run the model against the testing data set and compare its predictions to actual results to judge the accuracy of your model.
  8. Review results with major stakeholders and incorporate the forecasting model into business operations.
  9. Continue to train and monitor the model.
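
As a concrete illustration of steps 4 through 7, here is a minimal sketch in Python using scikit-learn. The file and column names are hypothetical stand-ins for your own pipeline data:

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical export of historical opportunities; column names are assumptions.
deals = pd.read_csv("opportunities.csv")
features = deals[["deal_size", "days_in_pipeline", "touches", "industry_code"]]
outcome = deals["closed_won"]  # 1 = deal closed, 0 = deal lost

# Steps 4-5: separate training data from a held-out testing set.
X_train, X_test, y_train, y_test = train_test_split(
    features, outcome, test_size=0.2, random_state=42
)

# Step 6: apply a predictive method (gradient boosting, as one example).
model = GradientBoostingClassifier().fit(X_train, y_train)

# Step 7: judge accuracy against actual outcomes in the testing set.
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))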


What Opportunities Can Predictive Analytics and Forecasting Find?

If you follow the steps above, you should be able to confidently create sales forecasts based on data and compare them with other sales forecasting techniques. By leveraging data and predictive analytics methods, you can dramatically increase the accuracy of sales forecasts and even approximate how accurate the model is. And because it’s all based on data, you can feel confident sharing your forecasts and demonstrating the process you used to get there.

Predictive analytics and forecasting is a big step forward for many companies, but for some, it only scratches the surface. Similar strategies can be utilized to:

  • Help your Sales team identify and focus on the best opportunities
  • Highlight the optimal times for an Account Executive to engage with a prospect
  • Inform Sales which content is best suited for specific prospects

With so much potential to save money and create new opportunities, it’s clear why predictive analytics is gaining more and more traction in the market today.


Looker Achieves SOC 2 Type 2 Compliance


Looker remains committed to continually improving its security and compliance practice. In September of 2018, our Service Organization Control 2 Type 2 Report for the Looker Cloud Hosted Data Platform became available for customers and prospects. The SOC 2 Type 2 assessment was conducted by independent auditors, The Cadence Group, who specialize in compliance across multiple industries.

The Type 2 report addresses service organization security controls that relate to operations and compliance, as outlined by the AICPA’s Trust Services criteria. The report includes management’s description of Looker’s trust services and controls, as well as Cadence’s opinion of the suitability of Looker’s system design and the operating effectiveness of the controls, in relation to availability, security, and confidentiality.

While our SOC 2 Type 1 report, released in February of 2018, was a "test of design," showing that specific security controls were in place at a specific point in time, our Type 2 report is a much more rigorous "test of operating effectiveness" of that design, evaluated over a period of six months. A company that has achieved SOC 2 Type 2 certification has proven that its system is designed to keep its clients’ sensitive data secure and that the relied-upon controls are operating effectively as designed.

By implementing the data security controls necessary to achieve SOC 2 Type 2 certification, Looker continues to build on the trust that customers and prospects have in the Looker Data Platform. To provide further reassurance that the Looker platform is secure and highly available and that customer data remains confidential, we will renew our SOC 2 Type 2 certification every six months, beginning in spring 2019.

In addition to our ongoing SOC 2 efforts, Looker's compliance team is continually pursuing other opportunities to make the hosted Looker platform secure and trustworthy, including pursuing and achieving ISO 27001 compliance, self-assessing against the Cloud Security Alliance's cloud security assurance program, demonstrating that Looker handles customer data in accordance with the HIPAA data security standards and the PCI DSS (Payment Card Industry Data Security Standard), and ensuring that the Looker platform is aligned with GDPR data privacy obligations.

6 Reasons To Love Looker 6


This week, we are proud to announce the launch of Looker 6.

Looker 6 takes the analytics platform to a new level with a robust set of new features, greater extensibility than ever before, and an application-focused approach designed to provide users with more value, faster.

Whether you’re a Looker customer or considering the benefits of a self-service analytics platform for your organization, here’s a brief look at some of the reasons to be excited about Looker 6.

1. Advanced Analyst and Model Developer Tools.

Analysts and developers are going to love how the Looker 6 platform builds upon the core services with a suite of open, web-native features designed to make building impactful data applications easier. And why should software developers have all the fun? Looker 6 provides a model development environment that includes branching, folders, version control, code sharing, and code validation capabilities.

2. Customizability and Extensibility.

Looker 6 includes version 3.1 of the Looker API and a number of extensibility features that allow analysts to better meet the growing demands of data users. With Looker 6, embedded analytics can be customized to better match the look and feel users expect. Custom visualizations are now more sophisticated, helping data teams build cleaner, more intuitive dashboards so users can find valuable new insights, faster. And thanks to our expanded network of technology partners, Looker 6 can be seamlessly integrated with complementary technologies such as AI/ML tools.

3. Looker Applications.

Applications offer easy plug-and-play analytics for specific use cases, making it easy to tap into the hidden value of your data. Structured enough to help users perform day-to-day tasks, applications sit within the larger Looker platform, supporting strategic cross-functional analytic needs. Announced alongside Looker 6 are public beta programs for two applications, focused on digital marketing and web analytics.

4. Content Discovery and Easier Data Exploration.

Along with improved formatting options and greater flexibility for report scheduling, Looker 6 also includes custom fields, giving all types of users more powerful self-service exploration capabilities for lightweight ad-hoc data analysis. And you can help users explore even sensitive data in Looker; the Looker 6 platform is SOC 2 Type 2 compliant.

5. Advanced Administrative Controls.

If you’re the admin trying to control costs or improve governance, Looker 6 has a new way to understand how, when, and by whom your instance is being used. Because this data is pre-modeled, you can use it the same way you use any data in Looker -- not just for cost control or governance, but to encourage a data-driven approach to business among your users based on their habits.

6. Encouraged Focus on Analytics Value.

With the launch of Looker 6, we are continuing to improve upon our already outstanding customer satisfaction scores. With expanded support, services, and other assistance for our customers, owning and using Looker is now easier than ever. And as the ecosystem of Looker consulting partners continues to grow, we’re ready to help you with every aspect of Looker 6 deployment, management, and customization.

We’re just getting warmed up…

Looker 6 doesn’t stop with these six feature sets. In 2019, you’ll see local language versions of Looker 6, fully integrated workflows connected to 3rd party applications and platforms, more ways to take action on data from within Looker, and many more tools built on the Looker platform. Get all the details of what’s new with Looker 6 or see it for yourself by requesting a demo.

The Platform for Data


For decades organizations have been using data to better understand trends or events that are happening in and around their business. Today, business systems are generating far more actionable data than ever before – data that becomes even more valuable when intelligently integrated together. But the explosion of SaaS applications has made things far more complex. Organizations that previously had under 100 applications often have 10x that number today. Having a SaaS solution for every problem is great, but each of those solutions brings more data.

Unfortunately, the analytic data toolchain hasn’t kept up. It’s broken. The mess of point “self-service” tools that were designed to operate on narrow sets of siloed data have been cobbled together to create Frankenstacks – technology science projects that are painful to operate and nearly impossible to maintain. And every new data project often means recreating those complicated stacks from scratch. The business intelligence model of “self-service, but only for a single piece of the puzzle” has failed us.

It’s time to rethink how data projects get done. On one hand, we have a wealth of valuable business data being generated by every application and system in our companies. On the other hand, we have no common way to create value out of that data. We’re constantly starting from scratch. What’s required is a common surface for that data that can be quickly shaped into more specific data applications that meet our core data needs. For example, the Revenue team needs a data application to help them understand price optimization. The Marketing team needs a data application to help them understand attribution and focus on ad spend. The IT team needs data applications to help them understand the myriad of event data that their systems generate.

Creating a common surface on which to empower these organizations is the key. To modernize the data stack – and greatly simplify the data supply chain to build value out of data in our organizations more quickly – a new platform for data is required.

The Platform for Data.

The idea of a platform is proven – build the underlying infrastructure pieces and make it extensible with application development components to allow the creation of "end applications" to solve business problems more quickly. It’s time to bring this idea to the data world.

Our industry has historically moved away from platforms in favor of one-size-fits-all BI dashboard tools. We now have a mess of point tools that make it harder to solve end-user business problems. Generic applications for data problems are not nearly as valuable as a common surface for data that can be formed and melded together to solve dozens of problems and is more closely aligned to how different business teams work.

The explosion in data volume and complexity has made a Platform for Data even more critical.

The more data you have, the more valuable it gets if it can be integrated across all business operations. Now that the movement of data to the cloud is the norm rather than the exception, we have untold amounts of data sitting in warehouses and lakes waiting to be piped through our daily workflows. BI grew up around the idea of small data extractions, but now we have new fast databases that give us the opportunity to solve much bigger, broader problems.

We’re also seeing the modern workforce hungry for data in areas traditionally ignored by point tools. The movement away from generic dashboards is well underway. In the past sales data was for the Sales team, systems data was for the Operations team, and ad data was for the Marketing team. Now, we see that bringing the sales data together with marketing data has huge value... and that's just the start. Companies want a more complete and integrated modern solution that goes beyond BI and includes specific applications like Customer Success, Marketing Attribution and Event Monitoring. It’s a way to work in data and operationalize your business around accurate, current data.

So, Looker started to build this platform. We’re organizing it around three big ideas:

Core Services: If you want to retain flexibility, you have to assume you’ll need to be looking at all of the raw source data, not just subsets for a specific application. Then you need to bring together and rationalize the hodgepodge of data products that are needed to make a solution: data preparation, data integration, embellishment, governance, security, caching, visualization, access, and so on. Take what currently requires five different products and build it into a single layer.

Extensibility: Integration is the key – how do X and Y come together to mean Z? Take those core services and put them together in an open, web-native architecture. Then build development environments around that integration, develop a language, and make every part extensible and accessible to other systems. The goal needs to be 110% API coverage – meaning all of the core service functionality, and more, must exist through these APIs.

Applications: In the world of SaaS you can't just build the platform, you have to build the first big applications. For Looker our first big application was BI, but that quickly evolved to address more specific functions, like Marketing, Event Analytics or Customer Success. To achieve true adoption of the platform, we also knew we had to embrace the people who will build third party applications on the platform and give them tools they want to use to deliver business value to their end users. In the world of SaaS, time-to-value is everything, so providing the applications on top is critical.

Our vision is simple: to build a platform for data that easily integrates all of a business's data and then allows it to be melded to specific work processes in a way where users can do more with it and solve higher-value problems. It gives today’s workforce what they want: the ability to work in data rather than just viewing it. At JOIN today we introduced Looker 6 – the next evolution of the Platform for Data. When you build a platform, you don't even know what people are going to build on it later. Five years from now, we’ll be surprised at what people have created on the platform. Today is the beginning of boundless possibilities.

Powering the Greater Good with Better Data


Looker has been embedded in its community since the very beginning. That’s actually why we’re headquartered in Santa Cruz, CA. Our founder and CTO, Lloyd Tabb, was a long-time Santa Cruz resident who’d raised his family there and taught middle school in the community. So when he was starting his own company, he didn’t want to start it “over the hill” in Silicon Valley, but right in his own community.

And as Looker has expanded to offices in New York, Dublin, London, San Francisco and beyond, Lookers have found ways to give back to the communities they live and work in. From packing backpacks for school children who are living in homeless shelters in NY to cleaning up the river in Santa Cruz and learning CPR in Dublin, Lookers are passionate about contributing back to their communities.

But as the company has grown, we’ve been thinking about how we can give back in a more formal way. And we’ve asked ourselves: what can we uniquely give back? Not surprisingly, what we realized is that Looker is uniquely qualified to provide reliable, self-service access to data. And while not every nonprofit needs that, plenty of charitable organizations are struggling with the same data chaos as their for-profit peers.

That’s why we’re so excited to announce Looker for Good, our way of giving back to our communities and the groups that enrich them.

Looker Pledges 1%

The first component of Looker for Good is that Looker is joining Pledge 1%. We’re pledging 1% of our product to charity, as well as the employee time needed to help those charities succeed with Looker.

We’re thrilled to announce that the first recipient, Accion, is already standing up their Looker deployment and getting value from it. Accion is an amazing organization that’s been providing microfinance loans to small businesses all across America since 1991, and we’re so excited to help them get more value from their data with Looker.

And we are already looking for the next nonprofits to give Looker to, so if you work for a nonprofit or know of one that could benefit from free Looker, we’d love for you to nominate them here.

Nonprofit Discounts

The next component of Looker for Good is focused on making Looker accessible to all nonprofits. We’ve announced significant discounts off Looker’s list price for every nonprofit, whether they need a small deployment or a huge one. You can see all the details here.

Training Future Analysts

The final part of Looker for Good is that we’re offering free Looker deployments to educators who are interested in using Looker to train the next generation of analysts. If you’re an educator who’s interested, please don’t hesitate to reach out so we can discuss your use case.

We’re so proud of the work that Lookers are already doing to care for their communities. And we can’t wait to build out Looker for Good as a new channel for using our unique abilities to power the greater good with better data.

Engineers Have It - Now Analysts Can Too


Software engineers know what good code looks like. It’s readable, organized, modular, version-controlled, well-tested, and it doesn’t repeat itself. It leverages work others have already done and it’s been reviewed before it ships, ensuring that it makes sense to more than just its author.

Developers realized that if they were going to be building immensely complex systems in collaboration with tens or hundreds (or thousands) of colleagues, all of these principles were essential to moving their craft forward.

Unfortunately, analytics has been slow to adopt similar principles. Data scientists have certainly moved things in the right direction, using R and Python to write real code that conforms to many of these principles. But most data work is still done in SQL and/or Excel, using manual, unaudited processes that waste time, impede collaboration, and lead to mistakes.

Looker has been focused on correcting that problem since its very beginning. As I’ve written, evolving SQL into a much more flexible, reusable abstraction - LookML - was the key first step that Looker took in moving analysts toward a better way of working.

But it was only the first step.

Looker 6 takes us further than ever in bringing good coding practices to analysts of all stripes.

Version control has been included in Looker since the beginning, but we’re always improving it. In recent releases, we’ve added the ability to keep multiple branches of work in Git and to specify pull request and code review workflows for developers. With Looker 6, we’re adding the ability to organize increasingly complex projects by putting files into folders within Looker’s IDE.

To make sure that LookML developers can leverage work that others have already done - rather than repeating it - Looker 6 also gives you more options than ever for referring to others’ projects and importing their code seamlessly. In the same way that software developers point to others’ libraries and then use those functions in their own code, LookML developers can leverage analytic patterns and data models that others have written without having to copy and paste the code (and then worry that they’ll miss any future upgrades to that code).

No need to reinvent the wheel

With the directory of Looker Blocks™ constantly growing, there is tons of publicly available code that developers can import with a single line of code. And even if you’re just looking to manage internal projects in a simpler way, importing projects from where they live, rather than repeating the code in multiple places, is a great way to maintain a hub-and-spoke analytic organization.

Looker 6 also gives LookML developers more control than ever over how the fields they create are used and who can view them. This new field-level access control is critically important to writing good code, because it prevents developers from having to repeat code to create similar models with differing levels of access.

Automated testing for all

Finally, Looker 6 brings a fundamental concept of software development to analytic model development: automated testing. As code gets increasingly complex, it becomes impossible to predict all of the downstream effects that a change might have. That’s why good code has comprehensive test coverage that knows what the software is supposed to do and warns the developer if something breaks.

Looker 6 brings this same concept to data, allowing you to specify known values for your data and test to make sure that your data arrives at that correct value as you integrate new transformations into your code. That way, if a change you make suddenly delivers an unexpected value for last year’s revenue, or a first transaction date decades before your business started, you can be alerted immediately, before you affect others.
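
Looker expresses these tests declaratively in LookML. Purely as an illustration of the underlying pattern, here is a minimal pytest-style sketch in Python, where the connection, table, and the pinned "known value" are all hypothetical:

import sqlite3  # stand-in for your warehouse connection

def total_revenue(conn, year: int) -> float:
    """The transformation under test: revenue summed for one year."""
    row = conn.execute(
        "SELECT SUM(amount) FROM orders WHERE strftime('%Y', created_at) = ?",
        (str(year),),
    ).fetchone()
    return row[0] or 0.0

def test_last_years_revenue_matches_known_value():
    conn = sqlite3.connect("warehouse.db")  # hypothetical local copy
    # A known value pinned down in advance; the test fails (and alerts the
    # developer) if a change to the transformations breaks it.
    assert abs(total_revenue(conn, 2017) - 1_250_000.0) < 1.0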

In all, software engineers are still the leaders in using good tooling to write good code. But Looker 6 brings analysts further than ever before. And we’ve got a lot more planned to give analysts all the tools they need to build and maintain the complex data systems their organizations need to be successful.

How to Query Amazon Athena Geospatial Data


Amazon’s Athena database supports a wide array of geospatial functions that allow for building complex analyses with any data containing geographies or individual locations. With Looker, you can query data directly from Athena and leverage all of its geospatial functionality to give users the ability to work with massive geospatial data sets.

A particularly powerful geospatial data set that is available to the public is OpenStreetMap data. Since this data is available in a public S3 bucket, you can very easily pull it into Athena and start querying it. While it’s easy to get up and running with basic SELECT queries right away, the complex structure of the dataset means that writing manual SQL queries every time you have a new question is going to be a challenge.

This is where combining Athena’s geospatial functionality with Looker’s LookML modeling layer becomes incredibly powerful. To demonstrate some of this functionality, I thought it would be interesting to explore the natural environment of Southern California by county.

Geospatial Data Example with Amazon Athena

OpenStreetMap data contains a tag for “Nature” that describes mountains, beaches, capes, trees, etc. around the world. Here’s an example:

| Location | Nature Type | Name | Tags |
| --- | --- | --- | --- |
| -0.062714, -78.8328985 | beach | Playa del Río Saloya | {natural=beach, surface=sand, name=Playa del Río Saloya} |
| -0.0647157, -77.9743917 | wetland | río volteado | {natural=wetland, name=río volteado} |
| -0.0657521, -80.1585446 | beach | La Boca de tabuga | {natural=beach, name=La Boca de tabuga} |
| -0.0665063, -78.3998779 | peak | Gualagüincha | {is_in:country_code=EC, natural=peak, gns_uni=-1372397, gns_classification=MT, name=Gualagüincha, source=GNS, is_in:state=Provincia de Pichincha, is_in:country=Ecuador} |


In order to compare which county has the most “Nature”, I loaded the boundary shapes of each SoCal county into Athena. The boundary shape field can be read as type ‘Polygon’ in Athena and represents a set of points.


| Name | boundaryshape |
| --- | --- |
| San Francisco | 00 00 00 00 03 05 00 00 00 45 d6 1a 4a ed a0 5e c0 23 a0 |
| Madera | 00 00 00 00 03 05 00 00 00 84 d8 99 42 e7 22 5e c0 c0 79 |
| San Mateo | 00 00 00 00 03 05 00 00 00 56 0c 57 07 40 a1 5e c0 65 69 |


Identifying Nature

To limit the OpenStreetMap data to just its natural elements and improve query performance, we can create a Derived Table in Looker that shrinks the data set considerably.

view: nature {
    derived_table: {
        sql:
            SELECT
                tags,
                id,
                lat,
                lon
            FROM planet
            WHERE type = 'node'
                AND tags['natural'] IS NOT NULL
        ;;
    }
}

Joining Geographies

To figure out which of our natural points fall within each county, we can join the counties table with the nature table using Athena’s ST_CONTAINS function. ST_CONTAINS returns TRUE when a specific nature point falls within the boundaries of a county’s polygon:

explore: nature {
    join: counties {
        relationship: many_to_one
        sql_on: st_contains(${counties.boundaryshape}, ST_POINT(${nature.lon}, ${nature.lat})) ;;
    }
}

Unnesting Data

Since our nature data is highly nested, we can use Athena’s un-nesting functions to parse out the details about each point of interest and codify that logic in LookML.

dimension: type {
    type: string
    sql: ${TABLE}.tags['natural'] ;;
    description: "Nature Type as Defined by OpenStreetMap"
}

Diving into Nature

From here, we can start to analyze the geographic features of Southern California and compare the number of beaches by county. It turns out that Looker’s headquarters in Santa Cruz County has the second-most beaches:


San Bernardino County, not surprisingly, contains the most peaks over 1,000 meters in elevation.


Drilling into this data, I found that the highest peak in San Bernardino is San Gorgonio Mountain at 3,502 meters.


| Nature Name | Nature Elevation (m) |
| --- | --- |
| San Gorgonio Mountain | 3,502 |
| Jepson Peak | 3,416 |
| Anderson Peak | 3,304 |
| Charlton Peak | 3,291 |
| Shields Peak | 3,261 |


Creating a More Complex Geospatial Data Analysis

Say you’re planning a hiking vacation in California. An important decision for your trip will be picking a hotel close to the highest peaks you intend to climb. Luckily, we can pull hotel information from OpenStreetMap by filtering our data on WHERE tags['tourism'] = 'hotel'.


| ID | Tags | Lon | Lat |
| --- | --- | --- | --- |
| 595970835 | {addr:housenumber=715, addr:country=US, name=The Beach Cottages, tourism=hotel, source=SanGIS Addresses Public Domain (http://www.sangis.org/), addr:street=Thomas Avenue, addr:postcode=92109, addr:city=San Diego} | -117.255362 | 32.79297 |
| 595978669 | {is_in:country_code=US, addr:housenumber=1558, is_in:state_code=CA, addr:country=US, name=Sheraton Carlsbad Resort and Spa, tourism=hotel, source=SanGIS Addresses Public Domain (http://www.sangis.org/), addr:street=East Balboa Court, is_in:state=California, addr:postcode=92008, is_in:country=United States of America, addr:city=Carlsbad} | -117.311829 | 33.134073 |
| 596004616 | {addr:housenumber=4767, addr:country=US, name=Capri by the Sea by All Seasons Resort Lodging, tourism=hotel, source=SanGIS Addresses Public Domain (http://www.sangis.org/), addr:street=Ocean Boulevard} | -117.2583404 | 32.8004225 |
| 596022506 | {rooms=23, internet_access=wlan, addr:state=CA, addr:country=US, internet_access:fee=no, tourism=hotel, stars=3, source=SanGIS Addresses Public Domain (http://www.sangis.org/), addr:postcode=92101, addr:city=San Diego, addr:housenumber=505, smoking=no, name=Found Hotel San Diego, addr:street=West Grape Street} | -117.16751 | 32.725654 |


In Athena, I used the ST_BUFFER() function to find the natural elements near each hotel in the data set.

explore: hotel {
    join: nature {
        relationship: many_to_one
        sql_on: st_contains(ST_BUFFER(ST_POINT(${hotel.lon},${hotel.lat}), .25),
                            ST_POINT(${nature.lon}, ${nature.lat})) ;;
    }
}

Using LookML, you can create a metric that counts only the natural elements that are peaks greater than 1,000 meters high.

measure: high_peak_count {
    type: count
    filters: {
        field: nature_type
        value: "peak"
    }
    filters: {
        field: elevation
        value: ">1000"
    }
    drill_fields: [detail*]
}

Then, we could use Looker’s explore section to build a query that counts the number of high peaks within a short drive from each hotel.

It turns out that one region in California in particular, the Big Sur area, offers 4 hotels that are each within a short drive of over 20 high peaks. The Big Sur Lodge seems like it might be a great place to take a hiking vacation.


We can drill into the count of high peaks and plot the exact locations of each peak.


To pinpoint exactly which mountains are closest to my hotel, we can use the ST_DISTANCE function to calculate the distance, as the crow flies, between two points (in degrees):

dimension: distance_hotel_to_nature {
    type: number
    value_format_name: decimal_2
    sql: ST_DISTANCE(ST_POINT(${nature.lon}, ${nature.lat} ), ST_POINT(${lon}, ${lat})) ;;
}


| Name | Elevation (m) | Distance Hotel to Nature (degrees) |
| --- | --- | --- |
| Manuel Peak | 1,074 | 0.03 |
| Port Summit | 1,050 | 0.04 |
| Pico Blanco | 1,126 | 0.07 |
| Ventana Double Cone | 1,477 | 0.09 |
| Mount Olmstead | 1,094 | 0.09 |
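
Note that because ST_DISTANCE here operates on latitude/longitude coordinates, the result is in degrees rather than a physical unit (one degree of latitude is roughly 111 km). As a small Python sketch, with hypothetical coordinates near Big Sur, such a distance could be converted to kilometers with the haversine formula:

import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two lat/lon points."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlambda = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlambda / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

# Hypothetical points 0.03 degrees of latitude apart near Big Sur:
# the "0.03" in the table above is roughly 3.3 km as the crow flies.
print(haversine_km(36.27, -121.81, 36.30, -121.81))  # ~3.34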


Conclusion

Geospatial datasets like OpenStreetMap can help users answer many important and challenging questions about our environment. Historically, however, geospatial data has been notoriously cumbersome and clunky to work with, and data scientists were required to meticulously transform the data for every new question asked by a non-technical user. Now, with a combination of Amazon Athena’s geospatial functionality and LookML, analysts can create an environment that enables non-technical end users to explore these types of data sets and answer questions on their own.

Building on Looker - Spotlight on Upcoming Digital Marketing Application


The power of the Looker platform still surprises me. Just when I think I’ve seen it all, I’m reminded again of yet another of its capabilities. I am especially impressed with the seemingly endless creative ways users utilize Looker. I’m always learning about unique ways people share data through emails and Slack, or create simple A/B tests with statistical significance.

That said, a data platform is not ideal for conducting your day-to-day business. A data platform is… well, it’s a platform - a basic foundation for building upon. A house needs a solid, dependable foundation, but without the house, you’re left with only a foundation.

The first product we built on the platform was our one-of-a-kind BI tool. Over the years we’ve proudly watched customers build impressive, fully functional applications using Looker’s BI tool. To date, our focus has been on creating the ideal data platform and BI product, leaving the building of additional products to others. But this all changes now.

Introducing: Looker Applications

As part of Looker 6 we announced two applications in beta: one for digital marketing and one for web analytics.

What are applications?

Applications are easy-to-use, enhanced Looker experiences for specific workflows. In addition, they’re plug-and-play, which means that we - the people who specialize in something other than data - are able to get Looker up and running quickly, without the help of IT. For example, a digital marketer can keep a close eye on their ad spend from an application. In a matter of a few hours, they can get this up and running themselves and have fresh, cross-channel analysis at their fingertips at all times.

And if that weren’t enough, they’re gorgeous.

Spotlight: Digital marketing analytics with Looker

Running the digital marketing application within Looker gives you one place to view and analyze paid marketing spend across channels like Google, Facebook and more.


Making life even easier, the application even suggests optimizations that can be made to maximize every dollar spent.


We’re especially excited about these applications because of what they are setting the stage for. To revisit my house analogy: these are spec houses in an emerging neighborhood. They’re built on a strong foundation and for a certain audience (digital marketers in this case) but with a general design to allow people to personalize as needed. And these are just our first applications — paving the way for developers to build more applications for more people on the Looker platform.

Get Started with Looker 6

There is more to Looker 6 than the personal and practical experiences delivered by applications. The product is now more powerful and easier to own. Enhancements to the underlying platform will help customers stay agile as technology and business needs change and expand. And to make Looker easy to own, we added on to our world-class support offerings, created more flexible services packages and announced upcoming availability in more languages.

Learn more about Looker 6 and Applications with our introductory webinar, or request a demo of the Web Analytics or Digital Marketing Application.


Accelerating Machine Learning with Looker + Amazon SageMaker


Amazon and Looker have been strategic partners since shortly after Looker’s inception. Looker hosts its instances in Amazon Web Services (AWS), and over 55% of our clients are using one of the many Amazon-hosted cloud databases such as Redshift, Athena, and various Relational Database Service (RDS) flavors as their primary Looker data sources. With such compatible products and hundreds of joint customers, Looker and AWS are continuously working together to make the end-user experience more streamlined, which makes re:Invent one of the annual highlights for our team and customers. This year is no exception.

At AWS re:Invent 2018, we’re announcing an integration with AWS SageMaker as well as a new, free 60-day trial of Amazon Redshift and Looker. We’re excited about both of these additions because we believe that the combination of Looker and Amazon is truly changing the lives of our joint customers by allowing them to build data-driven cultures and thriving companies.

New Action Hub Integration with SageMaker

Looker has already developed Action Hub integrations that allow Looker to spin Amazon Elastic Compute Cloud (EC2) instances up/down based on a timed or data-triggered schedule. Now, we have a new Action Hub integration with Amazon SageMaker that streamlines the data science workflow by allowing model training and inference to be initiated directly from within the Looker Scheduler.

What does that mean?

That means that, from within Looker, data scientists can:

  • create a query
  • visualize it
  • filter it
  • remove outliers and reshape the data
  • select the Action Hub integration to SageMaker
  • choose an algorithm (such as XGBoost) for model training
  • choose a location in S3 where the model will be saved
  • and then SageMaker will handle the rest!
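
Under the hood, kicking off training like this boils down to a SageMaker training-job request. As a rough sketch of what that looks like via boto3 (the job name, S3 paths, container image URI, and role ARN below are all hypothetical, and this is not necessarily how the Action Hub implements it):

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_training_job(
    TrainingJobName="looker-campaign-response-xgboost",
    AlgorithmSpecification={
        # Built-in XGBoost container image for the region (hypothetical URI)
        "TrainingImage": "811284229777.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest",
        "TrainingInputMode": "File",
    },
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    InputDataConfig=[{
        "ChannelName": "train",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            # Features exported from a Looker query land here (hypothetical bucket)
            "S3Uri": "s3://my-bucket/looker-exports/train/",
            "S3DataDistributionType": "FullyReplicated",
        }},
        "ContentType": "text/csv",
    }],
    # The trained model artifact is saved to the S3 location you chose
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/models/"},
    ResourceConfig={
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 10,
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)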

Since training a model is only the first part of the machine learning (ML) process, we’re also launching a second Action Hub integration that closes the loop on predictions. With this integration you’ll be able to:

  • point to a saved / trained model in S3
  • send over a set of predictive features from a Looker query
  • get a result set dropped back into S3 that can be queried (via Redshift Spectrum or Athena)
  • see model metrics such as Precision, Accuracy, MAE, and AUC (among others) on a Looker dashboard
  • explore and visualize everything in Looker
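
The batch-prediction half maps naturally onto SageMaker's batch transform API. A comparable boto3 sketch, again with hypothetical model and bucket names:

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Score a file of predictive features exported from a Looker query against a
# previously trained model; names below are hypothetical.
sm.create_transform_job(
    TransformJobName="looker-campaign-response-predictions",
    ModelName="campaign-response-xgboost",  # created from the training artifact
    TransformInput={
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/looker-exports/to-score/",
        }},
        "ContentType": "text/csv",
    },
    # Results land back in S3, where Redshift Spectrum or Athena can query them
    TransformOutput={"S3OutputPath": "s3://my-bucket/predictions/"},
    TransformResources={"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
)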

SageMaker supports a number of different machine learning algorithms via its API. Looker will initially provide integration with two, XGBoost and Linear Learner, with others expected to be released on a rolling basis going forward.

How can Looker’s integration with SageMaker benefit you?

Let’s look at a common example. Suppose you’re a marketer who finds it very difficult to anticipate or predict how marketing campaigns will be received.

In this scenario, you can use ML with Looker and SageMaker to create models that attempt to predict which audience members are likely to respond to marketing campaigns based on previous data from similar campaigns. This supervised form of learning is quite effective when the correct features are used.

How does this differ from the traditional data science workflow?

Let’s say you’re a bank looking to offer a term loan to existing customers. You have a set of data from previous campaigns, including things like customer age, income, prior defaults, and number of campaign touches. You’ve blanketed these customers with a term loan offer in the past and are interested to know which types of customers responded positively to the offer. Intuitively, you know that there must be clusters or cohorts of customers with a high likelihood of a positive response (i.e. people with this set of characteristics took a new term loan when offered).

In a traditional data science workflow, you would take all the data, pull it into Python or R, and use that environment to explore the data. You would need to split out a training dataset and a validation dataset, as well as hold out some additional data to test the model. Only then could you begin training, defining each input feature (predictor) and providing a (sometimes bewildering) array of hyperparameters specific to the training algorithm. For that, you would need to be pretty conversant with the programming language, the data itself, and the inner workings of the machine learning algorithm. And it all might need to be repeated whenever the input data changed.

With Looker, similar exploration can be done by a business user or data analyst (or a savvy data scientist) using a Looker Explore. The resulting query will be reusable (so it can be reapplied whenever new data arrives) and the results can be automatically sent down to SageMaker, creating a new model or augmenting an existing model with newly arrived data. Furthermore, with SageMaker, you don’t need to have powerful hardware or to manually spin up EC2 instances to handle the training workload. When training on a large dataset, you can specify a larger instance size, or even run multiple instances and have SageMaker handle all the distribution for you. If you didn’t just swoon, ask the data scientist next to you how cool that is.

After you have a well-trained model, predictions can be performed in real time or using a batch transform job. Whenever new data arrives, you can refine the model with further training. With the new predictions, you’re now helping to reduce overall marketing costs as well as ensuring that targeted campaigns are reaching the desired customers.

Not using Looker or Redshift yet? We’ve got a new joint trial for you!

Redshift currently offers a 60-day free trial to provide a first-hand experience before you commit. The newly announced Looker Redshift Trial Experience will take it a step further by allowing users to seamlessly test out an entire modern data stack from data warehouse to analytics to dashboard to action.

To help you get up and running even faster, we have a suite of Looker Blocks, pre-built templates of code customized to model data for specific use cases and tools, optimized for AWS users. We co-authored some of these Looker Blocks with AWS to help customers get the most out of their Redshift usage by making it as simple as possible to monitor AWS log data, identify opportunities to improve performance, and isolate levers to help optimize AWS spending. Looker Blocks drive faster time-to-value and have helped joint Redshift customer adoption grow 200 percent over the last two years.

Curious how it works?

You can start the joint free trial here and then…

  • load or stream all data into the Amazon S3 data lake
  • Amazon Redshift Spectrum can then query from high performance disks or directly from Amazon S3 in open data formats
  • Redshift automatically connects with Looker, allowing customers to:
    • Store, manage and process petabytes of data in seconds
    • Access a vast library of advanced analytical functions
    • Implement Looker Blocks to get further faster
    • Control distribution patterns, storage architecture and auto-scale at the push of a button

If you want to conduct geospatial analysis or other transactional workflows, you can also load data directly from S3 via Athena.

Still curious? You can learn more about the combination of AWS and Looker here.

3 Ways to Improve Data Security - Centralize, Govern, Monitor


Data security is vital for every company. Additionally, data privacy has become increasingly mission critical due to GDPR and other global privacy regulations. We see evidence of this everywhere -- from headlines in the press to changes in budgets and priorities. And according to CIO.com: “Data analytics and security will dominate CIO spending in 2018 and 2019.”

As the demand for data privacy and security has increased, so too has the demand for access to the data necessary for continued business innovation. It is the responsibility of organizations today to bridge the gap between data supply and demand in a way that keeps data secure and compliant with privacy laws.

The architecture of the Looker data platform simplifies database security by leveraging world-class database technologies, providing comprehensive data governance, and maintaining a robust audit trail.

The Benefit of a Centralized Database

Cloud databases such as Google BigQuery, Snowflake, and Amazon Redshift have made it easier and more economical to centralize an organization’s data. When data lives in many locations, it reduces an organization’s control over that data. This is why having a centralized database helps to increase security and meet modern compliance standards.

From here, whether it’s through a Looker-hosted or on-premise deployment, Looker’s architecture enables our data platform to query the centralized database directly, without moving or extracting data to workbooks, cubes, .csv files, third-party analytical databases, or desktops. This reduces the risk associated with unauthorized data access and exposure. An additional benefit of this ‘in-database’ design is that real-time data generates the freshest reports and insights.

Database Permissions: Authentication, Access Controls, and Data Governance

Your company has likely made investments in modern user authentication tools. Looker supports two-factor authentication, integrates with LDAP and SSO, and can inherit the database permissions you’ve already established.

Built into Looker’s platform are fine-grained access controls that provide layered levels of data governance:

  1. Model Level - limits which models people have access to, which also controls database connections.
  2. Group Level - limits what content people have access to in Looker.
  3. Role Level - sets exact feature functionality and data an individual has access to in Looker.

Historically, business users worked with or viewed reports built on data extracted outside of the secure database environment. This extracted data has no native access controls, which creates data security and privacy concerns. Looker runs queries against the database itself, and the results are displayed in a web browser. When sharing a report with another employee, the second user will only see the information they have permission to access. If the report is shared with a person outside the organization, your security authorization protocols will grant access through the Looker platform only if the data has been set up as publicly available. This is an example of layers of data governance providing access while protecting your data and customer privacy.

A layered approach to data governance is of particular value to industries with specialized privacy requirements: healthcare and HIPAA, financial institutions and GLBA, credit cardholder data and PCI. More broadly, this includes any company that collects, processes, transfers, or stores EU resident data and therefore requires GDPR compliance.

Database Events: Auditing, Monitoring, and Logging

If you ever need to investigate who has accessed what data, Looker provides a robust audit trail. Administrators can provide transparency to internal and external stakeholders and reveal who has accessed what data and when. The ‘in-database’ architecture means every query and viewed report creates a database event, which Looker logs. Looker has monitoring tools built into the platform, and this unique ‘in-database’ architecture can also enable real-time alerting if a predefined event of interest takes place.

Additional considerations: stricter Service Level Agreements and GDPR Data Protection Agreements between organizations are promising alerts about data compromises in as little as 24 hours. This is because GDPR requires that data breaches be reported to a regulator within 72 hours.

If your organization’s data is floating around on multiple third-party analytics servers, downloaded to thousands of workbooks or .csv files on desktops, can your organization meet its SLA and legal obligations? With Looker, your data remains centralized, and you can instantly search a log of all those who have historically accessed that data to more quickly understand the scope and focus on areas of interest.

Move forward, securely

Demands for access to the growing volumes of data that drive business success aren’t slowing down, and neither are new data privacy regulations.

Looker’s integrations with modern authentication tools, along with layers of data governance, scale at the rate of data user growth. A robust audit trail is an insurance policy if data access questions arise. On top of that, Looker is SOC 2 Type 2 certified, demonstrating our commitment to security.

With Looker, it is in fact possible to bridge the gap between data supply and demand in a way that keeps that data secure and privacy compliant.

How To Build A Customer Content Database For Your Company


Working as an intern at Looker taught me many things, from basic dashboard usage to complex analytic terminology. While most of what I picked up was from hands-on daily tasks, I was struck by what I learned about data by simply listening to the people all around me.

I came to realize that there was a pattern to the questions repeated by fellow Lookers on the marketing, sales, and sales development representative (SDR) teams. “Where can I find content about customers using Looker for marketing analytics?” and “Do we have any shareable content for healthcare analytics?” were among the most common queries. It quickly became clear to me that this type of information is very important to the daily roles of many people across departments.

It occurred to me that the core of all of these questions was one base question: “Where can I find the content I need?”

My mentor, Kelly Payne, and I discussed the apparent need and decided that a solution to many of these questions could be found in the creation of a smaller database, specific to the needs of the marketing, sales and SDR teams.

With over 150 customer use cases, ranging from case studies to customer blogs, we had plenty of information for these teams to leverage. But with the location of these materials largely unknown, many of these pieces were being underutilized. So, I began a search for a solution with two goals in mind:

  1. Gather all the data needed to answer questions regarding customer content location and content type.
  2. Create a repository that is easy to update, integrates with other systems, and can grow with the company.

Building a Useful Database

Knowing that a customer content database would be heavily utilized by the sales, marketing, and SDR teams, I interviewed Lookers from each of these departments to better understand their needs before creating the tool. All three departments needed to find content they could leverage as value-add proof points in their roles. Additionally, each team shared what they thought would make the most sense when tagging and bucketing pieces in a content database.

The result of these conversations became the four categories for the content database:

  • Industry - Defines the various verticals under which different content can fall, including FinTech, Marketing, Healthcare, and AdTech
  • Technology Mentioned - Groups content by mentions of an ETL tool or certain databases such as Snowflake, Amazon Redshift or Google BigQuery.
  • Target Audience - Refers to the personas that content is written for such as CTOs, data analysts, operations, marketing analysts, etc.
  • Business Value - Buckets content by type of value gained by customers such as ROI, time optimization, increased supply chain efficiency, etc.

Using Looker to Categorize Content

A database is useless without trackable data. So, with a complete table holding our compiled customer content, it was time to tag the content. This included creating summary descriptions, categorizing (or ‘tagging’) these descriptions, and recording any valuable, useable customer quotes.

The results were well worth the investment of time. With the categorized raw data alone, it was now possible to narrow down and search for specific pieces of content, whether it’s a piece about how a customer’s marketing team uses Looker, information catered toward a Director/VP-level audience, or both. Even in this unfinished format, I was pleasantly surprised to find that I was able to help a teammate with a specific project. Best of all, with this new database, finding the answers we needed took only minutes instead of the hours it might have taken to sort through numerous customer videos.


Once all of the included data was tagged, the next step was moving the data into Looker. The data was loaded into Google BigQuery, which allows Looker to connect to and manipulate it. In addition, our Salesforce data was joined to the content database, which provides tags for several categories (including Segment and Industry). There were a few minor issues with unnesting the data, but after learning SQL and leveraging LookML, the data was complete and fully usable! All the information was now queryable in Looker, and I was able to create two special dashboards: the Customer Content Analytics Dashboard and the Customer Content Lookup.


Let’s Get Visual

Looker presents the data in dynamic visualizations, providing easy-to-read insights so that anyone from any department can observe content creation and find the content they need by filtering on any category. Best of all, the new database can scale with the company! As more content is created, it can be added with relative ease via a simple form, and long-term trends can be observed as the database grows.


Rolling Out the Database to Other Internal Teams

After sign-off from the stakeholders and my superiors, the new database was ready for release. There are small intricacies of the Customer Content Lookup that may leave users confused. To address this potential issue, I documented how to use the Lookup effectively and sent the guide out on Slack. Additionally, it was embedded on our internal employee site for better visibility and access.

In terms of defining success, there are two measures that can prove whether my project made a difference. The first is a decrease in questions about customer content or where to find it, which would mean I, and many others, spend less time answering those questions. The second is an increase in customer content use across the marketing, sales, and SDR departments. While it is too early to determine the project’s success, it has the potential to become the staple tool for finding content.

The Takeaway: Was It Worth Creating a Content Database?

While tagging content may seem like a tedious task, it became immediately clear that several hours invested can turn into hundreds of hours saved across departments.

This project was well worth the effort, because I learned how Looker can be used across multiple industries and how valuable data has become in this digital age. In other words, what was the most valuable lesson I learned from this experience? Leverage your data, make it more accessible, and become data-driven.

Surprising and Innovative Data Stories of 2018


We love learning how companies use data to improve their processes, better the customer experience, and win in their respective industries. While we often seek out these stories and tips by chatting with our customers and asking them questions, one of our absolute favorite ways to be surprised is to see that a customer has shared their data story on their own.

As this year comes to a close, we looked back at some of the data stories that we didn’t write, but loved reading in 2018. Here’s what they have to say about how data is changing the way they work...

Using Data to Reduce Their Carbon Footprint From Selling Burgers at Five Guys

International burger chain Five Guys goes above and beyond selling burgers and fries. In July, Five Guys shared how they’re using data to transform their culture and reduce their carbon footprint.

“Looker has enabled us to do just that, giving everyone access to real-time, live data and insights. Based on that knowledge, we’ve improved warehouse efficiency, reduced waste and cut queue-times – all underpinning an improved customer experience.”
Fareldia Jefferies, Integration and Warehouse Manager at Five Guys

To learn more, read the full story here.

Changing the Analytics Experience For Business Users at Heroku

With over seven million apps created to date, Heroku’s business users needed a solution to cut through the data chaos and provide a self-service solution for working with data. Since bringing Looker into the fold, users can now ask and answer questions in one sitting.

"There’s always those times when somebody asks, ‘Can you give us a breakdown of how many people are signing up by role, or by ad source?’ Literally, it’s ‘Hold on a second.’ And we just add it – in the middle of the meeting. And we all review it. To me, that’s the holy grail."
Michael Schiff, VP of Business Operations at Heroku

To learn more, read the full story here.

Data Warehousing with a Modern Twist at Adore Me

Adore Me began online as a way to provide custom recommendations and a tailored experience for women’s intimate apparel. In less than ten years, the subscription company is growing and preparing to open 300+ retail outlets. Data is key to their continued growth. The Adore Me team uses the combination of Google BigQuery and Looker to keep their data and algorithms fresh.

“Looker helps with data ingestion for our data science model. When you’re connecting to a database, you need to have a lot of details about what tables do you use, how you extract the data, do you need to add any filters? When pulling it through Looker, it’s already curated and ready to go. You just need to select what you need.”
Diane Streche, BI Developer at Adore Me

To learn more, read the full story here.

Fundraising With Data at Unsplash

Unsplash provides beautiful images and photos at an unbeatable price (free!). When working on their next fundraising round, the Unsplash team needed to answer investor questions quickly and accurately. Without being confident in SQL, Cofounder & CPO, Luke Chesser, relied on Looker to respond to every investor question within their target reply time (without bugging additional members of the team).

“Having quick access to customize the charts in real time not only helped the conversation, but allowed us to answer questions that we could never have prepared for.”
Luke Chesser, Cofounder of Unsplash

To learn more, read the full story here.

Building a Data-Driven Culture at TotallyMoney

UK credit specialists at TotallyMoney give customers control of their data to help them make smart borrowing decisions. To continue doing this at scale, the TotallyMoney team needed to push past data being viewed as an administrative function serving other teams. By partnering with Looker, they built a reporting infrastructure that gave them centralised truths and ultimately broke down the silos of analysts and Excel reporting.

“The data capability at TotallyMoney is entering a stage where it can become even more proactive in driving value for both our customers and colleagues. Decision makers at all levels can self-serve their data and make better informed, faster decisions.”
Jack Mitchell, Product Manager at TotallyMoney

To learn more, read the full story here.

Crushing Gaming Data at King

In the world of mobile gaming, increased data visibility is the only way to best serve the customer, as it results in a quicker, better understanding of each game’s performance. This is especially true at King, famously known for its global Candy Crush enterprise, where the appetite for data continues to spread across the organization.

“There’s no part of the business that doesn’t use data in some form to make important decisions. With Looker, the data team has the architecture that enables different teams to drill into data that is more specific to them, as opposed to looking at ‘one-size-fits-all’ data.”
Jonathan Palmer, Product Director of Core Data services at King

To learn more, read the full story here.

Using Data Visualization To Increase Organizational Impact at Benefits Data Trust

Since its inception, Benefits Data Trust has secured over $7 billion in benefits and services to help individuals and families reach financial stability. To continue on its mission, it has become increasingly important that the BDT team have direct access to the data and information needed to ensure increased impact and efficiency. Using Looker, the teams are able to analyze trends, find answers to questions and formulate new ones, and share their insights.

“Data visualizations are at the core of our democratization process. They tell a logical visual story, inspire curiosity, and simplify the complexity of hundreds of thousands of data points.”
Matt Stevens, Director of Data Science at BDT

To learn more, read the full story here.

Cheers to a great year

We’d like to thank all of our customers for choosing Looker. We are continually impressed by the data-driven processes and solutions you build, and we’re honored to be a part of your journey.

Stay up to date on how Looker is helping empower people through the smarter use of data by subscribing to our blog.

Opportunity, Culture, and Balance — Insights From Looker’s Leadership in 2018


Workplaces are continually evolving and changing. In a matter of a year, a lot can happen that can divert focus away from the aspects that make up a positive, inclusive, and healthy work environment.

With a new year around the corner, we reflect on the importance of equal opportunity, culture, and balance in the workplace as shared by our leaders at Looker...

Creating equal opportunities for growth

Ideas are what fuel the conversations that are key for innovation and growth. In business today, it’s become increasingly important for ideas to be backed by facts and data. Jen Grant, CMO at Looker, emphasizes that by giving employees an equal view of the data, everyone can share in the opportunity to contribute to the successes of the business.

“Without data to bring people together, discourse favors those that are able to get their voices heard. And, once again, people with the megaphone tend to be the same types of people who have always held power.”

“But the solution is simple: Everyone — every employee — needs access to a unified view of a company’s business data. By giving everyone an equal view, companies can begin to increase the diversity of ideas and opportunities. Every contributor can help diagnose the organization’s challenges and share their ideas for solving them.”

(read the full story here)

Scaling with culture

As organizations scale, it can be challenging to scale company culture and values at the same rate. Frank Bien, Looker’s CEO, shares that Looker has built, and continues to build, its growth around equating customer and employee happiness. By focusing on the way people are hired, the culture they come into, and how values are practiced within the company, culture can more easily grow with the organization.

“For us, it has to be employee first. If you have motivated employees who are passionate about what they do, you will have happy customers. You can’t have happy customers in a software company with a workforce that isn’t highly motivated.”

“Great companies exhibit a personality. And I think that’s the part that scales, it’s who the people are, and who they share the vision with and bring on, is what makes a company great.”

(read the full story here)

Striking a balance

One of the corporate values at Looker is “Make Time to Shred” — a surfing term that emphasizes the importance of having a work-life balance. For our CTO and Founder, Lloyd Tabb, this comes in the form of riding his bike. Lloyd shares that by making time for yourself outside of the office, you can improve the way you approach new goals and challenges in your role at work.

“The best thinking I do when I’m programming is on the bike. It’s the ability to be moving and having the blood flowing and having the time to work through the problem. The creative process works best when I’m on my bike.”

“A good life is a series of good days. If you work all of the time, you burn out, and you won’t be able to do your best creative work when you’re overworked.”

(read the full story here)

Onward

We look forward to the good days ahead, and wish you a great start to the new year.

For updates from our customers, leadership, and partners, click here to subscribe to our blog.

How To Combine A Retail Calendar With Retail Sales Data


How often have you heard retailers, restaurants, or other companies talk about “comp” sales? In retail, comparable store sales measure the performance of a company against sales from the same period in a prior year. For many, this happens quarterly or even monthly, and outlines how a store is performing year over year. However, year over year performance can be misleading if calculated incorrectly, which is why the use of a retail calendar is so important.

Why Use a Retail Calendar?

Depending on your sales channel, sales may not be spread evenly across the days of the week, and because months and years start on different weekdays, the mix of weekdays and weekends in a given month changes from year to year. This made comparing year over year sales difficult until the 1940s, when the use of a retail calendar became common.

The retail calendar allows you to compare sales across different time periods and helps predict sales. Retail calendars do this by dividing the year into four quarters of 13 weeks (91 days) each, arranged in a 4-5-4 or 4-4-5 format (i.e., a month with 4 weeks, followed by a month with 5 weeks, followed by a month with 4 weeks). This ensures that every month has the same number of weekends as the same month in the prior year, so that sales for comparable time periods cover the same mix of weekdays and weekends. Additionally, the National Retail Federation (NRF) uses a 4-5-4 calendar and adjusts the start of the calendar year to ensure that major holidays fall in the same period for proper comparisons.

How To Combine Your Retail Fiscal Year Calendar And Retail Sales Data

Below is a step-by-step process detailing how to combine the two calendars to view all your retail sales data in one place.

Step 1: Download the appropriate calendar from the NRF for the time period you want.

Step 2: Next, create a table in your database with at least the following retail calendar information:

Retail Calendar Information To Download

  • Day
  • Day of Year Number
  • Week
  • Week Day Number

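For reference, here is a minimal sketch of what that source table might look like. It is hypothetical, but assumes Redshift-style DDL and uses the column names that the Step 3 script references (the script also expects day-of-year, month, quarter, and year columns, so they are included here):

-- Hypothetical staging table for the downloaded NRF calendar, one row per day.
-- Column names match what the Step 3 script expects.
CREATE TABLE IF NOT EXISTS calendar.retail_calendar
(
    calendar_date          TIMESTAMP WITHOUT TIME ZONE,
    retail_day_of_week     BIGINT,         -- 1-7, Sunday week start
    retail_day_of_year     BIGINT,         -- 1-364 (371 in a 53-week year)
    retail_week            BIGINT,         -- 1-52 (or 53)
    retail_week_day_number VARCHAR(64),
    calendar_month         BIGINT,
    calendar_quarter       BIGINT,
    retail_year            BIGINT,
    PRIMARY KEY(calendar_date)
);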

Step 3: From here, use the SQL code below to build a daily calendar table that flags the current and prior-year periods (yesterday, this week, this month, this quarter, and this year), along with the current week and week day number.

-- GENERATION OF THE RETAIL CALENDAR BASED ON NRF CALENDAR AND PIVOTS

--------------------------------------------------------------------------------
-- INITIAL CODE
--------------------------------------------------------------------------------
CREATE TABLE IF NOT EXISTS drp.calendar
(
calendar_date TIMESTAMP WITHOUT TIME ZONE,
retail_day_of_week BIGINT,
retail_week BIGINT,
retail_week_day_number VARCHAR(64),
calendar_month BIGINT,
retail_year BIGINT,
this_yesterday INT,
this_week INT,
this_month INT,
this_quarter INT,
this_year INT,
yesterday_last_year INT,
this_week_last_year INT,
this_month_last_year INT,
this_quarter_last_year INT,
this_year_last_year INT,
yesterday VARCHAR(32),
week VARCHAR(32),
wtw VARCHAR(32),
month VARCHAR(32),
mtm VARCHAR(32),
quarter VARCHAR(32),
year VARCHAR(32),
_loaded_at  TIMESTAMP WITHOUT TIME ZONE,
PRIMARY KEY(calendar_date)
)
SORTKEY(calendar_date)
;


--------------------------------------------------------------------------------
-- MAIN CODE
--------------------------------------------------------------------------------
TRUNCATE TABLE drp.calendar;

INSERT INTO drp.calendar
WITH
    yesterday AS
(
    SELECT *
    FROM calendar.retail_calendar
    WHERE calendar_date = DATEADD(DAY, -1, CONVERT_TIMEZONE('UTC', 'PST',GETDATE())::DATE)
),
    date_flag AS
(
SELECT
    this_year.calendar_date AS this_year_date,
    last_year_week.calendar_date AS last_year_date,
    last_year.calendar_date AS last_year_date_2,
    this_year.retail_day_of_week,
    this_year.retail_week,
    this_year.retail_week_day_number,
    this_year.calendar_month,
    this_year.retail_year,
    CASE WHEN this_year.calendar_date = yesterday.calendar_date THEN 1 ELSE 0 END AS yesterday,
    CASE WHEN this_year.calendar_date = yesterday.calendar_date THEN last_year_week.calendar_date ELSE NULL END AS yly,
    CASE WHEN this_year.retail_day_of_week <= yesterday.retail_day_of_week AND this_year.retail_week = yesterday.retail_week
         AND this_year.retail_year = yesterday.retail_year THEN 1 ELSE 0 END AS this_week,
    CASE WHEN this_year.retail_day_of_week <= yesterday.retail_day_of_week AND this_year.retail_week = yesterday.retail_week-1
         AND this_year.retail_year = yesterday.retail_year THEN 1 ELSE 0 END AS last_week,
    CASE WHEN this_year.retail_day_of_week <= yesterday.retail_day_of_week AND this_year.retail_week = yesterday.retail_week
         AND this_year.retail_year = yesterday.retail_year THEN last_year_week.calendar_date ELSE NULL END AS wly,
    CASE WHEN this_year.retail_day_of_year <= yesterday.retail_day_of_year AND this_year.calendar_month = yesterday.calendar_month
         AND this_year.retail_year = yesterday.retail_year THEN 1 ELSE 0 END AS this_month,
    CASE WHEN DATE_PART(day, this_year.calendar_date) <= DATE_PART(day, yesterday.calendar_date) AND this_year.calendar_month = yesterday.calendar_month-1
         AND this_year.retail_year = yesterday.retail_year THEN 1 ELSE 0 END AS last_month,
    CASE WHEN this_year.retail_day_of_year <= yesterday.retail_day_of_year AND this_year.calendar_month = yesterday.calendar_month
         AND this_year.retail_year = yesterday.retail_year THEN last_year.calendar_date ELSE NULL END AS mly,
    CASE WHEN this_year.retail_day_of_year <= yesterday.retail_day_of_year AND this_year.calendar_quarter = yesterday.calendar_quarter
         AND this_year.retail_year = yesterday.retail_year THEN 1 ELSE 0 END AS this_quarter,
    CASE WHEN this_year.retail_day_of_year <= yesterday.retail_day_of_year AND this_year.calendar_quarter = yesterday.calendar_quarter
       AND this_year.retail_year = yesterday.retail_year THEN last_year.calendar_date ELSE NULL END AS qly,
    CASE WHEN this_year.retail_day_of_year <= yesterday.retail_day_of_year
         AND this_year.retail_year = yesterday.retail_year THEN 1 ELSE 0 END AS this_year,
    CASE WHEN this_year.retail_day_of_year <= yesterday.retail_day_of_year
         AND this_year.retail_year = yesterday.retail_year THEN last_year.calendar_date ELSE NULL END AS ly
FROM calendar.retail_calendar this_year
LEFT JOIN calendar.retail_calendar last_year_week
    ON this_year.retail_day_of_week = last_year_week.retail_day_of_week
    AND this_year.retail_week = last_year_week.retail_week
    AND this_year.retail_year = last_year_week.retail_year + 1
LEFT JOIN calendar.retail_calendar last_year
  ON this_year.calendar_date = DATEADD(year, 1, last_year.calendar_date)
    AND DATE_PART(MONTH, this_year.calendar_date) = DATE_PART(MONTH, last_year.calendar_date)
CROSS JOIN yesterday
)
SELECT
    retail_calendar.calendar_date,
    rc.retail_day_of_week,
    rc.retail_week,
    rc.retail_week_day_number,
    rc.calendar_month,
    rc.retail_year,
    COALESCE(rc.yesterday, 0) AS this_yesterday,
    COALESCE(rc.this_week, 0) AS this_week,
    COALESCE(rc.this_month, 0) AS this_month,
    COALESCE(rc.this_quarter, 0) AS this_quarter,
    COALESCE(rc.this_year, 0) AS this_year,
    CASE WHEN week.yly IS NOT NULL THEN 1 ELSE 0 END AS yesterday_last_year,
    CASE WHEN week.wly IS NOT NULL THEN 1 ELSE 0 END AS this_week_last_year,
    CASE WHEN month.mly IS NOT NULL THEN 1 ELSE 0 END AS this_month_last_year,
    CASE WHEN quarter.qly IS NOT NULL THEN 1 ELSE 0 END AS this_quarter_last_year,
    CASE WHEN year.ly IS NOT NULL THEN 1 ELSE 0 END AS this_year_last_year,
    CASE
        WHEN rc.yesterday = 1 THEN 'This Year'
        WHEN week.yly IS NOT NULL THEN 'Last Year'
    END AS Yesterday,
    CASE
        WHEN rc.this_week = 1 THEN 'This Year'
        WHEN week.wly IS NOT NULL THEN 'Last Year'
    END AS Week,
    CASE
        WHEN rc.this_week = 1 THEN 'This Week'
        WHEN week.wly IS NOT NULL THEN 'This Week Last Year'
        WHEN rc.last_week = 1 THEN 'Last Week'
    END AS wtw,
    CASE
        WHEN rc.this_month = 1 THEN 'This Year'
        WHEN month.mly IS NOT NULL THEN 'Last Year'
    END AS Month,
    CASE
        WHEN rc.this_month = 1 THEN 'This Month'
        WHEN month.mly IS NOT NULL THEN 'This Month Last Year'
        WHEN rc.last_month = 1 THEN 'Last Month'
    END AS mtm,
    CASE
        WHEN rc.this_quarter = 1 THEN 'This Year'
        WHEN quarter.qly IS NOT NULL THEN 'Last Year'
    END AS Quarter,
    CASE
        WHEN rc.this_year = 1 THEN 'This Year'
        WHEN year.ly IS NOT NULL THEN 'Last Year'
    END AS Year,
     GETDATE() AS _loaded_at
FROM calendar.retail_calendar
LEFT JOIN date_flag rc ON rc.this_year_date = retail_calendar.calendar_date
LEFT JOIN date_flag week ON retail_calendar.calendar_date = week.wly
LEFT JOIN date_flag month ON retail_calendar.calendar_date = month.mly
LEFT JOIN date_flag quarter ON retail_calendar.calendar_date = quarter.qly
LEFT JOIN date_flag year ON retail_calendar.calendar_date = year.ly
ORDER BY retail_calendar.calendar_date
;


-- DATA VALIDATION
INSERT INTO drp.data_validation
SELECT
    CAST('010: Retail Calendar' AS TEXT) AS tablename,
    COUNT(*) AS total,
    COUNT(DISTINCT calendar_date) AS "unique", -- quoted: UNIQUE is a reserved word
    NULL AS last_synced_at,
    MAX(_loaded_at) AS last_load_date
FROM drp.calendar
;
The code does this by looking at the current date and finding the corresponding dates for that same week and week day number from last year.

This allows you to easily isolate the days that correspond to the current year and the previous year, making for easy year over year comparison.

Important Note: You will need to run this code daily as either a Looker Persistent Derived Table (PDT) or as a scheduled job in your database to ensure that the flags in the table are updated daily to show the correct year over year values for yesterday, this week, and last week.
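To make the payoff concrete, here is a minimal sketch of a year over year query that uses the flags above. The sales table and its sale_date and amount columns are hypothetical placeholders for your own fact table:

-- Hypothetical example: compare sales for This Week, This Week Last Year,
-- and Last Week using the wtw labels produced by the script above.
SELECT
    c.wtw,
    SUM(s.amount) AS total_sales
FROM sales s
JOIN drp.calendar c
    ON s.sale_date = c.calendar_date::DATE
WHERE c.wtw IS NOT NULL  -- keep only dates that fall in a comparison window
GROUP BY c.wtw
ORDER BY c.wtw
;

Pivoting total_sales by the wtw label (which is exactly what the LookML in Step 4 enables) puts the comparison periods side by side.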

Step 4: Now you’ll need to incorporate the LookML below with the daily calendar table, which will create three sets of dimensions:

  • Calendar: this has all the different dimensions using the fiscal calendar
  • Retail: this has all the different dimensions from the retail calendar
  • Pivots: these allow you to show the year over year comparisons by simply filtering and choosing “This Year” and “Last Year”
  view: calendar {
  sql_table_name: drp.calendar ;;
  dimension_group: calendar {
    type: time
    convert_tz: no
    timeframes: [raw, time, date, day_of_week, day_of_week_index, week, month, month_name, month_num, quarter, year]
    sql: ${TABLE}.calendar_date ;;
  }
  # Filters
  filter: previous_period_filter {
    type: date
    convert_tz: no
    description: "Use this filter for period analysis - i.e. select This Year vs. Last Year for the Period using Pivot #6 "
  }
  dimension: previous_period {
    type: string
    label: "6 - Pivot by Previous Period Selected"
    description: "The reporting period as selected by the Previous Period Filter and Add a Filter for 'Dimension is Not Null'"
    group_label: "Pivots"
    sql:
      CASE
        WHEN {% date_start previous_period_filter %} is not null AND {% date_end previous_period_filter %} is not null /* date ranges or in the past x days */
          THEN
            CASE
              WHEN ${calendar_date} >=  {% date_start previous_period_filter %}
                AND ${calendar_date} < {% date_end previous_period_filter %}
                THEN 'This Period'
              WHEN ${calendar_date} >= DATEADD (year, -1, {% date_start previous_period_filter %} )
                AND ${calendar_date} < DATEADD (year, -1, {% date_end previous_period_filter %} )
                THEN 'This Period Last Year'
            END
          END ;;
  }
# Dimensions
  dimension: yesterday {
    type: string
    label: "1 - Pivot by Yesterday"
    description: "Pivot Dataset by Yesterday: TY vs LY. To Use, Pivot This Field and Add a Filter for 'Dimension is Not Null'"
    group_label: "Pivots"
    sql: ${TABLE}.yesterday ;;
  }
  dimension: wtw {
    type: string
    label: "2 - Pivot by Week"
    description: "Pivot Dataset by Current Week: TY, LY, & LW. To Use, Pivot This Field and Add a Filter for 'Dimension is Not Null'"
    group_label: "Pivots"
    sql: ${TABLE}.wtw ;;
  }
  dimension: mtm {
    type: string
    label: "3 - Pivot by Month"
    description: "Pivot Dataset by Current Month: TY, LY, & LM. To Use, Pivot This Field and Add a Filter for 'Dimension is Not Null'"
    group_label: "Pivots"
    sql: ${TABLE}.mtm ;;
  }
  dimension: quarter {
    type: string
    label: "4 - Pivot by Quarter"
    description: "Pivot Dataset by Current Quarter: TY vs LY. To Use, Pivot This Field and Add a Filter for 'Dimension Is Not Null'"
    group_label: "Pivots"
    sql: ${TABLE}.quarter ;;
  }
  dimension: year {
    type: string
    label: "5 - Pivot by Year"
    description: "Pivot Dataset by Current Year: TY vs LY. To Use, Pivot This Field and Add a Filter for 'Dimension is Not Null'"
    group_label: "Pivots"
    sql: ${TABLE}.year ;;
  }
  dimension: retail_day_of_week {
    type: number
    label: "Retail Day of Week"
    description: "Day of Week of Selected Date Based on Sunday Week Start (NRF Calendar)"
    sql: ${TABLE}.retail_day_of_week ;;
  }
  dimension: retail_week {
    type: number
    label: "Retail Week"
    description: "Retail Week Based on the NRF Calendar"
    sql: ${TABLE}.retail_week ;;
  }
  dimension: retail_week_day_number {
    type: string
    label: "Retail Week Day Number"
    description: "Retail Day of Week Based on the NRF Calendar"
    sql: ${TABLE}.retail_week_day_number ;;
  }
  dimension: retail_year {
    type: number
    label: "Retail Year"
    description: "Retail Year Based on the NRF Calendar"
    sql: ${TABLE}.retail_year ;;
  }
# Hidden Dimensions
  dimension: this_month {
    type: number
    hidden: yes
    sql: ${TABLE}.this_month ;;
  }
  dimension: this_month_last_year {
    type: number
    hidden: yes
    sql: ${TABLE}.this_month_last_year ;;
  }
  dimension: this_quarter {
    type: number
    hidden: yes
    sql: ${TABLE}.this_quarter ;;
  }
  dimension: this_quarter_last_year {
    type: number
    hidden: yes
    sql: ${TABLE}.this_quarter_last_year ;;
  }
  dimension: this_week {
    type: number
    hidden: yes
    sql: ${TABLE}.this_week ;;
  }
  dimension: this_week_last_year {
    type: number
    hidden: yes
    sql: ${TABLE}.this_week_last_year ;;
  }
  dimension: this_year {
    type: number
    hidden: yes
    sql: ${TABLE}.this_year ;;
  }
  dimension: this_year_last_year {
    type: number
    hidden: yes
    sql: ${TABLE}.this_year_last_year ;;
  }
  dimension: this_yesterday {
    type: number
    hidden: yes
    sql: ${TABLE}.this_yesterday ;;
  }
  dimension: yesterday_last_year {
    type: number
    hidden: yes
    sql: ${TABLE}.yesterday_last_year ;;
  }
}
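To put the pivots to work, join the calendar view to a fact table in an explore. Below is a minimal sketch, assuming a hypothetical orders view with an order_date dimension:

  # Hypothetical explore: joining each order to its calendar row lets the
  # pivot dimensions (yesterday, wtw, mtm, quarter, year) slice order measures.
  explore: orders {
    join: calendar {
      sql_on: ${orders.order_date}::date = ${calendar.calendar_date}::date ;;
      relationship: many_to_one
    }
  }

From there, a user can pivot on “2 - Pivot by Week”, add a filter for “is not null”, and see order totals for This Week, This Week Last Year, and Last Week side by side.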


Step 5: Once you’ve done all this, the code will generate a daily calendar. This creates a calendar view that allows you to look at retail data on both a fiscal and retail calendar basis for seamless year over year analysis.

The End Result

By combining a fiscal and retail calendar, conducting year over year analyses not only becomes easier for your organization, but also becomes more useful. With an accurate view of your year over year retail sales data, sales forecasting, trend monitoring, and goal-oriented planning can be based on concrete data, giving strategic direction to monthly and quarterly initiatives.

Want To Learn More?

Interested in learning more about using your retail calendar and retail data together at your organization? Contact the Daasity team to learn about omnichannel analytics and data warehousing solutions for retailers, or stop by the Looker booth #1433 at NRF 2019: Retail's Big Show to learn how retail leaders use data to better understand their customers.

My Surprise Career With Diversity And Inclusion In The Tech Industry


When considering a career where I could positively contribute to society, a profession in diversity, equity, and inclusion (DEI) was not where I envisioned ending up. In college, I studied pre-med and majored in communications with two minors in healthcare management and healthcare communications. I was determined to become an OB/GYN and find the cure for cervical cancer. After failing organic chemistry, I had to take a step back and reflect on what it was I really wanted from my career.

I realized that I was most happy working with people, conversing with people, and using dialogue to make spaces better than they were before. Upon shifting my focus to this new goal, I began developing and facilitating spaces that empowered people to have difficult dialogues around the issues that impact our society every day, which led to the beginning of my career in DEI.

What Does Diversity, Equity, and Inclusion Mean?

DEI stands for diversity, equity, and inclusion. These break down into:

  • Diversity: a presence of differences within a given setting
    • i.e., Sexual orientation, ability status, religious affiliation, race, gender, gender identity, gender expression, political affiliation, citizenship, socioeconomic status, and educational attainment
  • Equity: ensuring everyone has access to the same opportunities
  • Inclusion: ensuring those with different identities feel welcomed and their contributions and differences are valued, not just recognized
    • As Anicia Santos, a member of the DEI Committee at Looker, explains, ‘Diversity is being asked to the party. Inclusion is being asked to dance.’

Working Towards Diversity in the Workplace

Since making my career change, I quickly found that working in DEI is not easy. There is a lot of emotional energy required to organize and promote safe spaces for diverse groups of people from various backgrounds, political affiliations, and lived experiences to come together. As a leader and advocate of DEI in the workplace, it is my job to make sure people are heard, feel valued, and at the same time, are challenged to consider the multiple truths that can exist at the same time. While this isn’t always an easy task, the effect it can have in the workplace is monumental, which is why it is so important it be an area of focus at every organization.

What I’ve come to love the most about working in DEI is getting to see the breakthroughs that DEI initiatives help people experience. It is so powerful and beautiful to see people understand more about themselves, the world around them, and the lived experiences of others. It’s because of these experiences and deeper understandings that I, and I’m sure many leaders in DEI, will continue to find ways to create safe, brave spaces, so that people can truly be their authentic selves and feel like they belong.

Diversity in Tech + Looker

Joining Looker as the Global Head of DEI has been a dream come true. The tech industry has always had a unique ability to influence the way people look at the world. I believe Looker has an opportunity to be a leader in this area, and help move the needle in a positive direction towards making DEI a standard in the technology industry.

A great example of Looker’s commitment to DEI in the workplace was the recent “Looker DEI Stories” night, which was put on by our Looker DEI committee. During the event, half a dozen Lookers got up and shared personal stories and experiences that have shaped who they are today. Lookers shared smiles, laughter, and even some tears that evening. But even more importantly, they all shared in the understanding that they were in a space of safety, openness, and inclusion.

It is from these events and opportunities, where people can feel safe being vulnerable and sharing their stories, that we can begin to break down the barriers that prevent us from knowing each other deeply, and really push us to start engaging in deeper dialogue with one another.

I am very honored to be a part of an organization that values DEI and is ready to do the work required to create a place where everyone can be their authentic self.

Click here to learn more about DEI at Looker and subscribe to our blog to get updates on future stories about DEI at Looker.


5 Tips For Success With Looker


Bringing self-service analytics to your organization can be a long but rewarding journey, and it certainly doesn’t stop after the launch of your first instance. Succeeding with your data through Looker is an ongoing process of education, enablement, and discovery.

Whether you are a seasoned Looker customer, or rolling out your instance for the very first time, we’d like to share some tips that have helped our customers maintain healthy instances and teams of happy, effective users.

1) Educate your team

First, and most importantly, make sure your team is aware of the Looker training resources. Sharing these courses early and often with every Looker user in your organization will help enable confident self-service.

2) Build intuitive content

The more intuitive your explores and dashboards are, the better users will be able to self-serve. An easy way to get started is to check out our eLearning course on “Building Explores Users Will Love”. Additionally, deleting old content and holding regular data-governance meetings will go a long way in helping to keep things clean for your teams. Lastly, leverage Looker’s internal usage data (the i__looker model) to make data-driven decisions about what to delete.

3) Build a support network

Identifying Looker “ambassadors” across your organization is a great way to build a support network for users. Ambassadors are often stakeholders who represent their end-user groups. They’re able to help answer questions and drive adoption within their respective teams. If you’re looking for ways to identify potential ambassadors at your organization, use i__looker to find the people with the highest Looker usage.

Once you have ambassadors in place, encourage them to consider owning one or more of the following, based on their strengths:

  • Assisting with training. In addition to building out a Looker ramp plan, some customers find it helpful to develop a few of their own short videos to supplement the Looker eLearning courses. Videos that tend to be particularly helpful are those that outline key workflows relevant to your user groups. For example: “Getting Started with Looker at [your company].”
  • Supporting business users. This could include:
    • Answering questions in a Looker-specific Slack channel or through an email alias.
    • Setting up an email distribution list to email all Looker users periodically about new content developed and/or new Looker release notes/features.
    • Adding links and other important news into the “links” section of your organization’s Looker homepage.
    • Holding regular office hours or offering quarterly trainings, to give users an in-person way to ask questions and get help.
  • Helping with administration: This is less common and reserved for those who are very advanced and technical. With some additional LookML training, these ambassadors can help with model development and permission setting for end-users.

4) Give users the appropriate access

Giving the most appropriate permissions to your users will go a long way in keeping your instance clean and useful. Learn more in our Secure your Spaces! article.

5) Create a “data dictionary” using Looker’s API

Creating a data dictionary of all the fields in an explore will help keep track of the logic developers are putting in the modeling layer. If you have thousands of fields, you can search for the fields you need and quickly figure out what they are called.

Important Note: To create a data dictionary, you will need an internal site to host the values, or you can use a Wiki page.

Bonus Tips: A Few Things To Avoid

Looker is a very powerful tool, which means people can certainly find ways to create a confusing environment for end-users. Below are a few common pitfalls that can get in the way of self-service and success.

1) Too many exposed explores

If all users are able to see all the explores in your instance, they can easily become confused. It is important to give users access only to what they need to do their jobs. If you have niche explores, consider hiding them from users who don’t need them.

2) Using one giant explore

Having one giant explore with everything can cause content-overload and confusion. It is important to give your dimensions and measures names that actually make sense. The more clear and specific you are with naming, the better. Additionally, be sure to use ‘group’ and ‘view’ labels to better organize the field picker for your end users, as in the sketch below.
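As a quick illustration, here is a minimal LookML sketch (the view and field names are hypothetical) of how group_label and view_label keep the field picker tidy:

# view_label renames the heading that a joined view’s fields appear under.
explore: orders {
  join: users {
    view_label: "Customers"
    sql_on: ${orders.user_id} = ${users.id} ;;
    relationship: many_to_one
  }
}

view: orders {
  sql_table_name: public.orders ;;

  dimension: shipping_carrier {
    type: string
    group_label: "Shipping"  # nests related fields under one heading in the field picker
    sql: ${TABLE}.shipping_carrier ;;
  }

  dimension: shipping_days {
    type: number
    group_label: "Shipping"
    sql: ${TABLE}.shipping_days ;;
  }
}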

3) No rules for how and where users save content

If your instance is the wild-west of users saving content anywhere they please, content redundancy, difficult clean-up, and prolonged confusion are likely to ensue. Set guidelines for how and where you’d like users to save their content to help keep everyone on the same page.

Looking for more?

If you want to learn more, check out how Diane and the team at Adore Me cracked the code on best practices and adoption.

Still have questions? Check out our Looker User Guide, or reach out to our chat support team.

Five Global Trends in Data Ethics and Privacy in 2019


It’s no surprise that a recent Gartner1 report called out Digital Ethics and Privacy as one of the top trends for 2019. Data privacy and ethics issues have been hot topics, particularly in tech, for some time now. But what does that mean for organizations wanting to move from being compliance-driven to ethics-driven? What are those big things happening around privacy and ethics? Where is it happening?

First, a bit about what I mean by privacy and by data ethics:

  • Data privacy is responsibly collecting, using and storing data about people, in line with the expectations of those people, your customers, regulations and laws.
  • Data ethics is doing the right thing with data, considering the human impact from all sides, and making decisions based on your brand values.

With that in mind, here are 5, of probably many, important global trends I see for privacy and ethics in 2019:

1. Chief Privacy Officers can expect ethics to become an explicit part of their role

As technology becomes an increasingly important part of people’s lives, data ethics must be translated into sound business practices to ensure that both internal and external interests are balanced. This begins with considering the human impact from all sides of data use, the impacts on people and society, and considering whether those impacts are beneficial, neutral, or potentially risky. Moving forward from 2019, Chief Privacy Officers and Privacy Leaders should expect to start incorporating ethics assessments into data collection and uses, to ask, ‘what is fair’, ‘what is the right thing to do’?

2. Technology companies will lead the way for U.S. Federal Privacy legislation

Following the implementation of the General Data Protection Regulation (GDPR) in the European Union, companies in the technology industry will lead the charge towards similar privacy legislation in the United States. It has yet to be determined whether such legislation should be “baseline” (setting the floor) or “comprehensive” (generally more prescriptive and detailed, like the GDPR). Regulators will also have to decide if the focus should be “rights-based”, “risk and harms based”, “accountability-based”, or some combination of all three.

Regardless, the necessity for data privacy legislation in the United States continues to be a galvanizing discussion in legislative houses, universities, and homes across the country. Virtually every industry uses technology to provide its products and services, and wider contributions from many industries will shape a better, more balanced regulatory outcome for all stakeholders.

3. Sustainable ethics codes will evolve to better address the challenges of a digital world

A quarter century ago, there was a generational shift in the consensus on how to respect privacy, driven by the emergence of personal computing, networked computing, and large structured databases. That shift led to the implementation of modernized rules governing the protection of personal data. Today, we are experiencing a new generational shift, driven by the globalization of the economy and profound alterations in the digital, physical, and biological spheres we live in, creating an ever-expanding, data-first, interconnected digital world.

To keep up with the evolving digital world, sustainable data ethics codes must go beyond check-the-box compliance and enforcement of the rules. New data ethics codes must objectively consider the effects that new technologies, and data uses beyond common understanding, have on people.

This year, data ethics will rise to become a board-level topic, requiring companies to take a values-driven approach and understand the consequences of both using and not using data. Companies must remember that not everything that is legally compliant and technically feasible is ethically and morally sustainable, nor is it always protective of the autonomy and privacy of people.

4. Product excellence and privacy by design will become synonymous

Privacy by Design (PbD) means embedding data privacy requirements into product design and development, embodying the “build it in, don’t bolt it on” mentality. This includes building in:

  • Privacy-savvy defaults
  • In-product transparency
  • Considerations for and documenting privacy risks and data flows
  • Assigning data owners up front and throughout the data lifecycle, including E2E security

PbD is complementary to and just as important as secure coding. Revolutionary technologies like Artificial Intelligence (AI), machine learning models, and connected Internet of Things (IoT) devices demand up-front rigor, methods, tools, standards, and regular reviews. These reviews are needed throughout the entire process - from research and conception, to design, development, testing, implementation, and ongoing revisions - and should make sure to include third party services and data sources, open source code, and integration with existing products and services.2, 3

Knowing where your data is and why you have it has never been more critical from strategy, operational, and compliance perspectives. Data needs to be stored and managed in a way that keeps it clean and accessible for analysis and learning – to tackle business issues in real-time. The Looker platform helps businesses find this data, define it, and empower users to analyze it and gain insights to drive business outcomes, all without data sprawl.

5. Companies will drive to educate policy-makers and regulators about their technologies

It’s vital that policymakers and regulators develop a deeper understanding of what they wish to regulate at the U.S. State and Federal level, and the same is important for policy-makers in countries outside the U.S. Given the profound shifts in our global, digital, data-centric economy and the opportunities it offers to people and societies, policy-makers must consider:

  • What harms are they trying to protect people from?
  • What rights do they want to guarantee?
  • What problems are they trying to solve?
  • What are the privacy outcomes they hope to achieve for their citizens?

Organizations that spend the time educating policymakers on how information, communications, data platforms, and analytics technologies work - supplemented by substantial use cases and best practices across multiple industries, and demonstrating accountable and ethical data practices - will have the highest impact.

Looking to the future of data ethics and privacy

We stand at a crossroads for data ethics and privacy in 2019. Around the world, there will be spirited debates in break-rooms, living rooms, and government hallways about the impact, direction, and considerations given to these topics throughout the year. While these debates may drum up dramatic media stories, they may also unearth paths that lead to eye-opening enforcement by regulators. As I see it, one of these paths could lead to more complex, restrictive procedural compliance. The other, and my preferred path, would blend regulations, individual rights, common sense, and data ethics together for a more balanced, 21st century approach.

For future updates and insights from me on data privacy and ethics, subscribe to the Looker blog.


1. Gartner Top 10 Strategic Technology Trends for 2019, David Cearley, Brian Burke, October 15, 2018
2. Privacy Engineering: A Dataflow and Ontological Approach by Ian Oliver
3. The Privacy Engineer's Manifesto: Getting from Policy to Code to QA to Value by Michelle Dennedy, Jonathan Fox, and Tom Finneran

Moving Beyond BI - Looker Recognized for the 2nd Consecutive Year in the Gartner 2019 Magic Quadrant for Analytics


We’re proud to see Looker recognized again in the Gartner Magic Quadrant for Analytics and BI, which we believe validates our platform approach to modern analytics.

Since our founding, companies and data teams around the world have used Looker to make better decisions through the smarter use of data. We’ve helped facilitate this by developing a modern architecture for the data platform. Our focus has been intently set on customer satisfaction and helping expand the Looker community to include not just customers, but a far-reaching global partner ecosystem.

While we help organizations meet existing demands for information, we also strive to go beyond traditional business intelligence, dashboards, and reports to deliver truly actionable insights. As we plan for the future, we look forward to continuing to work closely with the data community to innovate and develop how people access and use data in their day-to-day work.

Customer First, With Everything

Dedication to our customers has helped us gain the trust of global leaders in innovation. International burger chain, Five Guys, uses Looker to deliver fresh insights to business users, increase efficiency, and reduce their carbon footprint. Using Looker, payments platform Adyen has created a data-driven culture at scale during a time of hypergrowth. Kiva provides microloans that change lives around the world, and they use Looker to create a single source of truth for their whole organization, ensuring resources are allocated properly and efficiently.

One way we’re supporting our global customer base is by focusing on connecting people, face-to-face. We started with JOIN, our annual data conference, and we’re taking it on the road again this year, visiting cities around the world with JOIN: The Tour 2019. We’ve also been supporting the data community by expanding our global Meetup groups — helping individuals meet others with similar roles and goals in the data community.

Connecting with our customers to work through challenges, empathize with their needs, and gain valuable feedback has been core to our values and product evolution. It is because of this that we have always aimed to provide exceptional customer support, including via our in-product chat interface. In 2018 we launched the Looker User Guide, which connects Looker users to the best training, videos, documentation, and community discussions available. We are continuing to create new resources that help our customers ignite data cultures within their own organizations by pulling best practices and tips from some of the most innovative data cultures around the globe.

Building A Platform For The Future

Looker is designed to provide value today and conquer the analytics challenges our industry has yet to identify. This is why we’ve built Looker as a Data Platform since day one — we know we need to meet today’s expectations, and we need the flexibility to build for tomorrow’s possibilities.

In 2019, we launched Looker applications to provide out-of-the-box solutions for common workflows and departmental use cases. Designed based on customer input, applications help solve common analytic challenges with simple, purpose-built interfaces. The first applications, built for digital marketing and event analytics, help marketers make data-driven decisions and demonstrate ROI.

Powered by Looker, our offering for embedded analytics, takes the full power of Looker and makes it easy for developers and product managers to build their own portals or applications — delivering data and insights wherever and whenever they’re needed. Analytics can be presented to users via an embedded iframe, a full RESTful API, or our scheduler, which delivers reports by email or webhook. You can deploy an existing custom-built application or leverage the Looker consulting partner ecosystem to help build an application of your own.

Building on the increasing need for data and our Looker vision, this year you can expect more integrations with our technology partners, more custom applications, and even more improvements to the core Looker platform.

Expanding the ecosystem

As Lookers, we want to use the latest and greatest tools and to help our customers do the same. This is why we’re committed to an open and robust ecosystem, integration with the latest technologies, and APIs that make Looker accessible wherever fresh, governed data is required.

Looker has always been designed to operate in-database, letting our customers capture the value and power of database improvements as they have developed. Looker was built to leverage the storage and processing power of modern data lakes and data warehouses by writing optimized SQL to transform your data on the fly. Looker currently supports more than 45 SQL dialects, including Amazon Redshift, Google BigQuery, Snowflake, and MySQL, as well as on-premise appliances like Teradata and Oracle. Frequently used data sources are pre-modeled: you can quickly download a Looker Block and begin analyzing your data immediately.

But Looker is integrated with more than just database partners — if you want to go beyond business intelligence, you can leverage our integrations for data science and ML, including Google BigQuery Machine Learning, Amazon SageMaker, IBM Watson, R, Python, and more.

Looking to the future

We help companies around the world get value from their data through the governed, secure, powerful, and scalable Looker Data Platform. We’re very proud of our results in the latest Gartner Magic Quadrant for Analytics and BI, and we believe this provides further validation of what Looker users have been telling us since day one: Looker is the tool that finally helps make data cultures real.

If you’re new to Looker and interested in learning more about how Looker can help you put insights into the hands of decision makers when and where they need them, we would love to hear from you. We think you’ll enjoy working with us today and into the future.

Five Reasons To Get Excited For JOIN: The Tour 2019


The learnings we walked away with after meeting so many of our customers at JOIN - our annual data conference - were so invaluable that they left us wanting more. That’s why we decided to bring JOIN directly to you for our second annual customer roadshow, JOIN: The Tour.

Beginning on March 5th, JOIN: The Tour kicks off in Los Angeles and Denver. From there, we have stops across the U.S. and Europe. In addition, we’re going to Tokyo for the first time, to meet with our expanding Looker community in Japan.

With the best content coming to a city nearest you, there’s a lot to be excited about for this year's tour. If you’re looking for specifics, here are five reasons why we think you should mark your calendars and plan to attend a JOIN: The Tour stop near you.

1. Give Us Feedback

Better understanding your needs, pains, and successes with Looker is one of the main reasons JOIN: The Tour was started. As our customer community has grown, we’ve noticed that the way people use data varies across regions and continents. Not only does JOIN: The Tour give us the opportunity to meet with the growing Looker community, but it is a great way to get to hear from you - our valued customers - so we can continue making our platform suit your data needs, no matter what region you reside in.

2. Meet Looker Executives

Not only has our Looker customer community continued to grow, but so has our Looker team. We’re excited to have our new Chief Product Officer, Nick Caldwell, and our new VP of Customer Success, Wayne McCulloch, join us this year. Hired in late 2018, they have already helped enhance the Looker platform and our overall customer experience program. Both Nick and Wayne will be at several stops of JOIN: The Tour to meet you, share their roadmaps, and learn more about your successes with your Looker deployment.

3. Engage With Your Tribe

Magical moments happen during JOIN: The Tour, usually sparked when attendees are able to meet and mingle with local data enthusiasts. For every tour stop, we work hard to ensure you get to learn from the top data companies in your area. This year’s speaker lineup includes Automatic, Bizible, Global Payments, Verizon, Walt Disney and more. Additionally, you can help your tribe thrive through one of our many local Meetup groups. We’ll have folks on-site at each JOIN: The Tour stop to help you get signed up on Meetup.com so you can continue meeting up with your peers after the tour.

4. Expand Your Ecosystem

Your analytics experience is only as good as your data ecosystem, right? At JOIN: The Tour, we’re tapping into your local markets to introduce you to our top technology and consulting partners. Partners like Amazon, Google, Snowflake, Trianz, Fivetran, Keboola, and many others will be with us throughout the tour to help enhance your tech stack so you can provide a one-of-a-kind experience to your data consumers.

5. Learn Tips, Tricks, and Best Practices

In 1996 Bill Gates penned an essay titled, ‘Content is King.’ He was referring to the Internet, but we are applying that theory to our JOIN: The Tour sessions. By attending, you will:

    1. Learn about our vision behind the Looker data platform
    2. Explore tips, tricks and best practices for showing, exploring, and integrating data anywhere
    3. Learn more about Looker’s 2019 Product Roadmap
    4. Get hands-on experience with our product
    5. Hear real-world use cases from other Looker customers

See You On The Tour!

There you have it! If you haven’t yet, be sure to check out what stops are nearest you for this year’s tour and register. And -- as a bonus reason to be excited -- JOIN: The Tour is a great teaser for our flagship conference, JOIN, which will take place in San Francisco on November 5-7th, 2019. We look forward to seeing you on the tour!

Why A Data Platform


The world of business data has changed radically in the past decade. That might seem obvious by now, but it is worthwhile to understand exactly why and how things have changed. On the supply side, there’s simply more business data than ever before. Not a little bit more or a lot more, either—VASTLY more.

The average enterprise today uses more than 1,000 cloud applications, plus custom enterprise software and transactional systems. All of those applications are data applications in that they capture huge amounts of data and produce clouds of data exhaust.

At the same time, today’s workforce is hungrier for data than ever before because they see the enormous value in it. Marketers want to know which campaigns bring in the biggest customers. Sales managers want to understand pipeline coverage. Warehouse directors want to optimize utilization. FP&A wants to improve their forecasts. Product Managers want to better track bugs. And on and on.

Given this explosion in both data supply and demand, businesses need tools to connect the two. In 2019, the horsepower needed to manage all of this data is available. From Hadoop and Big Data systems to cloud MPP data warehouses like Redshift, BigQuery and Snowflake, data infrastructure has undergone a revolution in the last decade. These advances have made it fast, cheap, and easy to query terabytes of data.

But business intelligence tools haven’t kept up. Designed in a world where slow, expensive databases ruled, they can’t fully leverage the recent innovations in data infrastructure. As a result, these tools make it impossible to give broad data access or guarantee everyone is aligned around the precise definition of key business metrics.

This hole in the market is why Looker was founded.

Leveraging Modern Technologies

Looker is built for modern data engines using modern technology. It brings together the most important software development paradigms including Git version control, full REST API coverage, and robust development workflows. With Looker, analysts and developers can build and maintain the powerful data tools that enterprises need to bring all that data supply to their employees.

At an architectural level, Looker is built for today’s ever-evolving data ecosystem. Looker natively connects to more than 40 different databases and fully leverages each of their unique data-processing capabilities. So rather than locking customers into a monolithic stack, Looker gives them the freedom to build a best-of-breed stack that meets their specific needs.

Looker also allows teams to confidently build their stacks knowing they can adjust them as the business grows and needs change. The same data models you build to analyze sales on your MySQL database can power large-scale analysis when you move to Hive or start building predictive models with Spark.

The Silos From Single-Use Solutions

Most customers come to Looker because they have problems that their BI tool can’t solve. But a few years ago, customers started coming to us after going down an entirely different path. After outgrowing the “analytics” tabs that most of their SaaS apps offered, these companies had been buying off-the-shelf analytics tools to meet each need.

Mixpanel optimized their website funnel. Splunk monitored their servers. Gainsight helped drive customer success. InsightSquared analyzed their sales funnel. Adobe managed their marketing spend. On top of that, everyone needed Excel to manually stitch things together.

This approach had its advantages. Each tool is purpose-built for the type of person who’ll use it. Plus, because each vertically integrated solution does it all—capturing, storing, querying, and presenting the data—customers can get up and running in days or weeks.

But the approach had clear drawbacks, too. For one, buying multiple, specialized tools is expensive. Because each of these tools is a proprietary, all-in-one solution, buyers are also locked in. If you decide to switch from Mixpanel to Heap, the data and insight you built up in one can’t be easily transferred to the other.

But by far the biggest danger of this siloed approach is that it won’t actually deliver the insights that teams benefit from most. The most valuable insights—the ones that can radically shift the trajectory of a business—come from combining data. And that’s precisely what silos prevent.

When your server monitoring identifies a slew of errors in your ordering flow, it helps you find and fix them. What it doesn’t tell you: whether the errors impeded customers’ orders; the lost orders’ impact on the bottom line; any updates you need to make in your supply chain; and which impacted customers deserve a discount code for their trouble.

The only way to get 360° of insight into your business is unifying your data. That’s what a data platform does, and it’s how we see more and more customers using Looker. Looker provides a common data surface—a unified layer that brings together data from all your sources, transforms it into the insights you need, and sends those insights to whoever needs them, wherever they are.

A Platform For Purpose-Built Experiences

Looker isn’t a point solution that just serves one department or one industry, but a flexible suite of tools that can be used to make sense of any kind of data. Want to map website sessions to people on your marketing list and customer loyalty data? Looker can do that. Want to see how events in the sales process impact customer support and customer retention? Looker can do that too.

But all that technical capability needs to be married to intuitive, purpose-built interfaces that help people in every role. That’s why we’ve moved beyond traditional BI and are investing so much in Looker’s data platform.

Powering agile business intelligence and traditional dashboards is still a core part of what we provide to our customers. But by embedding and delivering analytics and insight wherever you need it—from dynamic pricing engines to machine learning models to customer lookup interfaces—we’re helping people in every role be more informed and effective.

With the Action Hub and custom visualization framework, we’re providing the hooks to make Looker the data engine that drives all the other applications you use. Plus, we’re building purpose-built data applications to address specific workflows, and giving you the tools—REST APIs, the Looker Blocks Directory, and a powerful application framework—so you can build your own intuitive data experiences. Data experiences that make it possible for everyone in your company to do the best work of their careers.
