We explore how making SQL reusable can help streamline analytics across your business, and why it’s good to be lazy.
Good Programmers Are Lazy
So the saying goes. A good programmer knows that when they write the same chunk of code twice, they’re better off writing a reusable function. That instead of writing everything from scratch, they should look elsewhere for pre-built tools and modules. That instead of reinventing the wheel each time, they’re better off automating as much as possible.
SQL is incredibly good at getting data out of a database in a usable format. However, it is incredibly bad at being reusable and shareable. Analysts writing SQL cannot afford to be lazy: they often have to write the same code (joins, business logic, metrics) over and over. This leads to valuable time spent coding, and not enough time spent extracting actionable insights from the data.
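As an illustration of the problem (the table and column names here are hypothetical), consider a "net revenue" metric that every analyst has to re-type into each query that needs it:

```sql
-- Hypothetical schema: orders and refunds tables.
-- "Net revenue" = completed orders only, minus any refunds.
-- This join and filter logic must be copied into every query that uses the metric.
SELECT
  o.customer_id,
  SUM(o.amount - COALESCE(r.refund_amount, 0)) AS net_revenue
FROM orders o
LEFT JOIN refunds r
  ON r.order_id = o.order_id
WHERE o.status = 'complete'
GROUP BY o.customer_id;
```

If one analyst forgets the refund join, or applies a slightly different status filter, their numbers silently diverge from everyone else's, and there is no built-in way in SQL to share the correct definition.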
The problem is exacerbated in large organisations where teams of analysts sit in various sub-departments, with multiple analysts across multiple teams all working off the same data sources but no easy way to share the logic and SQL they have created.
It also creates a huge barrier to entry: to get the data out, everyone not only needs to know SQL, they also need to know the structure of the source database. They need to know which table holds the right information, which fields to grab, and which filters and logic to apply.
This has led to a rise in self-service business intelligence tools: tools that are supposed to let analysts and business people get the insights they need themselves, rather than relying on data extracts or dashboards built for them.
But somehow this hasn’t really solved the problem. With the ability to create new fields, custom SQL queries, and new logic in the application itself, these tools not only create a reusability issue between saved reports and dashboards, they also allow people to create their own versions of the truth. People are now free to use their own definitions of metrics and joins, and it’s still just as hard to share this logic between analysts and teams.
Looker is a new type of data tool that aims to solve this problem. Fundamentally, Looker is a way to automate the writing of SQL. At the core of Looker is a data model: this model provides a unified platform that anyone, from data scientists to analysts to business people, can use to get the data they want and extract insights.
The code-based data model allows you to define business logic, joins between underlying tables, metrics, and KPIs in a single place. You tell Looker how to write SQL, and Looker writes the SQL for everyone.
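To make this concrete, here is a minimal sketch of what such a model can look like in LookML, Looker’s modelling language (the view, field, and table names are hypothetical, and a real model would define more fields):

```lookml
# Hypothetical LookML model: the join and the net_revenue definition
# live here exactly once, and Looker generates the SQL from them.
view: orders {
  sql_table_name: analytics.orders ;;

  dimension: customer_id {
    type: number
    sql: ${TABLE}.customer_id ;;
  }

  dimension: status {
    type: string
    sql: ${TABLE}.status ;;
  }

  # The business definition of net revenue, written once for everyone.
  measure: net_revenue {
    type: sum
    sql: ${TABLE}.amount - COALESCE(${TABLE}.refund_amount, 0) ;;
    filters: [status: "complete"]
  }
}

explore: orders {
  # The join logic is also defined once, so no analyst has to remember it.
  join: customers {
    sql_on: ${orders.customer_id} = ${customers.id} ;;
    relationship: many_to_one
  }
}
```

Anyone querying through this model gets the same `net_revenue` measure and the same join, without ever writing the underlying SQL themselves.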
Utilising Looker means a data scientist no longer needs to write their own custom SQL to extract data for their machine learning model, an analyst no longer needs to write the same SQL over and over to get answers, and business people do not need to understand the complex relationships between fact tables and dimension tables to get the insights they need.
Since everything is defined exactly once in a single, shared model, it also creates a single version of the truth. No one is free to calculate metrics however they like, and everyone’s reports will agree with each other.
Looker is also creating an ecosystem for sharing analytics. Since everything is code-based, Looker has created a “Block” sharing centre that lets you use pre-built models for common analytical patterns and data sources, and it encourages users to share the models they’ve built so that others can benefit from them.
With Looker, analysts can spend more time generating insights, and less time writing SQL, letting analysts be lazy too.