Data Council Blog

Pete Soderling

Pete Soderling is the founder of Data Council & Data Community Fund. He helps engineers start companies.

Recent Posts:

25 Hot New Data Tools and What They DON’T Do

“Wait, do tool X and tool Y work together? I thought they were competitive.”

There are dozens of new tools in the fast-growing data ecosystem today. Together, they are reshaping data work in exciting, productive and often surprising ways. The seeds of the data landscape for the next decade have been planted, and they’re growing wildly.

Turns out, cultivating a new ecosystem is messy.

Should Datacoral Power Your New Data Infrastructure?

Today's companies aim to be data-driven, but data infrastructure is time-intensive and costly to build, maintain, and secure. A coral is the exoskeleton of a small marine animal that attaches to and grows on almost anything. Once it starts growing, it can create large reefs, which support a diverse ecosystem of plants and animals. So what happens if you apply that philosophy to the world of data?

How Histograms Can Help Improve Your Ops Monitoring

Life comes at you fast. Data even more so ...

When the engineering team at Circonus began to feel the pain of systems at scale, common observability tools provided them with a firehose of operational time-series telemetry. But managing all that data, let alone making sense of it, was extremely difficult. The existing tools they tried for managing time-series metrics either didn't give mathematical insight or fell over at modest workloads. They needed a better solution, so they decided to look into other statistical tooling options that had proven themselves for decades in other industries.
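To make the idea concrete, here's a minimal sketch (in Python with NumPy, not Circonus's actual tooling) of why histograms help: raw latency samples are collapsed into fixed-size bin counts that are cheap to store, can be merged across hosts and time windows, and still yield approximate percentiles.

```python
import numpy as np

# Hypothetical example: instead of storing every raw latency sample,
# aggregate each time window into a histogram with fixed bin edges.
bins = np.logspace(0, 4, num=50)  # log-spaced edges from 1 ms to 10 s

def to_histogram(latencies_ms):
    """Collapse one window of raw samples into fixed-size bin counts."""
    counts, _ = np.histogram(latencies_ms, bins=bins)
    return counts

def approx_percentile(counts, q):
    """Estimate the q-th percentile from bin counts alone."""
    cumulative = np.cumsum(counts)
    target = q / 100.0 * cumulative[-1]
    idx = np.searchsorted(cumulative, target)
    return bins[idx]  # lower edge of the bin containing the percentile

# Simulated latency samples for one window of traffic
samples = np.random.lognormal(mean=3.0, sigma=0.7, size=100_000)
counts = to_histogram(samples)

# Histograms from many hosts/windows can simply be summed, then queried
print("approx p50 (ms):", approx_percentile(counts, 50))
print("approx p99 (ms):", approx_percentile(counts, 99))
```

The key property is that the per-window storage cost is constant regardless of traffic volume, and histograms from different machines can be added together before computing quantiles.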

How to "Democratize" the Responsibility for Data Quality Across your Organization

Writing endless data transformations wasn't sustainable for an engineering team handling hundreds of inputs. Here's how Clover Health enabled their business users to help.

It's rare to find an ETL system that's completely static. As organizations change and grow, they develop new business requirements. Because of this, their data pipelines must change and adapt, ultimately becoming more robust and full-featured. Yet constant development can make already-brittle ETL systems seem even more fragile.

Furthermore, systems with many different types of inputs bring special challenges: building, testing, and managing an exploding number of data transformations can become a daunting project for the engineering team.

The Clover Health ETL system supports hundreds of inputs and more than 500 custom transformations in production, as well as a large number of custom connections between their different ETL pipelines. When hearing about the magnitude of the system, one might rightfully wonder, "How does Clover guarantee and maintain data quality across so many different inputs and transforms?"

Exploring the development trajectory of Clover's system makes for a fascinating story; their data team's successes and pitfalls offer illustrative lessons to other engineers seeking to increase the robustness of their own ETL systems.
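As a purely generic illustration (not Clover's actual system), one common way to spread responsibility for data quality is to declare expectations separately from transformation code, so domain experts can own the rules while engineers own the pipeline plumbing. A minimal Python sketch:

```python
# Generic sketch: data-quality expectations declared as data, not buried
# inside transformation code. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Expectation:
    name: str
    check: Callable[[dict], bool]  # returns True if the row passes

# Rules like these could be authored (or at least reviewed) by business users
member_expectations: List[Expectation] = [
    Expectation("member_id present", lambda row: bool(row.get("member_id"))),
    Expectation("age in plausible range", lambda row: 0 <= row.get("age", -1) <= 120),
]

def validate(rows: List[dict], expectations: List[Expectation]) -> Dict[str, int]:
    """Run every expectation against every row and report failure counts."""
    failures = {e.name: 0 for e in expectations}
    for row in rows:
        for e in expectations:
            if not e.check(row):
                failures[e.name] += 1
    return failures

if __name__ == "__main__":
    sample = [{"member_id": "A1", "age": 34}, {"member_id": "", "age": 150}]
    print(validate(sample, member_expectations))
```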

The Future of Distributed Databases is Relational

What if developers could ditch their NoSQL solutions and still get scalability from a more traditional relational datastore?

I've been noticing an interesting pattern recently where developers seem to be rejecting some of the newer, more en vogue data stores with limited functionality and use-cases (while promising easier scale) and returning to the comfortable tried-and-true paradigm of relational databases. It seems that we've hit a watershed point where developers finally believe they don't necessarily need to make a trade-off between database features on one hand and easy scalability on the other.

One such company enabling this return to the golden era of the RDBMS is Citus Data. Citus is blazing a trail in 'cloud-proofing' the gold standard of relational databases, PostgreSQL, through extensions that allow their customers to achieve much easier horizontal scalability than ever before.
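For a flavor of what that looks like in practice, here's a hedged sketch using Citus's create_distributed_table() function from Python; the connection string and table schema are made up for illustration.

```python
# Illustrative sketch of sharding a table on a Citus-enabled PostgreSQL
# cluster. The host and schema are hypothetical; create_distributed_table()
# is the Citus function that spreads a table's rows across worker nodes
# by a chosen distribution column.
import psycopg2

conn = psycopg2.connect("dbname=app host=coordinator.example.com user=app")
cur = conn.cursor()

# Ordinary PostgreSQL DDL...
cur.execute("""
    CREATE TABLE events (
        user_id bigint NOT NULL,
        event_type text,
        occurred_at timestamptz DEFAULT now()
    );
""")

# ...then one call turns it into a table sharded across the cluster by user_id.
cur.execute("SELECT create_distributed_table('events', 'user_id');")

# Queries keep using plain SQL; Citus routes them to the right shard(s).
cur.execute("SELECT count(*) FROM events WHERE user_id = %s;", (42,))
print(cur.fetchone())

conn.commit()
cur.close()
conn.close()
```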

ETL and the Question of Happiness

No one is happy with fragile ETL pipelines. But it doesn't need to be that way.

One might surmise that data "analysis" is, first and foremost, about data "access." It goes without saying that someone in the analyst's role must first obtain access to the data they wish to analyze. And with data spread all over the inside, and now outside, of the enterprise (think of both your on-premises data stores and all the cloud and SaaS vendors you're currently using), modern-day analysts face deeper challenges than ever before in obtaining access to the data they need.

And of course, techno-philosophical concepts like "democratizing access to data" do nothing at all to help one overcome any of the actual technical integration challenges required to practically enable such unfettered access to one's data.

How Data Has Evolved at The New York Times

Whether you love or hate their paywall, the Times successfully balances competing business frictions using a deep view of data.

Since our initial DataEngConf in 2015, The New York Times has been a key supporter of the conference. The very first ever DataEngConf talk was a keynote given by Chris Wiggins, the Times' Chief Data Scientist, who presented a broad yet fascinating perspective on "Data Science at The New York Times" (video here).

In the years since, we've had deeply technical talks from both data engineers and data scientists at the Times, and I'm excited that their involvement in DataEngConf this year is as large as it's ever been.

How Dremio Uses Apache Arrow to Increase Performance

(Image source: http://arrow.apache.org/)

What if all the best open-source data platforms could easily share ("ahem") data with each other?

As data has proliferated and open-source software (OSS) has continued to dominate both the stacks and the business models of the top tech companies in the world, the number of different types of data platforms and tools we've seen emerge has accelerated.

Having a hard time keeping up with the differences between Kudu, Parquet, Cassandra, HBase, Spark, Drill and Impala? You're not alone, and obviously this is one of the reasons we bring together top OSS contributors to these platforms to share at DataEngConf.

But there's one new innovation that attempts to bind all the above projects together by enabling them to share a common memory format. It's a new top-level Apache project called Arrow that aims to dramatically decrease the amount of wasted computation that occurs when serializing and deserializing memory objects. This serialization pattern is common when building analytics applications that move data between systems that each have their own internal memory representations.
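To illustrate the core idea, here's a small sketch using the pyarrow library: data is laid out once in Arrow's columnar format and handed between processes via Arrow IPC, rather than being re-serialized row by row. The file name and data are just for illustration.

```python
# Minimal sketch of the idea behind Arrow: one columnar, language-agnostic
# in-memory format that different systems can hand to each other without
# re-serializing the data.
import pandas as pd
import pyarrow as pa

# Build an Arrow table (e.g., the output of one engine in a pipeline)
df = pd.DataFrame({"user_id": [1, 2, 3], "score": [0.9, 0.4, 0.7]})
table = pa.Table.from_pandas(df)

# Write it out in Arrow's IPC (streaming) format...
with pa.OSFile("batch.arrow", "wb") as sink:
    with pa.ipc.new_stream(sink, table.schema) as writer:
        writer.write_table(table)

# ...and another process or tool can memory-map it back without
# row-by-row decoding.
with pa.memory_map("batch.arrow", "r") as source:
    shared = pa.ipc.open_stream(source).read_all()

print(shared.to_pandas())
```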

Introducing our Data Startups Track

Machine Learning, Neural Nets, "AI" and Computer Vision are changing the world. Discover the data startups that matter.

As an engineer turned founder, I've been passionate for years about helping other technical founders succeed. Founders face a unique set of challenges, and building support communities to help them overcome those obstacles moves innovation forward.

More broadly, I'm also a proponent of bringing engineers together - hence our efforts in the data community via meetups, our conference series, and the other, smaller events for engineers, data scientists and CTOs that we've organized through Hakka Labs over the past 5 years.

This is why I'm so excited to be introducing the intersection of these two efforts - supporting startups and supporting the data community - into our upcoming DataEngConf NYC.

To Shard or Not to Shard (PostgreSQL)

Wouldn't the world be a simpler place if we could easily scale our RDBMS? (gasp!)

What do you do when you find yourself needing to scale out your RDBMS to support greater data volumes than you originally anticipated? Traditionally, you would either need to scale vertically by putting your database on more powerful (costlier) machines, or shard your data across multiple workers.
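As a toy illustration of the second option (not tied to any particular product), application-level sharding usually boils down to hashing a shard key and routing each row or query to one of several worker databases:

```python
# Toy illustration of application-level hash sharding. The worker DSNs are
# hypothetical; the point is that a deterministic hash of the shard key
# decides which worker holds a given row.
import hashlib

SHARD_DSNS = [
    "postgresql://worker0.example.com/app",
    "postgresql://worker1.example.com/app",
    "postgresql://worker2.example.com/app",
]

def shard_for(key: str) -> str:
    """Deterministically map a shard key (e.g., a customer id) to a worker."""
    digest = hashlib.sha256(key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARD_DSNS)
    return SHARD_DSNS[index]

print(shard_for("customer-1234"))  # always routes to the same worker
```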
