Christophe Viau: From Arts & Biology to Big Data Visualization


Christophe Viau taking a musical break from D3.js.

Earlier this year we were excited to welcome data visualization engineer Christophe Viau to Planet OS. He brings extensive experience building visualization platforms and chart libraries for Datameer, Plotly, and Boundary while remaining an active leader and lecturer in the D3.js community. His book “Developing a D3.js Edge,” released in 2013, demonstrates how to build a chart library with D3.js. We asked Christophe a few questions about how he came to make a career in data visualization and what trends he sees in the industry more generally.

Why did you decide to join Planet OS and what projects are you currently working on?

I’m always looking for bigger data visualization challenges, and Planet OS seems to have them all: large datasets, data streaming, and a huge variety of data types and use cases, yet it stays focused on its core mission of giving access to environmental sensor data. I like to work on new projects from scratch, using quick prototypes to deliver value early, and Planet OS is agile enough to avoid getting stuck in technical debt. My work here is to expose the work done by our back-end engineering team and give shape to their vision. I am currently working on enabling Planet OS to scale data visualizations across its platform, which includes building a new graph library and data discovery interfaces.

How did you find your way to data visualization after having studied biology and art?

The goal of my Master’s degree in Arts and Biology was to practice scientific illustration. I started programming by adding multimedia to my productions for museums and universities. I then discovered data visualization, which is not far from what I was doing back then: giving information a meaningful shape. Since finishing my PhD in software engineering, I have been working full-time as a data visualization engineer.

You’ve been very active as a D3.js community organizer and lecturer – what made you interested in becoming an evangelist and facilitator for D3.js and data visualization more generally?

D3.js is one of the best tools for data visualization design because it is built from the ground up to help you move through the datavis pipeline, from data to graphics to interactivity. But the most appealing aspect to me is the community of designers and coders of all levels constantly sharing through many channels: meetups, books, code snippets, tutorials, wikis, social media, products, and more. There are so many smart people in this community; I’m just trying to expose their work through a gallery (close to 1,500 examples), a Twitter news feed (more than 5,000 followers), and D3.js meetups (the Bay Area chapter has more than 3,000 members). That’s not my own work; I’m just amazed by the level of creativity of this community, and I obsessively share everything I find.
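
To illustrate the pipeline Christophe describes, here is a minimal sketch that binds a small dataset to SVG circles and adds a hover interaction. The dataset, sizing, and colors are invented for the example; it assumes D3.js version 6 or later is loaded in the page as the global d3.

    // A minimal sketch of the D3.js pipeline: data -> graphics -> interactivity.
    // Assumes D3.js v6+ is available as the global `d3`; the data is made up.
    const data = [4, 8, 15, 16, 23, 42];

    const svg = d3.select("body")
      .append("svg")
      .attr("width", 420)
      .attr("height", 80);

    svg.selectAll("circle")
      .data(data)                 // data step: bind one datum per circle
      .join("circle")
      .attr("cx", (d, i) => 30 + i * 60)       // graphics step: map data
      .attr("cy", 40)                           // to visual attributes
      .attr("r", (d) => Math.sqrt(d) * 3)
      .attr("fill", "steelblue")
      .on("mouseover", function () {            // interactivity step:
        d3.select(this).attr("fill", "orange"); // respond to the user
      })
      .on("mouseout", function () {
        d3.select(this).attr("fill", "steelblue");
      });

The same three steps (binding data, mapping it to visual attributes, handling events) scale from this toy example up to full chart libraries like the one described in “Developing a D3.js Edge.”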

In your opinion what are the most interesting things you’ve developed in your career?

I like building products. That can sound cliché for a developer, but it’s not obvious when you know my background is in arts, academia, teaching, and hacking. But I would say I like the variety of things I’m working on more than any single contribution. If you take “career” in the wider sense, I’m glad to have had experiences teaching video game design classes, dissecting human cadavers in a medicine class, developing a biological models company, traveling around the world on generative arts residencies, and maintaining a hacking blog with my son. All these incoherent things add up to form a “career.” But at Planet OS I’m really proud to work on data that could be used to improve our relationship with the environment.

What recent trends have you seen in data visualization advancements? 

The overall quality of the data visualization experience is improving. It’s not just a gadget; more and more people understand how it empowers the user. I see nice solutions for visualizing high-velocity data, handling focus and context for large datasets, showing uncertainty and data quality, adapting visualization patterns to very specific use cases, and a whole lot more. I see harder challenges in the data pipeline: acquiring, cleaning, transforming, and delivering data. All of this is hard. The tools are evolving, but you still need very smart people to build products around them. And I want to be there to expose their solutions to the ultimate end user.
