SG10 Solutions Big Data Q&A – Nigel Noyes

September 26, 2017

Nigel is one of those rare breeds of analytics professionals who successfully bridges business strategy, data engineering, and data science. He feels as much at home creating the market research and discounted cash flow analysis for a project as diving into the details of complex statistical models and coding world-class real-time distributed architectures. His cross-functional skillset is the result of experience in fields as diverse as aerospace, robotics, high-frequency trading, investment banking, and credit cards. He holds an MBA from Wharton, an MS in Cognitive Science from UCSD, a BS in Computer Science from Cornell, and a Certification in Quantitative Finance.

Many times a simple solution will work quite well without the need to pull out heavy weaponry like Hadoop or Deep Learning.

1. Your data science career has spanned capital markets, defense, payments, astrophysics, and the start-up world. We are seeing more candidates than ever before transferring between industries. What advice can you share for architecting big data projects in such different environments?

I’ve been doing what is now called ‘Data Science’ for fifteen years, since a time when the term ‘AI’ was mostly confined to the realm of sci-fi. I’ve been lucky enough to see it applied to use cases as diverse as missile defense and the pricing of Super Bowl tickets. I’m pretty much agnostic to the industry – for me the interest lies in solving new problems and applying the analytics I have spent my career and studies honing. This has given me the advantage of seeing the similarities across industries as the domain of data science has evolved.

When I started my career, today’s powerful tools (Spark, TensorFlow, H2O, etc.) obviously didn’t yet exist. In some ways this was a benefit, as I learned to approach a problem from the ground up. This requires spending the time to gain a deep understanding of the domain of your problem, and then, from first principles, developing a solution that exploits that deep knowledge. Many times a simple solution will work quite well without the need to pull out heavy weaponry like Hadoop or Deep Learning. With all the tools currently at the disposal of data scientists, the temptation is to apply cookie-cutter tools and models to any problem encountered, without much understanding of the tools and models being used. Often this brute-force approach works, and it’s great that these insights have been democratized. However, for a company (and its data science team) to develop a competitive advantage amid this democratization of analytics, it needs to do more than just implement the latest tool. It needs to start from the ground up and gain a deep understanding of the problem domain first, regardless of industry.

2. You’ve not only been on the frontlines of architecting and implementing big data solutions, but have also hired and managed teams. What do you look for when recruiting entry-level talent? And what advice would you give to students hoping to enter the field?

When interviewing anyone, experienced or entry-level, the main attribute I try to uncover is whether the candidate has had the intellectual curiosity, over their academic and professional life, to understand the why behind the techniques they have used, rather than just going through the motions with libraries they don’t understand. I’ve interviewed candidates with advanced degrees in statistics who didn’t know how to code linear regression from scratch without using a library. In my opinion, this is a failure both of universities, for not focusing on a strong understanding of the fundamentals, and of companies, for caring only whether candidates can solve a copy-and-paste HackerRank test.
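To illustrate the kind of fundamentals being described, here is a minimal sketch of ordinary least squares coded directly from the normal equations rather than from a library’s fit routine (NumPy is used only for the linear algebra; the function name and example data are illustrative, not from the interview):

```python
import numpy as np

def linear_regression(X, y):
    """Ordinary least squares from first principles.

    Prepends an intercept column, then solves the normal
    equations (X^T X) beta = X^T y. np.linalg.solve is used
    instead of an explicit matrix inverse for numerical stability.
    """
    X = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    return beta  # [intercept, slope_1, ..., slope_p]

# On noise-free data from y = 2x + 1, OLS recovers
# intercept ~ 1.0 and slope ~ 2.0.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
intercept, slope = linear_regression(x, y)
```

Deriving this yourself makes clear what a library call is actually doing, and why, for example, a nearly collinear design matrix makes the solve ill-conditioned.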

In practice, I’ve found that when creating the optimal solution for a specific problem, textbook approaches or even existing libraries rarely work as-is. A practitioner should not approach a problem by first listing all the techniques available in libraries like scikit-learn or MLlib, but should instead derive the solution from a fundamental understanding of the domain and the first principles of statistics and mathematics. Then, if a library happens to exist that implements the selected approach, he or she can use it to speed up implementation. Another not-uncommon problem is that even if a relevant library exists, it may not be installed yet. Given IT compliance requirements in larger companies, getting it installed isn’t a trivial task and may take months. In these cases, I have had to write the specific functionality I needed from scratch so as not to delay the project. So my advice to students would be to focus on really understanding the fundamentals.

3. How does working in a smaller business compare to working at big names such as MasterCard and Lockheed Martin?

I’ve enjoyed my time at both start-ups and incumbent firms; each has its advantages and challenges. It’s critical to do thorough due diligence before signing on to any firm, regardless of size.

Large companies provide financial security, ample infrastructure, and resources (budgets and people). They also often have an abundance of interesting data sets, given their lengthy history. The challenge is that it can be tougher to innovate when there is little turnover – the same ideas get recycled, there is not the cross-pollination of knowledge that comes with fresh blood at senior levels, and there is less urgency to change.

On the other hand, start-ups typically bring in senior talent from a variety of companies to get a diverse blend of knowledge and experience. This heterogeneous mix of capabilities is what sparks real innovation, and it is why start-ups are frequently able to compete quite successfully with large, deep-pocketed incumbents. They also have a blank slate and are unencumbered by legacy systems and bureaucracy. However, start-ups (apart from the rare unicorn) have short financial runways, which are often not compatible with the investment a big data build requires.

4. Today’s data world still seems to wrestle with varying definitions of data engineers and data scientists. Your most recent role saw you leading both data engineering and data science. Is there a need for such a strong distinction between the two, and what are the advantages of a close relationship between scientists and engineers?

You’re right – it is difficult to draw a precise line between data scientist and data engineer, and many use the titles interchangeably. The definitions also vary between industries and countries. In Europe, for example, the term ‘data scientist’ typically refers to someone who can not only build a complex statistical model but can also execute on the last mile: building an enterprise production-quality platform that utilizes the model. In the US, model building and building the production system that uses the model are typically strictly separated between data science (the former) and data engineering (the latter). I tend to prefer the European approach, which is more holistic. Data engineers would find it difficult to build the ETL and DQ capabilities without a deep understanding of the required SLAs of the computational models that will run on the platform. Data scientists would write vastly sub-optimal solutions if they didn’t have a thorough understanding of the technical intricacies of the platform they are using and the specific assumptions and SLAs implemented in the ETL and DQ workflows.

In general, the more diverse a background a data scientist brings to the table, the more likely an innovative, out-of-the-box solution will be found instead of recycling common knowledge in that specific problem domain. I’ve been fortunate to have worked in positions and industries where vastly different technological and mathematical constructs are used to take advantage of data – from microlensing techniques in astrophysics, Kalman filters in aerospace, optimal control and estimation techniques in robotics, psychology and neuroscience experimental methodologies, and stochastic derivative pricing models in capital markets, to distributed computing techniques for truly massive data sets. In a perfect world, when building a data science team, I would prefer members to also bring a wide diversity of backgrounds to the table. Ideally, team members should be comfortable rotating between data engineering and data science tasks so there is cross-pollination of knowledge. Unfortunately, because of the strict segregation in the US market, finding candidates with deep knowledge of, or the desire to learn, more than one discipline is challenging.

5. When you join a new business, what are the first things you look to do to use data to increase its competitive advantage?

For many businesses, building a data science team only makes sense once the company has pieced together a working business model that earns repeatable revenue. At this stage, I view data scientists as both internal- and external-facing consultants whose job is to optimize and scale the business. Typically it’s easier to add value immediately by addressing internal inefficiencies. My advice would be to start by examining the customer acquisition process. The approaches to making it more efficient differ vastly depending on whether the company sells small-ticket items, as in e-commerce, or big-ticket items in B2B models. For example, at B2B companies it is important to regularly spend several days with the sales team to experience the end-to-end sales cycle and all the associated pain points. My next areas of focus would include the product delivery team, billing, and the product development team. Notice that this sequence moves from easy quick wins to more difficult, longer-term goals like changing the company’s product. If the company is operating at peak efficiency, it is easier to ward off competition from firms with bloated processes and worse margins.

#sg10solutions #datascience #leadership #bigdata #sg10

