22 tips for better data science


These tips are provided by Dr. Granville, who brings 20 years of data-intensive experience from successful start-ups, small companies across various industries, and large firms including eBay, Visa, Microsoft, GE and Wells Fargo.


  • Leverage external data sources: tweets about your company or your competitors, or data from your vendors (for instance, customizable newsletter eBlast statistics available via vendor dashboards, or via submitting a ticket).

  • Nuclear physicists, mechanical engineers, and bioinformatics experts can make great data scientists.

  • State your problem correctly, and use sound metrics to measure yield (over baseline) provided by data science initiatives.

  • Use the right KPIs (key metrics) and the right data from the beginning, in any project. Changes due to bad foundations are very costly. This requires careful analysis of your data to create useful databases.

  • Fast delivery is better than extreme accuracy: all data sets are dirty anyway. Find the right compromise between perfection and fast turnaround.

  • With big data, strong signals (extremes) will usually be noise: the more metrics you scan, the more impressive the spurious outliers you will find. A short simulation after this list illustrates this.

  • Big data has less value than useful data.

  • Use big data from third-party vendors for competitive intelligence.

  • You can build cheap, great, scalable, robust tools pretty fast, without using old-fashioned statistical science. Think about model-free techniques.

  • Big data is easier and less costly than you think. Get the right tools before you start!

  • Correlation is not causation: two metrics can move together because of a shared driver rather than a direct link, so validate causal claims with controlled experiments before acting on them.

  • You don't have to store all your data permanently. Use smart compression techniques, and keep statistical summaries only for old data (an aggregation sketch after this list shows the idea). Don't forget to adjust your metrics when your data changes, to keep consistency for trending purposes.

  • A lot can be done without databases, especially for big data.

  • Always include EDA and DOE (exploratory data analysis / design of experiments) early on in any data science project. Always create a data dictionary (a small generator sketch appears after this list). And follow the traditional life cycle of any data science project.

  • Data can be used for many purposes:

    • quality assurance

    • to find actionable patterns (stock trading, fraud detection)

    • for resale to your business clients

    • to optimize decisions and processes (operations research)

    • for investigation and discovery (IRS, litigation, fraud detection, root cause analysis)

    • machine-to-machine communication (automated bidding systems, automated driving)

    • predictions (sales forecasts, growth and financial predictions, weather)

  • Don't dump Excel. Embrace light analytics.

  • Data + models + gut feelings + intuition is the perfect mix. Don't remove any of these ingredients in your decision process.

  • Leverage the power of compound metrics: KPIs derived from database fields that have far better predictive power than the original database metrics. For instance, your database might include a single keyword field that does not discriminate between user query and search category (sometimes because data comes from various sources and is blended together). Detect the issue, and create a new metric called keyword type, or data source. Another example is IP address category, a fundamental metric that should be created and added to all digital analytics projects (see the sketch after this list).

  • When do you need true real-time processing? When fraud detection is critical, or when processing sensitive transactional data (credit card fraud detection, 911 calls). Other than that, delayed analytics (with a latency of a few seconds to 24 hours) is good enough.

  • Make sure your sensitive data is well protected. Make sure your algorithms cannot be tampered with by criminal hackers or business hackers who spy on your business, steal everything they can, legally or illegally, and jeopardize your algorithms, which translates into severe revenue loss.

  • Blend multiple models together to detect many types of patterns, then average these models. A simple blending sketch appears after this list.

  • Ask the right questions before purchasing software.

  • Run Monte-Carlo simulations before choosing between two scenarios (a minimal simulation sketch follows this list).

  • Use multiple sources for the same data: your internal source, plus data from one or two vendors. Understand the discrepancies between these sources to get a better idea of what the real numbers should be. Big discrepancies sometimes occur when a metric definition is changed by one of the vendors or changed internally, or when the data itself has changed (some fields no longer tracked). A classic example is web traffic data: use internal logfiles, Google Analytics and another vendor (say Accenture) to track this data.
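
A short simulation for the tip on extremes above: among many pure-noise metrics, the strongest signal still looks impressive, and it grows as you scan more metrics. This is a minimal sketch using only the Python standard library; the sample sizes are arbitrary.

    import random
    random.seed(42)

    n_days = 100  # observations per metric
    target = [random.gauss(0, 1) for _ in range(n_days)]

    def corr(x, y):
        """Plain Pearson correlation, standard library only."""
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # The more noise metrics we scan, the more extreme the best spurious
    # correlation with the target becomes, without any real signal present.
    for n_metrics in (10, 1_000, 10_000):
        best = max(
            abs(corr(target, [random.gauss(0, 1) for _ in range(n_days)]))
            for _ in range(n_metrics)
        )
        print(f"{n_metrics:>6} noise metrics -> strongest |correlation| = {best:.2f}")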
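
The tip on not storing all data permanently can be sketched as a simple roll-up: collapse raw daily rows into monthly statistical summaries, then archive or drop the raw rows. The (date, revenue) schema and monthly granularity below are assumptions for illustration.

    from collections import defaultdict
    from datetime import date, timedelta
    import random

    random.seed(0)
    # Fake raw log: one revenue figure per day for two years (730 rows).
    raw = [(date(2022, 1, 1) + timedelta(days=i), random.uniform(50, 150))
           for i in range(730)]

    # Keep only count, sum, min and max per month; the mean is sum / count.
    summaries = defaultdict(lambda: {"n": 0, "sum": 0.0,
                                     "min": float("inf"), "max": float("-inf")})
    for day, revenue in raw:
        s = summaries[(day.year, day.month)]
        s["n"] += 1
        s["sum"] += revenue
        s["min"] = min(s["min"], revenue)
        s["max"] = max(s["max"], revenue)

    # 730 raw rows compress to 24 monthly rows; trends remain recoverable.
    for (year, month), s in sorted(summaries.items())[:3]:
        print(f"{year}-{month:02d}: n={s['n']}, mean={s['sum'] / s['n']:.1f}, "
              f"min={s['min']:.1f}, max={s['max']:.1f}")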
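
For the tip on data dictionaries: the dictionary can be bootstrapped automatically from a sample of records, then annotated by hand. This is a minimal sketch; the sample rows and field names are invented.

    rows = [
        {"user_id": 1, "country": "US", "spend": 12.5},
        {"user_id": 2, "country": "DE", "spend": None},
        {"user_id": 3, "country": "US", "spend": 7.0},
    ]

    # One line per field: inferred type, missing-value count, distinct values.
    print(f"{'field':<10} {'type':<8} {'missing':<8} distinct")
    for field in rows[0]:
        values = [r[field] for r in rows]
        present = [v for v in values if v is not None]
        py_type = type(present[0]).__name__ if present else "unknown"
        print(f"{field:<10} {py_type:<8} {len(values) - len(present):<8} "
              f"{len(set(present))}")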
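
For the compound-metrics tip: a derived field such as IP address category can be computed once and attached to every record. The three categories below are a deliberate simplification, not a complete classification.

    import ipaddress

    def ip_category(ip: str) -> str:
        """Derive a compound metric (a category) from a raw IP string."""
        addr = ipaddress.ip_address(ip)
        if addr.is_loopback:
            return "loopback"
        if addr.is_private:
            return "private/internal"
        return "public"

    for ip in ("10.0.0.5", "127.0.0.1", "93.184.216.34"):
        print(ip, "->", ip_category(ip))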
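
The model-blending tip, in its simplest form: average the outputs of two forecasters so that each compensates for the other's blind spots. Both "models" below are deliberately naive placeholders.

    history = [102, 98, 105, 110, 107, 111, 115]  # e.g. daily sales

    def model_last_value(series):
        """Model A: tomorrow equals today."""
        return series[-1]

    def model_moving_average(series, k=3):
        """Model B: mean of the last k observations."""
        return sum(series[-k:]) / k

    a = model_last_value(history)
    b = model_moving_average(history)
    blended = (a + b) / 2  # equal weights; weights could be tuned on holdout data
    print(f"model A: {a:.1f}, model B: {b:.1f}, blend: {blended:.1f}")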
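
And for the Monte-Carlo tip: simulate both scenarios many times and compare the full distributions, not just the means. The payoff distributions below are invented assumptions.

    import random
    random.seed(1)

    N = 100_000
    # Scenario A: steady, low-variance payoff. Scenario B: higher mean, riskier.
    runs_a = [random.gauss(100, 10) for _ in range(N)]
    runs_b = [random.gauss(110, 60) for _ in range(N)]

    mean_a, mean_b = sum(runs_a) / N, sum(runs_b) / N
    loss_b = sum(r < 0 for r in runs_b) / N  # chance that scenario B loses money
    print(f"mean A = {mean_a:.1f}, mean B = {mean_b:.1f}")
    print(f"P(scenario B < 0) = {loss_b:.1%}  (higher mean, but real downside)")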






