Artificial Intelligence

From Hype to Reality: “Doing AI” instead of “Talking AI”

Everyone’s talking about Artificial Intelligence these days. It’s a technology that promises a revolution.

But how many people are actually “doing” AI? How many are digging into AI, not just scratching the surface with buzzwords but implementing AI and learning from their successes and failures?

When I say “implementing AI,” I need to clarify that I’m not talking about the superficial level of use many are engaging in today; I’m talking about deep integration. For example, using ChatGPT to write emails to clients or to help draft responses to customer service requests is superficial (but still important). Don’t get me wrong: using ChatGPT for these tasks is great for starter projects, but it does not make you or your company “AI experts.”

The kind of AI work I’m talking about is developing a predictive maintenance system for industrial equipment that integrates sensor data, uses machine learning to predict failures, and triggers automated maintenance requests. This is “doing” AI, not just “talking” about it.

AI can “look at” complex datasets, uncover hidden patterns, and offer insights that could take a human operator years to reach. By digging deeper into the core of AI technologies, we can harness this potential, moving beyond the allure of flashy demos, ChatGPT, and the “experts” on various social platforms talking about AI.

For Artificial Intelligence, Doing > Talking

Implementing AI is about more than just deploying a chatbot or even using a pre-trained model. It’s about creating something fundamentally new from the data and systems you have today. It involves a full-stack approach, integrating various AI components to build a cohesive and functional system.

Step 1: Understanding the Data

At the heart of any AI system lies data, the lifeblood that fuels algorithms. Understanding your data means more than just knowing what it is—it means understanding its structure, nuances, and potential. Conducting a thorough data audit involves:

  • Identifying Data Sources: Cataloging all available data sources.
  • Assessing Data Quality: Checking for missing values, inconsistencies, and errors.
  • Understanding Data Limitations: Recognizing biases and gaps.
  • Ensuring Data Privacy: Complying with regulations like GDPR.

Data comes in many forms: structured data like databases, unstructured data like text and images, and semi-structured data like JSON files. Each type requires different handling and processing techniques and mindsets (e.g., structured data is generally easy to analyze, while unstructured data is not).
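The audit steps above can be sketched in a few lines of pandas. The tiny DataFrame below is a synthetic stand-in for real sensor data (the column names are illustrative, not from any real system), but the same calls apply to a table of any size:

```python
import numpy as np
import pandas as pd

# Small illustrative dataset standing in for real sensor readings
df = pd.DataFrame({
    "sensor_id": [1, 1, 2, 2, 3],
    "temperature": [71.2, np.nan, 69.8, 69.8, 120.0],  # one gap, one suspicious spike
    "timestamp": pd.to_datetime(
        ["2024-01-01", "2024-01-02", "2024-01-01", "2024-01-01", "2024-01-02"]
    ),
})

missing = df.isna().sum()           # missing values per column
duplicates = df.duplicated().sum()  # exact duplicate rows
summary = df["temperature"].describe()  # min/max ranges help spot outliers

print(missing["temperature"], duplicates, summary["max"])
```

A real audit would go further (bias checks, lineage, privacy review), but even these basics surface the gaps and errors that quietly sink models later.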

Step 2: Selecting the Right Tools

Once you have a solid grasp of your data, the next step is to select the right tools for your AI implementation. This is where most people and companies stumble.

The AI landscape is large and evolving, with countless tools, frameworks, and platforms to choose from. The key here is to match the tools to your specific needs and capabilities and to do your due diligence on the tools/companies you plan to use.

For instance, the OpenAI API provides powerful capabilities for natural language processing, enabling you to build applications that understand and generate human language. But it’s not the only tool in the arsenal; you could do the same with Llama, Claude, Bard, and many other models. If you want to move beyond pre-trained models, TensorFlow and PyTorch are popular frameworks for building and training neural networks, and Scikit-learn offers a rich set of tools for traditional machine learning tasks.

When looking at vendors and tools, be sure to consider the following:

  • Vendor Lock-In Risks: Ensure tools are interoperable with other platforms.
  • Community and Documentation: Choose tools with solid community support and thorough documentation.
  • Future-Proofing: Evaluate the long-term viability and support for the tools.

The nature of your problem, the type of data you have, and your team’s expertise should guide the choice of tools.
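One concrete way to reduce vendor lock-in risk is to put a thin interface between your application and any model provider. The sketch below is a minimal illustration (the class and method names are invented for this example, and the `EchoModel` backend is a stand-in for a real OpenAI, Llama, or Claude client):

```python
from abc import ABC, abstractmethod

class TextModel(ABC):
    """Thin interface so application code never depends on one vendor."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoModel(TextModel):
    """Stand-in backend for testing; a real one would call a vendor API."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def summarize(model: TextModel, text: str) -> str:
    # Application code depends only on the interface, so swapping
    # vendors means adding a new TextModel subclass, nothing more.
    return model.complete(f"Summarize: {text}")

print(summarize(EchoModel(), "long report"))
```

The point is the seam, not the echo: when a better or cheaper model appears, you change one class instead of every call site.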

Step 3: Building the Model

With your data prepared and your tools selected, it’s time to build your AI model. Building a model involves:

  • Selecting the Right Algorithm: Choosing algorithms that suit your data and problem.
  • Training and Validation: Using techniques like cross-validation to avoid overfitting.
  • Hyperparameter Tuning: Optimizing parameters for better performance.

However, building a model is not a one-time task. It’s an iterative process that involves experimentation, validation, and refinement. You start with a simple model, evaluate its performance, identify areas for improvement, and make the necessary adjustments. This cycle repeats until you achieve a model that meets your objectives.
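That iterative loop (baseline, validate, tune, re-validate) looks like the following in Scikit-learn. The data here is synthetic (`make_classification` standing in for real labeled failure data), and the parameter grid is deliberately tiny for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic stand-in for real failure/no-failure sensor data
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

# Start simple: a baseline model scored with 5-fold cross-validation
baseline = RandomForestClassifier(random_state=42)
scores = cross_val_score(baseline, X, y, cv=5)
print("baseline accuracy:", scores.mean())

# Iterate: tune hyperparameters, re-validating each candidate the same way
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 5]},
    cv=5,
)
grid.fit(X, y)
print("best params:", grid.best_params_)
print("tuned accuracy:", grid.best_score_)
```

Cross-validation guards against overfitting to one lucky train/test split, and the grid search is just the tuning loop from the list above made explicit.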

Step 4: Integrating the Model

A model alone is of little use; it must be integrated into existing systems and workflows to realize its full potential. Integration involves developing APIs, creating user interfaces, and ensuring seamless interaction between the model and other system components.

Considerations for integration include:

  • Legacy System Compatibility: Ensuring new AI models can work with existing systems.
  • Latency and Efficiency: Managing API calls and processing times.
  • Scalability: Using containerization (e.g., Docker) and orchestration (e.g., Kubernetes) for scalable deployment.

Integration also requires careful consideration of deployment and scalability. Your systems should be robust, reliable, and capable of handling real-world workloads, which may mean setting up cloud infrastructure, implementing load balancing, and monitoring performance to ensure the system stays secure and compliant with data protection regulations.
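To make the API idea concrete, here is a minimal sketch of wrapping a trained model behind an HTTP endpoint with Flask. Everything here is illustrative: the `/predict` route and JSON shape are invented for this example, and the model is a small stand-in trained on synthetic data (in practice you would load your trained model from disk):

```python
from flask import Flask, jsonify, request
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in model on synthetic data; in production, load a persisted model
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [0.1, -1.2, 0.3, 2.0]}
    features = request.get_json()["features"]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": int(prediction)})

if __name__ == "__main__":
    app.run(port=5000)  # behind a production WSGI server and load balancer
```

Containerizing an app like this with Docker and deploying it under Kubernetes is what turns a one-off script into the scalable service the checklist above describes.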

Step 5: Learning from Successes and Failures

Here’s a cold, hard fact: you will fail when starting out with AI and building your first models. But these failures bring valuable lessons, just as each success brings new insights and opportunities. Embracing failures as learning opportunities is crucial for continuous improvement and innovation in AI projects.

Successful AI projects often start small, with pilot projects and initiatives that test the waters and demonstrate AI’s potential within an organization. For example, a retail company might begin with a chatbot to handle customer queries, which could evolve into a personalized recommendation system based on customer preferences and purchase history. These pilots provide a safe environment to experiment, fail, and learn without significant risk. Once a pilot proves successful, you can scale up, applying the lessons learned to larger, more ambitious projects.

Conclusion

Take the first step today: audit your data, explore the tools, start a pilot project, and iterate. By building robust models, integrating them effectively, and learning from the journey, organizations can do more than ‘talk’ about AI—they can benefit from it.

AI isn’t just a buzzword or a futuristic concept; it’s a technology that has the potential to transform the world, drive innovation, enhance decision-making, and create significant business value. However, organizations must move beyond superficial engagement and commit to deep, strategic integration to harness its full potential.


About Eric D. Brown, D.Sc.

Eric D. Brown, D.Sc. is a data scientist, technology consultant and entrepreneur with an interest in using data and technology to solve problems. When not building cool things, Eric can be found outside with his camera(s) taking photographs of landscapes, nature and wildlife.