This is the second part of a two-part article. Read the first part at The art of building AI-powered applications (Part 1)
A friend of mine, who is a gourmet chef at an award-winning restaurant, once gave me this Zen-like response when I asked the naïve question, “What’s the difference between what someone like you does, and a mediocre cook like me?” She said, “Don’t put turmeric in everything, but use turmeric. Don’t sauté everything in olive oil, but use olive oil.”
OK, I get it. Great chefs use all the same ingredients that the rest of us do, but they know when to use them, and how, and in what amounts. It’s the art of getting the right combination, and of not being overly attached to one’s favorite ingredient, that makes cooking into an art.
Well, that’s true for AI development. It’s probably true for all software development, but that’s another blog post perhaps best written by someone else.
Not everyone who does good work in AI is practicing this sort of art. That’s because AI is so over-specialized and over-compartmentalized that most conferences dedicated to it are about just one sub-discipline of AI, e.g., machine learning, natural language processing or computer vision.
Within each of those areas are factions and camps with quasi-religious allegiance to one or another AI methodology: the genetic algorithms expert who would not dream of spending time on neural networks, or the logic programmer who eschews latent semantic analysis. And so on.
But the application builders, the practitioners of AI, are usually “jacks of all trades, masters of none.” Like me. I may not know how to make a tiramisu like my gourmet chef friend, but I know how to make an AI tool that can get a job done.
Here’s a high-level outline of how several AI approaches are used within FMP, when a Web page is put through our text analytics system from end-to-end:
- Use an FSM stack to extract relevant text from a page and throw out ads, navigation, copyright notices, etc.
- Use an SVM to classify the page
- Use statistical methods to determine the best tags
- Use knowledge representation and ontological engineering to understand what entities are talked about in the text
- Use symbolic AI (a rule-based system) blended with evolutionary computation to determine some candidate topics
- Use fuzzy logic to determine more related topics
- Use a multilayer neural network to determine richer features (e.g., attitude, tone, ideological slant)
- Finally, use GOFAI to slate the document, based on all the above, in the appropriate group, category, federation, ad campaign, content series, etc.
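To make the first step concrete, here is a minimal sketch of FSM-based text extraction. Our real system uses a stack of FSMs over parsed HTML; this toy version is a single two-state machine over plain-text lines, and the boilerplate markers and the five-word prose heuristic are invented for illustration.

```python
# Toy finite-state machine for separating article prose from boilerplate.
# States: CONTENT (keep lines) and BOILERPLATE (discard lines).
# The markers and word-count threshold below are illustrative assumptions.

BOILERPLATE_MARKERS = ("copyright", "all rights reserved", "home |", "advertisement")

def extract_text(lines):
    """Keep only the lines seen while the FSM is in the CONTENT state."""
    state = "CONTENT"
    kept = []
    for line in lines:
        lowered = line.lower()
        if any(marker in lowered for marker in BOILERPLATE_MARKERS):
            state = "BOILERPLATE"       # a marker flips us into discard mode
        elif len(line.split()) > 5:
            state = "CONTENT"           # long lines look like prose again
        if state == "CONTENT":
            kept.append(line)
    return kept
```

A real extractor would layer several such machines (one per page region) and transition on HTML structure rather than raw strings, but the keep/discard state logic is the same idea.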
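The SVM classification step can also be sketched in a few lines. Rather than a production library, this is a tiny Pegasos-style linear SVM trained by sub-gradient descent on the hinge loss; the vocabulary, documents, and the finance/sports labels are all invented for illustration.

```python
# Hedged sketch of SVM page classification: a linear SVM trained with
# Pegasos-style sub-gradient updates on toy bag-of-words features.
# Vocabulary, training documents, and class names are assumptions.

VOCAB = ["market", "earnings", "stocks", "match", "referee", "goal"]

def featurize(doc):
    """Word-presence vector over the toy vocabulary."""
    words = set(doc.lower().split())
    return [1.0 if w in words else 0.0 for w in VOCAB]

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Return (w, b) minimizing hinge loss + L2 penalty. Labels are +1/-1."""
    w, b, t = [0.0] * len(X[0]), 0.0, 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)                       # decreasing step size
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            w = [(1.0 - eta * lam) * wj for wj in w]    # L2 shrinkage
            if margin < 1.0:                            # hinge violation: step
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
                b += eta * yi
    return w, b

def predict(w, b, doc):
    score = sum(wj * xj for wj, xj in zip(w, featurize(doc))) + b
    return "finance" if score >= 0 else "sports"

# Toy training set: +1 = finance page, -1 = sports page
docs = ["stocks market earnings", "market stocks",
        "goal match referee", "match referee"]
labels = [1, 1, -1, -1]
w, b = train_linear_svm([featurize(d) for d in docs], labels)
```

In practice the feature vectors come from the cleaned text produced by the extraction step, and the classifier is trained on far more than four documents.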
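The fuzzy logic step deserves a sketch too, since "more related topics" is exactly the kind of graded judgment fuzzy sets handle well. This toy version composes the page's fuzzy topic memberships with a fuzzy relatedness relation (sup-min composition); the topics, scores, and threshold are invented for illustration.

```python
# Hedged sketch of fuzzy-logic topic expansion. Membership degrees are in
# [0, 1]; we use the standard min t-norm and max t-conorm. All topic names
# and numeric degrees below are illustrative assumptions.

def fuzzy_and(a, b):
    return min(a, b)    # standard t-norm

def fuzzy_or(a, b):
    return max(a, b)    # standard t-conorm

def related_topics(page_topics, related, threshold=0.5):
    """Sup-min composition of page-topic membership with a fuzzy
    relatedness relation; keep topics scoring above the threshold."""
    scores = {}
    for (src, dst), rel in related.items():
        if src in page_topics:
            score = fuzzy_and(page_topics[src], rel)
            scores[dst] = fuzzy_or(scores.get(dst, 0.0), score)
    return {t: s for t, s in scores.items() if s >= threshold}

# Degree to which the page is "about" each candidate topic (from step 5)
page_topics = {"economy": 0.9, "politics": 0.6}

# Fuzzy relation: how related pairs of topics are (assumed values)
related = {("economy", "inflation"): 0.8,
           ("politics", "elections"): 0.7,
           ("economy", "elections"): 0.3}
```

The payoff of the fuzzy formulation is that weak evidence from two sources combines gracefully instead of being thrown away by a crisp yes/no cutoff.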
We might be “masters of none” of the aforementioned AI techniques. However, we surely are “jacks” of them all. We’re OK with that. It works for us.
What contrasting computer techniques have you used on a project? Have you ever crossed a divide between two ideological camps to combine both approaches in the same system?