Typically, recommender systems enhance the user experience: they help us find information, reduce search and navigation time, and increase our satisfaction.
However, I still receive recommendations to buy vacation packages, books, movies, music, and so on that I already reviewed more than a year ago. Even worse, I receive friend suggestions merely because those people are friends of someone I know… Why? Most of the time, I am simply not interested in those kinds of recommendations…
Therefore, I asked myself: How many people accept and follow these recommendations?
Unfortunately, I don’t have access to the data required to evaluate this, but according to Symeonidis and Zioupos (“Matrix and Tensor Factorization Techniques for Recommender Systems”, ISBN 978-3-319-41356-3):
- Amazon – 35% of product sales on Amazon.com come from recommendations
- Netflix – 66% of the movies rented on Netflix.com are recommended ones
- Google – recommendations in Google News generate 38% more click-throughs
In my opinion, the main reason for this behavior can be traced back to the nature of the algorithms used to create recommendations: clustering, ranking, scoring, pattern matching, and so on. They all rely on “the history”, usually understood as Big Data, OLAP, OLTP, etc. And it is clear that this “historical approach” demands many resources: storage, computing power, and time.
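To make the cost of this history-driven approach concrete, here is a minimal sketch of the technique the book cited above focuses on: matrix factorization over past ratings. The ratings matrix, the factor dimension, and the hyperparameters are all invented for illustration; a production system would factor millions of users and items, which is exactly where the storage, computing, and time costs come from.

```python
import numpy as np

# Historical ratings: rows = users, columns = items, 0 = "not rated".
# These numbers are invented purely for illustration.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                                          # latent-factor dimension (a free choice)
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))   # user factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors

lr, reg = 0.01, 0.02                           # learning rate, L2 penalty
for _ in range(2000):                          # SGD over the observed ratings only
    for u, i in zip(*R.nonzero()):
        err = R[u, i] - P[u] @ Q[i]
        pu = P[u].copy()
        P[u] += lr * (err * Q[i] - reg * pu)
        Q[i] += lr * (err * pu - reg * Q[i])

# "Recommend" each user's highest-predicted unrated item.
predictions = P @ Q.T
for user in range(n_users):
    unrated = np.where(R[user] == 0)[0]
    best = unrated[np.argmax(predictions[user, unrated])]
    print(f"user {user}: recommend item {best}")
```

Note that every prediction here is extrapolated from past ratings; the model has nothing to say about a user whose history is empty or irrelevant, which is precisely the problem discussed next.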
But what happens if we’re looking for recommendations about the outcome of a “random” process?
The situation becomes even harder if we don’t have enough information about the process itself. To put it simply: we need recommendations that cannot be derived from the previous history of the process.
Example:
In gambling, a recommendation with probability 0.7 is not a winning one. Unfortunately, the number of alternatives to evaluate grows exponentially as we demand a higher probability of success. Creating such recommendations with “brute force” algorithms is difficult and would demand the most powerful computers. Even worse, there are no heuristics to “prune” the decision tree.
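A toy calculation makes the combinatorics visible. The model below is my own simplification, not something from the text: treat a recommendation as a path through a binary decision tree in which each step is independently “right” with probability 0.7. The number of paths a brute-force search must examine doubles with every level, while the probability that the whole recommendation wins shrinks.

```python
# Illustrative only: binary decision tree, each step independently
# "right" with probability p. No pruning heuristic is assumed.
p = 0.7
for depth in range(1, 11):
    alternatives = 2 ** depth        # paths brute force must examine
    joint = p ** depth               # chance the whole recommendation wins
    print(f"depth={depth:2d}  alternatives={alternatives:5d}  P(win)={joint:.4f}")
```

By depth 10 there are already 1,024 alternatives and the joint probability has fallen below 0.03, which is why brute force without pruning scales so badly.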
This means that the ingredients of the recommendation process are all compromised:
- The inputs are not well suited, because the data may be either too sparse or too abundant.
- The representation of the recommendation knowledge has not been identified as it should be.
- The inference rules may be useless, because there is no previous experience of how the process behaves.
- The explanations of how the recommendations were created do not reveal the whole reasoning process.
In short, the domain is not well formalized.
So I ask myself: how can we predict the future behavior of a system whose previous history might not be relevant to the predictive process?
I believe that the recommendation process needs a new approach. This approach should redefine how we analyze the inputs, how we build new models of knowledge representation, and how we propose different inference mechanisms, perhaps ones not well suited to computation by Turing machines. It should also offer many alternative solutions, along with explanations of how the recommendations were reasoned, even when those explanations do not match common sense.
In conclusion, I think that research in Artificial Intelligence should include more than machine learning, deep learning, and neural networks, because the focus of the problem should be not only the data (the history) but also the mechanisms for extracting real knowledge from it: models, representations, inference rules, and explanations.
This path will lead us to “skilled intelligent systems”: solutions that can be transferred from one domain to others, and recommendations that are truly useful.