Why your friends have more to be thankful for

As Thanksgiving approaches, it may feel like everyone else has so much more to be thankful for. Just check your Facebook, Twitter or Instagram: your friends seem to dine at finer restaurants, take more exotic vacations, and attend more exciting parties. Research suggests this is not simply a matter of perception, but a mathematical fact (for most of us anyway). This unsettling observation is rooted in the friendship paradox, which states that “on average, your friends are more popular than you are”. This means that if you ask a random person who her friends are, the average number of friends her friends have is likely to be larger than the number of friends she has. The friendship paradox holds online too: 98% of Twitter users follow others who have larger followings, on average. Unless you are Lady Gaga, most of your followers are also more popular than you are.

The friendship paradox is not merely a mathematical curiosity; it has useful applications in disease monitoring and trend prediction. Researchers have used it to spot flu outbreaks on a college campus in their early stages and to devise efficient strategies for predicting trending topics on Twitter weeks before they became popular. Similarly, if you arrive in an African village with only five Ebola vaccines, the best strategy is not to vaccinate five random people, but to ask those people who their friends are and vaccinate five of these friends. Due to the friendship paradox, the friends are likely to be more central, both in the Twitterverse and in the village, and thus more likely to get sick early in an outbreak or to tweet about topics that later become popular.
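To make the “vaccinate a friend” heuristic concrete, here is a minimal sketch (my own illustration, not code from the studies mentioned above) on a synthetic scale-free network; the networkx library and the Barabási–Albert graph are assumptions standing in for a real village’s social network.

```python
# Hypothetical illustration: compare vaccinating random people vs. random
# friends of random people on a toy scale-free network (networkx assumed).
import random
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)  # stand-in "village"

def random_strategy(G, k=5):
    """Vaccinate k people chosen uniformly at random."""
    return random.sample(list(G.nodes), k)

def friend_strategy(G, k=5):
    """Ask k random people to each name a friend; vaccinate those friends."""
    return [random.choice(list(G.neighbors(p)))
            for p in random.sample(list(G.nodes), k)]

def avg_degree(nodes):
    return sum(G.degree(v) for v in nodes) / len(nodes)

print("random picks, average #friends:", avg_degree(random_strategy(G)))
print("friend picks, average #friends:", avg_degree(friend_strategy(G)))
# The named friends tend to have more connections, i.e., they are more central.
```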

Although it sounds strange, the friendship paradox has a simple mathematical explanation. People are diverse: most of us have a few dozen friends, and then there is Lady Gaga. This rare outlier skews the average friend count of many people, putting them in the paradox regime. Mathematicians advise using the median when dealing with distributions that include such extremely large values. The median is the half-way point: half of the numbers in the distribution lie below the median and half above. Unlike the average, it is not easily skewed by a few extremely large numbers. The median is used, for example, to report the income of US households, where extreme fortunes of the top 1% of the population skew the average household income.

Remarkably, my group at the University of Southern California has shown that the friendship paradox also holds for the median. In other words, it is not just that your friends have more friends than you do on average; most of them do. We showed that over 95% of Twitter users have fewer followers than most of the people they follow, or most of the people who follow them. Stranger still, the paradox holds not only for popularity, but for other personal attributes as well. As an example, consider how frequently a user posts status updates on Twitter. There is a paradox for that: most of the people you follow post more status updates than you do. Similarly, most of the people you follow receive more novel and diverse information than you do. Also, most of the people you follow receive information that ends up spreading much farther than what you see in your stream.
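As a rough check of the median version of the paradox (my own sketch on a synthetic network, not the authors’ analysis of Twitter data), one can count how many nodes are less popular than both the mean and the median of their friends:

```python
# Count nodes whose friends are more popular both on average and in the median.
import statistics
import networkx as nx

G = nx.barabasi_albert_graph(n=5000, m=3, seed=1)   # synthetic scale-free network

mean_paradox = median_paradox = 0
for v in G.nodes:
    friend_degrees = [G.degree(u) for u in G.neighbors(v)]
    if statistics.mean(friend_degrees) > G.degree(v):
        mean_paradox += 1
    if statistics.median(friend_degrees) > G.degree(v):
        median_paradox += 1

n = G.number_of_nodes()
print(f"mean paradox holds for   {mean_paradox / n:.0%} of nodes")
print(f"median paradox holds for {median_paradox / n:.0%} of nodes")
```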

The friendship paradox helps explain why you are not as cool or interesting as your friends. Extraordinary people are likely to be better socially connected and have more friends than more ordinary people like you and me. Extraordinary people are also likely to be more active and post more frequently about their extraordinary experiences. This is all it takes to skew our perceptions of the quality of our lives relative to those of our friends. So, if you feel that your friends have more to be grateful for, at least in this you are not alone.


Blogger Profile:
Kristina Lerman is a Project Leader at the Information Sciences Institute and holds a joint appointment as a Research Associate Professor in the USC Viterbi School of Engineering’s Computer Science Department. Her research focuses on applying network- and machine learning-based methods to problems in social computing.

Big Data Is Opening the Door To Revolutions: Databases Should Be Next

Most professional fields, whether in business or academia, rely on data and have done so for centuries. In the digital age and with the emergence of Big Data, this dependency is growing dramatically – perhaps out of proportion to its current value, given the concepts, tools, and techniques presently available. For example, how do you tell whether the results of a data-intensive analysis are correct and reliable, rather than weak or even spurious? Most data-intensive disciplines have statistical measures that attempt to quantify meaning or truth. Efficacy quantifies the strength of a relationship within a system, such as biology or business. For example, when researchers investigate a new drug, they compare its effectiveness to a placebo, using statistics to determine whether the drug worked. This approach, where data selection and processing are predicated on complex models rather than simple comparison, is a far cry from select-project-join queries.


Efficacy is the capacity to produce a desired result or effect. In medicine, it is the ability of an intervention or drug to produce an outcome. P-values have been a conventional empirical metric of efficacy for 100 years.
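As a minimal illustration of a p-value as an efficacy measure (my own sketch with made-up numbers, not taken from the post), consider comparing a hypothetical drug arm against a placebo arm with a two-sample t-test:

```python
# Toy drug-vs-placebo comparison; scipy's two-sample t-test is assumed available.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
placebo = rng.normal(loc=0.0, scale=1.0, size=50)   # outcomes under placebo
drug    = rng.normal(loc=0.5, scale=1.0, size=50)   # outcomes under the drug

t_stat, p_value = stats.ttest_ind(drug, placebo)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value is read as evidence that the observed difference is unlikely
# under "no effect", i.e., that the drug's efficacy is real.
```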


Moreover, the underlying data in these fields is complex, uncertain, and multimodal. Despite a large body of research on data management for science applications, there has been little adoption of relational techniques in the science disciplines. In this post, we examine two challenges: first, modeling data around domain-specific efficacy rather than set theory; second, support for ensembles of data models to enable many perspectives on a single data set.

The big picture is compelling. Since the late 1980s, one of us has estimated in papers and keynotes that databases contain less than 10% of the world’s data, a share that is dropping fast as non-database data growth explodes. Only a correspondingly small fraction of the world’s applications – data and computation – is amenable to traditional databases. Modelling the other 90% opens the door for the database community to the requirements of the rest of the world’s data and to a new, vastly larger generation of database research and technology. This calls for a shift in our community commensurate with the profound changes introduced by Big Data.

Efficacy first, then efficiency

Since meaning and truth are relative to a system, efficacy measures are measures of accuracy, correctness, precision, and significance with respect to a context. That we can compute an answer efficiently – at lightning speed over massive data sets – is entirely irrelevant, or even harmful, if we cannot demonstrate that the answer is meaningful or at least approximately right in a given context. As fields develop and complexity increases, efficacy measures become increasingly sophisticated, refined, and debated. For example, p-values – the gold standard of empirical efficacy – have been questioned for decades, especially under the pressure of increasing irreproducibility in science. The same is true for precision and recall in information retrieval. In fact, since most fields that depend on data involve uncertainty, measures of efficacy are being questioned everywhere, with the notable exception of data management.

Big Data, broadly construed, is inherently multidisciplinary but often lacks the efficacy measures of its constituent disciplines – statistics, machine learning, empiricism, data mining, information retrieval, among others – let alone those of application domains such as finance, biology, clinical studies, high-energy physics, drug discovery, and astrophysics. One reason for this is that efficacy measures that have been developed in the small data world, based on statistics and other fields, do not necessarily hold true over massive data sets [3][5]. Efficacy in this context is an important, open, and rich research challenge. The value and success of data-intensive discovery (Big Data) depends on achieving adequate means of evaluating the efficacy of its results. A notable exception is the Baylor-Watson result [7] that focused first on efficacy, i.e., modeling, that then contributed to efficiency. But efficacy is one aspect of a larger challenge – modeling.
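A toy demonstration of why such small-data measures strain at scale (my own sketch, not drawn from [3] or [5]): with enough rows, even a practically negligible effect produces a “significant” p-value.

```python
# The same t-test as before, but with a tiny effect and growing data volume.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
for n in (100, 10_000, 1_000_000):
    a = rng.normal(0.00, 1.0, n)
    b = rng.normal(0.01, 1.0, n)     # effect of 0.01 standard deviations
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:>9,}   p = {p:.3g}")
# At web or sensor scale, the test flags the 0.01-sigma difference as
# "significant" even though it is practically meaningless.
```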


Relational data is the servant of the data model and the query. It was right to constrain data when we had a well-defined model. And we could always get the model right – right?


As data management evolved, it distinguished itself from information retrieval by not requiring efficacy measures, since databases were bounded, discrete, and complied with well-defined models, e.g., schemas. In contrast, information retrieval (and later machine learning) searched data sets for complex correlations rather than rigidly defined predicates. Finding relationships such as “select all pairs whose covariance is greater than x” is inherently iterative and compute-intensive. In contrast, the contents of a database either match a query or they do not – black or white. No need for estimating accuracy, confidence, or probabilities. Relational data is the servant of the data model and the query. This permitted massive performance improvements that led to the widespread adoption of databases in applications for which schemas made sense. If the data did not comply with the schema, and with the queries defined over it, then the data was erroneous by definition and should be rejected or corrected. It was right to constrain data when we had a well-defined model. And we could always get the model right – right? Many courageous researchers over the past 50 years have studied this problem – probabilistic databases, fuzzy logic, and more [1] – but none of these approaches has seen widespread adoption. Why?
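The contrast can be made concrete with a small sketch (mine, using hypothetical data): a crisp relational predicate admits a yes/no answer per row, whereas “find all pairs whose covariance exceeds x” must estimate a statistic over every pair of columns.

```python
import numpy as np

rng = np.random.default_rng(3)
table = rng.normal(size=(1000, 20))       # 1000 rows, 20 numeric columns

# Database-style: a crisp predicate; each row either matches or it does not.
matches = table[table[:, 0] > 1.5]

# IR/ML-style: "select all pairs whose covariance is greater than x" requires
# estimating a statistic for every pair of columns, with no black-or-white answer.
x = 0.05
cov = np.cov(table, rowvar=False)         # 20 x 20 sample covariance matrix
pairs = [(i, j) for i in range(20) for j in range(i + 1, 20) if cov[i, j] > x]

print(len(matches), "rows match the predicate;", len(pairs), "column pairs exceed x")
```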

While the non-database world – life sciences, high-energy physics, astrophysics, finance – opened the door to Big Data and its possibilities, the data management world is aspiring to take ownership of their infrastructure – the storage, management, manipulation, querying, and searching of massive datasets. Currently much of this work is done in an ad-hoc manner using tools like R and Python. What is required for a more general solution? The non-database world is driven by applications – solving problems with real-world constraints – achieving efficacy within the models and definitions of their domain – often with 400 or 500 years of history.

In contrast, the database landscape is predominantly concerned with efficiency and has not yet dealt head-on with efficacy. Some of these issues have been addressed in the database context in terms of specific models, languages, and design, but seldom have those concerns impacted the core database infrastructure, let alone gained adoption. Perhaps database researchers have focused only on application domains that are well-behaved. While efficacy is a critical requirement – possibly the most critical requirement – in domains that make extensive use of data, it is part of a broader modelling requirement that database systems leave unmet.


For more than a decade physics, astrophysics, photonics, biology, indeed most physical sciences as well as statistics and machine learning have made the modest assumption that multiple perspectives may be more valuable than a single model.


Data Models for the 90%

The database community, like many others, perhaps has not fully internalized the paradigm shift from small to big data. Big Data – or data per se – does not create the change nor is itself the change. Big Data opens the door to a revolution in thinking. One aspect involves data-driven methods. A profound shift involves viewing phenomena from multiple perspectives simultaneously.

A significant aspect of this shift is that every Big Data activity (small data activities also, but with less impact) requires measures of efficacy for each perspective or model. This is not simply owing to the reframing of corresponding principles from empirical science, but also to the multiple meanings of data, each of which requires mechanisms for addressing efficacy.

If the data management community is to provide solutions for this nascent challenge, then it will need to deal with efficacy. This essentially has to do with modelling, a chaotic and ad-hoc database topic that has been largely unsuccessful, again measured by adoption. The relational model has dominated databases for over 40 years largely owing to efficiency. The database community knows how to optimize anything expressed relationally. While the relational model has proven to be amazingly general, its adoption has been limited in many domains, especially the sciences.

A related limitation of the database world is the assumption of a single perspective, e.g., a single version of truth, one schema per database even with multiple views. For more than a decade physics, astrophysics, photonics, biology, indeed most physical sciences as well as statistics and machine learning have made the modest assumption that multiple perspectives may be more valuable than a single model.

In [8] the author argued that science undergoes paradigm shifts only when there are rival theories about the fundamentals of a discipline. It is his position that rival paradigms are incommensurable, each using entirely different concepts and evaluation metrics. One such example is the wave and particle theories of light. Each has entirely different models and measures of efficacy. Understanding the big picture necessitates finding consistencies and anomalies in both theories.

Ensemble models are one approach to addressing this challenge. Let’s consider an example in evolutionary biology, where researchers use a collection of models to learn how the human genome has changed over time. In [4] the authors identified positive examples of natural selection in recent human populations. Their discoveries have two parts: the affected gene’s location and its (improved) mutation. By composing many signals of natural selection, the authors increase the resolution of their genomic map by up to 100x. This research computes genetic signals at many levels, from clustering genes that are likely to be inherited together to looking at the high-level geographic distribution of different mutations. In present database modeling, the former might be represented as a graph database, whereas the latter is more likely to fall into the purview of geospatial databases. How can we bring them together? Perhaps neither of these models is designed for computing how effective different genetic variations are at producing advantageous traits. This pattern repeats itself in meteorology, physics, and a myriad of other domains that mathematically model large, dynamic systems.
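The idea of composing signals can be sketched in a few lines (a toy of my own, loosely in the spirit of composite approaches like [4], not their actual method): several noisy signals about the same candidate sites, averaged as z-scores, recover the true sites far better than any single signal does.

```python
import numpy as np

rng = np.random.default_rng(11)
n_sites = 10_000
truth = np.zeros(n_sites, dtype=bool)
truth[:50] = True                               # a few genuinely "selected" sites

def noisy_signal(strength):
    """Each signal sees the truth only faintly, buried in noise."""
    return strength * truth + rng.normal(0, 1, n_sites)

signals = [noisy_signal(s) for s in (0.8, 0.7, 0.9, 0.6, 0.75)]

# Composite: average per-signal z-scores, rewarding sites that several
# independent signals agree on while the noise averages away.
z = [(s - s.mean()) / s.std() for s in signals]
composite = np.mean(z, axis=0)

top_single = np.argsort(signals[0])[-50:]
top_combo = np.argsort(composite)[-50:]
print("true sites found by one signal:", int(truth[top_single].sum()))
print("true sites found by composite: ", int(truth[top_combo].sum()))
```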


Stepping into the void of uncertainty, unboundedness, ensemble models, and open-ended model exploration is far harder and scarier. We call it Computing Reality


Ensemble models pose substantial challenges to the data management community. How do you simultaneously store, manage, query, and update this variety of models, applied to a single dataset with many, possibly conflicting schemas? Database folks may first be concerned about doing this efficiently. Nope – wrong question. The first step is to understand the problem, to ask the right questions, to get the model correct, and only then to make it efficient. How do you support ensemble models and their requirements, including efficacy?

This may be why application domains that use massive data sets have grown their own data management tools, such as Hadoop, ADAM, Wikidata, and Scientific Data Management Systems, let alone a plethora of such tools in most physical science communities that the database community has never heard of. It’s not just that their data does not fit the relational model; databases do not support ensemble models, efficacy, or many of the fundamental concepts used to understand data. Why would any application domain (e.g., physical sciences, clinical studies, drug discovery) or discipline (e.g., information retrieval, machine learning, statistics) want to partner with an infrastructure technology that did not support its basic principles?

The database community has developed amazing technology that has changed the world. Since the early 1990’s it has extended its reach to non-relational models such as networks, text, graphs, arrays, and many more. But efficacy is not just an issue of expressing eScience applications relationally, as UDFs or in R; it is about modeling and computing hypotheses under the complex contexts defined by domain experts, none of which maps easily to set theory or other discrete mathematics. Stepping into the void of uncertainty, unboundedness, ensemble models, and open-ended model exploration is far harder and scarier. We call it Computing Reality [1][6].

Big Data is opening the door to a paradigm shift in many human endeavors. Machine learning was first through the door with real, albeit preliminary, results and it is already on to the next generation with deep learning [3]. Analytics and other domains are riding the wave of machine learning. The database community is heading for the door now, but it will be challenging. We first have to understand the problem and get the requirements right. To paraphrase Ron Fagin, we need to focus on asking the right questions. The rest may be a breeze but efficacy before efficiency!

So not only are we leaving the relational world that was dominated by one model, or a class of discrete models; we are also leaving the world of a single model for each dataset and embarking on a journey into a world of ensemble models, including probabilistic, fuzzy, and potentially the richest model of them all – a model-free approach that enables us to listen to the data. All at scale. This seems scary to us, but it is also just what we need.

Are we crazy, naive? Isn’t it our mission to dig into this data goldmine, to contribute to accelerating scientific discovery? What do you think? We are all ears.

References
[1] Brodie, Michael and Jennie Duggan. What versus Why. Towards Computing Reality, April 17, 2014, published April 22, 2014

[2] Duggan, Jennie and Michael L. Brodie, Hephaestus: Virtual Experiments for Data-Intensive Science, In CIDR 2015 (to appear)

[3] Gomes, Lee. Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts, IEEE Spectrum, 20 Oct 2014

[4] Grossman, Sharon R., et al. “A composite of multiple signals distinguishes causal variants in regions of positive selection.” Science 327.5967 (2010): 883-886.

[5] National Research Council. Frontiers in Massive Data Analysis. Washington, DC: The National Academies Press, 2013

[6] Piatetsky, Gregory, Interview: Michael Brodie, leading database researcher, industry leader, thinker. SIGKDD Explor. Newsl. 16, 1 (September 2014), 57-63. DOI=10.1145/2674026.2674035 http://doi.acm.org/10.1145/2674026.2674035

[7] Scott Spangler, Angela D. Wilkins, Benjamin J. Bachman, Meena Nagarajan, Tajhal Dayaram, Peter Haas, Sam Regenbogen, Curtis R. Pickering, Austin Comer, Jeffrey N. Myers, Ioana Stanoi, Linda Kato, Ana Lelescu, Jacques J. Labrie, Neha Parikh, Andreas Martin Lisewski, Lawrence Donehower, Ying Chen, and Olivier Lichtarge. 2014. Automated hypothesis generation based on mining scientific literature. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD ’14). ACM, New York, NY, USA, 1877-1886. DOI=10.1145/2623330.2623667 http://doi.acm.org/10.1145/2623330.2623667

[8] Kuhn, Thomas S. The structure of scientific revolutions. University of Chicago press, 2012.

Bloggers’ Profiles:
Dr. Brodie has over 40 years experience in research and industrial practice in databases, distributed systems, integration, artificial intelligence, and multi-disciplinary problem solving. He is concerned with the Big Picture aspects of information ecosystems including business, economic, social, application, and technical. Dr. Brodie is a Research Scientist, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology; advises startups; serves on Advisory Boards of national and international research organizations; and is an adjunct professor at the National University of Ireland, Galway. For over 20 years he served as Chief Scientist of IT, Verizon, a Fortune 20 company, responsible for advanced technologies, architectures, and methodologies for Information Technology strategies and for guiding industrial scale deployments of emergent technologies. His current research and applied interests include Big Data, Data Science, and data curation at scale and a related start up Tamr.com. He has served on several National Academy of Science committees. Dr. Brodie holds a PhD in Databases from the University of Toronto and a Doctor of Science (honoris causa) from the National University of Ireland.

Jennie Duggan is a postdoctoral associate at MIT CSAIL working with Michael Stonebraker and an adjunct assistant professor at Northwestern University. She received her Ph.D. from Brown University in 2012 under the supervision of Ugur Cetintemel. Her research interests include scientific data management, database workload modeling, and cloud computing. She is especially focused on making data-driven science more accessible and scalable.

Quality Experiments Crowdsourced

Crowdsourcing is a powerful paradigm that democratizes the creation, collection, analysis, curation, and dissemination of data, contributed by millions of individuals, who are called workers. Early crowdsourcing platforms in the context of citizen reporting and citizen science were dedicated to specific tasks. Today, many other platforms that expect workers with different expertise exist. Among them, we can find TurKit, Mob4hire, uTest, Freelancer, eLance, oDesk, Guru, Topcoder, Trada, 99designs, Innocentive, CloudCrowd, and CrowdFlower. The most popular one is Amazon Mechanical Turk (AMT), which is generic and allows any requester to post virtually any kind of task in which any worker can participate via self-appointment.

Tasks can be as simple as binary requests, such as recognizing a landmark in a city, or more sophisticated ones such as meta-data enrichment (picture and video tagging in Flickr or YouTube, food descriptions in Open Food Facts), opinion solicitation (restaurant reviews), collaborative intelligence (city map alignment in http://maps.nypl.org/warper/, identifying stars in GalaxyZoo, character recognition in reCAPTCHA), and knowledge-intensive crowdsourcing with tasks such as collaborative editing (Wikipedia) and idea generation. The database community has been offering its own platforms (e.g., Qurk on top of AMT and Deco for query processing), and today scientists are turning to crowdsourcing for a particular kind of task: quality experiments to validate their findings.

There are several challenges one faces in crowd-based quality experiments. Some are inherent to using a crowd. For example, worker filtering, to make sure only qualified workers participate, has to be done by the task requesters. Those qualification tests are often difficult to design and do not always lead to flawless experiments. Data collection to build worker profiles is also a challenge (e.g., workers are asked to rate at least 30 movies to build a profile). Other challenges depend on the actual platform. The availability of a large worker pool and the ability to easily set up virtually any task make AMT the platform of choice for most research experiments. In fact, AMT is used by most researchers in the DB community, and tasks range from answering simple binary questions, such as determining whether a tweet expresses a positive or a negative sentiment, to more sophisticated tasks that require some domain knowledge, such as collaborative journalism. However, the closed nature of AMT (and of all crowdsourcing platforms we know of today) forces task requesters to find workarounds outside the system in order to select the right pool of workers and assign them the right set of tasks (AMT is often used as a hiring platform only, and workers are redirected to another website). Moreover, monetary compensation, as is the case in AMT, is not always the best choice for incentivizing workers (think of open-source contributions) and tends to attract more cheaters when the reward is too high. To address that, post-hoc quality control is enforced via majority voting or by monitoring task completion time. The bottom line is that in a volatile environment, enforcing some level of correctness is difficult, time-consuming, and costly.
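A minimal sketch of the two post-hoc controls just mentioned (my own toy example, not tied to any particular platform’s API): drop assignments with implausible completion times, then take a majority vote over the redundant answers.

```python
from collections import Counter

# (task_id, worker_id, answer, seconds_spent) -- hypothetical raw assignments
assignments = [
    ("t1", "w1", "positive", 34), ("t1", "w2", "positive", 29),
    ("t1", "w3", "negative", 2),                       # suspiciously fast
    ("t2", "w1", "negative", 41), ("t2", "w2", "negative", 38),
    ("t2", "w3", "negative", 3),
]

MIN_SECONDS = 5
kept = [a for a in assignments if a[3] >= MIN_SECONDS]  # filter likely cheaters

labels = {}
for task in sorted({a[0] for a in kept}):
    votes = Counter(answer for t, _, answer, _ in kept if t == task)
    labels[task] = votes.most_common(1)[0][0]           # majority vote
print(labels)   # {'t1': 'positive', 't2': 'negative'}
```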

I have used AMT in several qualitative analyses. The first experiment was in 2009 [1], to evaluate travel itineraries extracted from Flickr against carefully handcrafted ones. It was a large-scale experiment “to validate the wisdom of the crowd”. About 450 workers participated in the experiment and evaluated itineraries in 5 popular and geographically distributed cities (San Francisco, New York City, London, Paris and Barcelona). Half of the workers were involved in an independent study that evaluated the usefulness of our itineraries, the points of interest they contained, and the landmark visit and transit times. The other half participated in a comparative study where our itineraries were put side-by-side with handcrafted ones and were preferred in a large majority of the cases. It worked so well that the Wall Street Journal, a prominent newspaper in the US, wrote about it [2]. In this experiment, special attention was dedicated to the design of qualification tests to filter out workers. We had to think carefully about a test whose answers are not readily available through a simple Web search. Below is one of the tests we used.

[Figure: an example of the qualification test used to screen workers]

The second time we used AMT [3], we relied on input from 100 workers to evaluate how well different rating aggregation functions (least misery, majority voting, pairwise disagreement, and global average) compute movie recommendations for user groups with different characteristics (small/large groups, similar/dissimilar groups). We found several insightful and sometimes surprising results, including that least misery, i.e., optimizing for the lowest rating in a group, performed better than averaging ratings and is the best aggregation function for small groups formed by similar users. Pairwise disagreement, on the other hand, did very well with large groups of dissimilar users. In the first round of that experiment, we had inconsistent findings. We then looked carefully in the AMT logs and realized that there were cheaters whose task completion times were unrealistic.
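For readers unfamiliar with these aggregation functions, here is a hedged sketch of three of them (my own code, not the paper’s), scoring one candidate movie for a small group:

```python
def global_average(ratings):
    """Average the members' predicted ratings for the movie."""
    return sum(ratings) / len(ratings)

def least_misery(ratings):
    """Optimize for the unhappiest group member."""
    return min(ratings)

def pairwise_disagreement(ratings):
    """Average absolute difference over all member pairs; lower means the
    group agrees more about this movie."""
    pairs = [(a, b) for i, a in enumerate(ratings) for b in ratings[i + 1:]]
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

group = [4.0, 4.5, 1.0]             # a small group with one unhappy member
print(global_average(group))        # ~3.17 -- hides the unhappy member
print(least_misery(group))          # 1.0  -- the unhappy member dominates
print(pairwise_disagreement(group)) # ~2.33 -- the group disagrees a lot
```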

The latest work we used AMT for was crowdsourcing the validation of articles produced collaboratively by one crowd to another crowd [4]. Using the crowd to validate the crowd is in fact a common practice and has been shown to work well in many applications, such as sentence translation by non-experts. Participating workers were instructed to join a group and together write a short summary of current world events on some topic. The qualification test included answering factual questions on those events. Of course, cheaters could use their favorite search engine to find answers to the qualification test.

I hope that the examples above illustrate both the advantage of having a generic platform that enables reaching out to a large worker pool and the challenges of using crowds for qualitative analysis. Using crowds bears similarities to running surveys and to using cohorts in medical drug trials. While there are established practices on how to select such cohorts and ensure result quality (e.g., controlled/test groups, independently chosen subjects, double-blind tests in medical trials), none exist for running crowd-based quality experiments.

With my colleagues Beatrice Valeri and Shady El Bassuoni, we examined 4395 papers published in SIGMOD, PVLDB, ICDE, EDBT, ICDM, KDD, ICWSM, RecSys, and Hypertext between 2010 and 2014 (2014 statistics are partial) and found that 60 of them used crowd-based quality experiments. Among those, all of them use AMT or CrowdFlower, a layer on top of AMT. Most tasks are about collecting golden data used in evaluation and among those, labeling data is the most common. 48 out of the 60 papers have at least one author in the USA. European authors contributed to a total of 14 papers followed by Israel (6 papers), China (4 papers), Qatar and Canada (3 papers each), Singapore and Japan (2 papers each). As shown below, the number of publications that crowdsource experiments has been increasing with a worker base ranging from around 20 to 500 per experiment.

[Figures: the number of papers per year that crowdsource quality experiments, and the number of workers used per experiment]

Most papers use workers’ acceptance rate (recorded in AMT based on previous tasks completed by those workers) and only a few use a thorough qualification test. Those tests are followed by a post-processing step where majority voting, manual checking (by involving another crowd), task completion time, and answer justification are used to check workers’ input. There goes design efficiency.


What is our opportunity today?
Crowd-based quality experiments in the database community are recent. In addition, research in crowdsourcing is gaining traction in our community. What could we do to make a difference? We need a crowd, a platform, and some wisdom. Lab experiments have historically been conducted with a small number of colleagues and students who are physically present in a lab. We need to get them online. We also need a platform with returning workers that can be profiled, a platform where we can conduct crowd-based quality experiments but also test our own research on crowdsourcing: our worker profiling strategies and our task-to-worker assignment algorithms. That would reduce the burden on all of us when reporting quality experiments in papers, since it would be enough to point to results summarized on a single website (double-blind reviewing could be enforced). Such a platform would need to be generic in order to allow any kind of task and any worker to register, open in order to deploy our own crowd-based algorithms and advance the state of research in crowdsourcing, and free!

I do not expect our community to suddenly change the course of things. I myself still run experiments on AMT and will continue to do it. Of course, there are questions related to where to start and how to maintain the platform. Crowd4U (crowd4u.org) is a good place to start. It’s an all-academic generic platform that is being developed at the Univ. of Tsukuba and that is open to the rest of the academic world. Here in Grenoble, we ran a campaign on campus where we advertised crowdsourcing and gave out cookies and drinks (not totally free!) and passers-by performed over 1600 tasks to label tweets in less than 3 hours.

You can start your own platform or join Crowd4U but keep it open! In [4], we argue that human factors such as worker skills, worker availability, expected wage, and ability to work together, could be accounted for to make better use of the crowd. Such parameters can only be tested in an open and generic crowdsourcing system where task assignment and skill learning algorithms can be deployed for virtually any task.

So let’s take this opportunity, have some wisdom, and build and nurture our own crowdsourcing platform(s).

References
[1] Munmun De Choudhury, Moran Feldman, Sihem Amer-Yahia, Nadav Golbandi, Ronny Lempel, Cong Yu: Automatic construction of travel itineraries using social breadcrumbs. HT 2010: 35-44

[2] http://blogs.wsj.com/digits/2010/07/23/using-flickr-photos-as-a-travel-guide/

[3] Senjuti Basu Roy, Sihem Amer-Yahia, Ashish Chawla, Gautam Das, Cong Yu: Space efficiency in group recommendation. VLDB J. 19(6): 877-900 (2010)

[4] Senjuti Basu Roy, Ioanna Lykourentzou, Saravanan Thirumuruganathan, Sihem Amer-Yahia, Gautam Das: Optimization in Knowledge-Intensive Crowdsourcing. CoRR abs/1401.1302 (2014)

Blogger’s Profile:
Sihem Amer-Yahia is a 1st class CNRS (Centre National de Recherche Scientifique) Research Director at LIG (Laboratoire d’Informatique de Grenoble) in France. Sihem heads the SLIDE group (ScaLable Information Discovery and Exploitation) that sits at the intersection of large-scale data management and Web data analytics with an emphasis on the social and the semantic Web. Until July 2012, Sihem was Principal Scientist at the Qatar Computing Research Institute (QCRI) where she led a group in Social Computing and worked with local Universities on student mentoring and with Al Jazeera Online on news traffic analytics. From 2006 to 2011, she was Senior Scientist at Yahoo! Research and worked on revisiting relevance models and scalable top-k processing algorithms for Delicious, Yahoo! Travel and Personals, and Flickr. Before that, she spent 7 years at AT&T Labs in New Jersey, working on XML query optimization and XML full-text search in conjunction with the W3C. Sihem has served on the SIGMOD Executive Board and is a member of the VLDB and the EDBT Endowments. She serves on the editorial boards of ACM TODS, the VLDB Journal and the Information Systems Journal. She was track chair of SIGIR 2013 and of PVLDB 2013. She is PC chair of EDBT 2014 and will be PC chair of BDA 2015 (French DB conference) and PC co-chair of SIGMOD Industrial 2015. Sihem received her Ph.D. in Computer Science from Paris-Orsay and INRIA in 1999, and her engineering degree from ESI/INI, Algeria.

Exploratory search: New name for an old hat?

Exploratory search has gained prominence in recent years. There is an increased interest from several scientific communities, from information retrieval and databases to human-computer interaction and visualization, in moving beyond the traditional query-browse-refine model supported by web search engines and database systems alike, and towards support for human intelligence amplification and information understanding. Exploratory search systems should inherently support symbiotic human-machine relationships that provide guidance in exploring unfamiliar information landscapes [Whi 09]. In this post, four researchers set out to explore exploratory search from different angles and to study the emerging needs and objectives as well as the challenges and problems that need to be tackled.

Exploratory Search in Structured Data: Data Warehouses and OLAP to the Rescue?

by Melanie Herschel
Looking more closely at the problems the field of exploratory search encompasses, Marchionini has established two main aspects of exploratory search that go beyond the classical problem of looking up information using standard search mechanisms [Mar 06]. More specifically, exploratory search has the general goals of (i) learning, that is, acquiring new knowledge, and (ii) investigating, to possibly reveal new facts. Sub-tasks to achieve these goals include:

  • Learning: knowledge acquisition, comprehension and interpretation, comparison, aggregation and integration, and socialization.
  • Investigating: Accretion, analysis, synthesis, evaluation, discovery, planning and forecasting, and transformation.

Clearly, exploratory search applies to both structured and unstructured data, as we want to leverage as much data as possible in the above tasks. However, if we focus on structured data, we notice that quite a few of the tasks mentioned above resemble those for which data warehouses and online analytical processing (OLAP) have been developed. One example of a learning task supported by data warehouses is integration, as one of the main purposes of data warehouses is to integrate data from distributed and heterogeneous sources in a single repository. Consequently, data comparison is easily supported as well. As for supporting investigation tasks, OLAP and data mining algorithms readily support analysis of data stored in a data warehouse, and reports synthesize such data. Finally, data warehouses are deployed to support decision making, such as planning and forecasting.
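As a small, hypothetical illustration of the kind of investigation task that warehouses handle well, here is an OLAP-style roll-up, with pandas standing in for the warehouse engine (my own sketch, not a real deployment):

```python
import pandas as pd

# Tiny stand-in for an integrated fact table in a warehouse.
sales = pd.DataFrame({
    "region":  ["EU", "EU", "US", "US", "US"],
    "quarter": ["Q1", "Q2", "Q1", "Q1", "Q2"],
    "revenue": [120, 135, 200, 180, 210],
})

# Roll up revenue by region and quarter -- a predefined, schema-driven analysis.
cube = sales.pivot_table(values="revenue", index="region",
                         columns="quarter", aggfunc="sum")
print(cube)
```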

So are the problems of exploratory search already solved for structured data through data warehouses and OLAP?

To some extent, yes, but there are still many areas of exploratory search that data warehouses do not cover, such as socialization, comprehension and interpretation, and discovery. In the following, I will highlight three main areas where I believe exploratory search will require novel approaches to reach its full potential.

Discovery. Data warehouses are designed based on a predefined information demand, meaning that the analytical queries they support are fixed in advance and certainly cannot go beyond queries that the schema of the data warehouse supports. Essentially, the data warehouse is designed to efficiently answer predefined queries. In contrast, in exploratory search the information demand is unknown a priori; the goal of discovery is also to reveal things that users of the system did not think about at design time and that are thus not covered by the rigid schema. Hence, system design for exploratory search needs to plan for the unplanned. Making an analogy to farming, we can say that whereas data warehouses and OLAP are useful for harvesting what has been planted, exploratory search is all about exploring and tilling new land.

Adaptation. Given the rigid cage schemas impose on a data warehouse, these systems typically have limited capabilities for changing or evolving information needs. But these are the essence of exploratory search, as learning is an iterative process that requires a search-refine-expand paradigm. That is, whereas data warehouses allow us to freely move in a cage, exploratory search systems need to be able to adapt to a new and rapidly changing environment.

Data and Users. As a final distinction here, we have limited the discussion above to structured data, but obviously a wealth of information resides in unstructured data, which exploratory search should natively support. This brings us back to a long-standing discussion on integrating databases and information retrieval. Additionally, many limitations of the data warehouse come from its rigid schema, so could NoSQL databases perhaps be of help here? Finally, data warehouses and OLAP are targeted towards expert users who know their domain, whereas exploratory search is for the masses, and human-computer interaction will be key to success.

To conclude, we have seen that data warehouses may support exploratory search to some extent, but they have serious limitations when it comes to discovery, adaptability, and the support of all types of relevant data and users.

Exploratory Search and Web searching

by Yannis Tzitzikas
Web searching is probably the most frequent user task on the Web, during which users typically get back a linear list of hits. In fact, web search engines are very good at ranking, and there is a plethora of ranking methods for capturing various aspects (topic relevance, authority, popularity, credibility and personalization). However, ranking is not enough. Ranking is adequate for focalized (also called precision-oriented) information needs, but according to various user studies the majority of information needs are recall-oriented. For recall-oriented information needs, the user wants to find and understand more than one search hit. Examples include bibliographic survey writing, medical information seeking, patent search, hotel booking, and car buying. In general, recall-oriented information needs have an exploratory nature and aim at decision making (based on one or more criteria). To support such information needs, it is beneficial to provide methods that:

  • enable easy access to low ranked hits,
  • allow browsing the relevant hits and resources in groups (according to various criteria),
  • offer overviews (of variable complexity) of the hits, and
  • let the user gradually restrict the search results.

In recent years we, at the University of Crete and FORTH-ICS, have been investigating methods that can complement classical Web searching with functionality for exploratory search. At first, we developed the web search engine (WSE) Mitos, a system which offers faceted search over the results of the submitted queries. Specifically, it supports facets corresponding to metadata attributes of the web pages (static metadata), as well as facets corresponding to the outcome of snippet-based clustering algorithms (a kind of dynamic metadata). The user can then restrict her focus gradually, by interacting with the resulting multidimensional structure through simple clicks.
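The faceted interaction can be sketched in a few lines (my own hypothetical example, not Mitos code): count the values of each facet over the current hits, then narrow the hit set one click at a time.

```python
from collections import Counter

hits = [
    {"title": "Crete travel notes",          "filetype": "html", "year": 2012, "cluster": "travel"},
    {"title": "Faceted search survey",       "filetype": "pdf",  "year": 2011, "cluster": "search"},
    {"title": "Entity mining at query time", "filetype": "pdf",  "year": 2012, "cluster": "search"},
]

def facet_counts(hits, facet):
    """How many hits fall under each value of the given facet."""
    return Counter(h[facet] for h in hits)

def restrict(hits, facet, value):
    """One 'click': keep only the hits with the chosen facet value."""
    return [h for h in hits if h[facet] == value]

print(facet_counts(hits, "filetype"))        # Counter({'pdf': 2, 'html': 1})
focused = restrict(hits, "filetype", "pdf")  # user clicks on the 'pdf' facet
print(facet_counts(focused, "year"))         # Counter({2011: 1, 2012: 1})
```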

The system IOS [Faf 12] was developed next, providing analogous features but during query typing (type-ahead search). This functionality is offered for frequent queries and takes advantage of novel partitioned trie-based indexes to reduce the increased main memory requirements. Subsequently, through XSearch [Faf 12b] we investigated and showed how the available Linked Open Data (LOD) can be exploited for offering facets based on the outcome of entity mining techniques. Again, this approach is applied at query time over the textual snippets of the search hits, where LOD provides information about the names of the entities of interest. This approach has proved successful for professional search, specifically in the domains of patent search and marine search. With X-ENS [Faf 12c] we focused on configurability, i.e., allowing the end user to configure the entities of interest, and also to browse the information that LOD provides for these entities.

However, to offer entity mining not over the snippets but over the full contents of the search hits, more than one machine is needed for downloading and analyzing the full contents of the hits. Recent work shows how MapReduce can be used for this purpose [Kit 14].

Subsequently, we investigated how we could enable the user to easily express preferences, simple and complicated ones, over the elements the user interacts with. Such preferences can affect the ordering of the different aspects of the multidimensional (and hierarchically organized) information space. We have developed the prototype system Hippalus that realized this approach over a proposed preference framework [Pap 14].

More information about the aforementioned systems is available here.

Based on our experience, we can identify four main topics that we believe current IR/Web search does not cover and that exploratory search will require to reach its full potential:

Ubiquity. Faceted browsing of search results and gradual restriction should be possible for any kind of query, for any domain, and with no predetermined facets. In other words, methods that bypass the need for explicit configuration (regarding facets, entity types, LOD sources) are required.

Fusion of Structured and Unstructured Content. The exploitation of LOD in the exploratory search process is promising, e.g. for Named Entity Recognition and disambiguation. However, the fusion of structured and unstructured content requires more work.

User Control. Explicit, user-provided, and controllable preference management is beneficial for supporting a transparent decision making process. We believe that the framework supported by the Hippalus system is a first step towards this direction.

Evaluation. We need easy-to-follow methods for evaluating the effectiveness of exploratory search methods, and easily reproducible evaluation results. Although classical IR has well-established methodologies for evaluation, things are not so clear and straightforward in interactive IR (IIR).

Exploratory Search and Multimedia Data

by K. Selcuk Candan
By its very nature, multimedia data exploration shares the 3V challenges ([V]olume, [V]elocity, and [V]ariety) of the so-called “Big Data” applications. Systems supporting multimedia data exploration, however, must tackle additional, more specific challenges, including those posed by the [H]igh-dimensional, [M]ulti-modal (temporal, spatial, hierarchical, and graph-structured), and inter-[L]inked nature of most multimedia data, as well as the [I]mprecision of the media features and the [S]parsity of the observations in the real world.

Moreover, since the end-users for most multimedia data exploration tasks are us (i.e., humans), we need to consider fundamental constraints posed by [H]uman beings, from the difficulties they face in providing unambiguous specifications of interest or preference, to subjectivity in their interpretations of results, to their limitations in perception and memory. Last, but not least, since a large portion of multimedia data is human-centered, we also need to account for the users’ (and others’) needs for [P]rivacy.

Multimedia exploration (on data with the above characteristics and by users with the above limitations) is an inherently dynamic process, and systems for multimedia exploration must be able to support, efficiently and effectively, a continuous exploration cycle involving four key steps:

  • sense & integrate: the system takes as inputs and integrates data, media, and models of the application space and continuously sensed real-time media data,
  • filter, rank & recommend: the system provides support for context-aware access to integrated media data sets,
  • visualize & feedback: the system acquires accurate user feedback through an intuitive data and result representation, and
  • act & adapt: the system provides continuous adaptation of models of data, context, and user preference based on user feedback.

Naturally, each and every step of this multimedia exploration cycle poses significant challenges. While it would be impossible to enumerate all the challenges, I would like to identify the following five “core” challenges:

  • media annotation, summarization, and (dimensionality) reduction,
  • user, community, context, preference modeling and feedback,
  • multi-modal and richly structured/linked data exploration,
  • dynamic/evolving multimedia data exploration, and of course
  • bridging the semantic gap in media exploration.

Unfortunately, while as a community we have made great advances in tackling these core challenges, we probably have to admit that we are still quite far from addressing any of these issues satisfactorily, especially within the context of large multimedia data collections. In fact, at least in the short- to medium-term future, these five issues will continue to form the core challenges in multimedia exploration and retrieval.

If there is one thing that is becoming more urgent, however, it is that the ever-increasing scale and speed of the data imply that, to support the above media exploration cycle, our emphasis must shift towards the development of integrated data platforms that can support, in an optimized and scalable manner, both media analysis (feature extraction, clustering, partitioning, aggregation, summarization, classification, latent analysis) and data manipulation (filtering, integration, personalized and task-oriented retrieval) operations.

Personalizing Data Exploration: Exploring the Past

by Amelie Marian
Personal data is now pervasive as digital devices are capturing every part of our lives. Data is constantly collected and saved by users, either voluntarily in files, emails, social media interactions, multimedia objects, calendar items, contacts, etc., or passively by various applications such as GPS tracking of mobile devices, records of utility usage, financial transactions, or quantified self sensors. A typical user will have data recorded on many devices, cloud services, and proprietary systems; access to the data can be difficult because of the data fragmentation, security issues, or practical concerns.

Companies and organizations have famously been able to take advantage of the wealth of information produced by individuals, learning patterns and visualizing trends over the large number of data points produced by many users. Yet individual users do not have accessible tools to retrieve, manage, and analyze their own data.

Leveraging personal data is critical to many data exploration tasks:

Exploring our past. A specific type of data exploration is re-finding [Tee 04], [Ber 08], whose goal is to find information that has been created, received, or seen by the user. Unlike traditional web search or exploration tasks whose focus is usually on discovering new information/documents, a re-finding exploration task has a target object it is attempting to recover.

Fragmentation of data is a main challenge. Users rarely own and store their personal data, with the exception of personal files. Most personal information is stored in the cloud by commercial companies that may offer some limited access, usually via a web browser or an API, to a user’s personal data. Attempting to retrieve and cross-reference personal information then leads to a tedious, often maddening, process of individually accessing all the relevant sources of data and manually linking their information. For example, checking that an insurance claim was correctly processed may require looking at a calendar application to find the doctor’s appointment date, checking the claim status on the insurance web site, and consulting a bank web portal to confirm that the payment was received.

The future of personal data exploration lies in the creation of personal data exploration tools, in the spirit of Dataspaces [Blu 07], [Hal 06], that will integrate personal data from a variety of sources, and allow users to visualize and search through their digital memories based on any piece of information they remember, following threads of information to navigate from one memory to the next.

Exploring our social data. Personal data is not limited to a user’s own data production. In an increasingly connected world, what other users in our social network share with us is also relevant to our data exploration. Users looking for book recommendations may want to explore their social network connections for books they have read or discussed. Some desktop search systems, such as deskWeb [Zer 10], have integrated the user’s social network graph, expanding the searched data set to include information available in the social network.

Personalizing our data exploration. Users have individual habits and patterns, as well as different types of data, context and information needs. By inferring knowledge from their past personal information and their interactions with their data, we can personalize data exploration and fit it to the users’ individual needs.

Social network information can be leveraged to learn contextual information about a user and guide data exploration. For instance, “Student Center” has a different meaning for different groups of users. Knowing which social group the user belongs to can help us identify entities: for example, observing that a majority of the user’s friends go to Rutgers University and attend events at “Busch Campus Student Center, Rutgers University” increases the likelihood that the phrase “Student Center” in the user’s data and queries refers to that particular student center. Similarly, past queries can be used to learn some information about the types of entities present in the personal data, or about the personal context.

This type of personalized data exploration is gaining traction in prospective memory applications, such as Google Now, which focus on reminding users of their appointments and to-do items.

Overall, understanding and leveraging personal data is crucial for many data exploration tasks, either because the exploration is on the personal information itself, or because personal information can give significant insights and directions for traditional data exploration needs.

References

[Whi 09] Ryen W. White and Resa A. Roth. Exploratory Search: Beyond the Query-Response Paradigm. Synthesis Lectures on Information Concepts, Retrieval, and Services, 2009, Vol. 1, No. 1 , Pages 1-98

[Mar 06] Gary Marchionini. Exploratory search: from finding to understanding. Communications of the ACM. Volume 49 Issue 4, April 2006, Pages 41-46.

[Faf 12] Pavlos Fafalios, Ioannis Kitsos, Yannis Tzitzikas: Scalable, flexible and generic instant overview search. WWW 2012: 333-336.

[Faf 12b] Pavlos Fafalios, Ioannis Kitsos, Yannis Marketakis, Claudio Baldassarre, Michail Salampasis, Yannis Tzitzikas: Web Searching with Entity Mining at Query Time. IRFC 2012: 73-88.

[Faf 12c] Pavlos Fafalios, Yannis Tzitzikas: X-ENS: semantic enrichment of web search results at real-time. SIGIR 2013: 1089-1090.

[Pap 14] Panagiotis Papadakos, Yannis Tzitzikas: Hippalus: Preference-enriched Faceted Exploration. EDBT/ICDT Workshops 2014: 167-172.

[Kit 14] Ioannis Kitsos, Kostas Magoutis, Yannis Tzitzikas, Scalable entity-based summarization of web search results using MapReduce, Journal on Distributed and Parallel Databases, 32(3), 2014, 405-446.

[Tee 04] Jaime Teevan, Christine Alvarado, Mark S. Ackerman, and David R. Karger. The perfect search engine is not enough: a study of orienteering behavior in directed search. In CHI, pages 415–422, 2004.

[Ber 08] Ofer Bergman, Ruth Beyth-Marom, Rafi Nachmias, Noa Gradovitch, and Steve Whittaker. Improved search engines and navigation preference in personal information management. ACM Trans. Inf. Syst., 26(4):20:1–20:24, October 2008.

[Blu 07] Lukas Blunschi, Jens-Peter Dittrich, Olivier René Girard, Shant Kirakos Karakashian, and Marcos Antonio Vaz Salles. A dataspace odyssey: The iMeMex personal dataspace management system. In CIDR, 2007.

[Hal 06] Alon Halevy, Michael Franklin, and David Maier. Principles of dataspace systems. Communications of the ACM, 2006. 

[Zer 10] Sergej Zerr, Elena Demidova, and Sergey Chernov. deskweb2.0: Combining desktop and social search. In Proc. of Desktop Search Workshop, In conjunction with the 33rd Annual International ACM SIGIR 2010, 23 July 2010, Geneva, Switzerland, 2010.

Blogger Profiles:
Melanie Herschel is an Associate Professor in the Computer Science Department of the University of Paris South and is also a member of the INRIA Oak project. Before this, after receiving her Ph.D. in Computer Science from Humboldt-University Berlin (2008), she was a postdoctoral researcher at IBM Research – Almaden (2008 – 2009) and at the University of Tübingen, Germany (2009 – 2011). During her research career, she has focused on topics related to data integration, data quality, data provenance, and, more recently, on processing large amounts of Web Data. Melanie participates in several nationally funded research projects and is an active member of the database research community, as she has served on numerous program committees, is a frequent journal referee, and has organized international workshops.

Yannis Tzitzikas is Assistant Professor in the Department of Computer Science of the University of Crete and Research Associate of FORTH-ICS. His research interests fall in the intersection of the following areas: Information Systems, Information Indexing and Retrieval, Conceptual Modeling, Knowledge Representation and Reasoning. He is one of the editors (and author of several chapters) of the book “Dynamic Taxonomies and Faceted Search” (Springer 2009), he is currently involved in the ongoing EU projects i-Marine, SCIDIP-ES, Aparsen NoW (WP leader), and he is national contact of the MUMIA (Multilingual and Multifaceted Interactive Information Access) Cost Action. His current research revolves around Faceted Interactive Search, and lately on methods that can bridge the world of documents with the world of data at search time. He has published more than 80 papers in refereed international conferences and journals (including ACM Transactions on the Web, VLDB Journal, Journal on Data Semantics, International Semantic Web Conference (winner of best paper award), ESWC, and many others).

K. Selçuk Candan is a Professor of Computer Science and Engineering at the Arizona State University. He has published over 170 journal and peer-reviewed conference articles, one book on multimedia retrieval, and 16 book chapters. He has 9 patents. Prof. Candan served as an associate editor of one of the most respected database journals, the Very Large Databases (VLDB) journal. He is also in the editorial board of the IEEE Transactions on Multimedia and the Journal of Multimedia. He has served in the organization and program committees of various conferences. In 2006, he served as an organization committee member for SIGMOD’06, the flagship database conference of the ACM. In 2008, he served as a PC Chair for another leading, flagship conference of the ACM, this time focusing on multimedia research (MM’08). More recently, he served as a program committee group leader for ACM SIGMOD’10. He also serves in the review board of the Proceedings of the VLDB Endowment (PVLDB). In 2011, he served as a general co-chair for the ACM MM’11 conference. In 2012 he served as a general co-chair for ACM SIGMOD’12. He has successfully served as the PI or co-PI of numerous grants, including from the National Science Foundation, Air Force Office of Research, Army Research Office, Mellon Foundation, HP Labs, and JCI. He also served as a Visiting Research Scientist at NEC Laboratories America for over 10 years. He is a member of the Executive Committee of ACM SIGMOD and an ACM Distinguished Scientist.

Amélie Marian is an Associate Professor in the Computer Science Department at Rutgers University. Her research interests are in Personal Information Management, Ranked Query Processing, Semi-structured data and Web data Management. Amélie received her Ph.D. in Computer Science from Columbia University in 2005. From March 1999 to August 2000, she was a member of the VERSO project at INRIA-Rocquencourt. She is the recipient of a Microsoft Live Labs Award, three Google Research Awards, and an NSF CAREER award.

Is Query Optimization a “Solved” Problem?

Is Query Optimization a “solved” problem? If not, are we attacking the “right” problems? How should we identify the “right” problems to solve?

I asked these same questions almost exactly 25 years ago, in an extended abstract for a Workshop on Database Query Optimization that was organized by the then-Professor Goetz Graefe at the Oregon Graduate Center [Grae 89a]. Remarkably and quite regrettably, most of the issues and critical unsolved problems I identified in that brief rant remain true today. Researchers continue to attack the wrong problems, IMHO: they attack the ones that they can, i.e., that they have ideas for, rather than the ones that they should, i.e., the ones that are critical to successfully modeling the true cost of plans and choosing a good one, or, perhaps more importantly, avoiding a disastrous one! At the risk of repeating myself, I’d like to re-visit these issues, because I’m disappointed that few in the research community have taken up my earlier challenge.


The root of all evil, the Achilles Heel of query optimization, is the estimation of the size of intermediate results, known as cardinalities. Everything in cost estimation depends upon how many rows will be processed, so the entire cost model is predicated upon the cardinality model. In my experience, the cost model may introduce errors of at most 30% for a given cardinality, but the cardinality model can quite easily introduce errors of many orders of magnitude! I’ll give a real-world example in a moment. With such errors, the wonder isn’t “Why did the optimizer pick a bad plan?” Rather, the wonder is “Why would the optimizer ever pick a decent plan?”
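To see how lopsided this asymmetry is, consider a toy comparison in Python. The cost formulas below are invented purely for illustration and stand in for no particular product's cost model; the point is only that a 100x cardinality error can flip the plan choice where a 30% cost error cannot.

```python
# Toy (hypothetical) cost formulas for two join methods, to show that a 100x
# cardinality error can flip the plan choice where a 30% cost-model error
# would not.

def nested_loop_cost(outer_rows):
    return outer_rows * 500.0                 # assume one random index probe per outer row

def hash_join_cost(outer_rows, inner_rows):
    return (outer_rows + inner_rows) * 1.2    # assume one build pass + one probe pass

inner = 1_000_000
for outer in (100, 10_000):                   # optimizer's estimate vs. the true cardinality
    nl = nested_loop_cost(outer)
    hj = hash_join_cost(outer, inner)
    choice = "nested-loop" if nl < hj else "hash join"
    print(f"outer={outer:>6}: nested-loop={nl:,.0f}  hash={hj:,.0f}  -> {choice}")

# With the estimated 100 outer rows the nested-loop join looks ~24x cheaper,
# so the optimizer picks it; with the true 10,000 rows it is actually ~4x more
# expensive. Inflating either formula by 30% changes neither decision.
```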

“Well,” you say, “we’ve seen lots of improvements in histograms, wavelets, and other statistics since 1989. Surely we do better now.” There’s been no shortage of such papers, it’s true, but that wealth of papers precisely illustrates my point. Developing new histograms that improve selectivity estimation for individual local predicates of the form “Age BETWEEN 47 AND 63” by a few percent doesn’t really matter when the much larger errors introduced elsewhere in cardinality estimation dwarf those minor improvements. It’s simple engineering, folks. If I have to review one more paper on improved histograms for local predicates, I’ll scream (and reject it)! It just doesn’t matter! What we have now is good enough.

What still introduces the most error in cardinality estimation is (a) host variables and parameter markers, (b) the selectivity of join predicates, and, even more significantly, (c) how we combine selectivities to estimate the cardinality. Amazingly, these three topics have also enjoyed the least research attention, or at least the fewest papers attempting to solve them, unless I’ve missed some major contributions lately. I’ll visit each of these topics in turn, describing the fundamental causes of errors and why those errors can easily reach disastrous proportions, illustrated by war stories from real customer situations.

Host variables and parameter markers
Host variables, parameter markers, and special registers occur in SQL queries because most queries are invoked by applications on top of the DBMS, not by humans, unlike in academic papers. Such applications typically get the constants used in predicates from a “fill in the blank” field on a web page, for example. The SQL predicate then looks like “Age BETWEEN :hv1 AND :hv2”. At compile time, the optimizer has no clue what :hv1 and :hv2 are, so it cannot look them up in those wonderful histograms that we all have polished. Instead, it must make a wild guess at the average or likely values, a guess that can be off significantly from one execution to the next, especially when the data is skewed. A war story illustrates this point.

One of our major ISVs retrofitted a table with a field that identified which subsystem each row came from. It had 6 distinct values, but 99.99% of the rows had the value of the founding subsystem, i.e., the only subsystem that existed when the table was created. A predicate on this subsystem column was added to every query, with the value being passed as a host variable. Not knowing that value a priori, DB2’s optimizer used the average selectivity of 1/|distinct values| = 0.167, though that predicate’s true selectivity was usually 0.9999 (not selective at all) and once in a blue moon was 0.0001 (extremely selective).
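To make the numbers concrete, here is a small sketch of that situation. The subsystem names and row counts are invented; only the shape of the data (6 distinct values, one of which dominates) matches the story.

```python
# A made-up table with 6 subsystem values, one of which dominates, mirroring
# the ISV war story above. With the host variable unknown at compile time,
# the optimizer's uniform guess of 1/|distinct values| is wrong for every
# possible value.

from collections import Counter

rows = (["SUBSYS_A"] * 999_900
        + ["SUBSYS_B", "SUBSYS_C", "SUBSYS_D", "SUBSYS_E", "SUBSYS_F"] * 20)
counts = Counter(rows)
n = len(rows)

compile_time_guess = 1 / len(counts)            # 1/6 ~= 0.167
print(f"compile-time guess: {compile_time_guess:.3f}")
for value, cnt in sorted(counts.items()):
    print(f"actual selectivity of subsystem = {value}: {cnt / n:.5f}")

# The guess of 0.167 is off by ~6x for the dominant value (0.9999) and by
# several thousand-fold for the rare ones.
```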

There has been some work on this so-called Parametric Query Optimization (PQO), though it sometimes attacks the problem of other parameters unknown at compilation time (e.g., the number of buffer pages available) or is limited to discrete values [Ioan 97]. One of my favorites is a fascinating empirical study by Reddy and Haritsa [Redd 05] of the plan spaces of several commercial query optimizers as the selectivities of multiple local predicates are varied. It demonstrated (quite colorfully!) that the regions in which a particular plan is optimal may not be convex and may even be disconnected! Graefe suggested keeping a different plan for each possible value of each host variable [Grae 89b], but with multiple host variables and a large number of possible values, Graefe’s scheme quickly becomes impractical: it is too expensive to optimize, store, and choose at run time among the large cross-product of possible plans, unless they are grouped into regions having the same plan [Stoy 08].

Version 5 of DB2 for OS/390 (shipped June 1997) developed a practical approach to force re-optimization for host variables, parameter markers, and special registers by adding new SQL bind options REOPT(ALWAYS) and REOPT(ONCE). The latter re-optimizes the first time that the statement is executed with actual values for the parameters, and assumes that these values will be “typical”, whereas the former forces re-optimization each time the statement is run. Later, a REOPT(AUTO) option was added to autonomically determine if re-optimization is needed, based upon the change in the estimated filter factors from the last re-optimization’s plan.
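The decision logic behind such an option can be sketched in a few lines. The code below is only my illustration of the idea of re-optimizing when estimated filter factors drift; it is not DB2's actual implementation, and the histogram layout and 2x threshold are assumptions made for the example.

```python
# A minimal sketch of re-optimization triggered by drifting filter factors.
# Not DB2's implementation: the histogram format and the 2x threshold are
# assumptions made purely for illustration.

def filter_factor(histogram, lo, hi):
    """Crude estimate: sum the fractions of all buckets that overlap [lo, hi].
    histogram is a list of (bucket_lo, bucket_hi, fraction_of_rows)."""
    return sum(frac for b_lo, b_hi, frac in histogram if b_lo <= hi and b_hi >= lo)

def should_reoptimize(cached_ff, new_ff, threshold=2.0):
    """Re-optimize if the new filter factor differs from the one the cached
    plan was compiled with by more than `threshold`x in either direction."""
    ratio = max(new_ff, 1e-9) / max(cached_ff, 1e-9)
    return ratio > threshold or ratio < 1.0 / threshold

age_hist = [(0, 20, 0.25), (21, 40, 0.40), (41, 60, 0.25), (61, 100, 0.10)]
cached = filter_factor(age_hist, 21, 40)   # values seen at the last (re)optimization
new = filter_factor(age_hist, 61, 100)     # this execution's host-variable values
print(should_reoptimize(cached, new))      # True: 0.10 vs. 0.40 is a 4x drift
```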

Selectivity of join predicates
The paucity of innovation in calculating join predicate selectivities is truly astounding. Most extant systems still use the techniques pioneered by System R for computing the selectivity of a join as the inverse of the maximum of the two join-column cardinalities [Seli 79], which essentially assumes that the domain of one column is a subset of the other. While this assumption is valid for key domains having referential integrity constraints, in general the overlap of the two sets may vary greatly depending upon the semantics of the joined domains. Furthermore, the common practice of pre-populating a dimension table such as “Date” with 100 years of dates into the future can incorrectly bias this calculation when the fact table initially has only a few months of data. Statistics on join indexes [Vald 87] would give much more precise selectivities for join predicates, if we were willing to suffer the added costs of maintenance and lock contention of updates to these join indexes. However, join indexes would not solve the intersection problem typical of star schemas.
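One way that bias can show up numerically is when a local predicate on the Date dimension is combined with the 1/max(...) join selectivity. The row counts below are hypothetical, and this is just my illustration of how the pre-populated domain dilutes the estimate.

```python
# Hypothetical numbers showing how a pre-populated 100-year Date dimension
# can bias the System R join selectivity rule [Seli 79] when the fact table
# only covers the last 90 days and the query restricts Date to that window.

fact_rows = 10_000_000
d_fact_dates = 90            # distinct dates actually present in the fact table
d_dim_dates = 36_500         # 100 years of dates pre-populated in the dimension

dim_rows_after_filter = 90   # dimension rows surviving the "last 90 days" predicate
join_selectivity = 1.0 / max(d_fact_dates, d_dim_dates)

estimated = fact_rows * dim_rows_after_filter * join_selectivity   # ~24,700 rows
actual = fact_rows                                                 # every fact row is recent
print(f"estimated {estimated:,.0f} vs. actual {actual:,} join rows")

# The 100-year domain dilutes the join selectivity, and the estimate comes
# out roughly 400x too low.
```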

For example, suppose a fact table of Transactions has dimensions for the Products, Stores, and Dates of each transaction. Though current methods provide accurate selectivity estimates for predicates local to each dimension, e.g., ProductName = ‘Dockers’ and StoreName = ‘San Jose’ and Date = ’23-Feb-2013’, it is impossible to determine the effect of the intersection of these predicates on the fact table. Perhaps the San Jose store had a loss-leader sale on Dockers that day that expired the next day, and a similar sale on some other product the next day, so that the individual selectivities for each day, store, and product appear identical, but the actual sales of Dockers on the two days would be significantly different. It is the interaction of these predicates, through join predicates in this case, that research to date doesn’t address. This leads naturally to the final and most challenging problem in our trifecta.

Correlation of columns
With few exceptions ([Chri 83], [Vand 86]), query optimizers since [Seli 79] have modeled selectivities as probabilities that a predicate on any given row will be satisfied, then multiplied these individual selectivities together. Effectively, this assumes that Prob{ColX = Value1, ColY = Value2} = Prob{ColX = Value1} * Prob{ColY = Value2}, i.e., that the two predicates are probabilistically independent, if you recall your college probability. Fortunately for the database industry, this assumption is often valid. However, occasionally it is not.

My favorite example, which occurred in a customer database, is Make = ‘Honda’ and Model = ‘Accord’. To simplify somewhat, suppose there are 10 Makes and 100 Models. Then the independence (and uniformity) assumption gives us a selectivity of 1/10 * 1/100 = 0.001. But since only Honda makes Accords, by trademark law, the real selectivity is 0.01. So we will under-estimate the cardinality by an order of magnitude. Such optimistic errors are much worse than pessimistic over-estimation errors, because they cause the optimizer to think that certain operations will be cheaper than they really are, causing nasty surprises at run time. The only way to avoid such errors is for the database administrator to be aware of the semantic relationship (a functional dependency, in this case) between those two columns and its consequences, and to collect column group statistics, as DB2 and other database products now allow.
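In code, the gap is just two multiplications; the table size here is an arbitrary choice for illustration.

```python
# The Make/Model example as arithmetic: under the independence assumption the
# optimizer multiplies the two selectivities, but the functional dependency
# Model -> Make means the combined selectivity is really that of Model alone.

table_rows = 1_000_000       # arbitrary table size for illustration
sel_make = 1 / 10            # 10 makes, assumed uniform
sel_model = 1 / 100          # 100 models, assumed uniform

independence_estimate = table_rows * sel_make * sel_model   # 1,000 rows
actual = table_rows * sel_model                             # 10,000 rows: every Accord is a Honda
print(independence_estimate, actual)
# A 10x under-estimate, and always on the optimistic (too-cheap) side.
```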

To identify these landmines in the schema automatically, Stillger et al. [Stil 01] developed the LEarning Optimizer (LEO), which opportunistically and automatically compared run-time actual cardinalities to optimizer estimates, to identify column combinations exhibiting such correlation errors. Ilyas et al. [Ilya 04] attacked the problem more pro-actively in CORDS (CORrelation Detection by Sampling), searching somewhat exhaustively for such correlations between any two columns in samples from the data before running any queries. And Markl and colleagues [Mark 05], [Mark 07] have made ground-breaking advances on a consistent way to combine the selectivities of conjuncts in partial results.
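The feedback idea at the heart of LEO can be sketched very simply. The code below is only an illustration of the principle, not IBM's implementation, and the 10x error threshold is an arbitrary choice of mine.

```python
# A toy version of cardinality feedback: after each query, compare the
# optimizer's estimate for a predicate combination with the cardinality
# observed at run time, and flag column groups whose estimates are badly off
# as candidates for column-group statistics.

import math
from collections import defaultdict

suspicious = defaultdict(list)

def record_feedback(column_group, estimated, actual, threshold=10.0):
    """column_group is a tuple of column names, e.g. ('Make', 'Model')."""
    error = max(estimated, 1.0) / max(actual, 1.0)
    if error >= threshold or error <= 1.0 / threshold:
        suspicious[column_group].append(math.log10(error))

record_feedback(("Make", "Model"), estimated=1_000, actual=10_000)   # 10x under-estimate
record_feedback(("Age",), estimated=50_000, actual=48_000)           # close enough
print(dict(suspicious))   # only ('Make', 'Model') gets flagged
```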

All great progress on this problem, but none of it yet solves the problem of redundant predicates that can be inadvertently introduced by the query writer, who typically believes that “more is better”, that providing more predicates helps the DBMS do its job better. It’s as American as apple pie! Let me illustrate with one of my favorite war stories.

At a meeting of the International DB2 User’s Group, a chief database administrator for a major U.S. insurance company whom I’d helped with occasional bad plans asked me to conduct a class on-site. I suggested it include an exercise on a real problem, unrehearsed. After my class, she obliged me by presenting two 1-inch stacks of paper, each detailing the EXPLAIN of a plan for a query. I feared I was going to embarrass myself and fail miserably under the gun. The queries differed in only one predicate, she said, but the original ran in seconds whereas the one with the extra predicate took over an hour.

I instinctively examined first the cardinality estimates for the results of the two, and the slower one had a cardinality estimate 7 orders of magnitude less than the fast one. When asked what column the extra predicate was on, my host explained that it was a composite key constructed of the first four letters of the policy-holder’s last name, the first and middle initials, the zip code, and last four digits of his/her Social Security Number. Did the original query have predicates on all those columns? Of course! And how many rows were there in the table? Ten million. Bingo! I explained that that predicate was completely redundant of the others, and its selectivity, 1/10^7, when multiplied by the others, underestimated the cardinality by 7 orders of magnitude, wreaking havoc with the plan. It took me maybe 5 minutes to figure this out, and I was immediately dubbed a “genius”, but it really was straightforward: the added predicate might help the run-time, especially if they had an index on that column, but it totally threw off the optimizer, which couldn’t possibly have detected that redundancy without LEO or CORDS.
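For the record, the arithmetic is that simple. The combined selectivity of the other predicates below is a hypothetical value I chose so that a handful of rows qualify; the composite key's 1/10^7 comes straight from the ten-million-row table.

```python
# The arithmetic behind the war story. The composite-key predicate is fully
# implied by the other predicates, yet its selectivity gets multiplied in
# anyway. sel_other_predicates is a hypothetical value chosen for illustration.

table_rows = 10_000_000
sel_other_predicates = 1e-6            # name + initials + zip + SSN digits: ~10 rows qualify
sel_composite_key = 1e-7               # ~10 million distinct composite-key values

correct_estimate = table_rows * sel_other_predicates                        # ~10 rows
redundant_estimate = table_rows * sel_other_predicates * sel_composite_key  # ~0.000001 rows
print(correct_estimate, redundant_estimate)   # off by exactly 7 orders of magnitude
```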

So c’mon, folks, let’s attack problems that really matter, those that account for optimizer disasters, and stop polishing the round ball.

Disclaimer: The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.

References

[Chri 83] S. Christodoulakis, “Estimating record selectivities”, Info. Systems 8, 2 (1983), pp. 105-115.

[Grae 89a] G. Graefe, editor, Workshop on Database Query Optimization, CSE Technical Report 89-005, Oregon Graduate Center, Portland, OR, 30 May 1989.

[Grae 89b] G. Graefe, “The stability of query evaluation plans and dynamic query evaluation plans”, Proceedings of ACM-SIGMOD, Portland, OR (1989).

[Ioan 97] Y. Ioannidis et al., “Parametric query optimization”, VLDB Journal, 6(2), pp. 132 – 151, 1997.

[Ilya 04] I. F. Ilyas, V. Markl, P. J. Haas, P. Brown, A. Aboulnaga: CORDS: Automatic Discovery of Correlations and Soft Functional Dependencies. SIGMOD Conference 2004: 647-658.

[Mark 05] V. Markl, N. Megiddo, M. Kutsch, T. Minh Tran, P. J. Haas, U. Srivastava: Consistently Estimating the Selectivity of Conjuncts of Predicates. VLDB 2005: 373-384.

[Mark 07] V. Markl, P. J. Haas, M. Kutsch, N. Megiddo, U. Srivastava, T. Minh Tran: Consistent selectivity estimation via maximum entropy. VLDB J. 16(1): 55-76 (2007)

[Redd 05] N. Reddy and J. R. Haritsa. “Analyzing plan diagrams of database query optimizers”, VLDB, 2005.

[Seli 79] P.G. Selinger, M.M. Astrahan, D.D. Chamberlin, R.A. Lorie, and T.G. Price, “Access path selection in a Relational Database Management System”, Procs. of ACM-SIGMOD (1979), pp. 23-34.

[Stil 01] M. Stillger, G. M. Lohman, V. Markl, M. Kandil: LEO – DB2’s LEarning Optimizer. VLDB 2001: 19-28.

[Stoy 08] J. Stoyanovich, K. A. Ross, J. Rao, W. Fan, V. Markl, G. Lohman: ReoptSMART: A Learning Query Plan Cache. Technical report.

[Vald 87] P. Valduriez, “Join indices”, ACM Trans. on Database Systems 12, 2 (June 1987), pp. 218-246.

[Vand 86] B.T. Vander Zanden, H.M. Taylor, and D. Bitton, “Estimating block accesses when attributes are correlated”, Procs. of the 12th Intl. Conf. on Very Large Data Bases (Kyoto, Sept. 1986), pp. 119-127.

Blogger’s Profile:
Dr. Guy M. Lohman is Manager of Disruptive Information Management Architectures in the Advanced Information Management Department at IBM Research Division’s Almaden Research Center in San Jose, California, where he has worked for 32 years. He currently manages the Blink research project, which contributed BLU Acceleration to DB2 for Linux, UNIX, and Windows (LUW) 10.5 (GA’d 2013) and the query engine of the IBM Smart Analytics Optimizer for DB2 for z/OS V1.1 and the Informix Warehouse Accelerator products (2007-2010). Dr. Lohman was the architect of the Query Optimizer of DB2 LUW and was responsible for its development from 1992 to 1997 (versions 2 – 5), as well as the invention and prototyping of Visual Explain, efficient sampling, the DB2 Index Advisor, and optimization of XQuery queries in DB2. Dr. Lohman was elected to the IBM Academy of Technology in 2002 and made an IBM Master Inventor in 2011. He was the General Chair for ACM’s 2013 Symposium on Cloud Computing and is the General Co-Chair of the 2015 IEEE International Conference on Data Engineering (ICDE). Previously, he was on the editorial boards of the “Very Large Data Bases Journal” and “Distributed and Parallel Databases”. He is the author of over 75 papers in the refereed academic literature, and has been awarded 39 U.S. patents. His current research interests involve disruptive machine architectures for Business Intelligence, advanced data analytics, query optimization, self-managing database systems, information management appliances, database compression, and autonomic problem determination.