Saturday, May 5, 2018

Analogical Reasoning

An analogy is commonly understood as finding a similarity between two arguments or groups of arguments. Besides similarity, according to Aristotle, an example can be used to draw an analogy. For instance, we can say that the Sun, like other stars, has a life cycle. In this case we are saying the Sun is a star and, like other stars, it will pass through various life-cycle phases. A more straightforward analogy is: the inverse square law of gravitation is analogous to the inverse square law of electrostatic force. Here we are referring to the form of the equation that defines gravitational force versus magnetic or electrostatic force. It goes without saying that the target of an analogy is not a complete replica of the source. Suppose the source of an analogy is "The theory of thermodynamics predicts increased entropy in the universe", and its target is "The theory of evolutionary programming is based on the entropy of the search space." Here the source and target are as far apart as they can be until an analogy is created.
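The structural similarity between the two inverse square laws can be made literal in code: both are instances of the same function, differing only in the constant and the quantities plugged in. A minimal sketch (the constants are standard SI values; the function name is my own):

```python
import math

# Both Newton's law of gravitation and Coulomb's law have the same
# syntactic form: a constant times the product of two quantities,
# divided by the square of the distance between them.

def inverse_square_force(constant, q1, q2, r):
    """Generic inverse square law: F = k * q1 * q2 / r^2."""
    return constant * q1 * q2 / r**2

G = 6.674e-11   # gravitational constant (N m^2 / kg^2)
k = 8.988e9     # Coulomb constant (N m^2 / C^2)

# Gravitational force between two 1 kg masses 1 m apart
f_gravity = inverse_square_force(G, 1.0, 1.0, 1.0)

# Electrostatic force between two 1 C charges 1 m apart
f_coulomb = inverse_square_force(k, 1.0, 1.0, 1.0)

print(f_gravity)  # -> 6.674e-11
print(f_coulomb)  # -> 8988000000.0
```

The analogy lies entirely in the shared form of the equation; the domains (masses vs. charges) remain distinct.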

Material and Causal Similarities

Suppose we have the following comparisons between the forces of gravitation and electromagnetism:

  Gravitation                                  Electromagnetism
  1. Applies to large objects                  Applies to charged particles
  2. Predicts force as an inverse square law   Predicts force as an inverse square law
  3. Uses mass as a measure                    Uses charge as a measure

Items 1 and 3 imply a similarity between material properties, and item 2 implies a possible causal similarity. However, neither kind of similarity is necessary or sufficient. For instance, in mathematics an analogy can be created between a square and a cube without any causal similarity.

Let us say the source has a set of attributes A1, A2, ..., An, and the target has a set of attributes B1, B2, ..., Bn that can be mapped to their counterparts (i.e., Bi -> Ai). Then we can say a positive analogy has been drawn between the source and target. A negative analogy is when the A attributes fail to hold in the target or the B attributes fail to hold in the source. A single negative analogy, if it is strong enough, can rule out any number of positive analogies. This is the case with electrical conduction and fluid flow, which share many similarities, such as the correspondence between Ohm's law and Poiseuille's law, but do not match when it comes to conservation, resulting in the negation of the analogy.
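This notion of positive and negative analogy can be sketched as attribute mapping in code. The sketch below is my own toy formalization, not an established algorithm; the attribute triples for the conduction/flow example are invented for illustration:

```python
# An analogy maps each source attribute to a target attribute. It is
# positive when every mapped pair holds in both domains; a single strong
# mismatch yields a negative analogy that can defeat the whole mapping.

def classify_analogy(mapping):
    """mapping: list of (source_attr, target_attr, holds_in_both) triples."""
    negatives = [(a, b) for a, b, ok in mapping if not ok]
    return ("negative", negatives) if negatives else ("positive", [])

# Electrical conduction vs. fluid flow: many correspondences hold,
# but the conservation behavior does not match.
conduction_vs_flow = [
    ("Ohm's law", "Poiseuille's law", True),
    ("conductor", "pipe", True),
    ("conservation behavior", "conservation behavior", False),  # the mismatch
]

print(classify_analogy(conduction_vs_flow))
# -> ('negative', [('conservation behavior', 'conservation behavior')])
```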

Where is analogy used?

Analogy can be used to generate a hypothesis a priori for further testing. If we know under what conditions the Dow turns bearish, we can look for similar conditions in a different stock market such as the Nifty. Suppose a presidential speech made the Dow bearish; we can then offer a hypothesis like "When the premier of a country makes a negative speech about a group of companies traded in that country, then that country's stock market will turn bearish". This is a convenient way to single out hypotheses for testing from the universe of possible hypotheses.

Plausibility vs. Probability

Plausibility implies that a strong analogy exists between a source and a target based on a cursory examination. For example, you can explain the earnings of an average athlete using a bell curve, signifying that there will be diminishing returns as age progresses. The plausibility is derived from extant knowledge about bell curves, which show the statistical distribution of one variable with respect to another (income and age in this case). The distinction between plausibility and probability is that the former need not be conditional on a priori probabilities but has its own distinguishing criterion, which can be called default reasoning.

Hume's 'no free lunch'

Hume (1711–1776) pointed out that 'even after the observation of the frequent or constant conjunction of objects, we have no reason to draw any inference concerning any object beyond those of which we have had experience'. This applies to induction, which takes analogical reasoning into the unknown territory of over-generalization. Suppose we study the spending behavior of consumers in a given economy and, prima facie, draw an analogy with the rate of depression among clinical cases. The induced version of it is: the greater the rate of clinical depression, the greater (or lesser) the consumer spending in that economy. The induced rule is not tenable until we have additional knowledge about consumers, clinical depression, and the relationship between the two. If, in the absence of a valid theory, we statistically correlate the two categories of people, there is no way to tell whether they share other properties (e.g., socio-economic class, height, or weight).

Design Patterns and Analogy

Design patterns are a favorite theme of computer programmers across continents, who stumble into the same lines of code in a way that can be explained as structural analogy. When we write code, regardless of the programming language used, we tend to follow a familiar pattern of logic. Suppose someone wrote a Java program to connect to a database, retrieve some data, process it, and close the connection. If they were to write the same program in another language, say C++, the logic would be the same. The important thing to note here is the release of the connection: if it is drawn from a pool of connections, the program will run out of connections after a few tries unless each one is released. In analogous terms, if we have a limited resource that we would like to share among multiple users, we should ensure that the resource is freed before someone else can use it. One can illustrate this with a hotel example, where tenants possess a key to enter a room and must return the key to make the room available for another tenant's use.
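The acquire-use-release pattern described above can be sketched with Python's context-manager protocol, which guarantees the release even if the use fails. The `ConnectionPool` class here is a hypothetical toy, not a real library:

```python
from contextlib import contextmanager

class ConnectionPool:
    """A toy pool of a fixed number of reusable connections."""
    def __init__(self, size):
        self.available = [f"conn-{i}" for i in range(size)]

    @contextmanager
    def connection(self):
        if not self.available:
            raise RuntimeError("pool exhausted")
        conn = self.available.pop()        # acquire (take the hotel key)
        try:
            yield conn                     # use the resource
        finally:
            self.available.append(conn)    # release (return the key)

pool = ConnectionPool(size=2)
for _ in range(5):                         # more uses than connections:
    with pool.connection() as conn:        # works only because each use
        pass                               # releases before the next acquire
print(len(pool.available))  # -> 2
```

Without the `finally` release, the fifth iteration would raise "pool exhausted", which is exactly the failure mode the paragraph describes.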

Analogical Reasoning and Induction

Normally, induction is when you infer a general proposition from many particular observations. One can observe monkeys across the continents and induce the proposition that monkeys are prehensile. The drawback is that if there is even one monkey species whose tail does not match our observations, we have to draw an exception. This is the problem we run into when inducing rules in AI: for every induced rule we have to record its exceptions. With analogical reasoning we bring two disparate domains into focus and can avoid over-generalization. This is the theory behind case-based reasoning.

When we call customer service about our broken dishwasher, chances are the company already knows about the problem. Based on our account of it, they can search their database and try to find a case that is analogous to the current one. In practice they may not find an exact match, in which case they tweak the closest case to explain all of the symptoms we are experiencing. Suppose we notice that our dishwasher is making too much noise, and the best match our database search yields is a case about a dishwasher that was leaking. Putting two and two together, one can say the rubber lining of the door, which also acts as a sound barrier, is not tight enough. While this is not as grandiose as Priestley's analogy between electrostatic force and gravitation to derive the inverse square law, it is nevertheless a common use case of analogical reasoning. When computers draw analogies, they do so based on the structure or syntax of the similarities. It would be more powerful if our case-based learner could retrieve a case from noise-proofing in windows and apply it to the dishwasher by using an attribute common to both: a hermetic seal.
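The retrieve-and-tweak step of case-based reasoning can be sketched as nearest-case search over attribute sets. All the cases and tags below are invented for the dishwasher illustration:

```python
# A toy case base: each past case is a set of symptom/attribute tags.
cases = {
    "leaking door": {"door", "rubber lining", "seal"},
    "broken pump": {"pump", "no water", "motor"},
    "clogged filter": {"filter", "drain", "standing water"},
}

def retrieve(symptoms):
    """Return the stored case sharing the most attributes with the query."""
    return max(cases, key=lambda name: len(cases[name] & symptoms))

# "Too much noise" near the door: no exact match exists in the case base.
query = {"noise", "door", "seal"}
best = retrieve(query)
print(best)  # -> 'leaking door' (overlaps on door and seal)

# The "tweak": the rubber lining that stops leaks also acts as a sound
# barrier, so a loose lining explains the noise as well as the leak.
```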

Analogy as precursor to induction

Consider the following syllogism:

All mammals deliver children in situ
A human is a mammal
So humans deliver children in situ

Where does one get the premise "All mammals deliver children in situ"? Many suggest that it is through induction. Suppose we see elephants deliver baby elephants, horses deliver baby horses, and so on. We give them a new class called "mammals" and place all mammals in a theoretical framework. Some argue instead that it is in the definition of mammals that we capture the essence. This has the advantage that an animal that delivers live young but does not feed milk to its newborns need not count as a mammal. This can be explained more easily with the syllogism "All birds fly; a crow is a bird; so crows fly". What happens when we find a bird like the kiwi that doesn't fly? It leads to two possibilities: either our inductive method needs a rework or the definition of a bird is wrong. It is easier to fix the latter, as induction requires collecting a lot of evidence and then coming up with a theory that explains all of it.
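The bookkeeping that induced rules force on us shows up directly in code: the rule stays fixed while the exception list grows. This is a toy sketch of default reasoning, not from the original text; the exception set is my own:

```python
# An induced rule ("all birds fly") with an explicit exception list.
exceptions = {"kiwi", "penguin", "ostrich"}   # flightless birds we know about

def flies(bird):
    """Default rule: a bird flies unless it is a recorded exception."""
    return bird not in exceptions

print(flies("crow"))  # -> True
print(flies("kiwi"))  # -> False

# Every newly discovered flightless bird forces an edit to `exceptions`;
# the induced rule itself never gets any truer.
```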

Scientists frequently use induction, based on analogies, to create new theories; then, to answer a specific question, they use deduction. Contrast this with case-based reasoning, where there is no inductive step but cases are tweaked to explain a set of observations. This relieves legal scholars and computer scientists alike, when they use case-based reasoning, from the time-consuming evidence gathering of the inductive step.

Analogies as Explanations

Aesop's fables are examples of allegories and metaphors used to draw analogies between humans and other domains. The Panchatantra tales are Indian allegories that serve a similar purpose. Used this way, analogies elucidate profound concepts that are otherwise hard to explain. How can we explain the meaning of creativity to a child? Tell them a tale of a fox that talked its way out of a lion's den and they will understand, albeit within the scope of the tale. Some scientists hold that every new theory should bear some analogy to its predecessors. Viewed this way, scientific pursuit is a large continuum with one analogy piled on another. If Bohr's model of the atom was inspired by Kepler's description of planetary motion, then it makes sense. But where does the analogy end? Obviously the atomic nucleus does not behave like the Sun, which is not only the center of the solar system but is itself revolving around the center of the galaxy, and so on.

Analogies as reminders and synergies

Often we hear people say, "this reminds me of ..." What is the trigger for this? Among many viable options, analogical reasoning is the most common. One can tell a long story of how Hindu children are initiated into wearing a sacred thread to a person who summarizes it as: "that reminds me of my grandson's bar mitzvah!" What is the link here? They are exchanging stories of coming of age. Different cultures use different means to mark coming of age. In some tribal cultures the coming of age involves getting bitten by poisonous ants. The Sateré-Mawé, for example, require approximately 20 tocandira ant "inoculations" to ensure the highest level of protection against mosquitoes.

Analogies in Law

Lawyers use precedents to settle cases before they ever see the daylight of a courtroom. The reason is that either all of the available evidence reinforces the likely verdict, or there is no evidence to begin with. When we have some evidence that a burglar broke a glass window and entered our house, there is likely more than one precedent where a burglary happened in a similar way at a different house; the only issue is the cost of the material damage and its recovery. In cases where there is no hard evidence but circumstantial evidence is available, a precedent can be a powerful way to argue the case. Take, for instance, someone getting sick after eating a store-bought green vegetable. Even without hard evidence of the purchase at a particular store or of the presence of some bacterium, such as E. coli, causing the illness, the case can be settled out of court using a precedent, assuming there is no nationwide E. coli epidemic.

Machine Learners for Analogical Reasoning

Sowa and Majumdar describe a machine learner for analogical reasoning that came up with the following similarities between cats and cars:

Analogy of Cat to Car

  Cat         Car
  ----        ----
  head        hood
  eye         headlight
  cornea      glass plate
  mouth       fuel cap
  stomach     fuel tank
  bowel       combustion chamber
  anus        exhaust pipe
  skeleton    chassis
  heart       engine
  paw         wheel
  fur         paint

Each concept (cat and car in this case) is represented within a conceptual graph. Represented this way, the concept cat belongs in a conceptual graph beginning with animal, to which a dog also belongs. This conceptual graph in turn can be represented as a sub-graph of living beings, and so on. They then use three methods to draw analogies:

  • match by labels: if cat and dog were to be compared, the match is straightforward as they share many labels, viz. number of eyes, tail, etc.
  • matching subgraphs: the sub-graph of living beings can be split into plants and animals, which can then be compared
  • matching transformations: involves relating sub-graphs of one graph to sub-graphs of the other; for example, the sub-graphs of animals can be compared to the sub-graphs of automobiles
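The first method, match by labels, can be sketched as set overlap between two concepts' attribute labels. The label sets below are invented for illustration; Sowa and Majumdar's actual engine operates over full conceptual graphs, not flat sets:

```python
# Toy label sets for three concepts.
labels = {
    "cat": {"eyes", "tail", "fur", "legs", "heart"},
    "dog": {"eyes", "tail", "fur", "legs", "heart"},
    "car": {"wheels", "engine", "paint", "headlights"},
}

def label_similarity(a, b):
    """Jaccard similarity of two concepts' label sets (0.0 to 1.0)."""
    return len(labels[a] & labels[b]) / len(labels[a] | labels[b])

print(label_similarity("cat", "dog"))  # -> 1.0 (identical label sets)
print(label_similarity("cat", "car"))  # -> 0.0 (no shared labels)
```

Note that label matching alone scores cat vs. car at zero; it takes the subgraph and transformation methods to recover the head/hood, heart/engine correspondences in the table above.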

While the learner is based on the work of the American logician Charles Peirce, it depends on the WordNet knowledge base, a collection of concepts containing over 100,000 nodes. The time it takes to search among these nodes depends on the algorithm used: while their algorithm of N log N complexity takes a few seconds, an algorithm of N^3 complexity, presumably making an exhaustive search, could take more than 30 years.
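A back-of-the-envelope calculation shows how the quoted running times are consistent. Assuming (hypothetically) about one million elementary operations per second and N = 100,000 nodes:

```python
import math

N = 100_000                 # WordNet nodes, per the text
ops_per_second = 1e6        # assumed machine speed, for illustration

t_nlogn = N * math.log2(N) / ops_per_second    # N log N algorithm
t_cubic = N**3 / ops_per_second                # exhaustive N^3 algorithm

print(f"{t_nlogn:.1f} seconds")                  # -> 1.7 seconds
print(f"{t_cubic / (3600 * 24 * 365):.0f} years")  # -> 32 years
```

So under these assumptions the N log N search finishes in a couple of seconds while the cubic search takes on the order of decades, matching the "few seconds" vs. "more than 30 years" comparison above.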

Peirce invented a geometric-topological notation called existential graphs. Like Venn diagrams for sets, existential graphs serve as an underlying implementation of logic; presumably this has a biological basis. Existential graphs allow one to chart logical reasoning in its finest detail, making visible every single step in the reasoning process (as opposed to notations aimed at quick, results-oriented heuristics). An existential graph for the assertion "Tom believes that Mary wants to marry a sailor" is shown to illustrate the due diligence involved in representing concepts.

Representing concepts in a graph notation this way is a time-consuming process. It might better be accomplished by a neural net; still, one has to come up with a training set comprising a list of concepts and their interconnections. But once that is done, an efficient algorithm can traverse the graph, with the additional advantage of being able to visually represent the inference process. On the other hand, some think analogy in a computer is based on structural representations of concepts that are predisposed to analogies. Viewed this way, almost all analogical representations in computers share this defect: the representation is the key to a successful analogiser. So it is not clear whether the computer is any better than a child using Lego sets to create complex objects. Just as one can build any complex structure from Lego bricks, the syntax of computer representations can capture any similarity between concepts.

Data is to the Machine Learner as History is to Democracy

With the rise of social media there is a plethora of data to be mined and understood. A machine learner is perfectly suited to this task, where the data can be used, say, to train neural nets. If enough people share their data, it is theoretically possible to cure rare diseases and explain social behaviors alike. What machine learners need, in effect, is an open society where everything about a person's life can be known. Thus each person, with their likes, dislikes, favorites, preferences, academic background, credit history, and skills, serves as a datum for the learner. Democracies have come about because the history of failed nations is well known, each failed nation providing a datum for creating a more robust democracy. However, unlike history, which is painstakingly generated through archaeological studies and logic, the data harvested from social media is almost too trivial to be of use beyond generating product sales. Yet it raises concerns over the privacy of individuals participating on social platforms.

Online References

http://www.jfsowa.com/pubs/analog.htm

https://plato.stanford.edu/entries/reasoning-analogy/

https://wiki.eecs.yorku.ca/course_archive/2014-15/W/6339/_media/conceptual_graph_examples.pdf
