From Set Theory to Type Theory
Can you explain more about this? Yes Mike, that is reassuring! Thanks for a very nice post, Mike. There were various things I was going to say in response. Another was what Tom said here. In your post, you mentioned the univalence axiom. I find myself wanting to ask the same questions as Karol.
Asking people to change their foundational outlook is already enough of a big deal, without playing with the meaning of this extremely primitive concept! I wonder if structuralism actually commits you to univalence? Actually, you can do this with dependent types: if the type of arrows is dependent on the type of objects, then the composition operation has a dependent type.
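The dependency can be made explicit in a proof-assistant sketch; the names below are illustrative, not a fixed library interface:

```lean
-- A category whose type of arrows depends on its type of objects:
-- composition quantifies over objects, so its type mentions `Obj`.
structure Cat where
  Obj  : Type
  Hom  : Obj → Obj → Type
  id   : (a : Obj) → Hom a a
  comp : {a b c : Obj} → Hom b c → Hom a b → Hom a c
```

The point is precisely that the type of `comp` cannot even be stated without first mentioning the type of objects.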
Just from the comments to this post I see that this linguistic issue creates some confusion. To Karol, Tom, and anyone else feeling the same way: first of all, in set-theoretic foundations, we need to distinguish between equality and weaker forms of sameness. For instance, when I say that for all x, y: Secondly, you might be surprised at some of the things that even this restriction would allow. For instance, consider the type of natural number objects discussed above: this type is contractible, and therefore, in particular, it is an h-set.
Another nice example is the type of well-ordered sets. Thirdly, once I got used to the idea, I found it tremendously freeing. Read homotopically, an inhabitant of this type consists of a point a : A (the center of contraction) and a continuous deformation from the identity function of A down to the constant function at a : A, such that for all b:

Thanks for a detailed answer. I want to say that on the level of mathematics I find your arguments convincing, with some exceptions discussed below.
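Read as data, contractibility can be sketched in Lean-style notation (illustrative, not a fixed library definition):

```lean
-- Contractibility as structure rather than mere property: a center of
-- contraction together with, for each point, an identification with it.
-- In full HoTT `contract` would be a path/homotopy, not just an equality.
structure IsContr (A : Type) where
  center   : A
  contract : ∀ b : A, center = b
```

The type of natural number objects being contractible then means exactly that it has a distinguished inhabitant to which every other inhabitant is identified.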
But what I really had in mind are practical considerations. However, the best you can hope for is that it will coexist with other approaches. Do you really want your notion of equality to compete with the traditional one? I am afraid that such a competition would only alienate many potential users of HoTT.
This rather pessimistic point of view comes from an observation that the differences between the two notions seem to be even subtler than what we discussed so far. Overcoming those subtleties requires a lot of motivation, effort and getting used to new ways of thinking. I think the problem here is that if you wrote this statement formally in HoTT you would be forced to write down the definition of the contraction of the type of natural number objects, but if you say it informally in English this specification is lost somewhere between the lines.
Another example of this phenomenon has its origin in grammar. Again we are in a situation where writing this formally in HoTT would force us to write such an isomorphism explicitly. I actually followed a similar line of thought in order to argue something opposite. While there is no formal distinction between those interpretations, this choice of words definitely highlights differences in the intended way of using the notion of group.
But even if this is true, do you really want to attempt both foundational and philosophical revolutions at the same time? I feel that trying to overthrow the traditional concept of equality would only hinder the efforts to promote type theoretic foundations. Thus for me an identity type is a homotopy theoretic concept and not a foundational one. Perhaps if I were thinking about those things in more foundational terms I would be more likely to accept your point of view. Thanks for the thoughtful reply! The notion of equality in type theory is different from the traditional one, no matter how we slice it.
For me, I think one of the biggest hurdles to get over was learning to think in what Andrej calls proof-relevant mathematics. And so every proof you write is a program that can be executed, transforming input witnesses to output witnesses. This is where a big part of the power of type theory comes from. Once I really internalized that, then it started to seem perfectly natural that there could be multiple different proofs — different witnesses — to an equality.
Univalence just says that in some important cases, those witnesses are isomorphisms. Of course, it may happen, sometimes, that every witness to a given equality is provably equal to any other such witness, i.e. the equality is an h-proposition. More importantly, even in this case, equality is still proof-relevant: for instance, the assertion that a given function f is an equivalence is an h-proposition, but a witness to that fact includes the data of an inverse to f, and we often care what the inverse of a function is.
This issue is deeper than just the choice of one word over another. It seems that English grammar is simply insufficient for proof-relevant mathematics. We would need to invent a new part of speech that is to an adjective, roughly speaking, as a general type is to a proposition, and whose usage automatically entails specifying a witness to whatever it describes. If I really knew how to do it I would gladly wave the classical strict notion of equality goodbye. For example, I would like to have a ready-to-use formalization of general homotopy colimits in HoTT.
At the moment I only know how to deal with homotopy colimits in specific models which heavily rely on the traditional notion of equality. I would even say that this is the main reason which makes me cling to strict equality. I am more optimistic. But I believe we will get to that point sometime in this century, and then pragmatism will gradually take over. Probably there will be a large number of people who will never accept univalent foundations, but eventually the younger generations will grow up with the ideas and the revolution will be complete.
I am intrigued by your idea of inventing a new part of speech! This is probably just me, but I find this kind of analysis unattractively messianic. It really does exactly resemble the confident predictions of revolutionaries years ago about New Socialist Man. Something about foundations makes people jump immediately from personal enthusiasm to a desire to convert the entire world to a new world view. If there is to be any revolution, it should probably be an overthrowing of the view that any one foundation ought to suffice for all of mathematics.
Was the whole first paragraph a joke? We need a smiley face scoping operator to remove this kind of ambiguity.
His n different notions of the reals are n different concepts, so no conflict between them is possible, and any conflict is an illusion. No, the rest of the paragraph was mostly serious. There is one definition of the reals which behaves very differently depending on the foundational system you are in. Actually, of course, there is more than one definition of the reals, but each of them also has this behavior.
Thinking of foundational systems as analogous to rings is a very good analogy. To heighten the semantic confusion, I would call myself a pluralist, and yet I think there should be roughly one foundation of mathematics. That foundation is the list of formal systems we agree are consistent. If we agree on the list of consistent systems, we agree on foundations.
January 7, 2013
Most arguments about foundations prove, upon careful analysis, not to be arguments at all. This has been the great unsung discovery of logic and metamathematics in the 20th century. The Brouwerian reals and reals with nilpotents all sit comfortably inside a big shared universe. As a side note, see also this. Time will tell whether HoTT is a flash in the pan or is recognized as scoring notable successes. I have no opinion on homotopy type theory or univalent foundations.
And you can always heighten the confusion by using words differently than everyone else. Actually, I want to apologize in general, Mike. We should take a solemn vow to only discuss foundations in Loglan or something like that. There are actually two aspects of realism and pluralism in mathematics, both are relevant. The first one is purely mathematical, or shall I say meta-mathematical, where we see that there are different ways of setting up a mathematical foundation classical vs. The other one is sociological: These are also useful!
The two kinds of relativism are entangled in various ways. Keep in mind then that there are two extreme points: For me, the magic is in learning how to be able to switch in my mind from one to another kind of mathematics.
And by this I mean not only technical proficiency but also the mathematical intuition that allows a mathematician to have a feel for things. On a less personal level, mathematicians who are not stuck in a single world of mathematics benefit from the plurality of worlds in a number of ways. It is like comparing the experiences and opinions of a man who has traveled around the world in comparison to someone who spent his entire life in a town in the Midwest. But there is a further thought you might have: The denial of such a possibility is relativism: By virtue of using the mathematical method you will make commitments that will filter your view of mathematical worlds through the view of a particular one.
It is like physics: one possible reaction might be defeatist: he says it is all necessarily a mess, a matter of opinion. Wait, he says there is no such theory. Hmm, that is a nice challenge for logicians! You seem to mean something stronger by it. Didn't Quine argue that this in fact applies to any system of higher-order logic? In fact, he went as far as to argue that such logical systems, which lack such properties as soundness and completeness, do not qualify as logic at all.
I automatically translate everything into set language. How can a habit be worth more than the immense beauty and richness of those other worlds? Nothing in my proposal requires a stand on whether ZFC is consistent. Well, what you said in your previous comment was a r. I assumed you meant r. And in what meta-metatheory do you wish to discuss the question of which formulas are asserted by Peano Arithmetic to be theorems of ZFC? If ZFC is consistent, then no such integer exists.
I thought maybe your objection was that I was trying to slip ZFC in as a metatheory. We know these things because we can produce proof witnesses. But that is by necessity a statement not just about the original system, but the original system plus the system you are embedding it into. For particular theorems with particular explicit proofs, we can give an absolute meaning to being a theorem of a formal system. Well, you might also want to prove that something is a theorem without exhibiting an explicit proof witness. However, I feel like this argument is starting to repeat itself, so perhaps we should have pity on the bystanders and desist.
I agree, recursively enumerable sets need not be recursive. So if you have a translation function Trans that translates theorems and proofs, a proof predicate for the original theory, Proof1, and one for the metatheory, Proof2, then the recursively enumerable list of theorems consists of the ones that satisfy Proof2(Trans(X), Trans(Y)), which are just Trans(X) for X in the original list. So there might be some Y which is a theorem of the second system (or, I suppose, whose translation into the second system is a theorem of the second system) but which is not a theorem of the first system.
But the only theorems that are really theorems of the first system are the ones that have proofs in the first system, all of which translate to the second system. So the second system can prove a superset of the theorems of the first system. But at the level of recursively enumerable sets, the theorems of the first system translate into a recursively enumerable set in the language of the second system.
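The point about recursively enumerable theorem sets can be made concrete with a toy system; everything below (the axioms, the single inference rule, the translation) is invented purely for illustration:

```python
# Toy formal system: axioms are strings, and the one inference rule
# concatenates two theorems. Proof search enumerates the theorem set,
# which makes it recursively enumerable by construction.
AXIOMS = {"a", "b"}

def theorems(depth):
    """Breadth-first proof search: everything derivable in `depth` steps."""
    found = set(AXIOMS)
    for _ in range(depth):
        found |= {x + y for x in found for y in found}
    return found

def trans(x):
    """A 'translation' into a second system that just renames symbols."""
    return x.upper()

# "ab" has an explicit two-step proof, so it is absolutely a theorem,
# and its translation lands among the translated theorems.
assert "ab" in theorems(1)
assert trans("ab") in {trans(t) for t in theorems(1)}
```

The translated theorems form a recursively enumerable set in the language of the second system, just as described above.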
A couple comments ago you started talking about translating from one system into another. Could you specify exactly what these two systems are in the relevant example so that I know how to interpret them? The things we say in the metatheory are statements about the object theory, not statements that we might also have made in the object theory. I used to think that first-order logic was a grand unifying theory. But what about formal systems?
This allows for a story to be told of how previous systems, material and structural, were as successful as they were, while pointing us to the next chapter of the story. Urs is doing great things with type theory and physics (see, e.g., his recent work). Perhaps I will always still be figuring that out. Maybe there is no one best way, either, so we just have to present it in lots of different ways and hope that everyone can find something that resonates with them.
Since we are talking about presenting type theory to a general mathematical audience, I have a request in this spirit. However, most mathematics is being done classically. So what is a good way of making the logic of type theory classical? And of course it rules out lots of interesting categorical models. I think a good number of people using Coq in practice are happy with this axiom.
It does have to be restricted to hprops, otherwise it contradicts univalence; thus it seems especially odd from a proof-relevant point of view. By the way, if we want AC do we just assume an axiom saying that every surjection of h-sets has a section? The only reference I know is Section 3. I like this idea because it treats the law of excluded middle on an equal footing with the rest of type theory while the first approach just throws it in as an additional assumption that stands out as something odd.
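Spelled out, such an axiom might look as follows. This is only a sketch: in HoTT the existential in the hypothesis would be a propositional truncation, and A and B would be restricted to h-sets.

```lean
-- Choice as "every surjection has a section", sketched with plain
-- quantifiers; names and phrasing are illustrative.
def SectionChoice : Prop :=
  ∀ (A B : Type) (f : A → B),
    (∀ b : B, ∃ a : A, f a = b) →       -- f is surjective
    ∃ g : B → A, ∀ b : B, f (g b) = b   -- f admits a section
```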
There is of course a question of whether it has some reasonable interpretation for general types of HoTT. Another option is to work with algebras for the double-negation monad. But as far as I know, all non-trivial models are lop-sided one way or the other, like the models for Call-by-Push-Value, which is also relevant. Mike gives one approach, adding classical axioms. There are other approaches as well. Another approach is to add control operators to the term language of a type theory. These allow one to prove classical results. There has been a lot of work in this area.
It is still being studied today. Here is a non-exhaustive list of classical type theories. However, subject reduction failed. Much work followed to fix this. It is straightforward to prove the law of excluded middle. For a proof see this. The language enjoys subject reduction. This is the only known classical dependent type theory with control. This makes the dualities of the classical sequent calculus LK explicit. They use the dual of implication, called subtraction. See this and that. That calculus does not have implication as a primitive, but it can be defined.
There is a lot of other work on things like delimited control. I do not list everything here, but those listed above are some of the highlights of the body of research. There is a lot of work still needing to be done. One main problem with control operators is that canonicity does not hold. This is part of the focus of a lot of ongoing work, such as my own.
In addition, notice that all of the above work is on simply typed calculi. Extensions to dependent types are still an open problem. Thanks for all the suggestions! I realized after I wrote my last comment that I also know another way, also involving double negation. Computationally, this is like using a continuation-passing style. Both the identity modality and the hprop modality do satisfy this principle.
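The continuation-passing reading of double negation can be sketched in ordinary code: a "doubly negated A" is a function that takes a continuation expecting an A. The names below are illustrative, not a standard API.

```python
# ¬¬A, read computationally: a function that feeds an A to a continuation.
def dn_intro(a):
    """A -> ¬¬A: wrap a value in continuation-passing style."""
    return lambda k: k(a)

def dn_bind(m, f):
    """¬¬A -> (A -> ¬¬B) -> ¬¬B: sequencing in CPS, i.e. the monad bind."""
    return lambda k: m(lambda a: f(a)(k))

# Running a CPS computation with the identity continuation extracts a value.
assert dn_intro(3)(lambda x: x) == 3
assert dn_bind(dn_intro(3), lambda a: dn_intro(a + 1))(lambda x: x) == 4
```

This is exactly the shape of the double-negation translation: ordinary values become CPS-wrapped values, and ordinary sequencing becomes `dn_bind`.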
This came up recently in another discussion. Actually, I just meant the double-negation endofunctor but wrote "monad" (the monad-algebras are rarer, I guess). But you bring up some great points regarding predicativity: I was thinking that when we want to instantiate a particular answer type internally, but not decide on any particular one in advance, then we need a quantifier somewhere.
Now I remember that I have actually seen a solution before: It is definitely almost an instance of the Yoneda lemma. But I thought you needed to add a naturality condition in order to get an isomorphism. Steve Awodey and Andrej Bauer have been working on this; Andrej gave a talk about it at IAS last year, and the big deal was how to incorporate naturality and coherence for that naturality for types of higher h-level.
I am not a homotopy type theorist, but I am curious about this stuff. I have seen some of the statements such as those above from HoTT, and they resemble formal statements of first-order logic and such, and seemingly written for machine processing. Is a computer necessary to do mathematics in HoTT? A very interesting and important question! So far, most mathematics in HoTT has been done in a computer-assisted way. The idea is that this would be a way of writing mathematics which stands in the same relation to computer-formalized HoTT that everyday written mathematics nowadays stands to say ZFC.
I enjoyed this description of type theory. It made me feel I might understand this subject some day. My main interest in foundational studies is not that I am concerned about inconsistencies arising in mathematics. I would like to understand applications to computer science and specifically to the design of systems for computer algebra and combinatorics. I have read various postings about computer science, automated theorem proving, lambda calculus. None of this, unless I missed something, covers object oriented languages. What is the logic of object oriented languages? This suggests to my naive mind, that it could be useful to have a foundation for these.
I expect that the list of axioms would be longer than the alternatives already discussed here. It seems that what causes headaches is implementing coercions. At a basic level, objects are just Sigma-types with some syntactic sugar. When you get into fancier stuff then it gets more complicated.
You mean, a foundation for concrete categories? If we agree that potential inconsistencies are not the issue then can I clarify what are the issues? As I understand it there are two reasons for getting involved in this discussion. One is pedagogical; the aim here is to give some foundations that can be presented to students. This is laudable and I think I understand criteria for judging the various proposals. The other reason is to try and capture what it is that mathematicians do.
I am much less clear about how to assess different proposals and so I find this more intriguing. The traditional approach is to look in the research literature. This is a top down approach and leads to work on automated theorem proving or automated proof verification. An alternative is to look at contemporary computer systems which now incorporate significant parts of mathematical knowledge. This is more of a bottom up approach. My feeling is that all of the current computer algebra systems are struggling to scale up.
My vision for concrete categories comes from a position of ignorance. I accept that all reasonable foundations allow an implementation of concrete categories. What I had in mind was that if I was asked to write out the definition of a concrete category then I would use set several times and with several meanings. I think I would regard the set of objects as a material set and each hom-set as a structural set. This would then lead to a foundation which would incorporate both aspects. This would be unsatisfactory from a pedagogical perspective as it would be unnecessarily complicated.
However this may be closer to what working mathematicians are currently doing with large scale software. One of the main points of this post is that type theory is the natural way to incorporate material-sets and structural-sets into a single foundation. For instance, the category of groups has type of objects given by. Why do you say you would make the hom-sets structural? The morphisms in concrete categories are generally just as concrete as the objects. The idea of axiomatic set theory is to write down a system with as few primitives as possible that we believe to be consistent, and can represent mathematics.
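In type theory, the type of objects of the category of groups can be written as a Σ-type: a carrier together with its operations and axioms. A sketch in Lean-style notation, with illustrative names:

```lean
-- The type of objects of the category of groups, as a Σ-type
-- packaged into a structure: carrier + operations + axioms.
structure GroupObj where
  carrier : Type
  mul     : carrier → carrier → carrier
  one     : carrier
  inv     : carrier → carrier
  assoc   : ∀ a b c, mul (mul a b) c = mul a (mul b c)
  one_mul : ∀ a, mul one a = a
  inv_mul : ∀ a, mul (inv a) a = one
```

Here the "material" data (the carrier) and the "structural" data (the operations and laws) live together in one package.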
You can take all kinds of things as primitives, if you want to. The objection is to the idea that you must use types, which commit you to a tightly-controlled set of formulas. Set theory can represent typing information as unary predicates, which then can be used in conjunction with ordinary logical connectives. Assembly language has a very small number of primitives which are easy to implement and debug in a processor, but which are sufficient to represent all programs. In set theory, everything is a set; in assembly language, everything is a bit sequence.
I think this analogy is also useful for understanding the advantages of type theory. Yes, in assembly language you can carry around typing information by hand, or just remember that register AX is currently a memory address while register BX is an ASCII code. On the other hand, this analogy breaks down under further inspection, because as I argued in the main post, set theory actually contains no fewer primitives than type theory does. Type theory uses these same logical primitives, extended to operations on types rather than merely operations on propositions.
So even if your goal is to reduce mathematics to the fewest primitives possible, type theory is just as good as set theory. Material set theory with classical first-order logic requires only four primitives: Everything else can be encoded using these. It always bothers me a little bit when I read that Boolean algebras can be defined by a single binary operation such as NAND, since this would seem to allow that a Boolean algebra can be empty.
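The NAND remark is easy to check directly: all the usual Boolean connectives are definable from NAND alone, as a quick truth-table check confirms.

```python
# NAND as the single primitive; the other connectives defined from it.
def nand(a, b):
    return not (a and b)

def neg(a):
    return nand(a, a)          # ¬a  =  a NAND a

def conj(a, b):
    return neg(nand(a, b))     # a ∧ b  =  ¬(a NAND b)

def disj(a, b):
    return nand(neg(a), neg(b))  # a ∨ b  =  ¬a NAND ¬b

bools = (False, True)
assert all(neg(a) == (not a) for a in bools)
assert all(conj(a, b) == (a and b) for a in bools for b in bools)
assert all(disj(a, b) == (a or b) for a in bools for b in bools)
```

Note that this definability says nothing about whether the empty carrier is allowed; that is a separate question about the axioms, as discussed below.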
But that attitude definitely seems to belong to the past. If you have a signature with no symbols, then the empty set is the initial object. The reason for rejecting the empty carrier before was that a bunch of tautologies of first-order classical logic implicitly assume a non-empty carrier. I think something similar holds for intuitionistic logic, but I know less about that. Categorical logic had something to do with recognition of the correct handling of empty domains in logic, both intuitionistic and classical, but generally speaking, recognition and acceptance of empty structures in mathematics is something much more broadly cultural.
In general, I agree with Todd. As happens all the time, as in the example of sheaves over a space. And for some reason, I am not ashamed to admit that. Tautology is a function of the logical system. The real reason for imposing it in classical FOL is that the rules for converting a formula to prenex normal form are not valid for empty domains.
Since analyzing formulas in terms of their alternating quantifiers is a central technique in classical predicate logic, people just throw out the empty domain rather than handle it as a special case. Back when I was a high school student, and therefore prone to make impulsive decisions, I tried to read Principia Mathematica. It was a long time before I looked at formal logic again… Yes, and as I mentioned in the post, type theory can be presented with only three primitives: But as for competing over who has the fewest primitives: can anyone get down to two?
However, I think the advantages of teaching students to think in a typed way among which is the fact that you can simultaneously be teaching them to write computer programs, prove theorems, prove theorems about their programs, and write programs to implement their theorems outweigh the advantages of giving them this somewhat-illusory warm and fuzzy feeling. An article from the latest issue of AMS Notices is tangentially relevant: Boute, How to Calculate Proofs: Bridging the Cultural Divide pdf link.
The article is worth reading for its emphasis on rigorous logical reasoning aided by symbolic manipulation, similar to the style that interactive proof assistants naturally support. For instance, his notation uses a function abstraction operator v: S p as the constant function on S restricted to the subset of S satisfying p v. I see these foundational issues as fairly severe drawbacks in an otherwise very good article.
Then I got totally bored and confused by the discussion after.

But if you consider 2^N to consist only of the functions we can define, you still get the same diagonalization, but suddenly we choose not to interpret it as meaning 2^N is uncountable.
…and, for an even more striking perspective, see the work by Hamkins, Linetsky and Reitz on pointwise definable models.

Contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals.
Mathematical topics typically emerge and evolve through interactions among many researchers. Set theory, however, was founded by a single paper in 1874 by Georg Cantor. Since the 5th century BC, beginning with Greek mathematician Zeno of Elea in the West and early Indian mathematicians in the East, mathematicians had struggled with the concept of infinity. Especially notable is the work of Bernard Bolzano in the first half of the 19th century. Cantor's work initially polarized the mathematicians of his day.
While Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. Cantorian set theory eventually became widespread, due to the utility of Cantorian concepts, such as one-to-one correspondence among sets, his proof that there are more real numbers than integers, and the "infinity of infinities" ("Cantor's paradise") resulting from the power set operation. This utility of set theory led to the article "Mengenlehre" contributed in 1898 by Arthur Schoenflies to Klein's encyclopedia.
The next wave of excitement in set theory came around 1900, when it was discovered that some interpretations of Cantorian set theory gave rise to several contradictions, called antinomies or paradoxes. Bertrand Russell and Ernst Zermelo independently found the simplest and best known paradox, now called Russell's paradox. In 1899 Cantor had himself posed the question "What is the cardinal number of the set of all sets?" Russell used his paradox as a theme in his 1903 review of continental mathematics in his The Principles of Mathematics.
The momentum of set theory was such that debate on the paradoxes did not lead to its abandonment. The work of Zermelo in 1908 and the work of Abraham Fraenkel and Thoralf Skolem in 1922 resulted in the set of axioms ZFC, which became the most commonly used set of axioms for set theory. The work of analysts such as Henri Lebesgue demonstrated the great mathematical utility of set theory, which has since become woven into the fabric of modern mathematics.
Set theory is commonly used as a foundational system, although in some areas—such as algebraic geometry and algebraic topology— category theory is thought to be a preferred foundation. Set theory begins with a fundamental binary relation between an object o and a set A. Since sets are objects, the membership relation can relate sets as well. A derived binary relation between two sets is the subset relation, also called set inclusion.
As implied by this definition, a set is a subset of itself. For cases where this possibility is unsuitable, the term proper subset is defined. Just as arithmetic features binary operations on numbers, set theory features binary operations on sets. Some basic sets of central importance are the empty set (the unique set containing no elements; occasionally called the null set, though this name is ambiguous), the set of natural numbers, and the set of real numbers. A set is pure if all of its members are sets, all members of its members are sets, and so on.
In modern set theory, it is common to restrict attention to the von Neumann universe of pure sets, and many systems of axiomatic set theory are designed to axiomatize the pure sets only. There are many technical advantages to this restriction, and little generality is lost, because essentially all mathematical concepts can be modeled by pure sets. Sets in the von Neumann universe are organized into a cumulative hierarchy , based on how deeply their members, members of members, etc.
The rank of a pure set X is defined to be the least upper bound of all successors of ranks of members of X. Elementary set theory can be studied informally and intuitively, and so can be taught in primary schools using Venn diagrams. The intuitive approach tacitly assumes that a set may be formed from the class of all objects satisfying any particular defining condition.
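For hereditarily finite pure sets this rank can be computed directly, modeling sets as nested frozensets (a toy model for illustration; for infinite sets the least upper bound is a genuine ordinal supremum):

```python
# Pure hereditarily finite sets modeled as nested frozensets.
def rank(s):
    """Least upper bound of successors of members' ranks (finite case)."""
    return 0 if not s else max(rank(m) + 1 for m in s)

empty = frozenset()
one = frozenset({empty})        # {∅}, the von Neumann ordinal 1
two = frozenset({empty, one})   # {∅, {∅}}, the von Neumann ordinal 2
assert rank(empty) == 0
assert rank(one) == 1
assert rank(two) == 2
```

As the asserts suggest, each von Neumann ordinal sits at exactly its own rank in the cumulative hierarchy.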
This assumption gives rise to paradoxes, the simplest and best known of which are Russell's paradox and the Burali-Forti paradox. Axiomatic set theory was originally devised to rid set theory of such paradoxes. The most widely studied systems of axiomatic set theory imply that all sets form a cumulative hierarchy. Such systems come in two flavors, those whose ontology consists of:.
The above systems can be modified to allow urelements, objects that can be members of sets but that are not themselves sets and do not have any members. NF and NFU include a "set of everything," relative to which every set has a complement. In these systems urelements matter, because NF, but not NFU, produces sets for which the axiom of choice does not hold.
Yet other systems accept classical logic but feature a nonstandard membership relation. These include rough set theory and fuzzy set theory , in which the value of an atomic formula embodying the membership relation is not simply True or False. The Boolean-valued models of ZFC are a related subject. Many mathematical concepts can be defined precisely using only set theoretic concepts. For example, mathematical structures as diverse as graphs , manifolds , rings , and vector spaces can all be defined as sets satisfying various axiomatic properties.
Equivalence and order relations are ubiquitous in mathematics, and the theory of mathematical relations can be described in set theory.
Set theory is also a promising foundational system for much of mathematics. Since the publication of the first volume of Principia Mathematica , it has been claimed that most or even all mathematical theorems can be derived using an aptly designed set of axioms for set theory, augmented with many definitions, using first or second order logic. For example, properties of the natural and real numbers can be derived within set theory, as each number system can be identified with a set of equivalence classes under a suitable equivalence relation whose field is some infinite set.
Set theory as a foundation for mathematical analysis , topology , abstract algebra , and discrete mathematics is likewise uncontroversial; mathematicians accept that in principle theorems in these areas can be derived from the relevant definitions and the axioms of set theory. Few full derivations of complex mathematical theorems from set theory have been formally verified, however, because such formal derivations are often much longer than the natural language proofs mathematicians commonly present.
Combinatorial set theory concerns extensions of finite combinatorics to infinite sets. Descriptive set theory is the study of subsets of the real line and, more generally, subsets of Polish spaces. It begins with the study of pointclasses in the Borel hierarchy and extends to the study of more complex hierarchies such as the projective hierarchy and the Wadge hierarchy.
Many properties of Borel sets can be established in ZFC, but proving these properties hold for more complicated sets requires additional axioms related to determinacy and large cardinals. The field of effective descriptive set theory is between set theory and recursion theory. It includes the study of lightface pointclasses, and is closely related to hyperarithmetical theory.
In many cases, results of classical descriptive set theory have effective versions; in some cases, new results are obtained by proving the effective version first and then extending ("relativizing") it to make it more broadly applicable. A recent area of research concerns Borel equivalence relations and more complicated definable equivalence relations. This has important applications to the study of invariants in many fields of mathematics.
In set theory as Cantor defined it and Zermelo and Fraenkel axiomatized it, an object is either a member of a set or not. In fuzzy set theory this condition was relaxed by Lotfi A. Zadeh, so that an object has a degree of membership in a set, a number between 0 and 1. For example, the degree of membership of a person in the set of "tall people" is more flexible than a simple yes-or-no answer and can be any real number between 0 and 1.
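A minimal sketch of this idea: a fuzzy set is just a membership function into [0, 1] rather than a characteristic function into {0, 1}. The particular "tall" function and its thresholds below are hypothetical, chosen only to illustrate graded membership and the standard Zadeh connectives.

```python
# A fuzzy set over a universe X is a membership function X -> [0, 1].
# Illustrative ramp: nobody under 160 cm is "tall", everyone over 190 cm is,
# with degrees interpolated linearly in between (assumed thresholds).

def tall(height_cm: float) -> float:
    """Degree to which a person of the given height belongs to 'tall people'."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

# Standard (Zadeh) fuzzy connectives: min for intersection, max for union,
# 1 - x for complement.
print(tall(175))      # 0.5 -- a partial degree of membership
print(1 - tall(175))  # complement: 0.5
print(max(tall(175), tall(195)))  # union of two membership degrees
```

The classical (crisp) case is recovered when the membership function only takes the values 0 and 1.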
An inner model of Zermelo-Fraenkel set theory (ZF) is a transitive class that includes all the ordinals and satisfies all the axioms of ZF. One reason that the study of inner models is of interest is that it can be used to prove consistency results. For example, it can be shown that regardless of whether a model V of ZF satisfies the continuum hypothesis or the axiom of choice, the inner model L constructed inside the original model will satisfy both the generalized continuum hypothesis and the axiom of choice.
Thus the assumption that ZF is consistent (has at least one model) implies that ZF together with these two principles is consistent. The study of inner models is common in the study of determinacy and large cardinals, especially when considering axioms such as the axiom of determinacy that contradict the axiom of choice. Even if a fixed model of set theory satisfies the axiom of choice, it is possible for an inner model to fail to satisfy the axiom of choice.
For example, the existence of sufficiently large cardinals implies that there is an inner model satisfying the axiom of determinacy and thus not satisfying the axiom of choice. A large cardinal is a cardinal number with an extra property. Many such properties are studied, including inaccessible cardinals, measurable cardinals, and many more. These properties typically imply the cardinal number must be very large, with the existence of a cardinal with the specified property unprovable in Zermelo-Fraenkel set theory. Determinacy refers to the fact that, under appropriate assumptions, certain two-player games of perfect information are determined from the start in the sense that one player must have a winning strategy.
The existence of these strategies has important consequences in descriptive set theory, as the assumption that a broader class of games is determined often implies that a broader class of sets will have a topological property. The axiom of determinacy (AD) is an important object of study; although incompatible with the axiom of choice, AD implies that all subsets of the real line are well behaved (in particular, measurable and with the perfect set property). AD can be used to prove that the Wadge degrees have an elegant structure.
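For finite games, the determinacy discussed above is elementary: backward induction computes which player has a winning strategy. The sketch below is an assumed toy encoding (a game as a dict from positions to available moves; a player with no move loses), not anything from the text; the deep content of AD concerns infinite games, where no such algorithm exists.

```python
# Backward induction for a finite two-player game of perfect information.
# A game is a dict mapping each position to the list of positions reachable
# in one move; a player who cannot move loses. Players are 0 and 1.

def winner(game, pos, player=0):
    """Return the player (0 or 1) with a winning strategy from pos."""
    moves = game.get(pos, [])
    if not moves:
        return 1 - player  # the player to move is stuck and loses
    # The player to move wins iff some move leads to a position they win.
    if any(winner(game, nxt, 1 - player) == player for nxt in moves):
        return player
    return 1 - player

# Single-pile Nim: remove 1 or 2 stones; whoever cannot move loses.
nim = {n: [n - k for k in (1, 2) if n - k >= 0] for n in range(5)}
print(winner(nim, 4))  # 0: the first player wins a 4-stone pile (take 1)
print(winner(nim, 3))  # 1: a 3-stone pile is a loss for the player to move
```

Every finite perfect-information game without draws is determined in this sense (Zermelo's theorem); AD extends the claim to infinite games on the natural numbers.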
Paul Cohen invented the method of forcing while searching for a model of ZFC in which the continuum hypothesis fails, or a model of ZF in which the axiom of choice fails. Forcing adjoins to some given model of set theory additional sets in order to create a larger model with properties determined (i.e., "forced") by the construction and the original model.
For example, Cohen's construction adjoins additional subsets of the natural numbers without changing any of the cardinal numbers of the original model. Forcing is also one of two methods for proving relative consistency by finitistic methods, the other method being Boolean-valued models.