What is Polymorphism?
The learning goals mention “polymorphism”. What is that exactly?
Breaking it down literally, the word means
- poly — many
- morph — form (or type)
- ism — a specific practice, system, or philosophy
So from reading the word, it's a system or practice (perhaps even a philosophy) related to “many forms/types”.
…
In the domain of programming languages, a function is polymorphic if it has at least one parameter that accepts arguments of more than one type.
Meh. That sounds super vague.
It is.
Are we talking about just one function the programmer wrote, or just the same name being used but it's really different functions that get called?
Both! There are actually several kinds of polymorphism.
Meh. Of course there are…
Let's see what kinds you can think of!
Templates: Parametric Polymorphism
Suppose we write this function template:
template <typename T>
const T& max(const T& x, const T& y) {
    if (x > y) {
        return x;
    } else {
        return y;
    }
}
This max() function takes arguments of many types because we have factored out the type of the arguments as a type parameter (T).
These days, rather than calling it “parametric polymorphism”, people often instead say that max() is generic.
Notice that instantiations of our max template are made at compile time, so this kind of polymorphism can also be described as static.
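For instance, here's a quick sketch (the specific calls are just made-up examples) showing the same template serving several types:

#include <iostream>

// The max() template from above, repeated so this sketch is self-contained.
template <typename T>
const T& max(const T& x, const T& y) {
    if (x > y) {
        return x;
    } else {
        return y;
    }
}

int main() {
    // Each call below makes the compiler create (instantiate) a separate
    // function at compile time, one per type T we use.
    std::cout << max(10, 42) << std::endl;     // instantiates max<int>
    std::cout << max(2.5, 1.25) << std::endl;  // instantiates max<double>
    std::cout << max('a', 'z') << std::endl;   // instantiates max<char>
    return 0;
}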
Overloading: Ad Hoc Polymorphism
We could instead define specific max() functions for particular types. For example, we could have a max() for ints and a max() for doubles.
Overloading describes the situation where the same function name is reused for multiple (different!) functions behind the scenes. Instead of needing to write intMax() and doubleMax(), we can give them both the same name, max(), and have the compiler figure out which one to call based on the types it saw when we called it. Pass in two ints and it calls the integer max() function; pass in two doubles and it'll call the floating-point max() function.
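Here's a rough sketch of what that might look like (the bodies are just placeholders for this illustration):

#include <iostream>

// Two entirely separate functions that happen to share the name max().
int max(int x, int y) {
    return (x > y) ? x : y;
}

double max(double x, double y) {
    return (x > y) ? x : y;
}

int main() {
    std::cout << max(2, 9) << std::endl;      // both ints, so the int version runs
    std::cout << max(2.5, 9.5) << std::endl;  // both doubles, so the double version runs
    return 0;
}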
What happens if you call max(1, 3.14)? Will that work?
The compiler will pick the version of the function that matches best: the one that requires the least promotion/conversion of the arguments in that particular call.
But we don't really need to get into the weeds here.
Hmph.
We've seen overloading ever since we started using numbers. When you write x + y, the compiler will use different code depending on the type(s) of x and y. But there isn't a generic + function that works on anything, only ones for the types that have operator+ defined.
If we want + for some new type (say we want + for two TreeSets), we can define it, but if we don't write it, it won't exist.
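For instance, here's a sketch of what that could look like. This little TreeSet stand-in just wraps a std::set and isn't the real class; it's only here to show where operator+ would be defined:

#include <iostream>
#include <set>

// A tiny stand-in for a TreeSet-like type (hypothetical, just for illustration).
struct TreeSet {
    std::set<int> elements_;
};

// Until we write this, there is no + for TreeSets; once we do, a + b works.
TreeSet operator+(const TreeSet& lhs, const TreeSet& rhs) {
    TreeSet result = lhs;
    result.elements_.insert(rhs.elements_.begin(), rhs.elements_.end());
    return result;
}

int main() {
    TreeSet a{{1, 2, 3}};
    TreeSet b{{3, 4}};
    TreeSet c = a + b;                             // the compiler now knows + for TreeSets
    std::cout << c.elements_.size() << std::endl;  // prints 4
    return 0;
}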
This kind of polymorphism is much less expansive than templates (a.k.a. genericity or parametric polymorphism). It only works for the specific types we've defined it for. So it's called ad hoc polymorphism.
Inheritance: Subtype Polymorphism
In Java, you can make subclasses! You can say Student is a subclass of Person, and then every function that takes a Person can be passed a Student!! And we can say jo.height() whether jo is a Person or a Student!!!
C++ has the same ideas. But I could never keep “subclass” and “superclass” straight in my head, so instead we say that Student is a class that is derived from Person.
Argghh… I don't remember any of that Java stuff.
No worries, we'll go over it because it's not exactly the same in C++ anyway.
When Student is derived from Person, then Student is a subtype of Person. We use the term “subtype” similarly to “subset”: every Student value can also be seen as a Person value (but not vice versa).
We first saw (a limited form of) subtyping when we talked about promotion for integer types. For example, we can say that short is a subtype of int and float is a subtype of double.
Any function that wants a particular type can always be given a subtype of that type and still work. A function that wants a Person can be given a Student and a function that wants a double can be given a float.
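Here's a minimal sketch of that idea in C++. The particular members and the printHeight function are invented for illustration; the point is just that a function asking for a Person happily accepts a Student:

#include <iostream>
#include <string>

// A bare-bones Person (the details are made up for this example).
class Person {
public:
    Person(double height) : height_(height) {}
    double height() const { return height_; }
private:
    double height_;
};

// Student is derived from Person, so every Student is also a Person.
class Student : public Person {
public:
    Student(double height, std::string college)
        : Person(height), college_(college) {}
private:
    std::string college_;
};

// This function only asks for a Person...
void printHeight(const Person& jo) {
    std::cout << jo.height() << std::endl;
}

int main() {
    Person alex{1.7};
    Student jo{1.6, "Somewhere College"};
    printHeight(alex);  // fine: alex is a Person
    printHeight(jo);    // also fine: every Student is a Person
    return 0;
}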