To make a long story very short: I spent many years programming large systems in C. If I had declared every function like this:
void *foo(void *arg,...);
everyone would have thought I was nuts (not to mention a really terrible programmer.) This is the nutshell version of why I don’t think I will personally ever understand the appeal of duck typing.
Having said that, I will probably make one more attempt to learn Ruby. I liked some of Ruby’s structures, it was just the duck-typing thing I couldn’t grok. I’ll give it one more try and maybe I’ll see what all the fuss is about this time. If not, at least I can honestly say I tried…
Have you tried Linux shell programming and awk?
They have been used for many years by many programmers.
Regards……….vi.
Umm… yes, actually. Does this have something to do with duck typing? I’m not sure I get the connection…
Yes, if you arrive in Ruby’s thought territory and find a there there, I’d be interested. I took the same Ruby plunge a while back in a Rails context, and couldn’t convince myself to continue. This, primarily, due to the arrival of Grails and Groovy.
I look forward to hearing how your exploration goes.
Mark
Here’s an example I’ve been working on (Holistic Interface Control Protocol, or HICP – very early stage of development, no demo yet).
I’ve been writing something to implement a GUI while minimizing the pauses that come from event handlers taking too long. Rather than a single event handler (like “onClick”), handling is split into three stages: feedback, process, and update. Feedback and update run in the GUI thread, but process runs in a separate worker thread.
If you have a long task, you can use the feedback stage to display a message like “Working…”, or disable some functions while leaving others active. Events in the processing stage are performed one at a time by a background thread, allowing other events to be received in the meantime. Finally, the update stage is used to display the results and return the GUI to normal.
Most of the time you only have a quick task, so you only need the update stage. Or you might only need to process something in the background (like a “save” command) without any GUI changes. In a language like Python (which I’m using for this), you create a Button subclass (“class MyButton(Button):”) and implement whichever stages you need, identified by name (“def update(self, hicp):”, etc.).
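Roughly, a sketch of what that might look like (the Button base class and the print calls here are stand-ins for illustration, not the real HICP API):

class Button:
    """Stand-in for the real HICP Button base class (assumption)."""

class MyButton(Button):
    # Only the stages this button needs are defined; their mere
    # presence is what the framework looks for.
    def process(self, hicp):
        # Worker-thread stage: do the slow work here.
        print("saving in the background...")

    def update(self, hicp):
        # GUI-thread stage: show the result once process() is done.
        print("saved, updating the display")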
When an event occurs, the source object is examined to see if it has a feedback method – if so, it’s executed. It’s examined to see if it has a process method – if so, it’s sent to the process thread. And so on.
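A rough sketch of that dispatch, assuming event sources like the MyButton above and a worker queue (details invented for illustration):

def dispatch_event(source, hicp, work_queue):
    # "Examine the object": getattr() finds whichever stage methods
    # the source happens to define, and only those are run.
    feedback = getattr(source, "feedback", None)
    if feedback is not None:
        feedback(hicp)           # run immediately, on the GUI thread

    if getattr(source, "process", None) is not None:
        work_queue.put(source)   # worker thread calls process(), then update()
    elif getattr(source, "update", None) is not None:
        source.update(hicp)      # nothing slow to do, update right away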
To do the equivalent in C++, you’d have empty dummy methods and need to set a flag to indicate which ones actually do things (otherwise you’ll send an object to the process thread, where it’ll have to wait its turn to do nothing, rather than skipping it and going straight to the update stage). It can be done, but now you’re forcing the developer to define the class and then describe the same class in some other way, as opposed to letting the computer figure it out (which it should – all the information is there, why repeat it?).
Or in Java or C#, you could define three interfaces and repeat yourself there instead (“implements Feedback, Process, Update”), and the underlying code would check for “this instanceof Feedback”, with some casting to make the call.
With duck typing, the developer writes what they want, once, without worrying how it works, and the computer (and underlying API) figures it out.
Static typing has its place, I think – I prefer it generally. But most of the time, you’re doing the same sort of mental analysis to make sure the right thing is passed to the right method anyway, so it doesn’t matter so much whether you tell the language what you know or let it figure it out later.
That last bit – the language figuring things out – is what makes duck typing different from “void *f(void *a, …)” – there’s no way for C to figure out anything from that.
Thanks for the example, John. I can see that there are cases where this “loose” typing feature might be useful.
Though I’m not sure I understand the difference between the object being “examined to see if it has a feedback method – if so, it’s executed” and implementing some Interfaces and doing a “check for ‘this instanceof Feedback’, with some casting to make the call.” I don’t know enough about it yet, but it seems like 6 of one, half-a-dozen of another, to me. Maybe the former requires no extra text input from the programmer, and the latter does. I’ll have to learn some more before I know for sure. (The thing is: I type very very fast, so extra verbiage in program code doesn’t bother me one bit—as long as it’s adding value by making things easier to understand.)
“But most of the time, you’re doing the same sort of mental analysis to make sure the right thing is passed to the right method anyway, so it doesn’t matter so much whether you tell the language what you know or let it figure it out later.”
This is the part that gets me, though. Sure, I have to do some mental analysis. But once I’ve done it, it’s lost. I must remember how everything works in order to use my own code again (or extend it.) And if I have to use someone else’s code (the simplest case is using built-in libraries), I have to be psychic to know how it’s going to behave—or else pray that the person who wrote it was kind enough to document every aspect of its contract.
This is what defeated me with Ruby the first time I tried to learn it. I discovered that in order to use a basic object (I think it was a file or i/o object of some kind) I actually had to read the source code for the library object in order to know what it could and would do. The documentation made only vague references to “this object behaves sort of like (some other object)”. From there, I was simply supposed to guess what methods I could call on it. I didn’t see how I could possibly accomplish anything that way.
I expect Ruby programmers depend on experience to know what some object will do. Probably looking at examples helps. After a while, the basic library objects become so familiar, it doesn’t matter that their usage is not fully defined anywhere. Again, I have to take another stab at learning it and figuring out what’s going on. I may misunderstand the whole thing. But if successful programming with a duck-typed language depends entirely on what you have previously deduced and can currently remember about an object’s contract, I don’t think it would be ideal for me. I have a terrible memory.
“Though I’m not sure I understand the difference between the object being “examined to see if it has a feedback method – if so, it’s executed” and implementing some Interfaces and doing a “check for ‘this instanceof Feedback’, with some casting to make the call.” I don’t know enough about it yet, but it seems like 6 of one, half-a-dozen of another, to me.”
For the implementation, pretty much, except “if self.f is not None:” is a bit more direct, and doesn’t need a cast (I always feel like a cast means I failed somewhere). The difference is for the developer using the API – the mere existence of “def f(self):” is enough for the implementation to know that it exists, you don’t have to remember to specify that elsewhere as well.
As for using “implements” and “instanceof” in Java, that’s like a cross between static and dynamic typing, because “implements” does add dynamic type information that “instanceof” reads. Pure static typing like in C++ doesn’t let you do that; you need to declare your own field and add the information there (as far as I know – maybe there’s a template library that can add that information).
“But if successful programming with a duck-typed language depends entirely on what you have previously deduced and can currently remember about an object’s contract, I don’t think it would be ideal for me. I have a terrible memory.”
Me too, I’m also mostly a static-type programmer. I think there’s a reason that all major, long term, multi-programmer projects use statically typed languages, and much of that is simply the added communication between developers that type declarations provide. But while dynamic typing sometimes feels to me like walking a tightrope without a net (solution – practice a lot / unit test to death), I recognize that a lot of the time a statically typed language is like putting on a helmet, wearing a harness, and stringing safety lines just so you can walk down the driveway to pick up your mail.
Anyway, I just wanted to share what using duck typing feels like for me in hopes it would help you.
If you do that in C, your system will be toast. Duck typing requires runtime type checking, array bounds checking, etc.
Successful duck typing requires an incremental code and test cycle with very small deltas. Seconds, minutes, but not hours.
And so Lisp, Smalltalk, Python, Ruby and so on have read-eval-print loops (or the Smalltalk equivalent, workspace windows).
The end result is the ability to move much quicker by manipulating less of a program at a time. Or if not done very well, the end result is as bad as in any other language.
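For example, a quick Python session is often all it takes to find out what an object will respond to before writing the next few lines:

>>> f = open("notes.txt", "w")
>>> hasattr(f, "write")       # does it quack like something writable?
True
>>> f.write("hello")          # write() reports the number of characters written
5
>>> f.close()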
“This is what defeated me with Ruby the first time I tried to learn it. I discovered that in order to use a basic object (I think it was a file or i/o object of some kind) I actually had to read the source code for the library object in order to know what it could and would do. The documentation made only vague references to “this object behaves sort of like (some other object)”. From there, I was simply supposed to guess what methods I could call on it. I didn’t see how I could possibly accomplish anything that way.”
Holy crap, I feel your pain here. But what you’re running into is Ruby’s immature documentation. Remember how horrible the Javadocs were 10 years ago? And you couldn’t even read the Java source code to figure out what was going on.
Documentation aside, though, Ruby is a fun language. The pickaxe book from the Pragmatic Programmers is my reference of choice, moreso than the official documentation, which is … er, sparse.
Rails, on the other hand… I like it, but there are a few areas where dynamic typing should pretend it’s static, and object models are one of them. UI, as John pointed out, can be an elegant and compelling use case for dynamic typing, but Rails is Ruby all the way back to the database.
While the Rails code itself tends to be good (if sparsely documented), Rails code that normal people write tends to be awful. Recently I saw a getter on an ActiveRecord (object model) class whose return type was sometimes an integer and sometimes a string like “not applicable”. *boggle*
Any time you have to check the type of the return value of a getter before using it, you know it’s Loose Typing Gone Bad.
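The same anti-pattern, sketched in Python with invented names:

class Order:
    def __init__(self, quantity):
        self._quantity = quantity

    # Bad: callers can't use the result without checking its type first.
    def quantity(self):
        if self._quantity is None:
            return "not applicable"
        return self._quantity

# Every caller now has to do this dance:
q = Order(None).quantity()
total = q * 2 if isinstance(q, int) else 0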