Fast forward to 2008. I've decided to build a simple calculator in Cocoa, one that behaves pretty much like a real-life pocket calculator. Before doing that, I completed yet another basic tutorial (I had my reasons for doing that), and decided to take a stab at it.
There were some pretty basic and obvious things I had to re-learn. Maybe this time they will stick around in my head? We'll see. Such as: OK, I declare instance variables, and some of them are objects. They are also properties, with synthesized accessor methods. Cool. But then I still need to initialize them. Yeah, of course. Don't laugh. But that piece of obvious information just got lost somewhere for me.
Also, how about model objects? I have only a handful, and they are built-in Cocoa types. Where do I manage them? Well, in this simple case, I just had the controller object deal with them as instance variables.
OK, on to the calculator. It should be so simple, shouldn't it? Four basic operations, entering numbers, displaying results... But then somehow it didn't end up that very simple for me.
First of all, entering a number. If I'm entering 0.00001, for example, I want to see all the zeros appear before the 1 does. So at one point the display will have to read "0", then "0.", then "0.0", and so on. Shall I play with number formatting? And how about "0."? It's not even a number. At this point, I decided to display the number being entered as a string, and to store it as such. Actually, two strings: one stores the integer part; the other, the display string, takes the integer part, inserts commas as thousands separators where necessary, appends the decimal point once it has been used, and keeps adding any further digits at the end (past the decimal point, no further thousands separators are needed).
The string will be converted into a double when necessary. (And that conversion involves removing the thousands separators, something that NSString's doubleValue method won't attempt. I wanted to subclass or extend NSMutableString for that, but eventually decided just to write a method in my controller class. A dedicated class seemed like overkill to my untrained eyes, since there would only ever be one instance of it all.)
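For what it's worth, the conversion step can be sketched in plain C (the real code is Objective-C, and `double_from_display` is just a hypothetical name for illustration): strip the separators, then let strtod do the rest.

```c
#include <stdlib.h>

/* Hypothetical helper: copy `display` into a local buffer minus the
 * thousands separators, then hand the cleaned string to strtod(). */
double double_from_display(const char *display, char sep)
{
    char buf[64];
    size_t j = 0;
    for (size_t i = 0; display[i] != '\0' && j < sizeof buf - 1; i++) {
        if (display[i] != sep)
            buf[j++] = display[i];
    }
    buf[j] = '\0';
    return strtod(buf, NULL);  /* happily accepts "0." as 0.0 */
}
```

Conveniently, strtod also accepts "0." (which, as noted above, isn't really a number yet) and just returns 0.0.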
But apart from that, a calculator should be simple, shouldn't it? We're entering a number, then a calculation symbol, which the program remembers, and knows that the next number to be entered will be the second operand. Then, pushing the "=" button simply performs the calculation, and that's it! Isn't it?
Well, not really.
In most calculators, if you push the equals button again and again, the last calculation will be repeated. The first operand keeps changing, but the second remains constant. (So, for example, "6 + 2 = = = = =" will yield "8 10 12 14 16.")
Also, if you do not enter a second operand for a calculation, most calculators will use the first operand as the second too. (So, for example, "4 * =" will display "16.")
Combining the two will make this work: "2 - = = = = =" yields "0 -2 -4 -6 -8 -10."
So how did I handle this all? This may not be the simplest way, but it works. The calculator has four modes: ResultDisplayed, FirstOperandEntry, OperationEntry, and SecondOperandEntry.
In ResultDisplayed mode, the result of the last calculation has just been displayed. We can repeat the last calculation (by pressing "="), we can start a new one by simply starting to enter a number (thereby setting the calculator to FirstOperandEntry mode), or we can enter an operation symbol (+, -, *, /) which will set the calculator to OperationEntry mode, recording the operation as well as the currently displayed number as the first operand.
In FirstOperandEntry mode, we can keep entering the number, or we can push an operation button to enter OperationEntry mode.
If we're in OperationEntry mode, we have just entered an operation. However, we can still change it. If we decide to use addition instead of subtraction, we can press "156 - + 2," and nothing special will happen: "156 + 2" will be recorded, and the minus sign forgotten.
We can quit OperationEntry mode by either starting to type in the second operand (thus entering SecondOperandEntry mode), or by pressing the equals sign, which will use the currently displayed number as the second operand. (The first operand has already been set if we're here.)
Finally, in SecondOperandEntry mode, we're entering, well, the second operand. We can quit that mode by pressing "=," making the calculator perform the calculation and enter ResultDisplayed mode, or by entering an operation symbol, which will also perform the calculation, but it will also record a new operation, and set the mode to OperationEntry.
Performing the calculation will always store the result in the first operand. If we performed a division by zero, an error message will appear instead of the result, and an error flag will be set. With the error flag on, the operation buttons are ignored.
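The "=" behavior described above can be sketched in plain C (a simplified model of my own, not the actual Cocoa code; the FirstOperandEntry case, digit entry, and the error flag are left out):

```c
typedef enum { ResultDisplayed, FirstOperandEntry,
               OperationEntry, SecondOperandEntry } Mode;

typedef struct {
    Mode   mode;
    char   op;       /* '+', '-', '*' or '/' */
    double first;    /* first operand, then the running result */
    double second;   /* second operand, kept for repeated "="  */
    double display;  /* the number currently shown             */
} Calc;

/* Pressing "=": outside ResultDisplayed mode, the displayed number
 * becomes the second operand; in ResultDisplayed mode, the stored
 * second operand is reused, repeating the last calculation. */
static void press_equals(Calc *c)
{
    if (c->mode != ResultDisplayed)
        c->second = c->display;
    switch (c->op) {
    case '+': c->first += c->second; break;
    case '-': c->first -= c->second; break;
    case '*': c->first *= c->second; break;
    case '/': c->first /= c->second; break;  /* zero check omitted */
    }
    c->display = c->first;
    c->mode = ResultDisplayed;
}
```

So "6 + 2 =" starts from {SecondOperandEntry, '+', 6, 0, 2} and displays 8, then 10, 12, ... on repeated presses; and "4 * =" starts from {OperationEntry, '*', 4, 0, 4} and displays 16.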
I found it pretty easy to get lost in all these cases. I ended up drawing up a flowchart... and then realized that my flowchart couldn't be coded using if-else structures: it used weird loopbacks to other branches. I had to redraw my flowchart so that branches could only merge at their endpoints. Duh.
Of course, there's a Clear button, which clears various stuff (there's also a Clear All button, I haven't gotten around to differentiating between the two), plus the unary minus button isn't wired in yet. But these are small details. Everything else works now, all use cases have been tested extensively. And it's a wonderful feeling when it all falls into place. There were numerous bugs, partly arising from the fact that I coded first, then went back to the drawing board, and rewrote the whole thing, keeping some legacy code in it which caused problems. Then I used "==" instead of "=" on one occasion, that one had me almost pulling out my hair. I ended up NSLogging the hell out of everything before I found this one.
Now it works. Great. Am I too stupid to write software, or is a simple little calculator a damnably hard thing to code, with all the gratification of a cheese grater but the complexity of a Rubik's Cube?
Monday, August 18, 2008
Monday, November 19, 2007
NSNull versus (null)
OK, the book says that an NSNull instance (= null) can be used to represent "nothingness," e.g. in an NSMutableArray. But it looks like this will also be the value of an NSDate instance variable that you declare but never initialize (by which I mean that you never assign any value to it). At least, its description will read "(null)."
But then it looks like I was wrong; it's just a mere case of homonymy: (null) is not null. Even though my unassigned NSDate instance entryDate was described as "(null)" by its description method, it's not an NSNull instance, since
[entryDate isEqual:[NSNull null]]
returns 0. And NSNull has only one instance, so every null would be "equal" to each other. Right?
So what's this "(null)"??! NSDate's documentation doesn't say anything about it, and neither does NSObject's.
initWithFormat: vs. stringWithFormat:
OK, I get it: NSString's initWithFormat: method is an instance method, whereas stringWithFormat: is a class method.
But the book says that the latter returns an autoreleased string object. OK, cool. But why does the documentation for either method or the class fail to mention that?!
Is there a rule of thumb that class methods other than alloc, new, copy or mutableCopy always return autoreleased objects (as Aaron suggests)? If so, how reliable is this rule? Would be nice to know.
I know that this point is mostly moot with garbage collection in Objective-C 2.0, but still. I want to learn the good ol' laborious retain counting way as well (and also, I hear GC isn't suitable in all circumstances).
By the way, I'm at Chapter 3. Building a Foundation tool.
Sunday, November 18, 2007
The RandomNumber app
I had already started to read the book before, and liked it a lot. I think I completed the first app it teaches, RandomNumber. So I decided to read the specs and create it without looking in the book. I actually managed to. As I suspected, I had to browse the documentation a lot (especially for the random functions), but in the end, my little app did build.
Let me just say that I'm not fully satisfied with the documentation facilities built in Xcode. I will collect my thoughts on it as I gain more experience (and probably realize that lots of things that I didn't think were there are in fact there), and do a separate post on the subject later. All that I know is that, as a beginner, I could use a bit more hand-holding.
So anyway, here's how my version of the app differed from Aaron's.
Mine displayed the random number in a text field, not as a label. More importantly, I complicated things a bit, as I already applied the MVC concept to this simple little thingy.
While in Aaron's version (note that it is the first-ever app he has his readers build) the random number generation is done in the controller, my version actually has a model class, and the random number is one of its properties. Random number generation (as well as "seeding," or initialization) is performed in the model; all the controller does is call these methods and update the text field with the randomNumber property of the model class.
I guess this is overkill, but then at least it's MVC. And the model preserves state, too.
Due to my lack of experience, I had no idea how the model object should be created. I decided to have only one instance throughout the life cycle of the app, and I had the controller (which is serialized in the nib, if I'm not mistaken) allocate and initialize it as soon as it wakes up from the nib, in awakeFromNib.
I was a bit surprised when I found out (from sample code) that there's no need to call [super awakeFromNib] when implementing that method, if I'm subclassing NSObject, but then I confirmed that to be the case in the documentation.
I guess I could also serialize my model object in the nib file? Um, maybe not. This doesn't seem to be recommended. So I'll leave it at that.
Just started "Cocoa Programming for Mac OS X"
This book has been recommended to me in several forums, so I bought it last year and decided not to wait for a Leopard version. (Is there a Leopard version coming at all? Maybe I'll find out by registering the book, as recommended. OK, I've just registered the book, only to find out that my benefits for registration happen to be nada. So, no information on that. Sorry…)
All in all, I like the book, and am very excited as I go in and try to learn Cocoa. Wish me luck.
Sunday, November 11, 2007
Some more bitching: editing application package info
OK, software development is supposed to be geeky, but why is this so incredibly tedious? I mean, I just want to enter the version number for my app. It's something every app needs. So why do I have to type stuff like CFBundleShortVersionString, instead of, say, filling in a text field somewhere?
Is CodeSense dumb? Or am I?
OK, this is weird. I declare a method in the header file for a class. Then I switch over to the implementation file. I want to implement the method… and I need to type out its name. CodeSense doesn't offer to autocomplete it. Is that a feature? Or can't it read the header file?
Then, when I begin to type an expression that sends a message, as in [myObject performMethod], why doesn't CodeSense offer performMethod as a prominent autocompletion option? And when I want to access a property with the dot syntax… No CodeSensing for me. Am I supposed to keep in mind all method and instance variable names I declared, at all times, and get no help?
Houston, we have Leopard
I'd put off diving too deep into Cocoa until Leopard was out. I installed the new OS the day it came out; today, I even got the internet working. (I had to buy a router, as Leopard seemed unable to get an IP address from my ISP on its own; needless to say, that made me pretty disappointed and frustrated.)
I read Apple's PDF on Objective-C years ago. I know some C, though I'm not a C programmer by any stretch of the imagination. I've followed some tutorials, even read a couple of Cocoa books, so I'm vaguely familiar with the simplest concepts, but I still find the learning curve that's ahead of me a bit intimidating.
I followed through the CurrencyConverter tutorial, though for some strange reason, the version included with the Leopard dev tools was not updated for the new OS. I couldn't help noticing how different things are now, even at such an elementary level.
For example, I'm no longer supposed to create classes in Interface Builder and then import them into Xcode. It looks like classes are now generated solely by writing code in Xcode. I can't say I like that, though real programmers probably rejoice. I guess I need to grow fond of typing everything by hand, even boilerplate code, if I want to be a real programmer.
OK, now that I have internet, I can follow the updated tutorial. I'm going in.
Monday, September 24, 2007
College years: learning some math
In order to get into a college or university in Hungary, one had to pass an entrance exam. (I did also apply to some U.S. colleges. St. Lawrence University, a small liberal arts college in Canton, New York, did accept me, and even offered a scholarship, but I was still $10,000 short per semester, so I couldn't enroll.)
There was a complicated points system: you could get half of your score from certain high-school grades (up to 60), you had to get the second half at the entrance exam (also up to 60), and you could get some extra credits for any state-accredited language examinations (up to 3, I think). If your exam score was high enough, you could choose to double it instead of counting your high-school grades.
I applied for a computer science program at a respected university in Budapest. The entrance exam consisted of two parts: a written math test, and an oral test in programming. Each was worth 15 points, which needed to be doubled for a total maximum score of 60. (See above.)
The three-hour math part consisted of eight questions (as it had for decades). The first three questions were dead easy: math entrance exam tacticians recommended that you spend half an hour on them in total. The next three were a bit more difficult; it was good practice to do them in an hour or so. Finally, the last two questions (worth the most points, obviously) would take about an hour and a half, or half of your total allotted time.
We had practiced this a lot. We were professionals, almost. The advanced math class we took in the last year focused on these exams. We did a lot of the tests from the previous years. We almost always scored a respectable 12, 13, later 14 or even 15.
So did I.
But I couldn't sleep on the night before the test. And I was nervous. And I screwed up big time. I got stuck with some stupid typos I made on my second or third question. I wasted a lot of time. As a direct result, my score sucked: 10 points out of 15.
I added up my score quickly. I had 58 points from high school and languages. If I wanted to meet the previous year's score limit of 103, I needed at least 13 points on the oral exam: a new subject for me, in which I'd never had any formal training.
I almost gave up.
But in the end, I did go to the exam. They asked me what language I chose. I said none: I chose "pseudocode." I insisted (perhaps mistakenly) that it was allowed. They gave me a strange look, but conceded.
I drew a question from the pile. I remember it to this day:
An aircraft flies over a territory in a straight line, taking elevation measurements. A measurement of 0 means sea; anything higher than that is ground. Find the highest peak on an island.
An island was defined as ground surrounded by water on both ends. A peak was defined as a point higher than both of its neighbors.
So I was standing by a blackboard, chalking up pseudocode that iterated over an array, and the examiners' helpful questions made me realize that I iterated over it one time too many. I streamlined my procedure; it looked fine in the end. Then I had to answer some questions on integer representation. I was pretty clueless about it. Then I had to represent the number of rice grains in the famous Legend of the Ambalappuzha Paal Payasam. I struggled a lot, even though I should have just written out the binary digit "1" sixty-four times. I did arrive at a solution, though I repeatedly toyed with the thought of giving up the whole exam.
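From what I remember of the question, a single pass does it. A C sketch of the idea (assuming, as the question did, that ground readings are positive and islands are bounded by water):

```c
#include <stddef.h>

/* One pass over the measurements: a reading is a peak if it's on
 * ground (> 0) and strictly higher than both of its neighbors;
 * islands are bounded by 0s, so every such peak lies on some island.
 * Returns the highest peak found, or 0 if there is none. */
static int highest_peak(const int *h, size_t n)
{
    int best = 0;
    for (size_t i = 1; i + 1 < n; i++)
        if (h[i] > 0 && h[i] > h[i-1] && h[i] > h[i+1] && h[i] > best)
            best = h[i];
    return best;
}
```

For the measurements 0 1 3 2 0 5 0, the peaks are 3 and 5, so this returns 5.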
To my shock, my examiners awarded me a near-perfect score of 14, as there seemed to be "order in my head." They stopped short of welcoming me on board, though my final score of 106 would have granted me entry in any previous year since the existence of the program.
Well, except that year. The score limit was raised to 108. I desperately looked for any program that would take me, and it turned out that I was able to start my studies at a different, much less competitive branch of the same university: the college branch for training primary school teachers. I already had the necessary score there before even taking my oral math exam; I just had to be careful not to score below -8. (Actually, I did need some minimum score, like 5, because of other rules, but you get my point about the less-than-competitive entrance process to that place.) My majors were math and English. I wanted to transfer to the computer science program, but wasn't allowed to.
Primary school, eh? I thought we'd learn addition in the first year, maybe subtraction in the second.
Wrong.
We had four semesters of some surprisingly tough real calculus. We also had university-level geometry, with an infamous professor who would reduce the class of 120 to about 30 by the end of the first year.
But what I learned was this: one needs to study math. Do the homework. Practice. And be sure to understand everything. Ask questions. Don't be ashamed. Not everyone is a natural.
I remember asking one of my classmates, back in high school: okay, this looks like the equation of a circle, but how do I know that it really is the equation of a circle? He said that I should be more like an artist, and not concern myself with problems like these, as they clearly went over my head. Gee, thanks.
Except that at college, we learned a theorem (complete with proof, naturally) that this is the case: such an equation describes a real circle if the radius is positive; a "point circle" (i.e. a circle consisting of one point only) if the radius is zero; and an imaginary circle if the radius is negative.
I prefer explicitly understanding things.
My favorite math subject was elementary math. It dealt with tricky, sometimes extremely difficult problems that could be solved by elementary means. That is, a very, very, very smart sixth grader could solve them, and thus so should his or her teacher. Makes sense.
I remember a trick question I was really proud of solving:
Point A is 100 miles from Point B.
A train starts from A to B, traveling at 15 miles per hour.
A train starts, at the same moment, from B to A, at 35 miles per hour.
At the same moment, a fly starts flying from the nose of the train from Point A at 50 miles per hour, to the nose of the other train. As soon as it reaches it, it starts back, at the same speed. And so on.
The two trains meet (crash?) somewhere between A and B. The fly keeps flying between them up to that point. How much distance does the fly cover in total?
At universities, students would use infinite sequences and the like. We used the four basic operations.
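The four-operations trick, for the record: the trains close a 100-mile gap at 15 + 35 = 50 mph, so they meet after two hours, and the fly simply flies at its own 50 mph for those two hours. As code:

```c
/* Ignore the individual zigzag legs entirely: the trains close the
 * gap at v1 + v2, so they meet after d / (v1 + v2) hours, and the
 * fly just flies at its own speed for exactly that long. */
static double fly_distance(double d, double v1, double v2, double v_fly)
{
    double t = d / (v1 + v2);   /* 100 / (15 + 35) = 2 hours */
    return v_fly * t;           /* 50 * 2 = 100 miles        */
}
```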
So I ended up actually liking math. Better late than never!

My parents got me a 386 when I started college. It had 4 megabytes of RAM, ran Windows 3.1, and had trouble running most games that I wanted to run. I spent about as much time fixing it as using it: getting Windows to recognize its sound and video cards, dealing with infected floppies, squeezing apps into the available memory, and so on.
I don't remember exactly why and how, but I got a copy of Borland's Turbo Pascal IDE. I don't even remember the terminology used by the system, but what I remember is this: I decided to use the coordinate geometry knowledge I'd gained after the first year of college, and do something with it.
I came up with a library of functions that could describe points and vectors in 3D space, as well as simple polyhedrons (basically, just a collection of corners, and the edges connecting those). The main goal was to project these on a 2D surface (represented by the monitor screen), using a perspective vector, and try to mimic human vision by calculating the images captured by both eyes (with the help of blue-red goggles).
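The projection itself boils down to a perspective divide. Here is a minimal C sketch of one possible formulation (not necessarily the one I used back then): put the eye at the origin looking down the z axis, with the screen plane at distance f; for the stereo pair, you'd shift the eye left and right along x before projecting.

```c
typedef struct { double x, y, z; } Vec3;
typedef struct { double x, y; } Vec2;

/* Pinhole projection: the eye sits at the origin looking down +z,
 * and the screen plane is at distance f. Assumes p.z > 0, i.e. the
 * point is in front of the eye. */
static Vec2 project(Vec3 p, double f)
{
    Vec2 s = { f * p.x / p.z, f * p.y / p.z };
    return s;
}
```

So a corner at (2, 4, 2), seen through a screen plane at f = 1, lands at (1, 2) on the screen.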
I had a summer job, and carried a little checkered notebook with me at all times, thinking about this problem. I did eventually solve all the questions I hoped I would, and I did code up the Pascal library I wanted to.
However, I stopped short of tackling invisible edges or filled polygons. My interest wasn't strong enough to lead me down those difficult paths.
Wednesday, May 9, 2007
Hello
Hi everyone, let me explain what this whole thing is. Oh, and sorry about the name of the blog. It was 2 a.m. when I came up with it, for crying out loud. (Besides, note how I managed to smuggle in both "Xcode" and the "My Whatever" naming convention... It's actually growing on me.)
So. I've decided to learn developing desktop applications for Mac OS X using Cocoa, Objective-C, and Apple's development tools, notably Xcode and Interface Builder. Then it occurred to me that I should maybe blog about it as well.
I'm going to be tracking my own progress this way, reminding me where I am, and just simply taking notes of things I should remember later. As there's nothing secret or personal about the whole learning process, I thought, why not make it public? Perhaps some helpful people will offer some help. Perhaps some people will learn from my mistakes, who knows.
So, read on! You'll be in for a ride.