Entries for tag "philosophy", ordered from most recent. Entry count: 34.
# Technical debt is a good metaphor
Mon, 12 Nov 2018
There is a term "technical debt" used in the world of software development as a metaphor for poor quality of source code (or a lack of documentation, tests, etc.), whether it's a bad overall architecture, quick and dirty hacks in some places, or any kind of software anti-pattern. It makes the code harder, slower, and less pleasant to maintain, fix, and expand. It's called "debt" because, just like financial debt, it incurs "interest" that needs to be paid, in the form of increased maintenance cost and more bugs to fix, until it's repaid (by refactoring or rewriting the bad code, or completing work that was missing and left as "TODO"). It is most frequently caused by a lack of time due to deadlines imposed by business people - lack of time to fully understand the requirements and design a good solution, to read and understand the existing codebase, or to refactor or rewrite code that is already bad.
Some people say that "technical debt" is a bad metaphor. There was one article, which I can't find now, arguing that bad code is unlike real-world debt because we don't pay any "interest" until we actually need to go back and modify the specific piece of code where we made a dirty hack. I don't agree with that. Sure, things in software development are harder to measure and estimate quantitatively than in the business world, but I think "technical debt" is a good metaphor. Just like in real life, there are different kinds of debt. It depends on whom you borrow money from:
- Leaving bad code in a place where you and everyone else on the team may never need to look again is like borrowing money from your family or best friend. You know the debt is there, but you can postpone paying it off indefinitely and you will still be fine.
- Leaving bad code in a place where you periodically need to make fixes and other changes is like borrowing money from a bank. You have to keep paying interest until you repay it all.
- Finally, making an ugly hack in a critical piece of code that you touch constantly is like being so desperately in need of money that you borrow from the mafia. You'd better repay it quickly, or you will get into trouble.
Tags: #philosophy #software engineering
# Iteration time is everything
Thu, 06 Sep 2018
I still remember Demobit 2018 in February in Bratislava, Slovakia. During this demoscene party, one of the talks was given by Matt Swoboda "Smash", the author of Notch. Notch is a program that lets you create audio-visual content, like demos or interactive visual shows accompanying concerts, in a visual way - by connecting blocks, somewhat like Blueprints in Unreal Engine. (The name should not be confused with the nickname of the author of Minecraft.) See also Number one / Another one by CNCD/Fairlight - the latest demo made in it.
During his talk, Smash referred to music production. He said that musicians couldn't imagine working without the ability to instantly hear the effect of changes they make to their project. He said that graphics artists deserve the same level of interactivity - WYSIWYG, instant feedback, without the need for a lengthy "build" or "render". That's why Notch was created. Then I thought: what about programmers? Don't they deserve it too? Shorter iteration times mean better work efficiency and higher quality results. Meanwhile, a programmer sometimes has to wait minutes or even hours to be able to test a change in his code, no matter how small it is. I think it's a big problem.
This is exactly what I like about the development of desktop Windows applications and games: they can usually be built, run, and tested locally within a few seconds. The same applies to games made in Unity and Unreal Engine - a developer can usually hit the "Play" button and quickly test his gameplay. It is often not the case with development for smaller devices (like mobile or embedded) or larger ones (like servers/cloud).
I think that iteration time - the time after which we can observe the effects of our changes - is critical for developers' work efficiency, as well as for their well-being. We programmers should demand better tools - all of us, including low-level C and C++ programmers. We are currently in a good position in the job market, so we can choose the companies and projects we work on. Let's use it and vote with our feet. Decision makers and architects of software/hardware platforms may think that developers are smart, so they can work efficiently even in harsh conditions. They forget that wasting developers' precious time means wasting a lot of money, not to mention their frustration. Creating better tools is an investment that will pay off.
Now, whenever I get a job offer for a developer position, I ask two simple questions:
1. What is the typical iteration time, from the moment I change something in the code, through compilation, deployment, and application launch and loading, until I can observe the effect of my change? If the answer is: "Usually it's just a matter of a few seconds. The files you changed are recompiled, then launching the app takes a few seconds, and that's it." - that's fine. But if the answer is more like: "Well, the whole project needs to be rebuilt. You don't do it locally. You shelve your changes in Perforce so that the build server picks them up and makes the build. The build is then deployed to the target device, which then needs to reboot and load your app. It takes 15-20 minutes." - then it's a NOPE for me.
2. How do you debug the application? Can you make experiments by setting breakpoints and watching variables in a convenient way? If the answer is: "Yes, we have a debugger nicely integrated with Visual Studio/WinDbg/Eclipse/another IDE, and we debug whenever we see a problem." - that's fine. But when I hear: "Well, command-line GDB should work with this environment, but to be honest, it's so hard to set up that no one uses it here. We just put debug console prints in the code and recompile whenever we want to make a debug experiment." - then that's a red light for me.
Tags: #career #tools #philosophy
# What does software engineering have to do with history, politics, or law?
Sat, 21 May 2016
When at school, I always preferred science classes (like mathematics) over the humanities. Among my most hated subjects were history, geography, and language/literature. That's why I chose to become a programmer. But although computer science is a scientific discipline, I can see that at a higher level, software engineering involves some of the humanities - for example, history, politics, and law.
History, because in software - just like in real life - you need to know what happened in the past to be able to understand the state of things we have right now. For example, in the field of graphics APIs, someone asked a question on Programmers Stack Exchange: Why do game developers prefer Windows? The best answer is the one that extensively explains how the two main APIs - DirectX and OpenGL - evolved over the years.
Politics, because top-level decisions are not always made on purely technical grounds. Going back to graphics APIs, Microsoft decided to push its next-generation, low-level Direct3D 12 into Windows 10 only, while the Khronos Group defined Vulkan as an open, multiplatform standard. Google was rumored to be designing its own graphics API, was even asked by John Carmack not to do so, and finally returned to the negotiating table with Khronos, so Android N will support Vulkan as well. Apple chose a different path and did design its own graphics API - Metal. Similarly, in the GPGPU field, OpenCL is a widely supported standard, but NVIDIA succeeded in promoting its own, vendor-specific API: CUDA. HSA is yet another such initiative, led by a foundation. Among its members are AMD, ARM, Imagination, Qualcomm, Samsung, and many others, but the list lacks some big players, like Intel and NVIDIA. So developing software technology is a little bit like doing politics - "Am I strong enough to go against the others, or do I need to seek allies?"
And finally, the law. Specifications of programming languages and APIs are somewhat like acts passed by a government. They are written in natural language, but should be as unambiguous as possible, precisely defining each term and specifying what is allowed and what is not. Doing something against the specification is like breaking the law - it may go unnoticed, it may even give you an advantage (like programmers notoriously relying on signed integer overflow in C++, even though it is formally undefined behavior), but you may also "get caught" (and get a compilation error or invalid results from your program). On the other hand, a compiler or API implementer not complying with the specification is a more serious problem - it's like a state official breaking the law against you. You may just accept your fate and walk away (the equivalent of not using the broken feature and looking for a workaround), or you may report it (so the bug gets fixed in a new compiler/driver/library version).
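To make the overflow example concrete, here is a minimal C++ sketch (my own illustration, not from any specification). The signed addition below is formally undefined behavior, even though on most hardware it happens to wrap around, while the unsigned version is fully defined - an optimizer is even allowed to assume signed overflow never happens and, for example, fold a check like x + 1 > x into true.

#include <climits>
#include <cstdio>

int main() {
    int x = INT_MAX;
    // Undefined behavior: signed integer overflow. Many compilers happen
    // to wrap to INT_MIN, and code often relies on that - until an
    // optimizer "catches" it and does something unexpected.
    int y = x + 1;
    std::printf("%d\n", y);

    unsigned int u = UINT_MAX;
    // Well-defined: unsigned arithmetic wraps modulo 2^N, so v == 0.
    unsigned int v = u + 1;
    std::printf("%u\n", v);
    return 0;
}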
So although software engineering is a scientific/technical discipline, I think that at a higher level it can, to some degree, be compared to the humanities, like history, politics, or law.
# The Virtual Reality of Code
Wed, 11 Nov 2015
In my opinion, coding is a virtual reality with its own set of rules, unlike those of the physical world. In the physical world, each thing has a specific location at any given moment, things don't appear or disappear instantly, and we have the laws of energy and mass conservation. Inside a computer, we have data in memory (which is actually linear - 1D) and processors, processes, and threads executing the instructions of a program over time.
For years I have been trying to imagine some nice way of editing, or at least visualizing, code that would be more convenient (and look more attractive) than the text representation we all use. I have been unsuccessful, because even if we render some nice-looking depiction of a function's code - its instructions, conditions, and loops - drawing all the connections to the variables and functions that this code uses would clutter the image too much. It's as if this virtual world just doesn't fit into a 2D or 3D one.
Movie depictions of computer programs are often visually attractive, but far from practical. This one comes from the movie "Hackers".
Of course, there are ways of explaining a computer program with a diagram, e.g. entity diagrams or UML notation. I think electrical diagrams are something in between. Electrical devices end up as physical things, but on a diagram, the shape, size, and position of specific elements don't match how they will be arranged on a PCB. The logical representation and the electrical connections between elements are the only things that matter.
Today it occurred to me that this virtual reality of code also has "dimensions", in its own sense. This becomes evident when learning programming.
1. First, one has to understand control flow - that the processor executes subsequent instructions of the program over time, and that there can be jumps caused by conditions, loops, function calls, and returns. This is called the dynamic aspect of the code. It can be depicted, for example, by a UML sequence diagram or activity diagram.
I still remember a colleague at university who couldn't understand this. What is obvious to every programmer was a great mystery to him as a first-year computer science student. He thought that when there is a function above main(), the instructions of that function are executed first, and then the instructions of the main function. He just couldn't imagine the concept of calling one function from another and returning from it. (See the minimal sketch after this list.)
2. Then there is the so-called static aspect - the data structures and objects that are in memory at a given moment in time. This involves understanding that objects can be created in one place and destroyed in another place at a later time, that there may be collections of multiple objects (arrays, lists, trees, graphs, and other data structures), and that objects may be connected to one another (the concept of pointers or references). Various relationships are possible, like one-to-one and one-to-many. In object-oriented methodologies, there is another layer of depth here, as there are classes and interfaces (with inheritance, composition, etc.) and there are the actual objects in memory that are instances of these classes. It can be depicted, for example, by an entity diagram or a UML class diagram.
3. Finally, one has to learn about version control systems that store the history of the code. This adds another dimension to all of the above, as over successive commits, developers make changes to the code - its structure and the way it works, including the format of the data structures it uses. Branches and merges add even more complexity. GUI apps for version control systems offer some ways of visualizing this, whether by showing a timeline of commits ("History") or by showing who committed each line of a file and when ("Blame").
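Here is the minimal sketch promised above (my own illustration, not from the original post) of the misconception from point 1: a function defined above main() is not executed first; it runs only when it is called.

#include <cstdio>

// Defined above main(), but not executed just because of its position in the file.
void greet() {
    std::puts("greet() runs only when called");
}

int main() {
    std::puts("main() runs first");
    greet();  // control jumps into greet()...
    std::puts("...and returns here afterwards");
    return 0;
}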
There is even more to it. Source code is organized into files and directories, which may be more or less related to the structure of the code they contain. Multithreading (and coprocessing, e.g. with a GPU, SIMD, and all kinds of parallel programming) complicates imagining (and especially debugging!) the control flow of a program. Program binaries and data may be distributed across multiple programs and machines that have to communicate.
It fascinates me how software is so multidimensional in its own way, while so flat at the same time (after all, computer memory is linear). I believe that becoming a programmer is all about learning how to imagine, navigate and create stuff in this virtual reality.
Tags: #philosophy #software engineering
# Agile Methods and Real Life
Sun, 02 Nov 2014
I recently received an e-mail from a student asking me what career path to choose. I generally don't feel comfortable posing as an authority, but while trying to write a reasonable answer, I recalled this diagram - "The Project Paradox":
It basically says that when planning everything in advance, we have to make the most important decisions at the beginning, when we actually know the least about the project. In software development, we may know the final goal, but we usually don't know exactly what the implementation will look like and what problems we will encounter during the project (unless we have already done something similar in the past). That's why agile software development methods were invented.
I think something similar happens in real life. What is more, we don't make the most important decisions - like those about a job or marriage - entirely by ourselves (they also depend on the decisions of others), and we don't make them by only rationally weighing pros and cons - we also follow our hearts. So instead of thinking all the time about the big, lifetime goals, it's better (it feels better and is more efficient) to just focus on the next step. That's also what the GTD (Getting Things Done) method advises.
So in this case, I'd recommend not pondering theoretically which career path is better, more interesting, more profitable, etc., but just trying different things (like learning a bit and coding some simple game, a mobile app, or a web page, or setting up a Linux server, or whatever) and seeing how doing these things feels for you.
# Are Comments Unnecessary?
Sat, 08 Sep 2012
I generally believe that in programming there is often only one right way of thinking, and whoever doesn't agree with it is just not experienced enough to understand it. That's why I avoid expressing my personal, more "philosophical" opinions about programming. But maybe I should? Tell me what you think in the comments.
I was inspired to write this post by the list of 20 controversial programming opinions. It looks like a digest of a discussion among many professionals. I like it a lot and agree with all of the points except one.
In point 4 they claim that comments are a form of code duplication and should be avoided in favor of more readable code. I don't think that's true. In a perfect world, all code comments would be unnecessary, because the programming language would be so clean and expressive that we could express our intentions directly in the code. But in reality, there is always lots of bookkeeping/boilerplate code that we have to interlace with the real logic to make any program work. Managed languages (like C# and Java) and scripting languages (like JavaScript and Python) do a better job of reducing it than C and C++, but in exchange for performance.
Look at this very simple example. The HLSL language has a function reflect that calculates a reflected vector. If it didn't, we could easily code it in HLSL using vector arithmetic, like this:
float3 reflect(float3 i, float3 n) {
return i - 2 * dot(i, n) * n;
}
Now what if we want to code the same function in C or C++, using XNA Math as the math library? It would look like this:
XMVECTOR reflect(FXMVECTOR i, FXMVECTOR n) {
return XMVectorSubtract(i, XMVectorScale(n, 2.f * XMVectorGetX(XMVector3Dot(i, n))));
}
Much better than if we coded it using D3DX math functions, but still not very readable. Don't you think that the formula for this calculation, expressed directly in a comment above this function, would help in understanding it?
// i - 2 * dot(i, n) * n
Please also recall how algorithms look when described in the Pascal-like pseudocode of computer science books. Now think about an algorithm for manipulating some complex data structure that you have seen or written in real code. Was it similar? I bet it was several times longer, because of all the memory allocation and other stuff that had to be done to make it working code.
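As an illustration of that gap (my own sketch, not from the original post), here is pushing a value onto the front of a singly linked list - first as book-style pseudocode in a comment, then as real C++ that also has to allocate memory and handle failure:

#include <cstdio>
#include <new>

struct Node {
    int value;
    Node* next;
};

// A textbook's pseudocode might just say:
//   newNode.value := v
//   newNode.next := head
//   head := newNode
// The real code also deals with allocation and its possible failure:
bool PushFront(Node*& head, int v) {
    Node* newNode = new(std::nothrow) Node{v, head};
    if (newNode == nullptr)
        return false; // out of memory
    head = newNode;
    return true;
}

int main() {
    Node* head = nullptr;
    PushFront(head, 1);
    PushFront(head, 2);
    for (Node* p = head; p; p = p->next)
        std::printf("%d ", p->value); // prints: 2 1
    return 0;
}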
Of course I agree that comments repeating what can already be seen in the code are stupid.
// Creates thread.
void CreateThread();
// Number of threads.
UINT threadCount;
It's even worse when comments replace good identifier names.
// Creates thread.
void Go();
// Number of threads.
UINT i;
But code doesn't tell you everything. It doesn't tell you whether a particular pointer can be null, whether a time value is in seconds or milliseconds, or whether a number has to be a power of two. I like comments before functions and classes, especially in header files, as a form of documentation - whether in Doxygen format or not.
/*
i - Incident 3D vector, pointing towards the surface.
n - Normal 3D vector, perpendicular to the surface. Should be normalized.
Returned 3D vector points from the surface.
*/
XMVECTOR reflect(FXMVECTOR i, FXMVECTOR n);
I think encapsulation is everything. Even in real life, you press buttons on a device and use it without needing to know all the electrical and mechanical details of what happens inside the box. In programming, we want to use libraries, classes, and functions through their public interfaces alone, without studying the underlying implementation. Hey, if we always had to deal with all the details down to the lowest level, we would never be able to build any complex system! Similarly, I'd like to be able to see the mathematical formula or algorithm pseudocode without studying all the boilerplate that's in the real code. That's what documentation is for; that's what comments are for. And that's why I hate the saying: "Want documentation? Just look at the code!"
# Why do I Publish Source Code?
Wed, 25 Jan 2012
I like Open Source. Not in the way that some like Linux and hate Windows; I just believe that reading or using a piece of code is often more helpful than reading only an abstract description of some algorithm. That's why I publish the sources of my personal projects. I can see that many passionate programmers - like those from Warsztat - are not so enthusiastic about disclosing their sources. I was wondering why, and here I'd like to share my thoughts on the subject, in the hope of convincing some of you to make your personal projects Open Source :)
First, there is the argument I sometimes hear that you are ashamed of showing your code because you think it is bad. My answer to this can only be: you are probably wrong. Do you really believe that the "professional", "commercial" code inside companies - which has usually been evolving for years, maintained by many people, modified to reflect constantly changing requirements, and written in a rush - is better than yours? Not necessarily. Or maybe you think that your code would be better if you put lots of additional comments in it, coded in a more object-oriented fashion, or used more design patterns? Again, I don't think so.
You may also be afraid that someone will look at your code and conclude that you are a beginner. Well, they may be right, but... what's wrong with that? Your code shows the level of your programming skill, and you shouldn't avoid showing who you really are. Remember that on the Internet there are always more beginner programmers who can learn from you than advanced ones who could potentially make fun of your code. If you are afraid of headhunters searching the Internet, keep in mind that when applying for a job, you may be asked to send samples of your code anyway - just as I was.
The second argument I hear is: "But it's just my project. I don't want to make it Free Software developed by the community, like Linux." That's a stereotype. Opening your sources doesn't mean the project can no longer be developed by you alone, or that it will automatically be modified by other programmers. Such thinking is just like hearing somebody say that programming in C++ means extensively using OOP while never using global variables, macros, or printf. I've never received an email with a patch or any other proposal for a particular change in my code, or for collaboration on developing my project. Even for my projects that are under the GNU GPL, I only get suggestions that I should add or modify some features.
If so, is it worth publishing source code anyway? My experience clearly says yes - but not in order to start a community open source project. I sometimes receive emails asking about code for an algorithm I've described on my webpage but didn't provide code for. Programmers from around the world are constantly challenged with tasks they haven't done before, whether at work or at university. Their problems can be similar to yours. They Google for solutions. Your small, niche place on the Web, or a post on a forum, can be of great help to such people.
# A Random Thought on Over-Generalizing
Mon, 26 Jul 2010
I'm not posting the next part of the description of my reflection system yet. Instead, I just want to share a small thought that came to my mind today. It's about over-generalizing, over-engineering, writing overly abstract code, or whatever you call it. Using too much OOP is not only a matter of code performance, but something deeper, more ideological. Interesting blog entries on this topic are: 5 Stages of Programmer Incompetence (see "The Abstraction Freak" paragraph) and Criminal Overengineering @ yield thought, Smartness overload and Smartness overload - addendum @ mischief.mayhem.soap. A counterpoint can be found at cbloom rants. And finally, here is my idea:
It is a vicious circle. Here is how it works: [diagram]
But I believe this is true only to some degree. We obviously need general, universal code sometimes, so as not to do the same monotonous or error-prone work over and over again. That's why we create and use libraries. And that's why I've coded my reflection system :)
New: I can see a similar vicious cycle in programming language development, inspired by an article I read today: Google engineer calls Java and C++ too complicated. To me it looks like this: software engineers, companies, and committees develop very sophisticated programming languages because they want them to be as general and universal as possible, so that developers don't have to learn and use many specialized languages for different purposes. Developers don't like to learn many new programming languages because they have bad experience from learning and using such universal, sophisticated languages.