May 3, 2009
Maybe this is obvious (and still a flammable topic) for most system administrators, but it’s worth saying again.
You should never rely on environment variables unless you explicitly document that dependency (and even then it is a bad idea in most cases). Your script must work if there’s no PATH variable, no EDITOR set, and so on. Yes, it may report an error to the end user in unresolvable conditions, as `svn ci` does when no EDITOR is set and no -m is given, but generally it shouldn’t fail.
Not `sh`, but `/bin/sh`; not `cat`, but `/bin/cat`! Remember this and you’ll save a lot of nerves for yourself and for everyone who uses your tools.
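As a minimal sketch of what that looks like in practice (the PATH value and the vi fallback here are just illustrative choices, not the only correct ones):

```shell
#!/bin/sh
# Sketch: a script that assumes nothing about its environment.

# Rebuild PATH explicitly instead of trusting whatever we inherited.
PATH=/bin:/usr/bin:/sbin:/usr/sbin
export PATH

# Use a sane fallback when EDITOR is unset (you could also abort
# here with a clear error, the way `svn ci` complains without -m).
EDITOR="${EDITOR:-/usr/bin/vi}"

# Call binaries by absolute path so even a missing PATH can't hurt.
/bin/echo "editor in use: $EDITOR"
```

Run it with `env -i /bin/sh script.sh` to confirm it survives an empty environment.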
May 3, 2009
The ‘config’ option in munin plugins is probably unnecessary and exists only because somebody didn’t think it through.
No, seriously, I understand why there are ‘autoconf’ and ‘suggest’ options – they’re called rarely, maybe when a node is installed or when a module is first added to the configuration. But ‘config’ must be (and is) called before each data fetch, because the graph legend or graph options may change between calls. So if we always call the ‘config’ and ‘fetch’ pair together, why do we have separate commands for them?
So it is not really an option, and it should be merged into the ‘fetch’ command.
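For context, this is roughly what that plugin layout looks like – a hypothetical minimal plugin counting logged-in users, not a real shipped one, sketching the standard config/fetch split:

```shell
#!/bin/sh
# Hypothetical minimal munin plugin ("users"), a sketch only.
# 'config' prints the graph description, anything else prints values.
# Since munin runs config before every fetch anyway, the two branches
# could arguably be collapsed into a single command.

plugin() {
    case "$1" in
    config)
        echo "graph_title Logged in users"
        echo "graph_vlabel users"
        echo "users.label users"
        ;;
    *)
        # count login sessions; tr strips wc's leading padding
        echo "users.value $(who | wc -l | tr -d ' ')"
        ;;
    esac
}

plugin "$@"
```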
May 2, 2009
Someone forgot to add the samba filesystem to df’s ignore list in the FreeBSD port of munin-node. Instead, they grep out lines containing “//” (which is… strange, not to say stupid).
As a result, most users will have gaps in their df munin graphs. Remote filesystem info may be inaccessible for dozens of reasons and cause the whole df_ plugin to time out, which reports “unknown” values to munin – so you get an alert email in your inbox and a gap in the graph.
To resolve the problem, add smbfs and cifs to the ignore list in both /usr/local/share/munin/plugins/df_ and df_inode, so the line becomes:
/bin/df -P -t noprocfs,devfs,fdescfs,linprocfs,nfs,smbfs,cifs | tail +2 | grep -v "//" | while read i; do
The change goes into two lines in each file. You may remove the `grep` if you want; I keep it just to be safe.
P.S.: As far as I know, the problem also exists on Linux – at least in the RedHat and Ubuntu repos.
May 24, 2008
Did you know that you can easily add new time intervals to the system’s periodic feature?
Let’s add /etc/periodic/hourly as the example:
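As a sketch, on FreeBSD the steps usually look like this (the script name, log path, and variable names are illustrative; periodic(8) derives its configuration variable names from the directory name):

```shell
# Sketch -- FreeBSD periodic(8); names below are examples only.

# 1. Create the directory and drop an executable script into it:
#      mkdir -p /etc/periodic/hourly
#      install -m 755 my_task.sh /etc/periodic/hourly/100.my_task

# 2. Schedule it in /etc/crontab, next to the daily/weekly/monthly entries:
#      0  *  *  *  *  root  periodic hourly

# 3. Optionally control its output in /etc/periodic.conf:
#      hourly_output="/var/log/hourly.log"
#      hourly_show_success="NO"
```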
April 19, 2008
I’m finally back. Here is a translation of Igor Sysoev’s report given at the RIT conference. Igor Sysoev is the creator of nginx, one of the most widely used lightweight http servers in Russia and the world.
November 20, 2007
Some time ago I bought a Latitude D620 and was completely happy with it – it was fast, relatively small, and not too heavy. And everything worked. Until I re-installed the operating system. Even with Dell drivers I don’t have bluetooth now, I had approximately a week of hardcore fucking around with wi-fi, and it overheats now. Even when idle. If I install Linux, will it explode?
I can’t believe things can be this bad in the 21st century!
November 14, 2007
I just want to describe the server set-up I’m using on my current project. In the next several posts I’ll tell why things were done one way and not another, and I’ll also describe my future plans.
But for now, just the overall system layout:
I have six servers at the moment. Two of them are an isolated set-up running the live installation of the application. They will run it until I get the next release online on the new server set-up.
Remaining four servers:
Two absolutely identical, relatively powerful servers used for the web front-end: an almost typical Apache+mod_php installation with an nginx reverse proxy for load balancing.
One server is dedicated to MySQL and some background tasks like image processing.
One server (unlike the others, not very powerful) is doing absolutely nothing right now. I’m planning to put system monitoring, logs, backups, a staging installation, and maybe some other non-critical stuff there.
All four servers run FreeBSD 6.2; the shared storage required by the application is located on a third-party windows server (in the client’s data-center) and mounted over the smb protocol.
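For reference, a hedged sketch of such an smb mount on FreeBSD – the host, share, and user names here are invented:

```shell
# Sketch -- mount_smbfs(8); server, share, and credentials are made up.

# One-off mount:
#   mount_smbfs -I winsrv.example.com //appuser@winsrv/storage /mnt/storage

# Or persistently via /etc/fstab (the password then lives in /etc/nsmb.conf,
# and -N tells mount_smbfs not to prompt for it):
#   //appuser@winsrv/storage  /mnt/storage  smbfs  rw,-N,late  0  0
```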
November 13, 2007
There is a lot of UML software, but nothing is better yet than a large whiteboard. Why?..
November 11, 2007
Just realized that there is one more benefit to the denormalization approach described in the previous post: you can easily move Orders records to another database server, because you don’t really need foreign keys into the products database. That helps with scalability.
November 11, 2007
Everyone knows what database normalization is for, and everyone tries to keep their database normalized to the maximum possible level. Guaranteed data integrity, no insert/update/delete anomalies – all those cool things, you know…
Most developers also know what database denormalization is for – improving performance, in most cases.
But because hardware is cheap and development services are relatively expensive, maximum performance is not the main goal for most applications. So it seems there’s no reason for denormalization?
Nope. There’s no reason for denormalization only if the data never changes.
Let’s look at a simple example. We have a shop selling blankets with custom art printed on them. There are several steps from order submission to shipping – at minimum, the art has to be rendered, approved by the quality control team, and printed. What happens if we change some property of the product when an order is already submitted but not yet rendered? The size of the printable area, for example (say we’re buying new printing equipment with a smaller printable area and don’t want to let users use the old, larger area anymore).
With a perfectly normalized database structure, we will take the size of the printable area from the product properties and end up rendering a smaller image. Whether it gets cropped or resized doesn’t matter – either way it will differ from what the user expected to get and saw in the preview.
There are several possible approaches to avoid this problem. The first is to clone the product record on every change, so that all old orders point to unchanged data. In some cases this is acceptable – if product info changes rarely, we won’t have too much garbage in the database. But if it changes relatively frequently, or we expect the system to keep data for several years, then I think the second way is better.
The second way is to keep all the information necessary for printing inside the order record, even if it duplicates some of the product’s properties. It isn’t really ‘denormalization’ if we call those ‘extra’ fields in the order record not “the product’s editable area size” but “the image size this order was submitted with”. This simple trick lets us consider the whole database still perfectly normalized.