
Archive for the ‘MySQL’ Category

If you get this in MySQL: Lock wait timeout exceeded; try restarting transaction

October 23rd, 2008 Tony 2 comments

I ran into this crap when I needed to purge a massive number of records from a database. All you need to do is edit the timeout setting in your MySQL config file.

On my Mac it is located at: /etc/my.cnf

Uncomment the line:

innodb_lock_wait_timeout = 50

And change the value to 500 (it's in seconds).
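
For reference, here's roughly what the relevant part of my.cnf should look like after the edit (I'm assuming the setting lives under the [mysqld] section, which is where it normally goes):

[mysqld]
innodb_lock_wait_timeout = 500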

Restart MySQL and rerun your query. On the Mac, restarting looks like this:


sudo launchctl unload -w /Library/LaunchDaemons/com.mysql.mysqld.plist
sudo launchctl load -w /Library/LaunchDaemons/com.mysql.mysqld.plist
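
If you want to double-check that the new value actually took effect after the restart, you can ask the server from the mysql client (swap in whatever user you normally connect as):

mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';"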

If you make this change on a production server, you may want to set it back to the default afterwards. I don't know what the consequences of a long timeout would be, but presumably queries stuck waiting on a lock will just block that much longer before they error out.

Categories: MySQL

Making mysqldump compatible with older versions

October 28th, 2005 Tony 1 comment

If you receive errors such as the following when trying to import your MySQL database dump, you are most likely attempting to dump a database from a newer version of MySQL and import it into an older version:

You have an error in your SQL syntax. Check the manual that corresponds to your MySQL server version for the right syntax to use near 'DEFAULT CHARSET=latin1'

OR

You have an error in your SQL syntax. Check the manual that corresponds to your MySQL server version for the right syntax to use near 'DEFAULT CHARSET=utf-8'

Solution?
Add the following parameter to your mysqldump statement:

--compatible=mysql40
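
For example, a full dump command might look something like this (the database name, user, and output file are just placeholders for whatever you're actually dumping):

mysqldump --compatible=mysql40 -u root -p mydatabase > mydatabase.sql

Then import the resulting file into the older server as usual.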

Categories: MySQL

mySql Doesn’t Scale – No Wonder Feedster Is So Damn Slow

November 30th, 2004 Tony No comments

I've been using mySql for many years now for small to medium-large projects. It's a fantastic free database that performs incredibly well. I especially want to tip my hat to the developers for mySql's extraordinary performance with full-text searching released with 4.0.

However, recently I’ve been loading some databases with a large amount of data and I started to notice an exponential degradation in performance once the dataset started to get large (over 80,000 records).

I thought that I probably just wasn't doing something right, even though I am very conscious of open database connections and make good use of indexes and database caching. I was fortunate enough to have the opportunity to chat with my favorite blogger Jeremy Zawodny at the fantastic Yahoo Search party (very San Francisco Bay Area, Red Herring, loads of venture capital, USWeb/CKS, late '90s feeling kind of corporate party) for the WebmasterWorld geeks in Vegas this month, and I asked him about my problem.


Essentially he clued me in and said that mySql wasn't really built for enterprise-level databasing and that I needed to look elsewhere. I had a few too many free Yahoo drinks and forgot to ask a good question. Hey Jeremy. Is Postgres any better?

Anybody have a copy of Oracle I can bum? :)

Categories: MySQL