
Tuesday, August 28, 2012

Android: can't upgrade read-only database from version 0 to 1

Solution:

If you have named one of your SQLite tables "default", change the name to something else.

Details:

So I was poking around with Android programming the other day, and I ran into this error:

04-12 14:35:09.779: ERROR/AndroidRuntime(790): Caused by: android.database.sqlite.SQLiteException: Can't upgrade read-only database from version 0 to 1: /data/data/com.example.test/databases/mydatabase.db
04-12 14:35:09.779: ERROR/AndroidRuntime(790): at android.database.sqlite.SQLiteOpenHelper.getReadableDatabase(SQLiteOpenHelper.java:170)


The line causing the error was simply calling SQLiteOpenHelper.getReadableDatabase():

SQLiteDatabase db = dbHelper.getReadableDatabase();

I finally narrowed it down to the database table name I was using: "default". As soon as I changed the table name, it worked fine. In fact, I simply changed it from "default" to "default1".

Just to be clear, I wasn't doing anything fancy. I was using pretty basic code to create the database:

public class DatabaseHelper extends SQLiteOpenHelper {
    public static final String DB_TABLE_NAME = "default";
    ...
    public static final String DB_TABLE_CREATE =
        "CREATE TABLE " + DB_TABLE_NAME...


As it turns out, this actually is a SQLite thing, even though the SQLite documentation only warns against table names beginning with "sqlite_": DEFAULT is a reserved SQL keyword, so a bare CREATE TABLE default ... statement fails with a syntax error unless the name is quoted. Presumably that failure during onCreate() left the database stuck at version 0, which would explain the misleading "can't upgrade" message. As far as I can tell, Android doesn't surface the underlying syntax error anywhere obvious.
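You can see the keyword collision directly in the sqlite3 shell; this is a reconstructed session for illustration, not a capture from the actual device:

sqlite> CREATE TABLE default (_id INTEGER PRIMARY KEY);
Error: near "default": syntax error
sqlite> CREATE TABLE "default" (_id INTEGER PRIMARY KEY);
sqlite> .tables
default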

I checked the database after I changed the name, and sure enough the "default" table had never been created, which fits with the CREATE TABLE statement having failed:

sqlite> .tables
android_metadata default1


Not a bug exactly, then, just a thoroughly unhelpful error message. If you really need a table named "default", quoting the identifier in the CREATE TABLE statement should work too, though renaming is simpler.

Monday, January 9, 2012

Using LIMIT with an Android ContentProvider

So you're using a content provider in your Android app, and you want to supply a LIMIT parameter. The problem is, ContentProvider's query() method doesn't have a limit argument. Fortunately there's an easy solution: put the limit into a query parameter, as the steps below show (a consolidated sketch follows them):
  1. First, in your content provider, create a constant for the query parameter key:
    public class MyProvider extends ContentProvider {
        public static final String QUERY_PARAMETER_LIMIT = "limit";

  2. In the query method of your content provider, get the limit from the Uri:
    public Cursor query(...
        String limit = uri.getQueryParameter(QUERY_PARAMETER_LIMIT);

  3. Still in the query method of your content provider, you can pass the limit on when creating the cursor:
    SQLiteQueryBuilder qb = new SQLiteQueryBuilder();
    Cursor c = qb.query(db, projection, selection, selectionArgs, null, null, sortOrder, limit);

  4. Lastly, when calling your content provider (via getContentResolver().query(), a CursorLoader, etc.), you can append the limit to the content URI like so:
    String someLimit = "15";
    cursor = getContentResolver().query(
        // add the limit to the content uri
        MyProvider.CONTENT_URI.buildUpon().appendQueryParameter(
            MyProvider.QUERY_PARAMETER_LIMIT,
            someLimit).build(),
        ...
The cool thing is you can use this same technique to pass other parameters to your query, like DISTINCT, for instance.
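And putting the provider-side pieces together, here's the consolidated sketch promised above: a minimal version of what the whole query() method might look like. The dbHelper field and TABLE_NAME constant are assumptions for illustration; they weren't part of the steps above:

    @Override
    public Cursor query(Uri uri, String[] projection, String selection,
            String[] selectionArgs, String sortOrder) {
        // pull the optional limit out of the uri; null means "no limit"
        String limit = uri.getQueryParameter(QUERY_PARAMETER_LIMIT);

        SQLiteQueryBuilder qb = new SQLiteQueryBuilder();
        qb.setTables(TABLE_NAME); // assumed table name constant

        // dbHelper is assumed to be the provider's SQLiteOpenHelper
        SQLiteDatabase db = dbHelper.getReadableDatabase();
        Cursor c = qb.query(db, projection, selection, selectionArgs,
                null, null, sortOrder, limit);

        // so cursor consumers are notified when the underlying data changes
        c.setNotificationUri(getContext().getContentResolver(), uri);
        return c;
    }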

Friday, September 9, 2011

coming soon: memory-mapped db for OpenLDAP

Short version:



I just ran across a site announcing a new database backend for OpenLDAP that's designed to be completely mapped to memory and is supposed to be faster, more memory-efficient, and much easier to configure:

Memory-mapped Database for OpenLDAP


Long version:



I'm a huge fan of OpenLDAP. Not only is it the fastest and most scalable implementation of LDAP (see the bottom of this post for sources), but best of all, it's open source. Configuring it for optimal performance, however, is easier said than done: you have to configure indexes, the database cache size, the IDL cache size, and of course the good ole entry cache size. Howard Chu, the current chief architect of OpenLDAP, describes the process pretty well:

"...it requires careful tuning to get good results and the tuning aspects can be quite complex. Data comes through three separate layers of caches before it may be used, and each cache layer has a significant footprint. Balancing the three layers against each other can be a difficult juggling act."

The good news: that quote comes from a site I just ran across where Chu announces a new database backend, known as "back-mdb", that's designed to be completely mapped to memory. In his description of back-mdb, Chu uses words like "extremely fast," "memory efficiency," and my favorite, "trivial configuration." I can't wait! (Yes, I'm a giant nerd.)

Chu said back-mdb will be ready sometime this month at the earliest, but whenever it comes, it will be worth waiting for. Here's the link if you want to check out the details:

Memory-mapped Database for OpenLDAP

And here are those sources I promised you, ripped entirely from my wiki:

Monday, July 18, 2011

Python MySQLdb UnicodeDecodeError: 'ascii' codec can't decode byte

Solution:

If you run into the error mentioned in the title of this post using Python's MySQLdb module, version 1.2.1 or earlier, decode your data/query first:

mydata = mydata.decode('utf8')

(modifying 'utf8' to whatever encoding your data happens to be in)

Details:

So I was writing some code in Python on Ubuntu, and it was working just fine. When I went to run it on RHEL, I got this error:

Traceback (most recent call last):
File "", line 50, in ?
File "/usr/lib64/python2.4/site-packages/MySQLdb/cursors.py", line 146, in execute
query = query.encode(charset)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 223: ordinal not in range(128)


My first thought was that it was due to the incredibly old version of Python that ships with RHEL 5 (Python 2.4), but it didn't take me long to realize the problem was with the MySQLdb module itself. Ubuntu 10.10 ships with version 1.2.2 of that module, while RHEL 5 ships with version 1.2.1. A minor difference, but apparently in that time this bug was fixed:
http://sourceforge.net/tracker/index.php?func=detail&aid=1521274&group_id=22307&atid=374932

Apparently MySQLdb 1.2.1 indiscriminately encodes the query to utf8 (well, at least when you specify utf8 as the database character set) without checking whether the string is already a utf8 byte string. In Python 2, calling encode() on a byte string implicitly decodes it as ASCII first, which is where the 'ascii' codec error comes from. My solution was just to decode my data from utf8 (to unicode) before passing it to my MySQL query, at which point the encoding works just fine.

Like so (the decode on the first line is the relevant part):

# decode() returns a new unicode object, so the result has to be reassigned
mydata = mydata.decode('utf8')
query = ("INSERT INTO %(database)s (%(column)s) VALUES ('%(value)s')"
         % {'database': database, 'column': column, 'value': mydata})
cursor.execute(query)


Of course, you should modify the 'utf8' part to whatever encoding your data is in.

Edit: If you're using MySQLdb.escape_string(), make sure you run that first, before the decode, like so:

mydata = MySQLdb.escape_string(mydata).decode('utf8')
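As an aside (this isn't from the original post): MySQLdb can do the escaping and quoting for you if you pass the values separately and let cursor.execute() substitute them. Table and column names can't be parameterized, so the made-up names here are hard-coded:

# values go in as a tuple; MySQLdb escapes and quotes them itself
query = "INSERT INTO mytable (mycolumn) VALUES (%s)"
cursor.execute(query, (mydata,))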

Tuesday, March 8, 2011

Python and MySQL autocommit

Solution:

If your MySQL database is using the InnoDB engine, commit your changes after database transactions:
cursor.connection.commit()

Or just turn on autocommit to automatically commit after every database transaction:
cursor.connection.autocommit(True)

Details:

So I was using Python's MySQLdb module to edit a MySQL database, and I noticed that even though Python was telling me my modifications were taking place, I wasn't seeing any changes. In addition, when I would log onto the database from something other than Python and try to make changes, I would get this error:

ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

Well, apparently the MySQLdb module now disables MySQL's autocommit by default. So now, when I'm done with my transactions to the database, I need to commit my changes by calling the commit() function of the connection object, which is what's returned when you call MySQLdb.connect(). I haven't really been using that object in my code other than to create the cursor object, which is what I normally use:

db = MySQLdb.connect(...)
cursor = db.cursor()


But the cursor can access the connection object, which I can then use to commit the changes:

cursor.connection.commit()
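For context, here's the whole flow in a minimal sketch (the connection parameters and table/column names are made up):

import MySQLdb

db = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='mydb')
cursor = db.cursor()
cursor.execute("UPDATE mytable SET mycolumn = %s WHERE id = %s", ('new value', 1))
cursor.connection.commit()  # without this, the change never becomes visible to other connections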

The strange thing with all of this is that I just started noticing this problem, even though according to the MySQLdb module authors, this behavior (disabling autocommit by default) has been in place since version 1.2.0 of the module. But when I look at the package versions of python-mysqldb for Ubuntu (what I'm currently using and have been for a while), it looks like it's been past version 1.2.0 for the last several years:
http://packages.ubuntu.com/search?keywords=python-mysqldb&searchon=names&suite=all&section=all

Maybe this "feature" has just recently made it into Ubuntu's package for this module. Or maybe I'm missing something else here. At any rate, at least I know what's going on.

You can see here for more information:
http://mysql-python.sourceforge.net/FAQ.html#my-data-disappeared-or-won-t-go-away

Edit:
Okay, so apparently what happened is that my other code that modified MySQL databases used the default engine (MyISAM), while the database I was having problems with used the InnoDB engine, which is a transactional storage engine. That explains why I only saw this issue just now.

More information here:
http://stackoverflow.com/questions/1617637/pythons-mysqldb-not-getting-updated-row

As well as an alternate solution: instead of committing after every transaction, I can turn autocommit on when I'm working with databases that use the InnoDB engine, so they'll behave just like the rest:

cursor.connection.autocommit(True)
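The connection object itself has the same autocommit() method, so you can also just flip it on right after connecting. A quick sketch, again with made-up connection parameters and names:

import MySQLdb

db = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='mydb')
db.autocommit(True)  # each statement is committed as soon as it runs

cursor = db.cursor()
cursor.execute("UPDATE mytable SET mycolumn = %s WHERE id = %s", ('new value', 1))
# no commit() needed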

Thursday, November 18, 2010

UnsatisfiedLinkError exception loading native library: njni11

So... I was working on automating an Oracle 11.1 database installation on a RHEL 5.5 machine. I had the software install automated, but when running dbca to create the database, I kept getting this error:

UnsatisfiedLinkError exception loading native library: njni11

I googled it, but all I found were posts saying to make sure I had the necessary packages installed, in particular libaio and libaio-devel. Well, I had all of those installed, but I was still getting the error.

It finally hit me: I was automating everything (using Puppet) and hadn't explicitly configured those packages to be installed before Oracle, so they must've gotten installed afterward. The Oracle software install still succeeded for some reason, but dbca wouldn't run. Thankfully it was a VM, so I just told Puppet to make sure the required packages were installed first, reverted to my pre-Oracle-install snapshot, et voilà!

Now time to figure out what these listener.ora and tnsnames.ora files are for and how they should properly be configured...