Pencils Down

This weblog is about my experiences in software development.

Browsing Posts tagged Database

If your site is on a less-than-dedicated hosting package and you need to import a large MySQL script, you are typically up a creek.  I tried all the usual ideas: gzipping it, bzipping it, splitting the file up, etc.  Nothing worked.

One of my last searches turned up the simplest solution: use a small PHP program to load the file and execute it directly on the server!  The reference came from the 1and1 Q&A section, but I am guessing the same method will work on any Linux host.
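Here is a minimal sketch of the idea, assuming the dump was uploaded as dump.sql next to the script, with placeholder credentials you would swap for your host's:

<?php
// Connect with your host's credentials (placeholders here).
$mysqli = new mysqli('localhost', 'db_user', 'db_password', 'db_name');
if ($mysqli->connect_error) {
    die('Connect failed: ' . $mysqli->connect_error);
}

// Load the whole dump and run it in one multi-statement call.
$sql = file_get_contents('dump.sql');
if ($mysqli->multi_query($sql)) {
    // Drain every result set so the connection stays usable.
    do {
        if ($result = $mysqli->store_result()) {
            $result->free();
        }
    } while ($mysqli->more_results() && $mysqli->next_result());
}

echo $mysqli->error ? 'Import failed: ' . $mysqli->error : 'Import complete';
$mysqli->close();
?>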

Best of all: it’s very fast.

I have been developing with Hibernate for a couple of years and have seen hundreds of mapping files.  I would guess 1% of those required a fully qualified path to the class being referenced.  From the few of those I worked on, it was never clear why Hibernate, Java, Spring, or whatever could not find the class definition in those cases.

Just ran across another oddball that needed the full path to the object (even though there are several others in the same package that do not need the path).

Any ideas would be appreciated.

I think I have it: if there are multiple classes with the same name on the build path, then you have to differentiate.  For example, if you have a “package.one.Person” and a “package.two.Person”, then when Hibernate attempts to resolve your HQL “from Person” it will not know which Person class to locate.
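A quick sketch of the ambiguity, with hypothetical package names:

import java.util.List;
import org.hibernate.Session;

public class QualifiedHql {
    // Suppose both com.example.one.Person and com.example.two.Person are mapped.
    static List<?> findPeople(Session session) {
        // "from Person" would be ambiguous -- Hibernate cannot tell which
        // Person class is meant, so qualify the entity name in the HQL:
        return session.createQuery("from com.example.two.Person").list();
    }
}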

I hope this is the answer – it appears to make sense.

Interesting database performance article from Instagram.  The article focuses on their use of Postgres, but a couple of the key ideas are likely portable to other databases:

1. Partial indexes, which keep huge numbers of rows you will never query out of the index.  Create an index like:

CREATE INDEX CONCURRENTLY ON tags (name text_pattern_ops) WHERE media_count >= 100;

2. Functional indexes – the same idea: index only the part of the value you actually search on (queries that use both are sketched below):

CREATE INDEX CONCURRENTLY ON tokens (substr(token, 0, 8));
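Both only pay off when a query repeats the index's predicate or expression; hypothetical examples:

-- Can use the partial index: the query repeats its WHERE predicate.
SELECT name FROM tags WHERE name LIKE 'snow%' AND media_count >= 100;

-- Can use the functional index: the query repeats the indexed expression.
SELECT token FROM tokens WHERE substr(token, 0, 8) = 'abcd1234';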

I have not dealt with big data, but building indexes that deliberately ignore the useless parts of the data sounds like a great idea, and one I will be using.

Interesting article I ran across: the gist is to put the database on a flash drive.  Massive performance boost at a cheap cost.

I have done similar one-offs: putting the database on a flash drive or a faster drive, and using in-memory databases.  This sounds like the logical next step: just put the entire thing on a flash drive.

I think we have a fairly standard database that looks like a parts BOM.  Someone had the neato idea to use cascading delete in the project.  Hibernate’s cascading delete will follow any required foreign key and delete the children it finds there, recursively.  Good idea, huh?

No.  This means a parent entity, Parent, with primary key ParentPrimaryKey MUST cascade its primary key to all children.  So entity Child gets a primary key of ChildPrimaryKey PLUS ParentPrimaryKey, and this continues all the way down your entity tree.  This example assumes simple row IDs for primary keys.  If there is some other overriding attribute that makes every primary key a composite, like SystemThatThisEntityLivesOn, then every level carries one of those plus all of the ones from the levels above.
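A minimal sketch of what that key cascade looks like with JPA annotations (all names hypothetical):

import java.io.Serializable;
import java.util.Objects;
import javax.persistence.Embeddable;
import javax.persistence.EmbeddedId;
import javax.persistence.Entity;

@Embeddable
class ChildPrimaryKey implements Serializable {
    Long childId;
    Long parentPrimaryKey; // the parent's key, carried down a level

    @Override
    public boolean equals(Object o) {
        return o instanceof ChildPrimaryKey
            && Objects.equals(childId, ((ChildPrimaryKey) o).childId)
            && Objects.equals(parentPrimaryKey, ((ChildPrimaryKey) o).parentPrimaryKey);
    }

    @Override
    public int hashCode() {
        return Objects.hash(childId, parentPrimaryKey);
    }
}

@Entity
class Child {
    @EmbeddedId
    ChildPrimaryKey id; // a grandchild's key would embed all of this, and so on
}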

Then we realize we can’t actually delete some of the lower-level entities, because they are a lot of work to create.  So we configure Hibernate to stop at those entities.

Now, if we step back and look at the entity diagram for our database, it is not uncommon for an entity at a lower level to have close to 20 component parts in its primary key, all cascaded down from above.

But we only cascade-delete a very small subset of the entity tree.

Now throw into the mix some developers who don’t understand the above features of a relational database, or of Hibernate in general, and we can no longer use the LowLevelEntityId composite object that Hibernate generates because it is ‘unclean’.  We are to flatten all of these IDs wherever they are used.