We're going to dive right in and look at how to get JPEG-2000 support on an RPM-based Linux distribution by installing OpenJPEG, without having to build it from source.
About 2 years ago (soon after Drupal 7 was released), I had a site with a number of fields attached to a content type. When I had to clear caches and reload a page, it would take an awfully long time. This is because the site would need to load each node, run a query for each field, process the results, and more. The upside, though, was that all of these fields would then get placed into the cache_field table so the node could load faster on subsequent requests.
It has been 4 months since my last blog post. In that time span, we have seen doomsday come and go, I passed the 30-year mark (I suppose *that* was my doomsday?), and we are now over 3 months into the New Year. In that same span, my involvement in the Drupal community went down significantly (well...more so than in the months before it - I was spending far less time in IRC, less time in the issue queue...less than I would have liked in many ways). There have been changes, including:
- Leaving CalArts
- Starting a new job
We had an amazing Drupal meetup in Santa Monica a few days ago (link). Our turnout was much higher than it has been in quite a while (at least 40 people showed up) and the atmosphere was very cheerful. Organizers like Steve Rifkin and Ishmael Sanchez have added a lot of positive energy since they joined, and their efforts clearly showed that night. We had two presentations (I was one of the presenters) and two lightning talks. The other main presentation, by Ishmael Sanchez, and both lightning talks (by Justin Gossett and Chris Charlton) were simply fantastic.
Most folks who talk to me about linking content within a Drupal site (and who use a WYSIWYG module) know that I am a big fan of the Linkit module paired with Pathologic. Linkit provides a nice way to reference content within your site and keeps the URL simple, and Pathologic will convert it to the proper alias. However, I recently ran into a problem with the course catalog on our campus.
It's been over 3 months since I posted about installing Jenkins and Fabric on RHEL, and I wanted to follow up on the whole thing. Since starting work for the client, I've actually implemented Jenkins on our campus for various development projects to support the developers that we have, and so far it has been a wonderful experience.
I've been playing around with the new version of Migrate for a little while, but it's been on the more boring side: learning how to use Migrate with CSV files (which, admittedly, feels quite good ^_^). It was only after an email from Tom Camp, and in an effort to get my presentation on Migrate ready for NYC Camp and Drupal Camp LA...
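For readers who have not used the Migrate module's CSV source, here is a minimal sketch of what such a migration class can look like in Migrate 7.x-2.x. The file path, column names, and the 'article' bundle are placeholders for illustration, not details from an actual site:

```php
<?php

/**
 * A minimal Migrate 7.x-2.x class that pulls rows from a CSV file.
 * The path, columns, and 'article' bundle here are hypothetical.
 */
class ExampleCSVMigration extends Migration {

  public function __construct($arguments) {
    parent::__construct($arguments);

    // Each column: array(machine_name, 'Human-readable label').
    $columns = array(
      array('id', 'Unique row ID'),
      array('title', 'Title'),
      array('body', 'Body text'),
    );
    // Skip the header row of the CSV file.
    $this->source = new MigrateSourceCSV('/path/to/data.csv', $columns,
      array('header_rows' => 1));

    $this->destination = new MigrateDestinationNode('article');

    // Track which source row became which node.
    $this->map = new MigrateSQLMap($this->machineName,
      array('id' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE)),
      MigrateDestinationNode::getKeySchema()
    );

    $this->addFieldMapping('title', 'title');
    $this->addFieldMapping('body', 'body');
  }

}
```

A class like this gets registered via hook_migrate_api() and can then be run with `drush migrate-import`. This snippet needs a bootstrapped Drupal 7 site with Migrate enabled to actually run.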
I have recently been asked to help a client with setting up their server environments and to figure out a development workflow so the site can be moved from dev -> staging -> live in some manner. Tired of the way I was doing things myself, I took it as an opportunity to see some of the ways developers were setting up their development workflow.
A few months back, I blogged about creating dynamic migrations. With a small amount of code, you can do something very powerful: you can bring in large amounts of data that need to fit into different places with one simple class. And when all of these containers are holding close to the same kind of data, it makes it an obvious choice. Commerce Migrate approaches migrating data from Ubercart to Commerce in such a way and does a great job of bringing over the core fields of an Ubercart product. But what do you do when you need to add additional sets of data for a particular type of entity bundle? The client that needed my help had various kinds of information attached to their products - taxonomy terms for various vocabularies, additional image fields, text fields, stock, etc. These are fields that do not get associated with Commerce products / product displays in the initial migration. When I first saw this, I was completely stumped - it meant rewriting all the dynamic migrations that were being done by Commerce Migrate as actual migrations (not a task I was looking forward to, given that I would essentially be copying/pasting code to get the desired effect without actually using Commerce Migrate).
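One way to avoid rewriting everything is to subclass the migration Commerce Migrate already registers and bolt the extra mappings on top. The sketch below assumes a base class along the lines of Commerce Migrate's Ubercart product migration; the base class name may differ by version, and the field names are placeholders:

```php
<?php

/**
 * Sketch: extend a product migration that Commerce Migrate registers
 * dynamically, adding mappings for fields it does not handle.
 * The base class name and field names here are assumptions.
 */
class ExtendedProductMigration extends CommerceMigrateUbercartProductMigration {

  public function __construct($arguments) {
    parent::__construct($arguments);

    // Extra mappings on top of the core product fields the parent
    // class already maps (SKU, title, price, etc.).
    $this->addFieldMapping('field_brand', 'field_brand');
    $this->addFieldMapping('field_extra_images', 'field_extra_images');
    $this->addFieldMapping('field_stock', 'field_stock');
  }

}
```

The parent constructor still does all of the heavy lifting, so the dynamic behavior is preserved; only the additional per-bundle fields need to be spelled out.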
Yesterday evening, I was working with a client whose site does some interesting things with one of its custom search pages. They send AJAX requests to the backend to get 2 types of values for their user:
- A count of the total number of nodes of type X that matched the criteria
- A count of the total number of nodes of another type Y that are referenced by nodes of type X (Y can be referenced multiple times by various X, but for this we just want that total back)
Instead of opting for straight database queries to get the data, they were using EntityFieldQuery to get the initial list of X, since they were using fields. It's not quite as fast, but it's a much more flexible approach (and if they opt to change their field storage in the future to something like MongoDB, they can have something really fast without having to change a single line of code!). The one problem with EntityFieldQuery, however, is that it only returns a listing of entity IDs. That means if we want other pieces of data, we have to load up the entity. In their scenario, the only other piece of data they wanted to retrieve was the reference field data, and performing a full entity_load (or node_load, to be specific) would mean also loading up the 50 other fields they are storing. Doing a retrieval like this on uncached content meant that fetching this data alone took 3 to 4 seconds.
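One way around the full node_load() in a case like this is to attach just the one field to the stub objects EntityFieldQuery returns, using field_attach_load() with its 'field_id' option. A sketch of the pattern, where the bundle 'x_type' and field 'field_y_ref' are placeholder names:

```php
<?php

// Sketch: load a single field for the entities an EntityFieldQuery
// found, instead of doing a full node_load_multiple().
// 'x_type' and 'field_y_ref' are hypothetical names.
$query = new EntityFieldQuery();
$result = $query
  ->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'x_type')
  ->execute();

if (!empty($result['node'])) {
  // EFQ returns stub objects carrying nid, vid, and type - enough
  // for field_attach_load() to work with directly.
  $stubs = $result['node'];
  $field = field_info_field('field_y_ref');
  field_attach_load('node', $stubs, FIELD_LOAD_CURRENT,
    array('field_id' => $field['id']));

  foreach ($stubs as $stub) {
    // Only field_y_ref is populated on the stub; the other 50
    // fields were never queried.
    $items = field_get_items('node', $stub, 'field_y_ref');
    // ...tally up the Y references here.
  }
}
```

Because only one field table gets queried, this sidesteps loading the dozens of other fields that a full node_load() would pull in. Like the other snippets, this requires a bootstrapped Drupal 7 site to run.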