Best way to initialize a class in a WordPress plugin

When you’re developing a WordPress plugin, there are certain patterns and practices that are extremely useful to know and apply so that your code fits better with the platform as a whole.

One of those questions is what the best way to initialize a class in a plugin is, which this answer on the WordPress StackExchange covers in great detail, while also explaining other interesting topics and recommendations such as using an autoloader and the global access, registry and service locator patterns.
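As a quick illustration, here’s a minimal sketch of one common approach (the class and hook wiring below are hypothetical, not the exact code from that answer): keep the bootstrapping inside a class and hook its instantiation, instead of running everything at file load time.

// Minimal sketch, assuming a hypothetical My_Plugin class;
// adapt the names and hooks to your own plugin.
class My_Plugin {
    public function register() {
        add_action( 'init', array( $this, 'init' ) );
    }

    public function init() {
        // Register post types, shortcodes, etc.
    }
}

add_action( 'plugins_loaded', function() {
    $plugin = new My_Plugin();
    $plugin->register();
} );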

While you’re at it, you might also want to check Tom McFarlin’s posts on the subject.

Big companies that use PHP

Every now and then there are still people who can’t believe PHP can be used for a big, successful project, when in fact there are several examples of huge sites running on PHP.

Here’s how some of them share their experience.

Facebook

With over 1.49 billion active users, Facebook has been forced into finding creative, out-of-the-box solutions to scaling.

First, they introduced HipHop for PHP in 2010, a transpiler that converted PHP source code into C++, which was then compiled into a binary.

Even though the project was largely successful, it imposed an elaborate deployment process and suffered from incompatibilities with some PHP language features.

In December 2011, they released the HipHop Virtual Machine (HHVM), an open-source virtual machine based on just-in-time compilation that greatly improved performance while allowing an easier development and deployment process.

HHVM also helped boost the development of the PHP language itself, spurring lots of new features and the massive performance improvements of PHP 7.


Using Basic Authentication with the WordPress HTTP API

Basic Authentication is often used as a simple security measure or as a temporary authentication method while developing against certain APIs.

While the WordPress HTTP API doesn’t have explicit support for Basic Authentication, it’s still possible to use it by setting the Authorization header manually:

// $remote_api_endpoint, $username and $password are assumed to be
// defined elsewhere.
$request = wp_remote_post(
  $remote_api_endpoint,
  array(
    'body'    => array( 'foo' => 'bar' ),
    'headers' => array(
      // Basic auth is just "Basic " plus base64( "user:password" )
      'Authorization' => 'Basic ' . base64_encode( $username . ':' . $password ),
    ),
  )
);
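From there you can inspect the result like any other HTTP API response; a minimal sketch, assuming the $request from above:

// Handle transport errors first, then check the status code.
if ( is_wp_error( $request ) ) {
    // The request itself failed (DNS, timeout, etc.)
    error_log( $request->get_error_message() );
} elseif ( 200 === wp_remote_retrieve_response_code( $request ) ) {
    // Authenticated successfully; grab the response body.
    $body = wp_remote_retrieve_body( $request );
}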

Remember that Base64 is an encoding, not encryption: if you send the request unencrypted, all the headers, credentials included, travel in plain text, so you should only use this over HTTPS.

Use get_the_terms() instead of wp_get_object_terms()

I was recently debugging the front page of a WordPress site and found a lot of queries to the terms and term relationships database tables.

Digging a little deeper, I found that the culprit was a set of functions that were calling wp_get_object_terms() to get the terms for a set of looped posts… and then I thought… “wait a minute, shouldn’t WordPress be using the object cache for this?”

Well, it turns out that wp_get_object_terms() always queries the database.

If you’re looping over WP_Query results, you should prefer get_the_terms() instead. It’s pretty much the same for most use cases, but it uses the object cache, which by default gets populated with the terms for the posts matching your query, unless you specifically set update_post_term_cache to false when instantiating WP_Query.
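A minimal sketch of what that looks like in practice (the post type and taxonomy here are just examples): inside a regular WP_Query loop, the term cache is already primed, so get_the_terms() never touches the database.

$query = new WP_Query( array( 'post_type' => 'post' ) );

while ( $query->have_posts() ) {
    $query->the_post();
    // Served from the object cache that WP_Query populated
    $terms = get_the_terms( get_the_ID(), 'category' );
    if ( $terms && ! is_wp_error( $terms ) ) {
        echo esc_html( implode( ', ', wp_list_pluck( $terms, 'name' ) ) );
    }
}

wp_reset_postdata();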

There are several differences, though: wp_get_object_terms() can take arrays as its first and second arguments, while get_the_terms() only takes a single post ID (or object) as its first argument (so you can’t get the terms for a bunch of posts in one function call) and a string for the taxonomy (so you can’t get the terms for several taxonomies at once); the former also accepts a third array of arguments, which the latter doesn’t have.

You could still emulate some of this and benefit from the object cache; for instance, let’s see how you would get the names of the my_custom_tax terms for the current post, ordered by usage count in descending order.

// using wp_get_object_terms()
$popular_terms = wp_get_object_terms( $post->ID, 'my_custom_tax', array(
    'orderby' => 'count',
    'order'   => 'DESC',
    'fields'  => 'names',
) );

// using get_the_terms()
$popular_terms = get_the_terms( $post->ID, 'my_custom_tax' );
// get_the_terms() returns false or a WP_Error when there's nothing usable
if ( $popular_terms && ! is_wp_error( $popular_terms ) ) {
    // terms come back ordered alphabetically, so let's order by count;
    // note that usort() sorts in place and returns a boolean, so we
    // must not assign its return value back to $popular_terms
    usort( $popular_terms, function( $a, $b ) {
        if ( $a->count < $b->count ) {
            return 1;
        }
        if ( $a->count > $b->count ) {
            return -1;
        }
        return 0;
    } );
    // we only need the names, so...
    $popular_terms = wp_list_pluck( $popular_terms, 'name' );
}

Even if it’s somewhat more troublesome, it’s probably worth the effort if you’re trying to maximize performance.

Using Envoy to automate repetitive tasks

Envoy is a task runner originally developed for Laravel, but you can also use it on any other kind of project.

It’s a very easy way to define tasks with Blade syntax and simple terminal commands, which you can run on remote servers via SSH (including parallel execution) or locally.

Thanks to its simplicity, it’s great to quickly automate repetitive tasks. For instance, this is something I use for importing a replica of the production DB of a site:

@servers(['production' => 'foobar.com', 'local' => 'localhost'])

@macro('sync-db')
    dump-production-db
    get-production-db
    import-production-db
@endmacro

@task('dump-production-db', ['on' => 'production'])
    echo 'Creating production DB dump';
    cd ddbb
    mysqldump --no-autocommit --skip-extended-insert --single-transaction --ignore-table=foobar.wp_simple_history_contexts --ignore-table=foobar.wp_simple_history_history foobar_production | gzip > foobar.production.sql.gz
@endtask

@task('get-production-db', ['on' => 'local'])
    echo 'Copying DB dump from production server';
    cd ddbb
    rsync -P foobar:~/ddbb/foobar.production.sql.gz .
@endtask

@task('import-production-db', ['on' => 'local', 'confirm' => true])
    cd ddbb
    gzip -d -f foobar.production.sql.gz
    sed 's/www.foobar.com/www.foobar.lo/g' -i foobar.production.sql
    echo 'Importing production DB replica';
    mysql -v foobar_development < foobar.production.sql
@endtask
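Assuming the file is saved as Envoy.blade.php in the project root (Envoy’s convention), the whole chain runs with a single command:

envoy run sync-db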

Fixing 502 errors with nginx and PHP-FPM

Some time ago, a server started throwing 502 Bad Gateway errors in a random, non-reproducible fashion. That is, the ideal conditions to become a real headache.

The server runs on nginx + PHP-FPM, but even after ruling out the most common causes of trouble on this kind of setup, the errors kept happening. Worse still, the service wasn’t just interrupted: it wouldn’t work again until nginx was manually restarted.

I finally found the cause: apparently, there were certain PHP processes that never finished properly. Even though max_execution_time was correctly configured, under certain conditions some processes never stop running. The fix, on the PHP-FPM side:

; terminate execution that doesn't obey max_execution_time
request_terminate_timeout = 30s

; log requests with an excessive execution time
slowlog = /var/log/php5-fpm/$pool.log.slow
request_slowlog_timeout = 30s

Then, on the nginx side, it’s also advisable to set limits on the connection the web server makes to the FastCGI server (in this case, PHP-FPM):

http {
    ...
    # fix for "upstream sent too big header"
    fastcgi_buffers 8 16k;
    fastcgi_buffer_size 32k;
    # timeout for connections between nginx and FastCGI
    fastcgi_connect_timeout 30s;
    fastcgi_send_timeout 30s;
    fastcgi_read_timeout 30s;
    ...
}

Some notes:

  • According to the nginx documentation, the fastcgi_connect_timeout value should not exceed 75 seconds
  • For fastcgi_send_timeout and fastcgi_read_timeout, the timeout is measured between two successive write/read operations, respectively, and doesn’t cover the transmission of the whole response. That is, if after the given period the FastCGI server hasn’t received/sent any data, the connection is closed.