boost.python: os.environ and LD_LIBRARY_PATH

This article is again about programming languages, so if you are not interested in this area, see you next time.

My first python project is a test suite. I like Martin Fowler's articles, so, naturally, my first program is a test. My projects usually start with tests, documentation, and interface design, developed incrementally.

This is about python's os.environ and LD_LIBRARY_PATH. It is a follow-up to the boost.python story. My development environment is Linux, but I plan to move to Windows later as well.

In the python interpreter, we can change environment variables through os.environ. For example, if you want to change LD_LIBRARY_PATH:

  os.environ['LD_LIBRARY_PATH'] = '/some/directory/lib'

I thought this was all there was to LD_LIBRARY_PATH, but I hit a wall. My department's policy doesn't allow developers to have administrator rights or Internet access. Therefore, if I want to use something not on my machine, for example boost or python, I first need to ask for the source code, and then think about how to install the software locally. In this environment, setting the shared library path is important.

My project has the following directory structure.

    test_x +--+ pymodule +--+ mypython_binding_module.so
           +--+ testbase +--+ test_run.py
           +--+ boost144 +--+ lib +--+ ...
                                  +--+ libboost_python.so

The problem here is that mypython_binding_module.so depends on libboost_python.so. LD_LIBRARY_PATH is an environment variable. Since environment variables are global to a program and have implicit effects, I usually try to avoid them. But this time I need one, and I try to set it inside the python interpreter only.

The python interpreter can recognize mypython_binding_module.so as a python module, so I can set sys.path to import it. But mypython_binding_module.so depends on libboost_python.so, and that dependency cannot be resolved through sys.path; dlopen() does that task. Therefore, I need to set LD_LIBRARY_PATH so that dlopen() can find libboost_python.so.

I set LD_LIBRARY_PATH through os.environ; however, this did not work. I first suspected that python's os module only reports the current environment variables but does not actually change them. However, the manual says python does change them.

The problem was dlopen(). According to its manual, dlopen() only looks at LD_LIBRARY_PATH as it was when the program started. If the process changes LD_LIBRARY_PATH afterwards, it does not affect dlopen()'s behavior. This is understandable behavior considering security.

Once I understood this, I changed LD_LIBRARY_PATH through os.environ and then created a child process. In the child process, the change to LD_LIBRARY_PATH is effective as expected.
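To illustrate, here is a minimal sketch of the child-process workaround; the path '/some/directory/lib' is just a placeholder, as in the example above:

```python
import os
import subprocess
import sys

# Change LD_LIBRARY_PATH in the current interpreter. This updates
# os.environ and the real process environment, but it does NOT affect
# dlopen() in this already-running process.
os.environ['LD_LIBRARY_PATH'] = '/some/directory/lib'

# A child process inherits the modified environment, so dlopen() in the
# child (and hence an 'import' of a binding module) sees the new path.
child = "import os; print(os.environ['LD_LIBRARY_PATH'])"
out = subprocess.check_output([sys.executable, '-c', child])
print(out.decode().strip())  # -> /some/directory/lib
```

In my real setup, the child process is the python interpreter that actually imports the binding module.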

So os.environ can change environment variables, but whether a given function picks up the change depends on the function. This is a simple thing, but it took me two days to realize it.

Still, setting a library path seems like a common problem. I looked into the documentation (e.g., the site module), but I might have missed something, since I am only a beginner in python.


German translation of Murakami Haruki's cool and wild daydream

We translated Murakami Haruki's cool and wild daydream (in Murakami Asahidou Haiho-, Shinchousha) into German. (村上春樹のワイルドでクールな白昼夢)


boost.python: how to pass a python object to C++ world and how to return a C++ created object to the python interpreter

This is a programming language story. If you are not interested in such themes, see you next time...

I usually use ruby for scripting; however, in industry python is quite widely used. When I was a student, I was interested in programming languages, but my interest in each of them lasted only a few weeks and I did not use most of them. Recently I have not even looked into new ones; I feel I am getting old... This time, I will try what is called ''python.'' (In my Japanese blog, this is ''皆のすなる python というものを,'' echoing the opening of the Tosa Nikki by Ki no Tsurayuki, written around 935. Sadly, I cannot translate it well with my poor English.)

First I read a book, Learning Python by Mark Lutz. It took around two weeks; it was fun and I found python interesting. Then this week, I started to implement a program.

There are a lot of introductory web pages on python, so it is not worth adding something similar to the net, and readers would be bored by such a blog entry. Therefore, I will go into a bit more detail and write about boost.python. This week, I am developing a python binding of a C++ library using boost.python.

boost.python is a library with which you can call the python interpreter from C++, or call C++ functions from the python interpreter. I would like to extend python by binding a C++ library.

The documentation of boost.python is quite good, but there are rather few examples. Then again, not so many people use boost.python compared with ordinary python users, so I think the documentation is well done considering that. I hope my code fills this gap a bit.

The code (passobj_mod.cpp) is an example of how to pass a python dictionary from the interpreter to C++ code, and how to pass a C++-created object back to the python interpreter. I find boost.python itself amazingly well designed and developed; I could implement this kind of task easily, though it took me a day to find out how.

/// passobj_mod.cpp: How to pass a python dict to a C++ function/method.
/// Copyright (C) 2010 Shitohichi Umaya
/// test for using C++ class from python: pass a string or a dict to a
/// C++ method

#include <boost/python.hpp>
#include <boost/python/object.hpp>
#include <boost/python/extract.hpp>
#include <boost/python/list.hpp>
#include <boost/python/dict.hpp>
#include <boost/python/str.hpp>

#include <stdexcept>
#include <iostream>

/// using namespace only for this example
using namespace boost::python;

/// C++ object which python can instantiate
class PassObj {
public:
    /// constructor
    PassObj() {
        // empty
    }

    /// pass a python object, but this should be a python dictionary.
    /// \param[in] pydict a dictionary
    void pass_dict(object pydict) const {
        extract< dict > cppdict_ext(pydict);
        if(!cppdict_ext.check()){
            throw std::runtime_error(
                "PassObj::pass_dict: type error: not a python dict.");
        }
        dict cppdict = cppdict_ext();
        list keylist = cppdict.keys();

        // careful with the name len: there is already a conflict,
        // so qualify it as boost::python::len.
        int const len = boost::python::len(keylist);
        std::cout << "len(keylist) = " << len << std::endl;
        for(int i = 0; i < len; ++i){
            // operator[] is in boost::python::object
            std::string const keystr =
                extract< std::string >(str(keylist[i]));
            std::string const valstr =
                extract< std::string >(str(cppdict[keylist[i]]));
            std::cout << "key:[" << keystr << "]->[" << valstr << "]"
                      << std::endl;
        }
    }

    /// pass a python object, but this should be a python string.
    /// \param[in] pystr a string
    void pass_string(object pystr) const {
        extract< std::string > cppstr_ext(pystr);
        if(!cppstr_ext.check()){
            throw std::runtime_error(
                "PassObj::pass_string: type error: not a python string.");
        }
        std::string const cppstr = cppstr_ext();
        std::cout << "passed string: " << cppstr << std::endl;
    }

    /// return a dict object. Does this work?
    /// \return a dict object
    object return_dict() const {
        dict cppdict;
        cppdict["this"]   = "work?";
        cppdict["no"]     = "idea";
        cppdict["number"] = 1;
        return cppdict;
    }

    /// return a str object. Does this work?
    /// \return a str object
    object return_string() const {
        return str("Incredible, this works.");
    }
};

/// importing module name is 'passobj_mod'
BOOST_PYTHON_MODULE(passobj_mod)
{
    class_< PassObj >("passobj")
        .def("pass_dict",     &PassObj::pass_dict,
             "pass python dict object to c++ method")
        .def("pass_string",   &PassObj::pass_string,
             "pass python string to c++ method")
        .def("return_dict",   &PassObj::return_dict,
             "return C++ created dict to python")
        .def("return_string", &PassObj::return_string,
             "return C++ created str to python");
}

# test_passobj_mod.py
# test pass object module, python side implementation
# Copyright (C) 2010 Shitohichi Umaya
import passobj_mod

pobj = passobj_mod.passobj()
print dir(pobj)
pobj.pass_string('This is python string, can you hear me?')
pdict = {'pythondict': 1, 'foo': 'bar', 'Bach': 'Goldberg Variation'}
pobj.pass_dict(pdict)
dict_from_cpp = pobj.return_dict()
str_from_cpp  = pobj.return_string()
print dict_from_cpp
print str_from_cpp
#! /bin/sh -x
# build.sh
# Copyright (C) 2010 Shitohichi Umaya
MOD_CPP_SOURCE_BASE=passobj_mod
PYTHON_INCLUDE=`python-config --includes`

g++ ${PYTHON_INCLUDE} -DPIC -shared -fPIC ${MOD_CPP_SOURCE_BASE}.cpp -o ${MOD_CPP_SOURCE_BASE}.so -lboost_python

test_passobj_mod.py is a test example that calls the methods of PassObj in passobj_mod.cpp. build.sh is a sh script to build the python module passobj_mod.so. If you create the files, make build.sh executable (chmod +x build.sh), and type:

  % ./build.sh
  % python test_passobj_mod.py

then I hope you can try it yourself. I tested this on Ubuntu Linux 9.04.

This is my first python script other than the examples in the book. There is no class, no def. However, I still find it fun to write a program, and it is always rewarding when a program runs as I expected.

unsigned and size_t are hard.

This is a computer language story. If you are not interested in that, I recommend going on to the next entry.

C++ has 'unsigned' types, e.g., 'unsigned int.' For example, a negative index into an array usually doesn't make sense, so this type is used for indices. The type is good for bit-array storage, but using an unsigned int instead of an int just to gain one more bit is almost never a good idea. (*) The combination with implicit conversion makes this especially hard. I find unsigned numbers unintuitive: when a computation's result would be negative, the stored value still stays positive.

For example, unsigned int -1 is usually equal to 4294967295 on a 32bit machine. This depends on how integer numbers are represented in the computer. Even someone who knows this internal representation and wrote working code in a 32bit environment may find it doesn't work on a 64bit machine: for example, he/she assumes size_t and unsigned int are the same type, and uses -1 as an illegal value.

The illegal-index check in the following code usually doesn't work on a 64bit machine, but works on a 32bit machine.
#include <iostream>
#include <vector>

void foo(std::vector< int > & vec, size_t idx){
    if(idx == size_t(-1)){
        std::cout << "Illegal index" << std::endl;
        return;
    }
    std::cout << "OK! accessing a vector with idx = " << idx << std::endl;
    // vec[idx] = ...
}

int main()
{
    std::vector< int > vec;
    unsigned int idx = -1;      // intended as an illegal index
    foo(vec, idx);              // idx is implicitly converted to size_t

    unsigned int uint_minus_1(-1);
    size_t   size_t_minus_1(-1);
    size_t   size_t_casted = static_cast< size_t >(uint_minus_1);

    std::cout << "(unsigned int)(-1)  = " << uint_minus_1   << std::endl;
    std::cout << "size_t(-1)          = " << size_t_minus_1 << std::endl;
    std::cout << "size_t(-1) (casted) = " << size_t_casted  << std::endl;
    return 0;
}
The result is as follows (on a 64bit machine).
nvlp[16]bash % ./unsigned_fail
OK! accessing a vector with idx = 4294967295
(unsigned int)(-1)  = 4294967295
size_t(-1)          = 18446744073709551615
size_t(-1) (casted) = 4294967295

First of all, assigning -1 to an unsigned type is a problem. Also, implicit conversion makes the problem invisible. In C++, -1 has a different value depending on the type it ends up in; I think this is very difficult. Recent compilers may report this as a warning, and I watch for this warning since it is a potential error. One of my friends told me, "unsigned is evil." I also try to avoid unsigned types.

If you have learned computer architecture, it sounds natural that -1 is equal to 4294967295. But nowadays I try to think of it as actually strange. Thinking that way, I could have avoided the bugs in this example, and I think I can write more portable and solid code.
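As a cross-check, the same wraparound values can be reproduced from python with the ctypes module. This is just a sketch assuming the common data model where unsigned int is 32bit and size_t is 64bit, as in the C++ output above:

```python
import ctypes

# Two's-complement wraparound: -1 stored in an unsigned type becomes
# the maximum value of that type.
uint_minus_1   = ctypes.c_uint32(-1).value   # models 32bit unsigned int
size_t_minus_1 = ctypes.c_uint64(-1).value   # models 64bit size_t

print("(unsigned int)(-1) =", uint_minus_1)    # -> 4294967295
print("size_t(-1)         =", size_t_minus_1)  # -> 18446744073709551615

# The comparison that silently fails in foo() above: the two "-1"
# values are different numbers, so the illegal-index check misses.
print(uint_minus_1 == size_t_minus_1)          # -> False
```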

(*) Bjarne Stroustrup, C++ Programming Language 3rd Ed. Section 4.4, paragraph 2. p.73


Personal annotations of Veach's thesis (16) p.168

p.116 4.6 Adjoint operator

I look back at chapter 4.6, since the adjoint operator is an important topic now. This operator is written as a Hermitian adjoint, i.e., a conjugate transpose, and I wondered what imaginary energy could possibly be. There was a story about Hilbert spaces, so maybe it is related to that. But up to chapter 7, this operator indicates only real symmetry.

In the light transport equation, light is emitted to a surface, reflected, and then reaches the camera. If the camera and the light were exchanged, the equation should stay the same. I think that is what this adjoint operator is about.

Carsten W. gave me a comment that my understanding sounds OK. Thanks.

p.122 particle tracing, Equation 4.32: a comment on my comment

My blog entry explained why the weight is there, but p.226 Equation (8.9) explains it briefly and precisely. It is also general: I only handled two cases, whereas Veach's form is an integral form and everything is there. How simply he explains this.

p.226 vertex

Whenever I hear 'vertex' I imagine a corner point of a triangle/polygon. In this thesis, a vertex is a sampling point, and sampling points are connected by edges. Some people state 'the distance' as the number of vertices, but in this thesis the distance is the number of edges. These differ by one.


Personal annotations of Veach's thesis (15) p.168

p.168 D3 and Equation 5.30 updated

I got a comment about Equation 5.30 from my friend Daniel. However, Equation (1) has an absolute value operator at |f'(x_0)|, which I do not understand yet. To see this equation, we can think of it as a chain rule: if we write it in an integral form (as the substitution rule), as in Equation (2), it is substantially equivalent to the chain rule. But still, the domain doesn't match Equation 5.30.

On the other hand, if we read the thesis up to p.170, Equation 5.35 is a definition, and it looks like it is stating a linearity; that is Equation (3). Maybe the question is: what is the linear coefficient a_{\beta}? That's my guess. Whatever coefficient makes this equation consistent through the Dirac \delta could be the definition in Equation 5.35.

This has an advantage: if this works, we don't need to integrate every time, and this \delta behaves just like an ordinary function. This is convenient. (As Veach says in the text, this kind of machinery is a great part of the thesis.)

The following is slightly changed from the thesis, but basically the same as the equations on p.170. Note that \beta is a bijective function, with \beta(x) = x' and x = \beta^{-1}(x'). Comparing the first equation and the last one, we get the result. Yes, finally we got it this way. In the thesis, this replacement of x and x' is not done at a single point, maybe for some reason, but for my own understanding I replace them all at once.
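For reference, the textbook substitution rule for the Dirac delta under a bijective, differentiable map (written in my own notation, not Veach's exact form) is the following; the Jacobian factor of the substitution is presumably where the absolute value comes from:

```latex
% Substitute x' = \beta(x), dx = dx' / |\beta'(\beta^{-1}(x'))|,
% inside the integral of a test function g:
\int g(x)\,\delta\bigl(\beta(x)-\beta(x_0)\bigr)\,dx
  = \int g\bigl(\beta^{-1}(x')\bigr)\,
        \delta\bigl(x'-\beta(x_0)\bigr)\,
        \frac{dx'}{\lvert \beta'(\beta^{-1}(x')) \rvert}
  = \frac{g(x_0)}{\lvert \beta'(x_0) \rvert},
% hence, as distributions,
\delta\bigl(\beta(x)-\beta(x_0)\bigr)
  = \frac{\delta(x-x_0)}{\lvert \beta'(x_0) \rvert}.
```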

In the end, the domain of the measure changed linearly. This is interesting. But the integration operator is a linear operator, so maybe it is natural. Still, the result is interesting.

The last question: why does |f'(x_0)| have the absolute value operator? This question about Equation 5.30 still remains for me. But it is much better now; I think I can almost see it. (Or I am seeing an illusion, as usual...)

I thank Daniel S. for giving me hints on this question. I also thank him for his great patience with my basic questions.


Personal annotations of Veach's thesis (14) p.137, p.140, p.168

p.137 Shading normal

The shading normal differs from the geometric normal. The shading normal has no physical meaning, therefore it causes trouble if we use it to compute physical energy. Indeed, if we use bump mapping, the energy computation for the geometry must be handled specially.

p.140 Figure 5.1

The light energy inside a refractive object should be scaled; I had never thought about that. Indeed, water (a refractive object) makes the incoming light denser inside: the energy is concentrated inside the water, so we need to account for this. In the usual case, when light enters the refractive object the energy is concentrated, and when it leaves the same object the inverse effect un-concentrates the energy, so the two cancel; that is why I had never thought about it. But in this figure, the algorithm needs the energy inside the water, so this effect must be considered. For example, if the camera were inside the water, there could also be a problem.

p.168 D3 and Equation 5.30

p.168's D3 and Equation 5.30 are not clear to me. If anybody understands them, please teach me.

Personal annotations of Veach's thesis (13) p.88, p.122

p.88 Notation in Equation 3.6.3

In Equation 3.6.3, there is a 'd!'; it looks like an operator. It is used as 'd! cos theta d phi.' However, I could not find its definition (I asked friends and searched the web).

p.122 particle tracing Equation 4.32

In this equation for alpha, there is a (to me) mysterious term 1/(q_{i+1}). f is the projected solid angle term, p_{i+1} is an approximation of the BSDF, so no problem. But what is this 1/(q_{i+1})?
Figure 1: Sampling weight $\frac{1}{q_{i+1}}$. (1) Sampling terminates with probability $p$; (2) the ray bounces with probability $(1-p)$. Because a sampled value is better than nothing, case (2) is weighted by $\frac{1}{1-p}$, which is $\frac{1}{q_{i+1}}$.
Figure 1 shows the alpha update. The sampling is done by the Russian roulette method: the termination of a ray is decided by a probability. Intuitively, when you continue to sample, it is natural to trust the sampled result more, because a sample has more information than no sample. Therefore, the sampled result gets a weight of 1/(continuation probability).

For example, if the ray termination probability is 0.5, the weight is 1/(1-0.5) = 2.0. If the termination probability is 1/3, the weight is 1/(1-1/3) = 3/2. The following two cases are not Russian roulette anymore, but: if the ray never terminates, the weight is 1/(1-0) = 1, meaning the sampled value is used as is. If the ray always terminates, there is no bounce and no weight is defined, since the weight only has meaning when the ray bounces.

So far, I have said ``intuitively'' or ``naturally.'' I confess my intuition is not so good, so the following is an overview of a proof of why this is OK.

The issue is that if this weight introduced a bias, we would be in trouble. I wrote about what unbiased means in another blog entry: an unbiased algorithm has zero expected error in the sampled computation. Consider Figure 1's expectation E (where the true answer is Q). The sample value s_1 is zero because the ray is terminated. For E to equal Q (unbiased means the error E - Q = 0), the sample value s_2 needs a weight alpha. Therefore, we compensate s_2 with alpha = 1/(1-p), and this leads to an unbiased estimate.
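Spelled out (my sketch of the argument, with termination probability p, sample values s_1 and s_2 as above, and true answer Q):

```latex
% Expectation over the Russian roulette decision:
% terminate with probability p (sample value s_1 = 0),
% bounce with probability 1-p (weighted sample \alpha s_2, E[s_2] = Q).
E = p\,s_1 + (1-p)\,\alpha\,E[s_2] = (1-p)\,\alpha\,Q .
% Unbiasedness requires E - Q = 0:
(1-p)\,\alpha\,Q = Q
  \quad\Longrightarrow\quad
\alpha = \frac{1}{1-p} = \frac{1}{q_{i+1}} .
```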

Thanks to Leonhard G. who told me the overview of the proof.