Tuesday 19 November 2019

How to calculate loss and accuracy?

Loss is used to calculate the gradients in back-propagation, so we should keep the raw value produced by the model and use it to compute the loss.

On the other hand, accuracy is used by humans to judge how well the model performs on a given dataset. In this case, we need to round the raw value to the expected range, e.g. to either 0 or 1 for binary classification.
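A minimal sketch of the difference (binary classification with sigmoid outputs; the values below are made up):

import torch

# raw model outputs in [0, 1] for 4 samples (hypothetical values)
outputs = torch.tensor([0.91, 0.23, 0.67, 0.05])
targets = torch.tensor([1.0, 0.0, 0.0, 0.0])

# loss: use the raw values, since this is what back-propagation differentiates
loss = torch.nn.functional.binary_cross_entropy(outputs, targets)

# accuracy: round to the expected range {0, 1} before comparing
preds = torch.round(outputs)
accuracy = (preds == targets).float().mean()

print(loss.item(), accuracy.item())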

Wednesday 13 November 2019

how to run tensorboard?



This can then be visualized with TensorBoard, which should be installable
and runnable with:
pip install tensorboard
tensorboard --logdir=runs



To log training and testing loss (and accuracy), use SummaryWriter:
from torch.utils.tensorboard import SummaryWriter
import numpy as np

writer = SummaryWriter()

for n_iter in range(100):
    writer.add_scalar('Loss/train', np.random.random(), n_iter)
    writer.add_scalar('Loss/test', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/train', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/test', np.random.random(), n_iter)
writer.close() 
 
 
To merge multiple scalars into one graph, use add_scalars:
from torch.utils.tensorboard import SummaryWriter
import numpy as np

writer = SummaryWriter()
for n_iter in range(10000):
    writer.add_scalars('data/scalar_group', {'loss': n_iter*np.arctan(n_iter)}, n_iter)
    if n_iter%1000==0:
        writer.add_scalars('data/scalar_group', {'top1': n_iter*np.sin(n_iter)}, n_iter)
        writer.add_scalars('data/scalar_group', {'top5': n_iter*np.cos(n_iter)}, n_iter) 
writer.close() 

python: print actual numbers instead of scientific notation (e)

import numpy as np
np.set_printoptions(suppress=True)
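For example (the array values are made up, and the exact spacing of the printed output may differ):

import numpy as np

x = np.array([0.00001, 1.5, 300000.0])
print(x)   # by default numpy may fall back to scientific notation, e.g. [1.0e-05 1.5e+00 3.0e+05]

np.set_printoptions(suppress=True)
print(x)   # now printed as plain decimals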

Monday 21 October 2019

A typical training process of neural networks

source: https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py

A typical training procedure for a neural network is as follows:
  • Define the neural network that has some learnable parameters (or weights)
  • Iterate over a dataset of inputs
  • Process input through the network
  • Compute the loss (how far is the output from being correct)
  • Propagate gradients back into the network’s parameters
  • Update the weights of the network, typically using a simple update rule: weight = weight - learning_rate * gradient
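A minimal sketch of these steps in PyTorch (the network, the random data, and the hyper-parameters below are placeholders for illustration):

import torch
import torch.nn as nn

# 1. define a network with learnable parameters
net = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

# 2. iterate over a (fake) dataset of inputs
for _ in range(5):
    inputs, targets = torch.randn(8, 10), torch.randn(8, 1)
    optimizer.zero_grad()
    outputs = net(inputs)               # 3. process input through the network
    loss = criterion(outputs, targets)  # 4. compute the loss
    loss.backward()                     # 5. propagate gradients back into the parameters
    optimizer.step()                    # 6. weight = weight - learning_rate * gradient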

torch - convert to tensor or numpy

# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
import torch

x = torch.randn(4)                         # x is assumed to exist already; any tensor works

if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
    x = x.to(device)                       # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change dtype together!

torch - convert numpy to a tensor

Converting NumPy Array to Torch Tensor

See how changing the np array changed the Torch Tensor automatically
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
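The outputs above can be reproduced with a snippet along these lines (torch.from_numpy shares memory with the source array):

import numpy as np
import torch

a = np.ones(5)
b = torch.from_numpy(a)   # b shares the same memory as a
np.add(a, 1, out=a)       # modify the numpy array in place
print(a)                  # [2. 2. 2. 2. 2.]
print(b)                  # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)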

torch - convert a tensor to numpy

Converting a Torch Tensor to a NumPy Array
tensor([1., 1., 1., 1., 1.])
[1. 1. 1. 1. 1.]
See how the numpy array changed in value.
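Likewise, the outputs above come from something like this (.numpy() also shares memory with the tensor):

import torch

a = torch.ones(5)
b = a.numpy()   # b shares the same memory as a
print(a)        # tensor([1., 1., 1., 1., 1.])
print(b)        # [1. 1. 1. 1. 1.]
a.add_(1)       # an in-place change on the tensor...
print(b)        # [2. 2. 2. 2. 2.]  ...is visible in the numpy array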

torch resize using view

Resizing: If you want to resize/reshape tensor, you can use torch.view:

torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
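The sizes printed above come from a snippet along these lines:

import torch

x = torch.randn(4, 4)
y = x.view(16)      # flatten to 16 elements
z = x.view(-1, 8)   # -1 lets torch infer that dimension (here 2)
print(x.size(), y.size(), z.size())
# torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])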

torch tensor in-place operation

tensor([[-0.0111,  1.2464,  1.5858],
        [ 0.8375, -0.4832,  0.8757],
        [ 0.4529,  1.5446,  1.0424],
        [-0.4078, -1.1831, -0.8830],
        [-0.5558, -1.2425,  0.1504]])

Note

Any operation that mutates a tensor in-place is post-fixed with an ``_``. For example: ``x.copy_(y)``, ``x.t_()``, will change ``x``.
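For example:

import torch

x = torch.zeros(2, 3)
y = torch.ones(2, 3)
x.copy_(y)   # in-place copy: x now holds the values of y
x.t_()       # in-place transpose: x is now 3x2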

Model checkpointed using torch.save() unable to be loaded using torch.load() #12042

deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)

RuntimeError: storage has wrong size: expected -5099839699493302364 got 589824

This usually happens when multiple processes try to write to a single file.
However, this should be prevented by guarding the save with the condition if rank == 0:.


https://discuss.pytorch.org/t/unable-to-load-waveglow-checkpoint-after-training-with-multiple-gpus/47959/2

https://github.com/pytorch/examples/blob/master/imagenet/main.py#L252
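A minimal sketch of that guard (the function and variable names are illustrative, loosely following the linked imagenet example):

import torch

def save_checkpoint(state, rank, filename='checkpoint.pth.tar'):
    # only the master process (rank 0) writes the file, so multiple
    # workers never write the same checkpoint at once and corrupt it
    if rank == 0:
        torch.save(state, filename)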

Thursday 10 October 2019

conda import export yml env current environment

1. To export conda environment:-
conda env export | grep -v "^prefix: " > environment.yml

2. To import conda environment:-
conda env create -f environment.yml

3. To export pip requirements:-
pip list --format=freeze > requirements.txt

4. To import pip requirements:-
pip install -r requirements.txt

python numpy vectorization

[source: https://www.kdnuggets.com/2019/06/speeding-up-python-code-numpy.html]

Python is huge.
Over the past several years the popularity of Python has grown rapidly. A big part of that has been the rise of Data Science, Machine Learning, and AI, all of which have high-level Python libraries to work with!
When using Python for those types of work, it’s often necessary to work with very large datasets. Those large datasets get read directly into memory, and are stored and processed as Python arrays, lists, or dictionaries.
Working with such huge arrays can be time consuming; really that’s just the nature of the problem. You have thousands, millions, or even billions of data points. Every microsecond added to the processing of a single one of those points can drastically slow you down as a result of the large scale of the data you’re working with.

The slow way


The slow way of processing large datasets is by using raw Python. We can demonstrate this with a very simple example.
The code below multiplies a number by the value 1.0000001, 5 million times!
import time

start_time = time.time()

num_multiplies = 5000000
data = range(num_multiplies)
number = 1

for i in data:
    number *= 1.0000001

end_time = time.time()

print(number)
print("Run time = {}".format(end_time - start_time))

I have a pretty decent CPU at home, Intel i7–8700k plus 32GB of 3000MHz RAM. Yet still, multiplying those 5 million data points took 0.21367 seconds. If instead I change the value of num_multiplies to 1 billion times, the process took 43.24129 seconds!
Let’s try another one with an array.
We’ll build a Numpy array of size 1000x1000 with a value of 1 at each position, and again try to multiply each element by the float 1.0000001. The code is shown below.
On the same machine, multiplying those array values by 1.0000001 in a regular floating point loop took 1.28507 seconds.
import time
import numpy as np

start_time = time.time()

data = np.ones(shape=(1000, 1000), dtype=np.float64)  # np.float is removed in recent numpy; use np.float64

for i in range(1000):
    for j in range(1000):
        data[i][j] *= 1.0000001
        data[i][j] *= 1.0000001
        data[i][j] *= 1.0000001
        data[i][j] *= 1.0000001
        data[i][j] *= 1.0000001

end_time = time.time()

print("Run time = {}".format(end_time - start_time))


What is Vectorization?


Numpy is designed to be efficient with matrix operations. More specifically, most processing in Numpy is vectorized.
Vectorization involves expressing mathematical operations, such as the multiplication we’re using here, as occurring on entire arrays rather than their individual elements (as in our for-loop).
With vectorization, the underlying code is parallelized such that the operation can be run on multiple array elements at once, rather than looping through them one at a time. As long as the operation you are applying does not rely on any other array elements, i.e. a “state”, then vectorization will give you some good speed-ups.
Looping over Python arrays, lists, or dictionaries can be slow. In contrast, vectorized operations in Numpy are mapped to highly optimized C code, making them much faster than their standard Python counterparts.

The fast way


Here’s the fast way to do things — by using Numpy the way it was designed to be used.
There’s a couple of points we can follow when looking to speed things up:
  • If there’s a for-loop over an array, there’s a good chance we can replace it with some built-in Numpy function
  • If we see any type of math, there’s a good chance we can replace it with some built-in Numpy function
Both of these points are really focused on replacing non-vectorized Python code with optimised, vectorized, low-level C code.
Check out the fast version of our first example from before, this time with 1 billion multiplications.
We’ve done something very simple: we saw that we had a for-loop in which we were repeating the same mathematical operation many times. That should trigger immediately that we should go look for a Numpy function that can replace it.
We found one: the np.power function, which simply applies a certain power to an input value. This dramatically sped up the code, which now runs in 7.6293e-6 seconds.
import time
import numpy as np

start_time = time.time()

num_multiplies = 1000000000
data = range(num_multiplies)
number = 1

number *= np.power(1.0000001, num_multiplies)

end_time = time.time()

print(number)
print("Run time = {}".format(end_time - start_time))


It’s a very similar idea with multiplying values into Numpy arrays. We see that we’re using a double for-loop and should immediately recognise that there should be a faster way.
Conveniently, Numpy will automatically vectorise our code if we multiply our 1.0000001 scalar directly. So, we can write our multiplication in the same way as if we were multiplying by a Python list.
The code below demonstrates this and runs in 0.003618 seconds — that’s a 355X speedup!
import time
import numpy as np

start_time = time.time()

data = np.ones(shape=(1000, 1000), dtype=np.float64)  # np.float is removed in recent numpy; use np.float64

for i in range(5):
    data *= 1.0000001

end_time = time.time()

print("Run time = {}".format(end_time - start_time))

pytorch RuntimeError: CUDA error: invalid device ordinal

export CUDA_VISIBLE_DEVICES=0,1

Monday 7 October 2019

Version between Keras and Tensorflow

Keras 2.2.4 -> Tensorflow 1.13.1

Keras 2.2.5 -> Tensorflow 1.14.0

Thursday 3 October 2019

how to convert list to array by removing comma

before:
new_bbox [array([559.        , 125.        , 607.        , 276.        ,
         0.64699197])] [[559.03448486 125.92767334 607.16009521 276.85614014   0.64699197]]

after:
new_bbox [[559. 125. 607. 276.   1.]] [[559. 125. 607. 276.   1.]]

note that the commas have been removed from the list
use np.array() to convert the list, which drops the commas when printing

    new_bbox.append([xw1, yw1, xw2, yw2, s])
return np.array(new_bbox)
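A self-contained sketch of the difference (the box values are made up):

import numpy as np

new_bbox = []
new_bbox.append([559.0, 125.0, 607.0, 276.0, 1.0])
print(new_bbox)            # [[559.0, 125.0, 607.0, 276.0, 1.0]]  (with commas)
print(np.array(new_bbox))  # [[559. 125. 607. 276.   1.]]  (no commas)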

Tuesday 1 October 2019

how to debug in python

1a. insert a breakpoint() before any line you want to check

1b. then in the pdb mode:-
use dir(obj) to list an object's attributes and values
use c to continue
use type(obj) to check the data type
use q to quit

or

2a. use the following lines to track the bug

try:
    something()
except Exception:
    breakpoint()

the program will enter pdb mode and you can use the same thing as 1.
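For example, with a made-up buggy function (breakpoint() requires Python 3.7+):

def mean(values):
    breakpoint()          # 1a. execution pauses here in pdb
    return sum(values) / len(values)

try:
    print(mean([]))       # raises ZeroDivisionError
except Exception:
    breakpoint()          # 2a. inspect the failure in pdb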

Thursday 26 September 2019

python: how to output to console and file at the same time?

method 1:

logger = setup_logger("NINJA", output_dir, 0)
logger.info("{}".format('Hello World'))
logger.info(args)

---------------------

import logging
import os
import sys


def setup_logger(name, save_dir, distributed_rank): 
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    # don't log results for the non-master process 
    if distributed_rank > 0: 
        return logger
    ch = logging.StreamHandler(stream=sys.stdout)
    ch.setLevel(logging.DEBUG)
    formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")
    ch.setFormatter(formatter)
    logger.propagate = False
    logger.addHandler(ch)

    if save_dir:         
        fh = logging.FileHandler(os.path.join(save_dir, "log.txt"), mode='w')
        fh.setLevel(logging.DEBUG)
        fh.setFormatter(formatter)
        logger.addHandler(fh)

    return logger


method 2:

# log
if cfg.log_to_file:
    ReDirectSTD(cfg.stdout_file, 'stdout', False)
    ReDirectSTD(cfg.stderr_file, 'stderr', False)
# dump the configuration to log.
import pprint
print('-' * 60)
print('cfg.__dict__')
pprint.pprint(cfg.__dict__)
print('-' * 60)

---------------------

# module-level imports so that write() and flush() below can see os and sys
import os
import sys

class ReDirectSTD(object):
    """
    overwrites the sys.stdout or sys.stderr
    Args:
      fpath: file path
      console: one of ['stdout', 'stderr']
      immediately_visiable: if True, append each message to the file immediately; otherwise keep one file handle open (default False)
    Usage example:
      ReDirectSTD('stdout.txt', 'stdout', False)
      ReDirectSTD('stderr.txt', 'stderr', False)
    """
    def __init__(self, fpath=None, console='stdout', immediately_visiable=False):
        assert console in ['stdout', 'stderr']
        self.console = sys.stdout if console == "stdout" else sys.stderr
        self.file = fpath
        self.f = None
        self.immediately_visiable = immediately_visiable
        if fpath is not None:
            # Remove existing log file
            if os.path.exists(fpath):
                os.remove(fpath)
        if console == 'stdout':
            sys.stdout = self
        else:
            sys.stderr = self

    def __del__(self):
        self.close()

    def __enter__(self):
        pass

    def __exit__(self, *args):
        self.close()

    def write(self, msg):
        self.console.write(msg)
        if self.file is not None:
            if not os.path.exists(os.path.dirname(os.path.abspath(self.file))):
                os.mkdir(os.path.dirname(os.path.abspath(self.file)))
            if self.immediately_visiable:
                with open(self.file, 'a') as f:
                    f.write(msg)
            else:
                if self.f is None:
                    self.f = open(self.file, 'w')
                self.f.write(msg)

    def flush(self):
        self.console.flush()
        if self.f is not None:
            self.f.flush()
            os.fsync(self.f.fileno())
   
    def close(self):
        self.console.close()
        if self.f is not None:
            self.f.close()

Thursday 19 September 2019

string, numpy, convert, "./output/[array(['hello-world'], dtype='<U22')]/12454.png"

The folder name above ended up as the repr of a numpy array, because a list element was used instead of the string inside it.

the original was
att_list[idx]

the correct one should be
att_list[idx][0][0]
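A small reconstruction of the situation (the content of att_list is made up):

import numpy as np

att_list = [[np.array(['hello-world'], dtype='<U22')]]
idx = 0
print(att_list[idx])        # [array(['hello-world'], dtype='<U22')]
print(att_list[idx][0][0])  # hello-world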

Sunday 15 September 2019

how to refresh nvidia-docker2

sudo systemctl daemon-reload
sudo systemctl restart docker

docker: Error response from daemon: Unknown runtime specified nvidia.

$ sudo gedit /etc/docker/daemon.json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

how to test nvidia-docker

docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
 
docker run --rm --runtime=nvidia nvidia/cuda:10.0-devel nvidia-smi 
 
docker run --rm --runtime=nvidia vistart/cuda:10.0-cudnn7-devel-ubuntu16.04 nvidia-smi 

Friday 13 September 2019

How to install nvidia driver into kernel?

ctrl + alt + F2
 
login: username
password: password
 
sudo apt purge nvidia* 
 
sudo service lightdm stop 
 
sudo sh ./NVIDIA-Linux-x86_64-410.104.run
 
sudo service lightdm start 

Tuesday 10 September 2019

../../lib/libopencv_imgcodecs.so.3.4.4: undefined reference to `TIFFReadRGBAStrip@LIBTIFF_4.0'

sudo apt remove libopencv*

conda uninstall libtiff

sudo su
make all -j16

how to get the current folder / directory size

to get the hard disk (filesystem) usage
df -h .

to get the current directory size
du -hc .

to get the home folder size
du -sh /home

to get the current sub directory sorted size
du -h --max-depth=1 | sort -hr

how to export list into a file and import it again? use pickle

You can use the pickle module for that. This module has two main methods:
  1. Pickling (dump): convert Python objects into a byte-stream representation.
  2. Unpickling (load): retrieve the original objects from the stored byte stream.

import pickle
l = [1,2,3,4]
with open("test.txt", "wb") as fp: #Pickling
    pickle.dump(l, fp)
 

with open("test.txt", "rb") as fp: # Unpickling
    b = pickle.load(fp)
b
 
[1, 2, 3, 4]

Monday 9 September 2019

How to tar with multi thread / core


You can use pigz instead of gzip, which does gzip compression on multiple cores. Instead of using the -z option, you would pipe it through pigz:

tar cf - paths-to-archive | pigz > archive.tar.gz



By default, pigz uses the number of available cores, or eight if it could not query that. You can ask for more with -p n, e.g. -p 32. pigz has the same options as gzip, so you can request better compression with -9. E.g.

tar cf - paths-to-archive | pigz -9 -p 32 > archive.tar.gz

Sunday 1 September 2019

how to overwrite the sys.stdout or sys.stderr

(This is the same ReDirectSTD class reproduced under method 2 in the Thursday 26 September 2019 entry above; see that entry for the code.)

Thursday 29 August 2019

how to add user or change password in ubuntu

to add user:
sudo adduser <name>
(then enter password for <name>)

to grant the user sudo rights
sudo usermod -aG sudo <name>

to list all sudoers / root users
getent group sudo

to change <name> password
sudo passwd <name>

Training Neural Nets on Larger Batches: Practical Tips for 1-GPU, Multi-GPU & Distributed setups

[original source: https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255]


Wednesday 28 August 2019

how to set PATH, LD_LIBRARY_PATH, PYTHONPATH

#export PYTHONPATH=/home/ninja/workspace/caffe/python
export PYTHONPATH=/home/ninja/workspace/caffe3/python

#export PATH=/home/ninja/anaconda3/condabin:/home/ninja/bin:/home/ninja/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/ninja/anaconda/bin:/usr/local/include/opencv2:/usr/local/cuda-10.0/bin
export PATH=/home/ninja/bin:/home/ninja/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/include/opencv2:/usr/local/cuda-10.0/bin
#PATH=$PATH:$HOME/anaconda/bin:/usr/local/cuda/bin:/usr/local/include/opencv2
#PATH=$PATH:$HOME/anaconda/bin:/usr/local/cuda/bin # 10.0
#PATH=$PATH:$HOME/anaconda/bin:/usr/local/cuda-9.0/bin
#PATH=$PATH:$HOME/anaconda/bin:/usr/local/cuda-8.0/bin

export LD_LIBRARY_PATH=/usr/local/lib:/usr/lib/x86_64-linux-gnu:/lib/x86_64-linux-gnu/:/usr/local/cuda-10.0/lib:/home/ninja/anaconda3/envs/demo/lib
#export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/ninja/.conda/envs/keras3/lib:/usr/local/lib

hostname/ip, username and password (passphrase)

1. set the ssh password of remote ip address using the following commands:-
 $  ssh-keygen (no passphrase)
 $  ssh-copy-id user@192.168.1.123
 $  ssh-copy-id user@192.168.1.234
 $  ssh-add

2. set the default hostname as following:-
$ sudo gedit /etc/hosts
(edit the file with <ip address> <corresponding hostname>)
192.168.1.136    anyhostname

3. set the default username of each hostname/ip
$ gedit ~/.ssh/config
(add a Host block with the hostname and the username)
Host anyhostname
    User anyusername

This way, you can even skip the "anyusername@" and do just "ssh anyhostname"

4. to debug
$ ssh -vv user@host

how to list out current tcp process and its id

lsof -i tcp

Tuesday 27 August 2019

how to safely remove the pendrive from ubuntu system

udisksctl unmount --block-device /dev/sdd1

we can use "df -h" to find the device name of the pendrive (/dev/sdd1)

RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED, allow_unreachable=True) # allow_unreachable flag

torch.manual_seed(16)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

how to replace the text in vim

# general 
:%s,foo/bar/baz,foo/bar/boz,g
 
# special characters
:%s,data\[0\],data,g 
  
:%s/\/Users\/tom\/documents\/pdfs\//<new text>/g

Monday 26 August 2019

how to change the current folder permission, eg. create folder

sudo chmod -R ugo+rw ./*

 

===

sudo chmod 777 ./folder: everyone can access the folder

sudo chmod 700 ./folder: only the owner can access the folder; guests and other users cannot access it

how to install pinyin under ubuntu

sudo apt-get install ibus-pinyin ibus-libpinyin pinyin-database

Wednesday 21 August 2019

undefined reference to symbol ‘_ZN2cv6String10deallocateEv

The following error appears when compiling Caffe with Qt:
undefined reference to symbol ‘_ZN2cv6String10deallocateEv‘
error adding symbols: DSO missing from command line

The cause is a conflict between two installed OpenCV versions: the TX2 already ships with OpenCV 3.4, but installing caffe also ran:
apt-get install libopencv-dev

This created the OpenCV version conflict. The fix is to run:
sudo apt-get autoremove libopencv-dev

Tuesday 20 August 2019

To compress avi into mp4, x264 (web-based format) using ffmpeg

ffmpeg -i input.avi -c:v libx264 -crf 19 -preset slow -c:a aac -b:a 192k -ac 2 out.mp4

Sunday 18 August 2019

The imported target “Qt5::Gui” references the file “/usr/lib/x86_64-linux-gnu/libEGL.so” but this file does not exist

sudo rm /usr/lib/x86_64-linux-gnu/libEGL.so; sudo ln /usr/lib/x86_64-linux-gnu/libEGL.so.1 /usr/lib/x86_64-linux-gnu/libEGL.so

grfmt_exr.hpp:52:31: fatal error: ImfChromaticities.h: No such file or directory

sudo apt-get install libopenexr-dev

No rule to make target /usr/lib/x86_64-linux-gnu/libGL.so

sudo rm /usr/lib/x86_64-linux-gnu/libGL.so

sudo ln -s  /usr/lib/libGL.so.1  /usr/lib/x86_64-linux-gnu/libGL.so

Tuesday 13 August 2019

to run tensorflow without session

from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

Wednesday 7 August 2019

/usr/bin/x86_64-linux-gnu-ld: cannot find -lboost_python3

$ cd /usr/lib/x86_64-linux-gnu
$ sudo ln -s libboost_python-py35.so libboost_python3.so

Thursday 1 August 2019

tmux - ubuntu multiple terminal sessions

(image source: https://www.hamvocke.com/blog/a-quick-and-easy-guide-to-tmux/)

openpose: how to install

1. how to build openpose

cmake -DOpenCV_INCLUDE_DIRS=/usr/local/include \
  -DOpenCV_LIBS_DIR=/usr/local/lib \
  -DCaffe_INCLUDE_DIRS=/home/"${USER}"/workspace/caffe3/distribute/include \
  -DCaffe_LIBS=/home/"${USER}"/workspace/caffe3/build/lib/libcaffe.so -DBUILD_CAFFE=OFF ..


cmake -Dpython_version=3 -DBUILD_PYTHON=1 -DCMAKE_INSTALL_PREFIX=/home/ninja/workspace/openpose/build/install ..

make all -j32

sudo make install

export PYTHONPATH=/home/ninja/workspace/openpose/build/install/python

2. how to run testing

./build/examples/openpose/openpose.bin --video examples/media/video.avi

CUDA_VISIBLE_DEVICES=0 ./build/examples/openpose/openpose.bin --net_resolution "160x80" --video examples/media/video.avi


3. how to use python library

ninja@pp-pc:~/workspace/openpose$ python build/examples/tutorial_api_python/01_body_from_image.py

(an output image will be shown)


4. how to use docker (failed)

docker run -it --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 cwaffles/openpose-python

apt-get install xauth


docker run -it --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 openpose:docker

modules/python_bindings_generator/pyopencv_generated_include.h' failed

1. cd to workspace/opencv

2. then type,
python ./modules/python/src2/gen2.py ./build/modules/python_bindings_generator ./build/modules/python_bindings_generator/headers.txt

3. after that, continue with cmake and make as usual

/usr/lib/x86_64-linux-gnu/libunwind.so.8: undefined reference to `lzma_index_end@XZ_5.0'

maybe the python version is wrong, try

cmake -Dpython_version=3 ..

how to mount the host folder into docker folder

 docker run -it --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 -v /home/ninja/tmp/:/openpose/tmp openpose:docker

-v host_folder:docker_folder

How to check the current cudnn?

dpkg -l | grep cudnn

ubuntu to find library dependencies

ldd /usr/local/lib/libopencv_core.so

Thursday 25 July 2019

cmake Could NOT find Atlas

$ sudo apt install liblapacke-dev
$ sudo apt install libatlas-base-dev

$ sudo apt install libatlas3-base



conda user permission

sudo chown -R user /home/user/anaconda3

Thursday 18 July 2019

THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=405 error=11 : invalid argument

Check the pytorch compatible version with cuda10.0.

You can install torch 1.0.0 as:
$ pip install -U https://download.pytorch.org/whl/cu100/torch-1.0.0-cp36-cp36m-linux_x86_64.whl 
$ pip install https://download.pytorch.org/whl/cu100/torchvision-0.2.2-cp36-cp36m-linux_x86_64.whl

Or you can install torch 1.1.0 as:
$ pip install -U https://download.pytorch.org/whl/cu100/torch-1.1.0-cp36-cp36m-linux_x86_64.whl
$ pip install --force https://download.pytorch.org/whl/cu100/torchvision-0.3.0-cp36-cp36m-linux_x86_64.whl 

Sunday 7 July 2019

how to build caffe in ubuntu16, cuda10.0, cudnn7.6, python3.5

This is similar to the previous post, but here the focus changes to python3.5.

Make sure you have installed caffe for python2.7 before continuing with this post.

Here are the few steps I took to successfully compile caffe for python 3.5.

1. Copy the default caffe folder into a new folder named caffe3 (use -r so that subdirectories are copied too). Then, delete all files in the build folder,
$ mkdir ./caffe3
$ cp -r ./caffe/* ./caffe3/
$ sudo rm -r ./caffe3/build/*

2. Create a new conda environment; I named it caffe3 as well. Make sure you use python3.5, as follows,
$ conda create -n caffe3 python=3.5

3. Activate the caffe3 env and then install the numpy as following,
$ conda activate caffe3
$ pip install numpy
$ pip install scikit-image

4. Install additional dependencies as following:
$ sudo apt-get install python3-skimage
$ sudo apt-get install python3-protobuf

5. Once completed, cd to the caffe3 root folder and copy the makefile as following:
$ cd ./caffe3
$ cp Makefile.config.example Makefile.config

6. You will see a new file named Makefile.config under the ./caffe3 folder. Copy and paste the Makefile content given in the appendix below. Note: please change the paths accordingly; my anaconda env was installed at /home/ninja/.conda/envs/*, yours might be different, such as /home/ninja/anaconda3/envs/*.

7. Once everything is ready, you are good to compile caffe, make sure you are in the ./caffe3 folder:
$ make clean
$ make all -j32
$ make test
$ make runtest
$ make pycaffe 
$ make distribute
(you should be able to see the distribute folder under caffe3)

8. If the compilation finishes without errors, congratulations! Next, set the paths in ~/.bashrc and you should be able to import caffe.
$ sudo gedit ~/.bashrc

then add the following lines into ~/.bashrc, save it and source it

export PYTHONPATH=/home/ninja/caffe3/distribute/python:$PYTHONPATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib:/usr/lib/x86_64-linux-gnu/:/home/ninja/caffe3/distribute/lib

$ source ~/.bashrc
$ conda activate caffe3
$ python (version 3.5.*)
>>> import caffe
>>> (pray, no error)

Example1 of Error:
./include/caffe/util/signal_handler.h:4:34: fatal error: caffe/proto/caffe.pb.h: No such file or directory

If you are using conda, you need to include the env lib into the LD_LIBRARY_PATH:
$ export LD_LIBRARY_PATH=/home/ninja/.conda/envs/caffe3/lib


Example2 of Error:
xxx... .nccl.h: No such file or directory

If you are using conda, you need to copy all include and lib files into env:
$ cp nccl_2.4.7-1+cuda10.0_x86_64/include/nccl* /home/ninja/.conda/envs/c3d11/include/
$ cp nccl_2.4.7-1+cuda10.0_x86_64/lib/libnccl* /home/ninja/.conda/envs/c3d11/lib/


Example3 of Error:
xxx... .cudnn.h: No such file or directory

If you are using conda, you need to copy all include and lib files into env:
$ cp cuda/include/cudnn.h /home/ninja/.conda/envs/c3d11/include/
$ cp cuda/lib64/libcudnn* /home/ninja/.conda/envs/c3d11/lib/

I am using cudnn-10.0-linux-x64-v7.6.0.64. 


Example4 of Error:
/usr/include/boost/python/detail/wrap_python.hpp:50:23: fatal error: pyconfig.h: No such file or directory
$ locate pyconfig.h
>> set the path in Makefile.config > PYTHON_INCLUDE


Example5 of Error:
Solve the problem: "cannot find -lboost_python3" when using Python3 Ubuntu16.04
$ cd /usr/lib/x86_64-linux-gnu
$ sudo ln -s libboost_python-py35.so libboost_python3.so


Example6 of Error:
Solve the problem: "/usr/bin/ld: cannot find -lopencv_imgcodecs" when using Python3 Ubuntu16.04, modify Makefile.config:-

PYTHON_LIBRARIES := boost_python3 python3.5m opencv_imgcodecs
PYTHON_LIB := $(ANACONDA_HOME)/lib /home/ninja/anaconda3/envs/caffe3/lib /home/ninja/workspace/opencv-3.4.4/distribute/lib



Example Makefile.config for CUDA 8.0, CUDNN 6.0.

This is identical to the full CUDA 10.0 / cuDNN 7.6 Makefile.config listed below, except for these two lines:

CUDA_DIR := /usr/local/cuda-8.0

CUDA_ARCH := -gencode arch=compute_60,code=sm_61


Example Makefile.config for CUDA 10.0, CUDNN 7.6, as used in this post.

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

LINKFLAGS := -Wl,-rpath,$(HOME)/anaconda3/lib

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0
# This code is taken from https://github.com/sh1r0/caffe-android-lib
# USE_HDF5 := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#    You should not set this flag if you will be reading LMDBs with any
#    possibility of simultaneous read and write
ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda-10.0
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_75,code=sm_75

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
# PYTHON_INCLUDE := /usr/include/python2.7 \
#         /usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
ANACONDA_HOME := /home/ninja/.conda/envs/caffe3
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
        $(ANACONDA_HOME)/include/python3.5 \
        $(ANACONDA_HOME)/lib/python3.5/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
PYTHON_LIBRARIES := boost_python3 python3.5m
PYTHON_INCLUDE := /home/ninja/.conda/envs/caffe3/include/python3.5m \
                /home/ninja/.conda/envs/caffe3/lib/python3.5/site-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
# PYTHON_LIB := /usr/lib
PYTHON_LIB := $(ANACONDA_HOME)/lib /home/ninja/.conda/envs/caffe3/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
#INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
#LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial /home/ninja/opencv/include/ /home/ninja/.conda/envs/caffe3/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial /home/ninja/opencv/build/lib

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

Thursday 27 June 2019

how to install bazel in ubuntu?

standard version:-
$ echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
$ curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
$ sudo apt-get update && sudo apt-get install bazel
$ sudo apt-get upgrade bazel


customized version:-
$ BAZEL_VERSION="0.26.0"     # insert your desired version here, for example 0.26.0
$ wget https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-installer-linux-x86_64.sh   # if not on x86_64, change that too
$ chmod +x bazel-${BAZEL_VERSION}-installer-linux-x86_64.sh   # or the file you just downloaded
$ ./bazel-${BAZEL_VERSION}-installer-linux-x86_64.sh --user
$ bazel version   # this should now print the same as BAZEL_VERSION

Wednesday 19 June 2019

Ubuntu How to Find *.jpg in Current Folder and Subfolders

In general, here are a few useful commands:

1. find all images in the current folder and its subfolders:-
find . -name "*.jpg" > temp.txt

2. to list all images with full path:-
find $(pwd) -name "*.ppm" > temp.txt

3. find all images in the current folder and its subfolders, then count them:-
find . -name "*.jpg" | wc -l

=== Appendix ===

How to find files and directories in Linux

(ps: copied from https://www.computerhope.com/issues/ch001723.htm)
In Linux operating systems, the find command may be used to search for files and directories on your computer. To proceed, go through each section in order.
Note: To use find, begin by opening a terminal session to access the command line.

Basic functionality of find

Running find without any options will produce a list of every file and directory in and beneath the working directory. For instance, if your working directory is /home/hope/Documents, running find will output the following:
  • Every file in /home/hope/Documents.
  • Every subdirectory in /home/hope/Documents.
  • Every file in each of those subdirectories.
Let's see it in action. First, let's check our working directory by using the pwd command:
pwd
/home/hope/Documents
Now let's run find without any options:
find
.
./images
./images/hp
./images/hp/snape.jpg
./images/hp/harry.jpg
./images/memes
./images/memes/winteriscoming.jpg
./images/memes/goodguygary.JPG
./images/memes/picard.jpg
./gimp-2.8.16.tar.bz2
./hp-fanfic
./hp-fanfic/malfoys-revenge.doc
./hp-fanfic/weekend-at-hagreds.doc
./hp-fanfic/dumbledores-lament.doc
./archlinux-2016.02.01-dual.iso
In this example, we see a total of ten files and four subdirectories in and beneath our Documents folder.
Notice that the output starts with a single dot, which represents the working directory. Running find with no options is the same as specifying that the search should begin in the working directory, like this:
find .
The above example is the "proper" way to use find. If you try to use it on another UNIX-like operating system, such as FreeBSD, you will find that specifying a directory is required, so it's good practice to use this form of the command.

Specifying where to search

To only list files and subdirectories that are contained in the directory /home/hope/Documents/images, specify it as the first argument of the command:
find /home/hope/Documents/images
/home/hope/Documents/images
/home/hope/Documents/images/hp
/home/hope/Documents/images/hp/snape.jpg
/home/hope/Documents/images/hp/harry.jpg
/home/hope/Documents/images/memes
/home/hope/Documents/images/memes/winteriscoming.jpg
/home/hope/Documents/images/memes/goodguygary.JPG
/home/hope/Documents/images/memes/picard.jpg
Notice that the full path is also shown in the results.
If our working directory is /home/hope/Documents, we can use the following command, which finds the same files:
find ./images
But this time, the output reflects the starting location of the search and looks like this:
./images
./images/hp
./images/hp/snape.jpg
./images/hp/harry.jpg
./images/memes
./images/memes/winteriscoming.jpg
./images/memes/goodguygary.JPG
./images/memes/picard.jpg
By default, the search will look in every subdirectory of your starting location. If you want to restrict how many levels of subdirectory to search, you can use the -maxdepth option with a number.
For instance, specifying -maxdepth 1 will search only in the directory where the search begins. If any subdirectories are found, they will be listed, but not searched.
find . -maxdepth 1
.
./images
./bigfiles.txt
./gimp-2.8.16.tar.bz2
./hp-fanfic
./archlinux-2016.02.01-dual.iso
Specifying -maxdepth 2 will search the directory and one subdirectory deep:
find . -maxdepth 2
.
./images
./images/hp
./images/memes
./gimp-2.8.16.tar.bz2
./hp-fanfic
./hp-fanfic/malfoys-revenge.doc
./hp-fanfic/weekend-at-hagreds.doc
./hp-fanfic/dumbledores-lament.doc
./archlinux-2016.02.01-dual.iso
Specifying -maxdepth 3 will search one level deeper than that:
find . -maxdepth 3
.
./images
./images/hp
./images/hp/snape.jpg
./images/hp/harry.jpg
./images/memes
./images/memes/winteriscoming.jpg
./images/memes/goodguygary.JPG
./images/memes/picard.jpg
./gimp-2.8.16.tar.bz2
./hp-fanfic
./hp-fanfic/malfoys-revenge.doc
./hp-fanfic/weekend-at-hagreds.doc
./hp-fanfic/dumbledores-lament.doc
./archlinux-2016.02.01-dual.iso

Finding by name

To restrict your search results to match only files and directories that have a certain name, use the -name option and put the name in quotes:
find . -name "picard.jpg"
./images/memes/picard.jpg
You can also use wildcards as part of your file name. For instance, to find all files whose name ends in .jpg, you can use an asterisk to represent the rest of the file name. When you run the command, the shell will glob the file name into anything that matches the pattern:
find . -name "*.jpg"
./images/hp/snape.jpg
./images/hp/harry.jpg
./images/memes/winteriscoming.jpg
./images/memes/picard.jpg
Notice that our command didn't list the file whose extension (in this case, JPG) is in capital letters. That's because unlike other operating systems, such as Microsoft Windows, Linux file names are case-sensitive.
To perform a case-insensitive search instead, use the -iname option:
find . -iname "*.jpg"
./images/hp/snape.jpg
./images/hp/harry.jpg
./images/memes/winteriscoming.jpg
./images/memes/goodguygary.JPG
./images/memes/picard.jpg

Finding only files, or only directories

To list files only and omit directory names from your results, specify -type f:
find . -type f
./images/hp/snape.jpg
./images/hp/harry.jpg
./images/memes/winteriscoming.jpg
./images/memes/goodguygary.JPG
./images/memes/picard.jpg
./gimp-2.8.16.tar.bz2
./hp-fanfic/malfoys-revenge.doc
./hp-fanfic/weekend-at-hagreds.doc
./hp-fanfic/dumbledores-lament.doc
./archlinux-2016.02.01-dual.iso
To list directories only and omit file names, specify -type d:
find . -type d
.
./images
./images/hp
./images/memes
./hp-fanfic

Finding files based on size

To display only files of a certain size, you can use the -size option. To specify the size, use a plus or a minus sign (for "more than" or "less than"), a number, and a quantitative suffix such as k, M, or G.
For instance, to find files that are "bigger than 50 kilobytes", use -size +50k:
find . -size +50k
./images/memes/winteriscoming.jpg
./gimp-2.8.16.tar.bz2
./archlinux-2016.02.01-dual.iso
For files "bigger than 10 megabytes", use -size +10M:
find . -size +10M
./gimp-2.8.16.tar.bz2
./archlinux-2016.02.01-dual.iso
For "bigger than 1 gigabyte", use -size +1G:
find . -size +1G
./archlinux-2016.02.01-dual.iso
For files in a certain size range, use two -size options. For instance, to find files "bigger than 10 megabytes, but smaller than 1 gigabyte", specify -size +10M -size -1G:
find . -size +10M -size -1G
./gimp-2.8.16.tar.bz2

Finding files based on modification, access, or status change

The -mtime option restricts search by how many days since the file's contents were modified. To specify days in the past, use a negative number. For example, to find only those files which were modified in the past two days (48 hours ago), use -mtime -2:
find . -mtime -2
The -mmin option does the same thing, but in terms of minutes, not days. For instance, this command shows only files modified in the past half hour:
find . -mmin -30
A similar option is -ctime, which checks when a file's status was last changed, measured in days. A status change is a change in the file's metadata. For instance, changing the permissions of a file is a status change.
The option -cmin will search for a status change, measured in minutes.
You can also search for when a file was last accessed — in other words, when its contents were most recently viewed. The -atime option is used to search for files based upon their most recent access time, measured in days.
The -amin option will perform the same search restriction, but measured in minutes.

Redirecting output to a text file

If you are performing a very large search, you may want to save your search results in a file, so that you can view the results later. You can do this by redirecting your find output to a file:
find . -iname "*.jpg" > images.txt
You can then open your results in a text editor, or print them with the cat command.
cat images.txt
./images/hp/snape.jpg
./images/hp/harry.jpg
./images/memes/winteriscoming.jpg
./images/memes/goodguygary.JPG
./images/memes/picard.jpg
Alternatively, you can pipe your output to the tee command, which will print the output to the screen and write it to a file:
find . -size +500M | tee bigfiles.txt
./archlinux-2016.02.01-dual.iso
cat bigfiles.txt
./archlinux-2016.02.01-dual.iso

Suppressing error messages

You may receive the error message "Permission denied" when performing a search. For instance, if you search the root directory as a normal user:
find /
find: `/var/lib/sudo/ts': Permission denied
find: `/var/lib/sudo/lectured': Permission denied
find: `/var/lib/polkit-1': Permission denied
find: `/var/lib/container': Permission denied
find: `/var/lib/gdm3/.dbus': Permission denied
find: `/var/lib/gdm3/.config/ibus': Permission denied
...
You will receive that error message if find tries to access a file that your user account doesn't have permission to read. You may be able to perform the search as the superuser (root), which has complete access to every file on the system. But it's not recommended to do things as root, unless there are no other options.
If all you need to do is hide the "Permission denied" messages, you can add 2>&1 | grep -v "Permission denied" to the end of your command, like this:
find / 2>&1 | grep -v "Permission denied"
The above example filters out the "Permission denied" messages from your search. How?
2>&1 is a special redirect that sends error messages to standard output, so that the combined output can be piped to the grep command. grep -v then performs an inverse match on "Permission denied", displaying only lines which do not contain that string.
Redirecting and using grep to filter the error messages is a useful technique when "Permission denied" is cluttering your search results and you can't perform the search as root.

Examples

find ~/. -name "*.txt" -amin -120
Find all files in your home directory and below which end in the extension ".txt". Display only files accessed in the past two hours.
find . -name "*.zip" -size +10M -mtime -3
Find all files in the working directory and below whose name has the extension ".zip" and whose size is greater than 10 megabytes. Display only files whose contents were modified in the last 72 hours.
find . -iname "*report*" -type f -maxdepth 2
Perform a case-insensitive search for files that contain the word "report" in their name. If the search finds a directory with "report" in its name, do not display it. Search only in the working directory, and one directory level beneath it.
find / -name "*init*" 2>&1 | grep -v "Permission denied" | tee ~/initfiles.txt
Find all files on the system whose name contains "init", suppressing error messages. Display results on the screen and output them to a file in your home directory named "initfiles.txt".

Thursday 9 May 2019

caffe: undefined reference to boost::gregorian::greg_month::as_short_string() const

the reason is the missing dependency on the boost date_time library in the cmake configuration

Goto root > cmake > Dependencies.cmake

search a keyword: filesystem (original source)
find_package(Boost 1.61 COMPONENTS "python${PYTHON_VERSION_MAJOR}" system thread filesystem regex)

add the keyword date_time to every matching find_package line (modified source):-
 find_package(Boost 1.61 COMPONENTS "python${PYTHON_VERSION_MAJOR}" date_time system thread filesystem regex)

Trace or track Python statement execution (trace)

Functions in the 'trace' module of the Python standard library generate a trace of program execution and annotated statement coverage. The module also has functions to list the functions called during a run by generating caller relationships.

The following two Python scripts are used as an example to demonstrate the features of the trace module.

#myfunctions.py
import math
def area(x):
   a = math.pi*math.pow(x,2)
   return a
def factorial(x):
   if x==1:
      return 1
   else:
      return x*factorial(x-1)
#mymain.py
import myfunctions
def main():
   x = 5
   print ('area=',myfunctions.area(x))
   print ('factorial=',myfunctions.factorial(x))

if __name__=='__main__':
   main()
 
The 'trace' module has a command line interface. All the functions in the module can be called using command line switches. The most important option is --trace, which displays program lines as they are executed. In the following example another option, --ignore-dir, is used. It ignores specified directories while generating the trace.

E:\python37>python -m trace --ignore-dir=../lib --trace mymain.py

Output

mymain.py(2): def main():
mymain.py(7): if __name__=='__main__':
mymain.py(8): main()
--- modulename: mymain, funcname: main
mymain.py(3): x=5
mymain.py(4): print ('area=',myfunctions.area(x))
--- modulename: myfunctions, funcname: area
myfunctions.py(3): a=math.pi*math.pow(x,2)
myfunctions.py(4): return a
area= 78.53981633974483
mymain.py(5): print ('factorial=',myfunctions.factorial(x))
--- modulename: myfunctions, funcname: factorial
myfunctions.py(6): if x==1:
myfunctions.py(9): return x*factorial(x-1)
--- modulename: myfunctions, funcname: factorial
myfunctions.py(6): if x==1:
myfunctions.py(9): return x*factorial(x-1)
--- modulename: myfunctions, funcname: factorial
myfunctions.py(6): if x==1:
myfunctions.py(9): return x*factorial(x-1)
--- modulename: myfunctions, funcname: factorial
myfunctions.py(6): if x==1:
myfunctions.py(9): return x*factorial(x-1)
--- modulename: myfunctions, funcname: factorial
myfunctions.py(6): if x==1:
myfunctions.py(7): return 1
factorial= 120
 
The --count option generates a file, with a .cover extension, for each module in use.
E:\python37>python -m trace --count mymain.py
area= 78.53981633974483
factorial = 120
myfunctions.cover
1: import math
1: def area(x):
1:    a = math.pi*math.pow(x,2)
1:    return a
1: def factorial(x):
5:    if x==1:
1:       return 1
   else:
4:    return x*factorial(x-1)
mymain.cover
1: import myfunctions
1: def main():
1:    x = 5
1:    print ('area=',myfunctions.area(x))
1:    print ('factorial=',myfunctions.factorial(x))

1: if __name__=='__main__':
1:    main()
 
The --summary option displays a brief summary if the --count option is also used.
E:\python37>python -m trace --count --summary mymain.py
area = 78.53981633974483
factorial = 120
lines cov% module (path)
   8 100% myfunctions (E:\python37\myfunctions.py)
   7 100% mymain (mymain.py)
 
The --file option specifies the name of a file in which counts accumulate over several tracing runs.
E:\python37>python -m trace --count --file report.txt mymain.py
area = 78.53981633974483
factorial = 120
Skipping counts file 'report.txt': [Errno 2] No such file or directory: 'report.txt'

E:\python37>python -m trace --count --file report.txt mymain.py
area= 78.53981633974483
factorial= 120
 
The --listfuncs option displays the functions called during execution of the program.
E:\python37>python -m trace --listfuncs mymain.py | findstr -v importlib
area= 78.53981633974483
factorial= 120

functions called:
filename: E:\python37\lib\encodings\cp1252.py, modulename: cp1252, funcname: IncrementalEncoder.encode
filename: E:\python37\myfunctions.py, modulename: myfunctions, funcname: <module>
filename: E:\python37\myfunctions.py, modulename: myfunctions, funcname: area
filename: E:\python37\myfunctions.py, modulename: myfunctions, funcname: factorial
filename: mymain.py, modulename: mymain, funcname: <module>
filename: mymain.py, modulename: mymain, funcname: main
 
The --trackcalls option is used along with the --listfuncs option. It generates calling relationships.
E:\python37>python -m trace --listfuncs --trackcalls mymain.py | findstr -v importlib
area= 78.53981633974483
factorial= 120

calling relationships:

--> E:\python37\myfunctions.py


*** E:\python37\lib\trace.py ***
--> mymain.py
trace.Trace.runctx -> mymain.<module>

*** E:\python37\myfunctions.py ***
myfunctions.factorial -> myfunctions.factorial

*** mymain.py ***
mymain.<module> -> mymain.main
--> E:\python37\lib\encodings\cp1252.py
mymain.main -> cp1252.IncrementalEncoder.encode
--> E:\python37\myfunctions.py
mymain.main -> myfunctions.area
mymain.main -> myfunctions.factorial
 
(ref: https://www.tutorialspoint.com/trace-or-track-python-statement-execution-trace)