# How to use the 10,582 trainaug images with the DeeplabV3 code?

You will know what I mean if you have experience training segmentation models on the Pascal VOC dataset. The dataset provides only 1,464 pixel-level image annotations for training, yet every paper trains on 10,582 images, a split usually called trainaug. The additional annotations come from SBD, but their format differs from Pascal VOC's. Fortunately, someone has already made a converted version, SegmentationClassAug.

The DeeplabV3 code does not ship the SBD annotations, for reasons we can understand, so I wrote a simple script to solve this.

To use the 10,582 trainaug images with the DeeplabV3 code, just follow these steps:

1. Create a script named convert_voc2012_aug.sh.

2. Create a text file named trainaug.txt with this content.

3. Put all of them (convert_voc2012_aug.sh, trainaug.txt, VOCtrainval_11-May-2012.tar, SegmentationClassAug.zip) into the research/deeplab/datasets folder.

4. Execute convert_voc2012_aug.sh (give it execute permission) in research/deeplab/datasets.

5. Change the code in research/deeplab/datasets/segmentation_dataset.py from:

to:

6. Don't forget to change the train_split parameter in research/deeplab/train.py to trainaug.
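Since the script body is not reproduced above, here is a sketch of what convert_voc2012_aug.sh could look like, modeled on the repository's own download_and_convert_voc2012.sh; the build_voc2012_data.py flags are taken from that script, so double-check them against your checkout:

```shell
#!/bin/bash
# Sketch of convert_voc2012_aug.sh -- run from research/deeplab/datasets.
# Assumes VOCtrainval_11-May-2012.tar, SegmentationClassAug.zip and
# trainaug.txt are in the current directory.
set -e

# 1. Unpack the official VOC 2012 release and the SBD-derived labels.
tar -xf VOCtrainval_11-May-2012.tar
unzip -q SegmentationClassAug.zip -d VOCdevkit/VOC2012

# 2. Register the trainaug split alongside train/val/trainval.
cp trainaug.txt VOCdevkit/VOC2012/ImageSets/Segmentation/

# 3. Convert images and SegmentationClassAug labels to TFRecord.
#    (SegmentationClassAug is already grayscale, so no colormap removal.)
mkdir -p tfrecord
python ./build_voc2012_data.py \
  --image_folder="VOCdevkit/VOC2012/JPEGImages" \
  --semantic_segmentation_folder="VOCdevkit/VOC2012/SegmentationClassAug" \
  --list_folder="VOCdevkit/VOC2012/ImageSets/Segmentation" \
  --image_format="jpg" \
  --output_dir="tfrecord"
```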

# LeetCode: Interleaving Strings Solution Report

### Question

Given s1, s2, s3, find whether s3 is formed by the interleaving of s1 and s2.

For example,
Given:
s1 = "aabcc",
s2 = "dbbca",

When s3 = "aadbbcbcac", return true.
When s3 = "aadbbbaccc", return false.
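The standard dynamic-programming solution can be sketched as follows, using a rolling one-dimensional DP over positions in s2 (the function name is mine, not LeetCode's):

```python
def is_interleave(s1: str, s2: str, s3: str) -> bool:
    """Return True if s3 can be formed by interleaving s1 and s2."""
    m, n = len(s1), len(s2)
    if m + n != len(s3):
        return False
    # dp[j] is True iff s3[:i+j] is an interleaving of s1[:i] and s2[:j]
    dp = [False] * (n + 1)
    dp[0] = True
    # First row: only characters of s2 are consumed.
    for j in range(1, n + 1):
        dp[j] = dp[j - 1] and s2[j - 1] == s3[j - 1]
    for i in range(1, m + 1):
        # First column: only characters of s1 are consumed.
        dp[0] = dp[0] and s1[i - 1] == s3[i - 1]
        for j in range(1, n + 1):
            dp[j] = (dp[j] and s1[i - 1] == s3[i + j - 1]) or \
                    (dp[j - 1] and s2[j - 1] == s3[i + j - 1])
    return dp[n]
```

Both example cases above follow directly: `is_interleave("aabcc", "dbbca", "aadbbcbcac")` is true, while `is_interleave("aabcc", "dbbca", "aadbbbaccc")` is false. Time complexity is O(mn), space O(n).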

# Fixing High Memory Usage and Disk I/O Caused by Datasets

gvfsd-metadata is a daemon acting as a write serialiser to the internal gvfs metadata storage. It is autostarted by GIO clients when they make metadata changes. Read operations are done by client-side GIO code directly, and don't require the daemon to be running. The gvfs metadata capabilities are used by the GNOME Files file manager, for example. When you move or copy a dataset containing a huge number of files in GNOME Files, this metadata store can grow very large and make gvfsd-metadata hog memory and disk I/O. Deleting the store fixes it:

```shell
rm -rf ~/.local/share/gvfs-metadata
```
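A slightly safer variant, as a sketch: check how large the store has grown before removing it. The helper name is mine, and gvfsd-metadata recreates the store on demand, so deleting it loses nothing essential:

```shell
# Report the size of a gvfs metadata store, then clear it.
# The directory is passed in explicitly so you can see what you delete.
clear_gvfs_metadata() {
  dir="$1"
  [ -d "$dir" ] || return 0   # nothing to do
  du -sh "$dir"               # show how much space it occupies
  rm -rf "$dir"               # gvfsd-metadata recreates it on demand
}

# Typical invocation:
clear_gvfs_metadata "$HOME/.local/share/gvfs-metadata"
```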


Relatedly, if a large dataset lives inside an editor project, excluding it from indexing helps too. These Sublime Text project settings keep the editor from scanning version-control folders and binary files:

```json
"folder_exclude_patterns": [".svn", ".git", ".hg", "CVS", "node_modules/*"],
"binary_file_patterns": ["*.mat", "*.jpg", "*.jpeg", "*.png", "*.gif", "*.ttf", "*.tga", "*.dds", "*.ico", "*.eot", "*.pdf", "*.swf", "*.jar", "*.zip"],
```


# How to make linemod and KinectV2 work with ROS Indigo?

I'm using Ubuntu 14.04.5 with ROS Indigo, and I want to make ork work with linemod, a fairly simple need. But when packages are not well maintained (especially in ROS), you have to investigate the problem yourself and sometimes even contribute code to the project…

Following the installation guide to install ork is simple; don't forget to install CouchDB. Building from source is the only choice now, since you have to modify the code to make it work as you wish.

In my case, the tabletop method works well with KinectV1, and even with KinectV2 (though only the hd resolution config worked). However, linemod caused a huge memory leak, nearly 1 GB/s, and it didn't publish the /recognized_object_array topic. At first I thought the problem was in linemod, but it turned out to be in ork_renderer. A thread in the ORK Google Group said:

> Currently LINEMOD uses ork_renderer for its training phase. ork_renderer uses either GLUT or osmesa to generate synthetic images of the training data. It seems that the ork_renderer in your computer is linked to osmesa.

Fortunately, we only need to change CMakeLists.txt to use GLUT: change `option(USE_GLUT "Use GLUT instead of OSMesa" OFF)` to `option(USE_GLUT "Use GLUT instead of OSMesa" ON)`.
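This one-line edit can also be scripted with sed. The sketch below demonstrates the substitution on a scratch file; point CMAKE_FILE at the real ork_renderer/CMakeLists.txt in your workspace instead:

```shell
# Flip the USE_GLUT default from OFF to ON (demonstrated on a temp file).
CMAKE_FILE=$(mktemp)
echo 'option(USE_GLUT "Use GLUT instead of OSMesa" OFF)' > "$CMAKE_FILE"
sed -i 's/\(option(USE_GLUT "Use GLUT instead of OSMesa" \)OFF)/\1ON)/' "$CMAKE_FILE"
cat "$CMAKE_FILE"   # the option line now ends in ON)
```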

Update: I now use the version from JimmyDaSilva rather than the official wg-perception one.

But the current linemod version still has some problems related to assimp_devel; it seems the developer is working on it, so you have to revert linemod to the previous version (35aebd).

So I created a repo here to make the whole thing work. When linemod is training, it shows an assimp window; in my case the window is empty, but that is not a serious problem. linemod works anyway with KinectV1, but not with KinectV2, because KinectV2 has an unusual resolution that causes an OpenCV error in linearMemoryPyramid. Fortunately, again, an awesome guy has worked it out, and he also fixed many other issues. I need KinectV2 in my work, so I followed his modifications and successfully made linemod work on KinectV2 with QHD resolution. If you want to use SD resolution, set T={2,4} in linemod_detect.cpp and renderer_width: 512, renderer_height: 424 in training.ork, as JimmyDaSilva said.
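For reference, the SD-resolution settings for training.ork would look roughly like this (a sketch; only the two fields mentioned are shown, and any other fields in your training.ork stay as they are):

```
# training.ork fragment for KinectV2 SD (512x424), per JimmyDaSilva:
renderer_width: 512
renderer_height: 424
```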

This ork repo integrates all of these changes, mainly to make it easier to work with ork in the future, perhaps for myself.

Tips:

• If you have trained with linemod before, you'd better delete the whole object_recognition database in CouchDB.
• Using coke.stl is simpler; using coke.obj with a texture gives a better result.
• When training linemod, make sure you are in the folder that contains the obj, mtl, and image files.

# A Multi-System, Multi-Monitor, One-Keyboard-and-Mouse Work Environment

Synergy is a piece of software that shares one mouse and keyboard across machines over a LAN. However, even though it is open source, downloading it costs money! Charging for open-source software is not a bad thing in itself, but since the source is open, compiling it yourself or using someone else's build doesn't seem to break any rules. I have serious doubts about this business model… Moreover, Synergy nightly builds can be found with a quick search, which means you can in fact download the latest version for free (though it may be less stable).