8 months ago

My goal is to install Ubuntu with lvmcache, using an SSD to speed up system I/O.

lvmcache pitfalls on Ubuntu (2016/02)

  1. On 14.04, lvm2 does not support lvmcache
  2. On 15.04 and 15.10, RAID triggers "NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s!" on my machine by default; the system hangs right after boot
  3. Boot fails with lvm2's new default cache policy smq: "device-mapper: cache-policy: unknown policy type"

UPDATE 2016/02/18: Ubuntu 16.04 ships kernel 4.4, which avoids the CPU#0 soft-lockup problem. I am going with 16.04 now.

1. install 16.04 + raid + lvm

2. add lvmcache

Create the cache-pool with the mq policy until smq is well supported.

thin-provisioning-tools is needed: it contains /usr/sbin/cache_check

apt-get update
aptitude full-upgrade
apt-get install thin-provisioning-tools
dd if=/dev/zero bs=1M count=1000 of=/dev/sde
pvcreate /dev/sde
vgextend vg0 /dev/sde
lvcreate -L 480M -n cachemeta vg0 /dev/sde
lvcreate -L 475000M -n cachedata vg0 /dev/sde
lvconvert --type cache-pool --cachepolicy mq --chunksize 8192 --poolmetadata vg0/cachemeta --cachemode writeback vg0/cachedata --yes
lvconvert --type cache --cachepool vg0/cachedata vg0/root
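Once the cache is active, `dmsetup status` reports hit/miss counters. A minimal sketch that extracts the read-hit rate from a sample status line (the sample numbers and the `vg0-root` device name are illustrative; the field positions follow the kernel's dm-cache target documentation, so verify them against your kernel version):

```shell
# Parse read hits/misses from a dm-cache `dmsetup status` line.
# After "cache": meta block size, used/total meta, block size, used/total
# blocks, then read hits ($8) and read misses ($9) counted from line start.
line='0 976754688 cache 8 1018/122160 2048 3541/593750 140438 22097 42868 70 0 3541 0 1 writeback 2 migration_threshold 2048 mq 10'
echo "$line" | awk '{ printf "read hit rate: %.1f%%\n", 100 * $8 / ($8 + $9) }'
```

On a live system, feed it real output instead: `dmsetup status vg0-root | awk ...`.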

# Add a hook script to initramfs to include the right tools and modules
HOOK=$(mktemp)   # temporary file; installed into /etc/initramfs-tools/hooks below
cat <<'EOF' > $HOOK
#!/bin/sh
PREREQ=""
prereqs() {
    echo "$PREREQ"
}
case $1 in
prereqs)
    prereqs
    exit 0
    ;;
esac
if [ ! -x /usr/sbin/cache_check ]; then
    exit 0
fi
. /usr/share/initramfs-tools/hook-functions
copy_exec /usr/sbin/cache_check
manual_add_modules dm_cache dm_cache_mq dm_cache_smq dm_persistent_data dm_bufio
EOF

cp $HOOK /etc/initramfs-tools/hooks/lvmcache
chmod +x /etc/initramfs-tools/hooks/lvmcache

echo "dm_cache" >> /etc/initramfs-tools/modules
echo "dm_cache_mq" >> /etc/initramfs-tools/modules
echo "dm_cache_smq" >> /etc/initramfs-tools/modules
echo "dm_persistent_data" >> /etc/initramfs-tools/modules
echo "dm_bufio" >> /etc/initramfs-tools/modules

# Update initramfs

update-initramfs -u

Reboot now.

If boot fails, use an Ubuntu 16.04 live CD to run boot-repair.

A 16.04 (xenial) live CD is required if you are using lvmcache.

sudo -s
add-apt-repository ppa:yannubuntu/boot-repair && apt-get update
apt-get install -y boot-repair mdadm thin-provisioning-tools
9 months ago
  • 2016/1/26 update: I have reviewed my steps and rewritten them as a Dockerfile so they can be tested.

For the Dockerfile format, see https://docs.docker.com/engine/reference/builder/

1. Set target versions

# Set NPS_VERSION to the ngx_pagespeed release to build (left blank here)
ENV DEBIAN_FRONTEND=noninteractive \
        HOME=/root \
        PATH=/usr/local/rvm/bin:$PATH \
        NPS_VERSION= \
        NGINX_VERSION=1.9.9

2. Install required packages

RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections && \
        echo 'APT::Get::Clean=always;' >> /etc/apt/apt.conf.d/99AutomaticClean
RUN apt-get update
RUN apt-get install -y build-essential zlib1g-dev libpcre3 libpcre3-dev unzip wget curl

3. Download ngx_pagespeed source (at /root)

RUN wget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.zip && \
        unzip release-${NPS_VERSION}-beta.zip && \
        rm release-${NPS_VERSION}-beta.zip && \
        cd ngx_pagespeed-release-${NPS_VERSION}-beta/ && \
        wget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz && \
        tar -xzf ${NPS_VERSION}.tar.gz && \
        rm ${NPS_VERSION}.tar.gz

4. Download nginx source (at /root)

RUN wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz && \
        tar -xzf nginx-${NGINX_VERSION}.tar.gz && \
        rm nginx-${NGINX_VERSION}.tar.gz

5. Install RVM and passenger

RUN curl -sSL https://rvm.io/mpapis.asc | gpg --import - && \
        curl -L https://get.rvm.io | /bin/bash -s stable && \
        echo 'source /etc/profile.d/rvm.sh' >> /etc/profile && \
        echo 'source /etc/profile.d/rvm.sh' >> /root/.bashrc && \
        rvm requirements && \
        rvm install 2.0.0 && \
        bash -l -c "rvm use --default 2.0.0 && \
        gem install passenger --no-rdoc --no-ri"

6. Build nginx & ngx_pagespeed by passenger-install-nginx-module

Using the configure flags reported by `nginx -V` for the Nginx mainline 1.9.9 package.

RUN adduser --system --no-create-home --disabled-login --disabled-password --group nginx && \
        usermod -g www-data nginx && \
        mkdir -p /etc/nginx/sites-available /etc/nginx/sites-enabled /var/cache/nginx && \
        bash -l -c "rvmsudo passenger-install-nginx-module --auto \
        --nginx-source-dir=$HOME/nginx-${NGINX_VERSION} \
        --conf-path=/etc/nginx/nginx.conf \
        --error-log-path=/var/log/nginx/error.log --group=nginx \
        --http-client-body-temp-path=/var/cache/nginx/client_temp \
        --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
        --http-log-path=/var/log/nginx/access.log \
        --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
        --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
        --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
        --lock-path=/var/run/nginx.lock --pid-path=/var/run/nginx.pid \
        --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --user=nginx \
        --with-file-aio --with-http_addition_module \
        --with-http_auth_request_module --with-http_dav_module \
        --with-http_flv_module --with-http_gunzip_module \
        --with-http_gzip_static_module --with-http_mp4_module \
        --with-http_random_index_module --with-http_realip_module \
        --with-http_secure_link_module --with-http_slice_module \
        --with-http_ssl_module --with-http_stub_status_module \
        --with-http_sub_module --with-http_v2_module --with-ipv6 --with-mail \
        --with-mail_ssl_module --with-stream --with-stream_ssl_module \
        --with-threads --with-cc-opt='-g -O2 -fstack-protector \
        --param=ssp-buffer-size=4 -Wformat -Werror=format-security \
        -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions \
        -Wl,-z,relro -Wl,--as-needed'"

7. Setup /etc/init.d/nginx, nginx.conf and self-signed certificate

ADD https://raw.github.com/JasonGiedymin/nginx-init-ubuntu/master/nginx /etc/init.d/nginx
ADD nginx.service.patch nginx.conf.patch /
RUN chmod +x /etc/init.d/nginx && \
        patch -p0 /etc/init.d/nginx < /nginx.service.patch && \
        update-rc.d -f nginx defaults && \
        patch -p0 /etc/nginx/nginx.conf < /nginx.conf.patch && \
        openssl req -subj '/CN=domain.com/O=My Company Name LTD./C=US' -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/nginx/cert.key -out /etc/nginx/cert.pem

8. Setup rails example app

RUN cd /var/ && \
     apt-get install nodejs -y && \
     bash -l -c 'gem install bundler rails --no-rdoc --no-ri && \
     rails new www && \
     cd /var/www && \
     sed -i "s/secret_key_base.*/secret_key_base: `RAILS_ENV=production rake secret`/" config/secrets.yml && \
     bundle install && \
     rails generate controller welcome index && \
     sed -i "s/# root/root/" config/routes.rb && \
     chown -R nginx:nginx /var/www'

9. (optional) Build it into Docker container

You need to install docker to do this.

docker build -t jcppkkk/nginx-pagespeed .

10. Test it

  1. h2 : the HTTP/2 protocol is advertised
  2. X-Page-Speed : the PageSpeed module responds with its header
  3. Welcome#index : nginx with the Passenger Ruby server is serving the Rails app
docker run --rm -it jcppkkk/nginx-pagespeed bash -c "service nginx start && openssl s_client -connect -nextprotoneg '' 2>/dev/null | grep 'Protocols.*h2' && curl -sLkI '' | grep 'X-Page-Speed' && curl -sk | grep Welcome"
 * Starting Nginx Server...    [ OK ]
Protocols advertised by server: h2, http/1.1
about 1 year ago

Build a ZFS raidz2 pool, share the ZFS storage as an iSCSI volume or NFS export, and tune I/O performance for ESXi access.

  • 2015/8/11 utilize server's RAM by writing to /sys/module/zfs/parameters/zfs_arc_max

Install ZFS

Before we can start using ZFS, we need to install it. Simply add the repository to apt-get with the following command:

apt-get install --yes software-properties-common
apt-add-repository --yes ppa:zfs-native/stable
apt-get update
apt-get install ubuntu-zfs

Now, let’s see if it has been correctly compiled and loaded by the kernel

dmesg | grep ZFS

You get an output like this:

# dmesg | grep ZFS
[ 5.979569] ZFS: Loaded module v0.6.4.1-1~trusty, ZFS pool version 5000, ZFS filesystem version 5

Creation of a RAID-Z2 disk array using 7 disks

Here is my server's disk layout; sdb through sdh will form the ZFS RAID pool:

root@nfs1:~$ lsblk
sda      8:0    0   2.7T  0 disk
├─sda1   8:1    0     1M  0 part
├─sda2   8:2    0   2.7T  0 part /
└─sda3   8:3    0    32G  0 part [SWAP]
sdb      8:16   0   2.7T  0 disk
sdc      8:32   0   2.7T  0 disk
sdd      8:48   0   2.7T  0 disk
sde      8:64   0   2.7T  0 disk
sdf      8:80   0   2.7T  0 disk
sdg      8:96   0   2.7T  0 disk
sdh      8:112  0   2.7T  0 disk

Create ZFS pool from 7 disks

sudo zpool create -f datastore1 raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
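As a sanity check on the pool size: raidz2 spends two disks' worth of capacity on parity, so seven 2.7 TiB disks leave roughly five disks of usable space. A quick sketch of the arithmetic:

```shell
# raidz2 usable capacity ~= (disks - 2 parity) * per-disk size
awk -v n=7 -v p=2 -v s=2.7 \
    'BEGIN { printf "approx usable: %.1f TiB\n", (n - p) * s }'
```

This prints about 13.5 TiB, matching the ~13.3T AVAIL that `zfs list` reports below once metadata overhead is subtracted.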
  • We will use all the capacity of datastore1 for the iscsi or nfs data sets, which is around 13 TB:
    root@nfs1:~# zfs list
    NAME        USED   AVAIL  REFER  MOUNTPOINT
    datastore1  94.9K  13.3T  34.1K  /datastore1

Create an iSCSI volume using the Linux SCSI target (targetcli)

1. Create ZFS data sets for iSCSI

sudo zfs create -o compression=off -o dedup=off -o volblocksize=32K -V 13000G datastore1/iscsi
sudo zfs set sync=disabled datastore1/iscsi

2. Create iscsi adaptor on ESXi:

vCenter > Host > Configuration > Storage Adapters > Add iSCSI adapter, then get the iSCSI name

We will use the block device to create the iSCSI backstore:

root@iscsi-storage-2:~# ll /dev/zvol/datastore1/iscsi
lrwxrwxrwx 1 root root 9 May 26 00:41 /dev/zvol/datastore1/iscsi -> ../../zd0

3. Create iSCSI target using ZFS

root@iscsi-storage-2:~# targetcli
/> cd backstores
/backstores> iblock/ create name=block_backend dev=/dev/zvol/datastore1/iscsi

... create iscsi target ...

/> cd /iscsi/iqn.2003-01.org.linux-iscsi.iscsi-storage-2.x8664:sn.f017b570b1d2/tpgt1/
luns/ create /backstores/iblock/block_backend

... Add portals ...
... Add ACLs for the ESXi hosts' initiator IQNs ...

/iscsi/iqn.20...570b1d2/tpgt1> / saveconfig
/iscsi/iqn.20...570b1d2/tpgt1> exit

4. Mount iSCSI on ESXi

  • On ESXi
    1. Add iscsi IP:port into iscsi adaptor
    2. Rescan All
    3. Add the iSCSI volume in Host > Configuration > Storage

Create NFS sharing

1. Install NFS service

$ sudo apt-get install nfs-kernel-server
$ sudo reboot
  • I commented out `&& grep -q '^[[:space:]][^#]/' $export_files` in /etc/init.d/nfs-kernel-server because the server would not start with an empty /etc/exports file

2. Start NFS service

$ sudo service nfs-kernel-server start
 * Exporting directories for NFS kernel daemon... [ OK ]
 * Starting NFS kernel daemon                     [ OK ]

3. Create ZFS data sets for NFS

sudo zfs create -o compression=off -o dedup=off -o mountpoint=/nfs -o sharenfs=on datastore1/nfs
sudo zfs set sync=disabled datastore1/nfs

4. Test if NFS sharing is exported

root@nfs1:~# apt-get install nfs-common
root@nfs1:~# showmount -e
Export list for nfs1:
/nfs *

5. Auto export NFS folder

root@nfs1:~# vim /etc/rc.local
#!/bin/sh -e
# rc.local
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution
# bits.
# By default this script does nothing.
zfs unshare datastore1/nfs
zfs share datastore1/nfs
exit 0

Tuning for performance and I/O latency

1. Set the I/O Scheduler to noop (echo noop > /sys/block/sdb/queue/scheduler).

Skip zd0 if using NFS
# for i in zd0 sdb sdc sdd sde sdf sdg sdh; \
do echo noop > /sys/block/$i/queue/scheduler; cat /sys/block/$i/queue/scheduler; done
[noop] deadline cfq
[noop] deadline cfq
[noop] deadline cfq
[noop] deadline cfq
[noop] deadline cfq
[noop] deadline cfq
[noop] deadline cfq
[noop] deadline cfq
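These scheduler settings are lost on reboot. One way to persist them is a udev rule (a sketch; the file name is arbitrary, the `sd[b-h]` match must fit your disk names, and you would add a matching rule for zd0 only in the iSCSI case):

```
# /etc/udev/rules.d/60-zfs-scheduler.rules (hypothetical file name)
ACTION=="add|change", KERNEL=="sd[b-h]", ATTR{queue/scheduler}="noop"
```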

2. Change the IO size to 32KB

Skip zd0 if using NFS
# for i in zd0 sdb sdc sdd sde sdf sdg sdh; \
do echo 32 > /sys/block/$i/queue/max_sectors_kb; echo 4 > /sys/block/$i/queue/nr_requests; done

ZFS iSCSI Benchmark Tests on ESX

...I tested with both 64 KB and 32 KB; for me, 32 KB worked out a little better.
...We can see that the avgrq-sz changed to 64 (32 KB), which is good, and the average wait time went down to ~80 ms (from ~1000 ms). Lowering the number of requests to 4 lowered the DAVG to practically nothing, but the speed wasn't that great.

3. Enable Disk write-back caching

# for i in sdb sdc sdd sde sdf sdg sdh; do hdparm -W1 /dev/$i; done

Improve hard drive write speed with write-back caching

4. Increase ZFS read cache(ZFS-ARC) size to 50GB

On this machine the ARC was limited to 32 GB by default (ZFS on Linux caps the ARC at half of system RAM). A larger read cache can raise the cache hit rate.

root@nfs1:~# echo $((50*1024*1024*1024)) >> /sys/module/zfs/parameters/zfs_arc_max
root@nfs1:~# echo options zfs zfs_arc_max=$((50*1024*1024*1024)) > /etc/modprobe.d/zfs.conf
root@nfs1:~# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=53687091200

root@nfs1:~# git clone --depth 1 https://github.com/frankuit/zfs-arcstats-bash.git
root@nfs1:~# zfs-arcstats-bash/arc2
This will display the cache hit and miss ratio's.
for a time limited run (in seconds) add a number of seconds behind this command
|l1reads    l1miss     l1hits     l1hit%     size  |
|175        10         165        94.285%    50 GB  |
|84         0          84         100.000%   49 GB  |
|110        8          102        92.727%    50 GB  |
|100        14         86         86.000%    50 GB  |
|362        14         348        96.132%    50 GB  |
|75         3          72         96.000%    50 GB  |

Benchmark Result

(Left) NFS Storage with 10GbE Network (Right) Local Disk Storage

References

about 1 year ago
  1. http://frederik.orellana.dk/booting-ubuntu-14-04-cloud-images-without-a-cloud/
  2. http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt


sudo mkdir -p /var/lib/cloud/seed/nocloud

sudo tee /var/lib/cloud/seed/nocloud/meta-data <<EOF
instance-id: ubuntu
local-hostname: ubuntu
EOF

# quote EOF so $(...) in runcmd is written literally, not expanded here
sudo tee /var/lib/cloud/seed/nocloud/user-data <<'EOF'
#cloud-config
apt_update: true
apt_upgrade: true
apt_sources:
 - source: "ppa:git-core/ppa"
packages:
 - unattended-upgrades
 - squid-deb-proxy-client
 - vim
 - ntp
 - git
timezone: Asia/Taipei
password: ubuntu
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
 - grep $(cat /etc/hostname) /etc/hosts || sudo echo $(cat /etc/hostname) >> /etc/hosts
EOF
  • Remove the cloud-init instance data, which makes cloud-init run again on the next boot.
( cd /var/lib/cloud/instance && sudo rm -Rf * )
sudo shutdown -P now


over 1 year ago


How to compile IPFS on Windows (work in progress)

cd /d c:\
C:\> @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin
C:\> choco install golang
C:\> choco install git
C:\> set GOPATH=C:\go
C:\> go get -d -u github.com/ipfs/go-ipfs/cmd/ipfs
C:\> cd c:\go\src\github.com\ipfs\go-ipfs
c:\go\src\github.com\ipfs\go-ipfs> go build -x -tags nofuse github.com\ipfs\go-ipfs\cmd\ipfs
c:\go\src\github.com\ipfs\go-ipfs> cd c:\
C:\> set PATH=%PATH%;c:\go\src\github.com\ipfs\go-ipfs

over 1 year ago

In squid-deb-proxy, add a setting to rewrite request URLs, replacing *.archive.ubuntu.com with a local mirror.

Edit /etc/squid-deb-proxy/squid-deb-proxy.conf and add the url_rewrite_program setting:

url_rewrite_program /etc/squid-deb-proxy/redirect.php

Create /etc/squid-deb-proxy/redirect.php to process the request URLs:


#!/usr/bin/php
<?php
$temp = array();

// Extend stream timeout to 24 hours
stream_set_timeout(STDIN, 86400);
$pattern = '/(\w+\.)?archive\.ubuntu\.com/i';
$replacement = 'free.nchc.org.tw';

while ( $input = fgets(STDIN) ) {
        // Split the output (space delimited) from squid into an array.
        $temp = explode(' ', $input);

        // Set the URL from squid to a temporary holder.
        $output = $temp[0] . "\n";

        // Check the URL and rewrite it if it matches archive.ubuntu.com
        if ( strpos($temp[0], "archive.ubuntu.com") !== false ) {
                $output = '302:' . preg_replace($pattern, $replacement, $temp[0]) . "\n";
        }
        echo $output;
}

Let redirect.php be executable

sudo chmod +x /etc/squid-deb-proxy/redirect.php
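You can sanity-check the rewrite pattern without Squid by running a sample request URL through an equivalent regex (a sketch using sed; the PHP script above does the same with `preg_replace`, case-insensitively):

```shell
# Any <sub>.archive.ubuntu.com host is rewritten to the local mirror
echo 'http://tw.archive.ubuntu.com/ubuntu/dists/trusty/Release' |
  sed -E 's/([[:alnum:]]+\.)?archive\.ubuntu\.com/free.nchc.org.tw/'
```

This prints `http://free.nchc.org.tw/ubuntu/dists/trusty/Release`.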

Restart squid-deb-proxy and monitor if requests URL is redirected

sudo tail -F /var/log/squid-deb-proxy/access.log

Successful redirections should look like this:

1429005569.051      1 TCP_REDIRECT/302 337 GET http://archive.ubuntu.com/ubuntu/dists/trusty/main/source/Sources.gz - HIER_NONE/- -
1429005569.054      1 TCP_REDIRECT/302 343 GET http://archive.ubuntu.com/ubuntu/dists/trusty/restricted/source/Sources.gz - HIER_NONE/- -
1429005569.055      0 TCP_REDIRECT/302 341 GET http://archive.ubuntu.com/ubuntu/dists/trusty/universe/source/Sources.gz - HIER_NONE/- -
1429005569.056      0 TCP_REDIRECT/302 344 GET http://archive.ubuntu.com/ubuntu/dists/trusty/main/binary-amd64/Packages.gz - HIER_NONE/- -
1429005569.057      0 TCP_REDIRECT/302 350 GET http://archive.ubuntu.com/ubuntu/dists/trusty/restricted/binary-amd64/Packages.gz - HIER_NONE/- -
1429005569.058      0 TCP_REDIRECT/302 348 GET http://archive.ubuntu.com/ubuntu/dists/trusty/universe/binary-amd64/Packages.gz - HIER_NONE/- -
1429005569.070      0 TCP_REDIRECT/302 345 GET http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/source/Sources.gz - HIER_NONE/- -
1429005569.072      0 TCP_REDIRECT/302 351 GET http://archive.ubuntu.com/ubuntu/dists/trusty-updates/restricted/source/Sources.gz - HIER_NONE/- -
1429005569.073      0 TCP_REDIRECT/302 349 GET http://archive.ubuntu.com/ubuntu/dists/trusty-updates/universe/source/Sources.gz - HIER_NONE/- -
1429005569.074      0 TCP_REDIRECT/302 352 GET http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/binary-amd64/Packages.gz - HIER_NONE/- -
1429005569.075      0 TCP_REDIRECT/302 358 GET http://archive.ubuntu.com/ubuntu/dists/trusty-updates/restricted/binary-amd64/Packages.gz - HIER_NONE/- -
1429005569.076      0 TCP_REDIRECT/302 356 GET http://archive.ubuntu.com/ubuntu/dists/trusty-updates/universe/binary-amd64/Packages.gz - HIER_NONE/- -
1429005569.200    147 TCP_REFRESH_UNMODIFIED/200 1334987 GET http://free.nchc.org.tw/ubuntu/dists/trusty/main/source/Sources.gz - HIER_DIRECT/ application/x-gzip
1429005569.609     29 TCP_REFRESH_UNMODIFIED/200 5736 GET http://free.nchc.org.tw/ubuntu/dists/trusty/restricted/source/Sources.gz - HIER_DIRECT/ application/x-gzip
1429005570.053    443 TCP_REFRESH_UNMODIFIED/200 7926093 GET http://free.nchc.org.tw/ubuntu/dists/trusty/universe/source/Sources.gz - HIER_DIRECT/ application/x-gzip
1429005570.237    138 TCP_REFRESH_UNMODIFIED/200 1743415 GET http://free.nchc.org.tw/ubuntu/dists/trusty/main/binary-amd64/Packages.gz - HIER_DIRECT/ application/x-gzip
1429005570.259     20 TCP_REFRESH_UNMODIFIED/200 16376 GET http://free.nchc.org.tw/ubuntu/dists/trusty/restricted/binary-amd64/Packages.gz - HIER_DIRECT/ application/x-gzip
1429005570.565    305 TCP_REFRESH_UNMODIFIED/200 7589291 GET http://free.nchc.org.tw/ubuntu/dists/trusty/universe/binary-amd64/Packages.gz - HIER_DIRECT/ application/x-gzip
1429005570.664     48 TCP_REFRESH_UNMODIFIED/200 245205 GET http://free.nchc.org.tw/ubuntu/dists/trusty-updates/main/source/Sources.gz - HIER_DIRECT/ application/x-gzip
1429005570.692     27 TCP_REFRESH_UNMODIFIED/200 2710 GET http://free.nchc.org.tw/ubuntu/dists/trusty-updates/restricted/source/Sources.gz - HIER_DIRECT/ application/x-gzip
1429005570.716     23 TCP_REFRESH_UNMODIFIED/200 135611 GET http://free.nchc.org.tw/ubuntu/dists/trusty-updates/universe/source/Sources.gz - HIER_DIRECT/ application/x-gzip
1429005570.788     70 TCP_REFRESH_UNMODIFIED/200 630238 GET http://free.nchc.org.tw/ubuntu/dists/trusty-updates/main/binary-amd64/Packages.gz - HIER_DIRECT/ application/x-gzip
1429005570.813     24 TCP_REFRESH_UNMODIFIED/200 15486 GET http://free.nchc.org.tw/ubuntu/dists/trusty-updates/restricted/binary-amd64/Packages.gz - HIER_DIRECT/ application/x-gzip
1429005570.860     46 TCP_REFRESH_UNMODIFIED/200 343861 GET http://free.nchc.org.tw/ubuntu/dists/trusty-updates/universe/binary-amd64/Packages.gz - HIER_DIRECT/ application/x-gzip
almost 3 years ago
@echo off
call :check_Permissions
pushd %temp%
@echo.install python-2.7.6.amd64
call :download "http://www.python.org/ftp/python/2.7.6/python-2.7.6.amd64.msi" "python-2.7.6.amd64.msi"
python-2.7.6.amd64.msi /passive
call :download "http://www.rapidee.com/download/RapidEE_setup.exe" "RapidEE_setup.exe"
RapidEE_setup.exe /SILENT
"%programfiles%\Rapid Environment Editor\RapidEE.exe" -a -c Path "C:\Python27;C:\Python27\scripts"
@echo.install setuptools
call :download "http://python-patch.googlecode.com/svn/trunk/patch.py" "patch.py"
call :create_patch
python patch.py -d C:\Python27 python27_patch.diff
call :download "https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py" "ez_setup.py"
python ez_setup.py
@echo.install pip
call :download "https://raw.github.com/pypa/pip/master/contrib/get-pip.py" "get-pip.py"
python get-pip.py
@echo.install wxPython2.8-win64-unicode-
call :download "http://downloads.sourceforge.net/project/wxpython/wxPython/" "wxPython2.8-win64-unicode-"
@echo.install robotframework
pip install --upgrade robotframework
@echo.install robotframework-ride
call :download "https://robotframework-ride.googlecode.com/files/robotframework-ride-1.2.3.win-amd64.exe" "robotframework-ride-1.2.3.win-amd64.exe"
pip install --upgrade robotframework-ride --allow-external robotframework-ride --allow-unverified robotframework-ride
@goto :EOF

:download
@"C:\Windows\System32\WindowsPowerShell\v1.0\powershell" "$wc = New-Object System.Net.WebClient;$wc.DownloadFile('%1', '%2')"
@echo %2
@goto :EOF

:create_patch
@> python27_patch.diff (
@echo.Index: Lib/mimetypes.py
@echo.--- Lib/mimetypes.py  (revision 85786^)
@echo.+++ Lib/mimetypes.py  (working copy^)
@echo.@@ -27,6 +27,7 @@
@echo. import sys
@echo. import posixpath
@echo. import urllib
@echo.+from itertools import count
@echo. try:
@echo.     import _winreg
@echo. except ImportError:
@echo.@@ -239,19 +240,11 @@
@echo.             return
@echo.         def enum_types(mimedb^):
@echo.-            i = 0
@echo.-            while True:
@echo.+            for i in count(^):
@echo.                 try:
@echo.-                    ctype = _winreg.EnumKey(mimedb, i^)
@echo.+                    yield _winreg.EnumKey(mimedb, i^)
@echo.                 except EnvironmentError:
@echo.                     break
@echo.-                try:
@echo.-                    ctype = ctype.encode(default_encoding^) # omit in 3.x!
@echo.-                except UnicodeEncodeError:
@echo.-                    pass
@echo.-                else:
@echo.-                    yield ctype
@echo.-                i += 1
@echo.         default_encoding = sys.getdefaultencoding(^)
@echo.         with _winreg.OpenKey(_winreg.HKEY_CLASSES_ROOT, ''^) as hkcr:
@goto :EOF

:check_Permissions
    echo Administrative permissions required. Detecting permissions...
    net session >nul 2>&1
    if %errorLevel% == 0 (
        echo Success: Administrative permissions confirmed.
        @goto :EOF
    ) else (
        echo Failure: Current permissions inadequate.
        pause >nul
        exit /b 1
    )
@goto :EOF
about 3 years ago

See through grandma's glasses, become Omniscient in The Cookieverse

Full Source on Github


  • Multiple timers for items.
  • Highlights the best item/upgrade
      Optimization over up to 3 buying steps.

  • Calculates with actual data
      It calls a clone of the in-game function Game.CalculateGains to compute total CpS, so effects from other products' amounts and upgrades are included naturally.

  • Survives game updates
      Calculations stay correct after each game update, as long as the function names stay the same.
      No need to wait for me to update the script.


  • What's next :
    • Timer for upgrades.
  • 9/30
    • v.1.036.13
      • improve: low frequency timer when waiting time is over 1 hour
      • improve: better way to check for highlight updating.
      • fix: avoided concurrent execution of hl.highlight
      • fix: timer text is mouse pass-through now
  • 9/21
    • v.1.036.12
      • New: the best-item ranking changed to best gained CpS per second of payback time
      • improve: timer string limits to 2 field (e.g. "1d 10h" or "1h 5m")
      • improve: smarter highlight updating
      • improve: smarter timer updating
    • v.1.036.11 Multiple level optimize and green colors in different level
  • 9/20
    • v.1.036.10 Fix single highlight error
    • v.1.036.09 Choose available items for first buying
    • v.1.036.08 Add highlight for upgrades(include upgrade CP calc in level-1 optimize)
  • 9/19
    • v.1.036.07 Reduce timer's cpu usage
  • 9/18
    • v.1.036.06 Same color for level-1 and level-2 optimal items, since the page updates after each purchase
    • v.1.036.05 Fix: auto mark Building every second
    • v.1.036.04 Faster timer(250ms) when click big cookie; only light 1 assist item
    • v.1.036.03 Fix multi level highlight
  • 9/15
    • v.1.036.02 Fix: version display was blocked by Google Ads
    • v.1.036 9/15 Maximum 3 level highlights, as accurate as Cookie Monster

How to use Grandma's glasses?

  1. Drag Grandma's glasses to the bookmark toolbar, or use the following code to create a bookmarklet.

    Bookmarklet source
    javascript:(function a(e){if(e.length){var g=document.createElement("script");g.type="text/javascript";if(g.readyState){g.onreadystatechange=function(){if(g.readyState=="loaded"||g.readyState=="complete"){g.onreadystatechange=null;a(e.slice(1))}}}else{g.onload=function(){a(e.slice(1))}}e[0]+=(e[0].indexOf('?')===-1)?"?":"&";e[0]+="ts="+new Date().getTime();g.src=e[0];document.getElementsByTagName("head")[0].appendChild(g)}}([
  2. Click the bookmarklet on Cookie-Clicker page.

  3. Done!

What do these colors mean?

  • Countdown timers show how long until you can afford each product when you don't yet have enough cookies.
  • Light green marks the best item: (1) for items you can already afford, buying it is the fastest way to regain the spent cookies and be ready for the next purchase; (2) when starting from 0 cookies, the same payback calculation is applied over the full waiting time.
  • Dark green items are not the best themselves, but buying them helps you buy the light green item faster.


  • Grandma's glasses picks the best item as the buying target; items are compared by gained CpS per second of payback time.
  • To buy the target item faster, Grandma's glasses:
    1. Lists all combinations of 1~3 buying steps. For example, if we have items 1 to 10 and item 9 has the best income per cost, we want to know whether buying some other item first can shorten the wait for item 9. Grandma's glasses lists all possible buying chains:
      [9]       Just wait and buy item 9
      [1, 9]    Wait (if cookies are short), buy item 1, wait, buy item 9
      [2, 9]
      ...
      [8, 9]
      [9, 9]    *Skipped: we don't buy something first in order to buy itself
      [10, 9]
      [1, 1, 9]
      ...
      [10, 10, 9]
    2. Calculates each buying chain's waiting time.
    3. Chooses the combination with the least waiting time.
    4. Highlights the target item yellow and the first-step item green.
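The enumeration in step 1 can be sketched in shell; the item numbers are illustrative, and chains that would buy the target in order to buy itself are skipped:

```shell
#!/bin/sh
# Enumerate 1-step and 2-step buying chains that end at the target item.
target=9
items="8 9 10"
echo "[$target]"                    # just wait and buy the target
for a in $items; do
  [ "$a" = "$target" ] && continue  # skip [9, 9]: don't buy the target to buy itself
  echo "[$a, $target]"
done
```

This prints `[9]`, `[8, 9]`, `[10, 9]`, one chain per line.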
about 3 years ago



2013.08.06 12:36 pm













about 3 years ago

Compiled from 柴富's "The complete process of establishing a nitrification system in an aquarium".


  • Nitrifying bacteria are found throughout soil, fresh and marine water, and sewage-treatment systems.
  • They fall into two groups: 1. nitrite-formers (ammonia → nitrite) 2. nitrate-formers (nitrite → nitrate).
  • Basic shapes: rods, spheres, spirals, and so on.
  • Inorganic carbon sources they need: carbonic acid, carbonates, etc.
  • Nutrients they need: proteins, fats, enzymes, vitamins, etc.
  • Inorganic chemical energy they need: ammonia or nitrite.
  • Oxygen they need: at least 4.5 kg of oxygen per kg of ammonia nitrogen; dissolved oxygen is best kept above 2 ppm.
  • Optimal pH: between 7.5 and 8.2.
  • Optimal temperature: between 20 and 30 °C.
  • Motility: some strains have flagella and can move; those without drift with the current.
  • Preferred water flow: nitrifying bacteria secrete a sticky lipopolysaccharide that binds them into floc, which can withstand being washed by the current.
  • Light: in nature, nitrifying bacteria avoid light.


We all eagerly set up a new tank, fill it with water, and switch on the pump, then ask: what next? "Buy fish" is the answer that flashes into mind first. A week later, though, the fish start dying one by one, and we wonder what went wrong... The answer is usually that the nitrification system was never fully established.


The outline of the whole system: fish waste (ammonia) -> nitrite -> nitrate

(The numbers below are for reference only; too many factors make every tank different.)

Early stage (ammonia build-up)

Fish enter the tank and begin producing waste, and ammonia starts to accumulate. Ammonia is extremely harmful to fish. Its concentration usually starts to climb about three days after adding fish.

Suggested ammonia control:

  • 0.25-1.0 ppm: 25% water change, halve feeding.
  • 1.0-2.0 ppm: 50% water change, reduce feeding.
  • >2.0 ppm: keep changing water until < 1.0 ppm; do not feed (the system is overloaded).
  • If the fish seem to be failing during this period, keep changing water until < 1.0 ppm and do not feed.
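The ammonia table above can be written as a tiny helper (a sketch; the function name and the exact boundary handling are my own choices):

```shell
#!/bin/sh
# Map an ammonia reading (ppm) to the recommended action from the table above.
ammonia_advice() {
  awk -v ppm="$1" 'BEGIN {
    if (ppm > 2.0)        print "keep changing water until < 1.0 ppm; no feeding"
    else if (ppm >= 1.0)  print "50% water change; feed less"
    else if (ppm >= 0.25) print "25% water change; feed half"
    else                  print "no action needed"
  }'
}

ammonia_advice 1.5   # prints: 50% water change; feed less
```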

Middle stage (nitrite build-up)

Nitrite-forming bacteria begin breaking ammonia down into nitrite, which is also harmful to fish; some fish cannot tolerate nitrite even at 1 ppm. Nitrite usually starts to climb after about a week.

Suggested nitrite control:

  • 0.1-0.5 ppm: 25% water change, halve feeding.
  • 0.5-1.0 ppm: 50% water change, reduce feeding.
  • >1.0 ppm: keep changing water until < 1.0 ppm; do not feed (the system is overloaded).
  • If the fish seem to be failing during this period, keep changing water until < 1.0 ppm and do not feed.

Late stage (nitrate build-up)

After another week, the nitrate-forming bacteria establish themselves. They grow slowly, doubling only about every 15 hours. They convert nitrite into nitrate. Fish can tolerate small amounts of nitrate, and aquatic plants absorb it, but at high concentrations fish will still die, so regular water changes are needed to dilute it. Keep nitrate below 20 ppm.

Suggested nitrate control: keep it < 5 ppm.
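A 15-hour doubling time means the colony grows as 2^(t/15), which is why the nitrate-formers take days rather than hours to establish. A quick arithmetic sketch:

```shell
# Population multiple after t hours with a 15-hour doubling time: 2^(t/15)
awk 'BEGIN { for (t = 15; t <= 60; t += 15) printf "%2d h: x%d\n", t, 2^(t/15) }'
```

After 60 hours (two and a half days) the colony has only grown 16-fold.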


There is only one secret to establishing the system: time. The nitrifying bacteria you need are all around us; all you have to give is a little time and a little help.

Q: A new tank with only two or three fish looks so empty. Is that really bearable?
A: I know. Trust me, take it slow, and the fish will thank you.

Day 1~7

Buy two or three hardy fish and feed them a small amount. Starting on the second day, measure the ammonia concentration. It will keep rising for several days; don't panic.

Day 7~14

Around one week in, as the nitrifiers grow, the middle stage begins and ammonia drops quickly. During this period, if the fish are really struggling, do small water changes to dilute the tank. Meanwhile, nitrite starts to climb. A week later, alongside the ammonia tests, start testing nitrite every two days; it will peak and then slowly fall.

During the first week, you can add a commercial nitrifying-bacteria product, which also helps; with luck it can shave a week off the process. The first month of fishkeeping is the critical period: it is the test that shows whether we really want to keep fish or are just playing around.

Day 17

The late stage begins. After another week, when both nitrite and ammonia drop to zero, nitrate starts to rise. Congratulations, the system is established. Don't rush to add more fish: change a small amount of water first, wait two more days, then add fish, never more than three at a time. Add too many at once and the system collapses, and three weeks of effort go down the drain.



1. Lag phase


2. Exponential growth phase


3. Declining growth phase


4. Stationary phase


5. Endogenous respiration phase



Like other organisms, nitrifying bacteria age and die even in ordinary environments; aging and death are problems every living thing must face. Habitat conditions: any physical, chemical, or biological property of the environment affects the growth of nitrifying bacteria, so their habitat factors can be divided into three classes: physical, chemical, and biological. The main physical factors are temperature, light, substrate, and water flow; the main chemical factors are salinity, dissolved oxygen, pH, and inhibitors; the main biological factors are predators and competitive exclusion.

1. Temperature


2. Light


3. Substrate


4. Water flow


5. Dissolved oxygen


6. pH


7. Competitive exclusion














On day 30, ammonia and nitrite are no longer detectable: the aquarium has completed its nitrogen cycle. You can change part of the water and then add the fish you want.