Create ZFS iSCSI/NFS storage on Ubuntu 14.04 for ESXi

Build a ZFS raidz2 pool, share the ZFS storage as an iSCSI volume or NFS export, and tune I/O performance for ESXi access.

  • 2015/8/11: utilize the server's RAM by writing to /sys/module/zfs/parameters/zfs_arc_max

Install ZFS

Before we can start using ZFS, we need to install it. Add the ZFS PPA and install the packages with the following commands:

apt-get install --yes software-properties-common
apt-add-repository --yes ppa:zfs-native/stable
apt-get update
apt-get install ubuntu-zfs

Now, let’s check that the module has been correctly compiled and loaded by the kernel:

dmesg | grep ZFS

You should see output like this:

# dmesg | grep ZFS
[ 5.979569] ZFS: Loaded module v0.6.4.1-1~trusty, ZFS pool version 5000, ZFS filesystem version 5

Creation of a RAID-Z2 disk array using 7 disks

Here is my server's disk layout; sdb through sdh will form the ZFS RAID pool:

root@nfs1:~$ lsblk
sda      8:0    0   2.7T  0 disk
├─sda1   8:1    0     1M  0 part
├─sda2   8:2    0   2.7T  0 part /
└─sda3   8:3    0    32G  0 part [SWAP]
sdb      8:16   0   2.7T  0 disk
sdc      8:32   0   2.7T  0 disk
sdd      8:48   0   2.7T  0 disk
sde      8:64   0   2.7T  0 disk
sdf      8:80   0   2.7T  0 disk
sdg      8:96   0   2.7T  0 disk
sdh      8:112  0   2.7T  0 disk

Create ZFS pool from 7 disks

sudo zpool create -f datastore1 raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
  • We use the full capacity of the pool for datastore1/iscsi or datastore1/nfs, which is around 13TB
    root@nfs1:~# zfs list
    NAME        USED   AVAIL  REFER  MOUNTPOINT
    datastore1  94.9K  13.3T  34.1K  /datastore1
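As a sanity check, raidz2 keeps two disks' worth of parity, so usable space is roughly (N − 2) × per-disk size. A quick sketch using the disk count and size from the lsblk output above:

```shell
# raidz2 usable capacity: (N - 2 parity disks) * per-disk size
DISKS=7
SIZE_T=2.7   # TiB per disk, from lsblk
USABLE=$(awk -v n="$DISKS" -v s="$SIZE_T" 'BEGIN { printf "%.1f", (n - 2) * s }')
echo "$USABLE TiB raw usable"   # zfs list reports 13.3T after metadata overhead
```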

Create an iSCSI volume using the Linux SCSI Target (targetcli)

1. Create ZFS data sets for iSCSI

sudo zfs create -o compression=off -o dedup=off -o volblocksize=32K -V 13000G datastore1/iscsi
sudo zfs set sync=disabled datastore1/iscsi

2. Create an iSCSI adapter on ESXi:

vCenter > Host > Configuration > Storage Adapters > Add iSCSI adapter, then note the iSCSI name

We will use the block device to back the iSCSI target:

root@iscsi-storage-2:~# ll /dev/zvol/datastore1/iscsi
lrwxrwxrwx 1 root root 9 May 26 00:41 /dev/zvol/datastore1/iscsi -> ../../zd0

3. Create iSCSI target using ZFS

root@iscsi-storage-2:~# targetcli
/> cd backstores
/backstores> iblock/ create name=block_backend dev=/dev/zvol/datastore1/iscsi

... create iscsi target ...

/> cd /iscsi/
luns/ create /backstores/iblock/block_backend

... Add portals ...
... Add ACLs for the ESXi hosts' initiator IQN names ...

/iscsi/iqn.20...570b1d2/tpgt1> / saveconfig
/iscsi/iqn.20...570b1d2/tpgt1> exit
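For reference, the elided portal and ACL steps look roughly like the session below. The target IQN and the ESXi initiator IQN are placeholders, not values from the original setup:

```
/> /iscsi create                          # auto-generates a target IQN
/> cd /iscsi/iqn.2003-01.org.linux-iscsi.example/tpgt1
/iscsi/iqn...tpgt1> luns/ create /backstores/iblock/block_backend
/iscsi/iqn...tpgt1> portals/ create 0.0.0.0 3260
/iscsi/iqn...tpgt1> acls/ create iqn.1998-01.com.vmware:esxi-host.example
/iscsi/iqn...tpgt1> cd /
/> saveconfig
```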

4. Mount iSCSI on ESXi

  • On ESXi
    1. Add the iSCSI IP:port to the iSCSI adapter
    2. Rescan All
    3. Add the iSCSI volume in Host > Configuration > Storage

Create NFS sharing

1. Install NFS service

$ sudo apt-get install nfs-kernel-server
$ sudo reboot
  • I commented out && grep -q '^[[:space:]][^#]/' $export_files in /etc/init.d/nfs-kernel-server because the service won't start with an empty /etc/exports file

2. Start NFS service

$ sudo service nfs-kernel-server start
 * Exporting directories for NFS kernel daemon... [ OK ]
 * Starting NFS kernel daemon                     [ OK ]

3. Create ZFS data sets for NFS

sudo zfs create -o compression=off -o dedup=off -o mountpoint=/nfs -o sharenfs=on datastore1/nfs
sudo zfs set sync=disabled datastore1/nfs
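ESXi mounts NFS datastores as root, so the export usually needs no_root_squash. A hedged variant of the share setting (the option string is an assumption, not from the original):

```
sudo zfs set sharenfs="rw,no_root_squash" datastore1/nfs
```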

4. Test whether the NFS share is exported

root@nfs1:~# apt-get install nfs-common
root@nfs1:~# showmount -e
Export list for nfs1:
/nfs *

5. Auto-export the NFS folder on boot

root@nfs1:~# vim /etc/rc.local
#!/bin/sh -e
# rc.local
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution
# bits.
# By default this script does nothing.
zfs unshare datastore1/nfs
zfs share datastore1/nfs
exit 0

Tuning for performance and I/O latency

1. Set the I/O Scheduler to noop (echo noop > /sys/block/sdb/queue/scheduler).

Skip zd0 if using NFS
# for i in zd0 sdb sdc sdd sde sdf sdg sdh; \
do echo noop > /sys/block/$i/queue/scheduler; cat /sys/block/$i/queue/scheduler; done
[noop] deadline cfq
[noop] deadline cfq
[noop] deadline cfq
[noop] deadline cfq
[noop] deadline cfq
[noop] deadline cfq
[noop] deadline cfq
[noop] deadline cfq

2. Change the IO size to 32KB

Skip zd0 if using NFS
# for i in zd0 sdb sdc sdd sde sdf sdg sdh; \
do echo 32 > /sys/block/$i/queue/max_sectors_kb; echo 4 > /sys/block/$i/queue/nr_requests; done
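The echo-based tuning above does not survive a reboot. One way to persist it is a udev rule; the filename and the sd[b-h] disk match below are assumptions for this particular layout:

```
# /etc/udev/rules.d/60-zfs-disk-tuning.rules (hypothetical filename)
ACTION=="add|change", KERNEL=="sd[b-h]", ATTR{queue/scheduler}="noop", ATTR{queue/max_sectors_kb}="32", ATTR{queue/nr_requests}="4"
```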

ZFS iSCSI Benchmark Tests on ESX

...I tested with both 64KB and 32KB; for me, 32KB worked out a little better.
...We can see that the avgrq-sz changed to 64 (32KB), which is good, and the average wait time went down to ~80ms (from ~1000ms). Lowering the number of requests to 4 lowered the DAVG to practically nothing, but the speed wasn’t that great.

3. Enable Disk write-back caching

# for i in sdb sdc sdd sde sdf sdg sdh; do hdparm -W1 /dev/$i; done

Improve hard drive write speed with write-back caching

4. Increase the ZFS read cache (ARC) size to 50GB

On this system the ZFS ARC defaulted to a 32GB limit. Raising the cache size increases the cache hit rate.

root@nfs1:~# echo $((50*1024*1024*1024)) >> /sys/module/zfs/parameters/zfs_arc_max
root@nfs1:~# echo options zfs zfs_arc_max=$((50*1024*1024*1024)) > /etc/modprobe.d/zfs.conf
root@nfs1:~# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=53687091200
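The value written to zfs.conf is simply 50 GiB expressed in bytes; a one-line check:

```shell
# 50 GiB -> bytes, the value written to zfs_arc_max above
ARC_BYTES=$((50 * 1024 * 1024 * 1024))
echo "$ARC_BYTES"
```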

root@nfs1:~# git clone --depth 1
root@nfs1:~# zfs-arcstats-bash/arc2
This will display the cache hit and miss ratios.
For a time-limited run, add a number of seconds after this command.
|l1reads    l1miss     l1hits     l1hit%     size  |
|175        10         165        94.285%    50 GB  |
|84         0          84         100.000%   49 GB  |
|110        8          102        92.727%    50 GB  |
|100        14         86         86.000%    50 GB  |
|362        14         348        96.132%    50 GB  |
|75         3          72         96.000%    50 GB  |

Benchmark Result

(Left) NFS Storage with 10GbE Network (Right) Local Disk Storage

Prepare Ubuntu-14.04 cloud images on vSphere



sudo mkdir -p /var/lib/cloud/seed/nocloud

sudo tee /var/lib/cloud/seed/nocloud/meta-data <<EOF
instance-id: ubuntu
local-hostname: ubuntu
EOF

sudo tee /var/lib/cloud/seed/nocloud/user-data <<EOF
#cloud-config
apt_update: true
apt_upgrade: true
apt_sources:
 - source: "ppa:git-core/ppa"
packages:
 - unattended-upgrades
 - squid-deb-proxy-client
 - vim
 - ntp
 - git
timezone: Asia/Taipei
password: ubuntu
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
 - grep $(cat /etc/hostname) /etc/hosts || sudo echo $(cat /etc/hostname) >> /etc/hosts
EOF
  • Remove the cloud-init instance data, which will trigger cloud-init on the next boot.
( cd /var/lib/cloud/instance && sudo rm -Rf * )
sudo shutdown -P now


go-ipfs on windows

How to compile ipfs on Windows (work in progress)

cd /d c:\
C:\> @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin
C:\> choco install golang
C:\> choco install git
C:\> set GOPATH=C:\go
C:\> go get -d -u github.com/ipfs/go-ipfs
C:\> cd c:\go\src\github.com\ipfs\go-ipfs
c:\go\src\github.com\ipfs\go-ipfs> go build -x -tags nofuse github.com/ipfs/go-ipfs/cmd/ipfs
c:\go\src\github.com\ipfs\go-ipfs> cd c:\
C:\> set PATH=%PATH%;c:\go\src\github.com\ipfs\go-ipfs

Install Nginx, Passenger, PageSpeed with spdy on Ubuntu 14.04

Build Passenger, nginx, ngx_pagespeed From Source

Follow build steps of ngx_pagespeed

Install dependencies:

$ sudo apt-get install build-essential zlib1g-dev libpcre3 libpcre3-dev unzip

Check for the latest NPS_VERSION, then download the ngx_pagespeed source with the psol data:

$ cd
$ export NPS_VERSION=
$ wget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.zip
$ unzip release-${NPS_VERSION}
$ cd ngx_pagespeed-release-${NPS_VERSION}-beta/
$ wget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz
$ tar -xzvf ${NPS_VERSION}.tar.gz  # extracts to psol/

Check for the latest NGINX_VERSION, then download the nginx source code:

$ cd
$ export NGINX_VERSION=1.9.2
$ wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz
$ tar -xvzf nginx-${NGINX_VERSION}.tar.gz

Compile nginx and ngx_pagespeed with Passenger.

I have RVM installed here, so I am going to run gem install in user mode.

$ gem install passenger
$ sudo -E -s
# passenger-install-nginx-module \
  --auto \
  --nginx-source-dir=$HOME/nginx-${NGINX_VERSION} \
  --extra-configure-flags=" \
    --add-module=$HOME/ngx_pagespeed-release-${NPS_VERSION}-beta \
    --conf-path=/etc/nginx/nginx.conf \
    --pid-path=/var/run/ \
    --sbin-path=/usr/sbin \
    --error-log-path=/var/log/nginx/error.log \
    --http-log-path=/var/log/nginx/access.log \
    --with-http_spdy_module \
    --with-http_gzip_static_module \
    --without-mail_pop3_module \
    --without-mail_smtp_module \

Configuration nginx, Passenger, ngx_pagespeed and spdy:

Create a self-signed certificate for spdy

$ cd /opt/nginx/conf;
$ openssl genrsa -des3 -out server.key 2048
$ openssl req -new -key server.key -out server.csr
$ openssl x509 -req -days 7305 -in server.csr -signkey server.key -out server.crt
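The three openssl commands above can also be collapsed into one non-interactive call, which is handy for scripting; the subject CN below is a placeholder:

```shell
# Self-signed cert in one step (no CSR prompt); CN=localhost is a placeholder
openssl req -x509 -newkey rsa:2048 -nodes -days 7305 \
  -subj "/CN=localhost" -keyout server.key -out server.crt
openssl x509 -noout -subject -in server.crt
```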

Make cache folder for ngx_pagespeed

$ sudo mkdir /var/ngx_pagespeed_cache
$ sudo chown -R www-data:www-data /var/ngx_pagespeed_cache

Install nginx service for Ubuntu, use default prefix /opt/nginx

$ sudo wget -O /etc/init.d/nginx
$ sudo sed -i "s#/usr/local/nginx#/opt/nginx#" /etc/init.d/nginx
$ sudo sed -i "s#^DAEMON=.*#DAEMON=/usr/sbin/nginx#" /etc/init.d/nginx
$ sudo sed -i "s#^NGINX_CONF_FILE=.*#NGINX_CONF_FILE=/etc/nginx/nginx.conf#" /etc/init.d/nginx
$ sudo sed -i "s#^PIDSPATH=.*#PIDSPATH=/var/run#" /etc/init.d/nginx
$ sudo sed -i "s#^PIDFILE=.*#PIDFILE=nginx.pid#" /etc/init.d/nginx  # PIDFILE value assumed; pairs with PIDSPATH=/var/run above
$ sudo chmod +x /etc/init.d/nginx
# start service on boot

$ sudo update-rc.d -f nginx defaults

To poll for current status: sudo service nginx status
To stop the server: sudo service nginx stop
To start the server: sudo service nginx start

Get passenger_ruby setting:

$ passenger-config --ruby-command
passenger-config was invoked through the following Ruby interpreter:
  Command: /home/jethro/.rvm/gems/ruby-2.0.0-p353/wrappers/ruby
  Version: ruby 2.0.0p353 (2013-11-22 revision 43784) [x86_64-linux]
  To use in Apache: PassengerRuby /home/jethro/.rvm/gems/ruby-2.0.0-p353/wrappers/ruby
  To use in Nginx : passenger_ruby /home/jethro/.rvm/gems/ruby-2.0.0-p353/wrappers/ruby
  To use with Standalone: /home/jethro/.rvm/gems/ruby-2.0.0-p353/wrappers/ruby /home/jethro/.rvm/gems/ruby-2.0.0-p353/gems/passenger-5.0.6/bin/passenger start

Get passenger_root setting:

$ passenger-config --root

Setup nginx configuration

## Replace the `user` setting near line 2 to start nginx as the specified user.
user  www-data;

## Passenger settings:
##  passenger_root, passenger_ruby, passenger_max_pool_size, passenger_enabled
http {
    passenger_root /somewhere/passenger-x.x.x;
    passenger_ruby /usr/bin/ruby;
    passenger_max_pool_size 10;

    gzip on;

    server {
        listen 80;
        root /webapps/foo/public;
        passenger_enabled on;
    }
}

## spdy settings
http {
    server {
        listen 443 ssl spdy;
        ssl_certificate server.crt;
        ssl_certificate_key server.key;
    }
}

## PageSpeed settings
## /var/ngx_pagespeed_cache was created in an earlier step
http {
  server {
    pagespeed on;
    # Needs to exist and be writable by nginx.  Use tmpfs for best performance.
    pagespeed FileCachePath /var/ngx_pagespeed_cache;
    # Ensure requests for pagespeed optimized resources go to the pagespeed handler
    # and no extraneous headers get set.
    location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" {
      add_header "" "";
    }
    location ~ "^/pagespeed_static/" { }
    location ~ "^/ngx_pagespeed_beacon$" { }
  }
}

Test PageSpeed

sudo service nginx start
Starting Nginx Server...        [ OK ]  
curl -LkI '' | grep X-Page-Speed

Test spdy

openssl s_client -connect -nextprotoneg '' | grep spdy
Protocols advertised by server: spdy/3.1, http/1.1

[squid-deb-proxy] Redirect default Ubuntu repositories to local mirror

In squid-deb-proxy, add a setting to rewrite request URLs. Replace * with your local mirror.

Edit /etc/squid-deb-proxy/squid-deb-proxy.conf, add url_rewrite_program setting

url_rewrite_program /etc/squid-deb-proxy/redirect.php

Create /etc/squid-deb-proxy/redirect.php to process request URLs


#!/usr/bin/php
<?php
$temp = array();

// Extend stream timeout to 24 hours
stream_set_timeout(STDIN, 86400);
$pattern = '/(\w+\.)?archive\.ubuntu\.com/i';
$replacement = '';  // put your local mirror host here

while ( $input = fgets(STDIN) ) {
        // Split the output (space delimited) from squid into an array.
        $temp = explode(' ', $input);

        // Set the URL from squid to a temporary holder.
        $output = $temp[0] . "\n";

        // Check the URL and rewrite it if it matches
        if ( preg_match($pattern, $temp[0]) ) {
                $output = '302:' . preg_replace($pattern, $replacement, $temp[0]) . "\n";
        }
        echo $output;
}
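The same rewrite can be sanity-checked from the shell with sed (mirror.example.com is a placeholder mirror host):

```shell
# Simulate the rewrite redirect.php performs on one squid input URL
OUT=$(echo "http://tw.archive.ubuntu.com/ubuntu/dists/trusty/Release" \
  | sed -E 's#(\w+\.)?archive\.ubuntu\.com#mirror.example.com#I')
echo "$OUT"
```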

Make redirect.php executable

sudo chmod +x /etc/squid-deb-proxy/redirect.php

Restart squid-deb-proxy and monitor whether request URLs are redirected

sudo tail -F /var/log/squid-deb-proxy/access.log

Successful redirections should look like this:

1429005569.051      1 TCP_REDIRECT/302 337 GET - HIER_NONE/- -
1429005569.054      1 TCP_REDIRECT/302 343 GET - HIER_NONE/- -
1429005569.055      0 TCP_REDIRECT/302 341 GET - HIER_NONE/- -
1429005569.056      0 TCP_REDIRECT/302 344 GET - HIER_NONE/- -
1429005569.057      0 TCP_REDIRECT/302 350 GET - HIER_NONE/- -
1429005569.058      0 TCP_REDIRECT/302 348 GET - HIER_NONE/- -
1429005569.070      0 TCP_REDIRECT/302 345 GET - HIER_NONE/- -
1429005569.072      0 TCP_REDIRECT/302 351 GET - HIER_NONE/- -
1429005569.073      0 TCP_REDIRECT/302 349 GET - HIER_NONE/- -
1429005569.074      0 TCP_REDIRECT/302 352 GET - HIER_NONE/- -
1429005569.075      0 TCP_REDIRECT/302 358 GET - HIER_NONE/- -
1429005569.076      0 TCP_REDIRECT/302 356 GET - HIER_NONE/- -
1429005569.200    147 TCP_REFRESH_UNMODIFIED/200 1334987 GET - HIER_DIRECT/ application/x-gzip
1429005569.609     29 TCP_REFRESH_UNMODIFIED/200 5736 GET - HIER_DIRECT/ application/x-gzip
1429005570.053    443 TCP_REFRESH_UNMODIFIED/200 7926093 GET - HIER_DIRECT/ application/x-gzip
1429005570.237    138 TCP_REFRESH_UNMODIFIED/200 1743415 GET - HIER_DIRECT/ application/x-gzip
1429005570.259     20 TCP_REFRESH_UNMODIFIED/200 16376 GET - HIER_DIRECT/ application/x-gzip
1429005570.565    305 TCP_REFRESH_UNMODIFIED/200 7589291 GET - HIER_DIRECT/ application/x-gzip
1429005570.664     48 TCP_REFRESH_UNMODIFIED/200 245205 GET - HIER_DIRECT/ application/x-gzip
1429005570.692     27 TCP_REFRESH_UNMODIFIED/200 2710 GET - HIER_DIRECT/ application/x-gzip
1429005570.716     23 TCP_REFRESH_UNMODIFIED/200 135611 GET - HIER_DIRECT/ application/x-gzip
1429005570.788     70 TCP_REFRESH_UNMODIFIED/200 630238 GET - HIER_DIRECT/ application/x-gzip
1429005570.813     24 TCP_REFRESH_UNMODIFIED/200 15486 GET - HIER_DIRECT/ application/x-gzip
1429005570.860     46 TCP_REFRESH_UNMODIFIED/200 343861 GET - HIER_DIRECT/ application/x-gzip

Install robotframework/robotframework-ride on Windows 64-bit

@echo off
call :check_Permissions
pushd %temp%
@echo.install python-2.7.6.amd64
call :download "" "python-2.7.6.amd64.msi"
python-2.7.6.amd64.msi /passive
call :download "" "RapidEE_setup.exe"
RapidEE_setup.exe /SILENT
"%programfiles%\Rapid Environment Editor\RapidEE.exe" -a -c Path "C:\Python27;C:\Python27\scripts"
@echo.install setuptools
call :download "" ""
call :create_patch
python -d C:\Python27 python27_patch.diff
call :download "" ""
@echo.install pip
call :download "" ""
@echo.install wxPython2.8-win64-unicode-
call :download "" "wxPython2.8-win64-unicode-"
@echo.install robotframework
pip install --upgrade robotframework
@echo.install robotframework-ride
call :download "" ""
pip install --upgrade robotframework-ride --allow-external robotframework-ride --allow-unverified robotframework-ride
@goto :EOF
:download
@"C:\Windows\System32\WindowsPowerShell\v1.0\powershell" "$wc = New-Object System.Net.WebClient;$wc.DownloadFile('%1', '%2')"
@echo %2
@goto :EOF
:create_patch
@> python27_patch.diff (
@echo.Index: Lib/
@echo.--- Lib/  (revision 85786^)
@echo.+++ Lib/  (working copy^)
@echo.@@ -27,6 +27,7 @@
@echo. import sys
@echo. import posixpath
@echo. import urllib
@echo.+from itertools import count
@echo. try:
@echo.     import _winreg
@echo. except ImportError:
@echo.@@ -239,19 +240,11 @@
@echo.             return
@echo.         def enum_types(mimedb^):
@echo.-            i = 0
@echo.-            while True:
@echo.+            for i in count(^):
@echo.                 try:
@echo.-                    ctype = _winreg.EnumKey(mimedb, i^)
@echo.+                    yield _winreg.EnumKey(mimedb, i^)
@echo.                 except EnvironmentError:
@echo.                     break
@echo.-                try:
@echo.-                    ctype = ctype.encode(default_encoding^) # omit in 3.x!
@echo.-                except UnicodeEncodeError:
@echo.-                    pass
@echo.-                else:
@echo.-                    yield ctype
@echo.-                i += 1
@echo.         default_encoding = sys.getdefaultencoding(^)
@echo.         with _winreg.OpenKey(_winreg.HKEY_CLASSES_ROOT, ''^) as hkcr:
@)
@goto :EOF

:check_Permissions
    echo Administrative permissions required. Detecting permissions...
    net session >nul 2>&1
    if %errorLevel% == 0 (
        echo Success: Administrative permissions confirmed.
        @goto :EOF
    ) else (
        echo Failure: Current permissions inadequate.
        pause >nul
        exit /b 1
    )
@goto :EOF

Grandma's glasses

See through grandma's glasses, become Omniscient in The Cookieverse

Full Source on Github


  • Multiple timers for items.
  • Highlight best item/upgrade
      Max 3 buying-step optimization.

  • Calculating with Actual Data
      It calls a clone of the in-game function Game.CalculateGains to calculate total CpS. Effects from other products' amounts and upgrades are included naturally.

  • Update Survivor
      Calculation stays correct after each game update, as long as the function names stay the same.
      No need to wait for me to update the script.


  • What's next :
    • Timer for upgrades.
  • 9/30
    • v.1.036.13
      • improve: low frequency timer when waiting time is over 1 hour
      • improve: better way to check for highlight updating.
      • fix: avoided concurrent execution of hl.highlight
      • fix: timer text is mouse pass-through now
  • 9/21
    • v.1.036.12
      • New: Algorithm for best item is changed to Best bestGainedCps per second in payback time
      • improve: timer string limits to 2 field (e.g. "1d 10h" or "1h 5m")
      • improve: smarter highlight updating
      • improve: smarter timer updating
    • v.1.036.11 Multiple level optimize and green colors in different level
  • 9/20
    • v.1.036.10 Fix single highlight error
    • v.1.036.09 Choose available items for first buying
    • v.1.036.08 Add highlight for upgrades(include upgrade CP calc in level-1 optimize)
  • 9/19
    • v.1.036.07 Reduce timer's cpu usage
  • 9/18
    • v.1.036.06 Same color for level-1 and level-2 optimal item since page updates after each bought
    • v.1.036.05 Fix: auto mark Building every second
    • v.1.036.04 Faster timer(250ms) when click big cookie; only light 1 assist item
    • v.1.036.03 Fix multi level highlight
  • 9/15
    • v.1.036.02 Fix: Version is block by googleAds
    • v.1.036 9/15 Maximum 3 level highlights, as accurate as Cookie Monster

How to use Grandma's glasses?

  1. Drag Grandma's glasses to the bookmark toolbar, or use the following code to create a bookmarklet.

    Bookmarklet source
    javascript:(function a(e){if(e.length){var g=document.createElement("script");g.type="text/javascript";if(g.readyState){g.onreadystatechange=function(){if(g.readyState=="loaded"||g.readyState=="complete"){g.onreadystatechange=null;a(e.slice(1))}}}else{g.onload=function(){a(e.slice(1))}}e[0]+=(e[0].indexOf('?')===-1)?"?":"&";e[0]+="ts="+new Date().getTime();g.src=e[0];document.getElementsByTagName("head")[0].appendChild(g)}}([
  2. Click the bookmarklet on Cookie-Clicker page.

  3. Done!

What do these colors mean?

Green product

  • Countdown timers show how long until you can buy each product when you don't have enough cookies.
  • Light Green marks the best item, the one with the maximum gained CpS per payback time. This means: (1) for affordable items, it is the fastest way to regain the consumed cookies and get ready for the next purchase; (2) when starting from 0 cookies, the waiting time to afford the item is included in the calculation.
  • Dark Green items are not the best ones, but buying them helps you buy the Light Green item faster.


  • Grandma's glasses picks the best item as the buying target; the comparison is based on the gained CpS per payback time.
  • To buy the target item faster, Grandma's glasses:
    1. Lists all combinations of 1~3 buying steps. For example, if we have items from 1 to 10, and item 9 has the max Income Per Cost, we want to know whether buying some other items first can shorten the wait to buy item 9. Grandma's glasses will list all possible buying steps as follows:
      [9]       Just wait and buy item 9
      [1, 9]    Wait(if don't have enough cookies), buy item 1, wait, buy item 9
      [2, 9]
      [8, 9]
      [9, 9]    *Skipped. We don't buy something first in order to buy itself again
      [10, 9]
      [1, 1, 9]
      [10, 10, 9]
    2. Calculates each buying chain's waiting time.
    3. Chooses the combination with the least waiting time.
    4. Highlights the target item Yellow and the item in the first step Green.




2013.08.06 12:36 pm














Compiled from 柴富's "The full process of establishing a nitrification system in an aquarium"


  • Nitrifying bacteria are widely distributed in soil, fresh and sea water, and wastewater treatment systems.
  • Nitrifying bacteria fall into two groups: 1. ammonia-oxidizing (nitrite-forming) bacteria 2. nitrite-oxidizing (nitrate-forming) bacteria.
  • Basic forms of nitrifying bacteria: rod-shaped, spherical, spiral, etc.
  • Inorganic carbon sources they need: carbonic acid, carbonates, etc.
  • Nutrients they need: proteins, fats, enzymes, vitamins, etc.
  • Inorganic chemical energy sources they need: an ammonia source or nitrite.
  • Oxygen they need: at least 4.5 kg of oxygen per kg of ammonia nitrogen, with dissolved oxygen preferably not below 2 ppm.
  • Optimal pH for nitrifying bacteria: between 7.5 and 8.2.
  • Optimal temperature: no higher than 30°C and no lower than 20°C.
  • Movement: strains with vibrating flagella can move; strains without flagella drift with the current.
  • Preferred water flow: nitrifying bacteria secrete a sticky lipopolysaccharide that glues them together into floc clusters, which can withstand the scouring of water flow.
  • Nitrifying bacteria and light: in nature, nitrifying bacteria avoid light.


Everyone excitedly sets up a new tank, fills it with water, starts the pump, and then asks: what next? "Buy fish" is the answer that flashes into mind first. Yet a week later, as the fish die off one by one, you realize something went wrong... The answer is usually that the nitrification system was never fully established.


The outline of the whole system: fish waste (ammonia) -> nitrite -> nitrate

(The numbers below are for reference only, since too many factors make every tank a bit different.)

Early stage (ammonia accumulation)

Once the fish go into the tank, they start producing waste, and ammonia begins to accumulate; ammonia is extremely harmful to fish. The ammonia concentration usually starts rising about three days after adding the fish.

Suggested ammonia concentration control:

  • 0.25-1.0 ppm: 25% water change, halve feeding.
  • 1.0-2.0 ppm: 50% water change, reduce feeding.
  • >2.0 ppm: keep changing water until < 1.0 ppm; do not feed (the system is overloaded).
  • During this period, if the fish look like they are about to die, keep changing water until < 1.0 ppm and do not feed.

Middle stage (nitrite accumulation)

Ammonia-oxidizing bacteria start breaking down ammonia, converting it into nitrite. Nitrite is also harmful to fish; some fish cannot tolerate nitrite at even 1 ppm. The nitrite concentration usually starts rising after about a week.

Suggested nitrite concentration control:

  • 0.1-0.5 ppm: 25% water change, halve feeding.
  • 0.5-1.0 ppm: 50% water change, reduce feeding.
  • >1.0 ppm: keep changing water until < 1.0 ppm; do not feed (the system is overloaded).
  • During this period, if the fish look like they are about to die, keep changing water until < 1.0 ppm and do not feed.

Late stage (nitrate accumulation)

After another week or so, the nitrite-oxidizing bacteria start to establish themselves. They grow more slowly, doubling roughly every 15 hours. They convert nitrite into nitrate. Fish can tolerate small amounts of nitrate, and aquatic plants can absorb it, but too high a concentration will still kill fish. Regular water changes are needed to dilute the nitrate; it is best kept below 20 ppm.

Suggested nitrate control: keep the concentration < 5 ppm


There is only one secret to establishing the system: time. The nitrifying bacteria you need are all around us; all you have to give is a little time and a little help.

Q: A new tank with only two or three fish looks so empty. Is that OK?
A: I know, but trust me, take it slow; the fish will thank you.

Day 1~7

Buy two or three hardy fish and feed a small amount of food. From the second day, start measuring the ammonia concentration. It will keep rising for several days; don't panic.

Day 7~14

At around one week, as the nitrifying bacteria become established, the middle stage begins and the ammonia concentration drops quickly. During this period, if the fish are really struggling, a small water change can dilute the tank water. Meanwhile, the nitrite concentration starts to climb. A week in, alongside the ammonia tests, start testing the nitrite concentration every two days; it will peak and then slowly fall back.

During the first week you can add a commercial nitrifying-bacteria product, which also helps; with luck it can shorten the process by a week. The first month of fishkeeping is the critical period. It is the test nature gives us, to see whether we really want to keep fish or are just playing around.

Day 17

The late stage begins. After another week, when both the nitrite and ammonia concentrations drop to zero, the nitrate concentration starts to rise. At this point, congratulations: the system is established. Don't rush to add more fish. First change a small amount of water, wait two more days, then add fish, and never more than three at a time. Adding too many at once can crash the system, and the three weeks of effort will go down the drain.



1. Lag phase


2. Exponential (log) growth phase


3. Declining growth phase


4. Stationary phase


5. Endogenous respiration phase



Nitrifying bacteria also age and die in ordinary environments; aging and death are problems every organic life form must face. Habitat conditions for nitrifying bacteria: the physical, chemical, and biological properties of the environment all affect their growth, so the habitat conditions can be divided into physical, chemical, and biological factors. The main physical factors are temperature, light, substrate, and water flow; the main chemical factors are salinity, dissolved oxygen, pH, and inhibitors; the main biological factors are predators and competitive exclusion.

1. Temperature


2. Light


3. Substrate


4. Water flow


5. Dissolved oxygen


6. pH


7. Competitive exclusion














On day 30, ammonia and nitrite can no longer be detected; the aquarium has completed the nitrogen cycle. You can change part of the water and then add the fish you want.