
(OK) running CORE—Common Open Research Emulator—docker(lookup registry-1.docker.io on)



In this article, we cover (OK) running CORE—Common Open Research Emulator—docker in detail, along with the related lookup registry-1.docker.io issue. We also include notes on (OK) CentOS7/Fedora23——Installing Docker——core—pip, (OK) Fedora 23——CORE——docker——(1)——> install-kernel, (OK) Fedora 23——CORE——docker——(2)——> install-quagga, and (OK) Fedora 23——CORE——docker——(3)——> install-docker.

Contents: see the section headings below.

(OK) running CORE—Common Open Research Emulator—docker (lookup registry-1.docker.io on)

-----------------------INSTALL quagga
http://blog.chinaunix.net/uid-14735472-id-5595972.html

core-manual.pdf

[root@localhost quagga-0.99.24]# pwd
/opt/tools/network_simulators/quagga-0.99.24

[root@localhost quagga-0.99.24]#
cp pimd/pimd.conf.sample  /usr/local/etc/quagga/pimd.conf
cp isisd/isisd.conf.sample  /usr/local/etc/quagga/isisd.conf
cp babeld/babeld.conf.sample  /usr/local/etc/quagga/babeld.conf
cp ospf6d/ospf6d.conf.sample  /usr/local/etc/quagga/ospf6d.conf
cp ospfd/ospfd.conf.sample  /usr/local/etc/quagga/ospfd.conf
cp ripngd/ripngd.conf.sample  /usr/local/etc/quagga/ripngd.conf
cp ripd/ripd.conf.sample  /usr/local/etc/quagga/ripd.conf
cp bgpd/bgpd.conf.sample  /usr/local/etc/quagga/bgpd.conf
cp zebra/zebra.conf.sample  /usr/local/etc/quagga/zebra.conf
cp vtysh/vtysh.conf.sample  /usr/local/etc/quagga/vtysh.conf

ln -s /usr/local/etc/quagga/pimd.conf /etc/quagga/pimd.conf
ln -s /usr/local/etc/quagga/isisd.conf /etc/quagga/isisd.conf
ln -s /usr/local/etc/quagga/babeld.conf /etc/quagga/babeld.conf
ln -s /usr/local/etc/quagga/ospf6d.conf /etc/quagga/ospf6d.conf
ln -s /usr/local/etc/quagga/ospfd.conf /etc/quagga/ospfd.conf
ln -s /usr/local/etc/quagga/ripngd.conf /etc/quagga/ripngd.conf
ln -s /usr/local/etc/quagga/ripd.conf /etc/quagga/ripd.conf
ln -s /usr/local/etc/quagga/bgpd.conf /etc/quagga/bgpd.conf
ln -s /usr/local/etc/quagga/zebra.conf /etc/quagga/zebra.conf
ln -s /usr/local/etc/quagga/vtysh.conf /etc/quagga/vtysh.conf

[root@localhost core-4.8]# cp /usr/local/etc/quagga/zebra.conf /usr/local/etc/quagga/Quagga.conf
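The ten cp/ln pairs above can be generated with a small loop instead of being typed one by one. This sketch is a dry run (it only echoes each command); drop the echo to apply it for real, which assumes the quagga source tree as the working directory and an existing /etc/quagga directory:

```shell
#!/bin/sh
# Dry-run sketch of the cp/ln pairs above: prints each command
# instead of running it.
quagga_conf_cmds() {
  for d in pimd isisd babeld ospf6d ospfd ripngd ripd bgpd zebra vtysh; do
    echo "cp $d/$d.conf.sample /usr/local/etc/quagga/$d.conf"
    echo "ln -s /usr/local/etc/quagga/$d.conf /etc/quagga/$d.conf"
  done
}
quagga_conf_cmds
```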

+++++++++++++++++++++++++ install docker etc.

# Fedora 23

    # dnf install openvswitch docker-io xterm wireshark-gnome ImageMagick tcl tcllib tk kernel-modules-extra util-linux

    # echo 'DOCKER_STORAGE_OPTIONS="-s overlay"' >> /etc/sysconfig/docker-storage
    # systemctl restart docker

----------
If the following error occurs:
# systemctl start docker
Job for docker.service failed. See 'systemctl status docker.service' and 'journalctl -xn' for details.
Solution:
# rm /var/lib/docker -rf
# systemctl daemon-reload
# systemctl start docker

// The following two commands are not needed
# dnf remove docker
# dnf install docker
----------

----------
    Arch:
    # cp /usr/lib/systemd/system/docker.service /etc/systemd/system/docker.service
    ### add overlay to ExecStart
    ExecStart=/usr/bin/docker daemon -s overlay -H fd://
    ### reload systemd files and restart docker.service
    # systemctl daemon-reload
    # systemctl restart docker

    Check status with docker info:
    # docker info | grep Storage
    Storage Driver: overlay
----------
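To check the active storage driver programmatically rather than by eye, the `docker info` output can be parsed. The sketch feeds a sample string so it is self-contained; pipe real `docker info` output into the function instead:

```shell
#!/bin/sh
# Extract the storage driver name from "docker info" style output.
storage_driver() { grep '^Storage Driver:' | awk '{print $3}'; }

# Sample input mirroring the output shown above:
printf 'Containers: 0\nStorage Driver: overlay\n' | storage_driver
```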

http://stackoverflow.com/questions/20994863/how-to-use-docker-or-linux-containers-for-network-emulation

CORE Network Emulator does have a Docker Service that I contributed and wrote an article about. The initial version in 4.8 is mostly broken, but I have fixed and improved it; a pull request is on GitHub.

The service lets you tag Docker images with 'core' so that they appear as an option in the service settings. You must select the Docker image that starts the docker service in the container, then select the container or containers you want to run in that node. It scales quite well: I have had over 2000 nodes on my 16 GB machine.

You mentioned OVS as well. This is not yet built in to CORE but can be used manually. I just answered a question about this on the CORE mailing list; it gives a brief overview of swapping out a standard CORE switch (bridge) for OVS. Text reproduced below if it is useful:

+++++++++++++++++++++++++++++
[root@localhost quagga]# ll
total 6612
-rw-r--r--.  1 root root 2471193 Jan 13 21:57 quagga-0.99.21mr2.2.tar.gz
-rw-r--r--.  1 root root 1680796 Jan 12 15:36 quagga-0.99.24.tar.xz
-rw-r--r--.  1 root root 2560375 Jan 14 14:27 quagga-svnsnap.tgz  // latest snapshot
[root@localhost quagga]#


-------------------Fedora 23, Installing Quagga

# tar xzf quagga-svnsnap.tgz
# cd quagga
[root@localhost quagga]# ./bootstrap.sh
[root@localhost quagga]# ./configure --enable-user=root --enable-group=root --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh --localstatedir=/var/run/quagga

// copy basic.texi, ipv6.texi in quagga-0.99.24/doc   to   quagga-0.99.21mr2.2/doc
[root@localhost quagga]# cp ../quagga-0.99.24/doc/basic.texi ../quagga-0.99.24/doc/ipv6.texi doc/

[root@localhost quagga]# make -j4
[root@localhost quagga]# make install
[root@localhost quagga]# systemctl cat zebra.service

[root@localhost quagga]# systemctl start zebra.service
Job for zebra.service failed because a configured resource limit was exceeded. See "systemctl status zebra.service" and "journalctl -xe" for details.
[root@localhost quagga]# mkdir /run/quagga/

[root@localhost quagga]# systemctl start zebra.service
[root@localhost quagga]# systemctl status zebra.service
[root@localhost quagga]# systemctl stop zebra.service

[root@localhost quagga]# vtysh
[root@localhost quagga]# telnet localhost 2601
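Port 2601 above is zebra's vty; each Quagga daemon listens on its own well-known vty port. A small lookup helper, with the port numbers taken from the Quagga documentation:

```shell
#!/bin/sh
# Map a Quagga daemon name to its vty telnet port
# (zebra=2601, ripd=2602, ripngd=2603, ospfd=2604,
#  bgpd=2605, ospf6d=2606, isisd=2608).
vty_port() {
  case "$1" in
    zebra)  echo 2601 ;;
    ripd)   echo 2602 ;;
    ripngd) echo 2603 ;;
    ospfd)  echo 2604 ;;
    bgpd)   echo 2605 ;;
    ospf6d) echo 2606 ;;
    isisd)  echo 2608 ;;
    *)      echo "unknown daemon: $1" >&2; return 1 ;;
  esac
}
vty_port zebra
```

Use it as `telnet localhost $(vty_port ospfd)`.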

+++++++++++++++++++++++++++++

[root@localhost core]# systemctl start squid.service
[root@localhost core]# systemctl status squid.service

+++++++++++++++++++++++++++++

# tar xzf core-network_4.8.orig.tar.gz
# cd core-4.8

-----------------------INSTALL CORE --- OK OK

Fedora 23:

dnf install bash bridge-utils ebtables iproute libev python procps-ng net-tools tcl tk tkimg autoconf automake make libev-devel python-devel ImageMagick help2man


// Important: without the command below, running /root/.core/configs/m-MPE-manet.imn in CORE fails to initialize properly.

//  http://blog.csdn.net/ztguang/article/details/51262543

dnf install kernel-modules-extra-`uname -r`

CentOS 7:
yum install bash bridge-utils ebtables iproute libev python procps-ng net-tools tcl tk tkimg autoconf automake make libev-devel python-devel ImageMagick help2man


You can obtain the CORE source from the CORE source page. Choose either a stable release version or the development snapshot available in the nightly_snapshots directory. The -j4 argument to make will run four simultaneous jobs, to speed up builds on multi-core systems. Notice the configure flag that tells the build system a systemd service file should be installed under Fedora.

[root@localhost core-4.8]# ./bootstrap.sh
[root@localhost core-4.8]# ./configure --with-startup=systemd
[root@localhost core-4.8]# make -j4
[root@localhost core-4.8]# make install

Note that the Linux RPM and Debian packages do not use the /usr/local prefix; files are instead installed to /usr/sbin and /usr/lib. This difference is a result of aligning with the directory structure of Linux packaging systems and FreeBSD ports packaging.

Another note is that the Python distutils in Fedora Linux will install the CORE Python modules to /usr/lib/python2.7/site-packages/core, instead of using the dist-packages directory.

The CORE Manual documentation is built separately from the doc/ sub-directory in the source. It requires Sphinx:

sudo yum install python-sphinx
cd core-4.8/doc
make html
make latexpdf
-----------------------INSTALL CORE --- OK OK



-----------------------Test CORE

To test that the CORE Network Emulator is working, start the CORE daemon and the GUI.

[root@localhost core-4.8]# pwd
/opt/tools/network_simulators/core/core-4.8

[root@localhost core-4.8]# /etc/init.d/core-daemon start
[root@localhost core-4.8]# core-gui

[root@localhost core-4.8]# ls /tmp/pycore.56386/
n1                    n1.xy                 n3.pid                n5.log                n7.conf/              n9
n10                   n2                    n3.xy                 n5.pid                n7.log                n9.conf/
n10.conf/             n2.conf/              n4                    n5.xy                 n7.pid                n9.log
n10.log               n2.log                n4.conf/              n6                    n7.xy                 n9.pid
n10.pid               n2.pid                n4.log                n6.conf/              n8                    n9.xy
n10.xy                n2.xy                 n4.pid                n6.log                n8.conf/              nodes
n1.conf/              n3                    n4.xy                 n6.pid                n8.log                servers
n1.log                n3.conf/              n5                    n6.xy                 n8.pid                session-deployed.xml
n1.pid                n3.log                n5.conf/              n7                    n8.xy                 state
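In the session directory above, each node nN gets an nN.conf/ directory plus nN.log, nN.pid and nN.xy (canvas position) files. Counting the .pid files is a quick way to see how many nodes a session started; a sketch (the session path is an example):

```shell
#!/bin/sh
# Count emulated nodes in a pycore session directory via their pid files.
count_nodes() { ls "$1" | grep -c '\.pid$'; }

# Example: count_nodes /tmp/pycore.56386
```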

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Key points
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

core-svnsnap.tgz

[root@localhost core]# pwd
/opt/tools/network_simulators/core/core

[root@localhost core]# gedit ./daemon/core/mobility.py
------------------------------------- set the time interval between node movements
    def runround(self):
        ''' Advance script time and move nodes.
        '''
        #ztg add
        time.sleep(6)
------------------------------------- set the node movement speed
        #ztg add
        #wp = self.WayPoint(time, nodenum, coords=(x,y,z), speed=speed)
        wp = self.WayPoint(time, nodenum, coords=(x,y,z), speed=3)
-------------------------------------
[root@localhost core]# systemctl start squid.service
[root@localhost core]# systemctl start docker

[root@localhost core]# /etc/init.d/core-daemon stop
[root@localhost core]# make uninstall ;  make clean
[root@localhost core]# make -j4
[root@localhost core]# make install
[root@localhost core]# systemctl enable core-daemon
Created symlink from /etc/systemd/system/multi-user.target.wants/core-daemon.service to /etc/systemd/system/core-daemon.service.

problem:
[root@n6 n6.conf]# docker info
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
resolve:
[root@localhost core]# iptables -F
[root@localhost core]# ip6tables -F
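Flushing the filter tables works around the warnings, but the setting they refer to can also be enabled directly via sysctl, assuming the br_netfilter module is loaded. A config fragment (the file name is an example):

```
# /etc/sysctl.d/90-bridge-nf.conf  -- example file name
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

Apply with `sysctl --system` or reboot.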

[root@localhost Desktop]# service core-daemon status
Redirecting to /bin/systemctl status  core-daemon.service
core-daemon.service - Common Open Research Emulator Service
   Loaded: loaded (/etc/systemd/system/core-daemon.service; disabled)
   Active: inactive (dead)

[root@localhost Desktop]# cat /etc/systemd/system/core-daemon.service

    [Unit]
    Description=Common Open Research Emulator Service
    After=network.target

    [Service]
    Type=forking
    PIDFile=/var/run/core-daemon.pid
    ExecStart=/usr/bin/python /usr/local/sbin/core-daemon -d

    [Install]
    WantedBy=multi-user.target

Here is roughly what 'make install' installs:

    /usr/local/bin/core-gui
    /usr/local/sbin/core-daemon
    /usr/local/sbin/[vcmd, vnoded, coresendmsg, core-cleanup.sh]
    /usr/local/lib/core/*
    /usr/local/share/core/*
    /usr/local/lib/python2.6/dist-packages/core/*
    /usr/local/lib/python2.6/dist-packages/[netns,vcmd].so
    /etc/core/*
    /etc/init.d/core

[root@localhost core]# /usr/share/openvswitch/scripts/ovs-ctl --system-id=random start
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Enabling remote OVSDB managers                             [  OK  ]


//Fedora 23 & CentOS 7

[root@localhost core]# systemctl daemon-reload
[root@localhost core]# systemctl start core-daemon.service
[root@localhost core]# core-gui



----------------
NOTE: if using docker, first run the following command.
# systemctl start docker.service
----------------

NOTE: /root/.core/configs/m-MPE-manet.imn


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
----------------
error:
----------------
  File "/usr/lib/python2.7/site-packages/docker/client.py", line 142, in _raise_for_status
    raise errors.APIError(e, response, explanation=explanation)
APIError: 400 Client Error: Bad Request ("client version 1.10 is too old. Minimum supported API version is 1.21, please upgrade your client to a newer version")
----------------

solution:
----------------
# gedit /opt/tools/network_simulators/core/core/daemon/core/services/dockersvc.py
----------------
if 'Client' in globals():
    client = Client(version='1.21')
----------------
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

docker.sh

#!/bin/sh
# auto-generated by Docker (docker.py)
echo "nameserver 8.8.8.8" > /run/resolvconf/resolv.conf
service docker start
# you could add a command to start a image here eg:
# docker run -d --net host --name coreDock

[root@localhost Desktop]# ls /var/lib/docker/containers/
[root@localhost Desktop]# ls /run/shm
[root@localhost Desktop]# ls /run/resolvconf

---------------------------
[root@n6 n6.conf]# ls /run/shm
[root@n6 n6.conf]# cat /run/resolvconf/resolv.conf
nameserver 8.8.8.8
[root@n6 n6.conf]# ls /var/lib/docker/containers/
[root@n6 n6.conf]#


+++++++++++++++++++++++++++++
http://www.segurancaremota.com.br/2014/01/simular-roteamentos-no-linux-com-core.html

If you are looking for a light, practical and efficient environment for simulating networks, CORE is it.

Best of all, if everything goes well, you just save the configuration files (Quagga defaults) and your routers come up.

With this system you can train your networking skills in a totally safe environment.

So let's go hands-on and install the system. I compiled these steps after reading the documentation on the official site - http://www.nrl.navy.mil/itd/ncs/products/core

The instructions follow shortly after the video, but remember: I wrote this article on Kali Linux (Debian-based), so other distributions may vary slightly. It also worked perfectly on Fedora.

-------------------Downloads:

Core - http://downloads.pf.itd.nrl.navy.mil/core/source/core-4.6.tar.gz  

Quagga - http://downloads.sourceforge.net/project/quagga/quagga-0.99.20.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fquagga%2F&ts=1390603600&use_mirror=ufpr

-------------------Installing prerequisites
# apt-get install bash bridge-utils ebtables iproute libev-dev python tcl8.5 tk8.5 libtk-img autoconf automake gcc libev-dev make python-dev libreadline-dev pkg-config imagemagick help2man node gawk quagga

-------------------Install CORE
# tar xzf core-4.6.tar.gz
# cd core-4.6
# ./bootstrap.sh
# ./configure
# make -j8
# make install

-------------------Installing Quagga
# yum group install 'Development Tools'        [on CentOS/RHEL 7/6]
# dnf group install 'Development Tools'        [on Fedora 22+ Versions]

/opt/tools/network_simulators/quagga/quagga-svnsnap.tgz
-rw-r--r--. 1 root root 2560375 Jan 14 14:27 ../quagga-svnsnap.tgz

# tar xzf quagga-0.99.21mr2.2.tar.gz
# cd quagga-0.99.21mr2.2


configure.ac:217: error: possibly undefined macro: AC_PROG_LIBTOOL
[root@localhost quagga]# dnf install libtool
[root@localhost quagga]# dnf install autoconf-archive

// automake: You are advised to start using 'subdir-objects' option throughout your project
[root@localhost quagga]# gedit configure.ac
dnl AM_INIT_AUTOMAKE(1.6)
AM_INIT_AUTOMAKE([subdir-objects])
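The same configure.ac edit can be applied non-interactively. A sed sketch (GNU sed; creates a .bak backup; assumes the AM_INIT_AUTOMAKE(1.6) line appears exactly as above):

```shell
#!/bin/sh
# Comment out the old AM_INIT_AUTOMAKE line and insert the
# subdir-objects form after it, as done by hand in gedit above.
patch_am_init() {  # usage: patch_am_init configure.ac
  sed -i.bak \
    's/^AM_INIT_AUTOMAKE(1\.6)$/dnl AM_INIT_AUTOMAKE(1.6)\nAM_INIT_AUTOMAKE([subdir-objects])/' \
    "$1"
}
```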

[root@localhost quagga]# ./bootstrap.sh

[root@localhost quagga]# dnf install gcc-c++

[root@localhost quagga]# ./configure --enable-user=root --enable-group=root --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh --localstatedir=/var/run/quagga

// The following line is not needed for now
// copy basic.texi, ipv6.texi in quagga-0.99.24/doc   to   quagga-0.99.21mr2.2/doc

// makeinfo: command not found
[root@localhost quagga]# dnf install texinfo


# make -j4
# make install

-------------------Testing the environment
# core-gui


(OK) CentOS7/Fedora23——Installing Docker——core—pip

# rpm -ivh ftp://ftp.muug.mb.ca/mirror/centos/7.2.1511/os/x86_64/Packages/PyYAML-3.10-11.el7.x86_64.rpm
# yum -y install docker docker-registry   OR    # yum -y install docker-engine docker-registry
# rpm -qa | grep docker
docker-engine-selinux-1.9.1-1.el7.centos.noarch
docker-registry-0.9.1-7.el7.x86_64
docker-engine-1.9.1-1.el7.centos.x86_64

# rm /etc/docker/key.json

# systemctl daemon-reload
# systemctl start docker.service
# systemctl enable docker.service
# systemctl disable docker.service

# docker version

# docker search centos

# docker pull centos
# docker run -i -t centos /bin/bash

--------------------------------
problem:
[root@localhost ~]# systemctl restart docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@localhost ~]# docker daemon
WARN[0000] Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section.
FATA[0000] Error starting daemon: error initializing graphdriver: "/var/lib/docker" contains other graphdrivers: overlay; Please cleanup or explicitly choose storage driver (-s <DRIVER>)

resolve:
# rm /var/lib/docker/overlay/ -rf

--------------------------------
----------------------------------------------
uninstall docker:
----------------------------------------------
systemctl stop docker
systemctl disable docker
systemctl daemon-reload
yum -y remove docker*
rm -rf /etc/docker /var/lib/docker /var/run/docker
----------------------------------------------
core—pip  ——  refer to /opt/tools/network_simulators/core/core/daemon/core/service.py
----------------------------------------------
curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
python get-pip.py
pip --help
pip -V

++++++++++++++++++++++++++++++++++++++++++++

# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
docker.io/centos    latest              60e65a8e4030        4 weeks ago         196.6 MB

# docker tag c8a648134623 docker.io/centos:core

# ls /var/lib/docker/
containers  devicemapper  graph  linkgraph.db  network  repositories-devicemapper  tmp  trust  volumes

# cat /var/lib/docker/repositories-devicemapper
{"Repositories":{"docker.io/centos":{"latest":"60e65a8e4030022260a4f84166814b2683e1cdfc9725a9c262e90ba9c5ae2332"},"hello-world":{"latest":"0a6ba66e537a53a5ea94f7c6a99c534c6adb12e3ed09326d4bf3b38f7c3ba4e7"}},"ConfirmDefPush":true}
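Since repositories-devicemapper is plain JSON, the image ID for a tag can be pulled out with grep/cut. A sketch fed from a sample string mirroring the output above (a real script would use a JSON parser):

```shell
#!/bin/sh
# Extract the image ID for a given tag from repositories-* style JSON.
# In practice: image_id latest < /var/lib/docker/repositories-devicemapper
image_id() { grep -o "\"$1\":\"[0-9a-f]*\"" | cut -d'"' -f4; }

repos='{"Repositories":{"docker.io/centos":{"latest":"60e65a8e4030"}}}'
echo "$repos" | image_id latest
```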

++++++++++++++++++++++++++++++++++++++++++++
Where are docker images stored on the host machine?
++++++++++++++++++++++++++++++++++++++++++++
# docker info
Data file: /var/lib/docker/devicemapper/devicemapper/data
Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata

---------------------------------------------------------------
http://stackoverflow.com/questions/19234831/where-are-docker-images-stored-on-the-host-machine
---------------------------------------------------------------

The contents of the /var/lib/docker directory vary depending on the driver Docker is using for storage.

By default this will be aufs but can fall back to btrfs, devicemapper or vfs. In most places this will be aufs, but Red Hat-based distributions went with devicemapper.

You can manually set the storage driver with the -s or --storage-driver= option to the Docker daemon.

  • /var/lib/docker/{driver-name} will contain the driver specific storage for contents of the images.
  • /var/lib/docker/graph/ now only contains metadata about the image, in the json and layersize files.

In the case of aufs:

  • /var/lib/docker/aufs/diff/ has the file contents of the images.
  • /var/lib/docker/repositories-aufs is a JSON file containing local image information. This can be viewed with the command docker images.

In the case of devicemapper:

  • /var/lib/docker/devicemapper/devicemapper/data stores the images
  • /var/lib/docker/devicemapper/devicemapper/metadata the metadata
  • Note these files are thin-provisioned "sparse" files, so they aren't as big as they seem.
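The sparse-file point is easy to see directly: a sparse file's apparent size (ls) is much larger than its allocated blocks (du). A quick demo with a throwaway file:

```shell
#!/bin/sh
# Create a 1 GB sparse file: large apparent size, zero allocated blocks.
f=$(mktemp)
truncate -s 1G "$f"
ls -lh "$f"   # apparent size: ~1.0G
du -h "$f"    # allocated blocks: 0
rm -f "$f"
```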
++++++++++++++++++++++++++++++


--------------------------------
Docker not starting: "could not delete the default bridge network: network bridge has active endpoints"

Run
sudo mv /var/lib/docker/network/files/ /tmp/dn-bak

to reset your networks. Then restart docker (sudo systemctl restart docker or sudo service docker restart depending on your OS). If everything works again you can delete the dn-bak directory.

--------------------------------

[root@localhost ~]# gedit /etc/sysconfig/docker
DOCKER_OPTS="--dns 8.8.8.8 --dns 75.75.75.76"
or
DOCKER_OPTS="--iptables=true --dns=10.20.100.1 --dns=8.8.8.8"

--------------------------------

On Arch Linux I needed
ip link set down docker0 instead of ifconfig docker0 down, and
systemctl restart docker instead of service docker start.
To delete all images, I did
docker rmi $(docker images -q)

++++++++++++++++
docker tag c8a648134623 docker.io/centos:core

In the configuration file /etc/sysconfig/docker-storage, set either:
DOCKER_STORAGE_OPTIONS="--storage-opt dm.no_warn_on_loop_devices=true"
or
DOCKER_STORAGE_OPTIONS="-s overlay"

------------------------------------------------------------------------------------------------------
https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues
------------------------------------------------------------------------------------------------------
Description of problem:

`docker version`:

`docker info`:

`uname -a`:

Environment details (AWS, VirtualBox, physical, etc.):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual Results:

Expected Results:

Additional info:
------------------------------------------------------------------------------------------------------


(OK) Fedora 23——CORE——docker——(1)——> install-kernel


Information for package kernel
http://koji.fedoraproject.org/koji/packageinfo?packageID=8



dnf install kernel-4.3.3-301.fc23
dnf install kernel-devel-4.3.3-301.fc23
dnf install kernel-headers-4.3.3-301.fc23
dnf install kernel-debug-devel-4.3.3-301.fc23


 kernel                     x86_64        4.3.3-301.fc23        fedora        46 k
 kernel-core                x86_64        4.3.3-301.fc23        fedora        20 M
 kernel-modules             x86_64        4.3.3-301.fc23        fedora        18 M
 kernel-modules-extra       x86_64        4.3.3-301.fc23        fedora
 kernel-devel               x86_64        4.3.3-301.fc23        fedora        9.8 M
 kernel-headers             x86_64        4.3.3-301.fc23        fedora        987 k
 kernel-debug-devel         x86_64        4.3.3-301.fc23        fedora        9.9 M

-----------------------------------------------------------------------------

Information for build kernel-4.4.3-300.fc23
http://koji.fedoraproject.org/koji/buildinfo?buildID=739250

wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.4.3/300.fc23/x86_64/kernel-debug-devel-4.4.3-300.fc23.x86_64.rpm
wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.4.3/300.fc23/x86_64/kernel-devel-4.4.3-300.fc23.x86_64.rpm
wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.4.3/300.fc23/x86_64/kernel-core-4.4.3-300.fc23.x86_64.rpm
wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.4.3/300.fc23/x86_64/kernel-modules-4.4.3-300.fc23.x86_64.rpm
wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.4.3/300.fc23/x86_64/kernel-modules-extra-4.4.3-300.fc23.x86_64.rpm
wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.4.3/300.fc23/x86_64/kernel-4.4.3-300.fc23.x86_64.rpm
wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.4.3/300.fc23/x86_64/kernel-headers-4.4.3-300.fc23.x86_64.rpm

Run the following in order:

rpm -ivh  kernel-debug-devel-4.4.3-300.fc23.x86_64.rpm
rpm -ivh  kernel-devel-4.4.3-300.fc23.x86_64.rpm
rpm -ivh  kernel-core-4.4.3-300.fc23.x86_64.rpm
rpm -ivh  kernel-modules-4.4.3-300.fc23.x86_64.rpm
rpm -ivh  kernel-modules-extra-4.4.3-300.fc23.x86_64.rpm
rpm -ivh  kernel-4.4.3-300.fc23.x86_64.rpm

Not run for now:
rpm -ivh  kernel-headers-4.4.3-300.fc23.x86_64.rpm

[root@localhost kernel-rpm]# rpm -ivh  kernel-headers-4.4.3-300.fc23.x86_64.rpm
error: Failed dependencies:
    kernel-headers < 4.4.8-300.fc23 is obsoleted by (installed) kernel-headers-4.4.8-300.fc23.x86_64

-----------------------------------------------------------------------------
Information for build kernel-4.3.3-301.fc23
http://koji.fedoraproject.org/koji/buildinfo?buildID=711494

wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.3.3/301.fc23/x86_64/kernel-debug-devel-4.3.3-301.fc23.x86_64.rpm
wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.3.3/301.fc23/x86_64/kernel-devel-4.3.3-301.fc23.x86_64.rpm
wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.3.3/301.fc23/x86_64/kernel-core-4.3.3-301.fc23.x86_64.rpm
wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.3.3/301.fc23/x86_64/kernel-modules-4.3.3-301.fc23.x86_64.rpm
wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.3.3/301.fc23/x86_64/kernel-modules-extra-4.3.3-301.fc23.x86_64.rpm
wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.3.3/301.fc23/x86_64/kernel-4.3.3-301.fc23.x86_64.rpm
wget  https://kojipkgs.fedoraproject.org//packages/kernel/4.3.3/301.fc23/x86_64/kernel-headers-4.3.3-301.fc23.x86_64.rpm

Run the following in order:

rpm -ivh  kernel-debug-devel-4.3.3-301.fc23.x86_64.rpm
rpm -ivh  kernel-devel-4.3.3-301.fc23.x86_64.rpm
rpm -ivh  kernel-core-4.3.3-301.fc23.x86_64.rpm
rpm -ivh  kernel-modules-4.3.3-301.fc23.x86_64.rpm
rpm -ivh  kernel-modules-extra-4.3.3-301.fc23.x86_64.rpm
rpm -ivh  kernel-4.3.3-301.fc23.x86_64.rpm

Not run for now:
rpm -ivh  kernel-headers-4.3.3-301.fc23.x86_64.rpm

[root@localhost kernel-rpm]# rpm -ivh  kernel-headers-4.3.3-301.fc23.x86_64.rpm
error: Failed dependencies:
    kernel-headers < 4.4.8-300.fc23 is obsoleted by (installed) kernel-headers-4.4.8-300.fc23.x86_64

-----------------------------------------------------------------------------
rpm --oldpackage -ivh  kernel-debug-devel-4.3.3-301.fc23.x86_64.rpm
rpm --oldpackage -ivh  kernel-devel-4.3.3-301.fc23.x86_64.rpm
rpm --oldpackage -ivh  kernel-core-4.3.3-301.fc23.x86_64.rpm
rpm --oldpackage -ivh  kernel-modules-4.3.3-301.fc23.x86_64.rpm
rpm --oldpackage -ivh  kernel-modules-extra-4.3.3-301.fc23.x86_64.rpm
rpm --oldpackage -ivh  kernel-4.3.3-301.fc23.x86_64.rpm
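The six rpm invocations above must run in that order; a loop keeps the list and order in one place. Shown as a dry run (echo prints each command), since the .rpm files have to be present in the current directory:

```shell
#!/bin/sh
# Print one rpm command per package, in the same order as above.
kernel_install_cmds() {
  for p in kernel-debug-devel kernel-devel kernel-core kernel-modules \
           kernel-modules-extra kernel; do
    echo "rpm --oldpackage -ivh $p-4.3.3-301.fc23.x86_64.rpm"
  done
}
kernel_install_cmds
```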



Due to the version issue, the following is not run for now:

rpm --oldpackage -ivh  kernel-headers-4.3.3-301.fc23.x86_64.rpm


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
dnf - Fedora 23 — removing redundant, unused kernels
http://blog.csdn.net/ztguang/article/details/51302063
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[root@localhost ~]# uname -r
4.3.5-300.fc23.x86_64

[root@localhost ~]# rpm -qa | grep kernel | grep 4.4.8

kernel-core-4.4.8-300.fc23.x86_64
kernel-headers-4.4.8-300.fc23.x86_64
kernel-modules-4.4.8-300.fc23.x86_64
kernel-modules-extra-4.4.8-300.fc23.x86_64
kernel-devel-4.4.8-300.fc23.x86_64
kernel-4.4.8-300.fc23.x86_64


[root@localhost ~]#
dnf remove kernel-core-4.4.8-300.fc23.x86_64
dnf remove kernel-devel-4.4.8-300.fc23.x86_64

dnf remove kernel-modules-4.4.8-300.fc23.x86_64
dnf remove kernel-modules-extra-4.4.8-300.fc23.x86_64
dnf remove kernel-4.4.8-300.fc23.x86_64

The following command is not run for now, because it would remove too many packages:
dnf remove kernel-headers-4.4.8-300.fc23.x86_64

[root@localhost ~]# dnf remove kernel-headers-4.4.8-300.fc23.x86_64
===================================================================
Remove  226 Packages

Installed size: 754 M
Is this ok [y/N]: n
Operation aborted.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Removing kernel 4.3.5

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

[root@localhost ~]#  rpm -qa | grep kernel |grep 4.3.5

kernel-modules-4.3.5-300.fc23.x86_64
kernel-4.3.5-300.fc23.x86_64
kernel-devel-4.3.5-300.fc23.x86_64
kernel-core-4.3.5-300.fc23.x86_64
kernel-modules-extra-4.3.5-300.fc23.x86_64
kernel-debug-devel-4.3.5-300.fc23.x86_64

[root@localhost ~]#

dnf remove kernel-modules-4.3.5-300.fc23.x86_64
dnf remove kernel-4.3.5-300.fc23.x86_64
dnf remove kernel-devel-4.3.5-300.fc23.x86_64
dnf remove kernel-core-4.3.5-300.fc23.x86_64
dnf remove kernel-modules-extra-4.3.5-300.fc23.x86_64
dnf remove kernel-debug-devel-4.3.5-300.fc23.x86_64
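The per-version removal lists above come from filtering rpm -qa output; that filter can live in a tiny helper. The sketch is fed from a sample package list so it is self-contained; pipe real `rpm -qa` output into it instead:

```shell
#!/bin/sh
# Print kernel packages matching a version string; pipe "rpm -qa" into it.
kernel_pkgs_for() { grep '^kernel' | grep -- "$1"; }

# Sample input for illustration:
printf '%s\n' kernel-4.3.5-300.fc23.x86_64 bash-4.3.42-5.fc23.x86_64 \
  kernel-core-4.4.8-300.fc23.x86_64 | kernel_pkgs_for 4.3.5
```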
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

(OK) Fedora 23——CORE——docker——(2)——> install-quagga


-------------------Installing Quagga on Fedora 23

# dnf group install 'Development Tools'        [on Fedora 22+ Versions]

/opt/tools/network_simulators/quagga/quagga-svnsnap.tgz
-rwxrwx---.  1 root root 2471193 Jan 13 21:57 quagga-0.99.21mr2.2.tar.gz
-rwxrwx---.  1 root root 2560375 Jan 14 14:27 quagga-svnsnap.tgz

# tar xzf quagga-svnsnap.tgz
# cd quagga

[root@localhost quagga]# ./bootstrap.sh
[root@localhost quagga]# ./configure --enable-user=root --enable-group=root --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh --localstatedir=/var/run/quagga

// copy basic.texi, ipv6.texi in quagga-0.99.24/doc   to   quagga-0.99.21mr2.2/doc
[root@localhost quagga]# cp ../quagga-0.99.24/doc/basic.texi ../quagga-0.99.24/doc/ipv6.texi doc/

[root@localhost quagga]# make -j4
[root@localhost quagga]# make install

---------------------------------------------------------------------------------
[root@localhost quagga]# pwd
/root/core-tools/quagga

/bin/cp pimd/pimd.conf.sample  /usr/local/etc/quagga/pimd.conf
/bin/cp isisd/isisd.conf.sample  /usr/local/etc/quagga/isisd.conf
/bin/cp babeld/babeld.conf.sample  /usr/local/etc/quagga/babeld.conf
/bin/cp ospf6d/ospf6d.conf.sample  /usr/local/etc/quagga/ospf6d.conf
/bin/cp ospfd/ospfd.conf.sample  /usr/local/etc/quagga/ospfd.conf
/bin/cp ripngd/ripngd.conf.sample  /usr/local/etc/quagga/ripngd.conf
/bin/cp ripd/ripd.conf.sample  /usr/local/etc/quagga/ripd.conf
/bin/cp bgpd/bgpd.conf.sample  /usr/local/etc/quagga/bgpd.conf
/bin/cp zebra/zebra.conf.sample  /usr/local/etc/quagga/zebra.conf
/bin/cp vtysh/vtysh.conf.sample  /usr/local/etc/quagga/vtysh.conf

ln -s /usr/local/etc/quagga/pimd.conf /etc/quagga/pimd.conf
ln -s /usr/local/etc/quagga/isisd.conf /etc/quagga/isisd.conf
ln -s /usr/local/etc/quagga/babeld.conf /etc/quagga/babeld.conf
ln -s /usr/local/etc/quagga/ospf6d.conf /etc/quagga/ospf6d.conf
ln -s /usr/local/etc/quagga/ospfd.conf /etc/quagga/ospfd.conf
ln -s /usr/local/etc/quagga/ripngd.conf /etc/quagga/ripngd.conf
ln -s /usr/local/etc/quagga/ripd.conf /etc/quagga/ripd.conf
ln -s /usr/local/etc/quagga/bgpd.conf /etc/quagga/bgpd.conf
ln -s /usr/local/etc/quagga/zebra.conf /etc/quagga/zebra.conf
ln -s /usr/local/etc/quagga/vtysh.conf /etc/quagga/vtysh.conf

/bin/cp /usr/local/etc/quagga/zebra.conf /usr/local/etc/quagga/Quagga.conf

---------------------------------------------------------------------------------
// # cp /etc/sysconfig/quagga.bac /etc/sysconfig/quagga        // this command is not needed

# gedit /usr/lib/systemd/system/zebra.service

[Unit]
Description=GNU Zebra routing manager
Wants=network.target
Before=network.target
ConditionPathExists=/usr/local/etc/quagga/zebra.conf  

[Service]
Type=forking
PIDFile=/run/quagga/zebra.pid
EnvironmentFile=-/etc/sysconfig/quagga
ExecStartPre=/sbin/ip route flush proto zebra
ExecStart=/usr/local/sbin/zebra -d $ZEBRA_OPTS -f /usr/local/etc/quagga/zebra.conf
Restart=on-abort  

[Install]
WantedBy=multi-user.target

---------------------------------------------------------------------------------
[root@localhost quagga]# systemctl cat zebra.service

[root@localhost quagga]# systemctl start zebra.service
Job for zebra.service failed because a configured resource limit was exceeded. See "systemctl status zebra.service" and "journalctl -xe" for details.
[root@localhost quagga]# mkdir /run/quagga/
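The mkdir above clears the error, but /run is a tmpfs, so the directory vanishes on reboot. A systemd-tmpfiles rule recreates it at boot; in this sketch TMPFILES_D is parameterized (the real target is /etc/tmpfiles.d) so it can run without root, and the root:root ownership is an assumption based on the --enable-user=root build above.

```shell
# Sketch: write a systemd-tmpfiles rule so /run/quagga is recreated at boot.
TMPFILES_D=${TMPFILES_D:-$(mktemp -d)}    # real target: /etc/tmpfiles.d
cat > "$TMPFILES_D/quagga.conf" <<'EOF'
d /run/quagga 0755 root root -
EOF
cat "$TMPFILES_D/quagga.conf"
# then: systemd-tmpfiles --create, or just reboot
```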

[root@localhost quagga]# systemctl start zebra.service
[root@localhost quagga]# systemctl status zebra.service
[root@localhost quagga]# systemctl stop zebra.service

[root@localhost quagga]# vtysh
[root@localhost quagga]# telnet localhost 2601

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

So far, OK

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

configure.ac:217: error: possibly undefined macro: AC_PROG_LIBTOOL
[root@localhost quagga]# dnf install libtool
[root@localhost quagga]# dnf install autoconf-archive

// automake: You are advised to start using 'subdir-objects' option throughout your project
[root@localhost quagga]# gedit configure.ac
dnl AM_INIT_AUTOMAKE(1.6)
AM_INIT_AUTOMAKE([subdir-objects])
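The configure.ac edit above can also be scripted with sed. This sketch demonstrates it on a stand-in file (on the real tree the target would be configure.ac itself, and it assumes the exact `AM_INIT_AUTOMAKE(1.6)` form shown above):

```shell
# Sketch: comment out the old AM_INIT_AUTOMAKE and add the subdir-objects form.
ac=$(mktemp)
printf 'AC_PREREQ(2.60)\nAM_INIT_AUTOMAKE(1.6)\n' > "$ac"   # stand-in configure.ac
sed -i 's/^AM_INIT_AUTOMAKE(1\.6)$/dnl AM_INIT_AUTOMAKE(1.6)\nAM_INIT_AUTOMAKE([subdir-objects])/' "$ac"
cat "$ac"
```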

[root@localhost quagga]# ./bootstrap.sh

[root@localhost quagga]# dnf install gcc-c++

[root@localhost quagga]# ./configure --enable-user=root --enable-group=root --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh --localstatedir=/var/run/quagga

// the following step is not needed here
// copy basic.texi and ipv6.texi from quagga-0.99.24/doc into quagga-0.99.21mr2.2/doc

// makeinfo: command not found
[root@localhost quagga]# dnf install texinfo


# make -j4
# make install

[root@localhost quagga]# pwd
cp pimd/pimd.conf.sample  /usr/local/etc/quagga/pimd.conf
cp isisd/isisd.conf.sample  /usr/local/etc/quagga/isisd.conf
cp babeld/babeld.conf.sample  /usr/local/etc/quagga/babeld.conf
cp ospf6d/ospf6d.conf.sample  /usr/local/etc/quagga/ospf6d.conf
cp ospfd/ospfd.conf.sample  /usr/local/etc/quagga/ospfd.conf
cp ripngd/ripngd.conf.sample  /usr/local/etc/quagga/ripngd.conf
cp ripd/ripd.conf.sample  /usr/local/etc/quagga/ripd.conf
cp bgpd/bgpd.conf.sample  /usr/local/etc/quagga/bgpd.conf
cp zebra/zebra.conf.sample  /usr/local/etc/quagga/zebra.conf
cp vtysh/vtysh.conf.sample  /usr/local/etc/quagga/vtysh.conf

ln -s /usr/local/etc/quagga/pimd.conf /etc/quagga/pimd.conf
ln -s /usr/local/etc/quagga/isisd.conf /etc/quagga/isisd.conf
ln -s /usr/local/etc/quagga/babeld.conf /etc/quagga/babeld.conf
ln -s /usr/local/etc/quagga/ospf6d.conf /etc/quagga/ospf6d.conf
ln -s /usr/local/etc/quagga/ospfd.conf /etc/quagga/ospfd.conf
ln -s /usr/local/etc/quagga/ripngd.conf /etc/quagga/ripngd.conf
ln -s /usr/local/etc/quagga/ripd.conf /etc/quagga/ripd.conf
ln -s /usr/local/etc/quagga/bgpd.conf /etc/quagga/bgpd.conf
ln -s /usr/local/etc/quagga/zebra.conf /etc/quagga/zebra.conf
ln -s /usr/local/etc/quagga/vtysh.conf /etc/quagga/vtysh.conf

cp /usr/local/etc/quagga/zebra.conf /usr/local/etc/quagga/Quagga.conf



-------------------Fedora 23, Installing Quagga
------/root/core-tools/quagga/missing: line 81: makeinfo: command not found
------need to:    dnf install texinfo

# tar xzf quagga-svnsnap.tgz
# cd quagga
[root@localhost quagga]# ./bootstrap.sh
[root@localhost quagga]# ./configure --enable-user=root --enable-group=root --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh --localstatedir=/var/run/quagga

// copy basic.texi and ipv6.texi from quagga-0.99.24/doc into quagga-0.99.21mr2.2/doc
[root@localhost quagga]# cp ../quagga-0.99.24/doc/basic.texi ../quagga-0.99.24/doc/ipv6.texi doc/

[root@localhost quagga]# make -j4
[root@localhost quagga]# make install
[root@localhost quagga]# systemctl cat zebra.service

[root@localhost quagga]# systemctl start zebra.service
Job for zebra.service failed because a configured resource limit was exceeded. See "systemctl status zebra.service" and "journalctl -xe" for details.
[root@localhost quagga]# mkdir /run/quagga/

[root@localhost quagga]# systemctl start zebra.service
[root@localhost quagga]# systemctl status zebra.service
[root@localhost quagga]# systemctl stop zebra.service

[root@localhost quagga]# vtysh
[root@localhost quagga]# telnet localhost 2601

+++++++++++++++++++++++++++++

(OK) Fedora 23——CORE——docker——(3)——> install-docker


+++++++++++++++++++++++++ install docker etc.

# Fedora 23

dnf install openvswitch docker-io xterm wireshark-gnome ImageMagick tcl tcllib tk kernel-modules-extra util-linux

----------------------------------------------------
install docker 1.9.1
----------------------------------------------------

//docker 1.9.1
//dnf update --exclude=kernel*
//dnf update

dnf remove docker
dnf install docker-io

rm /var/lib/docker/ -rf
ls /var/lib/docker/

systemctl start docker
systemctl stop docker
systemctl status docker
systemctl enable docker

docker search busybox
docker pull busybox
docker images
docker tag 307ac631f1b5 docker.io/busybox:core
docker rmi docker.io/busybox:core

docker run --rm -it busybox /bin/sh

dnf remove docker-io
rm /var/lib/docker/ -rf

----------------------------------------------------

So far, OK

----------------------------------------------------
https://docs.docker.com/engine/installation/linux/fedora/

install docker 1.11.1
----------------------------------------------------
tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/fedora/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

----------------------------------------------------
//docker 1.11.1
//dnf update

dnf update --exclude=kernel*
init 6

dnf update
dnf install docker-engine

dnf remove docker-engine


----------------------------------------------------

dnf install docker-engine-1.10.3 docker-engine-selinux-1.10.3

---------------------------------------------------- the following two lines are not used

    # echo 'DOCKER_STORAGE_OPTIONS="-s overlay"' >> /etc/sysconfig/docker-storage
    # systemctl restart docker

----------------------------------------------------
ls /etc/systemd/system
ls /usr/lib/systemd/system
ls /usr/lib/systemd/system/docker.service

rm /var/lib/docker/overlay/ -rf
rm /var/lib/docker/ -rf
----------------------------------------------------

[root@localhost Desktop]# gedit /usr/lib/systemd/system/docker.service

# ExecStart=/usr/bin/docker daemon -H fd://
ExecStart=/usr/bin/docker daemon -s overlay
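Editing the shipped unit file as above works, but a package update will overwrite it; a systemd drop-in override persists the change. In this sketch DROPIN is parameterized only so it can run without root (the real path is /etc/systemd/system/docker.service.d):

```shell
# Sketch: override ExecStart via a systemd drop-in instead of editing the unit.
DROPIN=${DROPIN:-$(mktemp -d)}   # real target: /etc/systemd/system/docker.service.d
cat > "$DROPIN/overlay.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -s overlay
EOF
cat "$DROPIN/overlay.conf"
# then: systemctl daemon-reload && systemctl restart docker.service
```

The empty `ExecStart=` line is required: it clears the inherited value before the override sets the new one.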


systemctl start docker.service
systemctl restart docker.service
systemctl stop docker.service
systemctl status docker.service

systemctl daemon-reload

----------------------------------------------------

The workaround for now is to downgrade to 1.6.2.

# yum downgrade docker-1.6.2-14.el7.centos
# systemctl restart docker

----------------------------------------------------
docker info
docker version

----------------------------------------------------

[root@localhost Desktop]#
docker search busybox
docker pull busybox
docker images
docker tag 47bcc53f74dc busybox:core
docker rmi busybox:core

docker run --rm -it busybox /bin/sh


[root@localhost Desktop]# docker logs $(docker ps -q) | tail -20

[root@n2 n2.conf]#
docker daemon -s overlay &
docker run --rm -it busybox /bin/sh

rm /var/lib/docker/ -rf

----------------------------------------------------

So far, OK

----------------------------------------------------


docker run hello-world

systemctl status systemd-udevd.service -l

//List Containers
docker ps
docker ps -a
docker ps -l
//Attach to a Specific Container
docker attach 9c09acd48a25
//View Logs for a Docker Container 2c9d5e12800e
docker logs 2c9d5e12800e

docker images
docker tag 778a53015523 centos:core

docker search centos
docker pull centos

docker images
docker rmi 778a53015523
docker tag 40467a0b3d66 centos:core
docker tag 44776f55294a ubuntu:core

docker run hello-world

docker run centos echo "hello world!"
docker run ubuntu echo "hello world!"
docker run ubuntu:core echo "hello world!"

docker run -it busybox /bin/sh
docker run --rm -it busybox /bin/sh

docker tag 307ac631f1b5 docker.io/busybox:core
docker run --rm -it busybox:core /bin/sh

docker run -v /tmp/dockerdev:/dev -it --rm centos:core bash

docker run -d --net host --name coreDock busybox /bin/sh

docker ps -a
brctl show
ldd $(which docker)

ps aux |grep docker

----------------------------------------------------
----------
If the following problem occurs:
# systemctl start docker
Job for docker.service failed. See 'systemctl status docker.service' and 'journalctl -xn' for details.
Fix:
# rm /var/lib/docker -rf
# systemctl daemon-reload
# systemctl start docker

--------------------------------------------------------------------------------------------------------------
http://stackoverflow.com/questions/20994863/how-to-use-docker-or-linux-containers-for-network-emulation
--------------------------------------------------------------------------------------------------------------
    CORE Network Emulator does have a Docker Service that I contributed and wrote an article about. The initial version that is in 4.8 is mostly broken but I have fixed and improved it. A pull request is on GitHub.

    The service allows you to tag Docker images with 'core' and then they appear as an option in the services settings. You must select the Docker image which starts the docker service in the container. You then select the container or containers that you want to run in that node. It scales quite well and I have had over 2000 nodes on my 16GB machine.

    You mentioned OVS as well. This is not yet built into CORE but can be used manually. I just answered a question on the CORE mailing list on this. It gives a brief overview of switching out a standard CORE switch (bridge) with OVS. Text reproduced below if it is useful:
--------------------------------------------------------------------------------------------------------------



