
Configuring node forever on Linux (which directory holds Linux config files)



If you're interested in configuring node forever on Linux, or in which directory Linux configuration files live, don't miss this article. We'll walk through configuring node forever on Linux in detail and take a closer look at where Linux configuration files go, along with related pieces: 8 Linux Commands Every Developer Should Know, starting Node.js with forever, a Linux Guide for Developers (for Ubuntu developers), and compiling MonoDevelop 4.2.3.

Contents:

Configuring node forever on Linux (which directory holds Linux config files)


1: First, install Node.js

  1) Download the build matching your system from the official site:

    English site: https://nodejs.org/en/download/

    Chinese site: http://nodejs.cn/download/

    Check your architecture with  uname -a  (note: x86_64 means a 64-bit system; i686/i386 mean 32-bit). Mine is 64-bit.

2: Upload the downloaded tar file to the server, extract it, then make the commands global via symlinks

  1) You can upload it to any path you like; mine sits in  /opt/

  2) Extract the upload (I renamed the extracted directory to nodejs; the name is up to you, as long as the symlinks below point to it)

      ① tar -xvf   node-v6.10.0-linux-x64.tar.xz

      ② mv node-v6.10.0-linux-x64  nodejs

      ③ Confirm that the nodejs/bin directory contains the node and npm binaries. If it does, create the symlinks; if not, re-download and repeat the steps above.

  3) Create the symlinks to make the commands global

    ① ln -s /opt/nodejs/bin/npm /usr/local/bin/

    ② ln -s /opt/nodejs/bin/node /usr/local/bin/

    If symlinks already exist from an earlier install, delete them first; I simply use  rm -rf node  and  rm -rf npm  (run in /usr/local/bin).

  4) Finally, verify that node is now available globally

    node -v

    npm -v
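The symlink step above can be sketched in miniature with throwaway paths; everything under /tmp below is hypothetical, standing in for /opt/nodejs and /usr/local/bin:

```shell
# Stand-in for /opt/nodejs/bin/node: a tiny script that reports a version
mkdir -p /tmp/nodejs-demo/bin
printf '#!/bin/sh\necho v6.10.0\n' > /tmp/nodejs-demo/bin/node
chmod +x /tmp/nodejs-demo/bin/node

# Stand-in for /usr/local/bin: link the binary into a directory on PATH
mkdir -p /tmp/local-bin
ln -sf /tmp/nodejs-demo/bin/node /tmp/local-bin/node

# The link resolves straight through to the real binary
/tmp/local-bin/node   # prints v6.10.0
```

The same mechanism is why `node -v` works from anywhere once the links are in /usr/local/bin.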

3: Once node is installed successfully, install forever

  1) If running  npm install forever -g  just hangs, you can work around it with the Alibaba mirror registry or cnpm. I used cnpm.
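For reference, the mirror route can look like the sketch below. Registry URLs change over time; the npmmirror address here is an assumption to verify before use:

```shell
# Option A: point npm itself at a China mirror registry (URL is an assumption)
npm config set registry https://registry.npmmirror.com

# Option B: install cnpm (an npm client preconfigured for the mirror) and use it
npm install -g cnpm --registry=https://registry.npmmirror.com
cnpm install forever -g
```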

  2) If running  forever  then gives  -bash: forever: command not found , it's because the environment variables haven't been configured.

  3) Run  vim /etc/profile  and append these lines at the end:

    export PATH=$PATH:/opt/nodejs/lib/node_modules/forever/bin
    export PATH=$PATH:/opt/nodejs/bin

    Reload the profile ( source /etc/profile , or log in again), and forever will work.
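The PATH mechanism those two export lines rely on can be demonstrated with a throwaway directory (the /tmp paths below are hypothetical):

```shell
# Create a directory holding an executable that is not yet on PATH
mkdir -p /tmp/bindemo
printf '#!/bin/sh\necho ok\n' > /tmp/bindemo/mytool
chmod +x /tmp/bindemo/mytool

# Before appending to PATH, the shell cannot resolve the bare name;
# after appending, it resolves like any installed command
export PATH="$PATH:/tmp/bindemo"
mytool   # prints ok
```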

8 Linux Commands Every Developer Should Know


Every developer, at some point in their career, will find themselves looking for some information on a Linux* box. I don't claim to be an expert, in fact, I claim to be very under-skilled when it comes to linux command line mastery. However, with the following 8 commands I can get pretty much anything I need, done.

note: There are extensive documents on each of the following commands. This blog post is not meant to show the exhaustive features of any of the commands. Instead, this is a blog post that shows my most common usages of my most commonly used commands. If you don't know linux commands well, and you find yourself needing to grab some data, this blog post might give you a bit of guidance.

Let's start with some sample documents. Let's assume that I have 2 files showing orders that are being placed with a third party and the responses the third party sends.
order.out.log
8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:23:45 112, 1, Joy of Clojure, Hardcover, 29.99
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99

order.in.log
8:22:20 111, Order Complete
8:23:50 112, Order sent to fulfillment
8:24:20 113, Refund sent to processing
cat
cat - concatenate files and print on the standard output
The cat command is simple, as the following example shows.
jfields$ cat order.out.log 
8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:23:45 112, 1, Joy of Clojure, Hardcover, 29.99
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99
As the description shows, you can also use it to concatenate multiple files.
jfields$ cat order.* 
8:22:20 111, Order Complete
8:23:50 112, Order sent to fulfillment
8:24:20 113, Refund sent to processing
8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:23:45 112, 1, Joy of Clojure, Hardcover, 29.99
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99
If I wanted to view my log files I can concatenate them and print them to standard out, as the example above shows. That's cool, but things could be a bit more readable.

sort
sort - sort lines of text files
Using sort is an obvious choice here.
jfields$ cat order.* | sort
8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:22:20 111, Order Complete
8:23:45 112, 1, Joy of Clojure, Hardcover, 29.99
8:23:50 112, Order sent to fulfillment
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:24:20 113, Refund sent to processing
As the example above shows, my data is now sorted. With small sample files, you can probably deal with reading the entire file. However, any real production log is likely to have plenty of lines that you don't care about. You're going to want a way to filter the results of piping cat to sort.

grep
grep, egrep, fgrep - print lines matching a pattern
Let's pretend that I only care about finding an order for PofEAA. Using grep I can limit my results to PofEAA transactions.
jfields$ cat order.* | sort | grep Patterns
8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99
Assume that an issue occurred with the refund on order 113, and you want to see all data related to that order - grep is your friend again.
jfields$ cat order.* | sort | grep ":\d\d 113, "
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:24:20 113, Refund sent to processing
You'll notice that I put a bit more than "113" in my regex for grep. This is because 113 can also come up in a product title or a price. With a few extra characters, I can limit the results to strictly the transactions I'm looking for.
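One portability note: the `\d` shorthand isn't supported by every grep (GNU grep's basic and extended regexes don't accept it without `-P`); a `[0-9]` character class works everywhere. A quick sketch against a throwaway file (the /tmp path and inlined sample lines are stand-ins for the real logs):

```shell
# Sample lines standing in for the merged logs
printf '%s\n' \
  '8:23:45 112, 1, Joy of Clojure, Hardcover, 29.99' \
  '8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99' \
  '8:24:20 113, Refund sent to processing' > /tmp/orders.txt

# [0-9][0-9] is the portable spelling of \d\d; only the two 113 lines match
grep ":[0-9][0-9] 113, " /tmp/orders.txt
```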

Now that we've sent the order details on to refunds, we also want to send the daily totals of sales and refunds on to the accounting team. They've asked for each line item for PofEAA, but they only care about the quantity and price. What we need to do is cut out everything we don't care about.

cut
cut - remove sections from each line of files
Using grep again, we can see that we grab the appropriate lines. Once we grab what we need, we can cut the line up into pieces, and rid ourselves of the unnecessary data.
jfields$ cat order.* | sort | grep Patterns
8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99
8:24:19 113, -1, Patterns of Enterprise Architecture, Kindle edition, 39.99
jfields$ cat order.* | sort | grep Patterns | cut -d"," -f2,5
 1, 39.99
 -1, 39.99
At this point we've reduced our data down to what accounting is looking for, so it's time to paste it into a spreadsheet and be done with that task.

Using cut is helpful in tracking down problems, but if you're generating an output file you'll often want something more complicated. Let's assume that accounting also needs to know the order ids for building some type of reference documentation. We can get the information using cut, but the accounting team wants the order id to be at the end of the line, and surrounded in single quotes. (for the record, you might be able to do this with cut, I've never tried)

sed
sed - A stream editor. A stream editor is used to perform basic text transformations on an input stream.
The following example shows how we can use sed to transform our lines in the requested way, and then cut is used to remove unnecessary data.
jfields$ cat order.* | sort | grep Patterns \
>| sed s/"[0-9\:]* \([0-9]*\)\, \(.*\)"/"\2, '\1'"/
1, Patterns of Enterprise Architecture, Kindle edition, 39.99, '111'
-1, Patterns of Enterprise Architecture, Kindle edition, 39.99, '113'
lmp-jfields01:~ jfields$ cat order.* | sort | grep Patterns \
>| sed s/"[0-9\:]* \([0-9]*\)\, \(.*\)"/"\2, '\1'"/ | cut -d"," -f1,4,5
1, 39.99, '111'
-1, 39.99, '113'
There's a bit going on in that example regex, but nothing too complicated. The regex does the following things:
  • remove the timestamp
  • capture the order number
  • remove the comma and space after the order number
  • capture the remainder of the line
There's a bit of noise in there (quotes and slashes), but that's to be expected when you're working on the command line.

Once we've captured the data we need, we can use \1 & \2 to reorder and output the data in our desired format. We also include the requested single quotes, and add our own comma to keep our format consistent. Finally, we use cut to remove the superfluous data.
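Stripped of the shell-quoting noise, the same substitution reads more clearly when applied to a single sample line; this is just the transformation above restated, not a different technique:

```shell
line='8:22:19 111, 1, Patterns of Enterprise Architecture, Kindle edition, 39.99'

# \1 captures the order id, \2 captures the rest of the line;
# the leading timestamp is matched but not captured, so it is dropped
echo "$line" | sed "s/[0-9:]* \([0-9]*\), \(.*\)/\2, '\1'/"
# prints: 1, Patterns of Enterprise Architecture, Kindle edition, 39.99, '111'
```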

Now you're in trouble. You've demonstrated that you can slice up a log file in fairly short order, and the CIO needs a quick report of the total number of book transactions broken down by book.

uniq
uniq - report or omit repeated lines (uniq only collapses adjacent duplicates, which is why we sort first)
(we'll assume that other types of transactions can take place and 'filter' our in file for 'Kindle' and 'Hardcover')

The following example shows how to grep for only book related transactions, cut unnecessary information, and get a counted & unique list of each line.
jfields$ cat order.out.log | grep "\(Kindle\|Hardcover\)" | cut -d"," -f3 | sort | uniq -c
   1  Joy of Clojure
   2  Patterns of Enterprise Architecture
Had the requirements been a bit simpler, say "get me a list of all books with transactions", uniq also would have been the answer.
jfields$ cat order.out.log | grep "\(Kindle\|Hardcover\)" | cut -d"," -f3 | sort | uniq
 Joy of Clojure
 Patterns of Enterprise Architecture
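A common companion trick, in case the report should be ranked: pipe the counted output back through sort -rn to order by frequency. Sample data is inlined here rather than read from the log files, and awk just normalizes uniq's leading-space padding:

```shell
# Count occurrences, then sort numerically, descending
printf '%s\n' b a b b a c | sort | uniq -c | sort -rn | awk '{print $1, $2}'
# prints:
# 3 b
# 2 a
# 1 c
```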
All of these tricks work well, if you know where to find the file you need; however, sometimes you'll find yourself in a deeply nested directory structure without any hints as to where you need to go. If you're lucky enough to know the name of the file you need (or you have a decent guess) you shouldn't have any trouble finding what you need.

find
find - search for files in a directory hierarchy
In our above examples we've been working with order.in.log and order.out.log. On my box those files exist in my home directory. The following example shows how to find those files from a higher level, without even knowing the full filename.
jfields$ find /Users -name "order*"
/Users/jfields/order.in.log
/Users/jfields/order.out.log
Find has plenty of other options, but this does the trick for me about 99% of the time. 
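The behavior is easy to try in a scratch directory (the /tmp layout below is hypothetical): find descends the whole hierarchy, so the nested file is reported too.

```shell
# Build a small hierarchy with two matching files at different depths
mkdir -p /tmp/finddemo/sub
touch /tmp/finddemo/order.in.log /tmp/finddemo/sub/order.out.log

# Both files are found, regardless of depth
find /tmp/finddemo -name "order*"
```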

Along the same lines, once you find a file you need, you're not always going to know what's in it and how you want to slice it up. Piping the output to standard out works fine when the output is short; however, when there's a bit more data than what fits on a screen, you'll probably want to pipe the output to less.

less
less - allows forward & backward movement within a file
As an example, let's go all the way back to our simple cat | sort example. If you execute the following command you'll end up in less, with your in & out logs merged and sorted. Within less you can forward search with "/" and backward search with "?". Both searches take a regex.
jfields$ cat order* | sort | less
While in less you can try /113.*, which will highlight all transactions for order 113. You can also try ?.*112, which will highlight all timestamps associated with order 112. Finally, you can use 'q' to quit less.

The linux command line is rich, and somewhat intimidating. However, with the previous 8 commands, you should be able to get quite a few log slicing tasks completed - without having to drop to your favorite scripting language.

* okay, possibly Unix, that's not the point

Starting Node.js with forever



  forever can be thought of as a daemon manager for Node.js: it can start, stop, and restart our app.

1. Install forever globally

// Remember the -g: forever must be installed in the global environment
sudo npm install forever -g

2. Starting

// 1. Simple start
forever start app.js

// 2. Specify forever's own log file; by default it goes to ~/.forever/forever.log
forever start -l forever.log app.js

// 3. Specify the app's log and error output files:
// -o captures console.log output, -e captures console.error output
forever start -o out.log -e err.log app.js

// 4. Append to the log; forever will not overwrite the previous run's log,
// so a second start without -a will refuse to run
forever start -l forever.log -a app.js

// 5. Watch all files in the current directory for changes
forever start -w app.js

3. Watch for file changes and restart automatically

// 1. Watch all files under the current directory (not really recommended)
forever start -w app.js

4. List all running services

forever list

5. Stopping

// 1. Stop all running node apps
forever stopall

// 2. Stop a single node app
forever stop app.js
// or:
// find the matching id with forever list, then:
forever stop [id]

6. Restarting

The restart commands mirror the stop commands.

// 1. Restart all
forever restartall

// 2. Restart a single app, by script or by id
forever restart app.js
forever restart [id]

 

Linux Guide for Developers --- for Ubuntu developers


Command-line reference manual: http://billie66.github.io/TLCL/index.html

linux – Compiling MonoDevelop 4.2.3


I need help. I'm trying to compile the MonoDevelop source, but when I run "./configure" it tells me I need to install Mono, even though it is already installed:

[raven@localhost ~]$mono -V
    Mono JIT compiler version 3.2.8 (tarball Fri May 30 08:15:47 CDT 2014)
    Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
        TLS:           __thread
        SIGSEGV:       altstack
        Notifications: epoll
        Architecture:  amd64
        disabled:      none
        Misc:          softdebug 
        LLVM:          supported, not enabled.
        GC:            sgen
    [raven@localhost ~]$cd /home/raven/Downloads/monodevelop-4.2.3
    [raven@localhost monodevelop-4.2.3]$./configure
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
    checking for gawk... gawk
    checking whether make sets $(MAKE)... yes
    checking how to create a ustar tar archive... gnutar
    checking whether to enable maintainer-specific portions of Makefiles... no
    checking for mono... /usr/local/bin/mono
    checking for gmcs... /usr/local/bin/gmcs
    checking for pkg-config... /usr/bin/pkg-config
    configure: error: You need mono 3.0.4 or newer
    [raven@localhost monodevelop-4.2.3]$

Solution

The configure script reads the Mono version via pkg-config. Make sure mono.pc is installed and that your pkg-config can find it. Since you appear to have installed Mono into /usr/local, your mono.pc is probably in /usr/local/lib/pkgconfig. However, you are using the pkg-config from /usr/bin, which may not be configured to look in /usr/local. You should re-run configure with the correct directory added to PKG_CONFIG_PATH, e.g.:

PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure

