This article centers on the Google crawl caching proxy and is meant to serve as a detailed reference. It covers the pros and cons of Google's crawl caching proxy and answers common questions about it, and it also walks through 26 nginx reverse proxy - proxy_cache, Charles Proxy not saving proxy settings, CMPSC 311 Proxy Lab: Writing a Caching Web Proxy, and ES6 proxies (Proxy) and reflection (Reflection).
Contents of this article:
- Google crawl caching proxy (google crawler plugin)
- 26 nginx reverse proxy - proxy_cache
- Charles Proxy does not save proxy settings
- CMPSC 311 Proxy Lab: Writing a Caching Web Proxy
- ES6 proxies (Proxy) and reflection (Reflection)
Google crawl caching proxy (google crawler plugin)
A couple of days ago people noticed that pages fetched by the Google AdSense spider were showing up in search results. Matt Cutts responded quickly on his blog and explained what was going on.
In short, after Google finished the Big Daddy data-center upgrade, the way its various spiders fetch pages changed. Instead of each spider fetching pages directly, a crawl caching proxy fetches the page, and the different spiders then read the content from that cache, which saves bandwidth.
A fairly complete translation of Matt Cutts' post can be found on 幻灭's and 小添's blogs.
A few things I would add. First, Matt Cutts specifically pointed out that the new fetching mechanism will not get your pages crawled any faster and has no effect on rankings. The crawl caching proxy also does not change the frequency or schedule on which each spider would normally crawl; the spiders simply read from the cache instead of fetching pages themselves.
Second, what caught my attention more is that Matt Cutts said the crawl caching proxy only appeared after the Big Daddy update. Because it ran so smoothly, Matt Cutts himself had not realized the new mechanism was already live until other people noticed it. That suggests Matt Cutts does not have full visibility into every team's latest work, so what else might he not know about?
Third, Matt Cutts said the goal of the mechanism is to save bandwidth, not to detect cloaked pages. The subtext, I think, is that Google could perfectly well use the same technique with other spiders to detect cloaking. Then again, I may be reading too much into it.
Also, many sites have recently seen a sharp drop in the number of indexed pages, and I suspect it is related to the disruption caused by this new fetching method. The drop is clearly not the result of a ranking-algorithm change but of a crawling problem.
26 nginx reverse proxy - proxy_cache
proxy_cache settings
proxy_cache stores the data fetched from the backend server C on the proxy B (memory plus disk) according to preset rules, so that when client A requests B, B can serve the cached data directly instead of going back to C. A prerequisite for the proxy_cache features to take effect is proxy_buffering on;.
Main proxy_cache directives
1. proxy_cache
Syntax: proxy_cache zone | off;
The default is off, i.e. caching is disabled; zone is the name of the shared-memory zone used to store the cache.
Example: proxy_cache my_zone;
Starting with nginx 0.7.66, once proxy_cache is enabled, nginx inspects the "Cache-Control" and "Expires" headers of the proxied response; for example, when Cache-Control is no-cache, the data is not cached.
2. proxy_cache_bypass
Syntax: proxy_cache_bypass string;
This directive defines the conditions under which a request skips the cache and fetches the resource directly from the backend server.
The string is usually made up of nginx variables.
Example: proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
Meaning: if any of $cookie_nocache, $arg_nocache or $arg_comment is not 0 and not empty, the response is fetched from the backend rather than taken from the cache.
3. proxy_no_cache
Syntax: proxy_no_cache string;
Similar to proxy_cache_bypass, this directive defines the conditions under which the response will not be cached.
Example: proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
Meaning: if any of $cookie_nocache, $arg_nocache or $arg_comment is not 0 and not empty, the response is not saved to the cache.
4. proxy_cache_key
Syntax: proxy_cache_key string;
Defines the cache key, e.g. proxy_cache_key $scheme$proxy_host$uri$is_args$args; (this is the default value and normally does not need to be set).
5. proxy_cache_path
Syntax: proxy_cache_path path [levels=levels] keys_zone=name:size [inactive=time] [max_size=size];
path sets the directory where cached data is stored;
levels sets the directory hierarchy, e.g. levels=1:2 means two levels of subdirectories;
keys_zone sets the name and size of the shared-memory zone, e.g. keys_zone=my_zone:10m;
inactive sets how long cached data remains valid: if data on disk is not accessed within that period it expires and is removed (the default is 10 minutes);
max_size sets the maximum amount of data that may be cached on disk; when the limit is reached, nginx removes the least recently used data.
Example: proxy_cache_path /data/nginx_cache/ levels=1:2 keys_zone=my_zone:10m inactive=300s max_size=5g;
proxy_cache example
Add proxy_cache_path /data/nginx_cache/ levels=1:2 keys_zone=my_zone:10m inactive=300s max_size=5g; to the http block:
vi ../nginx.conf
include mime.types;
default_type application/octet-stream;
server_names_hash_max_size 4096;
log_format main
# combined_realip
'$remote_addr - $remote_user [$time_local] "$request"'
# "$request_uri"
'$status $body_bytes_sent "$http_referer"'
'"$http_user_agent" "$http_x_forwarded_for" $host $server_port';
#access_log logs/access.log main;
sendfile on;
tcp_nopush on;
keepalive_timeout 30;
proxy_cache_path /data/nginx_cache/ levels=1:2 keys_zone=my_zone:10m inactive=300s max_size=5g;
client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
connection_pool_size 256;
client_header_buffer_size 1k;
large_client_header_buffers 8 4k;
output_buffers 4 32k;
postpone_output 1460;
[root@localhost vhost]# mkdir /data/nginx_cache
Add proxy_cache my_zone; to the virtual host configuration:
vi fp.conf
server
{
listen 80;
server_name www.test.com;
access_log /tmp/proxy.log main ;
location /
{
proxy_cache my_zone;
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect http://$host:8080/ /;
}
}
[root@localhost vhost]# ls -l /data/nginx_cache/
total 0
[root@localhost vhost]# ps aux|grep nginx
root 2602 0.0 0.1 36036 1940 ? Ss 14:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody 3045 0.0 0.3 37868 3940 ? S 18:23 0:00 nginx: worker process
nobody 3046 0.0 0.3 37868 3940 ? S 18:23 0:00 nginx: worker process
nobody 3047 0.0 0.1 36036 1448 ? S 18:23 0:00 nginx: cache manager process
root 3058 0.0 0.0 112732 972 pts/1 S+ 18:30 0:00 grep --color=auto nginx
[root@localhost vhost]# ls -l /data/nginx_cache/
total 0
[root@localhost vhost]# curl -x127.0.0.1:80 www.test.com
test.com_8080
Charles Proxy does not save proxy settings
How can I fix Charles Proxy not saving the proxy settings?
I am trying to configure Charles Proxy on my Mac, and I can see in the logs that an exception is thrown while saving the proxy settings on macOS.
Charles version: 4.6.1; macOS version: Catalina
Can someone help me solve this?
java.util.concurrent.ExecutionException: com.xk72.charles.macos.MacOSNative$MacOSProxyHelperException: Could not apply proxy setting
Workaround
No effective solution to this problem has been found yet.
CMPSC 311 Proxy Lab: Writing a Caching Web Proxy
CMPSC 311, Fall 2018
Proxy Lab: Writing a Caching Web Proxy
Assigned: Wed, Nov 14, 2018
Due: Wed, Dec 5, 11:59 PM
Last Possible Time to Turn In: Fri, Dec 07, 11:59 PM
1 Introduction
A Web proxy is a program that acts as a middleman between a Web browser and an end server. Instead of
contacting the end server directly to get a Web page, the browser contacts the proxy, which forwards the
request on to the end server. When the end server replies to the proxy, the proxy sends the reply on to the
browser.
Proxies are useful for many purposes. Sometimes proxies are used in firewalls, so that browsers behind a
firewall can only contact a server beyond the firewall via the proxy. Proxies can also act as anonymizers:
by stripping requests of all identifying information, a proxy can make the browser anonymous to Web
servers. Proxies can even be used to cache web objects by storing local copies of objects from servers then
responding to future requests by reading them out of its cache rather than by communicating again with
remote servers.
In this lab, you will write a simple HTTP proxy that caches web objects. For the first part of the lab, you will
set up the proxy to accept incoming connections, read and parse requests, forward requests to web servers,
read the servers’ responses, and forward those responses to the corresponding clients. This first part will
involve learning about basic HTTP operation and how to use sockets to write programs that communicate
over network connections. In the second part, you will add caching to your proxy using a simple main
memory cache of recently accessed web content.
2 Logistics
This is an individual project.
3 Handout instructions
Download proxylab-handout.tar file from Canvas. Copy the handout file to a protected directory
on the Linux machine where you plan to do your work, and then issue the following command:
linux> tar xvf proxylab-handout.tar
This will generate a handout directory called proxylab-handout. The README file describes the
various files.
4 Part I: Implementing a sequential web proxy
The first step is implementing a basic sequential proxy that handles HTTP/1.0 GET requests. Other request
types, such as POST, are strictly optional.
When started, your proxy should listen for incoming connections on a port whose number will be specified
on the command line. Once a connection is established, your proxy should read the entirety of the request
from the client and parse the request. It should determine whether the client has sent a valid HTTP request;
if so, it can then establish its own connection to the appropriate web server then request the object the client
specified. Finally, your proxy should read the server’s response and forward it to the client.
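To make that flow concrete, here is a minimal sketch in C of the listening side, using plain POSIX sockets rather than the csapp wrappers: parse the port from the command line, build a listening socket, and accept connections in a loop. The helper name open_listenfd and the stubbed-out handle_request are illustrative choices, not names mandated by the handout.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

static int open_listenfd(const char *port) {
    struct addrinfo hints, *list, *p;
    int fd = -1, optval = 1;

    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags    = AI_PASSIVE;              /* bind to any local address */
    if (getaddrinfo(NULL, port, &hints, &list) != 0)
        return -1;

    for (p = list; p; p = p->ai_next) {
        fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0) continue;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &optval, sizeof optval);
        if (bind(fd, p->ai_addr, p->ai_addrlen) == 0) break;   /* success */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(list);
    if (fd < 0 || listen(fd, 1024) < 0) return -1;
    return fd;
}

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <port>\n", argv[0]);
        return 1;
    }
    int listenfd = open_listenfd(argv[1]);
    if (listenfd < 0) {
        fprintf(stderr, "could not listen on port %s\n", argv[1]);
        return 1;
    }
    while (1) {
        struct sockaddr_storage clientaddr;
        socklen_t clientlen = sizeof clientaddr;
        int connfd = accept(listenfd, (struct sockaddr *)&clientaddr, &clientlen);
        if (connfd < 0) continue;                /* a failed accept must not kill the server */
        /* handle_request(connfd);   read, parse, forward, reply -- to be written */
        close(connfd);
    }
}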
4.1 HTTP/1.0 GET requests
When an end user enters a URL such as http://web.mit.edu/index.html into the address bar
of a web browser, the browser will send an HTTP request to the proxy that begins with a line that might
resemble the following:
GET http://web.mit.edu/index.html HTTP/1.1
In that case, the proxy should parse the request into at least the following fields: the hostname, web.mit.edu;
and the path or query and everything following it, /index.html. That way, the proxy can determine that
it should open a connection to web.mit.edu and send an HTTP request of its own starting with a line of
the following form:
GET /index.html HTTP/1.0
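The URI splitting described above might look roughly like the sketch below. The function name parse_uri, the buffer sizes, and returning the port as a string (convenient for getaddrinfo later) are my own illustrative choices; the sketch handles only plain http:// URIs and falls back to port 80 and path "/".

#include <stdio.h>
#include <string.h>

static int parse_uri(const char *uri, char *host, size_t hostlen,
                     char *port, size_t portlen, char *path, size_t pathlen) {
    const char *p = uri;

    if (strncmp(p, "http://", 7) != 0)
        return -1;                              /* only plain http URIs are handled */
    p += 7;

    const char *slash = strchr(p, '/');         /* start of the path, if any   */
    const char *colon = strchr(p, ':');         /* optional port separator     */
    if (colon && slash && colon > slash)
        colon = NULL;                           /* a ':' inside the path, ignore */

    size_t hlen = colon ? (size_t)(colon - p)
                        : slash ? (size_t)(slash - p) : strlen(p);
    if (hlen == 0 || hlen >= hostlen) return -1;
    memcpy(host, p, hlen);
    host[hlen] = '\0';

    if (colon) {
        const char *q = colon + 1;
        size_t plen = slash ? (size_t)(slash - q) : strlen(q);
        if (plen == 0 || plen >= portlen) return -1;
        memcpy(port, q, plen);
        port[plen] = '\0';
    } else {
        snprintf(port, portlen, "80");          /* default HTTP port */
    }

    snprintf(path, pathlen, "%s", slash ? slash : "/");
    return 0;
}

int main(void) {
    char host[256], port[8], path[1024];
    if (parse_uri("http://web.mit.edu:8080/index.html",
                  host, sizeof host, port, sizeof port, path, sizeof path) == 0)
        printf("host=%s port=%s path=%s\n", host, port, path);
    return 0;
}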
Note that all lines in an HTTP request end with a carriage return, ‘\r’, followed by a newline, ‘\n’. Also
important is that every HTTP request is terminated by an empty line: "\r\n".
You should notice in the above example that the web browser’s request line ends with HTTP/1.1, while
the proxy’s request line ends with HTTP/1.0. Modern web browsers will generate HTTP/1.1 requests, but
your proxy should handle them and forward them as HTTP/1.0 requests.
It is important to consider that HTTP requests, even just the subset of HTTP/1.0 GET requests, can be
incredibly complicated. The textbook describes certain details of HTTP transactions, but you should refer
to RFC 1945 for the complete HTTP/1.0 specification. Ideally your HTTP request parser will be fully
robust according to the relevant sections of RFC 1945, except for one detail: while the specification allows
for multiline request fields, your proxy is not required to properly handle them. Of course, your proxy
should never prematurely abort due to a malformed request.
4.2 Request headers
The important request headers for this lab are the Host, User-Agent, Connection, and Proxy-Connection
headers:
Always send a Host header. While this behavior is technically not sanctioned by the HTTP/1.0
specification, it is necessary to coax sensible responses out of certain Web servers, especially those
that use virtual hosting.
The Host header describes the hostname of the end server. For example, to access http://web.
mit.edu/index.html, your proxy would send the following header:
Host: web.mit.edu
It is possible that web browsers will attach their own Host headers to their HTTP requests. If that is
the case, your proxy should use the same Host header as the browser.
You may choose to always send the following User-Agent header:
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.3)
Gecko/20120305 Firefox/10.0.3
The header is provided on two separate lines because it does not fit as a single line in the writeup, but
your proxy should send the header as a single line.
The User-Agent header identifies the client (in terms of parameters such as the operating system
and browser), and web servers often use the identifying information to manipulate the content they
serve. Sending this particular User-Agent: string may improve, in content and diversity, the material
that you get back during simple telnet-style testing.
Always send the following Connection header:
Connection: close
Always send the following Proxy-Connection header:
Proxy-Connection: close
The Connection and Proxy-Connection headers are used to specify whether a connection
will be kept alive after the first request/response exchange is completed. It is perfectly acceptable
(and suggested) to have your proxy open a new connection for each request. Specifying close as
the value of these headers alerts web servers that your proxy intends to close connections after the
first request/response exchange.
For your convenience, the value of the described User-Agent header is provided to you as a string
constant in proxy.c.
Finally, if a browser sends any additional request headers as part of an HTTP request, your proxy should
forward them unchanged.
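Putting the header rules above together, assembling the outgoing HTTP/1.0 request could look like the sketch below. The name build_request and the fixed buffer size are illustrative; extra_hdrs stands in for the remaining browser headers forwarded unchanged, and preferring a browser-supplied Host header over your own is left out for brevity.

#include <stdio.h>

static const char *user_agent_hdr =
    "Mozilla/5.0 (X11; Linux x86_64; rv:10.0.3) Gecko/20120305 Firefox/10.0.3";

static int build_request(char *buf, size_t buflen,
                         const char *host, const char *path,
                         const char *extra_hdrs /* pass-through headers, each ending in \r\n */) {
    int n = snprintf(buf, buflen,
        "GET %s HTTP/1.0\r\n"
        "Host: %s\r\n"
        "User-Agent: %s\r\n"
        "Connection: close\r\n"
        "Proxy-Connection: close\r\n"
        "%s"
        "\r\n",                                 /* empty line terminates the request */
        path, host, user_agent_hdr, extra_hdrs ? extra_hdrs : "");
    return (n < 0 || (size_t)n >= buflen) ? -1 : n;
}

int main(void) {
    char req[8192];
    if (build_request(req, sizeof req, "web.mit.edu", "/index.html",
                      "Accept: */*\r\n") > 0)
        fputs(req, stdout);
    return 0;
}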
4.3 Port numbers
There are two significant classes of port numbers for this lab: HTTP request ports and your proxy’s listening
port.
The HTTP request port is an optional field in the URL of an HTTP request. That is, the URL may be of
the form, http://cse-cmpsc311.cse.psu.edu:8080, in which case your proxy should connect
to the host cse-cmpsc311.cse.psu.edu on port 8080 instead of the default HTTP port, which is port
80. Your proxy must properly function whether or not the port number is included in the URL.
The listening port is the port on which your proxy should listen for incoming connections. Your proxy
should accept a command line argument specifying the listening port number for your proxy. For example,
with the following command, your proxy should listen for connections on port 8081:
linux> ./proxy 8081
You may select any non-privileged listening port (greater than 1,024 and less than 65,536) as long as it
is not used by other processes. Since each proxy must use a unique listening port and many people will
simultaneously be working on each machine, the script port-for-user.pl is provided to help you
pick your own personal port number. Use it to generate a port number based on your user ID:
linux> ./port-for-user.pl droh
droh: 45806
The port, p, returned by port-for-user.pl is always an even number. So if you need an additional
port number, say for the Tiny server, you can safely use ports p and p + 1.
Please don’t pick your own random port. If you do, you run the risk of interfering with another user.
5 Part II: Caching your Requests
For the second part of the lab, you will add a cache to your proxy that stores recently-used Web objects in
memory. HTTP actually defines a fairly complex model by which web servers can give instructions as to
how the objects they serve should be cached and clients can specify how caches should be used on their
behalf. However, your proxy will adopt a simplified approach.
When your proxy receives a web object from a server, it should cache it in memory as it transmits the object
to the client. If another client requests the same object from the same server, your proxy need not reconnect
to the server; it can simply resend the cached object.
Obviously, if your proxy were to cache every object that is ever requested, it would require an unlimited
amount of memory. Moreover, because some web objects are larger than others, it might be the case that
one giant object will consume the entire cache, preventing other objects from being cached at all. To avoid
those problems, your proxy should have both a maximum cache size and a maximum cache object size.
5.1 Maximum cache size
The entirety of your proxy’s cache should have the following maximum size:
MAX_CACHE_SIZE = 16 MB (16777216 Bytes)
When calculating the size of its cache, your proxy must only count bytes used to store the actual web objects;
any extraneous bytes, including metadata, should be ignored.
5.2 Maximum object size
Your proxy should only cache web objects that do not exceed the following maximum size:
MAX_OBJECT_SIZE = 8 MB (8388608 Bytes)
For your convenience, both size limits are provided as macros in proxy.c.
The easiest way to implement a correct cache is to allocate a buffer for the active connection and accumulate
data as it is received from the server. If the size of the buffer ever exceeds the maximum object size, the
buffer can be discarded. If the entirety of the web server’s response is read before the maximum object size
is exceeded, then the object can be cached. Using this scheme, the maximum amount of data your proxy
will ever use for web objects is the following:
MAX_CACHE_SIZE + MAX_OBJECT_SIZE
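A sketch of that buffering scheme, assuming the 8 MB MAX_OBJECT_SIZE given above: forward each chunk to the client as it arrives, accumulate a private copy, and abandon caching once the object outgrows the limit. relay_and_buffer and its signature are illustrative, the demo in main substitutes a pipe for a real server connection, and a full version would also loop on short writes.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MAX_OBJECT_SIZE (8 * 1024 * 1024)

/* Returns a malloc'd copy of the object (caller frees) and sets *objsize,
 * or NULL if the object was too large to cache or an error occurred. */
static char *relay_and_buffer(int serverfd, int clientfd, size_t *objsize) {
    char chunk[8192];
    char *obj = malloc(MAX_OBJECT_SIZE);
    size_t total = 0;
    int cacheable = (obj != NULL);
    ssize_t n;

    while ((n = read(serverfd, chunk, sizeof chunk)) > 0) {
        if (write(clientfd, chunk, n) != n) {    /* client is gone: stop */
            cacheable = 0;
            break;
        }
        if (cacheable && total + (size_t)n <= MAX_OBJECT_SIZE)
            memcpy(obj + total, chunk, (size_t)n);
        else
            cacheable = 0;                       /* object too big: stop accumulating */
        total += (size_t)n;
    }
    if (!cacheable) { free(obj); return NULL; }
    *objsize = total;
    return obj;
}

int main(void) {
    /* Tiny self-test using a pipe in place of the server and stdout as the client. */
    int fds[2];
    if (pipe(fds) != 0) return 1;
    const char *body = "HTTP/1.0 200 OK\r\nContent-length: 5\r\n\r\nhello";
    (void)write(fds[1], body, strlen(body));
    close(fds[1]);

    size_t size = 0;
    char *obj = relay_and_buffer(fds[0], STDOUT_FILENO, &size);
    if (obj) {
        fprintf(stderr, "\ncached %zu bytes\n", size);
        free(obj);
    }
    return 0;
}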
5.3 Eviction policy
Your proxy's cache should employ a least-recently-used (LRU) eviction policy for your sequential
proxy server. Notice that both reading an object from the cache and writing it into the cache
count as using the object.
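One simple way to obtain LRU behavior in a sequential proxy is a fixed array of slots plus a logical clock: every lookup or insertion stamps the entry, and eviction removes the entry with the oldest stamp until the new object fits. The sketch below is illustrative (cache_get, cache_put and MAX_SLOTS are my own names); it uses the size limits from this writeup and is not thread-safe, which is acceptable for a sequential proxy.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CACHE_SIZE  (16 * 1024 * 1024)
#define MAX_OBJECT_SIZE (8 * 1024 * 1024)
#define MAX_SLOTS 64

typedef struct {
    char *key;             /* request URI, e.g. "http://web.mit.edu/" */
    char *data;
    size_t size;
    unsigned long stamp;   /* last-use time (logical clock) */
} slot_t;

static slot_t cache[MAX_SLOTS];
static size_t cache_bytes = 0;
static unsigned long clock_now = 0;

static const char *cache_get(const char *key, size_t *size) {
    for (int i = 0; i < MAX_SLOTS; i++)
        if (cache[i].key && strcmp(cache[i].key, key) == 0) {
            cache[i].stamp = ++clock_now;        /* reading counts as a use */
            if (size) *size = cache[i].size;
            return cache[i].data;
        }
    return NULL;
}

static void evict_lru(void) {
    int victim = -1;
    for (int i = 0; i < MAX_SLOTS; i++)
        if (cache[i].key && (victim < 0 || cache[i].stamp < cache[victim].stamp))
            victim = i;
    if (victim < 0) return;
    cache_bytes -= cache[victim].size;
    free(cache[victim].key);
    free(cache[victim].data);
    memset(&cache[victim], 0, sizeof cache[victim]);
}

static void cache_put(const char *key, const char *data, size_t size) {
    if (size > MAX_OBJECT_SIZE || cache_get(key, NULL)) return;
    while (cache_bytes + size > MAX_CACHE_SIZE) evict_lru();
    int i = 0;
    while (i < MAX_SLOTS && cache[i].key) i++;
    if (i == MAX_SLOTS) { evict_lru(); i = 0; while (cache[i].key) i++; }
    cache[i].key = strdup(key);
    cache[i].data = malloc(size);
    memcpy(cache[i].data, data, size);
    cache[i].size = size;
    cache[i].stamp = ++clock_now;                /* writing counts as a use too */
    cache_bytes += size;
}

int main(void) {
    cache_put("http://web.mit.edu/", "<html>mit</html>", 16);
    cache_put("http://www.bbc.com/", "<html>bbc</html>", 16);
    size_t n = 0;
    const char *hit = cache_get("http://web.mit.edu/", &n);
    printf("hit: %.*s (%zu bytes)\n", (int)n, hit ? hit : "", n);
    return 0;
}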
6 Evaluation
This assignment will be graded out of a total of 55 points:
BasicCorrectness: 30 points for basic proxy operation
Cache: 25 points for a working cache
6.1 Autograding
Your handout materials include an autograder, called driver.sh, that your instructor will use to get
preliminary scores for BasicCorrectness and Cache. From the proxylab-handout directory:
linux> ./driver.sh
You must run the driver on a Linux machine.
The autograder does only simple checks to confirm that your code is acting like a caching proxy. For the
final grade, we will do additional manual testing to see how your proxy deals with real pages. Here is a list
of some pages that still use the http protocol (as of Nov. 14th, 2018) that you can use for testing.
http://web.mit.edu
http://www.espn.com
http://www.bbc.com
http://cse-cmpsc311.cse.psu.edu:8080
6.2 Robustness
As always, you must deliver a program that is robust to errors and even malformed or malicious input.
Servers are typically long-running processes, and web proxies are no exception. Think carefully about how
long-running processes should react to different types of errors. For many kinds of errors, it is certainly
inappropriate for your proxy to immediately exit.
Robustness implies other requirements as well, including invulnerability to error cases like segmentation
faults and a lack of memory leaks and file descriptor leaks.
7 Testing and debugging
Besides the simple autograder, you will not have any sample inputs or a test program to test your implementation.
You will have to come up with your own tests and perhaps even your own testing harness to help
you debug your code and decide when you have a correct implementation. This is a valuable skill in the real
world, where exact operating conditions are rarely known and reference solutions are often unavailable.
Fortunately there are many tools you can use to debug and test your proxy. Be sure to exercise all code paths
and test a representative set of inputs, including base cases, typical cases, and edge cases.
7.1 Tiny web server
Your handout directory includes the source code for the CS:APP Tiny web server. While not as powerful as thttpd,
the CS:APP Tiny web server will be easy for you to modify as you see fit. It’s also a reasonable starting
point for your proxy code. And it’s the server that the driver code uses to fetch pages.
7.2 telnet
As described in your textbook (11.5.3), you can use telnet to open a connection to your proxy and send
it HTTP requests.
7.3 curl
You can use curl to generate HTTP requests to any server, including your own proxy. It is an extremely
useful debugging tool. For example, if your proxy and Tiny are both running on the local machine, Tiny is
listening on port 8080, and proxy is listening on port 8081, then you can request a page from Tiny via your
proxy using the following curl command:
$ curl -v --proxy localhost:8081 http://localhost:8080
* About to connect() to proxy localhost port 8081 (#0)
* Trying ::1... Connection refused
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET http://localhost:8080/ HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8080
> Accept: */*
> Proxy-Connection: Keep-Alive
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Server: Tiny Web Server
< Connection: close
< Content-length: 121
< Content-type: text/html
<
<html>
<head><title>test</title></head>
<body>
<img align="middle" src="godzilla.gif">
Dave O’Hallaron
</body>
</html>
* Closing connection #0
7.4 netcat
netcat, also known as nc, is a versatile network utility. You can use netcat just like telnet, to open
connections to servers. Hence, imagining that your proxy were running on localhost using port 8081,
you can do something like the following to manually test your proxy:
$ nc localhost 8081
GET http://cse-cmpsc311.cse.psu.edu:8080 HTTP/1.1
HTTP/1.0 200 OK
MIME-Version: 1.0
Content-Type: text/html
Content-Length: 40922
....
In addition to being able to connect to Web servers, netcat can also operate as a server itself. With the
following command, you can run netcat as a server listening on port 12345:
sh> nc -l 12345
Once you have set up a netcat server, you can generate a request to a phony object on it through your
proxy, and you will be able to inspect the exact request that your proxy sent to netcat.
7.5 Web browsers
Eventually you should test your proxy using the most recent version of Mozilla Firefox. Visiting About Firefox
will automatically update your browser to the most recent version.
To configure Firefox to work with a proxy, visit
Preferences>Advanced>Network>Settings
It will be very exciting to see your proxy working through a real Web browser. Although the functionality of
your proxy will be limited, you will notice that you are able to browse the vast majority of websites through
your proxy.
An important caveat is that you must be very careful when testing caching using a Web browser. All modern
Web browsers have caches of their own, which you should disable before attempting to test your proxy’s
cache.
8 Handin instructions
The provided Makefile includes functionality to build your final handin file. Issue the following command
from your working directory:
linux> make handin
The output is the file ../proxylab-handin.tar, which you can then handin.
Please make sure that the handin tar file you submit really works. You should download your submitted
version, unpack it in a fresh directory, run make, and test the generated proxy program. This is the last
project of the semester, and you will not have a chance to resubmit if you provide us with a wrong copy.
Submit the proxylab-handin.tar file to Canvas.
Chapters 10-11 of the textbook contain useful information on system-level I/O, network programming,
and the HTTP protocol.
RFC 1945 (http://www.ietf.org/rfc/rfc1945.txt) is the complete specification for the
HTTP/1.0 protocol.
9 Hints
As discussed in Section 10.11 of your textbook, using standard I/O functions for socket input and
output is a problem. Instead, we recommend that you use the Robust I/O (RIO) package, which is
provided in the csapp.c file in the handout directory.
The error-handling functions provided in csapp.c are not appropriate for your proxy, because once a
server begins accepting connections it is not supposed to terminate. You'll need to modify them or
write your own.
You are free to modify the files in the handout directory any way you like. For example, for the sake
of good modularity, you might implement your cache functions as a library in files called cache.c
and cache.h. Of course, adding new files will require you to update the provided Makefile.
As discussed in the Aside on page 964 of the CS:APP3e text, your proxy must ignore SIGPIPE signals
and should deal gracefully with write operations that return EPIPE errors.
Sometimes, calling read to receive bytes from a socket that has been prematurely closed will cause
read to return -1 with errno set to ECONNRESET. Your proxy should not terminate due to this
error either (a minimal sketch of this handling appears after these hints).
Remember that not all content on the web is ASCII text. Much of the content on the web is binary
data, such as images and video. Ensure that you account for binary data when selecting and using
functions for network I/O.
Forward all requests as HTTP/1.0 even if the original request was HTTP/1.1.
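A small sketch of the SIGPIPE/EPIPE/ECONNRESET handling mentioned in the hints above: ignore SIGPIPE so that writing to a dead connection returns an error instead of killing the process, then treat EPIPE and ECONNRESET as "this peer is gone" rather than as fatal. safe_write is an illustrative helper name; the demo provokes EPIPE by writing to a pipe whose read end has been closed.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>

static ssize_t safe_write(int fd, const void *buf, size_t n) {
    ssize_t rc = write(fd, buf, n);
    if (rc < 0 && (errno == EPIPE || errno == ECONNRESET)) {
        fprintf(stderr, "peer closed the connection; dropping it (%s)\n",
                strerror(errno));
        return -1;              /* caller should abandon this connection only */
    }
    return rc;
}

int main(void) {
    signal(SIGPIPE, SIG_IGN);   /* without this, the EPIPE below would be a fatal signal */

    int fds[2];
    if (pipe(fds) != 0) return 1;
    close(fds[0]);              /* simulate the peer disappearing */

    if (safe_write(fds[1], "hello", 5) < 0)
        fprintf(stderr, "proxy keeps running\n");
    return 0;
}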
Good luck!
ES6 proxies (Proxy) and reflection (Reflection)
Definitions of proxy and reflection
Calling new Proxy() creates a proxy that stands in for another target object; the proxy virtualizes the target, so the two appear to be functionally identical.
A proxy can intercept the low-level object operations the JS engine performs on the target; when such a low-level operation is intercepted, a trap function specific to that operation is triggered.
The reflection API is exposed as the Reflect object, whose methods have the same default behavior as the underlying operations that proxies can override; every proxy trap has a corresponding Reflect method with the same name and parameters.
Usage
Basic usage
let target = {};
let p = new Proxy(target, {});
p.a = 37; // the operation is forwarded to the target
console.log(target.a); // 37: the operation was forwarded correctly
Using get, set, has, and deleteProperty
- The get() trap intercepts property read operations
- The set() trap intercepts property write operations
- The has() trap can be seen as a hook for the in operator
- The deleteProperty() trap intercepts delete operations on object properties
let target = {
    name: 'target',
    color: 'blue',
    size: 50,
    skill: 'drink'
}
let proxy = new Proxy(target, {
    set: function(trapTarget, key, value, receiver) {
        // leave existing properties alone so they are not affected
        if(!trapTarget.hasOwnProperty(key)) {
            if(isNaN(value)) {
                throw new TypeError('The property must be a number!')
            }
        }
        // set the property
        return Reflect.set(trapTarget, key, value, receiver)
    },
    get: function(trapTarget, key, receiver) {
        if(!(key in receiver)) {
            throw new TypeError(key + ' does not exist!')
        }
        return Reflect.get(trapTarget, key, receiver)
    },
    has: function(trapTarget, key) {
        if(key === 'color') {
            return false
        } else {
            return Reflect.has(trapTarget, key)
        }
    },
    deleteProperty: function(trapTarget, key) {
        if(key === 'skill') {
            return false
        } else {
            return Reflect.deleteProperty(trapTarget, key)
        }
    }
});
// add a new property
proxy.count = 1
console.log(target.count) // 1
proxy.name = 'proxy'
console.log(proxy.name) // proxy
console.log(target.name) // proxy
proxy.anotherName = 'proxy' // throws: The property must be a number!
console.log(proxy.age) // throws: age does not exist!
console.log('name' in proxy) // true
console.log('color' in proxy) // false
console.log('size' in proxy) // true
let result1 = delete proxy.size
console.log('size' in proxy) // false
console.log('skill' in proxy) // true
let result2 = delete proxy.skill
console.log('skill' in proxy) // true
Reference
Understanding ECMAScript 6 (《深入理解ES6》)
That concludes today's discussion of the Google crawl caching proxy; we hope you found it useful. To learn more about 26 nginx reverse proxy - proxy_cache, Charles Proxy not saving proxy settings, CMPSC 311 Proxy Lab: Writing a Caching Web Proxy, ES6 proxies (Proxy) and reflection (Reflection), and related topics, you can search this site.