
MSSQL 2000 Error 823 Data Recovery Case (what does SQL error 823 mean)



In this article we walk through an MSSQL 2000 error 823 data recovery case and explain what SQL error 823 means. It is followed by Part 1 of the series "2,000 high-frequency special words in software development, with selected example sentences".


MSSQL 2000 Error 823 Data Recovery Case (what does SQL error 823 mean)

I. Fault Description

When attaching a database to MS SQL Server 2000, error 823 is raised and the attach fails. The database has no backup, so it cannot be restored from one; the data in it urgently needs to be recovered.

II. Fault Analysis

SQL Server error 823 is generally caused by damage to the database's physical pages: sometimes the page is still present but its verification value (checksum) is corrupted, so the page can no longer be recognized; the same 823 error can also be reported when pages go missing after a power failure or a file-system problem.

III. Recovery Process

1. Attempt to attach the database and then repair it

Figure 1: MSSQL 2000 error 823 data recovery case (screenshot)
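For reference, on SQL Server 2000 the attach in step 1 is normally performed with sp_attach_db (or CREATE DATABASE ... FOR ATTACH); when the data file contains damaged pages, this is the statement that raises error 823. The sketch below only illustrates that call; the database name and file paths are hypothetical.

    -- Hypothetical attach attempt on SQL Server 2000; name and paths are examples only.
    -- On a data file with damaged pages, this call is where error 823 is reported.
    EXEC sp_attach_db
        @dbname    = N'RecoveredDB',
        @filename1 = N'D:\Data\RecoveredDB_Data.MDF',
        @filename2 = N'D:\Data\RecoveredDB_Log.LDF'
    GO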

2. Inspect the database files with the 北亚 (Beiya) MSSQL file inspection tool

Figure 2: MSSQL 2000 error 823 data recovery case (screenshot)

3. Calculate and rewrite the checksum values of the damaged data pages

4. Re-attach the database; the attach succeeds

Figure 3: MSSQL 2000 error 823 data recovery case (screenshot)

5. Check the database with DBCC

Figure 4: MSSQL 2000 error 823 data recovery case (screenshot)
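The DBCC check in step 5 is a plain consistency scan. A minimal form of it on SQL Server 2000 looks like the sketch below (the database name is a placeholder); NO_INFOMSGS and ALL_ERRORMSGS limit the output to the allocation and consistency errors that matter here.

    -- Consistency check only; reports allocation/consistency errors, no informational messages.
    DBCC CHECKDB (N'RecoveredDB') WITH NO_INFOMSGS, ALL_ERRORMSGS
    GO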

6. Repair the errors reported above and run the DBCC check again

Figure 5: MSSQL 2000 error 823 data recovery case (screenshot)
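The repair-and-recheck pass in step 6 is not spelled out in the original, but a common way to run it on SQL Server 2000 is sketched below: switch the database to single-user mode, run DBCC CHECKDB with a repair option (REPAIR_REBUILD first, REPAIR_ALLOW_DATA_LOSS only as a last resort because it may discard damaged pages), then re-run the plain check and return the database to multi-user mode. The database name is again a placeholder.

    -- Generic repair sketch, not the exact commands used in this case.
    ALTER DATABASE RecoveredDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    GO
    -- REPAIR_ALLOW_DATA_LOSS may discard damaged pages; try REPAIR_REBUILD first.
    DBCC CHECKDB (N'RecoveredDB', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS
    GO
    -- Verify the result and hand the database back to normal use.
    DBCC CHECKDB (N'RecoveredDB') WITH NO_INFOMSGS
    ALTER DATABASE RecoveredDB SET MULTI_USER
    GO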

IV. Result

The SQL Server 823 error was fully resolved: the database now attaches normally, DBCC checks report it as consistent, and the data was completely recovered.

Figure 6: MSSQL 2000 error 823 data recovery case (screenshot)

2,000 High-Frequency Special Words in Software Development, with Selected Example Sentences (Part 1)


superword is an English word analysis program implemented in Java. It mainly studies patterns of phonetic and orthographic similarity between English words, prefix and suffix patterns, similarity relationships between words, and so on.

1. Matching text for the word hadoop:


  1. Subash D'Souza is a professional software developer with strong expertise in crunching big data using Hadoop/HBase with Hive/Pig. Apache Flume Distributed Log Collection for Hadoop


  2. The growing pains you’re facing have been faced by other people and there are systems to handle it: Hadoop and HBase. Monitoring with Ganglia


  3. BigTable, Hadoop, SimpleDB, and memcached restrict object lookups by key for exactly these reasons. CouchDB The Definitive Guide


  4. The success of MR led to a wide range of third-party implementations (notably open-source Hadoop [15]), and a number of hybrid systems that combine parallel DBMSs with MR, offered by vendors like Aster, Cloudera, Greenplum, and Vertica. Dremel - Interactive Analysis of WebScale Datasets


  5. As such, Hadoop has similar limitations as MapReduce when developing multi-stage pipelines. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  6. Edward Capriolo is a member of the Apache Software Foundation and a committer for the Hadoop-Hive project. Cassandra High Performance Cookbook


  7. This sort of dependency is similar in principle to the compile-time dependency you might have on any API you are programming, like what you might see if you are writing MapReduce jobs for execution on a Hadoop cluster or writing enterprise integration code with Apache Camel. Gradle Beyond The Basics


  8. At Facebook, Hadoop has traditionally been used in conjunction with Hive for storage and analysis of large data sets. Apache Hadoop Goes Realtime at Facebook


  9. Chapter 12, Integrating Hadoop In this chapter, written by Jeremy Hanna, we put Cassandra in a larger context and see how to integrate it with the popular implementation of Google’s Map/Reduce algorithm, Hadoop. Cassandra The Definitive Guide


  10. Outside Google, projects such as Apache Hadoop have been gaining popularity rapidly. Tenzing - A SQL Implementation On The MapReduce Framework



2. Matching text for the word http:


  1. To have full digital access to this book and others on similar topics from O’Reilly and other publishers, sign up for free at http://my.safaribooksonline.com. Getting Started with CouchDB


  2. That’s a promise! So make sure to visit the book’s website at http://books.couchdb.org/relax to keep up-to-date. CouchDB The Definitive Guide


  3. Downloading the example code for this book You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. Cassandra High Performance Cookbook


  4. Ceph is licensed under the LGPL and is available at http://ceph.sourceforge.net/ Ceph A Scalable, High-Performance Distributed File System


  5. For information on translations, please e-mail info@apress.com, or visit http://www.apress.com. Beginning CouchDB


  6. For more information about our books, courses, conferences, and news, see our website at http://www.oreilly.com. Scaling CouchDB


  7. It allows you to send data over HTTP in JSON to index, search, and manage your Elasticsearch cluster. Elasticsearch in Action MEAP V07 cHAPTER 1


  8. A request to Elasticsearch consists of the same parts as any HTTP request. Elasticsearch The Definitive Guide


  9. Finding Out More If you’d like to find out more about Cassandra, and to get the latest updates, visit this book’s companion website at http://www.cassandraguide.com. Cassandra The Definitive Guide


  10. It discusses ActiveMQ URIs and all the transport connectors for both client-to-broker and broker-to-broker communications including TCP, NIO, STOMP, failover, SSL, HTTP/S, and much more. ActiveMQ in Action



3. Matching text for the word api:


  1. An HTTP-based REST API makes communicating with the database easier, because so many modern environments are capable of talking HTTP. Getting Started with CouchDB


  2. The CouchDB API is designed to provide a convenient but thin wrapper around the database core. CouchDB The Definitive Guide


  3. In addition to this, the book you are holding will guide you through ElasticSearch Java API, showing how to use it, not only when it comes to CRUD operations but also when it comes to cluster and indices maintenance and manipulation. Mastering ElasticSearch


  4. The CLI is a helpful interactive facade on top of the Cassandra API. Cassandra High Performance Cookbook


  5. Another breakthrough came in April 2006 when it was announced that CouchDB would be solely accessible via an HTTP-based RESTful API. Beginning CouchDB


  6. Since CouchDB uses an HTTP API, standard HTTP load balancing software or hardware can be used. Scaling CouchDB


  7. Even though Elasticsearch is written in Java, there's more than just a Java API that lets you work with it. Elasticsearch in Action MEAP V07 cHAPTER 1


  8. More information about the Java clients can be found in the Java API section of the Guide. Elasticsearch The Definitive Guide


  9. We present an overview of what’s different about Cassandra’s model for querying and updating data, and then get to work using the API. Cassandra The Definitive Guide


  10. Heterogeneous application integration—The ActiveMQ broker is written using the Java language, so naturally a Java client API is provided. ActiveMQ in Action



4. Matching text for the word xml:


  1. So what do you do if your data contains a newline character, like some XML? Using a sequence file can solve this problem because it uses non-printable characters for delimiters. Apache Flume Distributed Log Collection for Hadoop


  2. Shows A show converts a single document into another format, usually HTML, although you can output the document in any format, including JSON or XML, to suit your. Getting Started with CouchDB


  3. An XML-format dump of the entire cluster state can be requested by a remote poller from any single node in the cluster on port 8649 Monitoring with Ganglia


  4. The design doc fields show and list contain functions used to transform raw JSON into HTML, XML, or other Content-Types. CouchDB The Definitive Guide


  5. Many commercial DBMSs support storage of nested data using XML (e.g., [19]) Dremel - Interactive Analysis of WebScale Datasets


  6. In the XML world he is known as the developer of DB Generator for the Apache Cocoon project, the open source projects DBPrism and DBPrism CMS, the Lucene-Oracle integration by using Oracle JVM Directory implementation and in the Restlet.org project the Oracle XDB Restlet Adapter, an alternative to write native REST web services inside the database-resident JVM. Mastering ElasticSearch


  7. You can use the preceding build.xml with the Free-Form project type. Cassandra High Performance Cookbook


  8. It is simple to map data from an existing database or XML to a JSON structure. Beginning CouchDB


  9. This might include reducing or reorganizing joins, throwing out resource-intensive features such as XML processing within a stored procedure, and so forth. Cassandra The Definitive Guide


  10. ActiveMQ also supports the use of Apache Camel within the broker’s XML configuration file. ActiveMQ in Action



5. Matching text for the word solr:


  1. SOLr: a rich Set of Interfaces to Lucene The SOLR project, http://lucene.apache.org/solr/, is a stand-alone, enterprise-grade search service built on top of Lucene. Pro Hadoop


  2. He is also a co-founder of the solr.pl site which publishes information and tutorials about Solr and Lucene library and is the co-author of the ElasticSearch Server book published by Packt Publishing. Mastering ElasticSearch


  3. Solr is a popular frontend that provides a web service for Lucene as well as caching warming and other advanced capabilities. Cassandra High Performance Cookbook


  4. The name is a unique label for the module itself, and is often the same as the project name (like solr-core, spring-web, or junit) Gradle Beyond The Basics


  5. If you've heard about Lucene before, you've probably also heard about Apache Solr, which is also an open-source, distributed search engine based on it. Elasticsearch in Action MEAP V07 cHAPTER 1


  6. In this chapter we'll be using Apache Lucene-based Apache Solr and ElasticIndex for indexing purposes. Hadoop MapReduce Cookbook


  7. If the reducer decides that all of the queue-ids for a message-id create a complete message path, then the message is serialized and queued for the SolrOutputFormat. Hadoop:The Definitive Guide 3rd Edition


  8. Common choices are the Apache Lucene-based solutions, such as Lucene itself, or Solr, a high-performance enterprise search server.‡ Similar to the indexing solutions, there are a few possible approaches: HBase:The Definitive Guide


  9. Comprehensive information on Apache Solr 3 with examples and tips so you can focus on the important parts. Apache Kafka


  10. Integration with Apache Lucene/Solr is available to store external indexes and perform fast global searches. Cassandra The Definitive Guide



6. Matching text for the word hbase:


  1. Subash D'Souza is a professional software developer with strong expertise in crunching big data using Hadoop/HBase with Hive/Pig. Apache Flume Distributed Log Collection for Hadoop


  2. The growing pains you’re facing have been faced by other people and there are systems to handle it: Hadoop and HBase. Monitoring with Ganglia


  3. The subsystems should transfer data from different data sources to Hadoop-compatible data storage systems such as HDFS and HBase. Hadoop Operations and Cluster Management Cookbook


  4. Hoya is a Java-tool designed to leverage YARN to spin up dynamic HBase clusters[21] on demand. Apache Hadoop YARN Yet Another Resource Negotiator


  5. Installing HBase HBase is a highly scalable NoSQL data store that supports columnar-style data storage. Hadoop MapReduce Cookbook


  6. A single master node runs the master HDFS, MapReduce, and HBase daemons. Hadoop in Practice


  7. Given our high confidence in the reliability and scalability of HDFS, we began to explore Hadoop and HBase for such applications. Apache Hadoop Goes Realtime at Facebook


  8. ZooKeeper is often a major component in other Hadoop-related projects, such as HBase and Katta. Hadoop in Action


  9. Graph databases such as Neo4J and the set of databases derived at least in part from the design of Google’s Bigtable database (such as MongoDB, HBase, Hypertable, and Redis) all are focused slightly less on Availability and more on ensuring Consistency and Partition Tolerance. Cassandra The Definitive Guide


  10. Another member of the broader Hadoop family is HBase, an open-source implementation of another Google technology. Hadoop Beginners Guide



7. Matching text for the word hdfs:


  1. After that we’ll deep-dive into the details on many of the more useful Flume components, including the very important File Channel for persistence of in-flight data records and the HDFS Sink for buffering and writing data into HDFS, the Hadoop Distributed File System. Apache Flume Distributed Log Collection for Hadoop


  2. This file is organized into different contexts: jvm, rpc, hdfs, mapred, and hbase. Monitoring with Ganglia


  3. As Hadoop’s fault tolerance improved, persistent HDFS clusters became the norm. Apache Hadoop YARN Yet Another Resource Negotiator


  4. Chapter 11, Hadoop and Cassandra: Hadoop is a distributed file system, HDFS that provides high throughput and redundant storage and MapReduce, a software framework for distributed processing of large data sets on compute clusters. Cassandra High Performance Cookbook


  5. Chapter 2, Advanced HDFS, introduces a set of advanced HDFS operations that would be useful when performing large-scale data processing with Hadoop MapReduce as well as with non-MapReduce use cases. Hadoop MapReduce Cookbook


  6. After you’re used to thinking in MapReduce, the next challenge is typically related to the logistics of working with Hadoop, such as how to move data in and out of HDFS, and effective and efficient ways to work with data in Hadoop. Hadoop in Practice


  7. As such, there has been less emphasis on making Hadoop performant for random access workloads by providing low latency access to HDFS. Apache Hadoop Goes Realtime at Facebook


  8. An appendix contains a listing of HDFS commands along with their descriptions and usage. Hadoop in Action


  9. Because Hadoop has some unfamiliar terminology, here are some useful definitions: HDFS. Cassandra The Definitive Guide


  10. Parts of Hadoop The top-level Hadoop project has many component subprojects, several of which we'll discuss in this book, but the two main ones are Hadoop Distributed File System (HDFS) and MapReduce. Hadoop Beginners Guide



8. Matching text for the word mysql:


  1. He has worked with Perl/ PHP/Python, primarily for coding and MySQL/Oracle as the backend, for several years prior to moving into Hadoop fulltime. Apache Flume Distributed Log Collection for Hadoop


  2. Formerly as a technical writer, he spent time building both the documentation system and writing content for MySQL and the MySQL groups within Sun and then Oracle. Getting Started with CouchDB


  3. You should have PHP support for MySQL installed on your gweb server before attempting to configure this support. Monitoring with Ganglia


  4. We've moved a large and critical application suite from MySQL to F1 F1 - The Fault-Tolerant Distributed RDBMS


  5. FlumeJava’s parallel collections abstract away the details of how data is represented, including whether the data is represented as an in-memory data structure, as one or more files, or as an external storage service such as a MySql database or a Bigtable [5] FlumeJava - Easy, Efficient Data-Parallel Pipelines


  6. About Packt Publishing Packt, pronounced 'packed', published its first book "Mastering phpMyAdmin for Effective MySQL Management" in April 2004 and subsequently continued to specialize in publishing highly focused books on specific technologies and solutions. Cassandra High Performance Cookbook


  7. The whole process involved running Map/Reduce[7] jobs against the MySQL data files, indexing them and then storing the reverse-index in Cassandra. Cassandra A Decentralized Structured Storage System


  8. For several decades now, relational database management systems have reigned supreme in application and database development, with the likes of Oracle, SQL Server, and MySQL being used in every type of software application imaginable. Beginning CouchDB


  9. Once you start to scale many other data stores (MySQL, Bigtable), some nodes need to be set up as masters in order to organize other nodes, which are set up as slaves. Cassandra The Definitive Guide


  10. Here’s an example of an ActiveMQ configuration for the JDBC message store using the MySQL database: ActiveMQ in Action



9. Matching text for the word apache:


  1. Then I was introduced to Apache CouchDB, and I rediscovered the passion I had when developing applications on the Electron. Getting Started with CouchDB


  2. Apache CouchDB is one of a new breed of database management systems. CouchDB The Definitive Guide


  3. Nevertheless, I hope that by reading this book you'll be able to easily get into all the details about ElasticSearch and underlying Apache Lucene, and I also hope that it will let you get the desired knowledge easier and faster. Mastering ElasticSearch


  4. Over 150 recipes to design and optimize large-scale Apache Cassandra deployments. Cassandra High Performance Cookbook


  5. Apache CouchDB is an exciting database management system that is steadily improving with each new version. Beginning CouchDB


  6. Instead, we will focus on installing and configuring the Apache HTTP Server as a load balancer. Scaling CouchDB


  7. It's open-source, distributed and it's built on top of Apache Lucene, an open-source search engine library, which allows you to implement search functionality in your own Java application. Elasticsearch in Action MEAP V07 cHAPTER 1


  8. It did not come out from any of the Apache releases yet. Namenode High Availability


  9. Cassandra was accepted into the Apache Incubator, and by the time it graduated in March 2010, it had become a true open source success story, with committers from Rackspace, Digg, Twitter, and other companies that wouldn’t have written their own database from scratch, but together built something important. Cassandra The Definitive Guide


  10. Chapter 11 presents advanced features provided by ActiveMQ such as wildcards and composite destinations, advisory messages, virtual topics, some info about ActiveMQ plug-ins, and an introduction to message routing with Apache Camel. ActiveMQ in Action



10. Matching text for the word gradle:


  1. Gradle has already amassed a large audience, and we’ve seen tremendous growth over the last two years, but we’re getting ready for a still larger influx of end-users. Gradle in Action


  2. The conference is visited by 1000 Java developers and he got the chance to educate some of them about the greatness of Gradle and Groovy. Gradle Effective Implementation Guide


  3. The build tool you are going to use in this book is Gradle (http://gradle.org) Pro Spring MVC


  4. His current topics of research center around project automation: build tools (Maven, Leiningen, Gradle), distributed version control (Git), Continuous Integration (Jenkins) and Quality Metrics (Sonar) Jenkins The Definitive Guide


  5. Chances are you might be using it already or have switched to a more modern system such as Maven or Gradle. Sonar Code Quality Testing Essentials


  6. The first thing that is worth mentioning is that the current versions of Spring Security use Gradle as the build tool. Pro Spring Security


  7. Gradle Beyond the Basics, the image of a Belgian shepherd dog, and related trade dress are trademarks of O’Reilly Media, Inc. Gradle Beyond The Basics


  8. It is straightforward to use another build tool, such as Gradle or Ant with Ivy, as long as you use the same set of dependencies defined here.) For building MapReduce jobs you only need to have the hadoop-core dependency, which contains all the Hadoop classes. Hadoop:The Definitive Guide 3rd Edition


  9. Additionally, you will need to have Maven or Gradle installed to pull in the dependencies required to work with the available sample code. Getting Started with Google Guava


  10. Building and Testing with Gradle, the image of a bush wren, and related trade dress are trademarks of O’Reilly Media, Inc. Building and Testing with Gradle



11. Matching text for the word url:


  1. Depending on your platform and installation type, your browser may have already been opened with the correct link; if not, you can access Futon, the built-in administration interface to CouchDB by going to the URL: http://127.0.0.1:5984/_utils/index.html Futon is a web-based interface to the main functionality in CouchDB and provides support for editing the configuration information, creating databases, documents, design documents (and therefore views, shows and lists) and starting and requesting replication. Getting Started with CouchDB


  2. To use it, invoke the URL along with key/value pairs that define events. Monitoring with Ganglia


  3. It is followed by the URL that specifies your CouchDB IP address and port. CouchDB The Definitive Guide


  4. ElasticSearch assumes that data is sent in the URL, or as the request body as JSON document (http://en.wikipedia.org/wiki/JSON) Mastering ElasticSearch


  5. Open your favorite web browser, and navigate to the URL http://127.0.0.1:5984/_utils. Beginning CouchDB


  6. Send an If-None-Match HTTP header containing the Etag’s value with subsequent requests to the same URL (Etags are only valid for a given URL) Scaling CouchDB


  7. This method is often referred to as a query string search, since we pass the search as a URL query string parameter: Elasticsearch The Definitive Guide


  8. Next, go to the GitHub page hosting the project you’re interested in and find the project’s Git URL. Cassandra The Definitive Guide


  9. If you don’t have the Sun J2SE installed and you’re using Linux, Solaris, or Windows, download and install it from the following URL: http://www.oracle.com/technetwork/java/javase/downloads/ index.html. ActiveMQ in Action


  10. It provides aggregate statistics, such as the number of unique visitors per day and the page views per URL per day, as well as site-tracking reports, such as the percentage of users that made a purchase, given that they earlier viewed a specific page. Bigtable - A Distributed Storage System for Structured Data



12. Matching text for the word schema:


  1. Just like in a database that requires a schema, there are some conventions and field types that you should consider including in your documents. Getting Started with CouchDB


  2. We’ll show you how CouchDB’s schema-free document model is a better fit for common applications, how the built-in query engine is a powerful way to use and process your data, and how CouchDB’s design lends itself to modularization and scalability. CouchDB The Definitive Guide


  3. Each document may have a different set of fields; there is no schema or imposed structure. Mastering ElasticSearch


  4. Chapter 6, Schema Design: The Cassandra data model is designed for storing large amounts of data across many nodes. Cassandra High Performance Cookbook


  5. Although the system supports the notion of multiple tables all deployments have only one table in their schema. Cassandra A Decentralized Structured Storage System


  6. Although the strict schema-based structure these databases adhere to is required in some web applications, such as transaction systems, it is not a good fit for many modern web projects, such as blogs, wikis, and discussion forums, which by their nature are a better fit to a document-oriented database. Beginning CouchDB


  7. CouchDB is a schema-less database, giving you much flexibility in designing the document boundaries for your data. Scaling CouchDB


  8. Every type has its own mapping or schema definition, which defines the data structure for documents of that type, much like the columns in a database table. Elasticsearch The Definitive Guide


  9. It is highly available and offers a schema-free data model. Cassandra The Definitive Guide


  10. The JDBC message store uses a schema consisting of three tables. ActiveMQ in Action



13. Matching text for the word metadata:


  1. The HEAD method returns metadata about the document or other object being accessed within the HTTP headers returned. Getting Started with CouchDB


  2. The actual result is JSON-encoded and contains a little more metadata: CouchDB The Definitive Guide


  3. Recipes cover how the CLI is used to make changes to the metadata such as key spaces, column families, and cache settings. Cassandra High Performance Cookbook


  4. Ceph maximizes the separation between data and metadata management by replacing allocation tables with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clusters of unreliable object storage devices (OSDs) Ceph A Scalable, High-Performance Distributed File System


  5. The metadata about the ranges a node is responsible is cached locally at each node and in a fault-tolerant manner inside Zookeeper - this way a node that crashes and comes back up knows what ranges it was responsible for. Cassandra A Decentralized Structured Storage System


  6. In addition to data fields, each document includes metadata that is maintained by the CouchDB server itself, such as a revision number and more. Beginning CouchDB


  7. A document can be a blog post with all its metadata, a user with all its metadata, or any other type of data you plan to search for. Elasticsearch in Action MEAP V07 cHAPTER 1


  8. As storage clusters grow to thousands of devices or more, consistent management of data placement, failure detection, and failure recovery places an increasingly large burden on client, controller, or metadata directory nodes, limiting scalability. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  9. The following list shows only the commands related to metadata and configuration; there are other commands for getting and setting values that we explore later. Cassandra The Definitive Guide


  10. The headers provide metadata about the message used by both clients and JMS providers. ActiveMQ in Action



14. Matching text for the word mongodb:


  1. Other sinks are available on the Internet that can be used to write data to MongoDB, Cassandra, RabbitMQ, Redis, and just about any other data store you can think of. Apache Flume Distributed Log Collection for Hadoop


  2. Like MongoDB itself, this book is very straightforward and approachable. MongoDB The Definitive Guide 2nd Edition


  3. Hadoop was a logical choice since it a widely used system, but we hope that users of MongoDB’s MapReduce and other MapReduce implementations will be able to extrapolate the examples in this text to their particular system of choice. MapReduce Design Patterns


  4. The goal of Drill is to create an interactive analysis platform for Big Data using a standard SQL-supporting relational database management system (RDBMS), Hadoop, and other NoSQL implementations (including Cassandra and MongoDB) Professional Hadoop Solutions


  5. Exporting data from HDFS into MongoDB This recipe will use the MongoOutputFormat class to load data from an HDFS instance into a MongoDB collection. Hadoop RealWorld Solutions Cookbook


  6. You can do that with Elasticsearch, and it works especially well with document-oriented stores like MongoDB, because data is represented in Elasticsearch as documents, too. Elasticsearch in Action MEAP V07 cHAPTER 1


  7. MongoDB also provides auto-sharding capabilities to manage failover and node balancing. Cassandra The Definitive Guide


  8. The format of the row key is an account ID (which is a MongoDB ObjectId) and a content item ID (with a MD5 hash of the URL of the content item being tracked) separated by an underscore which we split on to provide that data in the result set. Programming Hive


  9. That MongoDB also based its query language on JSON lent a high degree of comfort and harmony to the usage of this new database. MongoDB in Action


  10. These systems can also be grouped with the NoSQL databases (MongoDB [27], to name one of many): both offer higher performance and scale better than traditional databases, but provide weaker semantics. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



15. Matching text for the word jvm:


  1. The downside of using this channel is that an agent failure (hardware problem, power outage, JVM crash, Flume restart, and so on) results in loss of data. Apache Flume Distributed Log Collection for Hadoop


  2. Java Virtual Machine Metrics The sFlow Java Virtual Machine (JVM) metrics (Table 8-3) are based on the metrics exposed through the Java Management Extensions (JMX) interface, ensuring consistency with existing JMX-based monitoring systems. Monitoring with Ganglia


  3. Groovy is a language for the Java Virtual Machine (JVM), such as Java and Scala. Gradle Effective Implementation Guide


  4. In the XML world he is known as the developer of DB Generator for the Apache Cocoon project, the open source projects DBPrism and DBPrism CMS, the Lucene-Oracle integration by using Oracle JVM Directory implementation and in the Restlet.org project the Oracle XDB Restlet Adapter, an alternative to write native REST web services inside the database-resident JVM. Mastering ElasticSearch


  5. In addition to JVM internals, applications can maintain their own counters and provide operations that the user can trigger remotely. Cassandra High Performance Cookbook


  6. The File constructor will create an absolute path out of the supplied parameter, but the constructor argument will be considered as if it is relative to the directory in which the JVM started up.1 This directory may change if you are invoking Gradle directly, through the wrapper, through an IDE, or through integration with a Continuous Integration server. Gradle Beyond The Basics


  7. It would work, but would need a lot of RAM to hold copies of essentially the same data, and would create many more data structures that the JVM would have to spend time garbage collecting. Elasticsearch The Definitive Guide


  8. We pass the file locations to the JVM by pasting the following code at the bottom of this file. Cassandra The Definitive Guide


  9. The SSL cipher suites for the ActiveMQ SSL transport are provided by the JVM. ActiveMQ in Action


  10. Groovy is a dynamic language of the JVM, similar in many respects to Java, but with some important differences. Building and Testing with Gradle



16. Matching text for the word plugin:


  1. While not covered in the Flume documentation, it is common to enable JMX in your Flume JVM (http://bit.ly/javajmx) and use the Nagios JMX plugin (http://bit.ly/nagiosjmx) to alert on performance abnormalities in your Flume agents. Apache Flume Distributed Log Collection for Hadoop


  2. Collectively, these enable Nagios to query metric information from gweb instead of relying on remote execution systems such as Nagios Service Check Acceptor (NSCA) and Nagios Remote Plugin Executor (NRPE) Monitoring with Ganglia


  3. We’ll explore the use of the Java plugin for standardized and nonconventional use cases and examine productivity tools for fast development turnaround. Gradle in Action


  4. We will see how we can decouple the definition and usage of a custom task and plugin into separate source files. Gradle Effective Implementation Guide


  5. Finally, we will take a deep look at ElasticSearch extensions by developing a custom river plugin for data indexing and a custom analysis plugin for data analysis during query and index time. Mastering ElasticSearch


  6. Using the Cassandra maven plugin With maven, it is simple to create a software project that has Cassandra support built in. Cassandra High Performance Cookbook


  7. The base build from which we will derive FileCollection examples apply plugin: 'java' Gradle Beyond The Basics


  8. BookKeeper Journal Manager (BKJM) is NameNode plugin implementation, involves BK client to read/write to/from BK cluster. Namenode High Availability


  9. The Consumer is executed using the maven-exec-plugin (http://mng.bz/bf7g) by passing it some system properties as arguments using the exec.args property. ActiveMQ in Action


  10. In Gradle, a plugin may provide configurable build actions by introducing one or more tasks, but it is fundamentally a means of extending the build DSL to include the domain encompassed by the plug-in. Building and Testing with Gradle



17. Matching text for the word sql:


  1. He also has experience optimizing SQL queries for database interactions. Apache Flume Distributed Log Collection for Hadoop


  2. By working with the grain and letting CouchDB do this for us, we save ourselves a tremendous amount of CPU cycles that would otherwise have been spent serializing object graphs from SQL, converting them into domain objects, and using those objects to do application-level validation. CouchDB The Definitive Guide


  3. Second, Dremel provides a high-level, SQL-like language to express ad hoc queries. Dremel - Interactive Analysis of WebScale Datasets


  4. Using analogies from the SQL world, index is something similar to a database. Mastering ElasticSearch


  5. Her interests and experiences lie across multiple tiers, from web frontend GUI to middleware, from middleware to backend SQL RDBMS and NoSQL data storage. Cassandra High Performance Cookbook


  6. He introduced the concept of JavaScript frameworks and Ajax to the company’s web division in 2008, developing a PL/SQL and JavaScript framework that is now used in every Core web product. Beginning CouchDB


  7. You can deploy a search engine on top of a relational database, to create indexes and speed up the SQL queries. Elasticsearch in Action MEAP V07 cHAPTER 1


  8. However, our core data model is not restricted to our particular RPC framework; we also trace activities such as SMTP sessions in Gmail, HTTP requests from the outside world, and outbound queries to SQL servers. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  9. It is much like the INSERT keyword in SQL except that, if the document already exists, then the new document would replace the old. Elasticsearch The Definitive Guide


  10. An important one is the Structured Query Language (SQL), which is feature-rich and uses a simple, declarative syntax. Cassandra The Definitive Guide



18. Matching text for the word implementations:


  1. Some custom implementations are covered should your needs necessitate it. Apache Flume Distributed Log Collection for Hadoop


  2. Implementations In addition to gmetric, there are several other metric-generating utilities that are available. Monitoring with Ganglia


  3. This header is often useful in web development when there are known errors in client implementations that a server might want to prepare the response for. CouchDB The Definitive Guide


  4. The success of MR led to a wide range of third-party implementations (notably open-source Hadoop [15]), and a number of hybrid systems that combine parallel DBMSs with MR, offered by vendors like Aster, Cloudera, Greenplum, and Vertica. Dremel - Interactive Analysis of WebScale Datasets


  5. We learned how to choose the right Lucene Directory implementation and which of the available implementations is the right choice in our environment. Mastering ElasticSearch


  6. Other JVM implementations and ports exist for other operating systems. Cassandra High Performance Cookbook


  7. This behavior is explicitly captured by the readdirplus [31] extension, which returns lstat results with directory entries (as some OS-specific implementations of getdir already do) Ceph A Scalable, High-Performance Distributed File System


  8. Hence we do not need to deal with or handle the concurrency issues that exist in B-Tree based database implementations. Cassandra A Decentralized Structured Storage System


  9. The strategy is outlined in the common abstract class, allowing different implementations of an algorithm (different strategies for accomplishing the same work) Cassandra The Definitive Guide


  10. The JMS API minimizes the amount of enterprise messaging knowledge a Java programmer is required to possess in order to develop complex messaging applications, while still maintaining a certain amount of portability across JMS provider implementations. ActiveMQ in Action



19. Matching text for the word osgi:


  1. Preventing any kind of dependencies between the modules allows the new module systems, such as OSGi, to dynamically reload module B without reloading module A so long as the appropriate framework hooks are in place and all communications between the modules A and B happen via the core-api module. Scala in Depth


  2. Gradle provides support for Java, Groovy, Scala, Web, and OSGi projects, out of the box. Gradle Effective Implementation Guide


  3. However, SpringSource also hosts its own Maven repository and provides Spring libraries in the form of Enterprise Bundle Repositories (EBRs), which are OSGi compatible. Pro Spring 3


  4. Imagine being able to replace the PDF parser with a commercially licensed alternative, upgrade the plain text parser for a version with the latest bug fixes, and add a new parser for a custom file format without having to restart or even reload any other components of the system! This is what the OSGi framework makes possible. Tika in Action


  5. Chapter 4 says more about this topic, but for now, you just need to know that Spring Batch can run anywhere the Spring Framework can run: in its own Java process, in a web container, in an application, or even in an Open Services Gateway initiative (OSGi) container. Spring Batch in Action


  6. However, I’d noticed the enterprise world’s interest in OSGi, so I too was interested. Spring Dynamic Modules in Action


  7. The application server is easy to use (ZIP installer and NetBeans/ Eclipse/IntelliJ integration), lightweight (downloads starting at 37 MB, small disk/ memory footprint), and modular (OSGi-based, containers start on demand) Java EE 7 Essentials


  8. This provides a series of architectural advantages, such as an OSGi-based component model and being able to run JRockit Mission Control both as a standalone application and inside the Eclipse IDE. Oracle JRockit The Definitive Guide


  9. Chapter 17 presents the main principles behind the Open Services Gateway initiative (OSGi) and demonstrates how its approach to modularity and its service registry can be utilized in Spring Integration applications. Spring Integration in Action


  10. Because add-ons are OSGi components, we spend time detailing enough of OSGi to be dangerous, and then we dive right in and create three add-ons: a Norwegian language addon, a Roo wrapper add-on to expose a non-OSGi JAR to the Roo system, and a “Simple” add-on to provide jQuery support. Spring Roo in Action



20. Matching text for the word dependencies:


  1. To install CouchDB for the first time, including any dependencies: Getting Started with CouchDB


  2. For other versions of Mac OS X, the dependencies might vary. Monitoring with Ganglia


  3. The appeal of this approach is that CouchDB can serve the whole application without dependencies on a complex environment such as might be maintained on a production web server. CouchDB The Definitive Guide


  4. Its core comes as a single file of Java library with no dependencies, and allows you to index documents and search them with its out of the box full text search capabilities. Mastering ElasticSearch


  5. As a result, you do not have to run a list of targets sequentially; the dist target will run its dependents such as compile and init and their dependencies in proper order. Cassandra High Performance Cookbook


  6. Even if your system’s package manager does not include CouchDB itself in its repository, it's quite likely that it does feature some of Couch’s dependencies (Erlang, ICU, Spidermonkey, libcurl, and so on) Beginning CouchDB


  7. Dapper traces were used to assess the total query cost, and prompted an effort to redesign operations in order to minimize load on their system dependencies. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  8. The use of a separate building ensures both that the backup will survive building damage, and that the backups introduce no cyclic dependencies in the system; a GFS cell in the same building potentially might rely on the Chubby cell for electing its master. Chubby - The lock service for loosely-coupled distributed systems


  9. Because Ivy takes care of all the dependencies, it’s easy to build Cassandra once you have the source. Cassandra The Definitive Guide


  10. Loosely coupled architectures, on the other hand, exhibit fewer dependencies, making them better at handling unforeseen changes. ActiveMQ in Action



21. Matching text for the word runtime:


  1. You will need to include the flume-ng-sdk JAR in the classpath of your Java application at runtime to use Flume''s Log4J Appender. Apache Flume Distributed Log Collection for Hadoop


  2. Megastore emphasizes cost-transparent APIs with runtime costs that match application developers'' intuitions. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  3. So rather than reinvent a new type of modular interface, why not just reuse a tried and true interface? Of course, the fact that gmond is built on top of the Apache Portability Runtime (APR) libraries made the Apache way of implementing a modular interface an obvious fit. Monitoring with Ganglia


  4. We use CouchDB’s JavaScript runtime’s built-in toJSON function in our equality test, which is better than testing for raw equality. CouchDB The Definitive Guide


  5. Please remember that the field data cache is not built during indexing, but can be rebuilt while querying and because of that we can change filtering during runtime by updating the fieldata section using the mappings API. Mastering ElasticSearch


  6. Of course, it would be much nicer if you could define the URL and title you want to add to the database at runtime. Beginning CouchDB


  7. Their implementations are connected to the Sawzall runtime and interact with system internals, managing low-level data formats and distributed computations. Interpreting the Data Parallel Analysis with Sawzall


  8. Dapper penetration can be assessed in two dimensions: the fraction of production processes that can generate Dapper traces (i.e., those that are linked with Dapperinstrumented runtime libraries) and the fraction of production machines running Dapper’s trace collection daemon. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  9. It’s often useful to contextualize events at runtime against other events that might be related in order to infer some conclusion to support business decision making. Cassandra The Definitive Guide


  10. The Apache Geronimo application server is a fully certified Java EE 5 runtime that uses many leading open source projects for various aspects of the application server. ActiveMQ in Action



22. Matching text for the word jenkins:


  1. This chapter discusses the installation and configuration procedures needed to run Gradle on Jenkins, an open-source CI server. Gradle in Action


  2. Over 80 recipes to maintain, secure, communicate, test, build, and improve the software development process with Jenkins. Jenkins Continuous Integration Cookbook


  3. Then, we are going to see how the continuous integration servers Jenkins, JetBrains TeamCity, and Atlassian Bamboo support Gradle. Gradle Effective Implementation Guide


  4. Allows you to run SBT build actions from Jenkins and lets you configure SBT using Jenkins. Scala in Action


  5. Seven years ago, I wrote the first line of code that started this whole project that is now known as Jenkins, and was originally called Hudson. Jenkins The Definitive Guide


  6. There are two implementations you can choose from: Jenkins and Murmur. Hadoop in Practice


  7. The encoding of the region names for the on-disk directories is also different: they use a Jenkins hash to encode the region name. HBase:The Definitive Guide


  8. His current topics of research center around project automation, including: build tools (Gradle, Leiningen, Maven, Ant), distributed version control (Git, Mercurial), testing frameworks (Geb, Spock, JUnit, TestNG, Mockito), continuous integration (Jenkins, Hudson, Bamboo) and code quality metrics (Sonar, CodeNarc, PMD) Building and Testing with Gradle


  9. Jenkins Tool used for OpenStack development to run jobs automatically. OpenStack Operations Guide


  10. The first book to include detailed screenshots and recipes for using Jenkins continuous integration server (formerly known as Hudson) Selenium 2 Testing Tools Beginner's Guide



23. Matching text for the word couchdb:


  1. Nutshell Handbook, the Nutshell Handbook logo, and the O’Reilly logo are registered trademarks of O’Reilly Media, Inc., Getting Started with CouchDB, the cover image of a hawk’s bill sea turtle, and related trade dress are trademarks of O’Reilly Media, Inc. Getting Started with CouchDB


  2. CouchDB: The Definitive Guide, the image of a Pomeranian dog, and related trade dress are trademarks of O’Reilly Media, Inc. CouchDB The Definitive Guide


  3. For example, CouchDB uses a combination of counter and content hash. NoSQL Distilled


  4. Apache CouchDB is an exciting database management system that is steadily improving with each new version. Beginning CouchDB


  5. Scaling CouchDB, the image of a chough, and related trade dress are trademarks of O’Reilly Media, Inc. Scaling CouchDB


  6. In fact, some document stores (like CouchDB) have made this a key point of their value proposition: documents can be held in a multimaster database that automatically replicates concurrently accessed, contended state across instances without undue interference from the user. Graph Databases


  7. CouchDB is a document-oriented database—a distributed keyvalue store that encodes values in JSON. SQL Antipatterns


  8. This is not entirely surprising, as its creator, Damien Katz, worked on Lotus Notes at IBM before deciding to embark on this project, with the idea that it would be a database ”of the Web.” The documents stored by CouchDB do not need to share the same schema, and query capabilities are made available via views, which are constructed as JavaScript functions. Cassandra The Definitive Guide


  9. For example, CouchDB (which is generally considered as MongoDB’s biggest “competitor”) doesn’t support dynamic queries. The Definitive Guide to MongoDB


  10. As of this writing, the only well-known document database apart from MongoDB is Apache’s CouchDB. MongoDB in Action



24. Matching text for the word cpu:


  1. By working with the grain and letting CouchDB do this for us, we save ourselves a tremendous amount of CPU cycles that would otherwise have been spent serializing object graphs from SQL, converting them into domain objects, and using those objects to do application-level validation. CouchDB The Definitive Guide


  2. You are only limited by resources such as memory, CPU time, and hard disk space. Cassandra High Performance Cookbook


  3. More recent distributed file systems have adopted architectures based on object-based storage, in which conventional hard disks are replaced with intelligent object storage devices (OSDs) which combine a CPU, network. Ceph A Scalable, High-Performance Distributed File System


  4. The salient feature of Scuttlebutt is that it has very efficient CPU utilization and very efficient utilization of the gossip channel. Cassandra A Decentralized Structured Storage System


  5. This could be through added memory, a faster CPU, or larger hard drives. Scaling CouchDB


  6. Ideal maximum throughput: • In practice, network bandwidth or cpu limits the total capacity in bytes. BookKeeper


  7. OSDs constructed from commodity components combine a CPU, network interface, and local cache with an underlying disk or RAID, and replace the convention block-based storage interface with one based on named, variable-length objects. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  8. But with big enough from values, the sorting process can become very heavy indeed, using vast amounts of CPU, memory and bandwidth. Elasticsearch The Definitive Guide


  9. The least disruptive thing (for the programming staff, anyway) is to vertically scale by adding CPU, adding memory, and getting faster hard drives, but if you continue to be successful and add more customers, at some point (perhaps into the tens of millions of rows), you’ll likely have to start thinking about how you can add more machines. Cassandra The Definitive Guide


  10. It’s also advisable to reduce the CPU load per connection. ActiveMQ in Action



25. Matching text for the word bytes:


  1. The body is an array of bytes that contains the actual payload. Apache Flume Distributed Log Collection for Hadoop


  2. For CPU stats, for example, the value represents a percentage, and network stats interpret the value as a raw number of bytes. Monitoring with Ganglia


  3. The Content-Length header simply tells us how many bytes the response body has. CouchDB The Definitive Guide


  4. The number of segments an index will contain is proportional to the logarithm of the next size in bytes. Mastering ElasticSearch


  5. Storing large objects The row data stored in Cassandra is typically smaller in size, between a few bytes to a few thousand bytes. Cassandra High Performance Cookbook


  6. File and directory metadata in Ceph is very small, consisting almost entirely of directory entries (file names) and inodes (80 bytes) Ceph A Scalable, High-Performance Distributed File System


  7. Ideal maximum throughput: • In practice, network bandwidth or cpu limits the total capacity in bytes. BookKeeper


  8. This sorts by directly comparing the bytes, validating that the input can be parsed as US-ASCII. Cassandra The Definitive Guide


  9. Efficiency—Since this connector uses the OpenWire protocol to convert messages to a stream of bytes (and back), it’s very efficient in terms of network usage and performance. ActiveMQ in Action


  10. The map is indexed by a row key, column key, and a timestamp; each value in the map is an uninterpreted array of bytes. Bigtable - A Distributed Storage System for Structured Data



26. Matching text for the word lucene:


  1. Nevertheless, I hope that by reading this book you'll be able to easily get into all the details about ElasticSearch and underlying Apache Lucene, and I also hope that it will let you get the desired knowledge easier and faster. Mastering ElasticSearch


  2. His background includes a wide range of programming languages (including Java and C#), search engines (including Lucene), databases, and NoSQL distributed data stores. Cassandra High Performance Cookbook


  3. It's open-source, distributed and it's built on top of Apache Lucene, an open-source search engine library, which allows you to implement search functionality in your own Java application. Elasticsearch in Action MEAP V07 cHAPTER 1


  4. In this chapter we'll be using Apache Lucene-based Apache Solr and ElasticIndex for indexing purposes. Hadoop MapReduce Cookbook


  5. Your data might be XML files sitting behind a number of FTP servers, text log files sitting on a central web server, or Lucene indexes1 in HDFS. Hadoop in Practice


  6. Elasticsearch uses Lucene internally for all of its indexing and search, but it. Elasticsearch The Definitive Guide


  7. Does not work well with other Lucene constructs, like FieldCache (by doc info) elasticsearch. The Road to a Distributed, (Near) Real Time, Search Engine


  8. Hadoop started out as a subproject of Nutch , which in turn was a subproject of Apache Lucene. Hadoop in Action


  9. Integration with Apache Lucene/Solr is available to store external indexes and perform fast global searches. Cassandra The Definitive Guide


  10. Doug started work on the implementations of these Google systems, and Hadoop was soon born, firstly as a subproject of Lucene and soon was its own top-level project within the Apache open source foundation. Hadoop Beginners Guide



27. Matching text for the word redis:


  1. Other sinks are available on the Internet that can be used to write data to MongoDB, Cassandra, RabbitMQ, Redis, and just about any other data store you can think of. Apache Flume Distributed Log Collection for Hadoop


  2. Redis is good at queue operations, and occasionally you can use memcached for this purpose, too. High Performance MySQL 3rd Edition


  3. Twitter is built atop several open source protocols, primarily HTTP, Thrift, Memcached, MySQL, and Redis. Netty in Action MEAP V08


  4. In a realworld application you’d include the logic for increasing the user points, say on a Redis database. RabbitMQ in Action


  5. This example is a basic means for writing to a number of Redis instances in parallel from MapReduce. MapReduce Design Patterns


  6. For example, Riak allows you to add metadata to aggregates for indexing and interaggregate links, Redis allows you to break down the aggregate into lists or sets. NoSQL Distilled


  7. Key-value stores You can use applications such as Memcached or Redis for external caching. bk-admin-guide-cloud-latest


  8. There is also a recipe that shows how to use Redis to join data from an external data store. Hadoop RealWorld Solutions Cookbook


  9. Graph databases such as Neo4J and the set of databases derived at least in part from the design of Google’s Bigtable database (such as MongoDB, HBase, Hypertable, and Redis) all are focused slightly less on Availability and more on ensuring Consistency and Partition Tolerance. Cassandra The Definitive Guide


  10. Redis Cookbook, the image of the mouse opossum, and related trade dress are trademarks of O’Reilly Media, Inc. Redis Cookbook



28. Matching text for the word html:


  1. I won''t cover this here, but if you need it, there is an example in the Flume User Guide (http://flume.apache.org/FlumeUserGuide.html) Apache Flume Distributed Log Collection for Hadoop


  2. Depending on your platform and installation type, your browser may have already been opened with the correct link; if not, you can access Futon, the built-in administration interface to CouchDB by going to the URL: http://127.0.0.1:5984/_utils/index.html Futon is a web-based interface to the main functionality in CouchDB and provides support for editing the configuration information, creating databases, documents, design documents (and therefore views, shows and lists) and starting and requesting replication. Getting Started with CouchDB


  3. The source and target members of our replication request are actually links (like in HTML) and so far we’ve seen links relative to the server we’re working on (hence local) CouchDB The Definitive Guide


  4. One of the examples of character mapper is HTML tags removal process. Mastering ElasticSearch


  5. He dabbled with web development in the late 1990s and was an expert in HTML, CSS, and JavaScript by the time he finished school. Beginning CouchDB


  6. Change into the log directory and generate the HTML and graph reports using the tsung_stats.pl script package with Tsung: Scaling CouchDB


  7. In this book, we'll stick with the popular formats: JSON for HTML communication and YAML for configuration. Elasticsearch in Action MEAP V07 cHAPTER 1


  8. This contains a snippet of text from the about field with the matching words wrapped in HTML tags: Elasticsearch The Definitive Guide


  9. If you’re interested, you can read the complete Avro specification at http://avro.apache.org/docs/current/spec.html, though it is definitely not required to work with Cassandra. Cassandra The Definitive Guide


  10. If you don’t have the Sun J2SE installed and you’re using Linux, Solaris, or Windows, download and install it from the following URL: http://www.oracle.com/technetwork/java/javase/downloads/ index.html. ActiveMQ in Action



29. Matching text for the word metrics:


  1. This mode is good for things like metrics where gaps can easily be tolerated, as new data is just a second away. Apache Flume Distributed Log Collection for Hadoop


  2. As you can see from the help page, the first version of ganglia allowed you to query and sort by 21 different system metrics right out of the box. Monitoring with Ganglia


  3. Using the JDepend plugin To get quality metrics for our code base, we can use JDepend. Gradle Effective Implementation Guide


  4. We expose various system level metrics to Ganglia and this has helped us understand the behavior of the system when subject to our production workload. Cassandra A Decentralized Structured Storage System


  5. Every key in the response contains the same metrics: current, count, mean, max, min, stddev, and description. Beginning CouchDB


  6. This makes it easy for the owners of shared services to rank their users in terms of various metrics (e.g., inbound network load, outbound network load, or total time spent servicing requests) Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. Hector directly supports JMX by exposing metrics such as bad connections, available connections, idle connections, and more. Cassandra The Definitive Guide


  8. As with any other infrastructure software, it’s important for developers and administrators to be able to monitor broker metrics during runtime and notice any suspicious behavior that could possibly impact messaging clients. ActiveMQ in Action


  9. We log various facts and metrics for each of these queries. PowerDrill - Processing a Trillion Cells per Mouse Click


  10. His current topics of research center around project automation, including: build tools (Gradle, Leiningen, Maven, Ant), distributed version control (Git, Mercurial), testing frameworks (Geb, Spock, JUnit, TestNG, Mockito), continuous integration (Jenkins, Hudson, Bamboo) and code quality metrics (Sonar, CodeNarc, PMD) Building and Testing with Gradle
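
Several of these examples report the same handful of summary statistics (count, mean, min, max, stddev). As a rough, self-contained illustration of how such numbers are computed, here is a small Java sketch; the class name and the sample values are made up for the example.

    import java.util.Arrays;

    public class MetricsSummary {
        public static void main(String[] args) {
            double[] samples = {12.0, 15.5, 9.2, 20.1, 14.3};   // hypothetical measurements

            int count = samples.length;
            double min = Arrays.stream(samples).min().getAsDouble();
            double max = Arrays.stream(samples).max().getAsDouble();
            double mean = Arrays.stream(samples).average().getAsDouble();

            // population standard deviation: sqrt(sum((x - mean)^2) / count)
            double sumSq = Arrays.stream(samples).map(x -> (x - mean) * (x - mean)).sum();
            double stddev = Math.sqrt(sumSq / count);

            System.out.printf("count=%d min=%.2f max=%.2f mean=%.2f stddev=%.2f%n",
                    count, min, max, mean, stddev);
        }
    }

Real monitoring systems such as Ganglia or JMX exporters expose these values continuously rather than computing them over a fixed array, but the arithmetic is the same.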



30. Matching text for the word cassandra:


  1. Other sinks are available on the Internet that can be used to write data to MongoDB, Cassandra, RabbitMQ, Redis, and just about any other data store you can think of. Apache Flume Distributed Log Collection for Hadoop


  2. Because of high performance, a small codebase, and compact resulting data, there is a wide adoption of Avro not only in the Hadoop community, but also by many other NoSQL implementations (including Cassandra) Professional Hadoop Solutions


  3. Over 150 recipes to design and optimize large-scale Apache Cassandra deployments. Cassandra High Performance Cookbook


  4. Cassandra aims to run on top of an infrastructure of hundreds of nodes (possibly spread across different data centers) Cassandra A Decentralized Structured Storage System


  5. The company looked at various options, including RDBMS, in-memory counters, and Cassandra, before settling on HBase. HBase in Action


  6. He closely works with Big Data technologies, such as Hadoop and Cassandra daily. Hadoop MapReduce Cookbook


  7. Custom IndexReader and IndexWriter to work on top of Cassandra. elasticsearch. The Road to a Distributed, (Near) Real Time, Search Engine


  8. Cassandra: The Definitive Guide, the image of a Paradise flycatcher, and related trade dress are trademarks of O’Reilly Media, Inc. Cassandra The Definitive Guide


  9. Get the best out of Cassandra using this efficient recipe bank. Hadoop Beginners Guide


  10. Programming Hive introduces Hive, an essential tool in the Hadoop ecosystem that provides an SQL (Structured Query Language) dialect for querying data stored in the Hadoop Distributed Filesystem (HDFS), other filesystems that integrate with Hadoop, such as MapR-FS and Amazon’s S3 and databases like HBase (the Hadoop database) and Cassandra. Programming Hive



31. Matching text for the word mapper:


  1. Usually, each mapper is assigned a single block of a file as input (unless you have used certain compression codecs) Apache Flume Distributed Log Collection for Hadoop


  2. One of the examples of character mapper is HTML tags removal process. Mastering ElasticSearch


  3. The Map phase then invokes a user-defined function, the Mapper, on each element, independently and in parallel. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  4. The mapper takes a column and breaks it into tokens (individual words) using StringTokenizer, a class that splits strings on common tokens such as spaces and columns. Cassandra High Performance Cookbook


  5. The code has three parts—mapper, reducer, and the main program. Hadoop MapReduce Cookbook


  6. The mapper splits the line into distinct words, and outputs each word (the key) along with the word's originating filename (the value) Hadoop in Practice


  7. Each phase is defined by a data processing function, and these functions are called mapper and reducer, respectively. Hadoop in Action


  8. Readers familiar with MapReduce programs will notice how familiar this mapper looks. Cassandra The Definitive Guide


  9. If small enough, the secondary table is pulled into the memory of each mapper / reducer process for inmemory lookups, which typically is the fastest method for joining. Tenzing - A SQL Implementation On The MapReduce Framework


  10. Note that each mapper simply outputs a series of individual key/value outputs; these are combined into a key and list of values in the shuffle method. Hadoop Beginners Guide
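
Examples 4 and 6 above describe the same pattern: a map function that tokenizes its input and emits key/value pairs. A minimal word-count mapper against the Hadoop mapreduce API might look roughly like the sketch below; it is only an illustration, and the class name is invented.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Emits (word, 1) for every token found in a line of input.
    public class TokenCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, ONE);   // one output pair per token
            }
        }
    }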



32. Matching text for the word filesystem:


  1. The problem with HDFS and streaming data/logs HDFS isn't a real filesystem, at least not in the traditional sense, and many of the things we take for granted with normal filesystems don't apply here, for example being able to mount it. Apache Flume Distributed Log Collection for Hadoop


  2. Change to the actual mount point of a SAN filesystem. Monitoring with Ganglia


  3. For single documents, the request then gets passed to the database module where the data for the document is looked up on the filesystem and returned all the way up again. CouchDB The Definitive Guide


  4. Our access to the cluster was too brief to debug and characterize the middling performance from the 2.1.0 filesystem. Apache Hadoop YARN Yet Another Resource Negotiator


  5. In case of ElasticSearch it is usually better to use the new IO-based system store instead of the simple filesystem store. Mastering ElasticSearch


  6. Most of the software will be installed to various locations throughout the Linux filesystem, so where exactly you download and compile the applications is not important. Beginning CouchDB


  7. For example, a call to fileTree() might scan the filesystem at the time it is called, producing an immutable list that the build can then manipulate. Gradle Beyond The Basics


  8. At startup time, the HDFS NameNode reads filesystem metadata from a file called the fsimage file. Apache Hadoop Goes Realtime at Facebook


  9. This might mean optimizing the channels the database uses to write to the underlying filesystem. Cassandra The Definitive Guide


  10. The colon at the beginning of the project name indicates the root of the project tree, in a similar way that a forward slash indicates the root directory in a Unix filesystem, or a backslash indicates the root directory on Windows. Building and Testing with Gradle
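
Because most of the quotes above are about HDFS rather than a local filesystem, the following sketch lists a directory through Hadoop's FileSystem abstraction; the /logs path is just a placeholder and the configuration is whatever core-site.xml provides.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListHdfsDirectory {
        public static void main(String[] args) throws Exception {
            // fs.defaultFS decides whether this talks to HDFS or the local filesystem.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            for (FileStatus status : fs.listStatus(new Path("/logs"))) {   // hypothetical directory
                System.out.printf("%s\t%d bytes\t%s%n",
                        status.isDirectory() ? "dir" : "file",
                        status.getLen(),
                        status.getPath());
            }
        }
    }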



33. Matching text for the word json:


  1. The format is also self-describing using JSON, making for a good long-term data storage format, as your data format may evolve over time. Apache Flume Distributed Log Collection for Hadoop


  2. In many cases, your application can use the JSON object structure directly without having to serialize your objects or data into the fixed format required by the database engine. Getting Started with CouchDB


  3. Case Study Greg Borenstein, a friend and coworker, built a small library for converting Songbird playlists to JSON objects and decided to store these in CouchDB as part of a backup application. CouchDB The Definitive Guide


  4. From the client point of view, document is a JSON object (see more about JSON format at http://en.wikipedia.org/wiki/JSON) Mastering ElasticSearch


  5. Chapter 9, Coding and Internals: This chapter covers programming recipes that go beyond the typical application programmer interface, including building Cassandra from source, creating custom types for use with Cassandra, and modifying tools like the JSON export tools. Cassandra High Performance Cookbook


  6. In the second part of this book, you will learn about Futon, CouchDB’s excellent web-based admin interface, and then be introduced to the JSON format, which is used to describe the data in CouchDB. Beginning CouchDB


  7. Replication can be run from Futon, CouchDB’s web administration console, or by sending a POST request to _replicate containing a JSON object with replication parameters. Scaling CouchDB


  7. It allows you to send data over HTTP in JSON to index, search, and manage your Elasticsearch cluster. Elasticsearch in Action MEAP V07 Chapter 1


  9. Any optional query string parameters, eg ?pretty will pretty-print the JSON response to make it easier to read. Elasticsearch The Definitive Guide


  10. It also has scripts for converting SSTables (the datafiles) to JSON and back. Cassandra The Definitive Guide
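
As a small, concrete illustration of the JSON documents these quotes keep referring to, the sketch below builds and parses an object with the Jackson library; the field names are arbitrary.

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.fasterxml.jackson.databind.node.ObjectNode;

    public class JsonRoundTrip {
        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();

            // Build a document programmatically and serialize it to a JSON string.
            ObjectNode doc = mapper.createObjectNode();
            doc.put("title", "CouchDB");
            doc.put("year", 2010);
            String json = mapper.writeValueAsString(doc);
            System.out.println(json);   // {"title":"CouchDB","year":2010}

            // Parse the string back into a tree and read one field.
            JsonNode parsed = mapper.readTree(json);
            System.out.println(parsed.get("title").asText());
        }
    }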



34. Matching text for the word annotations:


  1. Code annotations accompany many of the code listings and highlight important concepts. Gradle in Action


  2. Gradle scans the test classpath for all class files and checks if they have specific JUnit or TestNG annotations. Gradle Effective Implementation Guide


  3. To eliminate programmer annotations that help identify dependencies, we exploit the structure of MapReduce computations. Incoop MapReduce for Incremental Computations


  4. A plug-in that creates WSDL files from an annotated Java web service implementation should not include its own code for scanning for annotations and generating content in a specialized XML vocabulary, but should instead declare a dependency on an existing library to do that work, and provide a mechanism for that library to be fetched from an online repository if it is not already present on the build system. Gradle Beyond The Basics


  5. In commands and source code, we make extensive use of bold text; and annotations draw your attention to the important aspects of listings. HBase in Action


  6. Our instrumentation is restricted to a low enough level in the software stack that even largescale distributed systems like Google web search could be traced without additional annotations. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. Code annotations accompany many of the listings, highlighting important concepts. Hadoop in Action


  8. Kundera (Java ORM) Kundera is an object-relational mapping (ORM) implementation for Cassandra written using Java annotations. Cassandra The Definitive Guide


  9. Components of Hadoop that wish to produce metrics implement the MetricsSource interface or use a set of simple Java annotations; those that wish to receive and process metric data implement the MetricsSink interface. Hadoop Operations


  10. This facilitates compiled tests being able to reference JUnit or TestNG annotations or base classes. Building and Testing with Gradle
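
The JUnit and TestNG annotations mentioned above are ordinary Java annotations placed on methods or classes; a test runner (or Gradle) finds them by scanning compiled classes. A minimal JUnit 4 example:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class CalculatorTest {

        // The @Test annotation is what the runner looks for when scanning the test classpath.
        @Test
        public void additionWorks() {
            assertEquals(4, 2 + 2);
        }
    }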



35. Matching text for the word servlet:


  1. An example of such an exception type is the ServletException. Core Java Volume I Fundamentals 9th Edition


  2. Some of the additional modules that are available from this repository include modules for monitoring an Apache Web Server, MySQL database, and Xen virtual machine, as well as Tomcat and Jetty servlet monitoring through JMX. Monitoring with Ganglia


  3. As the name Servlet may already indicate, it serves a client request and constructs the response. Gradle in Action


  4. Finally, we will add a web project with a Java servlet class that uses classes from the services project. Gradle Effective Implementation Guide


  5. In a servlet or JavaServer Page (JSP) that you write in a SPNEGO-protected web application, you can get the user information passed in the HTTP Kerberos SPNEGO simply by calling getUserPrincipal() and getRemoteUser() on the HttpRequest object in the same way that you would get the principal information if the user used username/password authentication, or even digital certificate authentication. Professional Hadoop Solutions


  6. What’s interesting with the implementation of this servlet is that it redirects the actual file download to one of the DataNodes that contains the first block of the file. Hadoop in Practice


  7. Finally, coprocessors can be chained, very similar to what the Java Servlet API does with request filters. HBase:The Definitive Guide


  8. Similarly, as servlet authors need not implement the low-level details of socket I/O, event handling loops, and complex thread coordination, MapReduce developers program to a well-defined, simple interface and the “container” does the heavy lifting. Hadoop Operations


  9. The first type of application server implements the Java Servlet specification (http://mng.bz/cmMj) and is known as a web container. ActiveMQ in Action


  10. Most socket-based web services (for example, HTTP daemons, servlet engines, and application servers) are multithreaded. Concurrent Programming in Java Design Principles and Pattern
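
As several of the quotes put it, a servlet serves a client request and constructs the response. A bare-bones sketch against the javax.servlet API (the class name and response text are made up):

    import java.io.IOException;

    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Responds to GET requests with a short plain-text greeting.
    public class HelloServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            response.setContentType("text/plain");
            response.getWriter().println("Hello from HelloServlet");
        }
    }

A servlet container such as Jetty or Tomcat maps a URL pattern to this class and handles the socket and threading details, which is the "container does the heavy lifting" point made in example 8.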



36. Matching text for the word jdbc:


  1. Using the JDBC channel is a bad idea as it would introduce a bottleneck and single point of failure instead of what should be designed as a highly distributed system. Apache Flume Distributed Log Collection for Hadoop


  2. I don’t want to bore you with repetitive details of a class that uses JDBC, so I won’t discuss the code in detail. Gradle in Action


  3. Instead of moving data between clusters, Sqoop was designed to move data from and into relational databases using a JDBC driver to connect. Hadoop RealWorld Solutions Cookbook


  4. Alternatively, you can place the database in a directory outside of your development directory and edit the JDBC URL to point to the correct path, but this step is left as an exercise for the reader. Gradle Beyond The Basics


  5. Discussion MapReduce contains DBInputFormat and DBOutputFormat classes, which can be used to read and write data from databases via JDBC. Hadoop in Practice


  6. Conceptually this is analogous to using JDBC to execute SQL queries. Hadoop in Action


  7. Instead of just abstracting the database interactions from the developer’s point of view, the way JDBC does, an entirely different mechanism is used. Cassandra The Definitive Guide


  8. You can of course use any RDBMS for which a JDBC driver is available, but if you do so, you'll need to modify the aspects of this chapter that require direct interaction with the database server. Hadoop Beginners Guide


  9. For even greater performance, Sqoop supports database-specific plug-ins that use native features of the RDBMS rather than incurring the overhead of JDBC. Hadoop Operations


  10. For example, ActiveMQ offers its own style of ultra-fast message persistence via KahaDB, but also supports standard JDBC-accessible databases. ActiveMQ in Action
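
The JDBC usage these quotes assume follows one common shape: obtain a Connection from a driver URL, prepare a statement, and iterate over a ResultSet. A hedged sketch in which the URL, credentials, and table are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class JdbcQueryExample {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:mysql://localhost:3306/testdb";   // hypothetical database

            try (Connection conn = DriverManager.getConnection(url, "user", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT id, name FROM employees WHERE id = ?")) {
                ps.setInt(1, 42);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + " -> " + rs.getString("name"));
                    }
                }
            }
        }
    }

Sqoop, covered later in this list, drives the same API through each vendor's driver JAR, which is why a driver is all it needs to talk to a new database.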



37. Matching text for the word parser:


  1. If there is an error message in a log file that indicates that Jetty could not start its web server, there is a nonvalidating XML parser in the classpath ahead of the validating XML parser that Hadoop Core supplies. Pro Hadoop


  2. A typical example of a transitive dependency would be the XML parser library Xerces that requires the XML APIs library to function correctly. Gradle in Action


  3. In the dialog window with task types, we select JUnit Parser from the Tests section. Gradle Effective Implementation Guide


  4. Lucene query language Some of the query types provided by ElasticSearch support Apache Lucene query parser syntax. Mastering ElasticSearch


  5. Since the XML is pretty simple, we usually avoid using a full-blown XML parser and just parse it with some string operations in our Java code. MapReduce Design Patterns


  6. This is a perfect job for a Copy task with a filter attached to it, but that filter has a lot of work to do! It has to be a full-featured Markdown parser and HTML renderer. Gradle Beyond The Basics


  7. The XML element in Text form has been supplied, so you’ll need to use an XML parser to extract content from the XML. Hadoop in Practice


  8. It just calls the parser’s parse() method, which parses the fields of interest from a line of input, checks whether. Hadoop:The Definitive Guide 3rd Edition


  9. Besides having added components like a crawler and a parser, a web search engine differs from a basic document search engine in terms of scale. Hadoop in Action


  10. Hive processes the above SQL query in a series of phases: (1) The parser transforms the query into an Abstract Syntax Tree. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
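
As a tiny counterpart to the XML parser examples above, this sketch parses a small XML snippet with the standard JAXP DOM API and reads one element; the XML content itself is invented.

    import java.io.StringReader;

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;

    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    public class SimpleXmlParser {
        public static void main(String[] args) throws Exception {
            String xml = "<job><name>wordcount</name></job>";   // hypothetical input

            DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = builder.parse(new InputSource(new StringReader(xml)));

            String name = doc.getElementsByTagName("name").item(0).getTextContent();
            System.out.println("parsed job name: " + name);
        }
    }

For large inputs, or inside a mapper, a streaming parser (SAX or StAX) or plain string operations, as example 5 suggests, are usually preferred over building a full DOM tree.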



38. Matching text for the word activemq:


  1. Right now there are many different pieces of software designed specifically for task queues (ActiveMQ, RabbitMQ, Gearman, Amazon SQS, and others), but there are also ad hoc methods of creating task queues in situations where queues aren’t expected. Redis in Action


  2. We selected ActiveMQ as a potential system for the message queue. Building LinkedIn’s Real-time Activity Data1


  3. Setting up an ActiveMQ server and using JMS with Spring are discussed in the following sections. Pro Spring 3


  4. ActiveMQ will be supported for many more years to come! Next Generation Open Source Messaging with Apollo


  5. Options like ActiveMQ, ZeroMQ, and Apache Qpid all providing different open source approaches to message queuing. RabbitMQ in Action


  6. We’ll need to install the Sun Java Runtime, get the ActiveMQ package, and configure it. Managing Infrastructure with Puppet


  7. Puppet queue uses the ActiveMQ middleware service to handle message passing and queuing. Pro Puppet


  8. Kafka can be compared with Scribe or Flume as it is useful for processing activity stream data; but from the architecture perspective, it is closer to traditional messaging systems such as ActiveMQ or RabitMQ. Apache Kafka


  9. ActiveMQ and RabbitMQ don’t seem to have an easy way to batch messages and we assume that it used a batch size of 1 Kafka a Distributed Messaging System for Log Processing


  10. ActiveMQ was developed to provide an open source alternative: its central theme is to put the power in the hands of developers. ActiveMQ in Action



39. Matching text for the word jms:


  1. Performance test of different server types including web (HTTP and HTTPS), SOAP, database, LDAP, JMS, mail, and native commands or shell scripts. Performance Testing With JMeter 2.9


  2. This specification requires a JMS provider that implements both point-to-point messaging as well as. Java EE 7 Specification


  3. In addition, several JMS interfaces implement Autocloseable and thus are automatically closed after use. Java EE 7 Essentials


  4. Since it is not a required component of Profile, Web Profile products are not required to include an implementation of JMS, nor do they have to support other JMS-related. Java EE 7 Web Profile Specification


  5. At first, it seems like a thoughtful and elegant design to ensure overall durability of the database, and appears unproblematic because it is familiar from many distributed computing paradigms, such as Java Message Service (JMS) Cassandra The Definitive Guide


  6. By far, the most commonly used appenders write log events to disk, but appenders for outputting log events to the console, sending data to syslog, or even to JMS exist. Hadoop Operations


  7. You can do this using an underlying messaging protocol, such as JMS, or independently of the transport protocol using the design of the Web Service. Java Performance Tuning 2nd Edition


  8. Although ActiveMQ implements the JMS specification, which has not moved in many years, ActiveMQ provides many features beyond this spec and it has not stopped innovating. ActiveMQ in Action


  9. However, for asynchronous interactions, say, a Web service or JMS client, the rate at which messages are injected into the system, injection rate, may be the more important criteria. Java Performance


  10. For different approaches, see for example the Aleph, JMS, and Ninja packages, accessible via links from the online supplement. Concurrent Programming in Java Design Principles and Pattern
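
Most of these examples assume the standard JMS send path: create a connection from a factory, open a session, and send a message to a destination. A rough sketch using ActiveMQ's connection factory, where the broker URL and queue name are placeholders:

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class JmsSendExample {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");   // assumed broker URL

            Connection connection = factory.createConnection();
            connection.start();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("example.queue");           // hypothetical queue
                MessageProducer producer = session.createProducer(queue);

                TextMessage message = session.createTextMessage("hello, JMS");
                producer.send(message);
            } finally {
                connection.close();
            }
        }
    }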



40. Matching text for the word tika:


  1. Apache Tika, a project in itself extends capabilities of Apache Solr to run on top of different types of files. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  2. Chris began to advocate for Tika as a standalone project in 2006 Tika in Action


  3. Tika makes a variety of metadata available about documents; this example just used two. Solr 3 Enterprise Search Server


  4. To do that, we either need to parse the data in some external application or setup Solr to use Apache Tika. Solr 3.1 Cookbook


  5. The fetch function makes use of the Apache Tika library (http://tika.apache.org/) to fetch and extract the contents of the document: Storm Real-Time Processing Cookbook


  6. With Solr this is easy because it integrates with Apache Tika project that supports most popular document formats. Solr in action


  7. It also does not provide any document filters, although Lucene has a sister project at Apache, Tika, which handles document filtering very well. Lucene in Action 2nd Edition


  8. To do that, we either need to parse the data in some external application or set up Solr to use Apache Tika. Apache Solr 4 Cookbook



41. Matching text for the word configuring:


  1. So, let us start with the first step, downloading and configuring Flume. Apache Flume Distributed Log Collection for Hadoop


  2. Configuring and Building CouchDB Unless you have specific requirements, configure will probably work everything out for you and you can simply run: Getting Started with CouchDB


  3. Configuring CouchApp with .couchapprc If you don’t want to have to put the full URL (potentially including authentication parameters) of your database onto the command line each time you push, you can use the .couchapprc file to store deployment settings. CouchDB The Definitive Guide


  4. Now let's look at the possibilities of configuring the behavior of query rewriting. Mastering ElasticSearch


  5. However, there are a few subtle things to keep in mind when configuring them. Cassandra High Performance Cookbook


  6. Configuring CouchDB When you first build or install CouchDB on your system, there is zero configuration required to start creating CouchDB databases. Beginning CouchDB


  7. Instead, we will focus on installing and configuring the Apache HTTP Server as a load balancer. Scaling CouchDB


  8. Additionally, configuring analyzers at node level requires changing the config file on every node and restarting every node which becomes a maintenance nightmare. Elasticsearch The Definitive Guide


  9. Chapter 6, Configuring Cassandra This chapter shows you how to specify partitioners, replica placement strategies, and snitches. Cassandra The Definitive Guide


  10. In part 2, you’ll learn about configuring various aspects of ActiveMQ for connectivity, message persistence, and security. ActiveMQ in Action



42. Matching text for the word namespace:


  1. To add properties, you’re required to use the ext namespace. Gradle in Action


  2. Part of this long iteration phase was due to a slow realization that Spanner should do more than tackle the problem of a globallyreplicated namespace, and should also focus on database features that Bigtable was missing. Spanner - Google’s Globally-Distributed Database


  3. The Ceph file system has three main components: the client, each instance of which exposes a near-POSIX file system interface to a host or process; a cluster of OSDs, which collectively stores all data and metadata; and a metadata server cluster, which manages the namespace (file names and directories) while coordinating security, consistency and coherence (see Figure 1) Ceph A Scalable, High-Performance Distributed File System


  4. The basics: Planet-scale structured storage Next generation of Bigtable stack Provides a single, location-agnostic namespace Manual and access-based data placement. Storage Architecture and Challenges


  5. In a different scenario, you might decide to “namespace” the task names by prefixing them with lb or liquibase to keep them from colliding with tasks from other plug-ins, but for our purposes here we can keep the task names short and simple. Gradle Beyond The Basics


  6. Actually, in Elasticsearch, our data is stored and indexed in shards, while an index is just a logical namespace which groups together one or more shards. Elasticsearch The Definitive Guide


  7. The keyspace is essentially just a logical namespace to hold column families and certain configuration properties. Cassandra The Definitive Guide


  8. The following listing demonstrates how to configure ActiveMQ using its custom Spring schema via an XML namespace. ActiveMQ in Action


  9. Chubby provides a namespace that consists of directories and small files. Bigtable - A Distributed Storage System for Structured Data


  10. Gradle brings in everything from the Ant namespace into the Gradle namespace such that using a core Ant task is as easy as calling ant. Building and Testing with Gradle



43. Matching text for the word www:


  1. Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub. Apache Flume Distributed Log Collection for Hadoop


  2. Find us on Facebook: http://facebook.com/oreilly Follow us on Twitter: http://twitter.com/oreillymedia Watch us on YouTube: http://www.youtube.com/oreillymedia. Getting Started with CouchDB


  3. Dear reader, refer to http://www.elasticsearchserverbook.com from time to time, where you'll be able to find the newest errata dedicated to the book and additional articles extending it. Mastering ElasticSearch


  4. Downloading the example code for this book You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. Cassandra High Performance Cookbook


  5. For information on translations, please e-mail info@apress.com, or visit http://www.apress.com. Beginning CouchDB


  6. For more information about our books, courses, conferences, and news, see our website at http://www.oreilly.com. Scaling CouchDB


  7. Typically, you'll download the package from http://www.elasticsearch.org/download/, which is the best fit for your environment. Elasticsearch in Action MEAP V07 Chapter 1


  8. Finding Out More If you’d like to find out more about Cassandra, and to get the latest updates, visit this book’s companion website at http://www.cassandraguide.com. Cassandra The Definitive Guide


  9. If you don’t have the Sun J2SE installed and you’re using Linux, Solaris, or Windows, download and install it from the following URL: http://www.oracle.com/technetwork/java/javase/downloads/index.html. ActiveMQ in Action


  10. Irrelevant details were elided to keep the example short.) The call to Apply performs an atomic mutation to the Webtable: it adds one anchor to www.cnn.com and deletes a different anchor. Bigtable - A Distributed Storage System for Structured Data



44. Matching text for the word jrockit:


  1. I remember quite clearly the first time I met the JRockit team. Oracle JRockit The Definitive Guide


  2. Work with the JRockit Mission Control 3.1/4.0 tools suite to debug or profile your Java applications. Java 7 Concurrency Cookbook



45. Matching text for the word linux:


  1. You'd be hard pressed to find such a company using the bleeding-edge Linux kernel rather than something like Red Hat Enterprise Linux (RHEL), CentOS, Ubuntu LTS, or any of the other distributions that aim for stability and compatibility. Apache Flume Distributed Log Collection for Hadoop


  2. Install using the native packages for your chosen Linux platform. Getting Started with CouchDB


  3. Debian GNU/Linux You can install the CouchDB package by running: CouchDB The Definitive Guide


  4. In this second case, called EC2, Amazon provides server instances and of course they can be used for installing and running the ElasticSearch cluster (among many other things as those are normal Linux machines) Mastering ElasticSearch


  5. He has experience as a developer as well Linux and network administrator and enjoys the rich world of Open Source software. Cassandra High Performance Cookbook


  6. A user space approach, aside from providing greater flexibility and easier implementation, also avoids cumbersome interaction with the Linux VFS and page cache, both of which were designed for a different interface and workload. Ceph A Scalable, High-Performance Distributed File System


  7. Next, I provide you with a step-by-step guide to installing CouchDB on various distributions of the Linux operating system as well as on a Mac, before guiding you through the creation of your first database. Beginning CouchDB


  8. Managing scalability in object storage systems for HPC Linux clusters. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  9. On Linux, GZip extraction utilities should be preinstalled; on Windows, you’ll need to get a program such as WinZip, which is commercial, or something like 7-Zip, which is freeware. Cassandra The Definitive Guide


  10. If you don’t have the Sun J2SE installed and you’re using Linux, Solaris, or Windows, download and install it from the following URL: http://www.oracle.com/technetwork/java/javase/downloads/index.html. ActiveMQ in Action



46. Matching text for the word jpa:


  1. This technique is helpful for defining Scala classes that work with the Java Persistence API (JPA) Scala in Depth


  2. Hibernate implements the Java Persistence API (JPA), and by using the JPA annotation Entity you’re specifying Hibernate to persist the object to the database. Scala in Action


  3. The support for Hibernate, MyBatis, JDO, and JPA makes already simple APIs even simpler, thus easing the burden on developers. Pro Spring 3


  4. This work can be done by hand. The Java Persistence Architecture (JPA) allows you to annotate a Java object and use JPA to handle the serialization and deserialization automatically. Cassandra High Performance Cookbook


  5. Often these objects are backed by a relational database of some type, and you manage them through object relational mapping tools such as Hibernate, JPA, or JDO. Solr 3 Enterprise Search Server


  6. This causes problems when using JPA annotations in Scala code, for example, as discussed in [JPAScala] Programming Scala


  7. JPA/JPO You can use, for example, DataNucleus to put a JPA/JPO access layer on top of HBase. HBase:The Definitive Guide


  8. The Java Persistence API (JPA) is defined as JSR 338, and the complete specification can be downloaded. Java EE 7 Essentials


  9. Note: The JPA specification is a separate specification referenced in the EJB 3.0 specification.) Session beans usually implement the core business logic of an enterprise application and represent the interactive session. Java Performance


  10. Those of you that are new to JPA can rest assured that you were not forgotten. Pro JPA2



47. Matching text for the word rabbitmq:


  1. Other sinks are available on the Internet that can be used to write data to MongoDB, Cassandra, RabbitMQ, Redis, and just about any other data store you can think of. Apache Flume Distributed Log Collection for Hadoop


  2. RabbitMQ and Gearman21 can be very helpful for some purposes, too. High Performance MySQL 3rd Edition


  3. This is usually implemented with RabbitMQ today, but could be any AMPQ message queue (such as Apache Qpid), or Zero MQ. bk-compute-adminguide-grizzly


  4. The easiest way to create a failure-tolerant message queue is probably a combination of RabbitMQ and Hovercraft. CouchDB The Definitive Guide


  5. This is currently implemented with RabbitMQ today, but theoretically could be any AMPQ message queue supported by the Python ampqlib and carrot libraries. Deploying OpenStack


  6. Usually implemented with RabbitMQ, but could be any AMPQ message queue, such as Apache Qpid or Zero MQ. bk-admin-guide-cloud-latest


  7. An OpenStack highavailability configuration uses existing native Pacemaker RAs (such as those managing MySQL databases or virtual IP addresses), existing third-party RAs (such as for RabbitMQ), and native OpenStack RAs (such as those managing the OpenStack Identity and Image Services) high-availability-guide


  8. ActiveMQ and RabbitMQ don’t seem to have an easy way to batch messages and we assume that it used a batch size of 1 Kafka a Distributed Messaging System for Log Processing


  9. There are lots of different queues you could use: Amazon’s SQS, RabbitMQ, or even a MongoDB capped collection (although make sure it’s on a separate server than the cluster it’s protecting) Scaling MongoDB


  10. A dedicated VM was created for each service (nova-scheduler, rabbitmq, database etc) OpenStack Operations Guide



48. Matching text for the word concurrency:


  1. By reducing latency for a given system (and that is true not only for storage systems), you affect concurrency and throughput capabilities. CouchDB The Definitive Guide


  2. Like MapReduce, Dryad handles the details of communication, partitioning, placement, concurrency and fault tolerance. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  3. This architecture is designed to control resource utilization in a high concurrency environment. Cassandra High Performance Cookbook


  4. In addition to the actual data persistence component, the system needs to have the following characteristics; scalable and robust solutions for load balancing, membership and failure detection, failure recovery, replica synchronization, overload handling, state transfer, concurrency and job scheduling, request marshalling, request routing, system monitoring and alarming, and configuration management. Cassandra A Decentralized Structured Storage System


  5. Add to this the fact that CouchDB is developed in Erlang OTP, a fault-tolerant programming language that offers excellent concurrency features, and you know that your CouchDB database will scale well without a loss of reliability and availability. Beginning CouchDB


  6. Transaction, distribution, and concurrency boundaries are also used in domain-driven design when defining Aggregates. Scaling CouchDB


  7. Pessimistic concurrency control widely used by relational databases, assumes that conflicting changes are likely to happen and so blocks access to a resource in order to prevent conflicts. Elasticsearch The Definitive Guide


  8. Cassandra, however, is different: its concurrency model is based on SEDA, so a single operation may start with one thread, which then hands off the work to another thread, which may hand it off to other threads. Cassandra The Definitive Guide


  9. A note on multithreading in JMS applications The JMS spec specifically defines concurrency for various objects in the JMS API and requires that only a few objects support concurrent access. ActiveMQ in Action


  10. As a result, concurrency control over rows can be implemented very efficiently. Bigtable - A Distributed Storage System for Structured Data
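
On the JVM, the simplest way to experiment with the concurrency ideas quoted above is the java.util.concurrent executor framework. The sketch below fans a few independent tasks out to a thread pool and collects their results; the task count and pool size are arbitrary.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ConcurrencyExample {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            List<Future<Integer>> results = new ArrayList<>();

            // Submit ten independent tasks; each simply squares its index.
            for (int i = 0; i < 10; i++) {
                final int n = i;
                results.add(pool.submit(() -> n * n));
            }

            for (Future<Integer> result : results) {
                System.out.println(result.get());   // blocks until that task finishes
            }
            pool.shutdown();
        }
    }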



49. Matching text for the word frameworks:


  1. Designs that incorporate single points of failure are generally to be avoided in large application frameworks where it can be safely assumed, given the sheer amount of hardware involved, that some percentage of nodes are always going to be on the fritz. Monitoring with Ganglia


  2. We simulate a single resource manager and two scheduler frameworks, one handling batch jobs and one handling service jobs. Omega - flexible, scalable schedulers for large compute clusters


  3. A common pattern in most languages, frameworks, and databases is using types to distinguish between subsets of your data. CouchDB The Definitive Guide


  4. Explore the versatility of Spring Python by integrating it with frameworks, libraries, and tools. Mastering ElasticSearch


  5. Our books and publications share the experiences of your fellow IT professionals in adapting and customizing today's systems, applications, and frameworks. Cassandra High Performance Cookbook


  6. He introduced the concept of JavaScript frameworks and Ajax to the company’s web division in 2008, developing a PL/SQL and JavaScript framework that is now used in every Core web product. Beginning CouchDB


  7. We plan to instrument nonRPC communication frameworks as they evolve and find a user base. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  8. And if you’re an application developer, you’ll no doubt be familiar with the many object-relational mapping (ORM) frameworks that have sprung up in recent years to help ease the difficulty in mapping application objects to a relational model. Cassandra The Definitive Guide


  9. These powerful but complex frameworks are designed for software engineers implementing complex parallel algorithms. Tenzing - A SQL Implementation On The MapReduce Framework


  10. The first is the simple testing of Java classes with existing test frameworks like JUnit and TestNG. Building and Testing with Gradle



50. Matching text for the word subclass:


  1. If there are multiple requests to create the same vertex in the same superstep, then by default the system just picks one arbitrarily, but users with special needs may specify a better conflict resolution policy by defining an appropriate handler method in their Vertex subclass. Pregel  -  A System for Large-Scale Graph Processing


  2. A TreeMap is a subclass of Map that sorts on key. MapReduce Design Patterns


  3. No methods are required for implementation; however, in order for the class to function at Hive runtime as a UDF, your subclass must override evaluate() Hadoop RealWorld Solutions Cookbook


  4. That Coordinate is translated into a Point, a subclass of Geometry, using the same GeometryFactory class you used earlier. HBase in Action


  5. To use the TwoDArrayWritable type as the value type of a reducer's input, you need to specify the type of the stored values by creating a subclass of the TwoDArrayWritable type similar to the ArrayWritable type. Hadoop MapReduce Cookbook


  6. The big question is, does Avro play nicely with MapReduce? Avro comes with some mapper and reducer classes that you can subclass to work with Avro. Hadoop in Practice


  7. This is the approach that GenericWritable takes, and you have to subclass it to specify which types to support. Hadoop:The Definitive Guide 3rd Edition


  8. Who wants to worry about how files are divided into splits ? In creating your own InputFormat class you should subclass the FileInputFormat class, which takes care of file splitting. Hadoop in Action


  9. For our own mapper implementations, we will subclass this base class and override the specified method as follows: Hadoop Beginners Guide


  10. Hadoop MapReduce (specifically jobs that use input formats that subclass FileIn putFormat) wind up with a map task for each block of the file(s) processed as part of the job.1 This means the file block size can significantly impact the efficiency of a MapReduce job for better or worse. Hadoop Operations
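
The Hadoop-centric quotes above all describe the same idiom: subclass a framework base class and override one method. To complement the mapper sketch earlier in this list, here is the matching reducer-side example; the class name is invented.

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Subclasses the Reducer base class and overrides reduce() to sum per-word counts.
    public class TokenCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }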



51. Matching text for the word boolean:


  1. The edge value has type void (edges carry no information), and the messages are boolean. Pregel  -  A System for Large-Scale Graph Processing


  2. The rereduce is a boolean value that is used to indicate when the reduce function is being called by itself (i.e., in rereduce mode) Getting Started with CouchDB


  3. In particular, boolean values can be set using yes, true, and on for a positive value and their opposites, no, false, and off for a negative value. Monitoring with Ganglia


  4. A query can also contain Boolean operators that connect terms to each other forming clauses. Mastering ElasticSearch


  5. Boolean: This is a logical data type having one of two values: true or false. Beginning CouchDB


  6. Within a special construct, called a when statement, one defines a quantifier, a variable, and a boolean condition using the variable. Interpreting the Data Parallel Analysis with Sawzall


  7. We are planning to add simple features supported by commercial search engines like boolean operators. The anatomy of a large-scale hypertextual Web search engine


  8. A key is the name of a field or property, and a value can be a string, a number, a boolean, another object, an array of values, or some other specialized type such as a string representing a date or an object. Elasticsearch The Definitive Guide


  9. These are number, Unicode string, boolean, array, object, and null. Cassandra The Definitive Guide


  10. Methods are provided for working with many primitive Java types for header values including Boolean, byte, short, int, long, float, double, and also the String object type. ActiveMQ in Action



52. Matching text for the word permissions:


  1. Instead, create a CouchDB user (couchdb) and set the ownership and permissions of the CouchDB database and support files to the configured user. Getting Started with CouchDB


  2. Does the Apache error log contain any errors? Look for errors about file permissions, missing PHP modules, and the like. Monitoring with Ganglia


  3. The scheduler can see the entire state of the cell and has complete freedom to lay claim to any available cluster resources provided it has the appropriate permissions and priority – even ones that another scheduler has already acquired. Omega - flexible, scalable schedulers for large compute clusters


  4. The Gradle wrapper also takes care of unpacking the distribution and setting the appropriate permissions to execute the batch file. Gradle in Action


  5. Ensure the cassandra user has ownership of this directory and the directory has appropriate file access permissions: Cassandra High Performance Cookbook


  6. I will now show how to change to the source subdirectory inside this new directory and change the permissions of a few files to make sure they are ready for the build process: Beginning CouchDB


  7. To allow the files in different directories to be served from different Chubby masters, we do not expose operations that can move files from one directory to another, we do not maintain directory modified times, and we avoid path-dependent permission semantics (that is, access to a file is controlled by the permissions on the file itself rather than on directories on the path leading to the file) Chubby - The lock service for loosely-coupled distributed systems


  8. Finally, create a couple of directories for Cassandra to store its data and logs, and give them the proper permissions, as shown here: Cassandra The Definitive Guide


  9. Change the permissions on the file to be accessible only by the owner. Hadoop Beginners Guide


  10. For example, does user X have the necessary permissions to execute program Y on system Z? Such privileges are often referred to as access control lists (ACLs) and determine who or what can access a given resource to perform a given operation. ActiveMQ in Action
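
On POSIX systems, the file permissions and ownership these examples mention can also be adjusted from Java through java.nio.file; in the sketch below the path and the permission string are placeholders.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.PosixFilePermission;
    import java.nio.file.attribute.PosixFilePermissions;
    import java.util.Set;

    public class FilePermissionsExample {
        public static void main(String[] args) throws Exception {
            Path dataDir = Paths.get("/var/lib/cassandra/data");   // hypothetical directory

            // rwx for the owner, r-x for the group, nothing for others.
            Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rwxr-x---");
            Files.setPosixFilePermissions(dataDir, perms);

            System.out.println("now: " + PosixFilePermissions.toString(
                    Files.getPosixFilePermissions(dataDir)));
        }
    }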



53. Matching text for the word roo:


  1. A special thanks goes out to Raymond De Roo, Brian Wohlgemuth, David Calafrancesco, Tera Doty, Jay Rubin, Bill Catlan, Anthony Howe, Mark O’Neal, George Montgomery, George Barber, and the myriad other people who patiently listened to me gripe about things, let me bounce ideas off them to see whether an outsider could understand what I was trying to say, or just managed to bring a smile to my face when I needed it most. High Performance MySQL 3rd Edition


  2. In each new version, more features are being added, such as Groovy scripting language support, Spring Roo support, and SpringSource tcServer (an application server with paid support offered by SpringSource that was built on top of the Tomcat server) support. Pro Spring 3


  3. Ruby on Rails has been a particularly rich source of such ideas—Spring Roo and Grails are essentially Rails clones, bringing Rails-like development to Java and Groovy, respectively. Spring in Practice


  4. Spring Roo and GWT If you are new to using GWT or just want a rapid way to create GWT applications, Spring Roo provides scaffolding support for GWT and simplifies integrating Spring and GWT. Spring Security 3.1


  5. Later, we will build some advanced data-driven tests using different data sources in JUnit and a data-driven test in Ruby using Roo, and Python. Selenium Testing Tools Cookbook


  6. Our pain is your gain, and that includes all of our hard work with code that was written the night before, identifying bugs for the Roo team to fix, and working with the fantastic community of readers we have in Manning’s MEAP program, aligned as well with completing the manuscript around the time of the Roo 1.2.1 release. Spring Roo in Action


  7. The emphasis on practical development is also evident in the chapter on Spring Roo, the rapid application development (RAD) solution from the Spring team. Spring Data Modern Data Access for Enterprise Java



54. Matching text for the word apis:


  1. Megastore emphasizes cost-transparent APIs with runtime costs that match application developers' intuitions. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. First, one of the major components of gmond is the APR library, a cross-platform interface intended to provide a set of APIs to common platform functionality in a common and predictable manner. Monitoring with Ganglia


  3. In this section, we’ll take a look at the function APIs, and talk about how functions in a design document are related within applications. CouchDB The Definitive Guide


  4. In addition to that, it explains the batch APIs available in ElasticSearch and shows how to use filters to optimize your queries. Mastering ElasticSearch


  5. The goal of FlumeJava is to allow a programmer to express his or her data-parallel computation in a clear, modular way, while simultaneously executing it with performance approaching that of the best possible hand-optimized programs written directly against MapReduce APIs. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  6. Many web services and feed providers are making their APIs available with the option to return a response in JSON format. Beginning CouchDB


  7. The resulting system also includes code to collect traces, tools to visualize them, and libraries and APIs (Application Programming Interfaces) to analyze large collections of traces. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  8. In this chapter we will discuss the APIs that we use to create, retrieve, update. Elasticsearch The Definitive Guide


  9. Although they are often not stored directly in their “native” format, developers can work with the documents using APIs as if they were. Cassandra The Definitive Guide


  10. Although ActiveMQ is written in Java, APIs for many languages other than Java are provided, including C/C++, .NET, Perl, PHP, Python, Ruby, and many more. ActiveMQ in Action



55. Matching text for the word asynchronous:


  1. This new asynchronous design leaves the input thread blissfully unaware of any downstream problem. Apache Flume Distributed Log Collection for Hadoop


  2. Use this in cases where you don’t trust user input or where you need to trigger an asynchronous job. CouchDB The Definitive Guide


  3. The merge process is asynchronous and in general, should not interfere with indexing and searching, from a Lucene point of view. Mastering ElasticSearch


  4. With consistency thus relaxed, clients can buffer small writes and submit only large, asynchronous writes to OSDs; the only latency seen by applications will be due to clients which fill their caches waiting for data to flush to disk. Ceph A Scalable, High-Performance Distributed File System


  5. The system can be configured to perform either synchronous or asynchronous writes. Cassandra A Decentralized Structured Storage System


  6. You could then leverage these languages to output the data into JSON, which can be returned to JavaScript as the response to an asynchronous HTTP request. Beginning CouchDB


  7. When computation is deferred or made asynchronous, most Google developers use a common control flow library to construct callbacks and schedule them in a thread pool or other executor. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  8. Ironically, such programmers are usually wrong, especially when they use locks in a distributed system; few consider the effects of independent machine failures on locks in a system with asynchronous communications. Chubby - The lock service for loosely-coupled distributed systems


  9. Framed Transport was added to Thrift in order to support asynchronous servers. Cassandra The Definitive Guide


  10. Apache ActiveMQ is one such solution, providing the ability for applications to communicate in an asynchronous, loosely coupled manner. ActiveMQ in Action
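
In plain Java, the asynchronous hand-off these quotes describe (start the work, keep going, pick up the result later) can be sketched with CompletableFuture; the simulated work here is obviously made up.

    import java.util.concurrent.CompletableFuture;

    public class AsyncExample {
        public static void main(String[] args) {
            // Kick off the work on a background thread and register a follow-up step.
            CompletableFuture<String> greeting = CompletableFuture
                    .supplyAsync(() -> "hello")           // runs asynchronously
                    .thenApply(s -> s + ", async world"); // applied when the first stage completes

            System.out.println("request submitted, doing other work...");

            // Block only at the point where the result is actually needed.
            System.out.println(greeting.join());
        }
    }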



56. Matching text for the word mvc:


  1. This decouples the sources of data, the services, from the visual consumers of the data, facilitating use of the Model- View-Controller (MVC) pattern. Thinking in Java 4th Edition


  2. Figure 3.4 illustrates the MVC architecture pattern in the context of a Java web application. Gradle in Action


  3. Think of Application as a controller of the MVC model that handles all the requests. Scala in Action


  4. In fact, event-based architectures help to enforce the separation of concerns and modularity that MVC advocates. Testable JavaScript


  5. It’s far from complete, but it should feel familiar to anyone who has worked with any of the popular frameworks, such as Spring MVC. Functional Programming Patterns in Scala and Clojure


  6. The use of the Active Record pattern as a model in MVC is a good example of a leaky abstraction. SQL Antipatterns


  7. Rails was so much easier to use than Spring MVC, Hibernate, and the other “streamlined” Java web frameworks. Beginning Scala


  8. It accomplishes this flexibility by using a Model View Controller (MVC) design, which allows advanced features such as form validation against an XML schema document, and in-page data refreshes. Cassandra The Definitive Guide


  9. Swing is based on the Model View Controller (MVC) architectural framework pattern. Java 7 Recipes A Problem-Solution Approach


  10. Finally, Listing 6-25 shows the last part of our MVC architecture, the JSP page to render the results. Pro JPA2



57. Matching text for the word google:


  1. Creating mountains of data has become very easy, thanks to Facebook, Twitter, Amazon, digital cameras and camera phones, YouTube, Google, and just about anything else you can think of connected to the Internet. Apache Flume Distributed Log Collection for Hadoop


  2. HBase is an Apache-licensed open source system modeled after Google’s Bigtable. Monitoring with Ganglia


  3. We used some simple JavaScript and Google Groups to allow painless commenting. CouchDB The Definitive Guide


  4. For example, this is what Google shows us when we enter elasticsaerch instead of elasticsearch: Mastering ElasticSearch


  5. Similarly, the Google File System [7] is optimized for very large files and a workload consisting largely of reads and file appends. Ceph A Scalable, High-Performance Distributed File System


  6. The Google File System (GFS)[9] is another distributed file system built for hosting the state of Google’s internal applications. Cassandra A Decentralized Structured Storage System


  7. Google use an implementation of map/reduce in its web index. Beginning CouchDB


  8. Before Chubby was deployed, most distributed systems at Google used ad hoc methods for primary election (when work could be duplicated without harm), or required operator intervention (when correctness was essential) Chubby - The lock service for loosely-coupled distributed systems


  9. Chapter 12, Integrating Hadoop In this chapter, written by Jeremy Hanna, we put Cassandra in a larger context and see how to integrate it with the popular implementation of Google’s Map/Reduce algorithm, Hadoop. Cassandra The Definitive Guide


  10. Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. Bigtable - A Distributed Storage System for Structured Data



58. Matching text for the word ruby:


  1. By implementing a similar module with Perl, Ruby, or PHP, gmond would instantly gain the ability to be extended by implementing gmond metric modules in those languages as well. Monitoring with Ganglia


  2. In later chapters, we’ll focus on using CouchDB from server-side languages such as Ruby and Python. CouchDB The Definitive Guide


  3. Even though it wasn’t part of the core Maven project, a similar approach was proposed by the project Maven Polyglot that allows you to write your build definition logic, which is the project object model (POM) file, in Groovy, Ruby, Scala, or Clojure. Gradle in Action


  4. For example, generating Ruby binding may require ruby and ruby-devel packages to be installed. Cassandra High Performance Cookbook


  5. You will also see how to work with CouchDB from your Python and Ruby applications. Beginning CouchDB


  6. The results, shown in Table 1, demonstrate Sawzall is significantly faster than Python, Ruby, or Perl, at least for these microbenchmarks. Interpreting the Data Parallel Analysis with Sawzall


  7. If you’re familiar with the methodMissing and propertyMissing facilities of Groovy or Ruby’s method_missing, you’ve already got the idea. Gradle Beyond The Basics


  8. But because of the interface used to access Cassandra, you can use Cassandra from a wide variety of languages, including C#, Scala, Python, and Ruby. Cassandra The Definitive Guide


  9. We will also show you a few programs in Ruby, but these are usually only to demonstrate language independence, and you don't need to be a Ruby expert. Hadoop Beginners Guide


  10. Although ActiveMQ is written in Java, APIs for many languages other than Java are provided, including C/C++, .NET, Perl, PHP, Python, Ruby, and many more. ActiveMQ in Action



59. Matching text for the word sqoop:


  1. A typical Hadoop-based Big Data platform includes the Hadoop Distributed File System (HDFS), the parallel computing framework (MapReduce), common utilities, a column-oriented data storage table (HBase), high-level data management systems (Pig and Hive), a Big Data analytics library (Mahout), a distributed coordination system (ZooKeeper), a workflow management module (Oozie), data transfer modules such as Sqoop, data aggregation modules such as Flume, and data serialization modules such as Avro. Hadoop Operations and Cluster Management Cookbook


  2. It’s been four years since, via a post to the Apache JIRA, the first version of Sqoop was released to the world as an addition to Hadoop. Apache Sqoop Cookbook


  3. Sqoop is a connectivity tool for moving data between relational databases and data warehouses and Hadoop. Professional Hadoop Solutions


  4. Sqoop is an Apache project that is part of the broader Hadoop ecosphere. Hadoop RealWorld Solutions Cookbook


  5. Finally, we’ll cover how to use Sqoop for database ingress and egress activities, and we’ll look at how to ingress and egress data in HBase. Hadoop in Practice


  6. Sqoop A tool for efficient bulk transfer of data between structured data stores (such as relational databases) and HDFS. Hadoop:The Definitive Guide 3rd Edition


  7. HBase Hadoop database for random read/write access Hive SQL-like queries and tables on large data sets Pig Dataflow language and compiler Oozie Workflow for interdependent Hadoop jobs Sqoop Integrates databases and data warehouses with Hadoop. HBase:The Definitive Guide


  8. Wouldn't it be much better to use an existing tool for the purpose? There is indeed such a tool that we will use throughout the rest of this chapter, Sqoop. Hadoop Beginners Guide


  9. Apache Sqoop Not only does Hadoop not want to replace your database, it wants to be friends with it. Hadoop Operations



60. Matching text for the word innodb:


  1. The innodb options help in dealing with the small, primary key size restriction of MySQL. Web Crawling and Data Mining with Apache Nutch


  2. We cover SHOW INNODB STATUS in particular detail, because it provides deep insight into the operations of the InnoDB transactional storage engine. High Performance MySQL 3rd Edition


  3. They span the spectrum from simple in-memory key/value stores to complicated SQL-speaking MySQL/InnoDB derivatives. MongoDB The Definitive Guide 2nd Edition


  4. InnoDB - The MySQL engine once supported by Riak, but now deprecated. A Little Riak Book


  5. Transaction logging One compromise between speed and durability can be seen in MySQL’s InnoDB. MongoDB in Action


  6. Chapter 11, Transactions Lists the main issues required to support transactions, and uses InnoDB to illustrate the typical architecture used to provide that support. Understanding MySQL Internals



61. Matching text for the word caching:


  1. Cache-control The cache control HTTP response header provides a suggestion for client caching mechanisms on how to treat the returned information. Getting Started with CouchDB


  2. In a high-availability deployment with proper caching, this means that a given show function will be called only once for any particular document, and the CouchDB server may not even be contacted for subsequent requests. CouchDB The Definitive Guide


  3. Using Cassandra for distributed caching Cassandra has several variables that can be configured for each keyspace and column family that drastically change the profile of how it operates. Cassandra High Performance Cookbook


  4. Ceph could allow consistency to be further relaxed by caching metadata longer, much like earlier versions of NFS, which typically cache for 30 seconds. Ceph A Scalable, High-Performance Distributed File System


  5. In order to make the searches fast Cassandra provides certain hooks for intelligent caching of data. Cassandra A Decentralized Structured Storage System


  6. Cache documents and query results using memcached or another caching system. Scaling CouchDB


  7. More significantly, objects in PAST are immutable, facilitating cryptographic protection and simplifying consistency and caching, but limiting the systems usefulness as a general storage service. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  8. Older filters will never be reused so caching is disabled by default. Elasticsearch The Definitive Guide


  9. But there are a number of things, including memory settings, data storage, hardware choices, caching, and buffer sizes, that you can tune to squeeze out even more performance. Cassandra The Definitive Guide


  10. Chapter 5 discusses message persistence in ActiveMQ; how messages are stored for queues and topics, the four styles of message stores available, and message caching. ActiveMQ in Action



62. Matching text for the word scheduler:


  1. Extending gmond with gmetric The gmetric utility, which is distributed with the monitoring core, as well as being available in multiple native variants for various programming languages, allows us to submit metrics values without having to rely on either the scheduler present in gmond, or the constraints of the native shared library or Python modules on which it depends. Monitoring with Ganglia


  2. Abstract Increasing scale and the need for rapid response to changing requirements are hard to meet with current monolithic cluster scheduler architectures. Omega - flexible, scalable schedulers for large compute clusters


  3. You could use this feature to synchronize database servers within a cluster or between data centers using a job scheduler such as cron, or you could use it to synchronize data with your laptop for offline work as you travel. CouchDB The Definitive Guide


  4. Each job consists of a set of tasks, and is mapped by the scheduler to a set of available machines within a cluster. MapReduce - Simplied Data Processing on Large Clusters


  5. The concurrent merge scheduler This is a merge scheduler that will use multiple threads in order to perform segments merging. Mastering ElasticSearch


  6. Somewhat akin to FlumeJava’s executor, the Cascading evaluator breaks the dataflow graph into pieces, and, if possible, runs those in parallel, using the underlying Hadoop job scheduler. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  7. Normally, users will have access to an enterprise-wide task scheduler or configuration management system. Cassandra High Performance Cookbook


  8. This provides our low-level disk scheduler with longer I/O queues and a corresponding increase in scheduling efficiency. Ceph A Scalable, High-Performance Distributed File System


  9. We also restrict the Dapper daemon to the lowest possible priority in the kernel scheduler in case CPU contention arises within a heavily-loaded host machine. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  10. Cron is a well-known job scheduler on Unix systems, and it uses an expression string to denote when a job should be scheduled. ActiveMQ in Action
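
The excerpts above use "scheduler" for any component that runs work at fixed times or intervals (cron, the Hadoop job scheduler, the merge scheduler). As a minimal, hypothetical Java sketch of that idea, the standard ScheduledExecutorService runs a periodic task; the task body and the five-second period here are purely illustrative.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class SchedulerSketch {
        public static void main(String[] args) {
            // A single-threaded scheduler, roughly a tiny in-process cron.
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Run a (hypothetical) metrics-collection task every 5 seconds, starting immediately.
            scheduler.scheduleAtFixedRate(
                    () -> System.out.println("collecting metrics at " + System.currentTimeMillis()),
                    0, 5, TimeUnit.SECONDS);
        }
    }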



63. Matching text for the word initialization:


  1. Multiple parameters can be passed to the module’s initialization function by including one or more param sections. Monitoring with Ganglia


  2. Depending on how recently the partition was compacted, this should result in efficient, low-CPU initialization of a new node. CouchDB The Definitive Guide


  3. GRADLE’S BUILD LIFECYCLE PHASES Whenever you execute a Gradle build, three distinct lifecycle phases are run: initialization, configuration, and execution. Gradle in Action


  4. A recent problem we experienced was a bug in machine initialization code that caused processor caches to be disabled: computations on affected machines slowed down by over a factor of one hundred. MapReduce - Simplied Data Processing on Large Clusters


  5. Then we have the constructor, which is only used for super class initialization; so we can skip discussing it. Mastering ElasticSearch


  6. If, for some reason, the server is not running and you get an error message when you try the previous line, it may be an issue with the initialization script. Beginning CouchDB


  7. The next declaration has a more complicated initialization expression that uses some built-in functions to extract the cardinal number of the minute of the week from the time value. Interpreting the Data Parallel Analysis with Sawzall


  8. During the initialization phase, Gradle starts up and locates the build files it must process. Gradle Beyond The Basics


  9. Plugins are loaded during initialization, and no plugins are included by default. Elasticsearch in Action MEAP V07 cHAPTER 1


  10. Every time Gradle executes a build, it runs through three lifecycle phases: initialization, configuration, and execution. Building and Testing with Gradle



64. Matching text for the word config:


  1. You aren’t averse to the idea of maintaining a config file on all of your hosts. Monitoring with Ganglia


  2. The powerful iterator API allows for flexibility to filter and aggregate rows on the fly, as well as output raw transformations for an easy way to make Atom feeds, HTML lists, CSV files, config files, or even just modified JSON. CouchDB The Definitive Guide


  3. Many of these Gradle plugins require a configuration file located in a directory config/ that defines the rules for the analysis. Gradle in Action


  4. Suppose we want to work with a directory named config, in our build script. Gradle Effective Implementation Guide


  5. If CouchDB is installed on your local computer, you can open the Futon configuration utility by pointing your web browser to the address http://127.0.0.1:5984/_utils/config.html. Beginning CouchDB


  6. You’ll connect to the Flume Master UI, and select the config menu from the top, as highlighted in figure 2.7 Hadoop in Practice


  7. Additionally, configuring analyzers at node level requires changing the config file on every node and restarting every node which becomes a maintenance nightmare. Elasticsearch The Definitive Guide


  8. But this YAML file is intended only for users upgrading their config files from XML to YAML. Cassandra The Definitive Guide


  9. An embedded broker using an external configuration file can be achieved using the brokerConfig transport option and by specifying the URI for the activemq.xml file. ActiveMQ in Action


  10. The nested configurations via config and fileset directly correspond to the Checkstyle Ant equivalents as if this was being configured with Ant’s XML syntax. Building and Testing with Gradle



65. Matching text for the word classpath:


  1. Note that if the directory is specified, then it is always included first in the classpath. Apache Flume Distributed Log Collection for Hadoop


  2. You tell the compiler about it by referencing the library’s JAR file in the classpath. Gradle in Action


  3. Next, we must add the output directory with the compiled interface class file, to the compileClasspath property of the main source set. Gradle Effective Implementation Guide


  4. Also, ensure that the test.classpath is included in the test target. Cassandra High Performance Cookbook


  5. The Java compiler has a command-line switch for specifying the classpath, and that switch must be provided with an operatingspecific string. Gradle Beyond The Basics


  6. For example, the following statements in pig-env.sh will tell Pig the version of Hadoop used by the cluster is Pig’s classpath: Hadoop in Action


  7. This probably means that you started Cassandra directly from within the bin directory, and it therefore sets up its Java classpath incorrectly and can’t find the CliMain file to start the client. Cassandra The Definitive Guide


  8. Time for action – setting up the classpath To compile any Hadoop-related code, we will need to refer to the standard Hadoop-bundled classes. Hadoop Beginners Guide


  9. In order to run the clients using the HTTP transport protocol, one dependency must be added to the classpath. ActiveMQ in Action


  10. This directory is automatically compiled and added to the build classpath. Building and Testing with Gradle
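
All of the excerpts above treat the classpath as the list of directories and JAR files that the Java compiler and runtime search for classes. As a small illustrative sketch (not tied to any of the books' build setups), a program can inspect the classpath it was launched with through a standard system property:

    public class ClasspathSketch {
        public static void main(String[] args) {
            // The JVM exposes the classpath it was started with as a system property.
            String classpath = System.getProperty("java.class.path");
            // Entries are separated by the platform path separator (":" on Unix, ";" on Windows).
            for (String entry : classpath.split(System.getProperty("path.separator"))) {
                System.out.println(entry);
            }
        }
    }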



66. Matching text for the word superclass:


  1. The existing class is called the superclass, base class, or parent class. Core Java Volume I Fundamentals 9th Edition


  2. Many of the methods used here come from the superclass AbstractCopyTask, as shown in figure 4.9 Gradle in Action


  3. Any class that implements an interface must provide an implementation for each method defined by the interface or inherit the implementation from a superclass. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. This implementation first leverages the superclass (FileInputFormat) to get splits with locations calculated to ensure data locality. Professional Hadoop Solutions


  5. The basic idea in Java is that we can implement a generic class by using an appropriate superclass, such as Object. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. Similarly, all classes inheriting from a superclass inherit the validation behavior as well. Java EE 7 Essentials


  7. The fingerprint is obtained by ordering the descriptions of the class, superclass, interfaces, field types, and method signatures in a canonical way, and then applying the so-called Secure Hash Algorithm (SHA) to that data. Core Java Volume II Advanced Features 9th Edition


  8. All constructors must invoke a superclass constructor, explicitly or implicitly, and a subclass would have no accessible superclass constructor to invoke. Effective Java 2nd Edition


  9. So the synchronized modifier is not automatically inherited when subclasses override superclass methods, and methods in interfaces cannot be declared as synchronized. Concurrent Programming in Java Design Principles and Pattern


  10. All arrays, regardless of the type of data they hold, have Object as their superclass. Java 7 Recipes A Problem-Solution Approach
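
To make the Java terminology in these excerpts concrete, here is a minimal, hypothetical subclass/superclass pair; note how the subclass constructor invokes the superclass constructor, as excerpt 8 requires, and how an overriding method can still reuse the superclass implementation.

    // Superclass (also called base class or parent class).
    class Vehicle {
        private final String name;

        Vehicle(String name) {
            this.name = name;
        }

        String describe() {
            return "vehicle " + name;
        }
    }

    // Subclass: inherits from Vehicle and must invoke a superclass constructor.
    class Truck extends Vehicle {
        Truck(String name) {
            super(name); // explicit superclass constructor call
        }

        @Override
        String describe() {
            return "truck: " + super.describe(); // reuse the superclass implementation
        }
    }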



67. Matching text for the word plugins:


  1. The community is quite strong and there are many plugins for other applications available. Apache Flume Distributed Log Collection for Hadoop


  2. Operators denote criticality The operators specified in the Nagios definitions for the Ganglia plugins always indicate the “critical” state. Monitoring with Ganglia


  3. He has been a long term Gradle contributor and is the author of several popular Gradle plugins. Gradle in Action


  4. We can write tasks, apply plugins, and write build logic using the Gradle build language. Gradle Effective Implementation Guide


  5. Chapter 9, Developing ElasticSearch plugins, covers ElasticSearch plugins development by showing and deeply describing how to write your own river and language plugin. Mastering ElasticSearch


  6. Plugins are loaded during initialization, and no plugins are included by default. Elasticsearch in Action MEAP V07 cHAPTER 1


  7. Facebook Insights provides developers and website owners with access to real-time analytics related to Facebook activity across websites with social plugins, Facebook Pages, and Facebook Ads. Apache Hadoop Goes Realtime at Facebook


  8. Flume interceptors are plugins that can manipulate and modify events before they pass from the source to the channel. Hadoop Beginners Guide


  9. The element provides the plugins element for declaring plug-ins. ActiveMQ in Action


  10. Gradle’s built-in tasks and plugins are enough for many users to script their builds without any custom code, but not always. Building and Testing with Gradle



68. Matching text for the word mahout:


  1. Mahout in Action aims to be accessible, to unearth the interesting nuggets of insight for the enthusiast, and to save the practitioner time in getting work done. Mahout in Action


  2. Apache Lucene along with Mahout and Solr were developed to address these needs. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  3. The Taste project (a recommendation engine) has become a part of Mahout and is included in the 0.1 release. Pro Hadoop


  4. A typical Hadoop-based Big Data platform includes the Hadoop Distributed File System (HDFS), the parallel computing framework (MapReduce), common utilities, a column-oriented data storage table (HBase), high-level data management systems (Pig and Hive), a Big Data analytics library (Mahout), a distributed coordination system (ZooKeeper), a workflow management module (Oozie), data transfer modules such as Sqoop, data aggregation modules such as Flume, and data serialization modules such as Avro. Hadoop Operations and Cluster Management Cookbook


  5. Mahout — This is a machine-learning and data-mining library that provides MapReduce implementations for popular algorithms used for clustering, regression testing, and statistical modeling. Professional Hadoop Solutions


  6. This book covers unloading/loading to and from HDFS, graph analytics with Giraph, batch data analysis using Hive, Pig, and MapReduce, machine-learning approaches with Mahout, debugging and troubleshooting MapReduce jobs, and columnar storage and retrieval of structured data using Apache Accumulo. Hadoop RealWorld Solutions Cookbook


  7. Chapter 9, Mass Text Data Processing, explains how to use Hadoop and Mahout to process large text datasets, and how to perform data preprocessing and loading operations using Hadoop. Hadoop MapReduce Cookbook


  8. And more thanks go to Josh Patterson, who reviewed my Mahout chapter. Hadoop in Practice


  9. Apache Mahout (http://mahout.apache.org/) is a project to build machine-learning libraries (such as classification and clustering algorithms) that run on Hadoop. Hadoop:The Definitive Guide 3rd Edition


  10. The Cloudera distribution is available at http://www.cloudera.com/hadoop and contains a large number of Apache products, from Hadoop itself, Hive, Pig, and HBase through tools such as Sqoop and Flume, to other lesser-known products such as Mahout and Whir. Hadoop Beginners Guide



69. Matching text for the word enum:


  1. The ordinal method yields the position of an enumerated constant in the enum declaration, counting. Core Java Volume I Fundamentals 9th Edition


  2. Each action command is mapped to an enum called CommandLineInput. Gradle in Action


  3. The other values of AnalyzerScope enum are GLOBAL and INDEX. Mastering ElasticSearch


  4. Enum has a name, an optional namespace, document, and alias, and contains a list of symbols (valid JSON strings) Professional Hadoop Solutions


  5. In the CounterExample job, an Enum for tracking the count of each type of bad record was defined: Hadoop RealWorld Solutions Cookbook


  6. This method returns a ReturnCode, which is an enum defined as a part of the Filter interface. HBase in Action


  7. The set of counters in an enum will form a group of counters. Hadoop MapReduce Cookbook


  8. The counter enum you’ll use to set the number of hops when you’ve reached the target node. Hadoop in Practice


  9. ObjectWritable is a general-purpose wrapper for the following: Java primitives, String, enum, Writable, null, or arrays of any of these types. Hadoop:The Definitive Guide 3rd Edition


  10. The second form uses a Java enum to specify counter names, which forces you to have them defined at compile time, but it also allows for type checking. Hadoop in Action
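
Several of the Hadoop excerpts above mention using a Java enum to define counters that are grouped and type-checked at compile time. A minimal sketch of that pattern, assuming the standard org.apache.hadoop.mapreduce API and a hypothetical BadRecords counter group, might look like this:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CountingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

        // The enum constants name the counters; the enum itself names the counter group.
        enum BadRecords { MALFORMED, EMPTY }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            if (value.getLength() == 0) {
                // Counters declared via an enum are checked by the compiler.
                context.getCounter(BadRecords.EMPTY).increment(1);
                return;
            }
            context.write(new Text(value), new LongWritable(1));
        }
    }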



70. Matching text for the word proofreaders:


  1. Special thanks go to Ivan Kirkpatrick and Clint Combs, my technical proofreaders, for their in-depth feedback on the text and the code. Scala in Action


  2. We hope your other projects aren’t as exciting as ours! Thanks also to our technical editor Mark Henry Ryan and our technical proofreaders Jerry Kuch and Kristine Kuch. HBase in Action


  3. These will be cleaned up during production of the book by copyeditors and proofreaders. Elasticsearch in Action MEAP V07 cHAPTER 1


  4. To our technical proofreaders, Neale Upstone and Doug Warren: your careful attention to the configuration and code has been greatly appreciated. Spring Integration in Action


  5. We wish to thank our production team of Mary Piergies; maestro Troy Mott and his band of merry editors: Ben Berg, Tara McGoldrick, and Bob Herbstman; our talented proofreaders: Katie Tennant and Alyson Brener; and others behind the scenes whom we are not able to name. Spring Roo in Action



71. Matching text for the word copyeditors:


  1. These will be cleaned up during production of the book by copyeditors and proofreaders. Elasticsearch in Action MEAP V07 cHAPTER 1



72. Matching text for the word iterator:


  1. It can inspect and modify the values of out-edges using methods supplied by the out-edge iterator. Pregel  -  A System for Large-Scale Graph Processing


  2. The powerful iterator API allows for flexibility to filter and aggregate rows on the fly, as well as output raw transformations for an easy way to make Atom feeds, HTML lists, CSV files, config files, or even just modified JSON. CouchDB The Definitive Guide


  3. The intermediate values are supplied to the user’s reduce function via an iterator. MapReduce - Simplied Data Processing on Large Clusters


  4. Each Reducer invocation is passed a key and an iterator over all the values associated with that key, and emits zero. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  5. It is worth noting that by making the values available as an iterator, Hadoop gives the programmer a chance to process the data without storing them in memory. Hadoop MapReduce Cookbook


  6. The MapReduce framework reuses iterator value objects supplied to combine. Hadoop in Practice


  7. In addition, the reduce() method no longer accepts the list of values as an Iterator but as an Iterable, which is easier to iterate through using Java’s foreach syntax. Hadoop in Action


  8. The Reducer interface uses an Iterator object instead of an Iterable object; this was changed as the latter works with the Java for each syntax and makes for cleaner code. Hadoop Beginners Guide


  9. Percolator provides a low-level Get/Set/iterator API rather than a high-level SQL interface, so we created indexes and did all the ‘query planning’ by hand. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications


  10. Sometime you just need an iterator, and other times you’d like to express a moderately complex conditional without resorting to major build file surgery. Building and Testing with Gradle
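
Excerpts 3 through 8 above describe how a reducer receives all values for a key through an iterator (an Iterable in the newer Hadoop API), so the values never have to be held in memory at once. A minimal sketch, again assuming the standard org.apache.hadoop.mapreduce API:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            // Values are streamed through an iterator; the framework may reuse the
            // LongWritable object on each iteration, so copy anything you need to keep.
            for (LongWritable value : values) {
                sum += value.get();
            }
            context.write(key, new LongWritable(sum));
        }
    }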



73. Matching text for the word username:


  1. This will prompt you for a username and password that will be given administration rights. Getting Started with CouchDB


  2. When a user successfully authenticates, a hash is generated from the username and a secret key and is stored in a cookie and made available to the rest of gweb. Monitoring with Ganglia


  3. You can even do things like pass the username into the list using a query parameter (but it’s not recommended, as you’ll ruin cache efficiency) CouchDB The Definitive Guide


  4. It requires you to fill out your email address, name, and password, as well as a username and domain. Gradle in Action


  5. It is not a good idea to store the username and password as plain text in the build file; this is because anyone can read our password, if stored in plain text. Gradle Effective Implementation Guide


  6. Each of your testing clients should use the same username for running tests. Scaling CouchDB


  7. We assign values to the url, username, pass word, changeLog, and command properties through a very standard assignment syntax. Gradle Beyond The Basics


  8. For example, if you have a User column family and you want to find users in a particular city, you might create a second column family called UserCity that stores user data with the city as keys (instead of the username) and that has columns named for the users who live in that city. Cassandra The Definitive Guide


  9. Obviously, for authentication purposes, each user must have a username and a password. ActiveMQ in Action


  10. Because most of the properties have defaults (some of which, like username and password, are obviously specific to the build), each invocation of the task has very little to configure. Building and Testing with Gradle



74. Matching text for the word jdk:


  1. As a prerequisite, make sure you’ve already installed the JDK with a version of 1.5 or higher. Gradle in Action


  2. Suppose we have both OpenJDK and Oracle Java installed in /usr/openjdk and / usr/jdk respectively. Hadoop Operations and Cluster Management Cookbook


  3. What you need for this book In order to work with Gradle and the code samples in the book, we need at least a Java Development Kit (JDK 1.5 or higher), Gradle, and a good text editor. Gradle Effective Implementation Guide


  4. In order to check what Java processes are running and what their virtual machines identifiers are, one can just run a jps command which is provided with most JDK distributions. Mastering ElasticSearch


  5. JAVA_HOME must be set to point to a JDK, not to a JRE. Hadoop MapReduce Cookbook


  6. If a current JDK isn’t in the system path you’ll want to come here to configure your JAVA_HOME. Hadoop in Practice


  7. You can download the latest JDK for other operating systems from Sun at http://java.sun.com/javase/downloads/index.jsp. Hadoop in Action


  8. Ivy requires Ant, and building from source requires the complete JDK, version 1.6.0_20 or better, not just the JRE. Cassandra The Definitive Guide


  9. Time for action – checking the prerequisites Hadoop is written in Java, so you will need a recent Java Development Kit (JDK) installed on the Ubuntu host. Hadoop Beginners Guide


  10. Here’s an example of this using the keytool that comes with the JDK: ActiveMQ in Action



75. Matching text for the word timestamp:


  1. The headers are key/value pairs that can be used to make routing decisions or carry other structured information (such as the timestamp of the event or hostname of the server where the event originated) Apache Flume Distributed Log Collection for Hadoop


  2. Allowed options are now (uses current system time), UNIX timestamp, or any other well-formed date, as supported by PHP’s strtotime function. Monitoring with Ganglia


  3. Instead of a string, we are going to use an array, where individual members are part of a timestamp in decreasing significance. CouchDB The Definitive Guide


  4. This method allows you to set your own value for this timestamp. Mastering ElasticSearch


  5. The CLI uses microseconds since epoch time as the value of timestamp when setting columns. Cassandra High Performance Cookbook


  6. When users were interested in individual machines, they were also interested in a specific service, so we eventually combined the two into a composite index which allows for efficient lookup by service name, host machine, and timestamp, in that order. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. Instead, break a big reindex down into smaller jobs by filtering on a date or timestamp field: Elasticsearch The Definitive Guide


  8. Putting this all together, we have the basic Cassandra data structures: the column, which is a name/value pair (and a client-supplied timestamp of when it was last updated), and a column family, which is a container for rows that have similar, but not identical, column sets. Cassandra The Definitive Guide


  9. A few properties for the timestamp plug-in can be configured, as shown in table 11.5 ActiveMQ in Action


  10. The map is indexed by a row key, column key, and a timestamp; each value in the map is an uninterpreted array of bytes. Bigtable - A Distributed Storage System for Structured Data
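
Excerpts 5 and 8 above note that Cassandra columns carry a client-supplied timestamp, conventionally microseconds since the Unix epoch. A small, generic Java sketch of producing such a timestamp (not tied to any particular client library):

    public class TimestampSketch {
        public static void main(String[] args) {
            // Milliseconds since the Unix epoch, the usual Java resolution.
            long millis = System.currentTimeMillis();
            // Microseconds since the epoch, the convention the Cassandra CLI uses for column timestamps.
            long micros = millis * 1000L;
            System.out.println("millis=" + millis + " micros=" + micros);
        }
    }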



76. Matching text for the word tcp:


  1. Syslog has an older UDP transport as well as a newer TCP protocol that can handle data larger than a single UDP packet can transmit (about 64k), as well as deal with network-related congestion events that might require the data to be retransmitted. Apache Flume Distributed Log Collection for Hadoop


  2. It includes interactive query functionality and may be polled by external monitoring systems via a simple text protocol on TCP port 8652 Monitoring with Ganglia


  3. This is curl telling us that it is going to establish a TCP connection to the CouchDB server we specified in our request URI. CouchDB The Definitive Guide


  4. This recipe shows how to interpret what the following TCP connection graphs are showing. Cassandra High Performance Cookbook


  5. In all tests, clients, OSDs, and MDSs are user processes running on a dual-processor Linux cluster with SCSI disks and communicating using TCP. Ceph A Scalable, High-Performance Distributed File System


  6. All system control messages rely on UDP based messaging while the application related messages for replication and request routing relies on TCP. Cassandra A Decentralized Structured Storage System


  7. The port attribute indicates the TCP/IP port number to use. Scaling CouchDB


  8. A failure on the TCP socket results in a limited number of reconnect attempts before a failure is reported to the monitor cluster. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  9. It is accessible over a TCP/IP interface and is available from a variety of languages, including Java and web services. Cassandra The Definitive Guide


  10. It discusses ActiveMQ URIs and all the transport connectors for both client-to-broker and broker-to-broker communications including TCP, NIO, STOMP, failover, SSL, HTTP/S, and much more. ActiveMQ in Action



77. Matching text for the word tuple:


  1. In the Pregel implementation of this algorithm the vertex value is a tuple of two values: a flag indicating which set the vertex is in (L or R), and the name of its matched vertex once known. Pregel  -  A System for Large-Scale Graph Processing


  2. In addition, the values flowing through a Cascading pipeline are special untyped “tuple” values, and Cascading operations focus on transforms over tuples; in contrast, a FlumeJava pipeline computes over arbitrary Java objects using arbitrary Java computations. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  3. A column is stored as a tuple of name, value, and timestamp. Cassandra High Performance Cookbook


  4. The IsUseragentBot class extends the abstract class FilterFunc, which allows us to override the exec(Tuple t) method. Hadoop RealWorld Solutions Cookbook


  5. A compiletime directive in Sawzall, proto, somewhat analogous to C’s #include directive, imports the DDL for a protocol buffer from a file and defines the Sawzall tuple type that describes the layout. Interpreting the Data Parallel Analysis with Sawzall


  6. This selfdescribing coordinate tuple was originally introduced to the Java ecosystem by the Maven repository format. Gradle Beyond The Basics


  7. It works in a way similar to this technique and captures all of the content between a start and end tag, supplying it as a single byte array field in a Pig tuple. Hadoop in Practice


  8. A tuple is just like a row of data in a database table, with multiple fields in a particular order. Hadoop:The Definitive Guide 3rd Edition


  9. A field in a tuple or a value in a map can be null or any atomic or complex type. Hadoop in Action


  10. The MapReduce jobs consist of DAGs of relational operators (such as filter, select (project), join, aggregation) that operate as iterators: each operator forwards a data tuple to the next operator after processing it. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



78. Matching text for the word screenshot:


  1. The screenshot in Figure 10-10 shows the word cloud generated from a MapReduce view of CouchDB’s archived tweets. CouchDB The Definitive Guide


  2. Figure 3.2 shows a screenshot of the user interface rendered in a browser. Gradle in Action


  3. We have only one task, which is listed in the task tree, as shown in the following screenshot: Gradle Effective Implementation Guide


  4. After that Paramedic shows us two primary shards, as given in the following screenshot: Mastering ElasticSearch


  5. The following screenshot shows the current status of the HDFS installation including the number of nodes, total storage, storage taken by each node. Hadoop MapReduce Cookbook


  6. User annotations are not shown in this screenshot, but they may be selectively included in the global time line on a spanby-span basis. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. Figure 13.12 Screenshot of the counter in JobTracker’s job summary page. Hadoop in Practice


  8. Figure 8-2 shows a screenshot of the keyspace and column family configuration information that the web. Cassandra The Definitive Guide


  9. As shown in the preceding screenshot, click on the Create bucket button and enter a name for the new bucket. Hadoop Beginners Guide


  10. Though the screenshot in figure 8.1 shows the project structure for the jms-webapp-local application, the directory structure for the other instances of the application is only slightly different. ActiveMQ in Action



79. Matching text for the word scalability:


  1. CouchDB doesn’t attempt to solve your scalability problems with any single solution, but instead provides you with a simple and flexible system that can be molded and adapted to your needs. Getting Started with CouchDB


  2. We’ll show you how CouchDB’s schema-free document model is a better fit for common applications, how the built-in query engine is a powerful way to use and process your data, and how CouchDB’s design lends itself to modularization and scalability. CouchDB The Definitive Guide


  3. The experience has made him a learned individual in technologies like optimization, high-availability, and scalability. Cassandra High Performance Cookbook


  4. Abstract We have developed Ceph, a distributed file system that provides excellent performance, reliability, and scalability. Ceph A Scalable, High-Performance Distributed File System


  5. The way Cassandra manages the persistent state in the face of these failures drives the reliability and scalability of the software systems relying on this service. Cassandra A Decentralized Structured Storage System


  6. You should test the scalability of your system before it is deployed. Scaling CouchDB


  7. Our implementation offers excellent performance, reliability, and scalability while providing clients with the illusion of a single logical object store. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  8. We will also discuss how best to model your data to take advantage of the horizontal scalability of Elasticsearch, and how to configure. Elasticsearch The Definitive Guide


  9. We encounter scalability problems when our relational applications become successful and usage goes up. Cassandra The Definitive Guide


  10. Part 4 discusses advanced features in ActiveMQ such as high availability, scalability, many advanced broker and client features, performance tuning, and administration of ActiveMQ. ActiveMQ in Action



80. Matching text for the word constructors:


  1. The memory management is performed explicitly by constructors, assignment operators, and destructors. Core Java Volume I Fundamentals 9th Edition


  2. Constructors All code that creates and launches a MapReduce job into a Hadoop cluster creates a JobConf object. Pro Hadoop


  3. In contrast, restructuring is usually required in XQuery and object-oriented query languages, e.g., using nested for-loops and constructors. Dremel - Interactive Analysis of WebScale Datasets


  4. Making an HBaseAdmin instance explicitly requires a Configuration instance, a detail hidden from you by the default HTable and HTablePool constructors. HBase in Action


  5. A node contains the data and links to the previous and next nodes, along with appropriate constructors. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. The first part of the implementation is straightforward: there are two Text instance variables, first and second, and associated constructors, getters, and setters. Hadoop:The Definitive Guide 3rd Edition


  7. It expects one or a list of Put objects that, in turn, are created with one of these constructors: HBase:The Definitive Guide


  8. If you want buffering and the data input methods for a file, you need to use the following rather monstrous sequence of constructors: Core Java Volume II Advanced Features 9th Edition


  9. It consists of nothing but C# properties for each of those three items, plus constructors. Cassandra The Definitive Guide


  10. Also, constructors cannot be qualified as synchronized (although block synchronization can be used within constructors) Concurrent Programming in Java Design Principles and Pattern
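
As a quick, hypothetical Java illustration of the constructor-related points above (overloading, chaining with this(), and the implicit superclass constructor call mentioned in excerpt 8):

    class Message {
        private final String body;
        private final long timestamp;

        // Overloaded constructor: defaults the timestamp and chains to the full constructor.
        Message(String body) {
            this(body, System.currentTimeMillis());
        }

        Message(String body, long timestamp) {
            // super() is invoked implicitly here, calling Object's constructor.
            this.body = body;
            this.timestamp = timestamp;
        }
    }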



81. Matching text for the word dataset:


  1. The vertex-centric approach is reminiscent of MapReduce in that users focus on a local action, processing each item independently, and the system composes these actions to lift computation to a large dataset. Pregel  -  A System for Large-Scale Graph Processing


  2. She runs an MR job that cranks through the input data and produces a dataset containing the new signals, stored in billions of records in the distributed file system. Dremel - Interactive Analysis of WebScale Datasets


  3. Routing Maybe routing will be handy in our case? The obvious gain from using routing is the possibility to create effective queries that return only books from our base dataset or data that belongs to our business collaborate (because routing allows us to hit only a part of our index) Mastering ElasticSearch


  4. The uncompressed dataset is tens of terabytes, which is small compared to many NoSQL instances, but was large enough to cause difficulties with sharded MySQL. Spanner - Google’s Globally-Distributed Database


  5. Elasticsearch is designed to work well with a small dataset and to scale to big. Elasticsearch The Definitive Guide


  6. Recently written data should be available quickly, but the entire dataset will also be periodically scanned in order to perform timebased rollups. Apache Hadoop Goes Realtime at Facebook


  7. Doing queries across the entire dataset was tedious and could take hours to run. Cassandra The Definitive Guide


  8. Limited support (no joins) for UPDATE and DELETE is implemented by applying the update or delete criteria on the data to create a new dataset. Tenzing - A SQL Implementation On The MapReduce Framework


  9. These enable a highly interactive Web UI where it is common that a single mouse click leads to processing a trillion values in the underlying dataset. PowerDrill - Processing a Trillion Cells per Mouse Click


  10. Unlike Percolator, ElasTraS offers limited transactional semantics (Sinfonia-like mini-transactions) when dynamically partitioning the dataset and has no support for structuring computation. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



82. Matching text for the word daemon:


  1. All of these run inside a daemon called an agent. Apache Flume Distributed Log Collection for Hadoop


  2. To collect the dendrite updates, you deployed a single instance of a daemon process, called an axon, Monitoring with Ganglia


  3. This comes as a daemon for Twisted, a popular and high-performance event-driven network programming framework for Python. CouchDB The Definitive Guide


  4. The previous code uses the EsExecutors class, which is an ElasticSearch way of running daemon threads inside the ElasticSearch nodes. Mastering ElasticSearch


  5. TrueTime is implemented by a set of time master machines per datacenter and a timeslave daemon per machine. Spanner - Google’s Globally-Distributed Database


  6. After startup, Cassandra detaches from the console and runs as a daemon. Cassandra High Performance Cookbook


  7. Sending data and running queries by sending JSON objects over HTTP makes it easy for someone to extend anything, from a syslog daemon like rsyslog to a connecting framework like ManifoldCF, to interact with Elasticsearch. Elasticsearch in Action MEAP V07 cHAPTER 1


  8. Dapper penetration can be assessed in two dimensions: the fraction of production processes that can generate Dapper traces (i.e., those that are linked with Dapperinstrumented runtime libraries) and the fraction of production machines running Dapper’s trace collection daemon. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  9. Add -d if you want to run it in the background as a daemon. Elasticsearch The Definitive Guide


  10. JConsole showing the peak thread count for a Cassandra daemon. Cassandra The Definitive Guide
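
In the excerpts above, "daemon" refers both to background operating-system processes (gmond, the Cassandra server) and to background threads inside a JVM (excerpt 4). For the latter sense, a minimal Java sketch looks like this:

    public class DaemonThreadSketch {
        public static void main(String[] args) throws InterruptedException {
            Thread heartbeat = new Thread(() -> {
                while (true) {
                    System.out.println("heartbeat");
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            });
            // A daemon thread does not keep the JVM alive; it dies when the last user thread exits.
            heartbeat.setDaemon(true);
            heartbeat.start();
            Thread.sleep(3000); // let the daemon run briefly before the JVM exits
        }
    }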



83. Matching text for the word topology:


  1. Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. Pregel  -  A System for Large-Scale Graph Processing


  2. This topology is adequate for most environments, but in some cases it is desirable to specify a few specific listeners rather than allowing every node to receive (and thereby waste CPU cycles to process) metrics from every other node. Monitoring with Ganglia


  3. It has the advantage that it’s simple, which helps in understanding topology and diagnosing failures. CouchDB The Definitive Guide


  4. Figure 11 shows the execution times for each query as a function of the server topology. Dremel - Interactive Analysis of WebScale Datasets


  5. Our network topology is simple enough that “distances” can be accurately estimated from IP addresses. GFS - The Google File System


  6. This node is responsible for managing the cluster state and process of assigning shards to nodes in reaction of changes in cluster topology. Mastering ElasticSearch


  7. Instead, it uses an internal process called Gossip to communicate changes about the topology of the Ring between nodes. Cassandra High Performance Cookbook


  8. Having covered each of the Hadoop daemons, we depict the topology of one typical Hadoop cluster in figure 2.3 Hadoop in Action


  9. The new node will not immediately accept requests so that it has time to learn the topology of the ring and accept data that it may also be responsible for. Cassandra The Definitive Guide


  10. Figure 4.5 An example of a complex network of brokers topology. ActiveMQ in Action



84. Matching text for the word partitioning:


  1. You could use multi-master, master/slave, partitioning, sharding, write-through caches, and all sorts of other complex techniques. CouchDB The Definitive Guide


  2. The run-time system takes care of the details of partitioning the input data, scheduling the program’s execution across a set of machines, handling machine failures, and managing the required inter-machine communication. MapReduce - Simplied Data Processing on Large Clusters


  3. Like MapReduce, Dryad handles the details of communication, partitioning, placement, concurrency and fault tolerance. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  4. Ceph utilizes a novel metadata cluster architecture based on Dynamic Subtree Partitioning [30] that adaptively and intelligently distributes responsibility for managing the file system directory hierarchy among tens or even hundreds of MDSs. Ceph A Scalable, High-Performance Distributed File System


  5. Describing the details of each of the solutions is beyond the scope of this paper, so we will focus on the core distributed systems techniques used in Cassandra: partitioning, replication, membership, failure handling and scaling. Cassandra A Decentralized Structured Storage System


  6. An alternative to automatic partitioning is to manually partition your documents into different databases by type of document. Scaling CouchDB


  7. Partitioning your documents into different containers or shards, which can be stored on a single node or on multiple nodes. Elasticsearch The Definitive Guide


  8. Partitioning is intended to enable large Chubby cells with little communication between the partitions. Chubby - The lock service for loosely-coupled distributed systems


  9. And even if you can write such a huge check, you still need to carefully choose partitioning keys to the point where you can never entirely ignore the limitation. Cassandra The Definitive Guide


  10. Finally we’ll examine traffic partitioning, which will balance scaling and performance, but will add more complexity to your ActiveMQ application. ActiveMQ in Action



85. Matching text for the word urls:


  1. The structure for the URLs has been standardized, and you should be able to both look at a URL that you are using to understand what it does, and to construct one to access the information that you want. Getting Started with CouchDB


  2. As of Ganglia 3.3, hostnames are converted to lowercase, RRD files are created with lowercase filenames, and lowercase hostnames should be used in URLs to access the metrics. Monitoring with Ganglia


  3. We’ll also briefly describe capabilities available on CouchDB’s roadmap, like replication filters, update handlers for parsing non-JSON input, and a rewrite handler for making application URLs more palatable. CouchDB The Definitive Guide


  4. A document can have multiple Names, which are different URLs by which the document can be referenced. Dremel - Interactive Analysis of WebScale Datasets


  5. Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "What we would like to do is, use the BM25 similarity model for the name field and the contents field." Mastering ElasticSearch


  6. This means you can manage the database by simply visiting URLs in your web browser, using command-line tools such as curl or, more importantly, via any programming language that supports HTTP requests. Beginning CouchDB


  7. The following ProxyPassReverse directives instructs Apache to adjust the URLs in the HTTP response headers to match that of the proxy server, instead of the reverse proxied server. Scaling CouchDB


  8. Indicates new terms, URLs, email addresses, filenames, and file extensions. Cassandra The Definitive Guide


  9. README.txt—A file containing some URLs to documentation to get new users started with ActiveMQ. ActiveMQ in Action


  10. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving) Bigtable - A Distributed Storage System for Structured Data



86. Matching text for the word jmx:


  1. Several Flume components report metrics to the JMX platform MBean server. Apache Flume Distributed Log Collection for Hadoop


  2. Java Virtual Machine Metrics The sFlow Java Virtual Machine (JVM) metrics (Table 8-3) are based on the metrics exposed through the Java Management Extensions (JMX) interface, ensuring consistency with existing JMX-based monitoring systems. Monitoring with Ganglia


  3. This allows for the use of JMX-aware applications to collect information about the state of the servers. Pro Hadoop


  4. In this chapter, we will first introduce the management framework, Java Management Extension (JMX) for system monitoring. Hadoop Operations and Cluster Management Cookbook


  5. The nodetool program communicates with the JMX port to confirm that the server is alive. Cassandra High Performance Cookbook


  6. The collection mechanism for metrics is decoupled from the component that receives the updates, and there are various pluggable outputs, including local files, Ganglia, and JMX. Hadoop:The Definitive Guide 3rd Edition


  7. HBase already had a number of metrics exported through JMX. Apache Hadoop Goes Realtime at Facebook


  8. Cassandra has a rich Java Management Extensions (JMX) interface baked in, which we put to use to monitor all of these and more. Cassandra The Definitive Guide


  9. Notably, the new metrics subsystem (referred to as metrics2) supports sending metrics to multiple plug-ins, filtering of metrics in various ways, and more complete support for JMX. Hadoop Operations


  10. It shows how to configure ActiveMQ for JMX monitoring and demonstrates this using JConsole. ActiveMQ in Action
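
Most of the excerpts above describe reading metrics that Hadoop, Cassandra, or ActiveMQ expose over JMX. A minimal, generic sketch of querying the local platform MBean server from Java (the attribute shown is a standard java.lang MXBean attribute, not a product-specific one):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class JmxSketch {
        public static void main(String[] args) throws Exception {
            // Every JVM has an in-process platform MBean server.
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // Read a standard attribute; products such as Cassandra register their own MBeans alongside these.
            ObjectName memory = new ObjectName("java.lang:type=Memory");
            Object heapUsage = server.getAttribute(memory, "HeapMemoryUsage");
            System.out.println("HeapMemoryUsage = " + heapUsage);
        }
    }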



87. Matching text for the word unix:


  1. The Ganglia projects uses the autotools build system to detect the tools available on most Linux and UNIX-like environments and build the binaries. Monitoring with Ganglia


  2. The first few appendixes showed you how to install CouchDB packages for Unix-like, Mac OS X, and Windows operating systems. CouchDB The Definitive Guide


  3. We will discuss more about it in the Avoid swapping on Unix-like systems section in Chapter 6, Fighting with Fire. Mastering ElasticSearch


  4. The defaults for a Unix/Linux system limit the number of open files. Cassandra High Performance Cookbook


  5. It does not assume any knowledge in relation to CouchDB, although some experience with UNIX commands and basic JavaScript skills are beneficial. Beginning CouchDB


  6. The input files argument accepts standard Unix shell file-name-matching metacharacters to identify the files to be processed. Interpreting the Data Parallel Analysis with Sawzall


  7. You can check if it's there by using the env command on UNIX-like systems and the set command on Windows. Elasticsearch in Action MEAP V07 cHAPTER 1


  8. Chubby exports a file system interface similar to, but simpler than that of UNIX [22] Chubby - The lock service for loosely-coupled distributed systems


  9. The clock is stored as a long (representing the milliseconds since the Unix epoch), so we could wrap this in a new java.util.Date object if we wanted to. Cassandra The Definitive Guide


  10. Click on the link to the 5.4.1 release and you’ll find both tarball and zip formats available (the tarball is for Linux and Unix; the zip is for Windows) ActiveMQ in Action



88. Matching text for the word packt:


  1. Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. Apache Flume Distributed Log Collection for Hadoop


  2. Little did I know, when I was first asked by Packt Publishing whether I would be interested in writing a book about HBase administration on September 2011, how much work and stress (but also a lot of fun) it was going to be. HBase Administration Cookbook


  3. Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Hadoop Operations and Cluster Management Cookbook


  4. However, Packt Publishing cannot guarantee the accuracy of this information. Gradle Effective Implementation Guide


  5. Neither the authors, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book. Mastering ElasticSearch


  6. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book. Cassandra High Performance Cookbook


  7. I would like to commend the authors of this book for a job well done, and would like to thank Packt Publishing for the opportunity to assist in the editing of this publication. Hadoop RealWorld Solutions Cookbook


  8. Thanks to the editorial staff at Packt, for providing me the opportunity to write this book and for providing feedback and guidance throughout the process. Hadoop MapReduce Cookbook


  9. Here, you can access, read and search across Packt's entire library of books. Getting Started with Google Guava


  10. Thank you, Packt Publishing for selecting me as one of the technical reviewers for this wonderful book. Hadoop Beginners Guide



89. Matching text for the word ids:


  1. We haven’t even considered complexities like postal addresses (or the fact that there might be more than one), and my contact record doesn’t include additional details like my Skype IDs, instant messaging accounts, or that I have a few more addresses than those listed above. Getting Started with CouchDB


  2. The bigger picture we’re working on is like an app server, but different in one crucial regard: rather than let the developer do whatever he wants (loop a list of DocIDs and make queries, make queries based on the results of other queries, etc.), we’re defining “safe” transformations, such as view, show, list, and update. CouchDB The Definitive Guide


  3. A Document has a required integer DocId and optional Links, containing a list of Forward and Backward entries holding DocIds of other web pages. Dremel - Interactive Analysis of WebScale Datasets


  4. The columns are sorted chronologically, using unique IDs (UUIDs) for column names to avoid clashes. Cassandra High Performance Cookbook


  5. This unique ID can be assigned by the user or application, or it can use a universally unique identifier (UUID)—a random number generated by CouchDB that greatly reduces the chance of duplicate IDs being used. Beginning CouchDB


  6. CouchDB will perform best with document IDs that are mostly monotonic (in simpler terms, mostly sequential) Scaling CouchDB


  7. A trace context is a small and easily copyable container of span attributes such as trace and span ids. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  8. Each shard returns the doc IDs and sort values of all of the docs in its priority queue to the coordinating node, Node 1, which merges these values into its own priority queue to produce a globally sorted list of results. Elasticsearch The Definitive Guide


  9. It is still a common design requirement to store IDs related to other entities in your tables, but operations such as cascading deletes are not available. Cassandra The Definitive Guide


  10. Its speed is the result of the combination of a fast transactional journal comprised of data log files, the highly optimized indexing of message IDs, and inmemory message caching. ActiveMQ in Action
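
Excerpts 4 and 5 above mention generating universally unique identifiers (UUIDs) to use as column names or document IDs. In Java that is a single standard-library call; this tiny sketch is only illustrative:

    import java.util.UUID;

    public class IdSketch {
        public static void main(String[] args) {
            // A random (version 4) UUID, suitable as a collision-resistant document or column ID.
            UUID id = UUID.randomUUID();
            System.out.println(id);
        }
    }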



90. Matching text for the word aop:


  1. If you are familiar with the AOP Spring Framework, it is similar to a MethodInterceptor. Apache Flume Distributed Log Collection for Hadoop


  2. Because of this, mixins have part of the flavor of aspect-oriented programming (AOP), and aspects are often suggested to solve the mixin problem. Thinking in Java 4th Edition


  3. However, the good news for Spring and AOP developers is that starting from version 2.0, Spring provides much tighter integration with AspectJ. Pro Spring 3


  4. In this chapter, we will cover dependency injection and AOP. Pro Spring MVC


  5. The main points you will see throughout the different projects is the use of DI, XML namespace-based configuration, and AOP, which as you will see in the next section, are the pillars upon which Spring is built on. Pro Spring Security


  6. Advising the Build Graph In the early 2000s, the paradigm of aspect-oriented programming (AOP) became a marginally popular way to manage the complexity of enterprise Java software. Gradle Beyond The Basics


  7. You can think of observer coprocessors as analogous to triggers from a relational database or to advice from aspect-oriented programming (AOP) HBase in Action


  8. The DTD bean and property mechanism can’t hide complex bean creation, which is a shortcoming in configuring advanced features like aspect-oriented programming (AOP) and security. Spring Batch in Action


  9. The goal of AOP is to enable systemic changes of the same kind across many modules, while avoiding copying and pasting the same code over and over into each location. Programming Scala


  10. Using retry advice on a data access layer via Spring’s AOP (aspect-oriented programming) support is one way to implement this functionality. Spring Data Modern Data Access for Enterprise Java



91. Matching text for the word serialization:


  1. Tiering Flume agents is covered using Avro serialization, as well as using the Flume command line as a standalone Avro client for testing and importing data manually. Apache Flume Distributed Log Collection for Hadoop


  2. MapReduce, however, is essentially functional, so expressing a graph algorithm as a chained MapReduce requires passing the entire state of the graph from one stage to the next—in general requiring much more communication and associated serialization overhead. Pregel  -  A System for Large-Scale Graph Processing


  3. Every individual host in the grid is an active participant, and together they cooperate, organically distributing the workload while avoiding serialization and single points of failure. Monitoring with Ganglia


  4. The extra serialization involved in using super columns and extra space used makes them less efficient. Cassandra High Performance Cookbook


  5. This allows Ceph to leverage the intelligence present in OSDs to distribute the complexity surrounding data access, update serialization, replication and reliability, failure detection, and recovery. Ceph A Scalable, High-Performance Distributed File System


  6. Elasticsearch uses JSON (or JavaScript Object Notation ) as the serialization format for documents. Elasticsearch The Definitive Guide


  7. For example, it uses two different JSON serialization libraries, the Google collections project, and several Apache Commons libraries. Cassandra The Definitive Guide


  8. However, this proved to be inefficient because of the serialization and deserialization costs associated with translating to and from Sawzall’s native type system. Tenzing - A SQL Implementation On The MapReduce Framework


  9. The main purpose of this interface is to provide mechanisms for the serialization and deserialization of data as it is passed across the network or read and written from the disk. Hadoop Beginners Guide


  10. From a client perspective, the NIO transport connector is practically the same as the standard TCP connector, in terms of its use of TCP as the underlying network protocol and OpenWire as the message serialization protocol. ActiveMQ in Action
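
Excerpt 9 above describes the Hadoop Writable interface, whose purpose is exactly the serialization and deserialization of values as they cross the network or are written to disk. A minimal custom Writable, assuming the standard org.apache.hadoop.io API and a hypothetical PageView value type:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Writable;

    // A tiny custom value type that Hadoop can serialize between map and reduce tasks.
    public class PageView implements Writable {
        private String url;
        private long count;

        public PageView() { }                        // no-arg constructor required for deserialization

        public PageView(String url, long count) {
            this.url = url;
            this.count = count;
        }

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeUTF(url);   // serialization
            out.writeLong(count);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            url = in.readUTF();  // deserialization, in the same field order
            count = in.readLong();
        }
    }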



92. Matching text for the word namenode:


  1. Pacemaker ships with an IPaddr resource agent to manage VIP, so we only need to create our own namenode resource agent: HBase Administration Cookbook


  2. They will experience substantial idle cycles waiting for replies from the namenode. HDFS scalability the limits to growth


  3. The file system protocol is hdfs, the host to contact for services is NamenodeHost, and the port to connect to is 8020, which is the default port for HDFS. Pro Hadoop


  4. Format the HDFS filesystem from NameNode with the following command: hadoop namenode -format. Hadoop Operations and Cluster Management Cookbook


  5. Replace NAMENODE_ HOSTNAME and PORT with the relevant values corresponding to the NameNode of your HDFS cluster. Hadoop MapReduce Cookbook


  6. Data compatibility concerns persistent data and metadata formats, such as the format in which the HDFS namenode stores its persistent data. Hadoop:The Definitive Guide 3rd Edition


  7. Writing variations also exist for the different Hadoop components such as NameNode (name node, namenode, and namenode), DataNode, JobTracker, and TaskTracker. Hadoop in Action


  8. It often runs on the same server as the namenode in smaller clusters. Cassandra The Definitive Guide


  9. Namenode 1 Stores filesystem metadata, stores file to block map, and provides a global picture of the filesystem. Hadoop Operations


  10. It runs the namenode and jobtracker daemons, but it also has Hive installed on it. Programming Hive



93. Matching text for the word uri:


  1. This is curl telling us that it is going to establish a TCP connection to the CouchDB server we specified in our request URI. CouchDB The Definitive Guide


  2. Configure the NameNode RPC and HTTP URI for namenode1 by adding the following into the file: Hadoop Operations and Cluster Management Cookbook


  3. We can pass a URL or URI instance as an argument. Gradle Effective Implementation Guide


  4. The simplest way is to add a routing URI parameter when indexing a document. Mastering ElasticSearch


  5. What the server does with your request depends on the URI you are issuing the request to, the request method you are using (GET, POST, PUT, DELETE), and the data you send along with your request. Beginning CouchDB


  6. The subsequent rewrite rule then proxies all requests to URIs starting with /api to the equivalent URI on http://couch-master.example.com:5984 (again, only if the previous rewrite condition has been met): Scaling CouchDB


  7. You can identify the current filesystem of the FileSystem object using the getUri() function as follows. Hadoop MapReduce Cookbook


  8. The file:/ URI denotes a path on the local filesystem, and the hdfs:/ URI denotes a path in HDFS. Hadoop in Practice


  9. As we’ll see shortly, most setups don’t need to specify the scheme://authority part of the URI. Hadoop in Action


  10. Without going too deep into discussing URIs, let’s briefly summarize the URI structure. ActiveMQ in Action
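
Examples 8 and 9 above contrast file:/ and hdfs:/ URIs and the scheme://authority form. A minimal java.net.URI sketch that pulls those parts out of two illustrative paths (the HDFS host below is a placeholder):

    import java.net.URI;

    public class UriParts {
        public static void main(String[] args) {
            String[] samples = {"file:///tmp/data.txt", "hdfs://namenodehost:8020/user/alice/data.txt"};
            for (String s : samples) {
                URI uri = URI.create(s);
                // scheme://authority/path, as discussed in the quotes above.
                System.out.println(uri.getScheme() + " | " + uri.getAuthority() + " | " + uri.getPath());
            }
        }
    }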



94、单词 jsp 的匹配文本:


  1. Check the job status via the MapReduce admin page—http://master1:50030/jobtracker.jsp: HBase Administration Cookbook


  2. To see it, visit http://bugs.sun.com/bugdatabase/index.jsp and enter the bug identification number 4252539 Core Java Volume I Fundamentals 9th Edition


  3. The response of a Servlet is rendered by the view component— the Java Server Page (JSP) Gradle in Action


  4. After the teragen job finishes, we can check the node storage space usage by opening the URL http://master:50070/dfsnodelist.jsp?whatNodes=LIVE. Hadoop Operations and Cluster Management Cookbook


  5. The MapReduce administration JSP enables you to view both the overall state of the cluster and details of the particular job execution. Professional Hadoop Solutions


  6. In CDH you can access the JobTracker UI at http://localhost:50030/jobtracker.jsp. Hadoop in Practice


  7. Adding the preceding fragment to web.xml will display the /error-404.jsp page to a client attempting to access a nonexistent resource. Java EE 7 Essentials


  8. Web technologies such as JavaServer Faces (JSF) and JavaServer Pages (JSP) rely heavily on the JavaBeans component model. Core Java Volume II Advanced Features 9th Edition


  9. You can download the latest JDK for other operating systems from Sun at http://java.sun.com/javase/downloads/index.jsp. Hadoop in Action


  10. Application frameworks such as Java Server Pages (JSP) and Java Server Faces (JSF) are built upon servlet technology. Java 7 Recipes A Problem-Solution Approach



95、单词 dynamically 的匹配文本:


  1. Metric modules are dynamically loadable shared object files that extend the available metrics gmond is able to collect. Monitoring with Ganglia


  2. Most programming environments solve this problem with templates; for example, documents that look like HTML but have portions of their content filled out dynamically. CouchDB The Definitive Guide


  3. Changing the analyzer during indexing The next thing worth mentioning when it comes to handling multilingual data is the possibility of dynamically changing the analyzer during indexing. Mastering ElasticSearch


  4. Figure 2: Ceph dynamically maps subtrees of the directory hierarchy to metadata servers based on the current workload. Ceph A Scalable, High-Performance Distributed File System


  5. This requires the ability to dynamically partition the data over the set of nodes (i.e., storage hosts) in the cluster. Cassandra A Decentralized Structured Storage System


  6. This alleviates the performance issues associated with generating a view dynamically each and every time it is run, especially on databases that store millions of records of data. Beginning CouchDB


  7. The number of replica shards can be changed dynamically on a live cluster, allowing us to scale up or down as demand requires. Elasticsearch The Definitive Guide


  8. Robust data structures; an efficient, small binary format for RPC calls; easy integration with dynamically typed languages such as Python, Ruby, Smalltalk, Perl, PHP, and Objective-C. Avro has certain advantages that Thrift doesn’t, in particular the fact that static code generation is not required to use RPC for your application, though you can use it for. Cassandra The Definitive Guide


  9. Each class will dynamically send and receive to/from the topics (an example is provided next) ActiveMQ in Action


  10. Finally, Bigtable schema parameters let clients dynamically control whether to serve data out of memory or from disk. Bigtable - A Distributed Storage System for Structured Data



96、单词 cached 的匹配文本:


  1. This means that they are largely restricted to handling GET requests, but it also means they can be cached by HTTP proxies like Varnish. CouchDB The Definitive Guide


  2. What's more, cached filters don't require too much memory (it only carries the information about which documents match the filter) and can be easily reused by consecutive queries run against the same filter to greatly increase query performance. Mastering ElasticSearch


  3. Remember that all the columns of a row must be cached when using the row cache. Cassandra High Performance Cookbook


  4. By default, if a readdir is immediately followed by one or more stats, the briefly cached information is returned; otherwise it is discarded. Ceph A Scalable, High-Performance Distributed File System


  5. The metadata about the ranges a node is responsible is cached locally at each node and in a fault-tolerant manner inside Zookeeper - this way a node that crashes and comes back up knows what ranges it was responsible for. Cassandra A Decentralized Structured Storage System


  6. Assuming you have cached your documents, then some percentage of these requests will result in cache hits. Scaling CouchDB


  7. These cached filters can be reused very efficiently for subsequent requests. Elasticsearch The Definitive Guide


  8. A notification of a conflicting lock request would tell a client to finish using data associated with the lock: it would finish pending operations, flush modifications to a home location, discard cached data, and release. Chubby - The lock service for loosely-coupled distributed systems


  9. If you use a fraction, you’re indicating a percentage of keys to cache, and an integer value indicates an absolute number of keys whose locations will be cached. Cassandra The Definitive Guide


  10. ActiveMQ supports both of these types of message delivery and can also be configured to support message recovery, an in-between state where messages are cached in memory. ActiveMQ in Action



97、单词 avro 的匹配文本:


  1. Tiering Flume agents is covered using Avro serialization, as well as using the Flume command line as a standalone Avro client for testing and importing data manually. Apache Flume Distributed Log Collection for Hadoop


  2. A typical Hadoop-based Big Data platform includes the Hadoop Distributed File System (HDFS), the parallel computing framework (MapReduce), common utilities, a column-oriented data storage table (HBase), high-level data management systems (Pig and Hive), a Big Data analytics library (Mahout), a distributed coordination system (ZooKeeper), a workflow management module (Oozie), data transfer modules such as Sqoop, data aggregation modules such as Flume, and data serialization modules such as Avro. Hadoop Operations and Cluster Management Cookbook


  3. You also learn about Apache Avro — an Hadoop framework for data serialization, which can be tightly integrated with Hadoop-based storage. Professional Hadoop Solutions


  4. The OutputFormat uses Avro and more JARs are required to run the job. Cassandra High Performance Cookbook


  5. It shows how to use different serialization libraries, including Avro, Thrift, and Protocol Buffers. Hadoop RealWorld Solutions Cookbook


  6. The focus of chapter 3 is on ways to work with data stored in different formats, such as XML and JSON, which paves the way to a broader examination of data formats such as Thrift and Avro that work best with big data and Hadoop. Hadoop in Practice


  7. Avro A serialization system for efficient, cross-language RPC and persistent data storage. Hadoop:The Definitive Guide 3rd Edition


  8. This directory includes the Thrift and Avro RPC libraries for interacting with Cassandra. Cassandra The Definitive Guide


  9. They are Protocol Buffers (created by Google and hosted at http://code.google.com/p/protobuf), Thrift (originally created by Facebook and now an Apache project at http://thrift.apache.org), and Avro (created by Doug Cutting, the original creator of Hadoop) Hadoop Beginners Guide


  10. Some of the more common sources are an Avro RPC source to which applications can speak directly, syslog, a raw netcat-style source, and a source that executes a command and reads from its standard output. Hadoop Operations
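
The quotes above describe Avro as a compact, cross-language serialization format. As a rough sketch (assuming the Avro Java library is on the classpath; the record schema and field names are invented for illustration), the following builds a GenericRecord and serializes it to Avro's binary encoding:

    import java.io.ByteArrayOutputStream;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.EncoderFactory;

    public class AvroSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical schema; the record and field names are illustrative only.
            String json = "{\"type\":\"record\",\"name\":\"LogEvent\","
                    + "\"fields\":[{\"name\":\"host\",\"type\":\"string\"},"
                    + "{\"name\":\"bytes\",\"type\":\"long\"}]}";
            Schema schema = new Schema.Parser().parse(json);

            GenericRecord record = new GenericData.Record(schema);
            record.put("host", "web01");
            record.put("bytes", 1234L);

            // Serialize to Avro's compact binary format.
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
            encoder.flush();
            System.out.println("Serialized " + out.size() + " bytes");
        }
    }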



98、单词 oozie 的匹配文本:


  1. A typical Hadoop-based Big Data platform includes the Hadoop Distributed File System (HDFS), the parallel computing framework (MapReduce), common utilities, a column-oriented data storage table (HBase), high-level data management systems (Pig and Hive), a Big Data analytics library (Mahout), a distributed coordination system (ZooKeeper), a workflow management module (Oozie), data transfer modules such as Sqoop, data aggregation modules such as Flume, and data serialization modules such as Avro. Hadoop Operations and Cluster Management Cookbook


  2. It is tested at scale, rest of ecosystem projects like Pig, Hive, Oozie, etc. Apache Hadoop YARN Yet Another Resource Negotiator


  3. Oozie, an open source Apache project, has functionality for building workflows and coordinating job running. MapReduce Design Patterns


  4. Why do you need an additional orchestration layer on top of MapReduce, and how does Oozie fit the bill? Professional Hadoop Solutions


  5. Apache Oozie is a workflow system for Hadoop MapReduce computations. Hadoop MapReduce Cookbook


  6. For this book you’ll also be working with Oozie, HBase, and other projects, but you’ll find instructions for these technologies in their respective sections. Hadoop in Practice


  7. Oozie A service for running and scheduling workflows of Hadoop jobs (including MapReduce, Pig, Hive, and Sqoop jobs) Hadoop:The Definitive Guide 3rd Edition


  8. HBase: Hadoop database for random read/write access; Hive: SQL-like queries and tables on large data sets; Pig: Dataflow language and compiler; Oozie: Workflow for interdependent Hadoop jobs; Sqoop: Integrates databases and data warehouses with Hadoop. HBase:The Definitive Guide


  9. Oozie We have said many times that Hadoop clusters do not live in a vacuum and need to integrate with other systems and into broader workflows. Hadoop Beginners Guide


  10. Apache Oozie It’s not uncommon for large production clusters to run many coordinated MapReduce jobs in a workflow. Hadoop Operations



99、单词 writable 的匹配文本:


  1. The key on each record will always be a LongWritable containing the current timestamp or if the timestamp event header is set, will be used instead. Apache Flume Distributed Log Collection for Hadoop


  2. In this case, the filesystem is both readable and writable. Hadoop Operations and Cluster Management Cookbook


  3. To avoid permission issues, carry out the installation in user-writable directories. Cassandra High Performance Cookbook


  4. In the mapper, we use the ThriftWritable class of Elephant Bird to wrap the WeblogRecord object. Hadoop RealWorld Solutions Cookbook


  5. Choosing appropriate Hadoop data types Hadoop uses the Writable interface based classes as the data types for the MapReduce computations. Hadoop MapReduce Cookbook


  6. The DBInputFormat class requires a bean representation of the table being imported, which implements both the Writable and DBWritable interfaces. Hadoop in Practice


  7. You may notice we use special classes such as LongWritable , IntWritable , and Text instead of the more familiar Long, Integer, and String classes of Java. Hadoop in Action


  8. That is, a distributed database designer must choose to make the system either always readable or always writable. Cassandra The Definitive Guide


  9. Take a look at the signatures of our mapper and reducer classes: the WordCountMapper class gives IntWritable and Text as input and gives Text and IntWritable as output. Hadoop Beginners Guide


  10. The version 1.0.0 lineage of Hadoop releases has a large number of group writable files in the tarball. Hadoop Operations
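
Examples 5 and 6 above refer to types that implement Hadoop's Writable interface so the framework can serialize them between map and reduce tasks or to disk. A minimal custom Writable might look like the following sketch; the class and field names are invented:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Writable;

    // Hypothetical record type; Hadoop calls write()/readFields() whenever it
    // moves instances across the network or spills them to disk.
    public class WeblogWritable implements Writable {
        private long timestamp;
        private String url = "";

        public void set(long timestamp, String url) {
            this.timestamp = timestamp;
            this.url = url;
        }

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeLong(timestamp);
            out.writeUTF(url);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            timestamp = in.readLong();
            url = in.readUTF();
        }
    }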



100、单词 nutch 的匹配文本:


  1. For example, it supports benchmarks for Nutch (a text indexing software package), PageRank (the PageRank algorithm), the Mahout machine learning algorithms, and Hive queries. Hadoop Operations and Cluster Management Cookbook


  2. Shortly after the release of the paper, a free and open source software pioneer by the name of Doug Cutting started working on a MapReduce implementation to solve scalability in another project he was working on called Nutch, an effort to build an open source search engine. MapReduce Design Patterns


  3. In 2002, while developing a project called Nutch (a search engine project focused on crawling, indexing, and searching Internet web pages), Doug Cutting and Mike Cafarella were struggling with a solution for processing a vast amount of information. Professional Hadoop Solutions


  4. From there, Hadoop was extracted out of Nutch and eventually became an Apache top-level project. HBase in Action


  5. The Apache Hadoop project was started as a support project for the Apache Nutch search engine, before spawning off as a separate top-level project. Hadoop MapReduce Cookbook


  6. My team was making discoveries similar to those that Doug Cutting and others at Nutch had made several years earlier regarding how to efficiently store and manage terabytes of crawled and analyzed data. Hadoop in Practice


  7. They’d devised systems to solve precisely the problems we were having with Nutch. Hadoop:The Definitive Guide 3rd Edition


  8. It’s not perfect, but it’s ready for other people to play with and examine.” Mike had been working with Doug Cutting on Nutch, an open source search engine. HBase:The Definitive Guide


  9. Hadoop started out as a subproject of Nutch , which in turn was a subproject of Apache Lucene. Hadoop in Action


  10. Thanks, Doug At the same time, Doug Cutting was working on the Nutch open source web search engine. Hadoop Beginners Guide



101、单词 subclasses 的匹配文本:


  1. To enable this optimization the user subclasses the Combiner class, overriding a virtual Combine() method. Pregel  -  A System for Large-Scale Graph Processing


  2. FlumeJava includes subclasses of DoFn, e.g., MapFn and FilterFn, that provide simpler interfaces in special cases. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  3. The most common input formats are subclasses of FileInputFormat, with the Hadoop default being TextInputFormat. MapReduce Design Patterns


  4. These two objects are subclasses of DataInputStream and DataOutputStream from the Java I/O package, which means that they support standard I/O operations. Professional Hadoop Solutions


  5. Pig has an abstract class called LoadFunc that has a number of methods that must be provided by concrete subclasses, as seen in figure 11.4 Hadoop in Practice


  6. Subclasses need only populate data in HDFS (perhaps by copying from a local file), run a MapReduce job, and confirm the output is as expected. Hadoop:The Definitive Guide 3rd Edition


  7. There are specific subclasses, depending on where the host is used, in other words, on the master, region server, and so on. HBase:The Definitive Guide


  8. This is an abstract class for interfacing with the filesystem, and there are different concrete subclasses for handling HDFS and the local filesystem. Hadoop in Action


  9. In most cases, the implementation of a MapReduce job will provide job-specific subclasses of the Mapper and Reducer base classes found in this package. Hadoop Beginners Guide


  10. Every instance of class Object (and its subclasses) possesses a lock that is obtained on entry to a synchronized method and automatically released upon exit. Concurrent Programming in Java Design Principles and Pattern
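
Several quotes above (see examples 3 and 9) describe MapReduce jobs as job-specific subclasses of the Mapper and Reducer base classes. A bare-bones Mapper subclass, with placeholder tokenizing logic, could look like this sketch:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Subclass of the Mapper base class; the four type parameters are the
    // input key/value and output key/value types.
    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }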



102、单词 operand 的匹配文本:


  1. There are actually two forms of these operators; you’ve just seen the postfix form of the operator that is placed after the operand. Core Java Volume I Fundamentals 9th Edition


  2. AND: It means that the given two terms (left and right operand) need to match in order for the clause to be matched. Mastering ElasticSearch


  3. An example is division when the second operand is zero or sqrt when its argument is negative. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. At an assignment to a field: if the type of the left-hand operand is a raw type, then a compile-time unchecked warning occurs if erasure changes the field's type. The Java Language Specification Java SE 7 Edition


  5. The bytecode has been annotated with the hexadecimal values corresponding to the opcodes and operand data. Oracle JRockit The Definitive Guide


  6. It is known as the functional approach because methods return the result of applying a function to their operand without modifying it. Effective Java 2nd Edition


  7. The assumption is that the first four bits of the command represent an operator, while the next 16 bits are an operand. Java 7 New Features Cookbook


  8. Some operators operate on only one operand; these are called unary operators. Java in a Nutshell 5th Edition


  9. Specifically, the conditional-And operator, &&, evaluates its right side only if the result of its left operand is true. Java Performance Tuning 2nd Edition


  10. Many of the constraints on bytecodes can be checked statically, such as the operand of an “ldc” bytecode must be a valid constant pool index whose type. Java Performance
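
Two operand behaviours mentioned above are worth seeing side by side: the postfix form of ++ yields the operand's old value (example 1), and the conditional-AND operator evaluates its right operand only when the left one is true (example 9). A tiny sketch:

    public class OperandDemo {
        public static void main(String[] args) {
            int n = 5;
            // Postfix: the expression yields the old value, then n is incremented.
            System.out.println(n++);      // prints 5
            System.out.println(n);        // prints 6

            // Short-circuit: the right operand is never evaluated here,
            // which is why the array access cannot throw.
            int[] empty = new int[0];
            boolean ok = empty.length > 0 && empty[0] == 42;
            System.out.println(ok);       // prints false
        }
    }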



103、单词 php 的匹配文本:


  1. He has worked with Perl/ PHP/Python, primarily for coding and MySQL/Oracle as the backend, for several years prior to moving into Hadoop fulltime. Apache Flume Distributed Log Collection for Hadoop


  2. Before I move on, however, I should mention that gweb is a PHP program, which most people run under the Apache web server (although any web server with PHP or FastCGI support should do the trick) Monitoring with Ganglia


  3. If you are not familiar with JavaScript but have used C or any other C-like language such as Java, PHP, or C#, this should look familiar. CouchDB The Definitive Guide


  4. He is also a well-known speaker who delivers talks on Drupal, Open Source, PHP, Moodle, and so on, along with organizing and delivering TechTalks in Drupal meetups and Drupal Camps in Mumbai, India. Mastering ElasticSearch


  5. Generating Thrift bindings for other languages (C++, PHP, and others) Cassandra High Performance Cookbook


  6. Later in this book you will use other tools to interact with the CouchDB database, including CouchDB’s very own Futon web-based administration interface, the JavaScript client-side web development language, and the PHP, Ruby, and Python programming languages. Beginning CouchDB


  7. Zach is now a developer at Elasticsearch and maintains the PHP. Elasticsearch The Definitive Guide


  8. Recall that PHP was originally designed to work within static HTML content. Hadoop in Action


  9. To generate Thrift bindings for other languages, pass it to the --gen switch (for example, thrift --gen php) Cassandra The Definitive Guide


  10. Although ActiveMQ is written in Java, APIs for many languages other than Java are provided, including C/C++, .NET, Perl, PHP, Python, Ruby, and many more. ActiveMQ in Action



104、单词 applet 的匹配文本:


  1. We describe how to package programs in JAR files, and how to deliver applications over the Internet with the Java Web Start and applet mechanisms. Core Java Volume I Fundamentals 9th Edition


  2. A Macintosh applet or application will have a different GUI than that of a Windows or Unix system. Java Network Programming and Distributed Computing


  3. Also, the decision to compile Java to JVM bytecode (a virtual machine code that soon every browser supported) meant that it became the language of choice for internet applet programs (do you remember applets?) Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. The use of applet containers providing 1.1 APIs is outside the scope of this specification. Java EE 7 Specification


  5. In fact, whenever the class loader of an applet reads a JAR file, it reads and decompresses data from the network. Core Java Volume II Advanced Features 9th Edition


  6. The title of the window is specified using the java_applet_title parameter. Java 7 New Features Cookbook


  7. As of Java 1.2, any Java code, whether it is an applet, a servlet, a JavaBeans component, or a complete Java application, can be run with restricted permissions that prevent it from doing harm to the host system. Java in a Nutshell 5th Edition


  8. This also allows you to place the thread’s task in a subclass of some other class, such as Applet or HTTPServlet. Java Network Programming 4th Edition


  9. As is typical of GUI-based programs, ParticleApplet uses several auxiliary classes that do most of the work. Concurrent Programming in Java Design Principles and Pattern


  10. The following code is that of a Java applet class. Java 7 Recipes A Problem-Solution Approach



105、单词 sharding 的匹配文本:


  1. We’ve mitigated this problem somewhat by setting up the Ganglia web UI on each server to redirect requests to the appropriate server for each category of metrics, so end users see a unified system that hides the sharding. Monitoring with Ganglia


  2. You could use multi-master, master/slave, partitioning, sharding, write-through caches, and all sorts of other complex techniques. CouchDB The Definitive Guide


  3. Those Lucene indices are called shards and the process of this spreading is called sharding. Mastering ElasticSearch


  4. The MySQL sharding scheme assigned each customer and all related data to a fixed shard. Spanner - Google’s Globally-Distributed Database


  5. Working with secondary indexes The primary ordering and sharding is done by the row key. Cassandra High Performance Cookbook


  6. As your data keeps growing, an important feature is the ability to split your data across multiple servers, also known as sharding. Elasticsearch in Action MEAP V07 cHAPTER 1


  7. Because it doesn’t have any sharding capabilities, you need to be careful not to overwhelm the database from which you are reading by running too many mappers. Hadoop:The Definitive Guide 3rd Edition


  8. This was true of many tools and techniques for scaling data processing, including caching (for example, memcached), replication, sharding, and, of course, MapReduce/ Hadoop. Hadoop in Action


  9. Sharding and shared-nothing architecture If you can’t split it, you can’t scale it. Cassandra The Definitive Guide


  10. A better and actually very common approach is to start by sharding (i.e., distributing) the data quasi randomly across the machines. PowerDrill - Processing a Trillion Cells per Mouse Click
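
Example 10 above notes that a common starting point is simply distributing the data quasi-randomly across machines. Hash-based sharding is the usual way to do that; the key format and shard count below are invented for illustration.

    public class HashSharding {
        // Map a key to one of numShards partitions; Math.floorMod avoids
        // negative results for negative hash codes.
        static int shardFor(String key, int numShards) {
            return Math.floorMod(key.hashCode(), numShards);
        }

        public static void main(String[] args) {
            int numShards = 4; // illustrative shard count
            for (String key : new String[] {"user:17", "user:42", "order:9001"}) {
                System.out.println(key + " -> shard " + shardFor(key, numShards));
            }
        }
    }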



106、单词 optimized 的匹配文本:


  1. In large grids with thousands of nodes per cluster, or carefully optimized HPC grids, in which every CPU cycle spent on something other than the problem is a wasted cycle, “normal” compute nodes are often configured as deaf in order to minimize the overhead associated with aggregating cluster state. Monitoring with Ganglia


  2. An index is a data structure optimized for quick search and retrieval. CouchDB The Definitive Guide


  3. The algorithm addresses a general case when a query may reference repeated fields; a simpler optimized version is used for flat-relational queries, i.e., those referencing only required and optional fields. Dremel - Interactive Analysis of WebScale Datasets


  4. Performance considerations The previous query execution is already optimized by ElasticSearch internals, by using caching mechanism. Mastering ElasticSearch


  5. Because Cassandra handles replication, disk systems can be optimized for more performance versus redundancy. Cassandra High Performance Cookbook


  6. This strategy provides the best of both worlds: streaming updates to disk in an efficient (sequential) fashion, and a vastly reduced re-write workload, allowing the long-term on-disk storage layout to be optimized for future read access. Ceph A Scalable, High-Performance Distributed File System


  7. In some highly optimized services even small monitoring overheads are easily noticeable, and might compel the deployment teams to turn the tracing system off. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  8. The node handling the request can no longer make use of the optimized bulk format to parse just the metadata line in order to decide which shard should receive the request. Elasticsearch The Definitive Guide


  9. This model is optimized for reading data, as it allows data to be read from any slave. Cassandra The Definitive Guide


  10. Its speed is the result of the combination of a fast transactional journal comprised of data log files, the highly optimized indexing of message IDs, and inmemory message caching. ActiveMQ in Action



107、单词 transactional 的匹配文本:


  1. Flume only provides transactional guarantees for each channel in each individual agent. Apache Flume Distributed Log Collection for Hadoop


  2. Operations within an account are transactional and consistent: a user who sends or labels a message is guaranteed to observe the change despite possible failover to another replica. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  3. Accountants don’t use erasers” is a quote from Pat Helland, a senior architect of transactional systems who worked at Microsoft and Amazon. CouchDB The Definitive Guide


  4. Third, F1 requires strong transactional semantics, which made using other NoSQL systems impractical. Spanner - Google’s Globally-Distributed Database


  5. In Chapter 3, Application Programmer Interface we should know how to use transactional style locking using Cages. Cassandra High Performance Cookbook


  6. However there have been some applications that have asked for transactional mainly for the purpose of maintaining secondary indices. Cassandra A Decentralized Structured Storage System


  7. Within a single CouchDB node, an update of a single document is transactional. Scaling CouchDB


  8. It’s transactional, supporting JTA/JTS, two-phase commit, deadlock detection, and MVCC. Cassandra The Definitive Guide


  9. This is a file-based message store that combines a transactional journal, for reliable message storage and recovery, with good performance and scalability. ActiveMQ in Action


  10. For example, in a transactional web-indexing system the programmer can make assumptions like: the hash of the contents of a document is always consistent with the table that indexes duplicates. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



108、单词 amazon 的匹配文本:


  1. Creating mountains of data has become very easy, thanks to Facebook, Twitter, Amazon, digital cameras and camera phones, YouTube, Google, and just about anything else you can think of connected to the Internet. Apache Flume Distributed Log Collection for Hadoop


  2. This issue is also relevant when monitoring dynamic environments such as cloud resources (for example, Amazon EC2) Monitoring with Ganglia


  3. Werner Vogels, Amazon CTO and Vice President If availability is a priority, we can let clients write data to one node of the database without waiting for other nodes to come into agreement. CouchDB The Definitive Guide


  4. Amazon EC2 discovery The Amazon store, in addition to selling goods has a few popular services as selling storage or computing power in the pay-as-you-go model. Mastering ElasticSearch


  5. Dynamo[6] is a storage system that is used by Amazon to store and retrieve user shopping carts. Cassandra A Decentralized Structured Storage System


  6. For example, Amazon has a tool called Elastic Load Balancing and Rackspace provides a service called Rackspace Cloud Load Balancers (in beta as of this writing) Scaling CouchDB


  7. The exponential growth of data first presented challenges to cutting-edge businesses such as Google, Yahoo, Amazon, and Microsoft. Hadoop in Action


  8. This original version of Cassandra was written primarily by an ex-employee from Amazon and one from Microsoft. Cassandra The Definitive Guide


  9. Before joining Improve Digital, he spent time at Amazon.co.uk, where he led several software development teams building systems that process Amazon catalog data for every item worldwide. Hadoop Beginners Guide


  10. When a user makes a purchase on Amazon, there are quite a few separate stages through which that order must travel including order placement, invoice creation, payment processing, order fulfillment, shipping, and more. ActiveMQ in Action



109、单词 ssl 的匹配文本:


  1. We suggest you read up on VPNs and setting up CouchDB behind an HTTP proxy (like Apache httpd’s mod_proxy, nginx, or varnish) that will handle SSL for you. CouchDB The Definitive Guide


  2. For a while, Kerberos implementations had the option of using Kerberized SSL (KSSL) for HTTP authentication — and it is still an option in many distributions. Professional Hadoop Solutions


  3. If SSL mutual authentication is used as the authentication protocol, the client can manage its own authentication context, and need not depend on references to the login session state. Java EE 7 Specification


  4. Unfortunately, CouchDB does not support SSL at the time of writing. Beginning CouchDB


  5. Using SSL over HTTP is transparent to application programmers; simply use URLs that start with https. Core Java Volume II Advanced Features 9th Edition


  6. Chapter 9, Database, Security, and System Enhancements: Various database improvements such as the introduction of the new RowSetFactory class are illustrated along with how to take advantage of new SSL support. Java 7 New Features Cookbook


  7. In Java 1.4 and later, you can use https:// with the URL class as previously shown to securely download documents from web servers that support SSL. Java in a Nutshell 5th Edition


  8. It discusses ActiveMQ URIs and all the transport connectors for both client-to-broker and broker-to-broker communications including TCP, NIO, STOMP, failover, SSL, HTTP/S, and much more. ActiveMQ in Action


  9. A similar technique can be used to disable the SSLv2Hello pseudo-protocol described above, which is not recognized by some SSL implementations. Fundamental Networking in Java


  10. There are a great number of events that can be handled using a WebViewClient such as a malformed URL or a failed SSL handshake. Java 7 Recipes A Problem-Solution Approach
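
Example 7 above points out that an https:// URL can be used directly with the URL class. A minimal client-side sketch (the host is a placeholder, and certificate validation is left to the default trust store):

    import java.io.InputStream;
    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;

    public class HttpsGetSketch {
        public static void main(String[] args) throws Exception {
            URL url = new URL("https://example.com/"); // placeholder host
            HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
            // getResponseCode() performs the TLS handshake and sends the request.
            System.out.println("Response code: " + conn.getResponseCode());
            System.out.println("Cipher suite:  " + conn.getCipherSuite());
            try (InputStream in = conn.getInputStream()) {
                while (in.read() != -1) {
                    // drain and discard the response body
                }
            }
            conn.disconnect();
        }
    }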



110、单词 ejb 的匹配文本:


  1. Currently, he is developing web applications using the latest Java technologies on the market (EJB 3.0, CDI, Spring, JSF, Struts, Pro Java 7 NIO.2


  2. Similar access to databases is provided to JSP and JSF applications, and servlets by the Web Container, and to enterprise beans by the EJB Container. Java EE 7 Specification


  3. Transactional annotation brings transactional semantics to POJOs (plain old Java objects), outside of an EJB (Enterprise JavaBean) Java EE 7 Essentials


  4. We’d like to address a common confusion before going any further: The JavaBeans that we discuss in this chapter have little in common with Enterprise JavaBeans (EJB) Core Java Volume II Advanced Features 9th Edition


  5. He has written numerous publications and has also written the EJB 3.1 Cookbook. Java 7 New Features Cookbook


  6. Particular care should be taken when determining applicable requirements based on the presence of EJB Lite in the Web Profile. Java EE 7 Web Profile Specification


  7. However, in their book J2EE Performance Testing with BEA WebLogic Server (Expert Press), Peter Zadrozny, Philip Aston, and Ted Osborne state that there is no real difference in performance between these two options, as long as the beans are explicitly removed from the container when the session expires.[2] The beans are removed automatically by binding the session to a session listener that calls ejbRemove( ) when the session terminates. Java Performance Tuning 2nd Edition


  8. Technologies such as those discussed in chapter 2 (COM, CORBA, DCE, and EJB) using RPC are considered to be tightly coupled. ActiveMQ in Action


  9. A special thanks goes to Rahul Biswas for providing content relating to EJB and Java persistence and also for his willingness to review multiple drafts and provide valuable feedback. Java Performance


  10. The Enterprise JavaBean (EJB) 3.0 model used interfaces for interacting with methods that performed database work. Java 7 Recipes A Problem-Solution Approach



111、单词 manning 的匹配文本:


  1. Manning Early Access (MEAP) readers who posted comments in the Author Online forum. Mahout in Action


  2. Copyright 2014 Manning Publications For more information on this and other Manning titles go to. Netty in Action MEAP V08


  3. Recognizing the importance of preserving what has been written, it is Manning’s policy to have the books we publish printed on acid-free paper, and we exert our best efforts to that end. Gradle in Action


  4. Without his recommendation, Manning would not have come knocking on our inboxes, and we would never have written a book together. RabbitMQ in Action


  5. Toward the end of 2010, the idea of writing HBase in Action for Manning came up. HBase in Action


  6. I’m also indebted to Cynthia Kane, my development editor at Manning, who coached me through writing this book and provided invaluable feedback on my work. Hadoop in Practice


  7. Recognizing also our responsibility to conserve the resources of our planet, Manning books are printed on paper that is at least 15 percent recycled and processed without the use of elemental chlorine. Hadoop in Action


  8. Where those designations appear in the book, and Manning Publications was aware of a trademark claim, the designations have been printed in initial caps or all caps. ActiveMQ in Action


  9. Please post comments or corrections to the Author Online forum at http://www.manning-sandbox.com/forum.jspa?forumID=451 Lucene in Action 2nd Edition


  10. The purchase of MongoDB in Action includes free access to a private forum run by Manning Publications where you can make comments about the book, ask technical questions, and receive help from the author and other users. MongoDB in Action



112、单词 querying 的匹配文本:


  1. They could be built quickly, without having to worry about drivers, languages, or indeed many of the complexities of querying and retrieving information. Getting Started with CouchDB


  2. These ports are used by gweb, which is usually installed on the same host as gmetad, so unless you’re using some of the advanced integration features, such as Nagios integration, or have custom scripts querying gmetad, you shouldn’t need any firewall ACLs for gmetad. Monitoring with Ganglia


  3. In the blog app, we use group_level reduce queries to compute the count of comments both on a per-post and total basis, achieved by querying the same view index with different methods. CouchDB The Definitive Guide


  4. Extend your knowledge on ElasticSearch, and querying and data handling, along with its internal workings. Mastering ElasticSearch


  5. Hive is a data warehouse infrastructure built on top of Hadoop that provides tools to enable easy data summarization, ad hoc querying, and analysis of large data sets stored in Hadoop files. Cassandra High Performance Cookbook


  6. Although it might be suitable and fast for general-purpose querying, it starts to become more cumbersome when you try to perform aggregation and reporting. Beginning CouchDB


  7. Don’t use the include_docs parameter (or set it to false) when querying views. Scaling CouchDB


  8. Query time search-as-you-type, Index time optimizations, Ngrams for partial matching, Index time search-as-you-type, Preparing the index, Querying the field, Edge n-grams and postcodes, Ngrams for compound words. Elasticsearch The Definitive Guide


  9. We present an overview of what’s different about Cassandra’s model for querying and updating data, and then get to work using the API. Cassandra The Definitive Guide


  10. The Geronimo deployer is a command-line tool for querying and installing Geronimo plug-ins. ActiveMQ in Action



113、单词 junit 的匹配文本:


  1. By the end of this chapter, you’ll write tests with JUnit, TestNG, and Spock and execute them as part of the build lifecycle. Gradle in Action


  2. Let's write a simple JUnit test for a sample Java class. Gradle Effective Implementation Guide


  3. The code For this chapter, we created the Maven project (http://maven.apache.org) and every aspect we described is connected with a particular JUnit (http://junit.org/) test in that project. Mastering ElasticSearch


  4. From the JUnit installation, copy the junit.jar into your library path. Cassandra High Performance Cookbook


  5. MRUnit is built on top of the popular JUnit testing framework. Hadoop RealWorld Solutions Cookbook


  6. The name is a unique label for the module itself, and is often the same as the project name (like solr-core, spring-web, or junit) Gradle Beyond The Basics


  7. One of the powerful features of JUnit and other test frameworks is that when tests fail, the failure message includes details on the cause of the failure. Hadoop in Practice


  8. For running unit tests we use junit, as well as a couple of helper libraries: hamcrest-all provides useful matchers for writing test assertions, and mrunit is used for writing MapReduce tests. Hadoop:The Definitive Guide 3rd Edition


  9. MRUnit takes the unit test model of its namesakes such as JUnit and DBUnit and provides a framework to help write and execute tests that can help improve the quality of your code. Hadoop Beginners Guide


  10. This facilitates compiled tests being able to reference JUnit or TestNG annotations or base classes. Building and Testing with Gradle
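
A minimal JUnit 4 test of the kind the quotes describe, assuming junit is on the classpath; the method under test is invented and inlined to keep the sketch self-contained:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class StringUtilsTest {
        // Hypothetical method under test, inlined here for a self-contained example.
        static String reverse(String s) {
            return new StringBuilder(s).reverse().toString();
        }

        @Test
        public void reversesCharacters() {
            assertEquals("cba", reverse("abc"));
        }

        @Test
        public void emptyStringIsUnchanged() {
            assertEquals("", reverse(""));
        }
    }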



114、单词 workflow 的匹配文本:


  1. Let’s examine the workflow of the Songbird backup application, first as a user backing up from a single computer, and then using Songbird to synchronize playlists between multiple computers. CouchDB The Definitive Guide


  2. Figure 7.13 demonstrates the page workflow you’ll need to script. Gradle in Action


  3. Still worse, instead of losing a single workflow, a JobTracker failure caused an outage that would lose all the running jobs in a cluster and require users to manually recover their workflows. Apache Hadoop YARN Yet Another Resource Negotiator


  4. This is an easy way to begin sketching out and prototyping code whose final form you do not yet know, which is a typical workflow in the development of new forms of build automation. Gradle Beyond The Basics


  5. Often times we require multiple MapReduce applications to be executed in a workflow-like manner to achieve our objective. Hadoop MapReduce Cookbook


  6. Space considerations do not allow us to demonstrate every feature therein, but a typical user workflow is shown in Figure 6 Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. Within a few months we’d built and deployed a MapReduce application encompassing a number of MapReduce jobs, woven together with our own MapReduce workflow management system onto a small cluster of 18 nodes. Hadoop in Practice


  8. The output of a workflow is some subset of the input pages along with a set of index terms. Hadoop in Action


  9. Looking forward, Imagini hopes to simplify their workflow once Hadoop Streaming becomes available with Cassandra. Cassandra The Definitive Guide


  10. Once again, using any of the multiple interfaces (web console, CLI, or API), a Hadoop workflow is defined with attributes such as the number of Hadoop hosts required and the location of the source data. Hadoop Beginners Guide



115、单词 gui 的匹配文本:


  1. There are two ways to create/modify views: one is via the web GUI, and the other by programatically defining views using JSON.i Creating views using the GUI. Monitoring with Ganglia


  2. The NameNode web GUI is not known to work correctly with Hadoop 0.19.0 under the virtual cluster. Pro Hadoop


  3. The use case starts out as a plain Java application without a GUI, simply controlled through console input. Gradle in Action


  4. Sometimes if a GUI web browser is not available, we can use a command line based web browser such as elinks and lynx. Hadoop Operations and Cluster Management Cookbook


  5. We will write our first Gradle script and learn about the command-line and GUI features of Gradle. Gradle Effective Implementation Guide


  6. Her interests and experiences lie across multiple tiers, from web frontend GUI to middleware, from middleware to backend SQL RDBMS and NoSQL data storage. Cassandra High Performance Cookbook


  7. It functions much like a GUI-based FTP client in managing remote storage. Hadoop in Action


  8. A GUI will start and load the JConsole tool that ships with Java. Cassandra The Definitive Guide


  9. In addition to concurrency constructs, this example illustrates a few of the issues encountered when using threads with any GUI-based program. Concurrent Programming in Java Design Principles and Pattern


  10. Gradle must drive the initiation of the build, say from the command line, IDE, or GUI, but can be either the subordinate or dominant player in the actual XML and Gradle build script flow. Building and Testing with Gradle



116、单词 faceting 的匹配文本:


  1. It consists of different UI templates, request/response handlers, and different faceting provided by Solr. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  2. You should also know how to send queries to get the documents you are interested in, how to narrow down the results of your queries by using filtering, and how to calculate statistics for your data with the use of the faceting/aggregation mechanism. Mastering ElasticSearch


  3. Solr supports complex search criteria, faceting, result highlighting, query-completion, query spellcheck, relevancy tuning, and more. Solr 3 Enterprise Search Server


  4. Chapter 6, Using Faceting Mechanism: This chapter will show you the beautiful world of the Apache Solr faceting mechanism, including tasks like getting the number of documents with the same field value, matching the same query, matching given range values, and so on. Solr 3.1 Cookbook


  5. Looking ahead, in part 2, we'll cover Elasticsearch's core functionality when it comes to dealing with data: indexing, searching, analysis, and faceting. Elasticsearch in Action MEAP V07 cHAPTER 1


  6. Once the user performs a query, the results can be further categorized using Solr’s faceting support to show features of the documents in the result set. Solr in action


  7. You will learn how to deal with the problems in Solr configuration and setup, how to handle common querying problems, how to fine-tune Solr instances, how to set up and use SolrCloud, how to use faceting and grouping, fight common problems, and many more things. Apache Solr 4 Cookbook



117、单词 gmond 的匹配文本:


  1. Look at each instance of gmond within the same network broadcast domain (since reachability is based on multicast packets), and find the udp_recv_channel block in gmond.conf. Apache Flume Distributed Log Collection for Hadoop


  2. We will install Ganglia Monitoring Daemon (Gmond) on each node in the cluster, which will gather the server and HBase metrics of that node. HBase Administration Cookbook


  3. As depicted in Figure 1-1, Ganglia is architecturally composed of three daemons: gmond, gmetad, and gweb. Monitoring with Ganglia


  4. Ganglia is composed of two servers: the gmetad server, which provides historical data and collects current data, and the gmond server, which collects and serves current statistics. Pro Hadoop


  5. Start the gmond daemon with the following command on the master node: sudo service gmond start. Hadoop Operations and Cluster Management Cookbook


  6. The process that feeds metrics to your statistics system, such as collectd and gmond, probably already runs on the storage node. bk-admin-guide-cloud-latest


  7. The Ganglia monitoring daemon (gmond) should be running on every worker node in the cluster. Hadoop RealWorld Solutions Cookbook


  8. Multicast Versus Unicast While the default communication method between monitoring daemons (gmond) is UDP multicast messages, you may encounter environments where multicast is either not possible or a limiting factor. HBase:The Definitive Guide


  9. It works by running a small monitoring daemon on each host called gmond that collects metrics locally. Hadoop Operations



118、单词 endpoint 的匹配文本:


  1. The syslog TCP source As previously mentioned, the syslog TCP source provides an endpoint for messages over TCP, allowing for a larger payload size and TCP retry semantics that should be used for any reliable inter-server communications. Apache Flume Distributed Log Collection for Hadoop


  2. A maximal matching is one to which no additional edge can be added without sharing an endpoint. Pregel  -  A System for Large-Scale Graph Processing


  3. Bulk Document Inserts The fastest mode for importing data into CouchDB via HTTP is the _bulk_docs endpoint. CouchDB The Definitive Guide


  4. It’s responsible for accepting HTTP requests, executing a CRUD operation mapped to a URL endpoint, and forwarding the request to a JSP. Gradle in Action


  5. MultiGet The MultiGet operation is available via the _mget endpoint and allows fetching several documents using a single request. Mastering ElasticSearch


  6. As soon as a natural endpoint finds data, it is returned to the client. Cassandra High Performance Cookbook


  7. Let’s look at how to write a MapReduce job to pull data from an HTTP endpoint: Hadoop in Practice


  8. You can see that we’re still using index megacorp and type employee, but instead of specifying a document ID, we now use the _search endpoint. Elasticsearch The Definitive Guide


  9. Set the environment variable EC2_URL to the service endpoint of a different region if you choose to. Hadoop in Action


  10. When a server node is started, it registers itself with the gossiper to receive endpoint state information. Cassandra The Definitive Guide



119、单词 tuples 的匹配文本:


  1. In particular, Java provides poor support for simple anonymous functions and heterogeneous tuples, which leads to verbosity and some loss of static type safety. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  2. Finally, we used the FLATTEN operator to unnest the Tuples in the DataBags emitted from the UDF. Hadoop RealWorld Solutions Cookbook


  3. Compound types include arrays, maps (an overloaded term in this paper), and tuples. Interpreting the Data Parallel Analysis with Sawzall


  4. What is Hadoop? The role of the programmer is to define map and reduce functions, where the map function outputs key/value tuples, which are processed by reduce functions to produce the final output. Hadoop in Practice


  5. The result of the LOAD operator, and indeed any operator in Pig Latin, is a relation, which is just a set of tuples. Hadoop:The Definitive Guide 3rd Edition


  6. To support this diversity of data, Pig supports complex data types, such as bags and tuples that can be nested to form fairly sophisticated data structures. Hadoop in Action


  7. It was new, with strange new vocabulary and terms such as “tuples”—familiar words used in a new and different manner. Cassandra The Definitive Guide


  8. Tenzing currently has support for creating functions in Sawzall that take tables (vector of tuples) as input and emit tables as output. Tenzing - A SQL Implementation On The MapReduce Framework


  9. Bigtable presents a multi-dimensional sorted map to users: keys are (row, column, timestamp) tuples. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications


  10. The bottom, colored segment of the bars represents the time taken by HadoopDB when hand-coded and the rest of the bar indicates the additional overhead as a result of the SMS planner (e.g., SMS jobs need to serialize tuples retrieved from the underlying database and deserialize them before further processing in Hadoop) HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
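
Example 1 above points out Java's weak support for heterogeneous tuples; the usual workaround is a small generic pair class like the sketch below (the class name and demo values are invented):

    public final class Pair<A, B> {
        private final A first;
        private final B second;

        public Pair(A first, B second) {
            this.first = first;
            this.second = second;
        }

        public A first()  { return first; }
        public B second() { return second; }

        public static void main(String[] args) {
            // A key/value "tuple" such as a MapReduce map function might emit.
            Pair<String, Integer> wordCount = new Pair<>("hadoop", 3);
            System.out.println(wordCount.first() + " -> " + wordCount.second());
        }
    }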



120、单词 standalone 的匹配文本:


  1. Tiering Flume agents is covered using Avro serialization, as well as using the Flume command line as a standalone Avro client for testing and importing data manually. Apache Flume Distributed Log Collection for Hadoop


  2. The installed CouchDB can be run both standalone and also as a service. Getting Started with CouchDB


  3. Primarily, we discuss two ways in which to extend Ganglia, including the development and deployment of additional metric modules, and the use of a standalone utility called gmetric. Monitoring with Ganglia


  4. The goal here is to provide a way to build standalone apps that can also be easily indexed by search engines and used via screen readers. CouchDB The Definitive Guide


  5. Standalone writes are implemented as read-write transactions; non-snapshot standalone reads are implemented as read-only transactions. Spanner - Google’s Globally-Distributed Database


  6. And it packages up all of this functionality into a standalone server that your. Elasticsearch The Definitive Guide


  7. What’s in This Book? This book is designed with the chapters acting, to a reasonable extent, as standalone guides. Cassandra The Definitive Guide


  8. There is also an API to directly execute queries on the pool, and a standalone binary which does not need any server side components, but rather launches its own MapReduce jobs. Tenzing - A SQL Implementation On The MapReduce Framework


  9. Figure 2.2 Standalone terminals and applications connecting to a mainframe using many protocols. ActiveMQ in Action


  10. Though many of these features look Maven-like in their naming and function, these are behaviors desired by nearly any Java project, even standalone single-JAR Java applications. Building and Testing with Gradle



121、单词 ssh 的匹配文本:


  1. It is best to ensure that all nodes are set up the same way using a common shell script, executed via a parallel SSH client or configuration management tool. Monitoring with Ganglia


  2. To establish a secure SSH connection between your computer and GitHub, you’ll need to generate SSH keys and add the public key to your GitHub account. Gradle in Action


  3. The output tells us that the local machine has the following services running: ftp, ssh, smtp, rpcbind (service for remote procedure calls), and jpp (service for Java packaging) Hadoop Operations and Cluster Management Cookbook


  4. The repository can be accessible via the file system, HTTP, SSH, or other protocols. Gradle Effective Implementation Guide


  5. The recipe Connecting with JConsole over a SOCKS Proxy shows how to use JConsole with a host only reachable by SSH. Cassandra High Performance Cookbook


  6. We will do this using public key authentication, but you could instead use ssh-agent or rsh. Scaling CouchDB


  7. If you are using Windows machines, install Cygwin and SSH server in each machine. Hadoop MapReduce Cookbook


  8. When you start Hadoop it’ll launch NameNode and JobTracker on the local host from which you issued the start command, and then SSH to all the nodes in this file to launch the SecondaryNameNode. Hadoop in Practice


  9. If a Hadoop cluster is already available to you, you can skip the next section on how to set up Secure Shell (SSH) channels between nodes. Hadoop in Action


  10. Time for action – setting up SSH Carry out the following steps to set up SSH: Hadoop Beginners Guide



122、单词 kafka 的匹配文本:


  1. This paper discusses the design and engineering problems we encountered in moving LinkedIn’s data pipeline from a batch-oriented file aggregation mechanism to a real-time publish-subscribe system called Kafka. Building LinkedIn’s Real-time Activity Data1


  2. Apache Kafka) and supports all three types of the Trident spout transaction. Storm Blueprints Patterns for Distributed Real-time Computation


  3. Getting ready Before we get started, we need to install a message broker that more readily supports the transactional logic inherent in Storm''s Trident, specifically Apache Kafka. Storm Real-Time Processing Cookbook


  4. Set up Apache Kafka clusters and develop custom message producers and consumers using practical, hands-on examples. Apache Kafka


  5. We introduce Kafka, a distributed messaging system that we developed for collecting and delivering high volumes of log data with low latency. Kafka a Distributed Messaging System for Log Processing
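
The quotes above describe Kafka as a distributed publish-subscribe system for activity and log data. As a rough sketch only (it uses the current org.apache.kafka Java client rather than the older APIs covered by the quoted books, and the broker address and topic name are placeholders), a producer publishing one event might look like this:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class KafkaProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // placeholder broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Publish one activity event to a hypothetical topic.
                producer.send(new ProducerRecord<>("activity-events", "user42", "page_view"));
                producer.flush();
            }
        }
    }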



123、单词 rmi 的匹配文本:


  1. How to write server-side Java applications for the WWW • Distributed computing technologies including remote method invocation (RMI) and. Java Network Programming and Distributed Computing


  2. Applications that use RMI typically have trouble running on more secure networks. Cassandra High Performance Cookbook


  3. The RMI-IIOP subsystem is composed of APIs that allow for the use of RMI-style programming that is independent of the underlying protocol, as well as an. Java EE 7 Specification


  4. I felt like we’d need to precisely control how things like connections, timeouts and buffers are handled, and RMI gives you little control over those. Hadoop:The Definitive Guide 3rd Edition


  5. Communication between the client and the middle tier can occur through HTTP (when you use a web browser as the client) or another mechanism such as remote method invocation (RMI, see Chapter 11) Core Java Volume II Advanced Features 9th Edition


  6. Throwable implements Serializable so exceptions from remote method invocation (RMI) can be passed from server to client. Effective Java 2nd Edition


  7. Notable omissions are the RMI and IDL tools that are specific to enterprise programming and which are documented in Java Enterprise in a Nutshell (O’Reilly) Java in a Nutshell 5th Edition


  8. In addition, Java makes heavy use of nonstandard custom schemes such as rmi, jar, jndi, and doc for various purposes. Java Network Programming 4th Edition


  9. Such non-JMS APIs could consist of utilizing the CORBA IIOP protocol or some other native protocol beyond Java RMI. ActiveMQ in Action


  10. By convention, this notion is further restricted to constructs affecting a single JVM, as opposed to distributed programming, for example using remote method invocation (RMI), that involves multiple JVMs residing on multiple computer systems. Concurrent Programming in Java Design Principles and Pattern
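
A compact sketch of RMI's two halves, as the quotes describe them: a Remote interface whose methods declare RemoteException, and a client that looks up a stub in an RMI registry. The registry host, port, and binding name are placeholders; a matching server that exports and binds the object is assumed to exist elsewhere.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;

    // The remote interface: every method must declare RemoteException.
    interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    public class RmiClientSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder registry host; 1099 is the conventional registry port.
            Registry registry = LocateRegistry.getRegistry("registryhost", 1099);
            Greeter greeter = (Greeter) registry.lookup("greeter");
            System.out.println(greeter.greet("world"));
        }
    }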



124、单词 bytecode 的匹配文本:


  1. The Java compiler does this by generating bytecode instructions which have nothing to do with a particular computer architecture. Core Java Volume I Fundamentals 9th Edition


  2. Additionally, bytecode swap technologies like JRebel can be configured to perform hot deployment for class file changes. Gradle in Action


  3. As long as the code is compiled into bytecode, we are fine. Gradle Effective Implementation Guide


  4. For example, if you need to hack an old, unsupported vendor JAR at the bytecode level, or otherwise have a highly custom. Gradle Beyond The Basics


  5. The reason it was discouraged is that older compilers would generate bytecode using the StringBuffer class, whose synchronization slowed down its execution. Hadoop in Practice


  6. However, the bytecode format used in the class files is well documented, and it is an easy matter for someone with experience in assembly programming and a hex editor to manually produce a class file containing valid but unsafe instructions for the Java virtual machine. Core Java Volume II Advanced Features 9th Edition


  7. Besides being more readable and concise than using multiple catch blocks, the generated bytecode is also smaller and does not result in the generation of duplicate code. Java 7 New Features Cookbook


  8. To use the UDF inside Hive, compile the Java code and package the UDF bytecode class file into a JAR file. Programming Hive


  9. Here is one possible execution trace, showing only the reads and writes to variable n that would result from the putfields and getfields in compiled bytecode. Concurrent Programming in Java Design Principles and Pattern


  10. Platform independent; works by bytecode instrumentation using ObjectWeb ASM. BTrace Java Platform Observability by Bytecode Instrumentation



125、单词 datagram 的匹配文本:


  1. Your operating system and network aren’t hostile to multicast and/or User Datagram Protocol (UDP) Monitoring with Ganglia


  2. Each datagram is treated as a discrete unit, unrelated to any other previously sent packet—there are no "connections" between machines at the network layer. Java Network Programming and Distributed Computing


  3. Common bulk command sends documents by HTTP protocol and UDP bulk sends these using connectionless datagram protocol. Mastering ElasticSearch


  4. Note too that the receive( ) method of the DatagramSocket class in Java 2 now calls the checkAccept( ) rather than the checkConnect( ) method. Java Security 2nd Edition


  5. In this book, we cover only the Transmission Control Protocol (TCP).The Java platform also supports the User Datagram Protocol (UDP), which can be used to send packets (also called datagrams) with much less overhead than that of TCP. Core Java Volume II Advanced Features 9th Edition


  6. If, in addition, you don’t need a guarantee that your data will get there or that the packets of data will arrive in the order you sent them, you may be interested in the DatagramSocket and DatagramPacket classes: Java in a Nutshell 5th Edition


  7. Chapter 12, UDP, introduces the User Datagram Protocol (UDP) and the associated DatagramPacket and DatagramSocket classes that provide fast, unreliable communication. Java Network Programming 4th Edition


  8. User Datagram Protocol (UDP) along with TCP make up the core of internet protocols. ActiveMQ in Action


  9. A ‘datagram’ is a single transmission which may be delivered zero or more times. Fundamental Networking in Java


  10. Solution Make use of datagram multicasting using the DatagramChannel class. Java 7 Recipes A Problem-Solution Approach
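
A minimal, self-contained Java sketch (not taken from the books above) that sends one UDP datagram; the target host (localhost) and port (9876) are placeholders, and delivery is neither guaranteed nor ordered, as the examples describe.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Sends a single UDP datagram: a connectionless, self-contained unit of data.
public class DatagramSender {
    public static void main(String[] args) throws Exception {
        byte[] payload = "hello".getBytes("UTF-8");
        InetAddress target = InetAddress.getByName("localhost"); // placeholder receiver
        DatagramPacket packet = new DatagramPacket(payload, payload.length, target, 9876);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(packet); // fire-and-forget: no delivery or ordering guarantee
        }
    }
}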



126. Matched text for the word hostname:


  1. The headers are key/value pairs that can be used to make routing decisions or carry other structured information (such as the timestamp of the event or hostname of the server where the event originated) Apache Flume Distributed Log Collection for Hadoop


  2. The UI allowed for filtering by hostname and could limit the total number of hosts displayed. Monitoring with Ganglia


  3. It tells the server the hostname that came with the request. CouchDB The Definitive Guide


  4. For a joining node to receive data from other nodes, do not include its hostname or IP in its seed list. Cassandra High Performance Cookbook


  5. All client hosts must be able to resolve the hostname of the machine running the tests. Scaling CouchDB


  6. The protocol, hostname and port of any node in the cluster. Elasticsearch The Definitive Guide


  7. We can set two servers to be seeds just by indicating the IP address or hostname of this node and then adding our second instance: Cassandra The Definitive Guide


  8. The /etc/hosts file must also contain an entry for the real IP address and the hostname for a proper configuration. ActiveMQ in Action


  9. For example, in Webtable, pages in the same domain are grouped together into contiguous rows by reversing the hostname components of the URLs. Bigtable - A Distributed Storage System for Structured Data


  10. See Task Types for a description of the most important builtin task types.) The task declares properties (i.e., hostname, database, sql, etc.) in conventional Groovy idiom. Building and Testing with Gradle
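
A short Java sketch illustrating both directions in which the examples use the term: reading the local machine's hostname and resolving a remote hostname to an address (example.com is just a placeholder host).

import java.net.InetAddress;

// Looks up the local hostname, then resolves a (placeholder) remote hostname.
public class HostnameDemo {
    public static void main(String[] args) throws Exception {
        System.out.println("local hostname: " + InetAddress.getLocalHost().getHostName());

        InetAddress remote = InetAddress.getByName("example.com"); // placeholder host
        System.out.println(remote.getHostName() + " -> " + remote.getHostAddress());
    }
}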



127. Matched text for the word clojure:


  1. I’ve written this book for experienced Java programmers who are interested in learning how to manage and make use of concurrency on the JVM, using languages such as Java, Clojure, Groovy, JRuby, and Scala. Programming Concurrency on the JVM


  2. Even though it wasn’t part of the core Maven project, a similar approach was proposed by the project Maven Polyglot that allows you to write your build definition logic, which is the project object model (POM) file, in Groovy, Ruby, Scala, or Clojure. Gradle in Action


  3. You may have heard of the term closure and may be wondering whether lambdas meet the definition of a closure (not to be confused with the Clojure programming language) Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. One route is a transactional memory system, such as the one that comes with the Clojure language. NoSQL Distilled


  5. Jython is an implementation of Python on the JVM, and there are entirely unique languages like Clojure and Scala as well. HBase in Action


  6. You can write MapReduce jobs in Clojure while accessing HBase tables. HBase:The Definitive Guide


  7. Seven Languages in Seven Weeks In this book you’ll get a hands-on tour of Clojure, SQL Antipatterns


  8. He envisages himself using functional languages in projects and the decision to learn Clojure (a little bit of JRuby, Scala, F#, and Dart lately) influences his current self-learning activities. Java 7 New Features Cookbook


  9. All are JVM (Java Virtual Machine) libraries that can be used from programming languages like Java, Clojure, Scala, JRuby, Groovy, and Jython, as opposed to tools with their own languages, like Hive and Pig. Programming Hive


  10. In addition to backing pure Java apps, the Java driver also forms the basis for the drivers powering JVM languages like Scala, Clojure, and JRuby. MongoDB in Action



128. Matched text for the word descriptor:


  1. You’ll need to learn about Mojos (Maven’s internal extension API), how to provide a plugin descriptor (again in XML), and about specific annotations to provide the data needed in your extension implementation. Gradle in Action


  2. Ivy fully supports Maven descriptor files and repositories; it also adds some extra functionality. Gradle Effective Implementation Guide


  3. If you look closely at the assembly plugin configuration, you'll notice that we specify the assembly descriptor, called release.xml, in the assembly directory. Mastering ElasticSearch


  4. Each Maven coordinate, whether it corresponds to any code, source, or JavaDocs at all, always describes an XML descriptor file called a POM (Project Object Model) Gradle Beyond The Basics


  5. This behavior is similar to the fsync system call in POSIX that commits buffered data for a file descriptor. Hadoop:The Definitive Guide 3rd Edition


  6. Open() opens a named file or directory to produce a handle, analogous to a UNIX file descriptor. Chubby - The lock service for loosely-coupled distributed systems


  7. The first row has the column descriptor “CITED” from the original file. Hadoop in Action


  8. The database descriptor logs are a good thing, as this indicates that the operation loaded your schemas. Cassandra The Definitive Guide


  9. Truncating the file doesn’t work because the JVM will continue to write to the open file descriptor until the daemon is restarted. Hadoop Operations


  10. Instead of processing messages one at a time, MDBs can process multiple messages at the same time because the EJB container can create as many instances of the MDBs as are allowed by the EJB’s deployment descriptor. ActiveMQ in Action



129. Matched text for the word lifecycle:


  1. We’ll touch on how these objects map to the corresponding classes in the Gradle API, Gradle’s build lifecycle, the incremental build feature, and the mechanics of registering lifecycle hooks. Gradle in Action


  2. We notice that only the LIFECYCLE, WARN, QUIET, and ERROR log levels are shown if we don't add any extra command-line options. Gradle Effective Implementation Guide


  3. The ApplicationMaster is the “head” of a job, managing all lifecycle aspects including dynamically increasing and decreasing resources consumption, managing the flow of execution (e.g., running reducers against the output of maps), handling faults and computation skew, and performing other local optimizations. Apache Hadoop YARN Yet Another Resource Negotiator


  4. The life cycle of Java object and garbage collections In order to see how garbage collector works, let's get through the lifecycle of a sample Java object. Mastering ElasticSearch


  5. Cassandra is a distributed system, and distributed deletes have a complex lifecycle of their own. Cassandra High Performance Cookbook


  6. We will cover four discrete areas of Gradle functionality: file operations, custom Gradle plug-ins, build lifecycle hooks, and dependency management. Gradle Beyond The Basics


  7. States The ZooKeeper object transitions through different states in its lifecycle (see Figure 14-3) Hadoop:The Definitive Guide 3rd Edition


  8. Common issues, such as the data lifecycle, that need to be considered regardless of technology. Hadoop Beginners Guide


  9. This class is used to configure the broker and manage its entire lifecycle. ActiveMQ in Action


  10. In Maven, a plug-in is a means of extending the tool with a particular, fine-grained action to be performed during your build, often associated with one or more phases in Maven’s elaborate lifecycle model. Building and Testing with Gradle



130. Matched text for the word optimizations:


  1. In this section we discuss the optimizations and innovations that make Paxos practical for our system. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. Its premise is zero configuration, heavy optimizations, and a beer theme. CouchDB The Definitive Guide


  3. A number of optimizations in our system are therefore targeted at reducing the amount of data sent across the network: the locality optimization allows us to read data from local disks, and writing a single copy of the intermediate data to local disk saves network bandwidth. MapReduce - Simplied Data Processing on Large Clusters


  4. Similarly, the optimizer does not modify any user code as part of its optimizations. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  5. Many optimizations can be put in place to not affect incoming read requests. Cassandra A Decentralized Structured Storage System


  6. Performance tuning of large scale systems can sometimes result in counterintuitive optimizations. Scaling CouchDB


  7. Query time search-as-you-type Index time optimizations Ngrams for partial matching Index time search-as-you-type Preparing the index Querying the field Edge n-grams and postcodes Ngrams for compound words Elasticsearch The Definitive Guide


  8. This eliminates the need for any sophisticated schema design and allows Amazon to put effort toward low-latency and high-throughput performance optimizations and their other primary goals. Cassandra The Definitive Guide


  9. It presents some general tuning techniques covering such topics as persistent versus nonpersistent messages, transactions, embedded brokers, tuning the wire level protocol, tuning the TCP transport, and some optimizations for message producers and message consumers. ActiveMQ in Action


  10. Locality groups do not support CPUcache-level optimizations, such as those described by Ailamaki [2] Bigtable - A Distributed Storage System for Structured Data



131. Matched text for the word repositories:


  1. You’ll find that some RPM-based distributions ship with Ganglia packages in the base repositories, and others require you to use special-purpose package repositories, such as the Red Hat project’s EPEL (Extra Packages for Enterprise Linux) repository. Monitoring with Ganglia


  2. You’ll find additional references to source code repositories that either take some examples from the book to the next level or demonstrate the use of Gradle in contexts not covered in the book. Gradle in Action


  3. We will also see how we can use repositories with dependencies in our build scripts. Gradle Effective Implementation Guide


  4. That's right; you don't need to manually download dependencies if they are available in one of the available Maven repositories. Mastering ElasticSearch


  5. Licensing issues prevents distribution of the Oracle JVM in most RPM repositories. Cassandra High Performance Cookbook


  6. Examples include telephone call records, network logs, and web document repositories. Interpreting the Data Parallel Analysis with Sawzall


  7. You may prefer this scheme over repositories like Maven Central on principle. Gradle Beyond The Basics


  8. The Dapper team also needs to control the total size of data written to its central repositories, though, and thus we incorporate a second round of sampling for that purpose. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  9. Nexus provides many benefits, one of which is a proxy to remote Maven repositories with a local cache of all downloaded artifacts. ActiveMQ in Action


  10. It also offers a bridge, in the form of a dependency management definition graph, to Maven and Ivy artifact repositories without demanding remote repositories. Building and Testing with Gradle



132. Matched text for the word reducers:


  1. It assumes that adding more workers results in an idealized linear speedup (modulo dependencies between mappers and reducers), up to the point where map activities and all reduce activities respectively run in parallel. Omega - flexible, scalable schedulers for large compute clusters


  2. When a job finishes, final results will reside in multiple files, depending on the number of reducers used in the job. Hadoop Operations and Cluster Management Cookbook


  3. The ApplicationMaster is the “head” of a job, managing all lifecycle aspects including dynamically increasing and decreasing resources consumption, managing the flow of execution (e.g., running reducers against the output of maps), handling faults and computation skew, and performing other local optimizations. Apache Hadoop YARN Yet Another Resource Negotiator


  4. For Reducers that first combine all the values with a given key using an associative, commutative operation, a separate userdefined Combiner function can be specified to perform partial combining of values associated with a given key during the Map phase. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  5. This is useful when retrieving the output of a MapReduce computation with multiple reducers where each reducer produces a part of the output. Hadoop MapReduce Cookbook


  6. Your reducers need to be able to generate a line for each word in your input, so your map output key should be each word in the input files so that MapReduce can join them all together. Hadoop in Practice


  7. Under the MapReduce model, the data processing primitives are called mappers  and reducers. Hadoop in Action


  8. Using Hadoop Streaming, they use PHP to MapReduce over their data and output directly to Cassandra via Thrift in their reducers. Cassandra The Definitive Guide


  9. Hadoop provides a standard specification (that is, interface) for the map and reduce functions, and implementations of these are often referred to as mappers and reducers. Hadoop Beginners Guide


  10. Reducers, as we’ll see later, operate on the intermediate key-value data we output from the mapper. Hadoop Operations
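
A hedged sketch of a word-count style reducer using Hadoop's newer org.apache.hadoop.mapreduce API: all values the mappers emitted for one key arrive together and are folded into a single output record. The class name SumReducer is illustrative, not taken from any of the cited books.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sums all counts emitted for a given word and writes one (word, total) pair.
public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}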



133. Matched text for the word dao:


  1. Because all the necessary steps are implemented, each calculator can implement the overloaded calculate(s: DAO, productId: String) method. Scala in Action


  2. Consider the case where you have a data access object (DAO) component that performs data operations against a PostgreSQL database and you want to upgrade to Oracle. Pro Spring 3


  3. For several use cases, it can even eliminate the need to write our own implementation of a data access object (DAO) or repository. Pro Spring MVC


  4. For example, if you want to ensure that a particular Data Access Object (DAO) in your application is called only by an administrator user, you can’t do that with web security alone. Pro Spring Security


  5. The model, DAO, and command-line implementations look similar to what you’ve seen for the users table. HBase in Action


  6. It depends on a favorites service to pull back a list of favorite movies and a movie DAO to fetch details about individual movies: Functional Programming Patterns in Scala and Clojure


  7. The relationship between a model and a DAO like Active Record should. SQL Antipatterns


  8. So you could create a DAO with a get method as shown here: Cassandra The Definitive Guide


  9. Use a dirty flag where supported by the EJB server or in a BMP or DAO implementation to avoid writing unchanged EJBs to the database. Java Performance Tuning 2nd Edition


  10. It should be simple to convert one to the other automatically.” This is a thought we’ve probably all had at one point or another while writing yet another Data Access Object (DAO) to convert Java Database Connectivity (JDBC) result sets into something object-oriented. Pro JPA2
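
A minimal sketch of the Data Access Object pattern the examples describe: callers depend on a small interface, and the concrete implementation (JDBC, JPA, Cassandra, and so on) can be swapped without touching them. Movie, MovieDao and InMemoryMovieDao are illustrative names only; the in-memory map stands in for a real datastore.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class DaoDemo {

    static class Movie {
        final String id;
        final String title;
        Movie(String id, String title) { this.id = id; this.title = title; }
    }

    // The narrow interface that callers program against.
    interface MovieDao {
        Optional<Movie> findById(String id);
        void save(Movie movie);
    }

    // In-memory stand-in for a database-backed implementation.
    static class InMemoryMovieDao implements MovieDao {
        private final Map<String, Movie> store = new HashMap<>();
        public Optional<Movie> findById(String id) { return Optional.ofNullable(store.get(id)); }
        public void save(Movie movie) { store.put(movie.id, movie); }
    }

    public static void main(String[] args) {
        MovieDao dao = new InMemoryMovieDao();
        dao.save(new Movie("m1", "Metropolis"));
        System.out.println(dao.findById("m1").map(m -> m.title).orElse("not found"));
    }
}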



134. Matched text for the word znode:


  1. ZooKeeper provides commands to access or update Znode from its command-line tool; type help for more information. HBase Administration Cookbook


  2. This command will create a znode in ZooKeeper where the automatic failover system will store the data. Hadoop Operations and Cluster Management Cookbook


  3. Outputs information that ZooKeeper has about the HBase install; shows the parent znode. HBase in Action


  4. When the master notices that a regionserver is no longer reachable, usually because the server’s znode has expired in ZooKeeper, it splits the dead regionserver’s commit log by region. Hadoop:The Definitive Guide 3rd Edition


  5. Because of the distributed nature of these events, the servers are using ZooKeeper to track specific states in a dedicated znode. HBase:The Definitive Guide


  6. The main differences between ZooKeeper and standard filesystems are that every znode can have data associated with it and znodes are limited to the amount of data that they can have. Apache Kafka


  7. ZooKeeper holds a zNode with the physical address of the Primary AvatarNode for a given cluster. Apache Hadoop Goes Realtime at Facebook


  8. The absence of data often conveys important information about a znode. ZooKeeper Distributed Process Coordination
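
A hedged sketch of creating and reading a znode with the plain ZooKeeper client API. It assumes a ZooKeeper server is listening on localhost:2181 and that the path /demo-config (an illustrative name) does not already exist.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Creates a persistent znode carrying a small payload, then reads it back.
public class ZnodeDemo {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, event -> {});
        String path = zk.create("/demo-config", "v1".getBytes("UTF-8"),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        byte[] data = zk.getData(path, false, null); // every znode can carry data
        System.out.println(path + " = " + new String(data, "UTF-8"));
        zk.close();
    }
}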



135. Matched text for the word netty:


  1. Under the covers, Akka uses JBoss Netty and Google Protocol Buffers libraries to make the remoting happen seamlessly. Programming Concurrency on the JVM


  2. Netty provides performance and scalability, leaving you free to focus on what really interests you - your unique application! Netty in Action MEAP V08


  3. While the sort benchmark would typically benefit from improvements to the HDFS data path, both benchmarks perform better on YARN primarily due to significant improvements in the MapReduce runtime itself: map-side sort improvements, a reduce client that pipelines and batches transfers of map output, and a server-side shuffle based on Netty [3] Apache Hadoop YARN Yet Another Resource Negotiator


  4. Think of the Google protocol buffer as XML but smaller and faster, and Netty as a non-blocking I/O (NIO) implementation, which allows Akka to efficiently use threads for I/O operations. Scala in Action


  5. MapReduce 2 uses Netty, which by default allows up to twice as many threads as there are processors.) Hadoop:The Definitive Guide 3rd Edition



136. Matched text for the word localhost:


  1. In this case, we’ve used the localhost address and the default CouchDB port number, 5984 Getting Started with CouchDB


  2. By default, it tries to talk to the gmond running on localhost, but you can specify another running gmond by specifying -i. Monitoring with Ganglia


  3. Some common network configurations cause the replication test to fail when accessed via the localhost address. CouchDB The Definitive Guide


  4. Starting the application with this configuration will expose the URL http://localhost:9090/todo. Gradle in Action


  5. Gradle will keep running, and at the end, we see that the application is running at http://localhost:8080/web. Gradle Effective Implementation Guide


  6. If you’re set on using localhost, disabling IPv6 will also fix this issue. Beginning CouchDB


  7. If you would prefer to use cURL, first create the catalog-a database: curl -X PUT http://localhost:5984/catalog-a. Scaling CouchDB


  8. If you didn't install Elasticsearch on your local machine, you should be able to reach it by replacing “localhost” with the IP address of the remote machine. Elasticsearch in Action MEAP V07 Chapter 1


  9. That’s because this cluster of one node at localhost is set up for you by default. Cassandra The Definitive Guide


  10. Note that all the artifacts are being downloaded from the localhost instead of from a remote Maven repository. ActiveMQ in Action
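
A minimal sketch of talking to a service bound to localhost, here the default CouchDB port 5984 that several examples above mention; it assumes something is actually listening on that port, and any local HTTP service works the same way.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sends a GET to http://localhost:5984/ and prints the first line of the response.
public class LocalhostDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:5984/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            System.out.println(in.readLine()); // CouchDB answers with a small JSON welcome document
        } finally {
            conn.disconnect();
        }
    }
}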



137. Matched text for the word iteration:


  1. Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. Pregel  -  A System for Large-Scale Graph Processing


  2. Metrics submitted using a “positive” slope generate RRDs that are created as COUNTERs; however, this mechanism is not ideal for situations involving incrementing values that submit on each iteration (i.e., Apache httpd page serving counts without log-scraping) Monitoring with Ganglia


  3. We’ll build the first iteration of the example application without using show or list, because writing Ajax queries against the JSON API is a better way to teach CouchDB as a database. CouchDB The Definitive Guide


  4. Groovy automatically exposes an implicit variable named it to indicate the loop iteration index. Gradle in Action


  5. Part of this long iteration phase was due to a slow realization that Spanner should do more than tackle the problem of a globallyreplicated namespace, and should also focus on database features that Bigtable was missing. Spanner - Google’s Globally-Distributed Database


  6. This will occur, of course, for every iteration of the loop, so each previous owner’s mileage will be added to the total. Beginning CouchDB


  7. For example, the most recent iteration of the eBay architecture is mostly Java, but the search engine is in C++ Cassandra The Definitive Guide


  8. For the third iteration, we did extensive experiments with two major styles of execution: LLVM based native code generation with row major block based intermediate data and column major vector based processing with columnar intermediate storage. Tenzing - A SQL Implementation On The MapReduce Framework


  9. If we know that each partition is sorted and contains the same key range, simple iteration logic can do the required matching. Hadoop Beginners Guide


  10. The iteration is stopped once no chunk with more rows than a given threshold, e.g., 50’000, exists. PowerDrill - Processing a Trillion Cells per Mouse Click



138. Matched text for the word pom:


  1. The binary package contains the same Maven POM file as the source archive so you can always recompile the code even if you start with the binary distribution. Apache Flume Distributed Log Collection for Hadoop


  2. Even though it wasn’t part of the core Maven project, a similar approach was proposed by the project Maven Polyglot that allows you to write your build definition logic, which is the project object model (POM) file, in Groovy, Ruby, Scala, or Clojure. Gradle in Action


  3. Uploading to a Maven repository If we want to upload to a Maven repository, we must create a Maven POM (Project Object Model) file. Gradle Effective Implementation Guide


  4. If you would like the code examples to work with ElasticSearch 1.0.0Beta version, you should modify ElasticSearch version in the pom.xml file. Mastering ElasticSearch


  5. A pom file contains information on the project, including dependencies and plugin configuration information. Cassandra High Performance Cookbook


  6. Each Maven coordinate, whether it corresponds to any code, source, or JavaDocs at all, always describes an XML descriptor file called a POM (Project Object Model) Gradle Beyond The Basics


  7. The following is the relevant part of the Maven Project Object Model (POM): Hadoop:The Definitive Guide 3rd Edition


  8. Building with Maven The original authors of Cassandra apparently didn’t care much for Maven, so the early releases did not include any Maven POM file. Cassandra The Definitive Guide


  9. Using Maven, you’ll need to add the following dependency to the pom.xml file: ActiveMQ in Action


  10. The Maven POM and Gradle Build With the big picture established, let’s take a piecewise approach to comparing Maven and Gradle. Building and Testing with Gradle



139. Matched text for the word ldap:


  1. We rely on the web server to provide authentication, so any Apache authentication system (htpasswd, LDAP, etc.) is supported. Monitoring with Ganglia


  2. The authentication system is pluggable, so you can integrate with existing services to authenticate users to CouchDB using an HTTP layer, using LDAP integration, or through other means. CouchDB The Definitive Guide


  3. Can you tie in your organization’s Lightweight Directory Access Protocol (LDAP) directory and user groups to Hadoop’s permissions structure? Professional Hadoop Solutions


  4. The classes themselves typically interact with the operating system, using system calls to authenticate the user via Solaris' NIS or NIS+, the Windows NT login service, an LDAP server, or whatever other authentication system is available on the platform. Java Security 2nd Edition


  5. The chapter also has a brief introduction to hierarchical databases (such as LDAP directories) and the JNDI API. Core Java Volume II Advanced Features 9th Edition


  6. A CertStore, however, may represent a large public database (in the form of an LDAP server, for example) of untrusted certificates. Java in a Nutshell 5th Edition


  7. Providing Your Own Authentication You can provide your own method of authenticating to Cassandra if you want to make special requirements, such as a Kerberos ticket or encryption, or if you want to store passwords in a different location, such as an LDAP directory. Cassandra The Definitive Guide


  8. Consider the table Employee with fields emp id, ldap user, name, dept id, and salary. Tenzing - A SQL Implementation On The MapReduce Framework


  9. The KDC contains a database of principals and their keys, very much like /etc/passwd and some KDC implementations (including MIT Kerberos) support storing this data in centralized systems like LDAP. Hadoop Operations


  10. Fortunately, ActiveMQ comes with implementations of some modules that can authenticate users using properties files, LDAP, and SSL certificates, which will be enough for many use cases. ActiveMQ in Action
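
A hedged sketch of binding (logging in) to an LDAP directory through JNDI, the kind of integration the examples describe. The URL, DN and password are placeholders for a directory you would have to run yourself.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

// Performs a simple bind against a (placeholder) LDAP server; the constructor throws if the bind fails.
public class LdapBindDemo {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:389");                       // placeholder server
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "uid=jdoe,ou=people,dc=example,dc=com"); // placeholder DN
        env.put(Context.SECURITY_CREDENTIALS, "secret");                             // placeholder password

        DirContext ctx = new InitialDirContext(env);
        System.out.println("bound successfully");
        ctx.close();
    }
}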



140. Matched text for the word nosql:


  1. NoSQL design space: we partition the datastore and replicate each partition separately, providing full ACID semantics within partitions, but only limited consistency guarantees across them. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. The uncompressed dataset is tens of terabytes, which is small compared to many NoSQL instances, but was large enough to cause difficulties with sharded MySQL. Spanner - Google’s Globally-Distributed Database


  3. Can you have a scalable database without going NoSQL?  Yes. F1 - The Fault-Tolerant Distributed RDBMS


  4. His background includes a wide range of programming languages (including Java and C#), search engines (including Lucene), databases, and NoSQL distributed data stores. Cassandra High Performance Cookbook


  5. Or, you can index data from your NoSQL data-store, to add search capabilities there. Elasticsearch in Action MEAP V07 Chapter 1


  6. It has become the de factostandard for exchanging data in the NoSQL world. Elasticsearch The Definitive Guide


  7. Apache Hadoop is a NoSQL applications framework that runs on distributed clusters. Hadoop in Action


  8. It’s a picture that makes some NoSQL solutions seem perhaps less radical and less scary than we may have thought at first, and more like a natural expression and encapsulation of some of the work that was already being done to manage very large databases. Cassandra The Definitive Guide


  9. He has worked with flat files, indexed files, hierarchical databases, network databases, relational databases, NoSQL databases, Hadoop, and related technologies. Hadoop Beginners Guide


  10. These systems can also be grouped with the NoSQL databases (MongoDB [27], to name one of many): both offer higher performance and scale better than traditional databases, but provide weaker semantics. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



141. Matched text for the word udp:


  1. Syslog has an older UDP transport as well as a newer TCP protocol that can handle data larger than a single UDP packet can transmit (about 64k), as well as deal with network-related congestion events that might require the data to be retransmitted. Apache Flume Distributed Log Collection for Hadoop


  2. Your operating system and network aren’t hostile to multicast and/or User Datagram Protocol (UDP) Monitoring with Ganglia


  3. The UDP port for reporting is referred to as PORT, and for simplicity, the multicast port will be identical to PORT. Pro Hadoop


  4. The second and third way allows us to send many documents using the bulk API and the UDP bulk API. Mastering ElasticSearch


  5. Getting ready Syslog is a simple text-based protocol designed to transfer log messages over UDP. Cassandra High Performance Cookbook


  6. All system control messages rely on UDP based messaging while the application related messages for replication and request routing relies on TCP. Cassandra A Decentralized Structured Storage System


  7. Flume also supports TCP/UDP data sources that can receive logs from syslog. Hadoop in Practice


  8. We were forced to send KeepAlive RPCs via UDP rather than TCP; UDP has no congestion avoidance mechanisms, so we would prefer to use UDP only when high-level timebounds must be met. Chubby - The lock service for loosely-coupled distributed systems


  9. Flume NG also supports a sequence generator source (mostly for testing) as well as both TCP and UDP variants of a source that reads syslogd data. Hadoop Beginners Guide


  10. The same applies to appropriate ActiveMQ connectors, so a UDP connector can offer some performance advantages over the TCP connector, but it’s still not often used because of the unreliability it introduces (as explained in more detail later) ActiveMQ in Action



142. Matched text for the word rpc:


  1. This file is organized into different contexts: jvm, rpc, hdfs, mapred, and hbase. Monitoring with Ganglia


  2. Thrift's code generation takes the interface file as input and generates network RPC clients in many languages. Cassandra High Performance Cookbook


  3. What has not changed: Latency of an RPC Disk drive throughput and seek latency. Storage Architecture and Challenges


  4. True application-level transparency, possibly our most challenging design goal, was achieved by restricting Dapper’s core tracing instrumentation to a small corpus of ubiquitous threading, control flow, and RPC library code. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  5. We omit details that are covered elsewhere in the literature, such as the details of a consensus protocol or an RPC system. Chubby - The lock service for loosely-coupled distributed systems


  6. We enhanced the Hadoop RPC software to automatically determine the version of the software running on the server that it is communicating with, and then talk the appropriate protocol while talking to that server. Apache Hadoop Goes Realtime at Facebook


  7. This directory includes the Thrift and Avro RPC libraries for interacting with Cassandra. Cassandra The Definitive Guide


  8. Loose coupling is commonly introduced into an architecture to mitigate the classic tight coupling of Remote Procedure Calls (RPC) ActiveMQ in Action


  9. Using a scan reduces the number of RPCs executed by the benchmark since a single RPC fetches a large sequence of values from a tablet server. Bigtable - A Distributed Storage System for Structured Data


  10. To save RPC overhead (at the cost of increasing transaction latency) each Percolator worker batches timestamp requests across transactions by maintaining only one pending RPC to the oracle. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



143. Matched text for the word dsl:


  1. A build tool that carries on this idea is Gant, a DSL on top of Ant written in Groovy. Gradle in Action


  2. Build logic is described with a powerful DSL and empowers developers to create reusable and maintainable build logic. Gradle Effective Implementation Guide


  3. Chapter 2, Power User Query DSL, describes how Apache Lucene scoring works, why ElasticSearch rewrites queries, and how query rescore mechanism works. Mastering ElasticSearch


  4. Although entire books could be written about every DSL for Hadoop, this section gives you a quick “taste” of some of them to show how this growing area of Hadoop can lower the barriers to the learning curve of Hadoop for its users. Professional Hadoop Solutions


  5. You can do many of the same things through the Gradle DSL as described previously, but you might prefer in some cases to drop back to direct manipulation. Gradle Beyond The Basics


  6. HBase-DSL This project gives you dedicated classes that help when formulating queries against an HBase cluster. HBase:The Definitive Guide


  7. Inverted index Analysis and analyzers Built-in analyzers When analyzers are used Testing analyzers Specifying analyzers Mapping Core simple field types Viewing the mapping Customizing field mappings Updating a mapping Testing the mapping Complex core field types Multi-value fields Empty fields Multi-level objects Mapping for inner objects How inner objects are indexed Arrays of inner objects Empty search Query DSL Structure of a query clause Combining multiple clauses Queries and Filters Performance differences When to use which Most important queries and filters Combining queries with filters Filtering a query Just a filter A query as a filter Validating queries Understanding errors Understanding queries Conclusion Sorting Sorting by field values Multi-level sorting Sorting on multi-value fields String sorting and multi-fields What is relevance? Understanding the score. Elasticsearch The Definitive Guide


  8. Camel can use either a Java-based domain-specific language (DSL), or Scala DSL, or an XML-based DSL to define routes. ActiveMQ in Action


  9. A Clojure DSL for Cascading that provides additional functionality inspired by Datalog for data processing and query abstractions. Programming Hive


  10. Gradle provides a way of modeling a build system using a DSL with convention over configuration, yet allowing an enterprise to define its own model. Building and Testing with Gradle



144. Matched text for the word acl:


  1. Grant full privileges to admin users by uncommenting the following line in the kadm5.acl file: HBase Administration Cookbook


  2. You will also discover how to view the access control list (ACL) of a file and how to set Unix permissions on a file. Pro Java 7 NIO.2


  3. The default attribute defines the default policy for the entire ACL. Monitoring with Ganglia


  4. We will show you how to configure ACL for authorization and Kerberos for authentication, configure NameNode HA, recover from a failed NameNode, and so on. Hadoop Operations and Cluster Management Cookbook


  5. You can refresh the service-level authorization for the NameNode and JobTracker without restarting the Hadoop daemons using the -refreshServiceAcl command switch on the dfsadmin (for NameNode) and mradmin (for JobTracker) executables. Professional Hadoop Solutions


  6. The format for an ACL is a comma-separated list of usernames, followed by whitespace, followed by a comma-separated list of group names. Hadoop:The Definitive Guide 3rd Edition


  7. After the agent has started, modifying the ACL file has no further effect. Oracle JRockit The Definitive Guide


  8. Each node has various meta-data, including three names of access control lists (ACLs) used to control reading, writing and changing the ACL names for the node. Chubby - The lock service for loosely-coupled distributed systems


  9. Notice that the acl view is not supported and a unix and posix view are supported. Java 7 New Features Cookbook


  10. The queue has an access control list (ACL) associated with it that defines which users and groups may submit jobs, but also which users may administer the queue. Hadoop Operations
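
A hedged NIO.2 sketch of reading a file's access control list, the Java-level sense of ACL that a couple of the examples touch on. The file name example.txt is a placeholder, the file must exist, and on filesystems without ACL support the view is null.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.AclEntry;
import java.nio.file.attribute.AclFileAttributeView;

// Prints each ACL entry (principal, type, permissions) of a file, if the filesystem supports ACLs.
public class AclDemo {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get("example.txt"); // placeholder file
        AclFileAttributeView view = Files.getFileAttributeView(path, AclFileAttributeView.class);
        if (view == null) {
            System.out.println("ACL view not supported on this filesystem");
            return;
        }
        for (AclEntry entry : view.getAcl()) {
            System.out.println(entry.principal() + " " + entry.type() + " " + entry.permissions());
        }
    }
}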



145. Matched text for the word optimizer:


  1. What happens if the virtual machine loads another subclass that overrides an inlined method? Then the optimizer must undo the inlining. Core Java Volume I Fundamentals 9th Edition


  2. The FlumeJava optimizer transforms a user-constructed, modular FlumeJava execution plan into one that can be executed efficiently. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  3. These analytics are running in batch and are taking several minutes already, so what does a minute or two more really matter? In some cases, the query plan optimizer in Pig or Hive will be better at optimizing your code than you are! In a small fraction of situations, the extra few minutes added by Pig or Hive will matter, in which case you should use Java MapReduce. MapReduce Design Patterns


  4. A default optimizer is a cost-based optimizer, but additional custom optimizers can be introduced, based on the open APIs provided by Drill. Professional Hadoop Solutions


  5. The compiler generates a logical plan that is then optimized through a simple rule based optimizer. Hive A Petabyte Scale Data Warehouse Using Hadoop


  6. Aurora’s efficiency comes from a carefully designed run-time system and a query optimizer, rather than Sawzall’s brute force parallel style. Interpreting the Data Parallel Analysis with Sawzall


  7. Hive currently uses a rule-based query optimizer for determining how to execute a query, but it’s likely that in the future a cost-based optimizer will be added. Hadoop:The Definitive Guide 3rd Edition


  8. The Tenzing optimizer applies some basic rule and cost based optimizations to create an optimal execution plan. Tenzing - A SQL Implementation On The MapReduce Framework


  9. Hive also provides a “hint” mechanism to tell the query optimizer which table should be streamed: Programming Hive


  10. Parallel databases use an optimizer tailored for distributed workloads that turn SQL commands into a query plan whose execution is divided equally among multiple nodes. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



146. Matched text for the word mappings:


  1. The last thing is the character mappings, which is used before tokenizer and is responsible for processing text before any analysis is done. Mastering ElasticSearch


  2. A tablet is similar to Bigtable’s tablet abstraction, in that it implements a bag of the following mappings: Spanner - Google’s Globally-Distributed Database


  3. Moreover, when dealing with CSV or tab-separated data, a slightly misaligned INT or FLOAT type mapping in your Hive table declaration, where the data has a STRING, could lead to NULL mappings per row. Hadoop RealWorld Solutions Cookbook


  4. If we provide no artifact or Ivy mappings (as in Example 4-21), Gradle assumes that the Ivy repo uses the default Maven structure. Gradle Beyond The Basics


  5. Datanodes must send block reports to both namenodes because the block mappings are stored in a namenode’s memory, and not on disk. Hadoop:The Definitive Guide 3rd Edition


  6. Inverted index Analysis and analyzers Built-in analyzers When analyzers are used Testing analyzers Specifying analyzers Mapping Core simple field types Viewing the mapping Customizing field mappings Updating a mapping Testing the mapping Complex core field types Multi-value fields Empty fields Multi-level objects Mapping for inner objects How inner objects are indexed Arrays of inner objects Empty search Query DSL Structure of a query clause Combining multiple clauses Queries and Filters Performance differences When to use which Most important queries and filters Combining queries with filters Filtering a query Just a filter A query as a filter Validating queries Understanding errors Understanding queries Conclusion Sorting Sorting by field values Multi-level sorting Sorting on multi-value fields String sorting and multi-fields What is relevance? Understanding the score. Elasticsearch The Definitive Guide


  7. To resolve this problem, we chose to group name entries into batches so that a single lookup would return and cache the name mappings for a large number (typically 100) of related processes within a job. Chubby - The lock service for loosely-coupled distributed systems


  8. Is it certain that we’ve done anything but move the problem here? Of course, with some systems, such as those that make extensive use of document exchange, as with services or XML-based applications, there are not always clear mappings to a relational database. Cassandra The Definitive Guide


  9. For any Java Map object, its contents are a set of mappings from a given key of a specified type to a related value of a potentially different type. Hadoop Beginners Guide


  10. One of the most common types of scripts is one that uses a CSV file of machine to rack mappings. Hadoop Operations



147. Matched text for the word runnable:


  1. Place the code for the task into the run method of a class that implements the Runnable interface. Core Java Volume I Fundamentals 9th Edition


  2. Because Ganglia is an open source project, it is possible to compile a runnable binary executable of the gmond agent on virtually any platform with a C compiler. Monitoring with Ganglia


  3. In Java, independently executable tasks must implement interface Runnable and define a run method. A Java ForkJoin Framework


  4. A class that we will use to periodically check a given URL address; we will develop it as a Runnable interface implementation, because we will use thread executors to run it. Mastering ElasticSearch


  5. In the case of parallel files upload, for every file in the directory, a new runnable class is created and is submitted to the executor. Professional Hadoop Solutions


  6. Also, the scheduler employs a work stealing algorithm that allows some task scheduling flexibility to prevent nodes from running idle when runnable tasks are waiting. Incoop MapReduce for Incremental Computations


  7. In the old API this is possible for mappers by writing a MapRunnable, but no equivalent exists for reducers. Hadoop:The Definitive Guide 3rd Edition


  8. You can see the count for each possible thread state, including new, runnable, blocked, and so on. HBase:The Definitive Guide


  9. The CPU load average in Linux is the average number of processes in a runnable state on a machine, over a rolling window of time. Hadoop Operations


  10. The classes that can support Runnable typically have nothing in common except that they contain a run method. Concurrent Programming in Java Design Principles and Pattern
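
A minimal sketch of the pattern the examples describe: the task's code lives in a Runnable's run method and is handed to a Thread (or an executor) to execute.

// Runs one task on a separate thread via the Runnable interface.
public class RunnableDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println("running in " + Thread.currentThread().getName());
        Thread worker = new Thread(task, "worker-1");
        worker.start();
        worker.join(); // wait for the task to finish
    }
}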



148. Matched text for the word init:


  1. If not, you can always start or stop using the init scripts. Getting Started with CouchDB


  2. The metric_init callback function is the first of three functions that must be defined and implemented in every gmond metric module. Monitoring with Ganglia


  3. SysV/BSD-Style Systems Depending on your operating system, the couchdb daemon script could be installed into a directory called init.d (for SysV-style systems) or rc.d (for BSD-style systems) under. CouchDB The Definitive Guide


  4. Target named init that used task mkdir to create directory build. Gradle in Action


  5. We create a new Git repository in the current project directory, with the init command in Git: Gradle Effective Implementation Guide


  6. As a result, you do not have to run a list of targets sequentially; the dist target will run its dependents such as compile and init and their dependencies in proper order. Cassandra High Performance Cookbook


  7. Finally, you'll have your init script as a symbolic link to the “elasticsearch” service wrapper script: Elasticsearch in Action MEAP V07 Chapter 1


  8. You’ll need sudo access for your user to run this command (it launches the Hadoop services via init.d scripts): Hadoop in Practice


  9. You can run it in a command prompt by executing ec2-init.bat. Hadoop in Action


  10. The server starts up with a call to the initServer method of this class, upon which the server registers the SEDA verb handlers, makes some determinations about its state (such as whether it was bootstrapped or not, and what its partitioner is), and registers itself as an MBean with the JMX server. Cassandra The Definitive Guide



149. Matched text for the word multicast:


  1. Look at each instance of gmond within the same network broadcast domain (since reachability is based on multicast packets), and find the udp_recv_channel block in gmond.conf. Apache Flume Distributed Log Collection for Hadoop


  2. Instead of the default UDP multicast communication method we use between Gmonds, we have configured it to use unicast messages. HBase Administration Cookbook


  3. Ganglia’s use of a lightweight multicast listen/announce protocol was influenced by Brent Chun’s early work building a scalable execution environment for clusters. Monitoring with Ganglia


  4. The UDP port for reporting is referred to as PORT, and for simplicity, the multicast port will be identical to PORT. Pro Hadoop


  5. Hadoop supports network communication through both unicast (with normal IP addresses, which is the one we use here) and multicast, which uses multicast addresses, such as 239.2.11.71 Hadoop Operations and Cluster Management Cookbook


  6. The bootstrap process When the ElasticSearch node starts, it uses multicast (or unicast, if configured) to find the other nodes in the same cluster (the key here is the cluster name defined in the configuration) and connect to them. Mastering ElasticSearch


  7. It actively monitors the host for changes, which it will announce using uni- or multicast network messages. HBase:The Definitive Guide


  8. More sophisticated distributed object systems (such as JINI) allow for “multicast” lookup of remote objects that might be on a number of different servers. Core Java Volume II Advanced Features 9th Edition


  9. It uses Kestrel as a loosely coupled, reliable message queue that picks a server at random to write to, so there is no cross-server communication (no clustering, no multicast, etc.) Cassandra The Definitive Guide


  10. Dynamic networks using such protocols as multicast and discovery will also be covered. ActiveMQ in Action
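
A hedged sketch of joining a multicast group and waiting for one announcement, roughly what a Ganglia gmond listener does. 239.2.11.71 is the multicast address quoted in the examples above; 8649 is Ganglia's conventional port, used here only as a placeholder, and the surrounding network must actually permit multicast.

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Joins a multicast group, blocks for one datagram, then leaves the group.
public class MulticastListener {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.2.11.71");
        try (MulticastSocket socket = new MulticastSocket(8649)) {
            socket.joinGroup(group);
            byte[] buffer = new byte[1024];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet); // blocks until a multicast datagram arrives
            System.out.println("received " + packet.getLength() + " bytes from " + packet.getAddress());
            socket.leaveGroup(group);
        }
    }
}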



150. Matched text for the word singleton:


  1. After type erasure there is only one Singleton class, and only one singleInstance field. Core Java Volume I Fundamentals 9th Edition


  2. Implementing the JSONRiverModule class The JSONRiverModule class is responsible for binding the river class and telling ElasticSearch that it should be a singleton. Mastering ElasticSearch


  3. In many scenarios, it is often cleaner and more readable than using blank placeholder values or static singleton instances for output. Hadoop RealWorld Solutions Cookbook


  4. Another difference with String is that Text is mutable (like all Writable implementations in Hadoop, except NullWritable, which is a singleton) Hadoop:The Definitive Guide 3rd Edition


  5. Remember to add a readResolve method to all typesafe enumerations in your legacy code and to all classes that follow the singleton design pattern. Core Java Volume II Advanced Features 9th Edition


  6. When get is called, we could always return the same instance (singleton) or a new instance with each invocation. Getting Started with Google Guava


  7. In the case of a singleton, getInstance takes no parameters and returns the sole instance. Effective Java 2nd Edition


  8. AntiEntropyService implements the Singleton pattern and defines the static Differencer class as well, which is used to compare two trees; if it finds any differences, it launches a repair for the ranges that don’t agree. Cassandra The Definitive Guide


  9. NullWritable: This is a singleton object representation of a null value. Hadoop Beginners Guide


  10. As described in the Design Patterns book, a Singleton class intentionally supports only one instance. Concurrent Programming in Java Design Principles and Pattern
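
A minimal sketch of the Singleton pattern the examples refer to: one shared instance, created lazily and thread-safely via the class-holder idiom. Registry is an illustrative name.

// A class that intentionally supports only one instance.
public class Registry {
    private Registry() {} // no outside instantiation

    private static class Holder { // loaded, and the instance created, on first use
        static final Registry INSTANCE = new Registry();
    }

    public static Registry getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        System.out.println(Registry.getInstance() == Registry.getInstance()); // true
    }
}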



151. Matched text for the word optionally:


  1. Remember if you increase this value you may also have to increase your Java heap space using the -Xmx and optionally the -Xms parameters. Apache Flume Distributed Log Collection for Hadoop


  2. CouchDB can write the JSON document directly, simplifying the writing/update process, while allowing you to optionally enfore a structure on your JSON documents within the database itself if you need it. Getting Started with CouchDB


  3. The addresses may be specified as IP addresses or DNS hostnames and may optionally be suffixed by a colon followed by the port number where the gmond tcp_accept_channel is to be found. Monitoring with Ganglia


  4. An MSCR operation has M input channels (each performing a map operation) and R output channels (each optionally performing a shuffle, an optional combine, and a reduce) FlumeJava - Easy, Efficient Data-Parallel Pipelines


  5. Each item in this array represents an individual view, and each of these views contains a map and, optionally, a reduce function. Beginning CouchDB


  6. Optionally, you could have checked “Continuous” to trigger continuous replication. Scaling CouchDB


  7. Optionally, the original values may also be stored unchanged so that they can be retrieved later. Elasticsearch The Definitive Guide


  8. Optionally, the client may provide a content generation number to allow the client to simulate compare-and-swap on a file; the contents are changed only if the generation number is current. Chubby - The lock service for loosely-coupled distributed systems


  9. For information on how to obtain (and optionally build) the NMS project, please refer to the NMS project site. ActiveMQ in Action


  10. Optionally, an SSTable can be completely mapped into memory, which allows us to perform lookups and scans without touching disk. Bigtable - A Distributed Storage System for Structured Data



152. Matched text for the word failover:


  1. Failover options are also covered to create a more robust data pipeline. Apache Flume Distributed Log Collection for Hadoop


  2. The master can support fast ACID transactions but risks downtime or data loss during failover to a slave. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  3. In this way, every node in a cluster is capable of acting as the primary node or reporting node for the gmetad aggregator in case of a failover situation. Monitoring with Ganglia


  4. Then we’ll cover how to add redundant hot-failover twin nodes, so you don’t have to worry about losing machines. CouchDB The Definitive Guide


  5. You will now need to set up a failover system so that in the event of the primary server crashing, requests are redirected to the secondary server. Beginning CouchDB


  6. This setup typically involves having two or more load balancers sharing the same IP address, with one configured as a failover. Scaling CouchDB


  7. A cluster is a group of nodes with the same cluster.name that are working together to share data and to provide failover and scale, although a single node can form a cluster all by itself. Elasticsearch The Definitive Guide


  8. New process FailOver Controller (ZKFC) responsible for monitoring and failover. Namenode High Availability


  9. Now you have the problem of data replication and consistency during regular usage and in failover scenarios. Cassandra The Definitive Guide


  10. It discusses ActiveMQ URIs and all the transport connectors for both client-to-broker and broker-to-broker communications including TCP, NIO, STOMP, failover, SSL, HTTP/S, and much more. ActiveMQ in Action



153. Matched text for the word parameterized:


  1. However, we expect that most readers will use this knowledge primarily for help with troubleshooting and to satisfy their curiosity about the inner workings of the parameterized collection classes. Core Java Volume I Fundamentals 9th Edition


  2. The Parameterized Trigger plugin extends the functionality of chaining build jobs with the ability to declare parameters for the triggered job. Gradle in Action


  3. Instantiate the MapDriver class parameterized exactly as the mapper under test. Professional Hadoop Solutions


  4. Then we set the parameterized Writable types for the key-value pairs out of the mapper. Hadoop RealWorld Solutions Cookbook


  5. The Plugin interface is type-parameterized because plug-ins can theoretically be applied to any kind of Gradle object. Gradle Beyond The Basics


  6. We are in the process of deploying an adaptive sampling scheme that is parameterized not by a uniform sampling probability, but by a desired rate of sampled traces per unit time. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. The context objects are used for emitting key-value pairs, and so they are parameterized by the output types so that the signature of the write() method is: Hadoop:The Definitive Guide 3rd Edition


  8. This will change how the job is parameterized in the future. HBase:The Definitive Guide


  9. When you write a reusable script, it’s generally parameterized such that you can vary its operation for each run. Hadoop in Action


  10. The get command is using the parameterized execute method, as are other sibling commands for insert and delete (not shown in the example) Cassandra The Definitive Guide
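
A minimal sketch of a parameterized (generic) class, the sense in which most of the examples use the word: the element type is supplied by the caller and checked at compile time. Box is an illustrative name.

import java.util.ArrayList;
import java.util.List;

public class ParameterizedDemo {

    // Box is parameterized by the type variable T.
    static class Box<T> {
        private final List<T> items = new ArrayList<>();
        void add(T item) { items.add(item); }
        T get(int i) { return items.get(i); }
    }

    public static void main(String[] args) {
        Box<String> names = new Box<>(); // Box parameterized with String
        names.add("hadoop");
        String first = names.get(0);     // no cast needed; checked at compile time
        System.out.println(first);
    }
}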



154. Matched text for the word serialized:


  1. As soon as UDP packets on port 1234 are received, gmond-debug will attempt to decode it and print a serialized version of the information contained therein. Monitoring with Ganglia


  2. Updates are serialized, so CouchDB will never attempt to write documents faster than your disk can spin, and it also means that two mutually conflicting writes can’t be written at the same time. CouchDB The Definitive Guide


  3. Internally, super columns must be completely serialized and de-serialized to be accessed. Cassandra High Performance Cookbook


  4. As the columns for a given key are being serialized and written out to disk we generate indices at every 256K chunk boundary. Cassandra A Decentralized Structured Storage System


  5. But when you’re working with big data volumes you’ll need serialization formats with compact serialized forms that natively support partitioning and have schema evolution features. Hadoop in Practice


  6. When an object has been serialized into JSON it is. Elasticsearch The Definitive Guide


  7. In the standard Mapper model, the output key/value pairs are serialized and written to disk, prepared to be shuffled. Hadoop in Action


  8. Message streaming is Cassandra’s optimized way of sending sections of SSTable files from one node to another; all other communication between nodes occurs via serialized messages. Cassandra The Definitive Guide


  9. In some cases, we also use a sorted disk based serialized implementation for the bigger tables to conserve memory. Tenzing - A SQL Implementation On The MapReduce Framework


  10. Messages must be serialized in and out of a byte sequence to be sent over the wire using what’s known as a wire protocol. ActiveMQ in Action
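
A minimal sketch of Java object serialization, the sense used in several of the examples: the object is turned into a byte stream and later reconstructed from it. Event is an illustrative class.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializedDemo {

    static class Event implements Serializable {
        private static final long serialVersionUID = 1L;
        final String host;
        final long timestamp;
        Event(String host, long timestamp) { this.host = host; this.timestamp = timestamp; }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Event("web01", System.currentTimeMillis())); // serialize
        }
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            Event copy = (Event) in.readObject(); // deserialize
            System.out.println(copy.host + " @ " + copy.timestamp);
        }
    }
}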



155. Matched text for the word combiner:


  1. To enable this optimization the user subclasses the Combiner class, overriding a virtual Combine() method. Pregel  -  A System for Large-Scale Graph Processing


  2. In this job configuration file, we specified the job name, the mapper class, the reducer class, the combiner class, the input format, and output format for the job. Hadoop Operations and Cluster Management Cookbook


  3. We allow the user to specify an optional Combiner function that does partial merging of this data before it is sent over the network. MapReduce - Simplied Data Processing on Large Clusters


  4. For Reducers that first combine all the values with a given key using an associative, commutative operation, a separate userdefined Combiner function can be specified to perform partial combining of values associated with a given key during the Map phase. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  5. We will demonstrate the application of a combiner to optimize data transfer overhead between the map and reduce stages. Hadoop RealWorld Solutions Cookbook


  6. To optimize such scenarios, Hadoop supports a special function called combiner. Hadoop MapReduce Cookbook


  7. Summary The simplest ways to cut down on shuffle and sort times is to use a combiner, and to compress your map outputs. Hadoop in Practice


  8. Why do we need to sum the count? We’ve chosen this route because it will be easier for us later if we choose to add a combiner  to enhance performance. Hadoop in Action


  9. Apart from the combiner…maybe There is one additional, and optional, step that we omitted previously. Hadoop Beginners Guide


  10. Unlike Hadoop’s Combiner, Hive serializes partial aggregates into strings instead of maintaining them in their natural binary representation. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
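
A hedged word-count sketch showing where a combiner fits in a Hadoop job: because summing is associative and commutative, the same reducer class can also run as the combiner, pre-aggregating map output before it is shuffled across the network. TokenMapper and SumReducer are illustrative classes (SumReducer mirrors the reducer sketched under entry 132 above), and the input and output paths come from the command line.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CombinerJob {

    // Emits (word, 1) for every token in the input line.
    static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final IntWritable one = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, one);
            }
        }
    }

    // Sums counts; usable both as the combiner and as the final reducer.
    static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count with combiner");
        job.setJarByClass(CombinerJob.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class); // partial sums on the map side
        job.setReducerClass(SumReducer.class);  // final sums on the reduce side
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}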



156、Matching text for the word generics (see the sketch after the examples):


  1. This sounds scary, but if you think about it, the behavior is simply as it was before generics were added to Java. Core Java Volume I Fundamentals 9th Edition


  2. Higher level clients such as Hector bring more abstraction and take more advantage of language features such as Java's generics. Cassandra High Performance Cookbook


  3. Because the entire reason to have generics is to generate compiler. Data Structures and Algorithm Analysis in Java 3rd Edition


  4. After all, why can’t the types be determined from a combination of the mapper and the reducer? The answer has to do with a limitation in Java generics: type erasure means that the type information isn’t always present at runtime, so Hadoop has to be given it explicitly. Hadoop:The Definitive Guide 3rd Edition


  5. Here we are creating ConcurrentMap with String keys and Book objects for the values (specified by the generics on the ConcurrentMap declaration) Getting Started with Google Guava


  6. The most significant set of changes was the addition of generics, enum types, annotations, autoboxing, and the for-each loop in Java 5 Effective Java 2nd Edition


  7. When generics were introduced in Java 1.5, it became easier to write code to address a number of similar problems. Java 7 New Features Cookbook


  8. Although the use of Java generics can make this look a little opaque at first, there is actually not that much going on. Hadoop Beginners Guide


  9. Internally, Hadoop uses many of the features introduced with Java 6, such as generics and concurrency utilities. Hadoop Operations


  10. Note that Java syntax conventions for generics are followed for the collection types. Programming Hive
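
As a quick reference for the excerpts above, a small Java generics sketch showing the compile-time type checking they provide; the method and variable names are made up:

    import java.util.ArrayList;
    import java.util.List;

    public class GenericsDemo {
        // A generic method: T is inferred at the call site, so the same code
        // works for any element type while staying type-safe at compile time.
        static <T> T firstOrDefault(List<T> list, T fallback) {
            return list.isEmpty() ? fallback : list.get(0);
        }

        public static void main(String[] args) {
            List<String> words = new ArrayList<>();
            words.add("hadoop");

            String first = firstOrDefault(words, "none");  // no cast needed
            System.out.println(first);

            // words.add(42);  // would not compile: 42 is not a String
        }
    }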



157、Matching text for the word compaction:


  1. Status Displays a list of the running background tasks on the server, including view index building, compaction, and replication. Getting Started with CouchDB


  2. This is the number of Stores in the RegionServer that have been targeted for compaction. Monitoring with Ganglia


  3. Cassandra has numerous counters and the ability to trigger operations such as clearing the Key Cache or disabling compaction over JMX. Cassandra High Performance Cookbook


  4. This process is very similar to the compaction process that happens in the Bigtable system. Cassandra A Decentralized Structured Storage System


  5. It guides you through the process of creating a database and working with data; covers more complex topics such as views and map/reduce; and explores advanced database concepts such as replication, compaction, and deployment. Beginning CouchDB


  6. Run database compaction, view compaction, and view cleanup when the database is not under heavy load. Scaling CouchDB


  7. Read performance is correlated with the number of files in a region and thus critically hinges on a well-tuned compaction algorithm. Apache Hadoop Goes Realtime at Facebook


  8. During a major compaction, the server initiates a TreeRequest/ TreeReponse conversation to exchange Merkle trees with neighboring nodes. Cassandra The Definitive Guide


  9. HBase regularly rewrites its data files (during a major compaction), which “fixes” block distribution and data locality anyway. Hadoop Operations


  10. This minor compaction process has two goals: it shrinks the memory usage of the tablet server, and it reduces the amount of data that has to be read from the commit log during recovery if this server dies. Bigtable - A Distributed Storage System for Structured Data



158、Matching text for the word datasets:


  1. From a practical point of view, B-trees, therefore, guarantee an access time of less than 10 ms even for extremely large datasets. CouchDB The Definitive Guide


  2. This paper describes a system called Dremel1 that supports interactive analysis of very large datasets over shared clusters of commodity machines. Dremel - Interactive Analysis of WebScale Datasets


  3. As HDFS scaled, more compute clusters could be allocated on top of it, creating a virtuous cycle of increased user density over more datasets, leading to new insights. Apache Hadoop YARN Yet Another Resource Negotiator


  4. With its distributed storage and compute capabilities, Hadoop is fundamentally an enabling technology for working with huge datasets. Hadoop in Practice


  5. These use cases would have workloads that are challenging to scale because of very high write throughput, massive datasets, unpredictable growth, or other patterns that may be difficult or suboptimal in a sharded RDBMS environment. Apache Hadoop Goes Realtime at Facebook


  6. Before explaining how to use the contrib package, let’s go through all the major steps in a repartitioned sort-merge join of our toy datasets. Hadoop in Action


  7. In addition, Tenzing has metadata discovery mechanisms built-in to simplify importing datasets into Tenzing. Tenzing - A SQL Implementation On The MapReduce Framework


  8. This data locality optimization is a huge reason behind Hadoop's ability to efficiently process such large datasets. Hadoop Beginners Guide


  9. Highly tuned and performant systems have evolved that provide users with the possibility of answering ad hoc queries over large datasets in an interactive manner. PowerDrill - Processing a Trillion Cells per Mouse Click


  10. This trend is beginning to reverse itself, however, as the enormous datasets like those Percolator is intended to process become far too large for a single shared-memory machine to handle. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



159、Matching text for the word daemons:


  1. It consisted of a federation of worker daemons (agents) configured from a centralized master (or masters) via Zookeeper (a federated configuration and coordination system) Apache Flume Distributed Log Collection for Hadoop


  2. As depicted in Figure 1-1, Ganglia is architecturally composed of three daemons: gmond, gmetad, and gweb. Monitoring with Ganglia


  3. Mac OS X has a service management framework called launchd that can be used to start, stop, or manage system daemons. CouchDB The Definitive Guide


  4. Along the way, we’ll touch on topics like compiler daemons and joint compilation. Gradle in Action


  5. For example, for a small to medium-sized cluster with up to a hundred slave nodes, the NameNode, JobTracker, and SecondaryNameNode daemons can be put on the same master machine. Hadoop Operations and Cluster Management Cookbook


  6. Daemons apply a variant of Marzullo’s algorithm [27] to detect and reject liars, and synchronize the local machine clocks to the nonliars. Spanner - Google’s Globally-Distributed Database


  7. For users seeking real-time data, the Dapper user interface is capable of communicating directly with Dapper daemons on each production machine. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  8. A reasonable question may be, why not split the Hadoop daemons onto separate hosts? If you were to do this, you would lose out on data locality (the ability to read from local disk), which is a key distributed system property of. Hadoop in Practice


  9. These daemons have specific roles; some exist only on one server, some exist across multiple servers. Hadoop in Action


  10. Daemons There are three daemons that make up a standard HDFS cluster, each of which serves a distinct role, shown in Table 2-1 Hadoop Operations



160、Matching text for the word foo:


  1. Here is how you might send a file named foo.log into the Flume agent we previously configured: Apache Flume Distributed Log Collection for Hadoop


  2. If there are parameters in the query string (in this case corresponding to ?foo=bar), they will be parsed and available as a JSON object at req.query. CouchDB The Definitive Guide


  3. Loading sample data for the following keywords: foo, bar, baz. Cassandra High Performance Cookbook


  4. The command reads the /data/foo.txt file and prints it to the screen, just like the cat command in Unix system. Hadoop MapReduce Cookbook


  5. If application owners choose to augment the trace with their own annotations (like the “foo” annotation in the figure), these are also recorded with the rest of the span data. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  6. This will copy the /foo directory (and its contents) from the first cluster to the /bar directory on the second cluster, so the second cluster ends up with the directory structure /bar/foo. Hadoop:The Definitive Guide 3rd Edition


  7. The exact value "Foo" is not the same as the exact value "foo" Elasticsearch The Definitive Guide


  8. The second component (foo) is the name of a Chubby cell; it is resolved to one or more Chubby servers via DNS lookup. Chubby - The lock service for loosely-coupled distributed systems


  9. Users of HDFS are, in many cases, unaware of the block boundaries of files, so the fact that block two of file foo was accessed last week is of little value. Hadoop Operations


  10. Now you can send some messages to the TEST.FOO destination. ActiveMQ in Action



161、Matching text for the word filename:


  1. Path and filename Each time Flume starts a new file at hdfs.path in HDFS to write data into, the filename is composed of hdfs.filePrefix, a period character, the epoch timestamp the file was started, and optionally, a file suffix specified by the hdfs.fileSuffix property (if set) Apache Flume Distributed Log Collection for Hadoop


  2. Until the file is saved with a filename, you can’t edit the contents, only change the contents of the unsaved document. Getting Started with CouchDB


  3. The filename must contain _report.json in it to be considered by the web UI. Monitoring with Ganglia


  4. After assembling the JAR file, you’ll notice that the version number has been added to the JAR filename. Gradle in Action


  5. Next, we define a new file collection with a filter that contains all files that have the filename extension .txt. Gradle Effective Implementation Guide


  6. You can delete attachments using the small delete icon to the left of the attachment filename. Beginning CouchDB


  7. To rename files using regular expressions, we can simply provide a source regex and a destination filename. Gradle Beyond The Basics


  8. If you want to change the location of the logs directory, just find the following entry in the same log4j.properties file and choose a different filename: Cassandra The Definitive Guide


  9. BrokerXmlConfig is the filename which is assumed to be on the classpath unless a URL is specified. ActiveMQ in Action


  10. Notice not only that the Java code has been compiled and its class file placed in a directory we didn’t have to configure, but also that test report files have been generated (ready to receive actual unit test results when we write tests later), and a JAR has been built using the project directory name as the filename. Building and Testing with Gradle



162、Matching text for the word scripting:


  1. The advantages and disadvantages of each are, for the most part, the same advantages and disadvantages of the C/C++ languages versus the Python scripting language. Monitoring with Ganglia


  2. We think this is a viable model for many document-oriented applications, and it makes a great way to teach CouchDB, because we can easily assume that all of you have a browser at hand without having to ensure that you’re familiar with a particular server-side scripting language. CouchDB The Definitive Guide


  3. Conditional modifications using scripting Sometimes it is convenient to add some additional logic when modifying a document and that's why ElasticSearch allows us to use scripting along with the update API. Mastering ElasticSearch


  4. The next recipe, Scripting a multiple instance installation does this process with a single script. Cassandra High Performance Cookbook


  5. He also maintains a blog on his web site at http://www.joelennon.ie, where he has often posted tutorials about topics ranging from Oracle database development to Windows batch scripting. Beginning CouchDB


  6. The query language, Sawzall, operates at about the level of a type-safe scripting language. Interpreting the Data Parallel Analysis with Sawzall


  7. Scripting – when you need more (TODO): scripting in Elasticsearch, the default scripting language (mvel), other scripting languages, script fields, Query DSL sorting, scripting and facets. Elasticsearch The Definitive Guide


  8. Building from Source Cassandra uses Apache Ant for its build scripting language and the Ivy plug-in for dependency management. Cassandra The Definitive Guide


  9. The CLI provides more power such as complex scripting and is used mostly by power users. Tenzing - A SQL Implementation On The MapReduce Framework


  10. We’ll start by exploring the STOMP (Streaming Text Orientated Messaging Protocol) protocol, which due to its simplicity plays an important role in messaging for scripting languages. ActiveMQ in Action



163、Matching text for the word mac:


  1. You can also find a ready-to-use installation, CouchDBX, that does not require the command-line process of HomeBrew or MacPorts. Getting Started with CouchDB


  2. Refer to the following instructions, which work for the latest Mac OS X Lion. Monitoring with Ganglia


  3. Homebrew Homebrew is a recent addition to the software management tools on Mac OS X. CouchDB The Definitive Guide


  4. Because he didn’t go through the motions of initially setting up Gradle, he’s also concerned about potential differences between setting up Gradle on his Windows machine versus installing it on a Mac. Gradle in Action


  5. On a Microsoft Windows operating system, the semicolon (;) is used as a path separator, and in Linux or Mac OS X operating systems, the colon (:) is used. Gradle Effective Implementation Guide


  6. Next, I provide you with a step-by-step guide to installing CouchDB on various distributions of the Linux operating system as well as on a Mac, before guiding you through the creation of your first database. Beginning CouchDB


  7. In fact, MacBook Pro seems to be the laptop of choice among Hadoop developers, as they’re ubiquitous in Hadoop conferences and user group meetings. Hadoop in Action


  8. The scheme Cassandra uses is a version one UUID, which means that it is generated based on conflating the computer’s MAC address and the number of 100-nanosecond intervals since the beginning of the Gregorian calendar. Cassandra The Definitive Guide


  9. Windows and Mac OS X are popular choices for developers. Hadoop Beginners Guide


  10. This client runs on Mac OS X and speaks many different protocols, including XMPP. ActiveMQ in Action



164、Matching text for the word browsers:


  1. It is intended for any mobile browsers supported by the jQueryMobile toolkit. Monitoring with Ganglia


  2. Since it is extremely useful to be able to test CouchDB from a browser, CouchDB sends a text/plain content type, so all browsers will display the JSON as text. CouchDB The Definitive Guide


  3. You need a tool that automates bringing up the browser, manipulates the data object model (DOM) of the web page, and supports running these tests against different browsers. Gradle in Action


  4. Some modern web browsers include support for native JSON encoding and decoding. Beginning CouchDB


  5. If that application is a web application, then it has clients connecting to it through web browsers. Scaling CouchDB


  6. This server is important both for easing the transition from DNS names to Chubby names, and to accommodate existing applications that cannot be converted easily, such as browsers. Chubby - The lock service for loosely-coupled distributed systems


  7. Note that the meta headers are intended for consumption by browsers and crawlers and not visible when the page is rendered. Hadoop in Action


  8. The Web UI, with easier-to-use features such as query & table browsers and syntax highlighting, is geared towards novice and intermediate users. Tenzing - A SQL Implementation On The MapReduce Framework


  9. This kind of URI is used by web browsers to identify websites. ActiveMQ in Action


  10. Geb and EasyB Geb can be thought of as the furthest reaches of test automation on the JVM with Groovy-based control of web browsers for testing what is primarily thought of as a task for Selenium/WebDriver. Building and Testing with Gradle



165、Matching text for the word ec2:


  1. This issue is also relevant when monitoring dynamic environments such as cloud resources (for example, Amazon EC2) Monitoring with Ganglia


  2. We will explain steps to register, connect, and start VM instances on EC2 Hadoop Operations and Cluster Management Cookbook


  3. Another class of related technologies comes from the world of cloud infrastructures such as EC2, Azure, Eucalyptus and VMWare offerings. Apache Hadoop YARN Yet Another Resource Negotiator


  4. Amazon EC2 discovery The Amazon store, in addition to selling goods has a few popular services as selling storage or computing power in the pay-as-you-go model. Mastering ElasticSearch


  5. This chapter discusses how to use Amazon Elastic MapReduce (EMR), the hosted Hadoop infrastructure, to execute traditional MapReduce computations as well as Pig and Hive computations on the Amazon EC2 cloud infrastructure. Hadoop MapReduce Cookbook


  6. In this section, we look at running Hadoop on Amazon EC2, which is a great way to try out your own Hadoop cluster on a low-commitment trial basis. Hadoop:The Definitive Guide 3rd Edition


  7. The EC2 service provides compute capacity for running Hadoop nodes. Hadoop in Action


  8. Since we also explore Amazon Web Services in this book, you can run all the examples on EC2 instances, and we will look at some other more Hadoop-specific uses of AWS throughout the book. Hadoop Beginners Guide


  9. At the time of this writing, Amazon Web Services, a popular infrastructure as a service cloud provider, had a large power outage that impacted a large swath of EC2 users. Hadoop Operations


  10. All experiments we run in this paper are on Amazon’s EC2 cloud offering; however our techniques are applicable to non-virtualized cluster computing grid deployments as well. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



166、Matching text for the word lambda (see the sketch after the examples):


  1. The second argument is lambda and it controls a feature of the factorizer called regularization. Mahout in Action


  2. The distribution property (which can take the value ll or spl) and the lambda property (which can take the value df or tff). Mastering ElasticSearch


  3. By the end of this first part, you’ll have a full understanding of what lambda expressions are, and you’ll be able to write code that’s both concise and flexible enough to easily adapt to changing requirements. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Maybe on a forum for programming language enthusiasts such as Lambda the Ultimate, or maybe in more pedestrian quarters: Reddit, or the like. Programming in Scala 2nd Edition


  5. Also often called anonymous functions, function literals, lambda functions, or just lambdas. Functional Programming in Scala


  6. We’ve got a list of people who asked to be notified when our new restaurant, The Lambda Bar and Grille, opens, and we’d like to send them an invitation to a grand-opening party. Functional Programming Patterns in Scala and Clojure


  7. This is similar to a lambda and like constructs used in many other languages. Programming Scala


  8. If this is the kind of discussion that is interesting to you, please check out Lambda the Ultimate at. Beginning Scala


  9. Item 21: Use function objects to represent strategies Some languages support function pointers, delegates, lambda expressions, or similar facilities that allow programs to store and transmit the ability to invoke a particular function. Effective Java 2nd Edition


  10. A lambda expression is a block of code with parameters. Java SE 8 for the Really Impatient
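
A short Java 8 illustration of the last excerpt's definition of a lambda as "a block of code with parameters"; the data is made up:

    import java.util.Arrays;
    import java.util.List;

    public class LambdaDemo {
        public static void main(String[] args) {
            List<String> names = Arrays.asList("Ada", "Linus", "Grace");

            // Before Java 8 this would require an anonymous Comparator class;
            // the lambda below is the equivalent block of code with parameters.
            names.sort((a, b) -> Integer.compare(a.length(), b.length()));

            names.forEach(n -> System.out.println(n));
        }
    }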



167、Matching text for the word grained:


  1. CGMgraph [8] is similar in concept, providing a number of parallel graph algorithms using the Coarse Grained Multicomputer (CGM) model based on MPI. Pregel  -  A System for Large-Scale Graph Processing


  2. In the first, coarse-grained conflict detection, a scheduler’s placement choice would be rejected if any changes had been made to the target machine since the local copy of cell state was synchronized at the beginning of the transaction. Omega - flexible, scalable schedulers for large compute clusters


  3. When a read arrives, it only needs to be checked against the fine-grained safe time for key ranges with which the read conflicts. Spanner - Google’s Globally-Distributed Database


  4. This will allow scalable access to small amounts of data, and may facilitate fine-grained OSD load balancing using a mechanism similar to D-SPTF [15] Ceph A Scalable, High-Performance Distributed File System


  5. Since the data flow from the input records to the Sawzall program is so well structured, it was easy to adapt it to provide fine-grained access control to individual fields within records. Interpreting the Data Parallel Analysis with Sawzall


  6. This facilitates fine-grained balancing in the presence of transient load imbalance, much like D-SPTF [16] RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  7. By querying individual fields, you have more flexibility, power and fine-grained control over which results are considered to be most relevant. Elasticsearch The Definitive Guide


  8. Abstract We describe our experiences with the Chubby lock service, which is intended to provide coarse-grained locking as well as reliable (though low-volume) storage for a loosely-coupled distributed system. Chubby - The lock service for loosely-coupled distributed systems


  9. You should now be ready to perform routine maintenance, understand your Cassandra cluster better, and know how to perform some general and fine-grained tuning tasks to keep Cassandra healthy. Cassandra The Definitive Guide


  10. We’ll cover the authorization plug-in, which in conjunction with the authentication plug-ins allows us to define a fine-grained security access to broker clients. ActiveMQ in Action



168、Matching text for the word append (see the sketch after the examples):


  1. The source configuration also has a parameter called channels (plural) that is the name of the channel/channels the source will append events to, in this case c1 Apache Flume Distributed Log Collection for Hadoop


  2. Since the second request includes a completely new version of the document, CouchDB can simply append it to the database without having to wait for the read request to finish. CouchDB The Definitive Guide


  3. If the value is not NULL, which is determined by looking at its definition level, we synchronize the record being assembled to the record structure of the current reader in the method MoveToLevel, and append the field value to the record. Dremel - Interactive Analysis of WebScale Datasets


  4. With that background, we now describe how the client, master, and chunkservers interact to implement data mutations, atomic record append, and snapshot. GFS - The Google File System


  5. If there is a text in the term (its length is higher than zero), we use a StringBuilder object to reverse the text, we clear the term buffer (by calling setEmpty() on the attribute), and we append the reversed text to the already emptied term buffer (we do that by calling the append() method of the attribute) Mastering ElasticSearch


  6. While super columns look like an attractive option, it is almost always better to append the column and the super column together with a delimiter between them. Cassandra High Performance Cookbook


  7. Right from the very beginning, CouchDB was designed to be schema-free and indexable, using a combination of append-only storage and atomic updates. Beginning CouchDB


  8. No reads or seeks of any kind are required for writing a value to Cassandra because all writes are append operations. Cassandra The Definitive Guide


  9. Essentially, this acts as a batch mode APPEND style INSERT. Tenzing - A SQL Implementation On The MapReduce Framework


  10. In this case, Gradle has overloaded << to append a code block to the list of actions a task performs. Building and Testing with Gradle
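
The excerpts use "append" both for building up strings and for append-only writes to a file. A minimal Java sketch of both senses; the file name events.log is illustrative:

    import java.io.FileWriter;
    import java.io.IOException;

    public class AppendDemo {
        public static void main(String[] args) throws IOException {
            // Appending to an in-memory buffer.
            StringBuilder sb = new StringBuilder();
            sb.append("row-key").append(':').append(42);
            System.out.println(sb);

            // Append-only write to a file: the second constructor argument
            // opens the file in append mode instead of truncating it.
            try (FileWriter log = new FileWriter("events.log", true)) {
                log.append(sb).append(System.lineSeparator());
            }
        }
    }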



169、Matching text for the word jit:


  1. Two forces can lead to race conditions—the Just-in-Time (JIT) compiler optimization and the Java Memory Model. Programming Concurrency on the JVM


  2. On the JVM, there are optimizations in the JIT to attempt to avoid locks when they aren’t necessary. Scala in Depth


  3. While some attempts have been made to increase the performance of Java software, such as just-in-time (JIT) compilers that convert Java bytecode to native code, developers and users will find that performance is not as fast as comparable C++ code, and that a greater amount of memory is consumed. Java Network Programming and Distributed Computing


  4. Translating Java Virtual Machine code at load-time or during execution into the native instruction set of the host CPU (sometimes referred to as just-in-time, or JIT, code generation) The Java Virtual Machine Specification Java SE 7 Edition


  5. An especially important one involves the operation of the loader and what is called a just-in-time (JIT) compiler. Thinking in Java 4th Edition


  6. We start by discussing the Java bytecode format, and how a JIT compiler works, making a case for the power of adaptive runtimes. Oracle JRockit The Definitive Guide


  7. Of particular note is a VM technology called just-in-time (JIT) compilation whereby Java byte codes are converted on the fly into native platform machine language, boosting execution speed for code that is run repeatedly. Java in a Nutshell 5th Edition


  8. This code was then compiled using Sawzall’s just-in-time (JIT) compiler. Tenzing - A SQL Implementation On The MapReduce Framework


  9. The default is with a JIT, and this is the mode used for all measurements in the book. Java Performance Tuning 2nd Edition


  10. Chapter 7 presents a step-bystep approach to tuning the HotSpot VM covering choices such as which JIT compiler to use, which garbage collector to use, and how to size Java heaps, and also provides an indication when the Java application itself may require some rework to meet the performance goals set forth by application stakeholders. Java Performance



170、Matching text for the word jndi (see the sketch after the examples):


  1. The JNDI API has two parts: an application-level interface used by the application components to access naming and directory services and a service provider interface to attach a provider of a naming and directory service. Java EE 7 Specification


  2. This uses the Java Naming and Directory Interface (JNDI) registry to look up the required details. HBase:The Definitive Guide


  3. The mappedName attribute specifies the JNDI name of the JMS destination from which the bean will consume the message. Java EE 7 Essentials


  4. The chapter also has a brief introduction to hierarchical databases (such as LDAP directories) and the JNDI API. Core Java Volume II Advanced Features 9th Edition


  5. Additions to the platform included the Java Naming and Directory Interface (JNDI) and the Java Sound APIs, which were previously available as extensions to the platform. Java in a Nutshell 5th Edition


  6. In addition, Java makes heavy use of nonstandard custom schemes such as rmi, jar, jndi, and doc for various purposes. Java Network Programming 4th Edition


  7. For example, EJBHome objects and other JNDI lookups are often costly, but need to be performed regularly. Java Performance Tuning 2nd Edition


  8. It also discusses the client-side JNDI support provided by ActiveMQ. ActiveMQ in Action


  9. It is important to make a distinction between the bean instances that are pooled and/or cached versus the bean references that the client obtains though resource injection, EJBHome.create, or JNDI (Java Naming and Directory Interface) lookup. Java Performance


  10. The preferred way to obtain a database connection is to use a DataSource when running on an application server or to have access to a Java Naming and Directory Interface (JNDI) service. Java 7 Recipes A Problem-Solution Approach
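
A minimal sketch of a JNDI lookup, assuming the code runs inside a container that has bound a DataSource; the binding name java:comp/env/jdbc/ExampleDS is illustrative, not a real deployment's name:

    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class JndiLookupDemo {
        public static DataSource lookupDataSource() throws NamingException {
            // The InitialContext talks to whatever naming service the environment
            // provides (an application server registry, an LDAP directory, etc.).
            InitialContext ctx = new InitialContext();

            // The lookup name below is an assumption for this example; the real
            // name depends on how the resource was registered in the container.
            return (DataSource) ctx.lookup("java:comp/env/jdbc/ExampleDS");
        }
    }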



171、Matching text for the word enumerated (see the sketch after the examples):


  1. You can define your own enumerated type whenever such a situation arises. Core Java Volume I Fundamentals 9th Edition


  2. Enumerated types An apparently small addition in Java SE5 is the enum keyword, which makes your life much easier when you need to group together and use a set of enumerated types. Thinking in Java 4th Edition


  3. However, suppose you maintain legacy code that contains an enumerated type. Core Java Volume II Advanced Features 9th Edition


  4. How do you query the database for an enumerated list of values that are currently allowed in the status column? SQL Antipatterns


  5. If the constants are best viewed as members of an enumerated type, you should export. Effective Java 2nd Edition


  6. The three most significant new language features are generic types, enumerated types, and annotations, which are covered in a new chapter of their own. Java in a Nutshell 5th Edition


  7. Avro allows the definition of hierarchical data structures; so, for example, we can create a record that contains an array, an enumerated type, and a subrecord. Hadoop Beginners Guide


  8. With this option, the matching documents are first enumerated into an internal bit set and then that bit set is used to match documents. Lucene in Action 2nd Edition


  9. The sessions bound to a session context can be enumerated as follows: Fundamental Networking in Java


  10. The FetchType enumerated type defines the values for this element, which can be either EAGER or LAZY. Pro JPA2
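
A small Java enum as a concrete reminder of what an enumerated type is; the Status values are invented for the example:

    public class EnumDemo {
        // An enumerated type: the set of legal values is fixed at compile time.
        enum Status { NEW, ACTIVE, SUSPENDED, CLOSED }

        public static void main(String[] args) {
            Status s = Status.ACTIVE;
            switch (s) {
                case ACTIVE:
                    System.out.println("account is active");
                    break;
                default:
                    System.out.println("account is " + s);
            }

            // values() enumerates every constant of the type.
            for (Status each : Status.values()) {
                System.out.println(each);
            }
        }
    }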



172、Matching text for the word benchmarks (see the sketch after the examples):


  1. You can run the benchmarks yourself by changing to the bench/ directory of CouchDB’s trunk and running ./runner.sh while CouchDB is running on port 5984 CouchDB The Definitive Guide


  2. We can also use Hadoop benchmarks to identify configuration problems and use it as reference for performance tuning. Hadoop Operations and Cluster Management Cookbook


  3. In this section we present a few micro-benchmarks to illustrate the bottlenecks inherent in the GFS architecture and implementation, and also some numbers from real clusters in use at Google. GFS - The Google File System


  4. While the sort benchmark would typically benefit from improvements to the HDFS data path, both benchmarks perform better on YARN primarily due to significant improvements in the MapReduce runtime itself: map-side sort improvements, a reduce client that pipelines and batches transfers of map output, and a server-side shuffle based on Netty [3] Apache Hadoop YARN Yet Another Resource Negotiator


  5. These benchmarks performed different computational tasks, including analyzing ads logs (Ads Logs), extracting and joining data about websites from various sources (SiteData and IndexStats), and computing usage statistics from logs dumped by internal build tools (Build Logs) FlumeJava - Easy, Efficient Data-Parallel Pipelines


  6. Benchmarking HDFS Running benchmarks is a good way to verify whether your HDFS cluster is set up properly and performs as expected. Hadoop MapReduce Cookbook


  7. It contains benchmarks for Avro, Protocol Buffers, and Thrift, along with a number of other frameworks. Hadoop in Practice


  8. In this paper, we describe the architecture and implementation of Tenzing, and present benchmarks of typical analytical queries. Tenzing - A SQL Implementation On The MapReduce Framework


  9. Our load balancing algorithm attempts to deal with this imbalance, but cannot do a perfect job for two main reasons: rebalancing is throttled to reduce the number of tablet movements (a tablet is unavailable for a short time, typically less than one second, when it is moved), and the load generated by our benchmarks shifts around as the benchmark progresses. Bigtable - A Distributed Storage System for Structured Data


  10. Through the performance benchmarks in the following sections, we show that using an efficient database storage layer cuts down on data processing time especially on tasks that require complex query processing over structured data such as joins. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
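
A deliberately naive timing sketch of the kind of micro-benchmark the excerpts mention. Real benchmarks should use a proper harness (for example JMH) and representative workloads; this sketch only shows the basic warm-up-then-measure shape, and its numbers should not be taken as authoritative:

    public class NaiveBenchmark {
        public static void main(String[] args) {
            // Warm-up pass so the JIT has had a chance to compile the hot loop
            // before measurement starts; printing the result keeps it from being
            // eliminated as dead code.
            System.out.println("warm-up result: " + sumTo(10_000_000L));

            long start = System.nanoTime();
            long result = sumTo(100_000_000L);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("result=" + result + ", elapsed=" + elapsedMs + " ms");
        }

        // The workload being measured: a simple CPU-bound loop.
        static long sumTo(long n) {
            long sum = 0;
            for (long i = 0; i < n; i++) {
                sum += i;
            }
            return sum;
        }
    }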



173、Matching text for the word bigtable:


  1. Storing and querying hierarchical data is straightforward in key-value stores like Bigtable. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. HBase is an Apache-licensed open source system modeled after Google’s Bigtable. Monitoring with Ganglia


  3. Even though many projects happily use Bigtable [9], we have also consistently received complaints from users that Bigtable can be difficult to use for some kinds of applications: those that have complex, evolving schemas, or those that want strong consistency in the presence of wide-area replication. Spanner - Google’s Globally-Distributed Database


  4. FlumeJava’s parallel collections abstract away the details of how data is represented, including whether the data is represented as an in-memory data structure, as one or more files, or as an external storage service such as a MySql database or a Bigtable [5] FlumeJava - Easy, Efficient Data-Parallel Pipelines


  5. Bigtable[4] provides both structure and data distribution but relies on a distributed file system for its durability. Cassandra A Decentralized Structured Storage System


  6. For example, front-ends may handle many different request types, or a storage system such as Bigtable [8] may be most efficient when shared across multiple applications. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. In addition, both GFS and Bigtable use Chubby as a well-known and available location to store a small amount of meta-data; in effect they use Chubby as the root of their distributed data structures. Chubby - The lock service for loosely-coupled distributed systems


  8. Once you start to scale many other data stores (MySQL, Bigtable), some nodes need to be set up as masters in order to organize other nodes, which are set up as slaves. Cassandra The Definitive Guide


  9. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving) Bigtable - A Distributed Storage System for Structured Data


  10. Distributed storage systems like Bigtable [9] can scale to the size of our repository but don’t provide tools to help programmers maintain data invariants in the face of concurrent updates. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



174、Matching text for the word myisam:


  1. Changing the value of AUTOCOMMIT has no effect on nontransactional tables, such as MyISAM or Memory tables, which have no notion of committing or rolling back changes. High Performance MySQL 3rd Edition


  2. MySQL’s MyISAM storage engine, querying a table is most efficient. SQL Antipatterns


  3. You might think this missing support constitutes a deal breaker; however, many people forget that one of the most popular table types in MySQL (MYISAM) doesn’t support transactions, either. The Definitive Guide to MongoDB


  4. Although at the moment MySQL supports a number of page and row-level locking storage engines, the core architecture has a strong MyISAM heritage. Understanding MySQL Internals



175、Matching text for the word wiki:


  1. Sample configurations for other web servers such as Nginx and Lighttpd are available on the gweb wiki. Monitoring with Ganglia


  2. To secure against these errors, or to handle non-existing documents in a custom way (e.g., a wiki. CouchDB The Definitive Guide


  3. How do you know what’s out there, you may ask? Gradle provides a curated list of available community plugins on a wiki page: http://wiki.gradle.org/display/GRADLE/Plugins. Gradle in Action


  4. From the client point of view, document is a JSON object (see more about JSON format at http://en.wikipedia.org/wiki/JSON) Mastering ElasticSearch


  5. The API for list functions is volatile at present, so check the CouchDB wiki for the syntax to use for these functions. Beginning CouchDB


  6. The main wiki is located at http://wiki.apache.org/hadoop/, and contains useful presentations, setup instructions, and troubleshooting instructions. Hadoop in Practice


  7. Embedded Pig programs is a fairly advanced topic and you can find more details at http://wiki.apache.org/pig/ EmbeddedPig. Hadoop in Action


  8. According to the project wiki, Cassandra has been used to create a variety of applications, including a windowed time-series store, an inverted index for document searching, and a distributed job priority queue. Cassandra The Definitive Guide


  9. If transforms are of interest, they are well documented on the Hive wiki. Hadoop Beginners Guide


  10. This is easy enough because a sample is provided in the JBoss wiki. ActiveMQ in Action



176、Matching text for the word analytics:


  1. For over a decade, he has worked for several startup companies in Silicon Valley and Raleigh, North Carolina, in the area of search and analytics. Apache Flume Distributed Log Collection for Hadoop


  2. No one (cluster) size fits all: automatic cluster sizing for data-intensive analytics. Omega - flexible, scalable schedulers for large compute clusters


  3. Finally we've touched the surface of search analytics and we've seen what software we can use to measure and observe. Mastering ElasticSearch


  4. If you are working with an existing system, then you can take a look at log files and analytics data to get a sense of how your application is used. Scaling CouchDB


  5. Elasticsearch is a distributed scalable real-time search and analytics engine. Elasticsearch The Definitive Guide


  6. Two main applications that fall into this category are Realtime Analytics [11] and MySQL backups. Apache Hadoop Goes Realtime at Facebook


  7. These companies include: • Twitter is using Cassandra for analytics. Cassandra The Definitive Guide


  8. It is possible to create a fully functional SQL engine on top of the MapReduce framework, with extensions that go beyond SQL into deep analytics. Tenzing - A SQL Implementation On The MapReduce Framework


  9. For example, when you need real-time analytics about your enterprise: an application used by a retail store might want to send a message to request more inventory. ActiveMQ in Action


  10. It records various information about the request in Google Analytics, such as a user identifier and information about the page being fetched. Bigtable - A Distributed Storage System for Structured Data



177、Matching text for the word executable:


  1. Because Ganglia is an open source project, it is possible to compile a runnable binary executable of the gmond agent on virtually any platform with a C compiler. Monitoring with Ganglia


  2. Optimally, the automated tasks should be executable from the command line, which allows you to run the build from any machine you want, whenever you want. Gradle in Action


  3. Running Java applications If we want to execute a Java executable from a Gradle build, we have several options. Gradle Effective Implementation Guide


  4. Create an executable script /root/sync.sh that uses the rsync command: Cassandra High Performance Cookbook


  5. A protocol compiler takes this language and generates executable code to manipulate the protocol buffers. Interpreting the Data Parallel Analysis with Sawzall


  6. The fact that the Gradle build file is executable Groovy code gives you nearly unlimited flexibility in deciding on its origin. Gradle Beyond The Basics


  7. Let the data remain where it is and move the executable code to its hosting machine. Hadoop in Action


  8. The Ant tasks for Thrift generation directly ran the Thrift executable (as you would on the command line) Cassandra The Definitive Guide


  9. After the usual routine of making the mapper executable and running the job we produced, data showing 29 different UFO shapes were reported. Hadoop Beginners Guide


  10. Brew will install and build any dependencies, then place the Gradle executable in your path. Building and Testing with Gradle



178、Matching text for the word elasticsearch:


  1. Its vibrant and growing user community includes large companies like Facebook and Instagram as well as popular open-source projects such as Infinispan, HornetQ, Vert.x, Apache Cassandra and Elasticsearch, all of which have employed its powerful network abstractions in their core code. Netty in Action MEAP V08


  2. For example, we would run a query lucene NOT elasticsearch to match documents that contain lucene term, but not elasticsearch term in the document. Mastering ElasticSearch


  3. In order to achieve this, the recipe integrates with an open source product called Elastic Search, which is a general-use, clustered search engine with a RESTful API (http://www.elasticsearch.org/) Storm Real-Time Processing Cookbook


  4. This is an intermediate level book designed for anyone writing applications using, or responsible for managing Elasticsearch in a production environment. Elasticsearch in Action MEAP V07 cHAPTER 1


  5. ElasticSearch for indexing and searching ElasticSearch (http://www.elasticsearch.org/) is an Apache 2.0 licensed open source search solution built on top of Apache Lucene. Hadoop MapReduce Cookbook


  6. Scripting – when you need more (TODO): scripting in Elasticsearch, the default scripting language (mvel), other scripting languages, script fields, Query DSL sorting, scripting and facets. Elasticsearch The Definitive Guide



179、Matching text for the word comparator (see the sketch after the examples):


  1. The sample MapReduce job performs range joins, and uses custom comparator and partitioner classes. Pro Hadoop


  2. The only control a developer has is how the keys are sorted and grouped by specifying a custom Comparator object. MapReduce Design Patterns


  3. The group key is used by the partitioner and the group comparator. Professional Hadoop Solutions


  4. Create a column family named cars specifying the comparator as LongType: Cassandra High Performance Cookbook


  5. Finally, write the code to set up a normal MapReduce job, but tell the MapReduce framework to use our own partitioner and comparator classes: Hadoop RealWorld Solutions Cookbook


  6. It uses the same comparison operators and comparator types as RowFilter. HBase in Action


  7. Finally, the grouping RawComparator is responsible for determining the group boundaries across the sorted records. Hadoop in Practice


  8. WritableComparator is a general-purpose implementation of RawComparator for WritableComparable classes. Hadoop:The Definitive Guide 3rd Edition


  9. KeyComparator Compares two KeyValue keys, i.e., what is returned by the getKey() method, in their raw, byte array format. HBase:The Definitive Guide


  10. Second, a column family has two attributes: a name and a comparator. Cassandra The Definitive Guide
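
A minimal java.util.Comparator, analogous to the custom key comparators the excerpts describe for controlling sort order; the Person type and data are made up:

    import java.util.Arrays;
    import java.util.Comparator;

    public class ComparatorDemo {
        // Hypothetical record type used only for this example.
        static class Person {
            final String name;
            final int age;
            Person(String name, int age) { this.name = name; this.age = age; }
        }

        public static void main(String[] args) {
            Person[] people = {
                new Person("Ada", 36), new Person("Linus", 28), new Person("Grace", 45)
            };

            // The comparator decides how two keys are ordered; the sort itself stays
            // generic, much like supplying a custom sort comparator to MapReduce.
            Comparator<Person> byAge = (a, b) -> Integer.compare(a.age, b.age);
            Arrays.sort(people, byAge);

            for (Person p : people) {
                System.out.println(p.name + " " + p.age);
            }
        }
    }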



180、Matching text for the word meta:


  1. All of the data and meta-information generated by CouchDB and internal elements such as the configuration information are all generated as JSON. Getting Started with CouchDB


  2. You should see meta-information about the installation, your JVM, and the operating system. Gradle in Action


  3. Client module dependency module() A dependency on an external module where the artifacts are stored in a repository but the meta information about the module is in the build file. Gradle Effective Implementation Guide


  4. The information about keyspaces and column families is meta-information that is stored and replicated across all nodes in the cluster. Cassandra High Performance Cookbook


  5. The headers usually provide information such as the time of the response, information about the server, the type of content contained in the response body, and more meta-information. Beginning CouchDB


  6. This directory will automatically be included in the jar file under its META-INF directory. Gradle Beyond The Basics


  7. In addition, both GFS and Bigtable use Chubby as a well-known and available location to store a small amount of meta-data; in effect they use Chubby as the root of their distributed data structures. Chubby - The lock service for loosely-coupled distributed systems


  8. In Microsoft SQL Server, two meta-databases are maintained: master and tempdb. Cassandra The Definitive Guide


  9. ColumnIO files have headers which contain meta information about the data, including the low and high values for each column. Tenzing - A SQL Implementation On The MapReduce Framework


  10. The journal directory—Contains the data files for the data logs, and a data-control file that holds some meta information. ActiveMQ in Action



181、Matching text for the word cpus (see the sketch after the examples):


  1. This includes metrics such as the OS type or the number of CPUs installed in the system. Monitoring with Ganglia


  2. The system scales to thousands of CPUs and petabytes of data, and has thousands of users at Google. Dremel - Interactive Analysis of WebScale Datasets


  3. The programs were executed on a weekend afternoon, when the CPUs, disks, and network were mostly idle. MapReduce - Simplied Data Processing on Large Clusters


  4. Another example is a big cluster with various hardware components, different CPUs, RAM amount, or disk space. Mastering ElasticSearch


  5. This is especially useful under high load where CPUs tend to drift more. Cassandra High Performance Cookbook


  6. While memory gets cheaper, CPUs get faster, and hard drives get larger every year, at any given moment there’s an upward limit to vertical scaling. Scaling CouchDB


  7. Since GFS is a storage system, its CPUs are often lightly loaded, and the free computing cycles can be used to run Workqueue jobs. Interpreting the Data Parallel Analysis with Sawzall


  8. Cassandra, like many databases, is particularly dependent on the speed of the hard disk and the speed of the CPUs (it’s best to have four or eight cores, to take advantage of Cassandra’s highly concurrent construction) Cassandra The Definitive Guide


  9. This bandwidth is enough to saturate the tablet server CPUs because of overheads in our networking stack, SSTable parsing, and Bigtable code, and is also almost enough to saturate the network links used in our system. Bigtable - A Distributed Storage System for Structured Data


  10. CPUs have become so much faster than disks that a few CPUs in a shared-memory machine can drive enough disk heads to service required loads without the complexity of distributed transactions: the top TPC-E benchmark results today are achieved on large shared-memory machines connected to a SAN. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications
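
One place the CPU count appears directly in Java code is when sizing a thread pool for CPU-bound work; a minimal sketch:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CpuDemo {
        public static void main(String[] args) {
            // Number of logical CPUs (cores or hardware threads) visible to the JVM.
            int cpus = Runtime.getRuntime().availableProcessors();
            System.out.println("available CPUs: " + cpus);

            // A common heuristic: one worker thread per CPU for CPU-bound tasks.
            ExecutorService pool = Executors.newFixedThreadPool(cpus);
            pool.submit(() -> System.out.println("running on " + Thread.currentThread().getName()));
            pool.shutdown();
        }
    }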



182、Matching text for the word dom (see the sketch after the examples):


  1. If the user’s experience is impacted the most by DOM rendering in the browser, and MySQL contributes only a tiny fraction of the total response time, then optimizing queries further can never help the user experience appreciably. High Performance MySQL 3rd Edition


  2. You need a tool that automates bringing up the browser, manipulates the data object model (DOM) of the web page, and supports running these tests against different browsers. Gradle in Action


  3. The first thing to do is map the Story DOM with the STORIES table in the database using the table method defined in the Squeryl Schema class. Scala in Action


  4. Produce/consume JSON text in a streaming fashion (similar to StAX API for XML) • Build a Java object model for JSON text (similar to DOM API for XML) Java EE 7 Essentials


  5. Tree parsers, such as the Document Object Model (DOM) parser, that read an XML document into a tree structure. Core Java Volume II Advanced Features 9th Edition


  6. A DOM parser reads an XML document and converts it into a tree of nodes that represent the full content of the document. Java in a Nutshell 5th Edition


  7. These are logically defined processing units and may be accomplished within a single step (e.g., SAX) or may be distinct steps (e.g., DOM) Java Performance


  8. For those familiar with the XML DOM interface the code presented below should be straight-forward. Lucene in Action 2nd Edition


  9. How It Works JavaFX has the capability to apply CSS styles onto the Scene graph and its nodes very much like browsers applying CSS styles onto elements in an HTML document object model (DOM) Java 7 Recipes A Problem-Solution Approach


  10. It has been developed using JavaScript so that it can interact with DOM (Document Object Model) using native JavaScript calls. Selenium 2 Testing Tools Beginner''s Guide
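
A minimal Java DOM parse of the kind the excerpts contrast with streaming (SAX/StAX) parsing; the file name example.xml and the element name item are illustrative:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class DomDemo {
        public static void main(String[] args) throws Exception {
            // A DOM parser loads the whole document into an in-memory tree of nodes.
            DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = builder.parse(new File("example.xml")); // illustrative file name

            // Walk the tree: here, every <item> element's text content.
            NodeList items = doc.getElementsByTagName("item");
            for (int i = 0; i < items.getLength(); i++) {
                System.out.println(items.item(i).getTextContent());
            }
        }
    }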



183、Matching text for the word wildcard (see the sketch after the examples):


  1. How do you allow one call and disallow the other? The Java language designers invented an ingenious new concept, the wildcard type, to solve this problem. Core Java Volume I Fundamentals 9th Edition


  2. Please note by default these wildcard characters can't be used as the first character in a term for performance reasons. Mastering ElasticSearch


  3. Since we do not need to know the exact type T, we can use a wildcard. Data Structures and Algorithm Analysis in Java 3rd Edition


  4. In bash and other shells it’s normal to affix the * wildcard to filesystem commands, and for the shell to expand that prior to running a program. Hadoop in Practice


  5. Rather than having to enumerate each file and directory to specify the input, it is convenient to use wildcard characters to match multiple files with a single expression, an operation that is known as globbing. Hadoop:The Definitive Guide 3rd Edition


  6. Fieldnames can be specified with wildcards—any field which matches the wildcard pattern will be included in the search. Elasticsearch The Definitive Guide


  7. Methods that take a Builder instance would typically constrain the builder’s type parameter using a bounded wildcard type (Item 28) Effective Java 2nd Edition


  8. ActiveMQ uses a special notation to denote a wildcard subscription; we’ll describe that in the next section. ActiveMQ in Action


  9. Receiving limited or directed broadcasts is simply a matter of binding a datagram socket to the wildcard address and executing a receive operation. Fundamental Networking in Java


  10. In order to add all the JAR files within a specified directory, use the wildcard character (*) after the directory containing the files. Java 7 Recipes A Problem-Solution Approach
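
The excerpts use "wildcard" in two senses: filename globbing and Java generics wildcard types. A minimal sketch of the latter; the names are made up:

    import java.util.Arrays;
    import java.util.List;

    public class WildcardDemo {
        // "? extends Number" accepts a List of Number or of any subtype,
        // which a plain List<Number> parameter would reject.
        static double sum(List<? extends Number> values) {
            double total = 0;
            for (Number n : values) {
                total += n.doubleValue();
            }
            return total;
        }

        public static void main(String[] args) {
            List<Integer> ints = Arrays.asList(1, 2, 3);
            List<Double> doubles = Arrays.asList(1.5, 2.5);
            System.out.println(sum(ints) + " " + sum(doubles)); // both calls compile
        }
    }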



184、Matching text for the word kerberos:


  1. We will show you how to configure ACL for authorization and Kerberos for authentication, configure NameNode HA, recover from a failed NameNode, and so on. Hadoop Operations and Cluster Management Cookbook


  2. You learn how Hadoop provides authentication, with a focus on the details of Hadoop’s use of Kerberos. Professional Hadoop Solutions


  3. Hadoop security – integrating with Kerberos Hadoop by default runs without security. Hadoop MapReduce Cookbook


  4. Hadoop can be configured to run with Kerberos, a network authentication protocol, which requires Hadoop daemons to authenticate clients, both user and other Hadoop components. Hadoop in Practice


  5. Note, however, that the latest versions of Hadoop support Kerberos authentication, which removes these restrictions; see “Security” on page 325.) Despite these limitations, it is worthwhile having permissions enabled (as it is by default; see the dfs.permissions property), to avoid accidental modification or deletion of substantial parts of the filesystem, either by users or by automated tools or programs. Hadoop:The Definitive Guide 3rd Edition


  6. Different credential administrations are possible; the initial implementation will use Kerberos. The Hadoop Distributed File System


  7. Providing Your Own Authentication You can provide your own method of authenticating to Cassandra if you want to make special requirements, such as a Kerberos ticket or encryption, or if you want to store passwords in a different location, such as an LDAP directory. Cassandra The Definitive Guide


  8. To avoid reliance on simple user IDs, the developers need to learn the user identity from somewhere, and the Kerberos system was chosen with which to integrate. Hadoop Beginners Guide


  9. Yahoo!, one of the major contributors to Apache Hadoop, invested in adding full Kerberos support to core Hadoop. Hadoop Operations


  10. Some common forms of authentication include plain-text passwords, one-time password devices, smart cards, or Kerberos, just to name a few. ActiveMQ in Action



185、Matching text for the word aws:


  1. Words that you see on the screen, in menus or dialog boxes for example, appear in the text like this: " Verify the startup from AWS Management Console" HBase Administration Cookbook


  2. Amazon Amazon, http://aws.amazon.com, provides a large set of cloud computing services: Pro Hadoop


  3. And, we will end this book by introducing steps to build a Hadoop cluster using the AWS cloud. Hadoop Operations and Cluster Management Cookbook


  4. For the past two years, he has participated in the design and implementation of several Hadoop and Amazon Web Services (AWS) based implementations. Professional Hadoop Solutions


  5. The biggest (and oldest) player right now in the cloud infrastructure space is Amazon Web Services (AWS) HBase in Action


  6. Check the output of the computation in the output data bucket using the AWS S3 console. Hadoop MapReduce Cookbook


  7. Deepak Singh, open data advocate and bizdev manager of the Amazon AWS cloud. Hadoop:The Definitive Guide 3rd Edition


  8. Transferring data into and out of an AWS Hadoop cloud. Hadoop in Action


  9. Since we also explore Amazon Web Services in this book, you can run all the examples on EC2 instances, and we will look at some other more Hadoop-specific uses of AWS throughout the book. Hadoop Beginners Guide


  10. When you set up your AWS account, make sure that all your Amazon EC2 instances, key pairs, security groups, and EMR jobflows are located in the same region to avoid cross-region transfer costs. Programming Hive



186、Matching text for the word backend:


  1. He has worked with Perl/ PHP/Python, primarily for coding and MySQL/Oracle as the backend, for several years prior to moving into Hadoop fulltime. Apache Flume Distributed Log Collection for Hadoop


  2. It is also possible to use a different backend for events, which can be useful if you need to scale up to hundreds or thousands of events without incurring the processing penalty associated with JSON parsing. Monitoring with Ganglia


  3. Towards characterizing cloud backend workloads: insights from Google compute clusters. Omega - flexible, scalable schedulers for large compute clusters


  4. Jason Davies swapped out the backend of the Ely Service website with CouchDB, without changing anything visible to the user. CouchDB The Definitive Guide


  5. Her interests and experiences lie across multiple tiers, from web frontend GUI to middleware, from middleware to backend SQL RDBMS and NoSQL data storage. Cassandra High Performance Cookbook


  6. Cassandra is now deployed as the backend storage system for multiple services within Facebook. Cassandra A Decentralized Structured Storage System


  7. The max=4 parameters indicate the maximum number of connections that the proxy will allow to the backend server. Scaling CouchDB


  8. Understanding: Ads Review queries fan out across many types of systems, including BigTable, the aforementioned database, a multi-dimensional indexing service, and various other C++ and Java backend services. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  9. Keith Thornhill works as a Senior Software Engineer at Raptr and saw a need to take their storage and analytics backend to the next level. Cassandra The Definitive Guide


  10. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving) Bigtable - A Distributed Storage System for Structured Data



187、Matching text for the word jmeter:


  1. Learn how to test web applications using Apache JMeter with practical, hands-on examples. Performance Testing With JMeter 2.9


  2. JMeter JMeter is a Java application that can load another application and measure its performance. High Performance MySQL 3rd Edition


  3. This is great for plotting the history of the standard result types, such as JUnit, FindBugs, JMeter, and NCSS. Jenkins Continuous Integration Cookbook


  4. You can even run JMeter as a proxy, and then manipulate your application in an ordinary browser to prepare an initial version of your test script. Jenkins The Definitive Guide


  5. Using SolrMeter you can quickly set this scenario up, and then use another tool like JMeter that drives your front-end search user interface to ensure your application meets your expected SLA when Solr is under load. Solr 3 Enterprise Search Server


  6. For web applications, there are several open source stress and load testing tools available, including Grinder, JMeter, and Gatling.6 When testing load-balanced web applications, we should ensure our test clients are distributed across different IP addresses so that requests are balanced across the cluster. Graph Databases


  7. JMeter (http://jakarta.apache.org/jmeter/), the Open Source load testing tool from the Apache Software Foundation, is one of the most popular tools in this area. Java Performance



188. Matching text for the word predicate (a short illustrative sketch follows the examples):


  1. Using MultiGet to limit round trips and overhead MultiGet should be used as an alternative to multiple get operations when each get operation uses the same SlicePredicate. Cassandra High Performance Cookbook


  2. A predicate, def(), can be used to test if a value is defined; it returns true for a defined value and false for an undefined value. Interpreting the Data Parallel Analysis with Sawzall


  3. Inner joins compare all tuples in relations L and R, and produce a result if a join predicate is satisfied. Hadoop in Practice


  4. It’s actually an equijoin because the join predicate is equality.) The result’s fields are made up of all the fields of all the input relations. Hadoop:The Definitive Guide 3rd Edition


  5. Following the input, the “filter” operator produces a value when its predicate evaluates to true. Hadoop in Action


  6. The slice predicate is a container object that allows us to specify the range of columns that we want to read between a start and a finish. Cassandra The Definitive Guide


  7. If a predicate is a constant or a constant range (e.g., BETWEEN) and the source data is an indexed source (e.g., Bigtable), the compiler will push down the condition to an index range scan on the underlying source. Tenzing - A SQL Implementation On The MapReduce Framework


  8. Sometimes, we want to only include particular columns or to apply a predicate to reduce the number of selected items. Hadoop Beginners Guide


  9. The value of the predicate is the value returned by the closure. Building and Testing with Gradle


  10. The first structured data task evaluates a simple selection predicate on the pageRank attribute from the Rankings table. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
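
The quotes above all use "predicate" in the same sense: a test that returns true or false for each input. As a minimal Java illustration (not taken from any of the books above; the class and variable names are made up), java.util.function.Predicate can be used to filter values:

    import java.util.List;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    public class PredicateDemo {
        public static void main(String[] args) {
            // A predicate evaluates to true or false for each input value.
            Predicate<Integer> aboveFreezing = t -> t > 0;

            List<Integer> readings = List.of(-5, 3, 18, -1, 22);
            // Keep only the readings for which the predicate holds.
            List<Integer> kept = readings.stream()
                    .filter(aboveFreezing)
                    .collect(Collectors.toList());

            System.out.println(kept); // [3, 18, 22]
        }
    }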



189. Matching text for the word erlang:


  1. One camp of developers believes actors should really be used for interprocess communication, like in Erlang. Programming Concurrency on the JVM


  2. Building from source requires a suitable build environment, some libraries, and prerequisites (such as Erlang) Getting Started with CouchDB


  3. Erlang is one such language, and it includes safe mechanisms for one task to communicate with another. Thinking in Java 4th Edition


  4. CouchDB is written in Erlang, but you don’t need to know anything about Erlang to use CouchDB. CouchDB The Definitive Guide


  5. This will drag along a number of packages with it, mainly erlang ones. Deploying OpenStack


  6. Add to this the fact that CouchDB is developed in Erlang OTP, a fault-tolerant programming language that offers excellent concurrency features, and you know that your CouchDB database will scale well without a loss of reliability and availability. Beginning CouchDB


  7. CouchDB (due to its Erlang underpinnings) can take advantage of multiple CPUs. Scaling CouchDB


  8. RabbitMQ is built using the OTP (Open Telecom Platform) and implemented in the Erlang language and runtime environment. Pro Puppet


  9. Preparing RabbitMQ for Pacemaker high availability In order for Pacemaker monitoring to function properly, you must ensure that RabbitMQ’s .erlang.cookie files are identical on all nodes, regardless of whether DRBD is mounted there or not. high-availability-guide


  10. Client: Riak offers three primary ways of interacting with it: via a JSON over HTTP interface; drivers for Erlang, Python, Java, PHP, JavaScript, and Ruby; and, finally, a Protocol Buffers client interface. Cassandra The Definitive Guide



190. Matching text for the word parsed (a short illustrative sketch follows the examples):


  1. Once all the configurations have been parsed you see this message, which shows everything that was configured. Apache Flume Distributed Log Collection for Hadoop


  2. The configuration file is parsed using libconfuse, a third-party API for configuration files. Monitoring with Ganglia


  3. You get back a JSON string, that, if parsed into a native object or data structure of your programming language, gives you access to the welcome string and version information. CouchDB The Definitive Guide


  4. The task sonarRunner will give you concrete information about the parsed report files: Gradle in Action


  5. CouchDB then sends a response to the application in the form of JavaScript Object Notation (JSON), which can easily be parsed by most modern programming languages. Beginning CouchDB


  6. Parsing a complex dataset with Hadoop Datasets we parsed so far were simple, where each data item was contained in a single line. Hadoop MapReduce Cookbook


  7. Every Web page has an associated 1D number called a docID which is assigned whenever a new URL is parsed out of a Web page. The anatomy of a large-scale hypertextual Web search engine


  8. The HPROF option you used will create a text file that can be easily parsed. Hadoop in Practice


  9. At the reducer, the list of values for each key are parsed. Hadoop in Action


  10. This sorts by directly comparing the bytes, validating that the input can be parsed as US-ASCII. Cassandra The Definitive Guide
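
As a small illustration of parsing a JSON string into a native object, here is a sketch using the Jackson library (the library choice and the sample document are assumptions, not something prescribed by the books above):

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class ParseDemo {
        public static void main(String[] args) throws Exception {
            // A CouchDB-style welcome response, parsed into a tree of nodes.
            String json = "{\"couchdb\":\"Welcome\",\"version\":\"1.0.1\"}";
            JsonNode root = new ObjectMapper().readTree(json);
            System.out.println(root.get("version").asText()); // 1.0.1
        }
    }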



191. Matching text for the word nagios:


  1. Chapter 7, Monitoring Flume, discusses various options available to monitor Flume both internally and externally including Monit, Nagios, Ganglia, and custom hooks. Apache Flume Distributed Log Collection for Hadoop


  2. Chapter 5, Monitoring and Diagnosis: In this chapter, we will describe how to monitor and diagnose HBase cluster with Ganglia, OpenTSDB, Nagios, and other tools. HBase Administration Cookbook


  3. As described in Chapter 7, various Nagios integration features may be set in gweb’s conf.php. Monitoring with Ganglia


  4. Nagios: A Monitoring and Alert Generation Framework Nagios  (http://www.nagios.org) provides a flexible customizable framework for collecting data about the state of a complex system and triggering various levels of alerts based on the collected data. Pro Hadoop


  5. Chapter 6, Monitoring a Hadoop Cluster, explains how to monitor a Hadoop cluster with various tools, such as Ganglia and Nagios. Hadoop Operations and Cluster Management Cookbook


  6. Nagios is the de facto standard Network Monitoring System (NMS) Cassandra High Performance Cookbook


  7. There are many fine tools that you can use for this, ranging from sar,1 the built-in Linux utility that collects and reports on system activity, to more sophisticated tools such as Nagios and Ganglia. Hadoop in Practice


  8. Many third-party monitoring and alerting systems (such as Nagios or Hyperic) can query MBeans, making JMX the natural way to monitor your Hadoop cluster from an existing monitoring system. Hadoop:The Definitive Guide 3rd Edition


  9. Nagios Nagios is a very commonly used support tool for gaining qualitative data regarding cluster status. HBase:The Definitive Guide


  10. Nagios, for example, is open source, free, and rather straightforward. Cassandra The Definitive Guide



192. Matching text for the word src:


  1. The binary distribution archive has bin in the name and the source archive is marked with src. Apache Flume Distributed Log Collection for Hadoop


  2. Source and destination directories are configured by attributes srcdir and destdir; compile Java source files located in directory src and put class files into directory dest. Gradle in Action


  3. The following example shows a simple Copy task called simpleCopy with a single source directory src/xml and a destination directory definitions: Gradle Effective Implementation Guide


  4. You can see that the code is placed in the src folder (the code is in the main folder and the unit tests are located in the test folder) Mastering ElasticSearch


  5. For this job we're only interested in retrieving the column qualifier src from the column family cf for each key. Hadoop RealWorld Solutions Cookbook


  6. This will create a new directory called otp_src_R13B01 under /usr/local/erlang. Beginning CouchDB


  7. Printing out the files property of goodPoems (or otherwise inspecting the contents of the collection) shows that it contains all of the .txt files in the src/main/resources directory, but not the file whose name starts with shelley. Gradle Beyond The Basics


  8. Compile the source by running the ant build command from the CHAPTER_6_SRC folder. Hadoop MapReduce Cookbook


  9. In the src directory, you’ll find the source code for Hadoop itself. Hadoop Operations


  10. When custom task logic outgrows the build file, we can migrate it to the buildSrc directory at the project root. Building and Testing with Gradle



193. Matching text for the word css:


  1. The raw images, JavaScript, CSS, and HTML assets needed by Sofa are stored in the _attachments field, which is interesting in that by default it shows only the stubs, rather than the full content of the files. CouchDB The Definitive Guide


  2. A WAR (web application archive) file is used to bundle web components, compiled classes, and other resource files like deployment descriptors, HTML, JavaScript, and CSS files. Gradle in Action


  3. He dabbled with web development in the late 1990s and was an expert in HTML, CSS, and JavaScript by the time he finished school. Beginning CouchDB


  4. Next, create the same document within the catalog-b database, but with a slightly different title (the comma is missing after “CSS”): Scaling CouchDB


  5. In this code, the page defines the structure using … and CSS (not shown here) Java EE 7 Essentials


  6. JavaDocs now also use a CSS stylesheet to simplify changes to the appearance of the pages. Java 7 New Features Cookbook


  7. It has produced several HTML standards, as well as a variety of others such as HTTP, PICS, XML, CSS, MathML, and more. Java Network Programming 4th Edition


  8. It should be noted that delays are meant to be present between successive page requests but not when requesting content (CSS, JavaScript, images) considered part of a page request. Java Performance


  9. Solution Create an application to dynamically customized border regions using JavaFX’s CSS styling API. Java 7 Recipes A Problem-Solution Approach


  10. You can use CSS to change the visual appearance of your application. Java SE 8 for the Really Impatient



194. Matching text for the word endpoints:


  1. Our support is notable for its scale: declaring a queue automatically creates an inbox on each entity group, giving us millions of endpoints. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. In addition to less portability and JVM commitment, using the REST API you can connect to different major ElasticSearch versions if you are aware of the differences in REST endpoints and response returned by ElasticSearch. Mastering ElasticSearch


  3. Because Oozie also has a web console that supports Kerberos HTTP SPNEGO authentication, it should also be configured to use the same HTTP principal and keytab file used for other HTTP endpoints. Professional Hadoop Solutions


  4. It is low latency because the operation is acknowledgment to the client by only one of the natural endpoints for the data. Cassandra High Performance Cookbook


  5. BigCouch introduces some new API endpoints that are needed to manage its clustering features. Scaling CouchDB


  6. Endpoints allow you to extend the cluster’s capabilities, exposing new operations to client applications. HBase in Action


  7. Endpoints are dynamic extensions to the RPC protocol, adding callable remote procedures. HBase:The Definitive Guide


  8. Range The Range class allows us to create a specific interval or span of values with defined endpoints, and works with Comparable types. Getting Started with Google Guava


  9. This strategy only overrides the calculateNatural Endpoints method from the abstract parent implementation. Cassandra The Definitive Guide


  10. Its sequencing with respect to other datagrams between the same two endpoints is not guaranteed. Fundamental Networking in Java



195. Matching text for the word mappers (a short illustrative sketch follows the examples):


  1. It assumes that adding more workers results in an idealized linear speedup (modulo dependencies between mappers and reducers), up to the point where map activities and all reduce activities respectively run in parallel. Omega - flexible, scalable schedulers for large compute clusters


  2. This command will initiate a MapReduce job with a number of mappers to run the copy task in parallel. Hadoop Operations and Cluster Management Cookbook


  3. Analysis is done by the analyzer, which is built of tokenizer and zero or more filters, and can also have zero or more character mappers. Mastering ElasticSearch


  4. Most MapReduces have a single (possibly sharded) input source and a single Mapper, but in general a single MapReduce can have multiple input sources and associated Mappers. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  5. Hive uses multiple mappers to fetch all the contents of the column family in parallel. Cassandra High Performance Cookbook


  6. But, Hadoop runs the mappers in multiple nodes, and often in a machine different than the machine running the job's client. Hadoop MapReduce Cookbook


  7. The MapReduce framework will take care of calling your reducer once per unique key outputted by the mappers, along with a list of document IDs. Hadoop in Practice


  8. Under the MapReduce model, the data processing primitives are called mappers  and reducers. Hadoop in Action


  9. Hadoop provides a standard specification (that is, interface) for the map and reduce functions, and implementations of these are often referred to as mappers and reducers. Hadoop Beginners Guide


  10. This is exactly how Hadoop Streaming - an API that permits non-Java mappers and reducers to be written and executed in Hadoop MapReduce - works; a user supplied script or program is created and executed under the control of the child JVM. Hadoop Operations
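
A minimal Hadoop mapper sketch in Java (a generic word-count example, not code from the books above), showing the map side of the mapper/reducer pair the quotes describe:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Emits one (word, 1) record per token; the framework groups these by key
    // and hands each group to a reducer.
    public class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }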



196. Matching text for the word rdbms:


  1. In a typical relational database management system (RDBMS) solution, for example, reliability and consistency, in particular, in a distributed system can start to be a problem. Getting Started with CouchDB


  2. CouchDB differs from others by accepting eventual consistency, as opposed to putting absolute consistency ahead of raw availability, like RDBMS or Paxos. CouchDB The Definitive Guide


  3. Her interests and experiences lie across multiple tiers, from web frontend GUI to middleware, from middleware to backend SQL RDBMS and NoSQL data storage. Cassandra High Performance Cookbook


  4. Most developers with years of development experience working with RDBMS’s find this a very useful feature to have. Cassandra A Decentralized Structured Storage System


  5. This entity storage system exposes certain RDBMS functionality on top of an underlying BigTable. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  6. These use cases would have workloads that are challenging to scale because of very high write throughput, massive datasets, unpredictable growth, or other patterns that may be difficult or suboptimal in a sharded RDBMS environment. Apache Hadoop Goes Realtime at Facebook


  7. This technique is similar in spirit to vertical partitioning and column-oriented databases in the relational database management system (RDBMS) world. Hadoop in Action


  8. RDBMS: The Awesome and the Not-So-Much There are many reasons that the relational database has become so overwhelmingly popular over the last four decades. Cassandra The Definitive Guide


  9. Explore how we can move data from RDBMS into HDFS and Hive. Hadoop Beginners Guide


  10. Databases satisfy many of the requirements of an incremental system: a RDBMS can make many independent and concurrent changes to a large corpus and provides a flexible language for expressing computation (SQL) Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



197. Matching text for the word instantiate (a short illustrative sketch follows the examples):


  1. It is also a good practice to instantiate member variables, or thread local variables, for any key or value that would otherwise be created in the body of the map() method. Pro Hadoop


  2. The settings file declares the configuration required to instantiate the project’s hierarchy. Gradle in Action


  3. Because we were interested in the latest data in order to instantiate the river and start indexing we've used the following command: Mastering ElasticSearch


  4. The header (see Table 2-1) contains key and value class names that are used by the reader to instantiate those classes, the version number, and compression information. Professional Hadoop Solutions


  5. After choosing the correct transports and other connection settings, users can instantiate a Cassandra.Client instance. Cassandra High Performance Cookbook


  6. In the mapper, we instantiate both ThriftWritable and WeblogRecord instances: Hadoop RealWorld Solutions Cookbook


  7. All Writable implementations must have a default constructor so that the MapReduce framework can instantiate them, then populate their fields by calling readFields() Hadoop:The Definitive Guide 3rd Edition


  8. The full request to create the index and instantiate the token filter and analyzer looks like this: Elasticsearch The Definitive Guide


  9. You instantiate a Job object by passing a JobConf object to its constructor. Hadoop in Action


  10. The BrokerFactory can instantiate the BrokerService class using the standard ActiveMQ XML configuration file as shown. ActiveMQ in Action
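
Quote 7 describes why Hadoop Writable types need a no-argument constructor: the framework instantiates them reflectively and then calls readFields() to populate them. A minimal sketch of that pattern (PointWritable is a hypothetical class, not from the books):

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Writable;

    public class PointWritable implements Writable {
        private int x;
        private int y;

        // Required default constructor so the framework can instantiate the object.
        public PointWritable() { }

        public PointWritable(int x, int y) { this.x = x; this.y = y; }

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeInt(x);
            out.writeInt(y);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            x = in.readInt();
            y = in.readInt();
        }
    }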



198. Matching text for the word facebook:


  1. Creating mountains of data has become very easy, thanks to Facebook, Twitter, Amazon, digital cameras and camera phones, YouTube, Google, and just about anything else you can think of connected to the Internet. Apache Flume Distributed Log Collection for Hadoop


  2. Find us on Facebook: http://facebook.com/oreilly Follow us on Twitter: http://twitter.com/oreillymedia Watch us on YouTube: http://www.youtube.com/oreillymedia. Getting Started with CouchDB


  3. In contrast, Flickr.com at its core is like Facebook with users and data for users, but in Flickr’s case, the data that grows fastest is images uploaded by users. CouchDB The Definitive Guide


  4. There are strict operational requirements on Facebook’s platform in terms of performance, reliability and efficiency, and to support continuous growth the platform needs to be highly scalable. Cassandra A Decentralized Structured Storage System


  5. In the past two years, Joe has written more than a dozen articles and tutorials for the IBM developerWorks web site, including articles on Adobe Flex, Facebook applications, ExtJS, Adobe AIR, IBM DB2, and more. Beginning CouchDB


  6. This has evolved from traditional use cases, such as combining OLTP and log files, to more sophisticated uses, such as using Hadoop for data warehousing (exemplified by Facebook) and the field of data science, which studies and makes new discoveries about data. Hadoop in Practice


  7. At Facebook, Hadoop has traditionally been used in conjunction with Hive for storage and analysis of large data sets. Apache Hadoop Goes Realtime at Facebook


  8. People upload videos, take pictures on their cell phones, text friends, update their Facebook status, leave comments around the web, click on ads, and so forth. Hadoop in Action


  9. Though as of this writing it has not yet reached a 1.0 release, Cassandra is being used in production by some of the biggest properties on the Web, including Facebook, Twitter, Cisco, Rackspace, Digg, Cloudkick, Reddit, and more. Cassandra The Definitive Guide


  10. From emails to Facebook posts, from purchase histories to web links, there are large data sets growing everywhere. Hadoop Beginners Guide



199. Matching text for the word udf (a short illustrative sketch follows the examples):


  1. Conceptually, such user defined functions (UDFs) can be combined with SQL queries, but the experience reported in the comparison paper indicates that UDF support is either buggy (in DBMS-X) or missing (in Vertica) MapReduce a flexible data processing tool


  2. Basically, you would use Pig to build and group the records, then a UDF to make sense of the data. MapReduce Design Patterns


  3. This is something that can be provided by a custom UDF. Professional Hadoop Solutions


  4. Several of the Hive examples will demonstrate how to properly implement and use a custom function (UDF) for reuse in different analytics. Hadoop RealWorld Solutions Cookbook


  5. Since we’re looking to cover more advanced techniques, we’ll look at how you can write a custom Hive UDF to geolocate your logs. Hadoop in Practice


  6. A Filter UDF Let’s demonstrate by writing a filter function for filtering out weather records that do not have a temperature quality reading of satisfactory (or better) Hadoop:The Definitive Guide 3rd Edition


  7. Since storage handlers work transparently for the higher-level layers in Hive, you can also use any user-defined function (UDF) supplied with Hive—or your own custom functions. HBase:The Definitive Guide


  8. You should also consider contributing your UDF back to PiggyBank to benefit others in the Pig community. Hadoop in Action


  9. Time for action – adding a new User Defined Function (UDF) Let us show how to create and invoke some custom Java code via a new UDF. Hadoop Beginners Guide


  10. Unfortunately, [23] found that in practice, it was difficult to implement such a UDF inside the parallel databases. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
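
A minimal Hive UDF sketch in Java (a made-up example built on the classic org.apache.hadoop.hive.ql.exec.UDF base class described in the books of this era; newer Hive versions favor GenericUDF):

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Upper-cases a string column; null stays null.
    public final class UpperUDF extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;
            }
            return new Text(input.toString().toUpperCase());
        }
    }

After packaging this into a JAR, it would typically be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being used in a query.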



200. Matching text for the word xpath (a short illustrative sketch follows the examples):


  1. The only new element we have used here is the XPath Extractor post processor. Performance Testing With JMeter 2.9


  2. The second, and the one that we will explore in this recipe, is the ability to design custom bug discovery rules for Java source code using Xpath. Jenkins Continuous Integration Cookbook


  3. You can use various forms of XML schema definitions (DTDs, XML Schema, RelaxNG) to check document formats, run queries with XPath and XQuery, and perform transformations with XSLT. NoSQL Distilled


  4. Cassandra does not have a standardized data access language such as SQL or XPATH. Cassandra High Performance Cookbook


  5. Tsung allows matching using a limited subset of JSONPath (XPath for JSON), using the jsonpath attribute. Scaling CouchDB


  6. This is a lot like how XPath lets us select an arbitrary element from a deeply nested XML structure, or how a CSS selector lets us do the same with HTML. Functional Programming Patterns in Scala and Clojure


  7. The XPath language makes it simple to access tree nodes. Core Java Volume II Advanced Features 9th Edition


  8. XPath is a language for referring to specific nodes in an XML document. Java in a Nutshell 5th Edition


  9. Use of XML-friendly query mechanisms such as XPath and XQuery. Cassandra The Definitive Guide


  10. The details for xpath don’t concern us here, but note that we pass string literals to the xpath function and use FROM src LIMIT 1 to specify the required FROM clause and to limit the output. Programming Hive
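
A short Java example of addressing a node with an XPath expression (the XML snippet and class name are invented for illustration):

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;

    public class XPathDemo {
        public static void main(String[] args) throws Exception {
            String xml = "<books><book><title>Hadoop</title></book></books>";
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

            // The XPath expression selects a specific node in the document tree.
            XPath xpath = XPathFactory.newInstance().newXPath();
            String title = xpath.evaluate("/books/book/title", doc);
            System.out.println(title); // Hadoop
        }
    }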



201. Matching text for the word errata:


  1. If you find any errata, please report them by visiting http://www. Apache Flume Distributed Log Collection for Hadoop


  2. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website, or added to any list of existing errata, under the Errata section of that title. Hadoop Operations and Cluster Management Cookbook


  3. If you find any errata, please report them by visiting http://www.packtpub. Gradle Effective Implementation Guide


  4. Dear reader, refer to http://www.elasticsearchserverbook.com from time to time, where you'll be able to find the newest errata dedicated to the book and additional articles extending it. Mastering ElasticSearch


  5. Errata Although we have taken every care to ensure the accuracy of our content, mistakes do happen. Cassandra High Performance Cookbook


  6. Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support. Hadoop RealWorld Solutions Cookbook


  7. If you find any errata, please report them by visiting http://www.packtpub.com/support, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Hadoop MapReduce Cookbook


  8. We have a web page for this book, where we list errata, examples, and any. Elasticsearch The Definitive Guide


  9. We have a web page for this book, where we list errata, examples, and any additional information. Cassandra The Definitive Guide


  10. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Hadoop Beginners Guide



202. Matching text for the word recommender (a short illustrative sketch follows the examples):


  1. For better or worse, I instead prototyped a simple recommender for my friend’s startup, from scratch. Mahout in Action


  2. The highlighted code inside the recommend() method configures a default recommender before calling methods to perform recommendation. Sonar Code Quality Testing Essentials


  3. Recommender systems are used by many companies including Amazon, LinkedIn, and Facebook. Hadoop RealWorld Solutions Cookbook


  4. Recommender systems, which are also known as collaborative filtering (CF) systems, are the computer equivalent of you asking your friends for a restaurant recommendation. Hadoop in Practice
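
A sketch of a simple user-based recommender, assuming the legacy Mahout "Taste" API that Mahout in Action is built around (the data file name, neighborhood size, and user ID are placeholders):

    import java.io.File;
    import java.util.List;
    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
    import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
    import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;
    import org.apache.mahout.cf.taste.recommender.Recommender;
    import org.apache.mahout.cf.taste.similarity.UserSimilarity;

    public class RecommenderDemo {
        public static void main(String[] args) throws Exception {
            // userID,itemID,preference rows in a CSV file (placeholder path).
            DataModel model = new FileDataModel(new File("ratings.csv"));
            UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
            UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
            Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);

            // Top 3 recommendations for user 1.
            List<RecommendedItem> items = recommender.recommend(1, 3);
            items.forEach(item -> System.out.println(item.getItemID() + " " + item.getValue()));
        }
    }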



203. Matching text for the word amqp:


  1. A cloud controller communicates with the internal object store via HTTP (Hyper Text Transfer Protocol), but it communicates with a scheduler, network controller, and volume controller via AMQP (Advanced Message Queue Protocol) bk-compute-adminguide-grizzly


  2. Protocol: AMQP 1.0 AMQP is a binary wire protocol which was. Next Generation Open Source Messaging with Apollo


  3. We go from basic concepts up to seeing how to map those concepts in AMQP (the protocol used by RabbitMQ) RabbitMQ in Action


  4. The core framework is large, and dozens of portfolio projects extend that core, covering things like security, web flow, SOAP web services (REST web services are part of the core), enterprise integration, batch processing, mobile, various flavors of social (Facebook, LinkedIn, Twitter, GitHub, and so on), various flavors of NoSQL (MongoDB, Neo4j, Riak, and so on), BlazeDS/Flex, AMQP/Rabbit, and many more. Spring in Practice


  5. The cloud controller communicates with the internal object store using HTTP, but it communicates with the scheduler, network controller, and volume controller using AMQP (advanced message queueing protocol) bk-admin-guide-cloud-latest


  6. This leaves the door open for supporting other messaging technologies, such as Advanced Message Queuing Protocol (AMQP) Spring Batch in Action


  7. Two additional plugins for RabbitMQ need to be installed to provide Stomp protocol support and AMQP protocol support. Pro Puppet


  8. Highly available RabbitMQ RabbitMQ is the default AMQP server used by many OpenStack services. high-availability-guide


  9. In this chapter, we’ll look at how to send and receive messages in Spring using both the Java Message Service (JMS) and the Advanced Message Queuing Protocol (AMQP) Spring in Action 4th Edition


  10. The choice of RabbitMQ over other AMQP compatible options that are gaining support in OpenStack, such as ZeroMQ and Qpid is due to its ease of use with Ubuntu and significant testing in production. OpenStack Operations Guide



204. Matching text for the word optimizing:


  1. He also has experience optimizing SQL queries for database interactions. Apache Flume Distributed Log Collection for Hadoop


  2. Ganglia’s protocols were carefully designed, optimizing at every opportunity to reduce overhead and achieve high performance. Monitoring with Ganglia


  3. What about CSS? Requirements for optimizing CSS are in many cases no different from JavaScript. Gradle in Action


  4. The routing is a very powerful mechanism for optimizing a cluster. Mastering ElasticSearch


  5. Comparing the two MapReduce columns and the Sawzall column shows the importance of optimizing. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  6. Geolocation (TODO) Adding geolocation to your documents Geolocation-aware search Sorting by distance Geo-shapes Optimizing geo-queries 24 Elasticsearch The Definitive Guide


  7. This might mean optimizing the channels the database uses to write to the underlying filesystem. Cassandra The Definitive Guide


  8. Since joins are one of the most important aspects of our system, we have spent considerable time on implementing different types of joins and optimizing them. Tenzing - A SQL Implementation On The MapReduce Framework


  9. In an ideal world, consumers would always be running as fast as the fastest producer, which neatly brings us to the next section on optimizing message consumers. ActiveMQ in Action


  10. This is advantageous when further optimizing the memory footprint, see Section 3 PowerDrill - Processing a Trillion Cells per Mouse Click



205. Matching text for the word jspa:


  1. Please post comments or corrections to the Author Online forum: http://www.manning-sandbox.com/forum.jspa?forumID=828 Solr in action


  2. Please post comments or corrections to the Author Online forum at http://www.manning-sandbox.com/forum.jspa?forumID=451 Lucene in Action 2nd Edition



206. Matching text for the word workloads:


  1. In HPC clusters, CPUs can offload data parallel workloads to the accelerators. Monitoring with Ganglia


  2. These are the sort of workloads experienced by the majority of web applications. CouchDB The Definitive Guide


  3. A cluster typically hosts a multitude of distributed applications that share resources, have widely varying workloads, and run on machines with different hardware parameters. Dremel - Interactive Analysis of WebScale Datasets


  4. The master can easily keep up with this rate, and therefore is not a bottleneck for these workloads. GFS - The Google File System


  5. Tuning Memtables for write-heavy workloads Cassandra is designed so that all disk write operations are serial. Cassandra High Performance Cookbook


  6. A dynamic distributed metadata cluster provides extremely efficient metadata management and seamlessly adapts to a wide range of general purpose and scientific computing file system workloads. Ceph A Scalable, High-Performance Distributed File System


  7. This boundary is configurable, but we have found 256K to work well for us in our production workloads. Cassandra A Decentralized Structured Storage System


  8. This may be because our Java workloads tend to be closer to the end user; these sorts of applications often handle a wider mix of requests and consequently have comparatively complex control paths. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  9. Although we have considered the statistical properties of such a distribution and demonstrated the effect of load variance on performance for certain workloads, the interaction of workload, PG distribution, and replication can be complex. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  10. In fact, the ability to handle application workloads that require high performance at significant write volumes with many concurrent client threads is one of the primary features of Cassandra. Cassandra The Definitive Guide



207. Matching text for the word sharded:


  1. Scaling” is an imprecise word, but in this chapter we’ll be dealing with the aspect of putting together a partitioned or sharded cluster that will have to grow at an increasing rate over time from day one. CouchDB The Definitive Guide


  2. This interleaving of tables to form directories is significant because it allows clients to describe the locality relationships that exist between multiple tables, which is necessary for good performance in a sharded, distributed database. Spanner - Google’s Globally-Distributed Database


  3. Most MapReduces have a single (possibly sharded) input source and a single Mapper, but in general a single MapReduce can have multiple input sources and associated Mappers. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  4. The value of individual records is often multipled by the way they are partitioned, sharded, or sorted. MapReduce Design Patterns


  5. Sharded cache —  Every CPU is dedicated to a specific portion of data. Professional Hadoop Solutions


  6. Next-generation cluster-level file system Automatically sharded metadata layer Data typically written using Reed-Solomon (1.5x) Client-driven replication, encoding and replication Metadata space has enabled availability analyses. Storage Architecture and Challenges


  7. Fortunately, Elasticsearch is sharded by default, as you''ll come to understand, making it easy to spread your data in a cluster of multiple instances. Elasticsearch in Action MEAP V07 cHAPTER 1


  8. And if it does offer scalability, does it imply specific steps to do so? The easiest solution would be to add one machine at a time, while sharded setups (especially those not supporting virtual shards) sometimes require for each shard to be increased simultaneously because each partition needs to be equally powerful. HBase:The Definitive Guide


  9. These use cases would have workloads that are challenging to scale because of very high write throughput, massive datasets, unpredictable growth, or other patterns that may be difficult or suboptimal in a sharded RDBMS environment. Apache Hadoop Goes Realtime at Facebook


  10. All this means that no application code has to handle these logistics; your application code communicates with a sharded cluster just as it speaks to a single node. MongoDB in Action



208. Matching text for the word latin:


  1. Pig uses Pig Latin as the programming language, which is extensible and easy to use. Hadoop Operations and Cluster Management Cookbook


  2. A Pig Latin program combines high-level declarative operators similar to those in SQL, together with named intermediate variables representing edges in the dataflow graph between operators. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  3. Pig operations are written in Pig Latin and take a more imperative form. Hadoop RealWorld Solutions Cookbook


  4. Pig provides Pig Latin, an easy-to-use high-level query language similar to SQL. Incoop MapReduce for Incremental Computations


  5. This section describes how to use Pig Latin script to find countries with 2000$ GNI sorted by the same criterion from the HDR dataset. Hadoop MapReduce Cookbook


  6. It uses its own language called PigLatin to model and operate on data. Hadoop in Practice


  7. The language used to express data flows, called Pig Latin. Hadoop:The Definitive Guide 3rd Edition


  8. Computing similar documents efficiently, using a simple Pig Latin script. Hadoop in Action


  9. Included in the platform is a high-level language called Pig Latin and a compiler that translates programs written in Pig Latin into sequences of MapReduce jobs. Cassandra The Definitive Guide


  10. The overlap between the projects is that whereas Hive looks to present a familiar SQL-like interface to data, Pig uses a language called Pig Latin that specifies dataflow pipelines. Hadoop Beginners Guide



209. Matching text for the word dns (a short illustrative sketch follows the examples):


  1. Finally, if you want a reverse DNS lookup of the hostname to be used instead of IP as a value, set the useIP property to false. Apache Flume Distributed Log Collection for Hadoop


  2. The addresses may be specified as IP addresses or DNS hostnames and may optionally be suffixed by a colon followed by the port number where the gmond tcp_accept_channel is to be found. Monitoring with Ganglia


  3. Crawlers have different states: DNS lookup, connecting to host, send request, receiving response. The anatomy of a large-scale hypertextual Web search engine short


  4. This should allow the cluster to continue functioning even with DNS issues since Cassandra communicates exclusively by IP address. Cassandra High Performance Cookbook


  5. A major performance stress is DNS lookup so each crawler maintains a DNS cache. The anatomy of a large-scale hypertextual Web search engine


  6. It then updates the DNS tables, replacing the IP address of the failed replica with that of the new one. Chubby - The lock service for loosely-coupled distributed systems


  7. Unfortunately, changing only the hostname is not sufficient as the DataNodes cache the DNS entry.) You’ll also have to run the backup node as a NameNode by executing bin/start-dfs.sh on it. Hadoop in Action


  8. Round-Robin DNS Another option is to create a record in DNS that represents the set of servers in the cluster. Cassandra The Definitive Guide


  9. Add the host's DNS name or IP address to the slaves file on the node from which you usually run commands such as slaves.sh or cluster start/stop scripts. Hadoop Beginners Guide


  10. A DNS-based load-balancing system selects a cluster by accounting for the user’s geographic proximity to each physical cluster. Web search for a planet the google cluster architecture
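
A minimal Java illustration of forward and reverse DNS lookups with java.net.InetAddress (the hostname is just an example):

    import java.net.InetAddress;

    public class DnsDemo {
        public static void main(String[] args) throws Exception {
            // Forward lookup: hostname to IP address.
            InetAddress addr = InetAddress.getByName("example.org");
            System.out.println(addr.getHostAddress());

            // Reverse lookup: IP address back to a hostname, if PTR records exist;
            // otherwise the textual IP is returned.
            System.out.println(addr.getCanonicalHostName());
        }
    }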



210. Matching text for the word parsers:


  1. The Saxon JAR was in the classpath before the Jetty JAR, so it was being used to deliver the XML parsers, and the parser was not validating. Pro Hadoop


  2. During query time, if you use one of the provided query parsers, your query will be analyzed. Mastering ElasticSearch


  3. Most XML parsers operate on whole XML documents, so if a large XML document is made up of multiple input splits, it is a challenge to parse these individually. Hadoop:The Definitive Guide 3rd Edition


  4. Tree parsers, such as the Document Object Model (DOM) parser, that read an XML document into a tree structure. Core Java Volume II Advanced Features 9th Edition


  5. Nutch has parsers for HTML, a web crawler, a link-graph database, and other extra components necessary for a web search engine. Hadoop in Action


  6. The parsers I use to extract API information from the source files and javadoc files are created using the Antlr parser generator developed by Terence Parr. Java in a Nutshell 5th Edition


  7. Hadoop also provides mechanisms through its InputFormat and OutputFormat classes that provide implementations of common file formats and likewise remove the need of having to write file parsers for any but custom file types. Hadoop Beginners Guide


  8. In this section the performance aspects related to the use of various parsers and serializers are discussed starting with a brief look at the steps involved in a typical XML processing cycle. Java Performance


  9. Tika is actually a framework that hosts plugin parsers for each supported document type. Lucene in Action 2nd Edition


  10. The indentation of the actual output was changed to fit the page.) For those not familiar with parsers and tokenizers, this can look overwhelming. Programming Hive



211. Matching text for the word ajax:


  1. It is easy to take the effects of the modern websites for granted, but the functionality of pop-up lists during searches, customization, and the in-page experience (traditionally referred to as AJAX) of a dynamic website. Getting Started with CouchDB


  2. An interesting point to note is the large number of HTTP POST operations, which is typical of a Web 2.0 site where a large proportion of the traffic is AJAX requests from JavaScript applications running in client browsers. Monitoring with Ganglia


  3. And most recently, he has been specializing in Web 2.0, Ajax, portals, and cloud computing. Pro Hadoop


  4. We’ll build the first iteration of the example application without using show or list, because writing Ajax queries against the JSON API is a better way to teach CouchDB as a database. CouchDB The Definitive Guide


  5. Instead of submitting the data to the server for each operation and rerendering the page, you change the functionality to exchange data via asynchronous JavaScript calls (AJAX) in the background. Gradle in Action


  6. He introduced the concept of JavaScript frameworks and Ajax to the company’s web division in 2008, developing a PL/SQL and JavaScript framework that is now used in every Core web product. Beginning CouchDB


  7. New features introduced in version 2 of the JSF specification, such as composite components and Ajax, are only exposed to page authors using facelets. Java EE 7 Essentials


  8. ActiveMQ could be used in the Web environment through its REST and Ajax APIs. ActiveMQ in Action


  9. If a page has multiple Ajax requests, it is important to measure the overall performance of the page by combining all associated requests. Java Performance


  10. Its most visible part is an interactive Web UI making heavy use of AJAX with the help of the Google Web Toolkit [16] PowerDrill - Processing a Trillion Cells per Mouse Click



212. Matching text for the word emr:


  1. If you’re comfortable with the Amazon Elastic MapReduce (EMR) service, Mahout algorithms can be run as follows: Mahout in Action


  2. An alternative method of running a MapReduce with the Amazon cloud is to use Amazon Elastic MapReduce (EMR) Hadoop Operations and Cluster Management Cookbook


  3. You learn about Elastic MapReduce (EMR) and additional AWS services (such as S3, CloudWatch, Simple Workflow, and so on) that can be used to supplement Hadoop’s functionality. Professional Hadoop Solutions


  4. This chapter discusses how to use Amazon Elastic MapReduce (EMR), the hosted Hadoop infrastructure, to execute traditional MapReduce computations as well as Pig and Hive computations on the Amazon EC2 cloud infrastructure. Hadoop MapReduce Cookbook


  5. The trade-off is that you lose some control over how the cluster works and you have to pay extra for this EMR service. Hadoop in Action


  6. A dual approach In this book we will not be limiting our discussion to EMR or Hadoop hosted on Amazon EC2; we will be discussing both the building and the management of local Hadoop clusters (on Ubuntu Linux) in addition to showing how to push the processing into the cloud via EMR. Hadoop Beginners Guide


  7. Rather than allowing customers to rent machines for any type of process (like Amazon’s Elastic Cloud Computing [EC2] service and other cloud services), EMR allows users to rent virtual Hadoop clusters. Programming Pig


  8. If you already use Amazon Web Services, the fastest path to setting up Hive for learning is to run a Hive-configured job flow on Amazon Elastic MapReduce (EMR) Programming Hive



213. Matching text for the word configurable:


  1. Configuration An interface into the configuration of your CouchDB installation that allows you to edit the different configurable parameters. Getting Started with CouchDB


  2. On a configurable schedule, snmp2ganglia polls each host for the OIDs associated with it and delivers those values to Ganglia as metrics. Monitoring with Ganglia


  3. Because of the way reverse HTTP proxies work, this automatically allows configurable security, encryption, load distribution, compression, and, of course, aggressive caching of your database resources. CouchDB The Definitive Guide


  4. In this simple example, we will develop a river that will be able to write last modified date of a given website and will check and update that data periodically and the period will be configurable. Mastering ElasticSearch


  5. Column families have several configurable parameters that are specified in a with clause and separated by and: Cassandra High Performance Cookbook


  6. This boundary is configurable, but we have found 256K to work well for us in our production workloads. Cassandra A Decentralized Structured Storage System


  7. A lot of what we've looked at here is configurable: from the node name to the Gateway settings. Elasticsearch in Action MEAP V07 cHAPTER 1


  8. The default sort order is not configurable; by default, records are returned in the order in which they are written. Cassandra The Definitive Guide


  9. This is all configurable and will be discussed later in chapter 4 ActiveMQ in Action


  10. Internally, each SSTable contains a sequence of blocks (typically each block is 64KB in size, but this is configurable) Bigtable - A Distributed Storage System for Structured Data



214. Matching text for the word partitioner (a short illustrative sketch follows the examples):


  1. Each map task in Hadoop is broken into the following phases: record reader, mapper, combiner, and partitioner. MapReduce Design Patterns


  2. Using the row key of data, the partitioner calculates a token. Cassandra High Performance Cookbook


  3. Finally, write the code to set up a normal MapReduce job, but tell the MapReduce framework to use our own partitioner and comparator classes: Hadoop RealWorld Solutions Cookbook


  4. Hadoop uses HashPartitioner as the default Partitioner implementation to calculate the distribution of the intermediate data to the reducers. Hadoop MapReduce Cookbook


  5. The partitioner''s job is to logically funnel map outputs to the. Hadoop in Practice


  6. There are overloaded template factory methods for setting a combiner, partitioner, record reader, or record writer. Hadoop:The Definitive Guide 3rd Edition


  7. Hadoop enforces this strategy by use of the HashPartitioner class. Hadoop in Action


  8. Depending on your security constraints and partitioner, it’s fine to run multiple keyspaces on the same cluster. Cassandra The Definitive Guide


  9. This functionality is provided by the HashPartitioner class within the org. Hadoop Beginners Guide


  10. Further, each key is assigned to a partition using a component called the partitioner. Hadoop Operations
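
A sketch of a custom Hadoop partitioner (a made-up example; by default Hadoop uses HashPartitioner, as the quotes note). It routes map output to reducers by the first letter of the key:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            if (key.getLength() == 0) {
                return 0;
            }
            // Keys sharing a first letter go to the same reducer partition.
            char first = Character.toLowerCase(key.toString().charAt(0));
            return first % numPartitions;
        }
    }

A job would opt into it with job.setPartitionerClass(FirstLetterPartitioner.class).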



215. Matching text for the word ubuntu:


  1. You'd be hard pressed to find such a company using the bleeding-edge Linux kernel rather than something like Red Hat Enterprise Linux (RHEL), CentOS, Ubuntu LTS, or any of the other distributions that aim for stability and compatibility. Apache Flume Distributed Log Collection for Hadoop


  2. For example, if you use Ubuntu 9.10 (Karmic) or later, then CouchDB comes pre-installed. Getting Started with CouchDB


  3. Installing The rrdcached package can be installed on Debian-based distributions (Debian, Ubuntu, Linux, Mint, etc.) by using apt: Monitoring with Ganglia


  4. Be sure to read through the Ubuntu-specific system documentation that can be found under /usr/share/couchdb. CouchDB The Definitive Guide


  5. Assume that the box is based on the Linux distribution Ubuntu. Gradle in Action


  6. Binary installers are available for Ubuntu, Fedora, CentOS, FreeBSD, and Mac OS X systems through each system’s individual package manager. Beginning CouchDB


  7. We’ll be using Ubuntu, but these instructions should be easily adaptable to your operating system. Scaling CouchDB


  8. You can simplify your deployment of Hadoop by using their free Hadoop distribution, in either RPM   or Ubuntu /Debian   Packages. Hadoop in Action


  9. If you’re on a Linux distribution such as Ubuntu, it couldn’t be easier to get Git. Cassandra The Definitive Guide


  10. We assume you have a recent distribution of Ubuntu, but as long as you have command-line Linux familiarity any modern distribution will suffice. Hadoop Beginners Guide



216. Matching text for the word frontend:


  1. For example, a cluster that is used for backend batch processing and not serving frontend requests. HBase Administration Cookbook


  2. If it seems like I’m reading your mind, it’s because the Ganglia authors are engineers like you, who designed Ganglia’s visualization UI, gweb, from their own notion of the ideal data visualization frontend. Monitoring with Ganglia


  3. This choice of replica sites was made to cope with outages due to potential major natural disasters, and also the choice of their frontend sites. Spanner - Google’s Globally-Distributed Database


  4. Her interests and experiences lie across multiple tiers, from web frontend GUI to middleware, from middleware to backend SQL RDBMS and NoSQL data storage. Cassandra High Performance Cookbook


  5. You will also need the Ganglia meta daemon (gmetad) running on at least one node, and another node running the Ganglia web frontend. Hadoop RealWorld Solutions Cookbook


  6. The frontend to the exception monitoring service then provides links from specific exception reports to their respective distributed traces. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. B.6 HDFS proxy The HDFS proxy is a component in the Hadoop contrib that provides a web app proxy frontend to HDFS. Hadoop in Practice


  8. Counters At StumbleUpon, the first production feature deployed on HBase was keeping counters for the stumbleupon.com frontend. Hadoop:The Definitive Guide 3rd Edition


  9. As soon as you have to retain information about your users, products, sessions, orders, and so on, you are typically going to use some storage backend providing a persistence layer for the frontend application server. HBase:The Definitive Guide


  10. Client: FlockDB uses the Thrift 0.2 client, and Twitter has also written a Ruby frontend that offers a richer interface. Cassandra The Definitive Guide



217. Matching text for the word txt:


  1. Assume you want to determine all files within a directory and write their names to a new file named allfiles.txt. Gradle in Action


  2. For each record, numWords is incremented by the number of terms in input.txtField returned by the CountWords function. Dremel - Interactive Analysis of WebScale Datasets


  3. In the following example we have the filterFiles task, where we define a file collection of two files with the names INSTALL.txt and README. Gradle Effective Implementation Guide


  4. Printing out the files property of goodPoems (or otherwise inspecting the contents of the collection) shows that it contains all of the .txt files in the src/main/resources directory, but not the file whose name starts with shelley. Gradle Beyond The Basics


  5. Reads the stocks.txt file from the local filesystem and writes the Avro. Hadoop in Practice


  6. Create some text file on your local filesystem called example.txt. Hadoop in Action


  7. You might want to check out the README.txt file, as it will have the list of dependencies required to run the tool. Cassandra The Definitive Guide


  8. Place the file on HDFS: $ hadoop fs -put states.txt states.txt. Hadoop Beginners Guide


  9. README.txt—A file containing some URLs to documentation to get new users started with ActiveMQ. ActiveMQ in Action


  10. In this case, the copyFiles task will copy any files with the .xml, .properties, or .txt extensions from the resources directory to the target directory. Building and Testing with Gradle



218. Matching text for the word schemas:


  1. Structure and Consistency There are no constraints or schemas for a document database, but that doesn’t mean that you can ignore aspects like the simplicity and consistency of the format. Getting Started with CouchDB


  2. Even though many projects happily use Bigtable [9], we have also consistently received complaints from users that Bigtable can be difficult to use for some kinds of applications: those that have complex, evolving schemas, or those that want strong consistency in the presence of wide-area replication. Spanner - Google’s Globally-Distributed Database


  3. There is nothing in the standard DSL to describe the servers to which a deployment might be targeted, the database schemas associated with the application, or the operations exposed by an underlying source control tool. Gradle Beyond The Basics


  4. It is unclear to us how effective schemas are in achieving transparency in practice. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  5. To do this you need to set the job configuration with the schema information for both schemas. Hadoop in Practice


  6. Finally, you can look at the data as generic database relations   with well-defined schemas, in a simple commaseparated format. Hadoop in Action


  7. One often-lauded feature of relational database systems is the rich schemas they afford. Cassandra The Definitive Guide


  8. The metadata server provides an API to store and fetch metadata such as table names and schemas, and pointers to the underlying data. Tenzing - A SQL Implementation On The MapReduce Framework


  9. Unlike traditional relational databases that require structured data with well-defined schemas, MapReduce and Hadoop work best on semi-structured or unstructured data. Hadoop Beginners Guide


  10. Clients can control the locality of their data through careful choices in their schemas. Bigtable - A Distributed Storage System for Structured Data



219. Matching text for the word javadoc (a short illustrative sketch follows the examples):


  1. Finally, we cover the marvelous javadoc tool that formats your code comments as a set of hyperlinked web pages. Core Java Volume I Fundamentals 9th Edition


  2. Its main purpose is to provide automation for typical tasks needed in Java projects, such as compiling source files to classes, running unit tests, packaging JAR files, and creating Javadoc documentation. Gradle in Action


  3. The task javadoc is used to generate Javadoc documentation, and is in the Documentation tasks section. Gradle Effective Implementation Guide


  4. A modifier is a keyword such as public or static.) The name of each annotation is preceded by an @ symbol, similar to Javadoc comments. Core Java Volume II Advanced Features 9th Edition


  5. Not coincidentally, they are also the elements for which the Javadoc utility generates documentation in its default mode of operation. Effective Java 2nd Edition


  6. An example of heap pollution Some methods should not be marked as safe, as illustrated with the following code adapted from the javadoc description of the @SafeVarargs annotation (http://download. Java 7 New Features Cookbook


  7. However, take a look at what we have done here and browse through the Javadoc for these classes. Hadoop Beginners Guide


  8. For more information on the advanced features of metrics2, see the Javadoc. Hadoop Operations


  9. But if you want more in-depth information, see the Javadoc for the JMS Message type. ActiveMQ in Action


  10. Use Javadoc to place comments before any of the classes, method, and fields that you want to document. Java 7 Recipes A Problem-Solution Approach
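
A small example of the comment style the javadoc tool consumes (the class and method are invented for illustration):

    /**
     * Utility for counting word occurrences.
     */
    public class WordStats {

        /**
         * Returns the number of times {@code word} occurs in {@code text}.
         *
         * @param text the text to scan; must not be null
         * @param word the word to count; must not be null or empty
         * @return the occurrence count, or zero if the word is absent
         */
        public static int count(String text, String word) {
            if (word.isEmpty()) {
                return 0;
            }
            int n = 0;
            int i = text.indexOf(word);
            while (i >= 0) {
                n++;
                i = text.indexOf(word, i + word.length());
            }
            return n;
        }
    }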



220. Matching text for the word aspectj (a short illustrative sketch follows the examples):


  1. However, the good news for Spring and AOP developers is that starting from version 2.0, Spring provides much tighter integration with AspectJ. Pro Spring 3


  2. For example, it’s possible to use the Spring Framework and AspectJ to enable dependency injection in our domain objects. Pro Spring MVC


  3. The Spring Framework also supports aspect-oriented programming with both a simpler approach called Spring AOP and the more powerful AspectJ approach. Spring in Practice


  4. The most popular, perhaps, is AspectJ which requires a special compilation process. Pro Spring Security


  5. In AspectJ parlance, each execution point is called a join point, and a particular query over join points is a pointcut. Programming Scala


  6. A dedicated project named Equinox Aspects allows you to use LTW and AspectJ within the Equinox container. Spring Dynamic Modules in Action


  7. Spring supports two implementations of AOP: one using the AOP Alliance project (essentially wrapping objects as just described) and the other using AspectJ. Spring Integration in Action


  8. Reports on the language AspectJ include some examples from this book expressed in an aspect-oriented fashion. Concurrent Programming in Java Design Principles and Pattern


  9. The more advanced mapping mode leverages AspectJ to provide a live, connected representation of the graph elements bound to the domain objects. Spring Data Modern Data Access for Enterprise Java


  10. Don’t worry if you don’t know AspectJ or the details of how AspectJ pointcut expressions are written. Spring in Action 4th Edition
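
A minimal annotation-style AspectJ aspect (the package used in the pointcut expression is hypothetical): the pointcut selects public methods in a service package, and the advice runs before each matched join point.

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;

    @Aspect
    public class LoggingAspect {

        // Pointcut: any public method declared in com.example.service or its subpackages.
        @Before("execution(public * com.example.service..*.*(..))")
        public void logEntry(JoinPoint joinPoint) {
            System.out.println("Entering: " + joinPoint.getSignature());
        }
    }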



221. Matching text for the word keystore (a short illustrative sketch follows the examples):


  1. Constructing this object is beyond the scope of this chapter but in general requires a certificate and secret key stored in a keystore object. Netty in Action MEAP V08


  2. If the certificate is generated by a well-known trusted authority, their root certificates are most likely already in the default keystore (cacerts) Jenkins Continuous Integration Cookbook


  3. For example, the properties related to SSL and the keystore are only used if you are using LDAP over SSL. Professional Hadoop Solutions


  4. The certificates themselves are held in a location (usually a file) called the keystore. Java Security 2nd Edition


  5. In most Java environments, a keystore is used to store the private keys, and a truststore to store the trusted certificates. Oracle JRockit The Definitive Guide


  6. Here is how Alice creates a keystore, alice.certs, and generates a key pair with alias alice: Core Java Volume II Advanced Features 9th Edition


  7. This is accomplished from the command prompt by executing the following command: keytool -genkey -keystore mySrvKeystore -keyalg RSA. Java 7 New Features Cookbook


  8. Assuming that a keystore exists on your system (see the keytool documentation in Chapter 8), you can get one with code like the following: Java in a Nutshell 5th Edition


  9. Enter key password for <alias> (RETURN if same as keystore password): Java Network Programming 4th Edition


  10. Without creating and denoting the proper keystore and truststore, you can expect to see the following exceptions: ActiveMQ in Action
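
As a dependency-free sketch of the keystore concept described above, the following loads a JKS keystore with the standard java.security.KeyStore API and lists its entries; the file name mySrvKeystore and the password are illustrative (the file would first have to be created, for example with keytool as in the quote above).

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.util.Enumeration;

public class KeystoreDemo {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray();           // illustrative password
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("mySrvKeystore")) {
            ks.load(in, password);                            // read keys and certificates
        }
        // List every alias and whether it holds a private key or a trusted certificate
        Enumeration<String> aliases = ks.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            Certificate cert = ks.getCertificate(alias);
            System.out.printf("%s -> key entry: %b, cert: %s%n",
                    alias, ks.isKeyEntry(alias), cert == null ? "none" : cert.getType());
        }
    }
}
```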



222、单词 repl 的匹配文本:


  1. Here’s how to create that user account, which we’ll call repl: High Performance MySQL 3rd Edition


  2. PRIMARY" is the state of the member and "testReplSet" is an identifier for this set. MongoDB The Definitive Guide 2nd Edition


  3. There you typed node() at the REPL and got back test@mrhyde as a result. RabbitMQ in Action


  4. First, load the ring and a new ringbuilder object in a Python REPL: bk-admin-guide-cloud-latest


  5. Before you begin, you should have the Scala interpreter (REPL) running and ready. Functional Programming in Scala


  6. I start up a Clojure REPL and use it to poke around the problem API. Functional Programming Patterns in Scala and Clojure


  7. You can run them interactively at a REPL (read-eval-print loop) command line. Beginning Scala


  8. The --replSet option tells the instance the name of the replica set it is joining, as well as the name of at least one other member of the set. The Definitive Guide to MongoDB


  9. You get what’s called a “read-eval-print” loop, or REPL, in the world of Lisp, Scala, and so on. Java SE 8 for the Really Impatient


  10. Alternatively, a nice approach is to use Ruby’s interactive REPL, irb. MongoDB in Action



223、单词 mbean 的匹配文本:


  1. Several Flume components report metrics to the JMX platform MBean server. Apache Flume Distributed Log Collection for Hadoop


  2. This is readily visible only via the Ganglia reporting framework or through the JMX MBean interface. Pro Hadoop


  3. An Mbean with a randomly chosen number will be below, which you need to expand again. Cassandra High Performance Cookbook


  4. In order to find out how many documents you've indexed, you would look at the SolrIndexSearch Mbean. Solr 3 Enterprise Search Server


  5. FSDatasetMBean Datanode Datanode storage metrics, such as capacity and free storage space. Hadoop:The Definitive Guide 3rd Edition


  6. You do have to know the exact names of the MBean and the attribute or operation you want to get. HBase:The Definitive Guide


  7. MBeans and the platform MBean server will be discussed in the next chapter. Oracle JRockit The Definitive Guide


  8. CompactionManager implements an MBean interface so it can be introspected. Cassandra The Definitive Guide


  9. ActiveMQ can use an existing MBean server or even create its own if necessary. ActiveMQ in Action


  10. Java 6’s JConsole can also generate a heap dump using its HotSpotDiagnostics MBean. Java Performance
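
Below is a minimal sketch of registering a custom MBean with the platform MBean server, in the spirit of the quotes above; the CounterMBean interface, its implementation, and the ObjectName are all invented for illustration.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// By JMX convention, the management interface name must end in "MBean".
interface CounterMBean {
    long getCount();
    void reset();
}

class Counter implements CounterMBean {
    private long count;
    public synchronized long getCount() { return count; }
    public synchronized void reset()    { count = 0; }
    public synchronized void increment(){ count++; }
}

public class MBeanDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Counter counter = new Counter();
        // The ObjectName below is made up for this example
        server.registerMBean(counter, new ObjectName("demo:type=Counter"));
        counter.increment();
        System.out.println("Registered; attach JConsole to inspect demo:type=Counter");
        Thread.sleep(60_000);   // keep the JVM alive so a JMX client can connect
    }
}
```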



224、单词 bootstrap 的匹配文本:


  1. This means that a new user can easily bootstrap your application on an empty database. CouchDB The Definitive Guide


  2. You’ll learn how to apply Gradle’s standard plugins to bootstrap, configure, and run your application. Gradle in Action


  3. A component of the RM negotiates for the container to spawn this bootstrap process. Apache Hadoop YARN Yet Another Resource Negotiator


  4. Seed nodes do not auto bootstrap At least one seed node needs to be specified during initial deployment. Cassandra High Performance Cookbook


  5. In the bootstrap case, when a node needs to join a cluster, it reads its configuration file which contains a list of a few contact points within the cluster. Cassandra A Decentralized Structured Storage System


  6. The first thing you need to do is bootstrap the source code. Beginning CouchDB


  7. The examples of Bootstrap actions include providing custom configuration for Hadoop, installing of any dependent software, distributing a common dataset, and so on. Hadoop MapReduce Cookbook


  8. The first thing you need to do is download and install the “bootstrap” RPM, which will update your local Yum configuration to include Cloudera’s remote Yum repository: Hadoop in Practice


  9. Seed nodes will ignore the AutoBootstrap setting because it is assumed that they will be the first nodes in the cluster. Cassandra The Definitive Guide


  10. Confirm that the Proceed with no Bootstrap Actions radio button is selected and click on the Continue button. Hadoop Beginners Guide



225、单词 declarative 的匹配文本:


  1. This chapter explores Gradle’s declarative support for dependency management, version conflict resolution strategies, and the inner workings of its cache. Gradle in Action


  2. Declarative builds and convention over configuration Gradle uses a Domain Specific Language (DSL) based on Groovy to declare builds. Gradle Effective Implementation Guide


  3. A Pig Latin program combines high-level declarative operators similar to those in SQL, together with named intermediate variables representing edges in the dataflow graph between operators. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  4. Hive lets you build analytics and manage data using a declarative, SQL-like dialect known as HiveQL. Hadoop RealWorld Solutions Cookbook


  5. It allows you to write concise, declarative builds in an idiomatic language backed by rich, domain-specific functionality. Gradle Beyond The Basics


  6. On the other hand, a declarative language such as SQL is sometimes easier for analysts who prefer to just state the results one expects from a program. Hadoop in Action


  7. An important one is the Structured Query Language (SQL), which is feature-rich and uses a simple, declarative syntax. Cassandra The Definitive Guide


  8. Declarative query languages built on top of the core frameworks with intermediate to advanced optimizations. Tenzing - A SQL Implementation On The MapReduce Framework


  9. Remember that the major difference is that Pig is an imperative language (it defines how the process will be executed), while Hive is more declarative (defines the desired results but not how they will be produced) Hadoop Beginners Guide


  10. Thus you can express the activities of your build in a high-level, declarative language that is meaningful in your business and technology domain. Building and Testing with Gradle



226、单词 nio 的匹配文本:


  1. Java’s NIO API (non-blocking I/O), which is used for I/O operations such as processing a file, has been updated to take advantage of the Streams API. Java 8 in Action Lambdas, Streams and Functional-style Programming


  2. You can achieve greater server throughput by using features of the java.nio package. Core Java Volume II Advanced Features 9th Edition


  3. This chapter is concerned with the management of paths as represented by the java.nio. Java 7 New Features Cookbook


  4. The java.io, java.net, and java.nio APIs for performing I/O operations are discussed later in this chapter.) Unfortunately, the interrupt() method does not wake up a thread blocked in an I/O method of the java.io package. Java in a Nutshell 5th Edition


  5. Indeed, the java.nio package that provides this support is not part of any current or planned Java ME profiles, though it is found in Android. Java Network Programming 4th Edition


  6. From 1.4, native ByteBuffer s (available with the java.nio packages) allow you to pass data to native libraries without necessarily passing the data through the JNI, which can be a significant gain. Java Performance Tuning 2nd Edition


  7. It discusses ActiveMQ URIs and all the transport connectors for both client-to-broker and broker-to-broker communications including TCP, NIO, STOMP, failover, SSL, HTTP/S, and much more. ActiveMQ in Action


  8. Java NIO frameworks also tend to simplify the programming of client-server type applications. Java Performance


  9. The advantage of the new NIO.2 is that it is asynchronous in nature. Java 7 Recipes A Problem-Solution Approach


  10. That class has methods to encode an array of bytes or a NIO ByteBuffer. Java SE 8 for the Really Impatient
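
A small sketch of the java.nio.file API (Paths and Files) referenced above; the file name notes.txt is illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;

public class NioDemo {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get("notes.txt");                    // illustrative file name
        Files.write(path, Arrays.asList("first line", "second line"), StandardCharsets.UTF_8);

        // Read the file back in one call and print each line
        for (String line : Files.readAllLines(path, StandardCharsets.UTF_8)) {
            System.out.println(line);
        }
    }
}
```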



227、单词 mongo 的匹配文本:


  1. The mongo shell is a crucial tool for using MongoDB and is used extensively throughout the rest of the text. MongoDB The Definitive Guide 2nd Edition


  2. To connect to the already running MongoDB server, create a Mongo client class with a hostname and port number: Scala in Action


  3. It’s generally applied to a number of recent nonrelational databases such as Cassandra, Mongo, Neo4J, and Riak. NoSQL Distilled


  4. To run the MongoDBserver, simply go to the bin directory of the directory you just unpacked and run the file ./mongod. Pro Spring Security


  5. Exporting data from HDFS into MongoDB This recipe will use the MongoOutputFormat class to load data from an HDFS instance into a MongoDB collection. Hadoop RealWorld Solutions Cookbook


  6. Quick Start If you’re looking to set up a cluster to play around with as fast as possible, you can set one up in a minute or two using the mongo-snippets repository on Github. Scaling MongoDB


  7. The mongo application allows you to use the database shell; this shell enables you to accomplish practically anything you’d want to do with MongoDB. The Definitive Guide to MongoDB


  8. The mongo executable loads the shell and connects to a specified mongod process. MongoDB in Action


  9. This support is mainly implemented as XML namespace and support classes for Spring JavaConfig and allows us to easily set up access to a Mongo database, an embedded Neo4j instance, and the like. Spring Data Modern Data Access for Enterprise Java


  10. Suppose that you need to authenticate against users in a non-relational database such as Mongo or Neo4j. Spring in Action 4th Edition



228、单词 akka 的匹配文本:


  1. But we’re not forced to use Clojure, because there are ways to use STM directly in Java, thanks to nice tools like Multiverse and Akka. Programming Concurrency on the JVM


  2. The chapter dives into issues of designing actor-based systems and finishes with a demonstration of how the Akka actors library provides best practices by default. Scala in Depth


  3. At the end of the chapter you will build a large distributed and scalable application using Akka. Scala in Action


  4. Scala—For Scala there’s an AMQP library that can be used directly with the AKKA Scalability framework. RabbitMQ in Action


  5. They’re a big part of the actor model, a model for concurrent and distributed programming, that’s used by Scala’s Akka library and by Erlang, another functional language. Functional Programming Patterns in Scala and Clojure


  6. Akka A project to implement a platform for building fault-tolerant, distributed applications based on REST, Actors, etc. Programming Scala



229、单词 instantiated 的匹配文本:


  1. The only information the partitioner has is the key, the value, the number of partitions, and whatever data was made available to it when it was instantiated. Pro Hadoop


  2. Every task is backed by an actual task object that’s instantiated for you during Gradle’s configuration phase. Gradle in Action


  3. A new instance of a mapper is instantiated in a separate JVM instance for each map task that makes up part of the total job input. Professional Hadoop Solutions


  4. The withInput() method is called to provide input to the Mapper class that the MapDriver class was instantiated with. Hadoop RealWorld Solutions Cookbook


  5. Next, a Sawzall interpreter is instantiated for each piece of data. Interpreting the Data Parallel Analysis with Sawzall


  6. One UniqueId class is instantiated by a tsd process for each kind of UID stored in the table. HBase in Action


  7. That is, the same pipe assembly can be “instantiated” many times into new Flow, in the same application, without fear of any interference between them. Hadoop:The Definitive Guide 3rd Edition


  8. The setup() method of ImportMapper overrides the method called once when the class is instantiated by the framework. HBase:The Definitive Guide


  9. Users submit code in the form of MapReduce jobs to the cluster, which is instantiated and executed on worker nodes within the cluster. Hadoop Operations


  10. Then the publisher is instantiated and used to send a simple message 100 times. ActiveMQ in Action



230、单词 classifier 的匹配文本:


  1. You will first learn how to build and train a classifier model by “teaching” an algorithm with a series of examples. Mahout in Action


  2. For example, let’s say you want to extend your version management functionality by two more tasks: one that increments the major version of the project and another to do the same work for the minor version classifier. Gradle in Action


  3. They implemented a logistic regression classifier on top of Spark similar to the example in Section 6.1, but they used a distributed reduceByKey to sum the gradient vectors in parallel. Resilient Distributed Datasets - A Fault-Tolerant Abstraction for In-Memory Cluster Computing


  4. To set the filename of the archive, we use any of these properties: baseName, appendix, version, classifier, and extension. Gradle Effective Implementation Guide


  5. In this recipe, we will use a naive Bayes classifier from Apache Mahout to determine if a set of terms found in a movie review mean the movie had a negative or positive reception. Hadoop RealWorld Solutions Cookbook


  6. In addition, modules may be qualified by their classifier and type. Gradle Beyond The Basics


  7. Classification using Naive Bayes Classifier A classifier assigns inputs into one of the N classes based on some properties (features) of inputs. Hadoop MapReduce Cookbook


  8. First we look at creating an external node classifier, and we provide some simple examples of these for you to model your own on; then we demonstrate the use of the LDAP node classifier. Pro Puppet


  9. You’ll use recommenders to recommend movies similar to movies that users have already rated; you’ll write a classifier that can filter out spam emails; and, finally, we’ll look at how you can use clustering to discover structure in your data. Hadoop in Practice


  10. In ES2, this labeling is accomplished using a rule-driven classifier that uses a number of page features extracted during local analysis. Hadoop in Action



231、单词 stateful 的匹配文本:


  1. Pregel is also different because it implements a stateful model where long-lived processes compute, communicate, and modify local state, rather than a dataflow model where any process computes solely on input data and produces output data input by other processes. Pregel  -  A System for Large-Scale Graph Processing


  2. Since useful HTTPbased applications will most likely require a stateful model, this presents an interesting challenge. Java Network Programming and Distributed Computing


  3. What should reversing the stream of all prime numbers do? It should return the largest prime number, which mathematics tells us doesn’t exist.) We call these operations stateful operations. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. The problem is that the majority of implementations of actors are stateful (compared to the in-memory combiner pattern, described in Chapter 4) Professional Hadoop Solutions


  5. HBase’s Increment command is an example of such a stateful operation. HBase in Action


  6. Filters could be enhanced to become stateful, and therefore make decisions across row boundaries. HBase:The Definitive Guide


  7. An option has been added to disable passivation of stateful session beans. Java EE 7 Essentials


  8. Graphs and MapReduce – a match made somewhere The main reason graphs don't look like many other MapReduce problems is due to the stateful nature of graph processing, which can be seen in the path-based relationship between elements and often between the large number of nodes processed together for a single algorithm. Hadoop Beginners Guide


  9. Systems such as Linux-HA work well for stateless services such as static content serving, but for a stateful system such as the namenode, they’re insufficient. Hadoop Operations


  10. Stateful Almost all the UDFs are stateful by default; a UDF that is not stateful is rand() because it returns a different value for each invocation. Programming Hive



232、单词 invocations 的匹配文本:


  1. MapReduce implementations should strive to reduce startup latency by using techniques like worker processes that are reused across different invocations; MapReduce a flexible data processing tool


  2. Subsequent invocations of the gradle command will now reuse the daemon process. Gradle in Action


  3. The Map invocations are distributed across multiple machines by automatically partitioning the input data. MapReduce - Simplied Data Processing on Large Clusters


  4. But our input file hasn't changed between the task invocations, so the task doesn't have to be executed. Gradle Effective Implementation Guide


  5. Although, technically, these APIs can be used to write synchronous invocations, this must be done on top of fully asynchronous APIs. Professional Hadoop Solutions


  6. Then we again form chunks from the aggregate result of all the Combine invocations and recursively apply the Combine function to these new chunks. Incoop MapReduce for Incremental Computations


  7. If rereduce is true, however, the keys argument will be null, and the values argument will be an array of the results produced by the previous invocations of the reduce function. Beginning CouchDB


  8. The two invocations of the mapper will therefore be given the following input: Hadoop Beginners Guide


  9. Communication is performed through direct method invocations of the broker object. ActiveMQ in Action


  10. Command-line Java can be a hassle, but this task tries to take the hassle away and integrate command-line Java invocations into your build. Building and Testing with Gradle



233、单词 asynchronously 的匹配文本:


  1. Optimistic Replication Any member of a homogeneous replica group can accept mutations [23], which are asynchronously propagated through the group. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. Our message passing model allows us to amortize latency by delivering messages asynchronously in batches. Pregel  -  A System for Large-Scale Graph Processing


  3. The blocks in each stripe are prefetched asynchronously; the read-ahead cache typically achieves hit rates of 95% Dremel - Interactive Analysis of WebScale Datasets


  4. Object locks are similarly used to mask latency for large writes by acquiring locks and flushing data asynchronously. Ceph A Scalable, High-Performance Distributed File System


  5. Writes to local disk are the most expensive operation in Dapper’s runtime library, but their visible overhead is much reduced since each disk write coalesces multiple log file write operations and executes asynchronously with respect to the traced application. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  6. When each DataNode has completed writing the block locally, the blocks are moved from temporary storage into permanent storage, and each DataNode asynchronously notifies the NameNode of their block storage (steps A, B, and C) Hadoop in Practice


  7. Remember that these changes are forwarded to the replica shards asynchronously and there is no guarantee that they will arrive in the same order that they were sent. Elasticsearch The Definitive Guide


  8. It can break down in some cases, such as programs that process records asynchronously (say, by spawning threads) or buffer them to process in chunks, but it usually works. Hadoop in Action


  9. Unlike the commit log, data is written to this file asynchronously. Cassandra The Definitive Guide


  10. The rest of the steps in the process are handled asynchronously. ActiveMQ in Action
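
A short sketch of doing work asynchronously on the JVM with CompletableFuture (JDK 8); the simulated delay and the payload value are illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncDemo {
    public static void main(String[] args) throws Exception {
        // The supplier runs asynchronously on the common fork/join pool
        CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
            sleep(500);                       // pretend this is a slow remote call
            return "payload";
        });

        // Attach a callback instead of blocking; it fires when the result arrives
        future.thenAccept(result -> System.out.println("received " + result));

        System.out.println("main thread keeps going while the work runs");
        future.get(2, TimeUnit.SECONDS);      // wait at the end so the JVM does not exit early
    }

    private static void sleep(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```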



234、单词 jsf 的匹配文本:


  1. Currently, he is developing web applications using the latest Java technologies on the market (EJB 3.0, CDI, Spring, JSF, Struts, Pro Java 7 NIO.2


  2. Performing static code analysis example, JEE components, web frameworks like JSF, and mobile technologies such as Android. Gradle in Action


  3. For example, Java Server Pages (JSP) and Java Server Faces (JSF) both have their own EL, and their syntaxes are different. Pro Spring 3


  4. This could be a JSP or JSF page, but it could also be a PDF or XML representation of a page. Pro Spring MVC


  5. Simple HTML pages can also be used to provide a more limited user interface for Java EE applications.) Servlets, JSP pages, JSF applications, filters, and web event listeners typically execute in a web container and may respond to HTTP requests from web clients. Java EE 7 Specification


  6. JavaServer Faces (JSF) introduces pass-through attributes and elements that allow near-total control over the user experience of each individual element in the view. Java EE 7 Essentials


  7. Web technologies such as JavaServer Faces (JSF) and JavaServer Pages (JSP) rely heavily on the JavaBeans component model. Core Java Volume II Advanced Features 9th Edition


  8. Not all of them are discussed here—JSF and Jersey, just to name two. Java Performance


  9. Application frameworks such as Java Server Pages (JSP) and Java Server Faces (JSF) are built upon servlet technology. Java 7 Recipes A Problem-Solution Approach


  10. They are the foundation for technologies such as JavaServer Pages (JSP) and the backbone of web frameworks such as JavaServer Faces (JSF) Pro JPA2



235、单词 multithreaded 的匹配文本:


  1. Multithreaded [The] benefits of multithreading are better interactive responsiveness and real-time behavior. Core Java Volume I Fundamentals 9th Edition


  2. Due to the multithreaded nature of the web browser and web server, the read workload is likely to be random access. Monitoring with Ganglia


  3. This particular constraint is also why the map tasks may be multithreaded, while the reduce tasks are explicitly only single-threaded. Pro Hadoop


  4. However, the main bottleneck will be multithreaded access, which has poor performance. Mastering ElasticSearch


  5. The stress tool is a multithreaded load tester specifically for Cassandra. Cassandra High Performance Cookbook


  6. We expect you to understand basic concepts of concurrent programs, both multithreaded and concurrent processes. HBase in Action


  7. Most developers will use the multithreaded library, zookeeper_mt, as it supports both the synchronous and asynchronous APIs. Hadoop:The Definitive Guide 3rd Edition


  8. The NameNode is a multithreaded system and processes requests simultaneously from multiple clients. The Hadoop Distributed File System


  9. They allow you to prevent race conditions in multithreaded or sharednothing application server design. HBase:The Definitive Guide


  10. Most socket-based web services (for example, HTTP daemons, servlet engines, and application servers) are multithreaded. Concurrent Programming in Java Design Principles and Pattern
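
A minimal multithreaded sketch using an ExecutorService and an AtomicLong; the pool size and task count are arbitrary illustration values.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class MultithreadedDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicLong counter = new AtomicLong();                 // safe to update from many threads
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.submit(() -> {
                counter.incrementAndGet();
                System.out.println("task " + taskId + " ran on " + Thread.currentThread().getName());
            });
        }

        pool.shutdown();                                       // stop accepting new work
        pool.awaitTermination(10, TimeUnit.SECONDS);           // wait for the queued tasks
        System.out.println("tasks completed: " + counter.get());
    }
}
```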



236、单词 pointcut 的匹配文本:


  1. Pointcuts: A pointcut is a collection of joinpoints that you use to define when advice should be executed. Pro Spring 3


  2. The advice is associated with a pointcut expression and runs at any join point matching the pointcut. Pro Spring MVC


  3. In the next listing you supplement the breaker definition from recipe 14.1 with the Spring AOP interceptor bean (representing advice in general AOP parlance), a pointcut, and a Spring AOP advisor (representing an aspect in general AOP parlance) Spring in Practice


  4. Instead, it uses aspect-oriented programming to declare a pointcut at a method or set of methods, with an advice that performs checks for role membership when the pointcut matches. Spring Security 3.1


  5. One very good thing about the XML configuration for securing methods is that it can be applied to more than one method of more than one class at the same time using AspectJ pointcut expressions—the same way you would in the rest of Spring Security’s suite of products. Pro Spring Security


  6. In AspectJ parlance, each execution point is called a join point, and a particular query over join points is a pointcut. Programming Scala


  7. The method should have the following signature (pointcut expression language): Spring Integration in Action


  8. In both cases, the pointcut-ref attribute refers to a pointcut named embark. Spring in Action 4th Edition
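
As a hedged illustration of the pointcut terminology above, here is a sketch of a Spring AOP aspect declared with AspectJ annotations; it assumes spring-aop and aspectjweaver on the classpath, and the package in the pointcut expression (com.example.service) is hypothetical.

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoggingAspect {

    // Pointcut: every public method of any class in the (hypothetical) service package
    @Pointcut("execution(public * com.example.service.*.*(..))")
    public void serviceMethods() { }

    // Advice bound to the pointcut above; runs before each matched join point
    @Before("serviceMethods()")
    public void logCall(JoinPoint joinPoint) {
        System.out.println("about to call " + joinPoint.getSignature());
    }
}
```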



237、单词 util 的匹配文本:


  1. The value you passed is used in a call to java.util. Apache Flume Distributed Log Collection for Hadoop


  2. TableMapReduceUtil is a useful utility class for writing a MapReduce program over HBase. HBase Administration Cookbook


  3. The method defines one parameter of type java.util.Map called options. Gradle in Action


  4. Note the pmdMain, pmdTest, and pmdUtil tasks that are executed. Gradle Effective Implementation Guide


  5. This recipe uses the helper functions in the ByteBufferUtil class to convert byte data into strings. Cassandra High Performance Cookbook


  6. The Mongo Hadoop Adaptor also provides a convenient MongoConfigUtil class to help set up the job to connect to MongoDB as if it were a filesystem. Hadoop RealWorld Solutions Cookbook


  7. We will configure the TableMapper and the TableReducer using the utility methods given in the TableMapReduceUtil class. Hadoop MapReduce Cookbook


  8. MapFile can be thought of as a persistent form of java.util.Map (although it doesn’t implement this interface), which is able to grow beyond the size of a Map that is kept in memory. Hadoop:The Definitive Guide 3rd Edition


  9. The clock is stored as a long (representing the milliseconds since the Unix epoch), so we could wrap this in a new java.util.Date object if we wanted to. Cassandra The Definitive Guide


  10. The java.util.Map interface is the parent of commonly used classes such as HashMap and (through some library backward reengineering) even the original Hashtable. Hadoop Beginners Guide
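
A small example of the java.util.Map interface that several of the quotes mention; the keys and counts are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class UtilDemo {
    public static void main(String[] args) {
        // java.util.Map maps keys to values; HashMap is the most common implementation
        Map<String, Integer> wordCounts = new HashMap<>();
        wordCounts.put("hadoop", 3);
        wordCounts.put("hbase", 1);
        wordCounts.merge("hadoop", 1, Integer::sum);   // increment an existing count

        for (Map.Entry<String, Integer> entry : wordCounts.entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
    }
}
```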



238、单词 reusable 的匹配文本:


  1. A dependency is generally an external, reusable library in the form of a JAR file (for example, Log4J for logging support) Gradle in Action


  2. Build logic is described with a powerful DSL and empowers developers to create reusable and maintainable build logic. Gradle Effective Implementation Guide


  3. Generally the default, unbounded filter cache is fine with the soft type and proper queries that are paying attention to make the caches reusable. Mastering ElasticSearch


  4. We took a look at other design patterns books that have been written in the past as inspiration, particularly Design Patterns: Elements of Reusable Object-Oriented Software, by Gamma et al. MapReduce Design Patterns


  5. You can see that those methods are reusable, and after they are created, the generation of a dynamic Workflow becomes a pretty trivial task. Professional Hadoop Solutions


  6. Not only that, but you’ll build it in a way that’s reusable for other kinds of queries. HBase in Action


  7. For each group in the regular expression, set the appropriate column in the array—the row is a reusable ArrayList that was created in. Hadoop in Practice


  8. As another benefit, UDFs tend to be more reusable than the libraries developed for writing MapReduce programs. Hadoop:The Definitive Guide 3rd Edition


  9. When you write a reusable script, it’s generally parameterized such that you can vary its operation for each run. Hadoop in Action


  10. This model of having reusable mapper and reducer implementations is one thing to remember, especially in combination with the fact that often the best starting point for a new MapReduce job implementation is an existing one. Hadoop Beginners Guide



239、单词 reilly 的匹配文本:


  1. Nutshell Handbook, the Nutshell Handbook logo, and the O’Reilly logo are registered trademarks of O’Reilly Media, Inc., Getting Started with CouchDB, the cover image of a hawk’s bill sea turtle, and related trade dress are trademarks of O’Reilly Media, Inc. Getting Started with CouchDB


  2. Monitoring with Ganglia, the image of a Porpita pacifica, and related trade dress are trademarks of O’Reilly Media, Inc. Monitoring with Ganglia


  3. CouchDB: The Definitive Guide, the image of a Pomeranian dog, and related trade dress are trademarks of O’Reilly Media, Inc. CouchDB The Definitive Guide


  4. O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Scaling CouchDB


  5. Gradle Beyond the Basics, the image of a Belgian shepherd dog, and related trade dress are trademarks of O’Reilly Media, Inc. Gradle Beyond The Basics


  6. Hadoop: The Definitive Guide, the image of an elephant, and related trade dress are trademarks of O’Reilly Media, Inc. Hadoop:The Definitive Guide 3rd Edition


  7. O’Reilly books may be purchased for educational, business, or sales promotional use. Elasticsearch The Definitive Guide


  8. Cassandra: The Definitive Guide, the image of a Paradise flycatcher, and related trade dress are trademarks of O’Reilly Media, Inc. Cassandra The Definitive Guide


  9. Hadoop Operations, the cover image of a spotted cavy, and related trade dress are trademarks of O’Reilly Media, Inc. Hadoop Operations


  10. Building and Testing with Gradle, the image of a bush wren, and related trade dress are trademarks of O’Reilly Media, Inc. Building and Testing with Gradle



240、单词 spel 的匹配文本:


  1. Spring Expression Language (SpEL) Expression Language (EL) is a technology to allow an application to manipulate Java objects at runtime. Pro Spring 3


  2. The value can either be a hardcoded value or a SpEL expression. Pro Spring MVC


  3. Up to this point, all the expressions you’ve seen use the Spring Expression Language (SpEL) which is new to Spring 3.0 Spring in Practice


  4. Fortunately, Spring Security can leverage Spring Expression Language (SpEL) to determine whether a user has authorization. Spring Security 3.1


  5. SpEL makes it possible to use programming expressions inside bean definitions and other parts of the Spring portfolio—for example, in some annotations you will see later, which allows you to manipulate objects at runtime. Pro Spring Security


  6. Next, you make the configuration more flexible by leveraging the Spring Expression Language (SpEL) to avoid hardcoding certain settings in Spring configuration files. Spring Batch in Action


  7. If all the information is to be found in the message itself, and all you need is to write a logical expression that’s computed against the payload or the header values, you can use the Spring 3.0 Expression Language (SpEL) directly. Spring Integration in Action


  8. A control bus can execute Groovy scripts or Spring Expression Language (SpEL) expressions, allowing you to manipulate the state of components inside the application. Spring Data Modern Data Access for Enterprise Java


  9. Let’s start with a look at property placeholders, the simpler of the two, and then dig into the more powerful SpEL. Spring in Action 4th Edition
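
A minimal sketch of evaluating a SpEL expression with SpelExpressionParser; it assumes the spring-expression module on the classpath, and the expression itself is illustrative.

```java
import org.springframework.expression.Expression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class SpelDemo {
    public static void main(String[] args) {
        ExpressionParser parser = new SpelExpressionParser();

        // Parse and evaluate a simple SpEL expression with no root object
        Expression exp = parser.parseExpression("'Hello'.concat(' SpEL').toUpperCase()");
        String value = exp.getValue(String.class);

        System.out.println(value);   // prints HELLO SPEL
    }
}
```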



241、单词 predefined 的匹配文本:


  1. Writing a Pregel program involves subclassing the predefined Vertex class (see Figure 3) Pregel  -  A System for Large-Scale Graph Processing


  2. In contrast, CouchDB uses predefined map and reduce functions in a style known as MapReduce. CouchDB The Definitive Guide


  3. The actual building of your software usually follows predefined and ordered steps. Gradle in Action


  4. Users can add support for a new input type by providing an implementation of a simple reader interface, though most users just use one of a small number of predefined input types. MapReduce - Simplied Data Processing on Large Clusters


  5. Adding Ivy repositories An Ivy repository has a customizable layout; this means that there is no single predefined layout as with a Maven repository. Gradle Effective Implementation Guide


  6. The second phase is restricted to a set of predefined aggregators that process the intermediate results generated by the first phase. Interpreting the Data Parallel Analysis with Sawzall


  7. Either provide a list of stopwords or tell it to use a predefined stopwords list from a particular language. Elasticsearch The Definitive Guide


  8. This reveals another interesting quality to Cassandra’s columns: they don’t have to be as simple as predefined name/value pairs; you can store useful data in the key itself, not only in the value. Cassandra The Definitive Guide


  9. When you want to configure a network of brokers, the first obvious question is, do you know the exact network address of each broker in the network? If the answer is yes, then you can proceed configuring your network statically and also connect your clients to predefined broker URIs. ActiveMQ in Action


  10. We’ll also leave behind the strict, predefined lifecycle of Maven and the absence of a lifecycle with Ant. Building and Testing with Gradle



242、单词 serializable 的匹配文本:


  1. In plain English, you can assign the result to Object, Serializable, or Comparable. Core Java Volume I Fundamentals 9th Edition


  2. The input split implements the Writable interface, so that it is serializable by the framework, and includes a default constructor in order for the framework to create a new instance via reflection. MapReduce Design Patterns


  3. The current implementation is based on the serializable shape super class, and several specific shape implementations, including point, bounding box, circle, and polygon.) This can be used by the filter to obtain the shape with which the document is associated to perform explicit filtering. Professional Hadoop Solutions


  4. In the MapReduce application, they’re of the Hadoop serializable types Text and LongWritable, respectively. HBase in Action


  5. Hadoop's Writable versus Java's Serializable Hadoop's Writable-based serialization framework provides a more efficient and customized serialization and representation of the data for MapReduce programs than using the general-purpose Java's native serialization framework. Hadoop MapReduce Cookbook


  6. If you want your SequenceFile to contain objects that aren’t Writable or Serializable, you’ll need to implement your own Serializer and register it. Hadoop in Practice


  7. The Serializable interface has no methods, so you don’t need to change your classes in any way. Core Java Volume II Advanced Features 9th Edition


  8. To make a singleton class that is implemented using either of the previous approaches serializable (Chapter 11), it is not sufficient merely to add implements Serializable to its declaration. Effective Java 2nd Edition


  9. Do NOT use your own classes (even if they implement java.io .Serializable) Programming Hive


  10. The main advantage of snapshot isolation over a serializable protocol is more efficient reads. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications
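
A minimal Serializable round trip through ObjectOutputStream and ObjectInputStream; the Point class is invented for illustration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializableDemo {

    // Serializable is a marker interface: there are no methods to implement
    static class Point implements Serializable {
        private static final long serialVersionUID = 1L;
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public String toString() { return "Point(" + x + ", " + y + ")"; }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Point(3, 4));                  // serialize to bytes
        }
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            Point copy = (Point) in.readObject();              // deserialize a copy
            System.out.println("round trip gave " + copy);
        }
    }
}
```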



243、单词 unicode 的匹配文本:


  1. To fully understand the char type, you have to know about the Unicode encoding scheme. Core Java Volume I Fundamentals 9th Edition


  2. This should be enclosed by double quotes and supports Unicode characters and backslash escaping. Getting Started with CouchDB


  3. These are defined in Unicode and are wrapped in double quotes (") Beginning CouchDB


  4. There are also two primitive array-like types: bytes, similar to a C array of unsigned char; and string, which is defined to hold characters from the Unicode character set. Interpreting the Data Parallel Analysis with Sawzall


  5. One option is to add up the ASCII (or Unicode) values of the characters in the string. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. Indexing for the Text class is in terms of position in the encoded byte sequence, not the Unicode character in the string or the Java char code unit (as it is for String) Hadoop:The Definitive Guide 3rd Edition


  7. It splits the text on word boundaries, as defined by the Unicode Consortium, and removes most punctuation. Elasticsearch The Definitive Guide


  8. Byte-oriented streams are inconvenient for processing information stored in Unicode (recall that Unicode uses multiple bytes per character) Core Java Volume II Advanced Features 9th Edition


  9. To display the actual buffer, we need to apply the forName method to obtain a Charset object, and then use the decode method against it to convert the bytes in the buffer into Unicode characters. Java 7 New Features Cookbook


  10. These are number, Unicode string, boolean, array, object, and null. Cassandra The Definitive Guide
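
A short sketch of how Java strings hold Unicode text as UTF-16 char units, and how the char count, code point count, and encoded byte length can differ; the sample string is illustrative.

```java
import java.nio.charset.StandardCharsets;

public class UnicodeDemo {
    public static void main(String[] args) {
        // "é" is a single code point; the emoji needs a surrogate pair (two char units)
        String text = "caf\u00E9 \uD83D\uDE00";

        System.out.println(text);
        System.out.println("char units (length()): " + text.length());
        System.out.println("code points:           " + text.codePointCount(0, text.length()));

        // The byte length depends on the chosen charset
        byte[] utf8 = text.getBytes(StandardCharsets.UTF_8);
        System.out.println("UTF-8 bytes:           " + utf8.length);
    }
}
```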



244、单词 sts 的匹配文本:


  1. Later, we’ll also touch on a more sophisticated Eclipse distribution called the SpringSource Tool Suite (STS) that provides out-of-the-box support for importing hierarchical Gradle projects. Gradle in Action


  2. If we want to re-use our existing Eclipse IDE, we can install STS as a plugin. Gradle Effective Implementation Guide


  3. Soon after that, SpringSource, the company behind Spring founded by Rod Johnson, created an integrated tool called the SpringSource Tool Suite (STS) Pro Spring 3


  4. SpringSource has also invested significant effort into providing its own distribution based upon Eclipse called the SpringSource Tool Suite (STS) Pro Spring MVC


  5. Spring has strong IDE support in the form of the Spring Tool Suite (STS), which is a branded and extended version of Eclipse. Spring in Practice


  6. Security Token Services (STS) that are used for issuing tokens related to a subject’s authorization credentials, and for issuing authorization decisions in transactions. Professional Hadoop Solutions


  7. The focus of STS is to provide tools to help Spring developers create and manage Spring-based applications. Spring Batch in Action


  8. More recent is the SpringSource Tool Suite (STS, http://www.springsource.com/products/sts), a dedicated Eclipse distribution targeting the development of Spring- and SpringSource dm Server–based applications. Spring Dynamic Modules in Action


  9. IntelliJ is an excellent alternative IDE, and provides support for many of the same features as SpringSource Tool Suite, the key omission being an integrated copy of the Spring tc Server web application server, which comes bundled with STS. Spring Roo in Action


  10. It requires either a Maven 3 installation on your machine or an IDE capable of importing Maven projects such as the Spring Tool Suite (STS) Spring Data Modern Data Access for Enterprise Java



245、单词 microsoft 的匹配文本:


  1. Outside of the Web, environments like Apple’s Xcode or Microsoft’s .NET all provide toolkits that simplify the development and functionality of your applications. Getting Started with CouchDB


  2. Similar integrated sFlow support is available for the Microsoft extensible virtual switch that is part of the upcoming Windows Server 2012 version of Hyper-V. Monitoring with Ganglia


  3. Accountants don’t use erasers” is a quote from Pat Helland, a senior architect of transactional systems who worked at Microsoft and Amazon. CouchDB The Definitive Guide


  4. On a Microsoft Windows operating system, the semicolon (;) is used as a path separator, and in Linux or Mac OS X operating systems, the colon (:) is used. Gradle Effective Implementation Guide


  5. Please remember that because of some bugs that exist in the JVM machine for Microsoft Windows it is very probable that the new IO filesystem store will suffer from performance problems while running on Microsoft Windows. Mastering ElasticSearch


  6. At the time, CouchDB would run only on Microsoft Windows. Beginning CouchDB


  7. Microsoft is also starting to work with Hortonworks to ensure that Hadoop works on its platform. Hadoop in Practice


  8. The exponential growth of data first presented challenges to cutting-edge businesses such as Google, Yahoo, Amazon, and Microsoft. Hadoop in Action


  9. This original version of Cassandra was written primarily by an ex-employee from Amazon and one from Microsoft. Cassandra The Definitive Guide


  10. Note that this support does allow integration with third-party identity systems such as Microsoft Active Directory, so it is quite powerful. Hadoop Beginners Guide



246、单词 codec 的匹配文本:


  1. If you want to specify compression for your data, you set the hdfs.codeC property if you want the HDFS sink to write compressed files. Apache Flume Distributed Log Collection for Hadoop


  2. Data compression in Hadoop is done by a compression codec, which is a program that encodes and decodes data streams. Hadoop Operations and Cluster Management Cookbook


  3. You can also use it for debugging what is actually going to be written in the Lucene index by using SimpleTextCodec (on the Apache Lucene level because, ElasticSearch doesn't expose this codec) Mastering ElasticSearch


  4. SequenceFiles are splittable even when using a compression codec that is not normally splittable, such as GzipCodec. Hadoop RealWorld Solutions Cookbook


  5. The code in Example 4-12 indicates that the top-level project has a compile-time dependency on both the :codec and the :content subprojects. Gradle Beyond The Basics


  6. The following example uses the Snappy compression codec in conjunction with Avro files: Hadoop in Practice


  7. In Hadoop, a codec is represented by an implementation of the CompressionCodec interface. Hadoop:The Definitive Guide 3rd Edition


  8. Class property denoting which CompressionCodec to use for compressing mapper’s output. Hadoop in Action


  9. Since map output need not be splittable, it is not important to select a compression codec that exhibits such a property. Hadoop Operations


  10. This build file declares an external dependency: the Apache Commons Codec library. Building and Testing with Gradle
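
The quotes use codec both for Hadoop compression codecs and for the Apache Commons Codec library; as a dependency-free stand-in, the sketch below encodes and decodes bytes with the JDK's built-in GZIP streams, which play the same compress/decompress role.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class CodecDemo {
    public static void main(String[] args) throws Exception {
        byte[] original = "some repetitive text, some repetitive text".getBytes(StandardCharsets.UTF_8);

        // Encode: compress the bytes with GZIP
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(compressed)) {
            gzip.write(original);
        }

        // Decode: decompress back to the original bytes
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        try (GZIPInputStream gunzip =
                 new GZIPInputStream(new ByteArrayInputStream(compressed.toByteArray()))) {
            byte[] buffer = new byte[1024];
            int read;
            while ((read = gunzip.read(buffer)) != -1) {
                restored.write(buffer, 0, read);
            }
        }

        System.out.println("original  : " + original.length + " bytes");
        System.out.println("compressed: " + compressed.size() + " bytes");
        System.out.println("round trip ok: "
                + new String(restored.toByteArray(), StandardCharsets.UTF_8)
                      .equals(new String(original, StandardCharsets.UTF_8)));
    }
}
```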



247、单词 deprecated 的匹配文本:


  1. In our case we've chosen the suggested and the only non-deprecated type of gateway, which is the local (gateway.type property) Mastering ElasticSearch


  2. In general, we try to use the newer mapreduce API for all of our examples, not the deprecated mapred API. MapReduce Design Patterns


  3. With each version, new features are added, and older features are deprecated. Beginning CouchDB


  4. For properties that exist in version 1.x, the old (deprecated) names are used in this book because they will work in all the versions of Hadoop listed here. Hadoop:The Definitive Guide 3rd Edition


  5. The older API was deprecated and should have been dropped in version 0.21.0—but that did not happen. HBase:The Definitive Guide


  6. The new abstract classes also replace the MapReduceBase class, which has been deprecated. Hadoop in Action


  7. This operation will be deprecated once the transition is complete. Cassandra The Definitive Guide


  8. On deprecated memory related parameters Some of parameters to control memory-aware scheduling in the Apache Hadoop and CDH documentation are actually deprecated in the code. Hadoop Operations


  9. The hint still works, but it’s now deprecated as of Hive v0.7 Programming Hive


  10. The stop method was originally included in class Thread, but its use has since been deprecated. Concurrent Programming in Java Design Principles and Pattern
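
A minimal sketch of deprecating a method and pointing callers at its replacement; the class and method names are invented.

```java
public class DeprecatedDemo {

    /**
     * @deprecated use {@link #connect(String, int)} instead; kept only for old callers.
     */
    @Deprecated
    public void connect(String host) {
        connect(host, 8080);                 // delegate to the replacement
    }

    public void connect(String host, int port) {
        System.out.println("connecting to " + host + ":" + port);
    }

    public static void main(String[] args) {
        DeprecatedDemo demo = new DeprecatedDemo();
        demo.connect("localhost");           // compiles, but the compiler emits a deprecation warning
        demo.connect("localhost", 9090);
    }
}
```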



248、单词 foreach 的匹配文本:


  1. You can iterate either by using an explicit iterator or with a foreach loop that returns a Path object for each iteration. Pro Java 7 NIO.2


  2. We eliminated the getNames() and getScore methods and instead provided a foreach() internal-iterator to step through the score values. Programming Concurrency on the JVM


  3. Next, foreach (with a lowercase e) is similar to forEach in Java 8 (with an uppercase E) Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. In Listing 13-6, you can see the familiar GROUP BY operation from SQL, while foreach a generate b is the projection operator, equivalent to SELECT b FROM a in SQL. Professional Hadoop Solutions


  5. There is one field that looks odd in the FOREACH statement, and that field is ip_country_tbl::ip. Hadoop RealWorld Solutions Cookbook


  6. Evaluation functions—These are used in conjunction with the FOREACH operator, and manipulate data to produce modified output. Hadoop in Practice


  7. Pig’s nested FOREACH syntax lets us count the distinct incoming repliers (neighbor nodes) and the total incoming replies in one pass:12 Hadoop:The Definitive Guide 3rd Edition


  8. In addition, the reduce() method no longer accepts the list of values as an Iterator but as an Iterable, which is easier to iterate through using Java’s foreach syntax. Hadoop in Action


  9. The for/in statement for easily iterating through arrays and collections (this statement is sometimes called “foreach”) Java in a Nutshell 5th Edition


  10. Alternatively, the enhanced for loop, also known as the foreach loop, could be used to iterate over the array (see recipe 4-7 for more discussion of the foreach loop): Java 7 Recipes A Problem-Solution Approach
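
A small example of the enhanced for ("foreach") loop over an array and over a collection; the data is illustrative.

```java
import java.util.Arrays;
import java.util.List;

public class ForeachDemo {
    public static void main(String[] args) {
        int[] scores = {90, 75, 60};
        int total = 0;
        for (int score : scores) {          // foreach over an array
            total += score;
        }
        System.out.println("total = " + total);

        List<String> names = Arrays.asList("Ada", "Grace", "Linus");
        for (String name : names) {         // foreach over anything Iterable
            System.out.println("hello " + name);
        }
    }
}
```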



249、单词 csv 的匹配文本:


  1. As with a show, you can use this to output as HTML, XML, formatted JSON, even CSV if that is what you need. Getting Started with CouchDB


  2. Raw graph data can be exported as CSV or JSON. Monitoring with Ganglia


  3. The same goes for CSV, multi-part form, or any other format. CouchDB The Definitive Guide


  4. Moreover, when dealing with CSV or tab-separated data, a slightly misaligned INT or FLOAT type mapping in your Hive table declaration, where the data has a STRING, could lead to NULL mappings per row. Hadoop RealWorld Solutions Cookbook


  5. The first line instructs Pig to load the CSV (comma-separated values) file into the variable A. Hadoop MapReduce Cookbook


  6. The data is in CSV form, and the fields are in the following order: Hadoop in Practice


  7. For example, there are load and store functions in the Piggy Bank for Avro data files, CSV files, Hive RCFiles, SequenceFiles, and XML files. Hadoop:The Definitive Guide 3rd Edition


  8. The data set is in the standard comma-separated values (CSV) format, with the first line a description of the columns. Hadoop in Action


  9. One of the most common types of scripts is one that uses a CSV file of machine to rack mappings. Hadoop Operations


  10. We compare latency and memory usage of the basic datastructures with other data formats and backends: CSV, record-io (binary format based on protocol buffers [29]), and Dremel (as mentioned in the introduction, Dremel is a high performance column-store developed at Google) PowerDrill - Processing a Trillion Cells per Mouse Click
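
A tiny sketch of reading simple comma-separated values with String.split; it deliberately ignores quoted fields and embedded commas, and the sample rows are made up.

```java
public class CsvDemo {
    public static void main(String[] args) {
        String[] rows = {
            "name,age,city",            // header line
            "Ada,36,London",
            "Grace,45,New York"
        };

        String[] header = rows[0].split(",");
        for (int i = 1; i < rows.length; i++) {
            String[] fields = rows[i].split(",");   // naive split: no quoted fields
            for (int col = 0; col < header.length; col++) {
                System.out.println(header[col] + " = " + fields[col]);
            }
            System.out.println();
        }
    }
}
```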



250、单词 searcher 的匹配文本:


  1. It will elaborate on searcher, indexer, web DB, and fetcher. Web Crawling and Data Mining with Apache Nutch


  2. IndexSearcher is a basic component of Solr searched with a default base searcher class. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  3. Lucene library use an abstraction class called Searcher to access index. Mastering ElasticSearch


  4. IndexSearcher leverages IndexReader to read the inverted index and implement the search logic. Professional Hadoop Solutions


  5. By waiting for the data to flush to disk (waitFlush) and waiting for a new searcher to be ready to respond to changes (waitSearcher), this circumstance is avoided. Solr 3 Enterprise Search Server


  6. A program called DumpLexicon takes this list together with the lexicon produced by the indexer and generates a new lexicon to be used by the searcher. The anatomy of a large-scale hypertextual Web search engine


  7. The output index is now created, optimized, and closed, after which it is ready for use in a searcher application. Hadoop:The Definitive Guide 3rd Edition


  8. Create five Searcher objects, five threads to execute them, and start the five threads. Java 7 Concurrency Cookbook


  9. To show you Lucene’s indexing and searching capabilities, we’ll use a pair of command-line applications: Indexer and Searcher. Lucene in Action 2nd Edition


  10. This handler allows Solr (and also you) to retrieve the latest version of the document without the need for re-opening the searcher, and thus even if the document is not yet visible during usual search operations. Apache Solr 4 Cookbook



251、单词 applets 的匹配文本:


  1. Chapter 10 shows you how to deploy your programs, either as applications or applets. Core Java Volume I Fundamentals 9th Edition


  2. Over the years his focus switched from applets, to servlets, to Java Enterprise Edition applications, to Spring-based software. Gradle Effective Implementation Guide


  3. Also, the decision to compile Java to JVM bytecode (a virtual machine code that soon every browser supported) meant that it became the language of choice for internet applet programs (do you remember applets?) Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Applets can be used to provide a powerful user interface Java EE applications. Java EE 7 Specification


  5. You can use JDBC in applets and Web Start applications, but you probably don’t want to. Core Java Volume II Advanced Features 9th Edition


  6. Thread groups were originally envisioned as a mechanism for isolating applets for security purposes. Effective Java 2nd Edition


  7. More details on how to create draggable applets can be found at that site. Java 7 New Features Cookbook


  8. The Java programming language is the language in which Java applications, applets, servlets, and components are written. Java in a Nutshell 5th Edition


  9. Applets that run in web browsers normally use the proxy server settings of the web browser itself, though these can be overridden in the Java Control Panel. Java Network Programming 4th Edition


  10. The book also provides solutions for desktop application development using the Swing API, and web-based solutions including servlets and applets. Java 7 Recipes A Problem-Solution Approach



252、单词 literals 的匹配文本:


  1. Scala provides tuple literals, which means you can create tuples through simple syntactic sugar—just the normal mathematical notation: Java 8 in Action Lambdas, Streams and Functional-style Programming


  2. The following hexadecimal, octal, and binary literals represent the decimal value -1: The Java Language Specification Java SE 7 Edition


  3. You must use double quotes or the shell will interpret them as literals. HBase:The Definitive Guide


  4. Note that class literals are used as the values for the annotation parameter: Effective Java 2nd Edition


  5. These features include simple improvements such as using underscores in literals and the use of strings with switch statements. Java 7 New Features Cookbook


  6. Finally, the keywords used to write statements, the punctuation characters that represent operators, and the literals values that appear in a program are all tokens, which are described next. Java in a Nutshell 5th Edition


  7. This includes applying the concatenation operator and converting other literals to strings. Java Performance Tuning 2nd Edition


  8. The details for xpath don’t concern us here, but note that we pass string literals to the xpath function and use FROM src LIMIT 1 to specify the required FROM clause and to limit the output. Programming Hive


  9. Java will automatically intern String literals, and this can cause variation when using the == comparison operator. Java 7 Recipes A Problem-Solution Approach


  10. In other programming languages that support function literals, you can declare function types such as (String, String) -> int, declare variables of those types, and use the variables to save function expressions. Java SE 8 for the Really Impatient
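
A short sketch of the literal forms mentioned above: binary and hexadecimal literals, underscores in numeric literals (Java 7+), and the string-literal interning caveat from the last quote; all values are illustrative.

```java
public class LiteralsDemo {
    public static void main(String[] args) {
        int mask = 0b1010_1010;          // binary literal with underscores (Java 7+)
        long budget = 1_000_000_000L;    // underscores make large decimal literals readable
        int color = 0xFF_EC_DE;          // hexadecimal literal

        System.out.println("mask   = " + mask);
        System.out.println("budget = " + budget);
        System.out.println("color  = " + color);

        // String literals are interned, so == happens to work for constant expressions,
        // but equals() is the correct comparison for string contents
        String a = "java";
        String b = "ja" + "va";          // compile-time constant, also interned
        System.out.println((a == b) + " " + a.equals(b));
    }
}
```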



253、单词 recursion 的匹配文本:


  1. We now turn to recursion in more detail, a technique promoted in functional programming to let you think more in terms of this what to do style. Java 8 in Action Lambdas, Streams and Functional-style Programming


  2. Learning MapReduce is a lot like learning recursion for the first time: it is challenging to find the recursive solution to the problem, but when it comes to you, it is clear, concise, and elegant. MapReduce Design Patterns


  3. Run the application, supplying a node to start at using the startAt parameter and a depth for recursion into the graph: Cassandra High Performance Cookbook


  4. The reason for this is to prevent an infinite recursion between the security manager and the InetAddress class. Java Security 2nd Edition


  5. Overview Chapter 1 contains review material on discrete math and recursion. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. So, forced unlock operations upon exceptions can still be correctly implemented, resetting the recursion count to the correct state, with no extra synchronization overhead. Oracle JRockit The Definitive Guide


  7. To prevent this from happening, you can replace the recursion in deepCopy with iteration: Effective Java 2nd Edition


  8. This chapter describes performance-tuning a variety of common code structures: loops, switches, and recursion. Java Performance Tuning 2nd Edition


  9. All methods are finite (no infinite loops or unbounded recursion), and so eventually release. Concurrent Programming in Java Design Principles and Pattern


  10. How It Works Before NIO.2, trying to traverse a directory tree involved recursion, and depending on the implementation, it could be very brittle. Java 7 Recipes A Problem-Solution Approach
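
Two small recursion sketches, a classic factorial plus a recursive directory-size walk in the spirit of the NIO quote above; both are illustrative only.

```java
import java.io.File;

public class RecursionDemo {

    // Classic base-case / recursive-case structure
    static long factorial(int n) {
        if (n <= 1) {
            return 1;                          // base case stops the recursion
        }
        return n * factorial(n - 1);           // recursive case
    }

    // Recursion is a natural fit for tree-shaped data such as a directory tree
    static long totalSize(File file) {
        if (file.isFile()) {
            return file.length();
        }
        long sum = 0;
        File[] children = file.listFiles();
        if (children != null) {
            for (File child : children) {
                sum += totalSize(child);       // recurse into each subdirectory
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println("5! = " + factorial(5));
        System.out.println("bytes under current dir: " + totalSize(new File(".")));
    }
}
```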



254、单词 solaris 的匹配文本:


  1. In UNIX (including Linux, Mac OS X, and Solaris), the procedure for editing the executable path depends on the shell you are using. Core Java Volume I Fundamentals 9th Edition


  2. Convenient binary packages for Solaris are distributed in the OpenCSW collection. Monitoring with Ganglia


  3. Scale Unlimited also sponsors a live CD image of a Solaris installation with a three-node Hadoop cluster in zones (http://opensolaris.org/os/project/livehadoop/) Pro Hadoop


  4. Other operating systems such as Solaris, FreeBSD, or Windows do work, but are less often deployed in production. Cassandra High Performance Cookbook


  5. Most of Google is implemented in C or C++ for efficiency and can run in either Solaris or Linux. The anatomy of a large-scale hypertextual Web search engine


  6. Solaris Solaris is offered by Oracle, and is available for a limited number of architecture platforms. HBase:The Definitive Guide


  7. Supported operating systems on EC2   include more than six variants of Linux, plus Windows Server and OpenSolaris. Hadoop in Action


  8. If you don’t have the Sun J2SE installed and you’re using Linux, Solaris, or Windows, download and install it from the following URL: http://www.oracle.com/technetwork/java/javase/downloads/ index.html. ActiveMQ in Action


  9. VirtualBox is free for all these platforms, and also Solaris. Programming Hive


  10. Every complete source code example in this book has been compiled and executed on Windows and Solaris platforms, and in many cases has been used to interoperate between both platforms. Fundamental Networking in Java



255、单词 mongos 的匹配文本:


  1. This means that, with sharding, "$group" will first be run on each shard and then the individual shards’ groups will be sent to the mongos to do the final grouping and the remainder of the pipeline will be run on the mongos (not the shards) MongoDB The Definitive Guide 2nd Edition


  2. To accomplish this, MongoDB comes with a special routing process called mongos. Scaling MongoDB


  3. The mongos daemon is responsible for managing which MongoDB server is sent the commands from your application, and this daemon will reissue queries that cross multiple shards to multiple servers and aggregate the results together. The Definitive Guide to MongoDB


  4. A separate routing server called mongos is also used to send requests to the appropriate shard. MongoDB in Action



256. Matching text for the word iterable:


  1. The collection expression must be an array or an object of a class that implements the Iterable interface, such as ArrayList. Core Java Volume I Fundamentals 9th Edition


  2. FlumeJava includes additional operations such as asIterable() that can be used to inspect parts of larger PCollections. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  3. The difference is that, unlike a map method that is invoked with a single key/value pair, a reduce method is invoked with a single key and an iterable set of values (remember, a reducer is invoked after execution of shuffle and sort, at which point, all the input key/value pairs are sorted, and all the values for the same key are partitioned to a single reducer and come together) Professional Hadoop Solutions


  4. Without the assistance of the combiner, this would present the reducer with an iterable collection of 1s to be counted. Hadoop RealWorld Solutions Cookbook


  5. Classes that implement the Iterable interface can have the enhanced for loop used on them to view all their items. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. This is a common mistake; the Iterable was erroneously replaced by an Iterator class. HBase:The Definitive Guide


  7. For that reason, the Files class has a method that yields an Iterable object instead. Core Java Volume II Advanced Features 9th Edition


  8. In addition, the reduce() method no longer accepts the list of values as an Iterator but as an Iterable, which is easier to iterate through using Java’s foreach syntax. Hadoop in Action


  9. If the element type of the Iterable src exactly matches that of the stack, it works fine. Effective Java 2nd Edition


  10. The Reducer interface uses an Iterator object instead of an Iterable object; this was changed as the latter works with the Java for each syntax and makes for cleaner code. Hadoop Beginners Guide
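
As several of the quotes note, any class that implements java.lang.Iterable can be used with the enhanced for loop. A minimal illustrative sketch (not from the quoted books):

    import java.util.Iterator;
    import java.util.NoSuchElementException;

    // Because Range implements Iterable<Integer>, the enhanced for loop below works on it.
    public class Range implements Iterable<Integer> {
        private final int from, to;

        public Range(int from, int to) { this.from = from; this.to = to; }

        @Override
        public Iterator<Integer> iterator() {
            return new Iterator<Integer>() {
                private int next = from;
                @Override public boolean hasNext() { return next < to; }
                @Override public Integer next() {
                    if (!hasNext()) throw new NoSuchElementException();
                    return next++;
                }
            };
        }

        public static void main(String[] args) {
            for (int i : new Range(1, 5)) {   // the enhanced for loop works on any Iterable
                System.out.println(i);
            }
        }
    }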



257. Matching text for the word jobtracker:


  1. Check the job status via the MapReduce admin page—http://master1:50030/jobtracker.jsp: HBase Administration Cookbook


  2. You can also keep track of your crawling from the browser by opening the Jobtracker component of Apache Hadoop. Web Crawling and Data Mining with Apache Nutch


  3. The JobtrackerHost is the host on which the JobTracker server process will be run. Pro Hadoop


  4. Check the status of a job by opening the JobTracker URL, master:50030/jobtracker.jsp. Hadoop Operations and Cluster Management Cookbook


  5. Verify that the new scheduler has been applied by going to http://<jobtracker-host>:50030/scheduler in your installation. Hadoop MapReduce Cookbook


  6. In CDH you can access the JobTracker UI at http://localhost:50030/jobtracker.jsp. Hadoop in Practice


  7. In this book, we use the lowercase form, “jobtracker,” to denote the entity when it’s being referred to generally, and the CamelCase form JobTracker to denote the Java class that implements it. Hadoop:The Definitive Guide 3rd Edition


  8. It has locations of data blocks stored in several datanodes and often runs on the same server as the jobtracker in smaller clusters. Cassandra The Definitive Guide


  9. A master process, called the jobtracker in Hadoop MapReduce, is responsible for accepting these submissions (more on the role of the jobtracker later) Hadoop Operations


  10. It runs the namenode and jobtracker daemons, but it also has Hive installed on it. Programming Hive



258. Matching text for the word sbt:


  1. To run the examples, readers should have Scala installed and, optionally, SBT (http://scalasbt.org) Scala in Depth


  2. This chapter will show you how to build and organize a Scala project and it introduces the popular Simple Build Tool (SBT) Scala in Action


  3. Fortunately, the Kafka source distribution includes sbt and can be built with the following command: Storm Blueprints Patterns for Distributed Real-time Computation


  4. In the folder that mongo-hadoop was cloned to, open the build.sbt file with a text editor. Hadoop RealWorld Solutions Cookbook


  5. Once the download is complete, install Scala Build Tool (sbt), then unpack and build the Kafka server by executing the following command: Storm Real-Time Processing Cookbook


  6. More typically, you'll build and run your Scala code using sbt, the build tool for Scala, and/or an IDE like IntelliJ or Eclipse. Functional Programming in Scala


  7. The Scala-oriented SBT, available at http://code.google.com/p/simple-build-tool/, has some similarities to Buildr. Programming Scala


  8. Installing the prerequisites Kafka is implemented in Scala and uses the ./sbt tool for building Kafka binaries. Apache Kafka


  9. Simple Build Tool (sbt, http://code.google.com/p/simple-build-tool/) is a Scala-based build tool that focuses on dependency management. Beginning Scala



259. Matching text for the word orm:


  1. Use of Apache Gora Although there are many excellent ORM frameworks for relational databases and data modeling, data stores in NoSQL are profoundly different from their relative cousins. Web Crawling and Data Mining with Apache Nutch


  2. Object-relational mapping (ORM) systems (and the “frameworks” that use them) are another frequent performance nightmare. High Performance MySQL 3rd Edition


  3. By the 2000s, we’ve seen popular ORM frameworks such as Hibernate, iBATIS, and Rails Active Record that reduce much of that burden. NoSQL Distilled


  4. The Metastore thus becomes an application that runs on an RDBMS and uses an open source ORM layer called DataNucleus ([8]), to convert object representations into a relational schema and vice versa. Hive A Petabyte Scale Data Warehouse Using Hadoop


  5. You can define the callback listeners using the XML descriptors bundled in META-INF/orm.xml: Java EE 7 Essentials


  6. Object-relational mapping (ORM) techniques such as Active Record often use SELECT * by default to populate the fields of an object representing a row in a database. SQL Antipatterns


  7. And if you’re an application developer, you’ll no doubt be familiar with the many object-relational mapping (ORM) frameworks that have sprung up in recent years to help ease the difficulty in mapping application objects to a relational model. Cassandra The Definitive Guide


  8. In EclipseLink a query results cache can be specified through the object relationship mapping file, orm.xml. Java Performance


  9. The technique of bridging the gap between the object model and the relational model is known as object-relational mapping, often referred to as O-R mapping or simply ORM. Pro JPA2


  10. In addition, less time need be spent shoehorning the relational representation of data into an object-oriented data model or dealing with the vagaries, and optimizing the SQL produced by, an ORM. MongoDB in Action
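
The quotes describe ORM as bridging the object model and the relational model. A minimal, illustrative JPA entity sketch (the class, table, and column names are invented for this example) shows what that mapping typically looks like in Java:

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // The ORM layer maps this class to the ACCOUNT table, so application code
    // works with objects instead of hand-written SQL rows.
    @Entity
    @Table(name = "ACCOUNT")
    public class Account {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        @Column(name = "OWNER_NAME", nullable = false)
        private String ownerName;

        @Column(name = "BALANCE")
        private double balance;

        protected Account() { }                      // JPA requires a no-arg constructor

        public Account(String ownerName, double balance) {
            this.ownerName = ownerName;
            this.balance = balance;
        }

        public Long getId() { return id; }
        public String getOwnerName() { return ownerName; }
        public double getBalance() { return balance; }
    }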



260. Matching text for the word zset:


  1. Figure 1.5 An example of a ZSET with two members/scores under the key zset-key. Redis in Action


  2. This will tell you whether that key is a string, hash, list, set, or zset. Redis Cookbook
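
A small, hedged sketch of working with a Redis ZSET from Java, assuming the Jedis client library and a Redis server on localhost:6379 (the key and member names are arbitrary):

    import redis.clients.jedis.Jedis;

    // Members of a ZSET are kept ordered by their numeric score.
    public class ZsetDemo {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.zadd("zset-key", 728.0, "member1");   // add member with score 728
                jedis.zadd("zset-key", 982.0, "member0");   // add member with score 982

                // Read a single member's score back.
                Double score = jedis.zscore("zset-key", "member0");
                System.out.println("member0 -> " + score);

                // Members come back ordered by score (lowest first).
                System.out.println(jedis.zrange("zset-key", 0, -1));
            }
        }
    }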



261. Matching text for the word udfs:


  1. Conceptually, such user defined functions (UDFs) can be combined with SQL queries, but the experience reported in the comparison paper indicates that UDF support is either buggy (in DBMS-X) or missing (in Vertica) MapReduce a flexible data processing tool


  2. Finally, the SQL parser supports custom domain-specific SQL extensions based on User Defined Functions (UDFs), User Defined Table Functions (UDTFs), and custom operators (for example, Mahout’s k-means operator) Professional Hadoop Solutions


  3. Most abstract Pig classes that used to create UDFs now have a method named List<String> getCacheFiles() that can be overridden to load files from HDFS into the distributed cache. Hadoop RealWorld Solutions Cookbook


  4. A runtime job scheduler and load balancer – Doesn’t need to understand what’s going on in the UDFs or rely on. MapReduce A Major Step Backwards


  5. Generic UDFs are more efficient as they don't require reflection to call the evaluate. Hadoop in Practice


  6. Virtually all parts of the processing path are customizable: loading, storing, filtering, grouping, and joining can all be altered by userdefined functions (UDFs) Hadoop:The Definitive Guide 3rd Edition


  7. More unique and crucial to understanding Pig are the design choices of its programming language (a data flow language called Pig Latin), the data types it supports, and its treatment of user-defined functions (UDFs ) as first-class citizens. Hadoop in Action


  8. As can be seen from the HiveQL statement, we can use our new function just like any other and indeed the only way to know which functions are built-in and which are UDFs is through familiarity with the standard Hive function library. Hadoop Beginners Guide


  9. Hive_test is ideal for testing code such as UDFs, input formats, SerDes, or any component that only adds a pluggable feature for the language. Programming Hive


  10. Ideally, the data analysis system should also have a robust mechanism for allowing the user to write user defined functions (UDFs) and queries that utilize UDFs should automatically be parallelized across the processing nodes in the shared-nothing cluster. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
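
A minimal sketch of a classic (reflection-based) Hive UDF of the kind contrasted with generic UDFs in the quotes above; the class name and the registered function name are hypothetical:

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hive locates an evaluate() method by reflection on classes extending UDF.
    public class Lower extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;                       // pass NULLs through unchanged
            }
            return new Text(input.toString().toLowerCase());
        }
    }

    // Registered and used from HiveQL roughly like:
    //   ADD JAR my-udfs.jar;
    //   CREATE TEMPORARY FUNCTION lowercase_udf AS 'Lower';
    //   SELECT lowercase_udf(name) FROM people;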



262. Matching text for the word mongod:


  1. When run with no arguments, mongod will use the default data directory, /data/db/ (or \data\db\ on the current volume on Windows) MongoDB The Definitive Guide 2nd Edition


  2. To run the MongoDB server, simply go to the bin directory of the directory you just unpacked and run the file ./mongod. Pro Spring Security


  3. Our tutorial assumes that you’ve configured a working MongoDB database and that your mongod daemon process is running when you work with MongoDB and Roo. Spring Roo in Action


  4. Keep in mind that this isn’t as simple as setting up a single mongod (then again, what is?) Scaling MongoDB


  5. Thus, if you want to run the mongod service, you will need to do so directly from the aforementioned bin directory. The Definitive Guide to MongoDB


  6. The core database server runs via an executable called mongod (mongodb.exe on Windows) MongoDB in Action


  7. To bootstrap MongoDB, you need to create a folder to contain the data and then start the mongod binary, pointing it to the just-created directory (see Example 6-3) Spring Data Modern Data Access for Enterprise Java



263. Matching text for the word vms:


  1. This exception indicates to the programmer that the container does not support moving the object between VMs. Java EE 7 Specification


  2. The idea of exposing a general-purpose lock service is found in VMS [23], Chubby - The lock service for loosely-coupled distributed systems


  3. Even then, the new version was faster than the original, and VMs have become much faster in the intervening years. Effective Java 2nd Edition


  4. Use of this flag sets the class file version number so that the resulting class file cannot be run by VMs from previous releases. Java in a Nutshell 5th Edition


  5. Virtual machines (VMs) can then be deployed within a hypervisor and have access to whatever hardware resources are allocated to them by the hypervisor. Hadoop Operations


  6. For one thing, it exacts a severe performance penalty in many VMs (though more recent VMs have improved greatly in this respect), potentially slowing down your code by a factor of three or more. Java Network Programming 4th Edition


  7. Chapter 3 looks at the SDK, including Virtual Machines (VMs) and compilers. Java Performance Tuning 2nd Edition


  8. A high level simplistic description of the lock optimization added to Java 5 HotSpot VMs and later is given as follows; spin in a tight loop trying to acquire a lock, if not successful after a number of tight loop spins, park the thread and wait to be notified when to try acquiring the lock again. Java Performance


  9. Most of the preconfigured virtual machines (VMs) available are only designed for VMWare, but if you prefer VirtualBox you may find instructions on the Web that explain how to import a particular VM into VirtualBox. Programming Hive


  10. You will need to tightly manage the application packaging and be ready to update client VMs when security updates become available. Java SE 8 for the Really Impatient



264. Matching text for the word openid:


  1. Yale CAS also has the significant advantage of having a pluggable set of handlers that authenticate across a range of backend servers, such as LDAP, OpenID (http://openid. Jenkins Continuous Integration Cookbook


  2. Spring Security primarily addresses two major areas of security: authentication and authorization.11 Concerning web authentication, it provides a built-in username/password login form with optional remember-me functionality, support for CAS-based SSO, OpenID authentication, and others. Spring in Practice


  3. Chapter 8, Opening up To OpenID, covers OpenID-enabled login and user attribute exchange, as well as a high-level overview of the logical flow of the OpenID protocol. Spring Security 3.1


  4. Authenticating with OpenID OpenID is an authentication solution that exists to address the problems inherent in having many user accounts in many different sites with many different sets of credentials. Pro Spring Security


  5. In the context of keystone this is a call that is specific to the implementation, such as adding support for OpenID. OpenStack Operations Guide



265. Matching text for the word datanode:


  1. The command hadoop-daemon.sh start datanode will start a DataNode server on a machine, if one is not already running. Pro Hadoop


  2. This configuration only allows the DataNode instances running as users belonging to the group datanode to communicate with the NameNode in the cluster. Hadoop Operations and Cluster Management Cookbook


  3. Another two EC2 small instances are used with roles hadoop-datanode and hadoop-tasktracker in each instance. Hadoop MapReduce Cookbook


  4. There are two types of client: external clients (run by users) and internal clients (run on the cluster as a part of the system, e.g., datanode and tasktracker daemons) Hadoop:The Definitive Guide 3rd Edition


  5. No Datanode contains more than one replica of any block. The Hadoop Distributed File System


  6. Eventually, you will see a complaint in the datanode logs about the xcievers limit being exceeded, but on the run up to this one manifestation is a complaint about missing blocks. HBase:The Definitive Guide


  7. At least one node in the cluster needs to be running the datanode process. Cassandra The Definitive Guide


  8. In HDFS, the daemon responsible for storing and retrieving block data is called the datanode (DN) Hadoop Operations


  9. Core Instance Group The nodes in the core instance group have the same function as Hadoop slave nodes that run both the datanode and tasktracker daemons. Programming Hive



266. Matching text for the word adapters:


  1. If you frequently write event listeners that use adapters, you'll find that you often need to create new Java source files, which can become difficult to manage. Java Network Programming and Distributed Computing


  2. Most icons are very similar to what you are used to, which helps ease the learning curve for new adapters. Performance Testing With JMeter 2.9


  3. We can use adapters to adapt existing classes to produce the desired interface, with a relatively small amount of code. Thinking in Java 4th Edition


  4. Resource adapters may also provide services that are entirely local. Java EE 7 Specification


  5. These producers can be different in nature; for example, frontend applications, backend services, proxy applications, adapters to legacy systems, and producers for Hadoop. Apache Kafka


  6. As the list of adapters grew, so did the versions of each, causing them to become difficult to maintain. ActiveMQ in Action


  7. There are also many third-party adapters available for MongoDB that let you use common opensource or commercial monitoring systems, including tools such as Nagios, ganglia, and cacti. The Definitive Guide to MongoDB


  8. This flexibility also aids in integration with other tools, as Hive can query many native file types without the need to write custom “adapters” in Java. Programming Hive


  9. This is only one of many common contexts for building Adapters, which also form the basis of several related patterns presented in the Design Patterns book. Concurrent Programming in Java Design Principles and Pattern


  10. If your key function can return null, you will like the nullsFirst and nullsLast adapters. Java SE 8 for the Really Impatient
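
Several quotes use "adapter" in the design-pattern sense: wrapping an existing class so it presents the interface callers expect. A minimal object-adapter sketch with invented class names:

    // LegacyLogger already exists with its own method; the adapter exposes it
    // through the interface that client code expects.
    interface MessageSink {
        void accept(String message);
    }

    class LegacyLogger {                       // existing class we cannot change
        void writeLine(String line) {
            System.out.println("[legacy] " + line);
        }
    }

    class LegacyLoggerAdapter implements MessageSink {
        private final LegacyLogger delegate;

        LegacyLoggerAdapter(LegacyLogger delegate) { this.delegate = delegate; }

        @Override
        public void accept(String message) {
            delegate.writeLine(message);       // translate the call to the legacy API
        }
    }

    public class AdapterDemo {
        public static void main(String[] args) {
            MessageSink sink = new LegacyLoggerAdapter(new LegacyLogger());
            sink.accept("hello through the adapter");
        }
    }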



267. Matching text for the word javafx:


  1. Oracle is developing an alternate technology, called JavaFX, that may at some point become a replacement for Swing. Core Java Volume I Fundamentals 9th Edition


  2. To provide a richer interface to the users of a web application, unless you are using rich Internet application (RIA) technologies that require special runtimes on the web browser client (for example, Adobe Flex requires Flash, JavaFX requires JRE, Microsoft Silverlight requires Silverlight, and so on), you need to use JavaScript to implement the features. Pro Spring 3


  3. Anonymous classes are often used in the context of GUI applications to create event-handler objects (here using the JavaFX API, a modern UI platform for Java): Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Standalone desktop applications In my mind, the most compelling reason for using the embedded Solr approach is when you have a rich client application developed using technologies such as Swing or JavaFX which is running in a much more constrained client environment. Solr 3 Enterprise Search Server


  5. JavaFX 2.0 can be used for developing rich desktop and Internet applications using the Java language, or any other language that runs on the JVM. Java 7 Recipes A Problem-Solution Approach


  6. NOTE: Since Java 8 positions JavaFX as the successor to the Swing GUI toolkit, I use JavaFX in these examples. Java SE 8 for the Really Impatient
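
Example 3 above mentions anonymous-class event handlers in JavaFX and their lambda equivalents. A minimal sketch, assuming JavaFX is available on the class path or module path:

    import javafx.application.Application;
    import javafx.event.ActionEvent;
    import javafx.event.EventHandler;
    import javafx.scene.Scene;
    import javafx.scene.control.Button;
    import javafx.stage.Stage;

    // The same button handler written as an anonymous class and as a Java 8 lambda.
    public class HelloFx extends Application {
        @Override
        public void start(Stage stage) {
            Button button = new Button("Click me");

            // Anonymous-class style:
            button.setOnAction(new EventHandler<ActionEvent>() {
                @Override
                public void handle(ActionEvent event) {
                    System.out.println("clicked (anonymous class)");
                }
            });

            // Equivalent lambda style (replaces the handler above):
            button.setOnAction(event -> System.out.println("clicked (lambda)"));

            stage.setScene(new Scene(button, 200, 100));
            stage.show();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }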



268. Matching text for the word firefox:


  1. You can view it with any web browser, such as Firefox. Core Java Volume I Fundamentals 9th Edition


  2. Using a browser extension Google Chrome and Firefox have vibrant browser plugin ecosystems that allow you to extend the capabilities of your browser with each plugin you choose. Performance Testing With JMeter 2.9


  3. All tests are configured to work exclusively against Mozilla Firefox. Gradle in Action


  4. Getting ready To validate the configuration from the web interface, a web browser such as Firefox or Google Chrome is needed. Hadoop Operations and Cluster Management Cookbook


  5. However, when I do so, Firefox (my web browser of choice on Windows) gives a page load error. Beginning CouchDB


  6. The Mozilla Foundation is responsible for the Firefox web browser and Thunderbird email client. HBase in Action


  7. Chrome output is shown on the top and Firefox on the bottom. Java EE 7 Essentials


  8. Two of the more popular ones are both Firefox extensions: Elasticfox and S3Fox. Hadoop in Action


  9. Currently, no browser natively supports XForms, though plug-ins are available for Firefox, and libraries exist to allow serverside translation of XForms into XHTML. Cassandra The Definitive Guide


  10. In this model, applications like Firefox and Warcraft run in the application layer and talk only to the transport layer. Java Network Programming 4th Edition



269. Matching text for the word mcollective:


  1. I’ll cover the basics of using Puppet manifests for configuration management and techniques for executing and managing those configurations with MCollective and Facter. Managing Infrastructure with Puppet


  2. Similarly, Marionette Collective (MCollective) is an orchestration framework closely related to Puppet. Pro Puppet



270. Matching text for the word aggregates:


  1. Fortunately, with rrdtool as the backend for Ganglia, it is possible to manually define a graph that aggregates metrics for all the LUNs backing each database. Monitoring with Ganglia


  2. On top of that, you want to generate a single test report that aggregates all test results. Gradle in Action


  3. The master aggregates the counter values from successful map and reduce tasks and returns them to the user code when the MapReduce operation is completed. MapReduce - Simplied Data Processing on Large Clusters


  4. Transaction, distribution, and concurrency boundaries are also used in domain-driven design when defining Aggregates. Scaling CouchDB


  5. The first phase evaluates the analysis on each record individually, while the second phase aggregates the results (Figure 2) Interpreting the Data Parallel Analysis with Sawzall


  6. New failures are forwarded to the leader, who aggregates updates. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  7. With millions of unique aggregates and billions of events, this meant a very large number of counters with an even larger number of operations against them. Apache Hadoop Goes Realtime at Facebook


  8. This is nearly all accessed via simple key lookup.” Currently Imagini aggregates data from a variety of sources into Hadoop’s distributed filesystem, HDFS. Cassandra The Definitive Guide


  9. Shutdown Once all tasks have completed successfully, the JobTracker outputs the final state of the job to the client, along with the final aggregates of some of the more important counters that it has been aggregating along the way. Hadoop Beginners Guide


  10. Small materialized aggregates: A light weight index structure for data warehousing. PowerDrill - Processing a Trillion Cells per Mouse Click



271. Matching text for the word servlets:


  1. Tomcat is a container for servlets and JSP pages, but it also serves web pages. Core Java Volume I Fundamentals 9th Edition


  2. The information presented here is expected to benefit both novice and experienced Java practitioners; Java programmers with experience writing applets may be unaware of the range of Java programming environments available, including stand-alone applications, JavaBean software components, and Java servlets. Java Network Programming and Distributed Computing


  3. The WTP plugin simplifies the creation of typical web artifacts like web descriptors, Servlets, and JSP files. Gradle in Action


  4. Over the years his focus switched from applets, to servlets, to Java Enterprise Edition applications, to Spring-based software. Gradle Effective Implementation Guide


  5. Similar access to databases is provided to JSP and JSF applications, and servlets by the Web Container, and to enterprise beans by the EJB Container. Java EE 7 Specification


  6. Servlets • Defines a standard mechanism to upgrade existing HTTP connection to a different protocol using HttpUpgradeHandler. Java EE 7 Essentials


  7. The Java programming language is the language in which Java applications, applets, servlets, and components are written. Java in a Nutshell 5th Edition


  8. This is much like Java application servers that invoke servlets upon receiving an HTTP request; the container is responsible for setup and teardown as well as providing a runtime environment for user-supplied code. Hadoop Operations


  9. For instance, you wouldn’t change the ProxySelector in a servlet because that would change the ProxySelector for all servlets running in the same container. Java Network Programming 4th Edition


  10. The book also provides solutions for desktop application development using the Swing API, and web-based solutions including servlets and applets. Java 7 Recipes A Problem-Solution Approach
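
A minimal servlet sketch against the javax.servlet API referenced in the quotes; the /hello mapping is arbitrary, and a servlet container (Tomcat, Jetty, and so on) supplies the runtime:

    import java.io.IOException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // The container instantiates this class and calls doGet() for matching HTTP GET requests.
    @WebServlet("/hello")
    public class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            response.setContentType("text/plain");
            response.getWriter().println("Hello from a servlet");
        }
    }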



272. Matching text for the word rollback:


  1. If a failure to write successfully to a channel occurred or if the channel was simply full (a more likely event than failure), the data couldn''t be "put back" as the rollback semantics dictate. Apache Flume Distributed Log Collection for Hadoop


  2. To be able to quickly recover from failed deployments, a good rollback strategy is essential. Gradle in Action


  3. You should understand that any changes made to the data on the upgraded system will be lost after the rollback completes. Hadoop:The Definitive Guide 3rd Edition


  4. The snapshot mechanism lets administrators persistently save the current state of the file system, so that if the upgrade results in data loss or corruption it is possible to rollback the upgrade and return HDFS to the namespace and storage state as they were at the time of the snapshot. The Hadoop Distributed File System


  5. In that case, you can carry out a rollback and the database automatically undoes the effect of all updates that occurred since the last committed transaction. Core Java Volume II Advanced Features 9th Edition


  6. A key feature of transactions is that they execute virtually at first, allowing the programmer to undo (using ROLLBACK) any changes that may have gone awry during execution; if all has gone well, the transaction can be reliably committed. Cassandra The Definitive Guide


  7. Take it slow, one daemon at a time, and for production clusters, always have a rollback plan. Hadoop Operations


  8. Because Impala is focused on analytic workloads, it currently doesn’t have OLTP-style operations such as DELETE, UPDATE, or COMMIT / ROLLBACK. Cloudera Impala


  9. There are six general responses to such failed actions: abrupt termination, continuation (ignoring failures), rollback, rollforward, retry, and delegation to handlers. Concurrent Programming in Java Design Principles and Pattern


  10. Since cleanup is synchronized on the primary lock, it is safe to clean up locks held by live clients; however, this incurs a performance penalty since rollback forces the transaction to abort. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications
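
Example 5 above describes rolling back all updates since the last committed transaction. A minimal JDBC sketch of that pattern; the connection URL, credentials, and SQL are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // All updates since setAutoCommit(false) are undone if rollback() is called.
    public class TransferDemo {
        public static void transfer(String url, String user, String password) throws SQLException {
            try (Connection conn = DriverManager.getConnection(url, user, password)) {
                conn.setAutoCommit(false);                 // start an explicit transaction
                try (PreparedStatement debit =
                             conn.prepareStatement("UPDATE account SET balance = balance - ? WHERE id = ?");
                     PreparedStatement credit =
                             conn.prepareStatement("UPDATE account SET balance = balance + ? WHERE id = ?")) {
                    debit.setDouble(1, 100.0);  debit.setLong(2, 1L);  debit.executeUpdate();
                    credit.setDouble(1, 100.0); credit.setLong(2, 2L); credit.executeUpdate();
                    conn.commit();                         // both updates become permanent together
                } catch (SQLException e) {
                    conn.rollback();                       // undo everything since the transaction began
                    throw e;
                }
            }
        }
    }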



273. Matching text for the word lookups:


  1. Remember that reverse lookups will add processing time to your data flow. Apache Flume Distributed Log Collection for Hadoop


  2. Reverse DNS Lookups The first time gmond receives a metric packet from any other node, it must do a name lookup to find the hostname corresponding to the packet’s source address. Monitoring with Ganglia


  3. Lookups by key, or key range, are extremely efficient operations with a B-tree, described in big O notation as O(log N) and O(log N + K), respectively. CouchDB The Definitive Guide


  4. This ordering guarantee makes it easy to generate a sorted output file per partition, which is useful when the output file format needs to support efficient random access lookups by key, or users of the output find it convenient to have the data sorted. MapReduce - Simplied Data Processing on Large Clusters


  5. In order to prevent lookups into files that do not contain the key, a bloom filter, summarizing the keys in the file, is also stored in each data file and also kept in memory. Cassandra A Decentralized Structured Storage System


  6. This data structure lives in memory and makes prefix lookups extremely fast, much faster than any term-based query could be. Elasticsearch The Definitive Guide


  7. For example, it is common for our developers to run jobs involving thousands of processes, and for each process to communicate with every other, leading to a quadratic number of DNS lookups. Chubby - The lock service for loosely-coupled distributed systems


  8. The filters are stored in memory and are used to improve performance by reducing disk access on key lookups. Cassandra The Definitive Guide


  9. Because the JNDI provider for each application server has already been configured (see the earlier sections related to the different application servers), this simple Spring configuration is powerful—it makes easy work of performing JNDI lookups. ActiveMQ in Action


  10. Optionally, an SSTable can be completely mapped into memory, which allows us to perform lookups and scans without touching disk. Bigtable - A Distributed Storage System for Structured Data



274. Matching text for the word monad:


  1. Option is a monad because it has both a flatten and flatMap operation that abide by the monadic laws. Scala in Depth


  2. I talk about Monads at length in the next chapter, but for now think of a Monad as a simple container. Scala in Action


  3. In functional programming, there’s a general concept called a monad, which can explain a large number of types with computations, ranging from collections, to computations with state and I/O, backtracking computations, and transactions, to name but a few. Programming in Scala 2nd Edition


  4. Since Monad provides a default implementation of map, it can extend Functor. Functional Programming in Scala


  5. The examples we saw in Sample Code: Chaining Using Sequence Comprehensions, on page 162, are examples of the sequence or list monad. Functional Programming Patterns in Scala and Clojure


  6. Among its features are enhancements to several core Scala types, such as Boolean, Unit, String, and Option, plus support for functional control abstractions, such as FoldLeft, FoldRight, and Monad, that expand upon what is available in the standard library. Programming Scala


  7. Examples of monads List comprehensions, that is,for comprehensions that create lists, can be expressed in terms of the monad presented above: Steps in Scala An Introduction to Object-Functional Programming



275. Matching text for the word hashing:


  1. Different databases have traditionally approached this in a variety of different ways, including binary logs, data streams, row-based logging, and more complex hashing techniques. Getting Started with CouchDB


  2. This is a secret value used for hashing authenticated user names. Monitoring with Ganglia


  3. The batch scheduling work is load-balanced across the schedulers using a simple hashing function. Omega - flexible, scalable schedulers for large compute clusters


  4. Consistent Hashing CouchDB’s storage model uses unique IDs to save and retrieve documents. CouchDB The Definitive Guide


  5. A default partitioning function is provided that uses hashing (e.g. MapReduce - Simplied Data Processing on Large Clusters


  6. Cassandra uses a Consistent Hashing to divide data across the ring. Cassandra High Performance Cookbook


  7. Both approaches have critical limitations: static subtree partitioning fails to cope with dynamic workloads and data sets, while hashing destroys metadata locality and critical opportunities for efficient metadata prefetching and storage. Ceph A Scalable, High-Performance Distributed File System


  8. Cassandra partitions data across the cluster using consistent hashing [11] but uses an order preserving hash function to do so. Cassandra A Decentralized Structured Storage System


  9. Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  10. This routing string is passed through a hashing function to generate a number, which is divided by. Elasticsearch The Definitive Guide
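
Several quotes mention consistent hashing for partitioning keys across nodes. A deliberately simplified Java sketch of the ring idea (no virtual nodes, MD5 truncated to 32 bits, invented node names):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.Map;
    import java.util.TreeMap;

    // Each key is assigned to the first node clockwise from its position on the ring.
    public class ConsistentHashRing {
        private final TreeMap<Integer, String> ring = new TreeMap<>();

        void addNode(String node) { ring.put(hash(node), node); }

        String nodeFor(String key) {
            if (ring.isEmpty()) throw new IllegalStateException("no nodes");
            Map.Entry<Integer, String> entry = ring.ceilingEntry(hash(key));
            return entry != null ? entry.getValue() : ring.firstEntry().getValue(); // wrap around
        }

        private static int hash(String s) {
            try {
                byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
                return ((d[0] & 0xff) << 24) | ((d[1] & 0xff) << 16) | ((d[2] & 0xff) << 8) | (d[3] & 0xff);
            } catch (NoSuchAlgorithmException e) {
                throw new AssertionError(e);              // MD5 is required to be present in the JDK
            }
        }

        public static void main(String[] args) {
            ConsistentHashRing ring = new ConsistentHashRing();
            ring.addNode("node-a");
            ring.addNode("node-b");
            ring.addNode("node-c");
            for (String key : new String[] {"user:1", "user:2", "user:3"}) {
                System.out.println(key + " -> " + ring.nodeFor(key));
            }
        }
    }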



Number of works cited: 163
1、A Java ForkJoin Framework
2、A Little Riak Book
3、ActiveMQ in Action
4、Apache Flume Distributed Log Collection for Hadoop
5、Apache Hadoop Goes Realtime at Facebook
6、Apache Hadoop YARN Yet Another Resource Negotiator
7、Apache Kafka
8、Apache Solr 4 Cookbook
9、Apache Sqoop Cookbook
10、BTrace Java Platform Observability by Bytecode Instrumentation
11、Beginning CouchDB
12、Beginning Scala
13、Bigtable - A Distributed Storage System for Structured Data
14、BookKeeper
15、Building LinkedIn’s Real-time Activity Data1
16、Building and Testing with Gradle
17、Cassandra A Decentralized Structured Storage System
18、Cassandra High Performance Cookbook
19、Cassandra The Definitive Guide
20、Ceph A Scalable, High-Performance Distributed File System
21、Chubby - The lock service for loosely-coupled distributed systems
22、Cloudera Impala
23、Concurrent Programming in Java Design Principles and Pattern
24、Core Java Volume I Fundamentals 9th Edition
25、Core Java Volume II Advanced Features 9th Edition
26、CouchDB The Definitive Guide
27、Dapper a Large-Scale Distributed Systems Tracing Infrastructure
28、Data Structures and Algorithm Analysis in Java 3rd Edition
29、Deploying OpenStack
30、Dremel - Interactive Analysis of WebScale Datasets
31、Effective Java 2nd Edition
32、Elasticsearch The Definitive Guide
33、Elasticsearch in Action MEAP V07 Chapter 1
34、F1 - The Fault-Tolerant Distributed RDBMS
35、FlumeJava - Easy, Efficient Data-Parallel Pipelines
36、Functional Programming Patterns in Scala and Clojure
37、Functional Programming in Scala
38、Fundamental Networking in Java
39、GFS - The Google File System
40、Getting Started with CouchDB
41、Getting Started with Google Guava
42、Gradle Beyond The Basics
43、Gradle Effective Implementation Guide
44、Gradle in Action
45、Graph Databases
46、HBase Administration Cookbook
47、HBase in Action
48、HBase:The Definitive Guide
49、HDFS scalability the limits to growth
50、Hadoop Beginners Guide
51、Hadoop MapReduce Cookbook
52、Hadoop Operations
53、Hadoop Operations and Cluster Management Cookbook
54、Hadoop RealWorld Solutions Cookbook
55、Hadoop in Action
56、Hadoop in Practice
57、HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
58、Hadoop:The Definitive Guide 3rd Edition
59、High Performance MySQL 3rd Edition
60、Hive A Petabyte Scale Data Warehouse Using Hadoop
61、Incoop MapReduce for Incremental Computations
62、Interpreting the Data Parallel Analysis with Sawzall
63、Java 7 Concurrency Cookbook
64、Java 7 New Features Cookbook
65、Java 7 Recipes A Problem-Solution Approach
66、Java 8 in Action Lambdas, Streams and Functional-style Programming
67、Java EE 7 Essentials
68、Java EE 7 Specification
69、Java EE 7 Web Profile Specification
70、Java Network Programming 4th Edition
71、Java Network Programming and Distributed Computing
72、Java Performance
73、Java Performance Tuning 2nd Edition
74、Java SE 8 for the Really Impatient
75、Java Security 2nd Edition
76、Java in a Nutshell 5th Edition
77、Jenkins Continuous Integration Cookbook
78、Jenkins The Definitive Guide
79、Kafka a Distributed Messaging System for Log Processing
80、Lucene in Action 2nd Edition
81、Mahout in Action
82、Managing Infrastructure with Puppet
83、MapReduce - Simplied Data Processing on Large Clusters
84、MapReduce A Major Step Backwards
85、MapReduce Design Patterns
86、MapReduce a flexible data processing tool
87、Mastering ElasticSearch
88、Megastore - Providing Scalable, Highly Available Storage for Interactive Services
89、MongoDB The Definitive Guide 2nd Edition
90、MongoDB in Action
91、Monitoring with Ganglia
92、Namenode High Availability
93、Netty in Action MEAP V08
94、Next Generation Open Source Messaging with Apollo
95、NoSQL Distilled
96、Omega - flexible, scalable schedulers for large compute clusters
97、OpenStack Operations Guide
98、Oracle JRockit The Definitive Guide
99、Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications
100、Performance Testing With JMeter 2.9
101、PowerDrill - Processing a Trillion Cells per Mouse Click
102、Pregel  -  A System for Large-Scale Graph Processing
103、Pro Hadoop
104、Pro JPA2
105、Pro Java 7 NIO.2
106、Pro Puppet
107、Pro Spring 3
108、Pro Spring MVC
109、Pro Spring Security
110、Professional Hadoop Solutions
111、Programming Concurrency on the JVM
112、Programming Hive
113、Programming Pig
114、Programming Scala
115、Programming in Scala 2nd Edition
116、RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.
117、RabbitMQ in Action
118、Redis Cookbook
119、Redis in Action
120、Resilient Distributed Datasets - A Fault-Tolerant Abstraction for In-Memory Cluster Computing
121、SQL Antipatterns
122、Scala in Action
123、Scala in Depth
124、Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr
125、Scaling CouchDB
126、Scaling MongoDB
127、Selenium 2 Testing Tools Beginner's Guide
128、Selenium Testing Tools Cookbook
129、Solr 3 Enterprise Search Server
130、Solr 3.1 Cookbook
131、Solr in action
132、Sonar Code Quality Testing Essentials
133、Spanner - Google’s Globally-Distributed Database
134、Spring Batch in Action
135、Spring Data Modern Data Access for Enterprise Java
136、Spring Dynamic Modules in Action
137、Spring Integration in Action
138、Spring Roo in Action
139、Spring Security 3.1
140、Spring in Action 4th Edition
141、Spring in Practice
142、Steps in Scala An Introduction to Object-Functional Programming
143、Storage Architecture and Challenges
144、Storm Blueprints Patterns for Distributed Real-time Computation
145、Storm Real-Time Processing Cookbook
146、Tenzing - A SQL Implementation On The MapReduce Framework
147、Testable JavaScript
148、The Definitive Guide to MongoDB
149、The Hadoop Distributed File System
150、The Java Language Specification Java SE 7 Edition
151、The Java Virtual Machine Specification Java SE 7 Edition
152、The anatomy of a large-scale hypertextual Web search engine
153、The anatomy of a large-scale hypertextual Web search engine short
154、Thinking in Java 4th Edition
155、Tika in Action
156、Understanding MySQL Internals
157、Web Crawling and Data Mining with Apache Nutch
158、Web search for a planet the google cluster architecture
159、ZooKeeper Distributed Process Coordination
160、bk-admin-guide-cloud-latest
161、bk-compute-adminguide-grizzly
162、elasticsearch. The Road to a Distributed, (Near) Real Time, Search Engine
163、high-availability-guide

2000 High-Frequency Specialized Terms in Software Development with Selected Example Sentences (Part 3)


superword is an English-word analysis tool implemented in Java. It mainly studies how English words transform through similar sounds and spellings, prefix and suffix patterns, and similarity relationships between words.

551. Matching text for the word ibm:


  1. The Basic Edition includes HDFS, Hbase, MapReduce, Hive, Mahout, Oozie, Pig, ZooKeeper, Hue, and several other open source tools, as well as a basic version of the IBM installer and data access tools. Professional Hadoop Solutions


  2. In the past two years, Joe has written more than a dozen articles and tutorials for the IBM developerWorks web site, including articles on Adobe Flex, Facebook applications, ExtJS, Adobe AIR, IBM DB2, and more. Beginning CouchDB


  3. Microsoft and IBM have their share of the pie too. HBase in Action


  4. Finally we would like to recognize the generous support of our equipment donors IBM, Intel, and Sun and our funders. The anatomy of a large-scale hypertextual Web search engine


  5. There are Hadoop distributions from the large, established enterprise vendors, including EMC, IBM, Microsoft, and Oracle, as well as from specialist Hadoop companies such as Cloudera, Hortonworks, and MapR. Hadoop:The Definitive Guide 3rd Edition


  6. Our case studies in chapter 12 will describe how companies including New York Times, China Mobile, and IBM are using Hadoop. Hadoop in Action


  7. Of course I’m talking about the Information Management System (IMS) hierarchical database, invented in 1966 at IBM. Cassandra The Definitive Guide


  8. The IBM InfoSphere Big Insights distribution is available at http://www-01.ibm.com/software/data/infosphere/ biginsights/ and (like MapR) offers commercial improvements and extensions to the open source Hadoop core. Hadoop Beginners Guide


  9. Solutions for a category of middleware that provides asynchronous, reliable message queuing such as IBM WebSphere MQ, SonicMQ, TIBCO Rendezvous, and Apache ActiveMQ, commonly used for Enterprise Application Integration (EAI) purposes. ActiveMQ in Action


  10. For example, the following query compares Apple (symbol AAPL) and IBM (symbol IBM) Programming Hive



552. Matching text for the word dismax:


  1. Let's create a dismax query that will combine two queries: a simple term query and a prefix query. Mastering ElasticSearch


  2. The dismax query parser, which is described in a later chapter, makes this easy. Solr 3 Enterprise Search Server


  3. I chose the dismax handler (that's its name defined by the name attribute) Solr 3.1 Cookbook


  4. To do that, run the following query to your Solr instance: http://localhost:8983/solr/select?defType=dismax&pf=title^100&q=20 Apache Solr 4 Cookbook



553. Matching text for the word rfc:


  1. Finally, there are some undocumented properties on the syslog sources that allow for adding additional regular expression matching patterns for messages that do not conform to the RFC standards. Apache Flume Distributed Log Collection for Hadoop


  2. Some of the major goals (expressed in RFC 871[1]) were to achieve: Java Network Programming and Distributed Computing


  3. The WebSocket protocol has been standardized by IETF under RFC 6455 Java EE 7 Specification


  4. No matter which CA you use, you''ll eventually be sent back the certificate, which will be in RFC 1421 format. Java Security 2nd Edition


  5. A typical example is how the WebSocket protocol is upgraded from HTTP, as described in the Opening Handshake section of RFC 6455 Java EE 7 Essentials


  6. For the format of the HTTP request headers, see RFC 2616 Core Java Volume II Advanced Features 9th Edition


  7. This permits the String to be parsed using the RFC 2045 grammar specifications. Java 7 New Features Cookbook


  8. This command uses the following options: -alias, -file, -keystore, -rfc, -storepass, -storetype, and -v. Java in a Nutshell 5th Edition


  9. Despite the name, a published RFC is a finished work. Java Network Programming 4th Edition


  10. The original specification for JSON can be found in RFC 4627, and it was written by Douglas Crockford. The Definitive Guide to MongoDB



554. Matching text for the word modularity:


  1. Netty’s pipeline and encoder classes encourage modularity by separating serialization and deserialization from application logic. Netty in Action MEAP V08


  2. As it turns out, one of the strong points of Web Flow (which we haven’t really discussed until now) is its support for composition and modularity. Pro Spring MVC


  3. In fact, event-based architectures help to enforce the separation of concerns and modularity that MVC advocates. Testable JavaScript


  4. Strategy Pattern A big part of the pluggability and modularity of the framework is achieved thanks to the wide use of the Strategy pattern. Pro Spring Security


  5. This gives you a simple and useful kind of modularity, so that you can work with very large bodies of code without different parts of the code trampling on each other. Programming in Scala 2nd Edition


  6. Because of their modularity, pure functions are easier to test, to reuse, to parallelize, to generalize, and to reason about. Functional Programming in Scala


  7. We still have to embed the implementation code in the class somehow, compromising modularity. Programming Scala


  8. In contrast, OSGi provides strong modularity support but it’s not … Spring Dynamic Modules in Action


  9. On the other hand, most scripting languages lack features that are beneficial for programming complex applications, such as strong typing, encapsulation, and modularity. Core Java Volume II Advanced Features 9th Edition


  10. Object-oriented concurrent programming differs from multithreaded systems programming in languages such as C mainly due to the encapsulation, modularity, extensibility, security, and safety features otherwise lacking in C. Concurrent Programming in Java Design Principles and Pattern



555. Matching text for the word coprocessors:


  1. We have set HBase to use the Kerberos authentication, and HBase coprocessors-based authorization. HBase Administration Cookbook


  2. Here you learn about the latest developments in both HDFS (federation) and HBase new file formats, and coprocessors. Professional Hadoop Solutions


  3. Chapter 5 introduces coprocessors, a mechanism for pushing computation out to your HBase cluster. HBase in Action


  4. Coprocessors Earlier we discussed how you can use filters to reduce the amount of data being sent over the network from the servers to the client. HBase:The Definitive Guide



556. Matching text for the word jee:


  1. JEE-specific web components and static content like images and CSS files. Gradle in Action


  2. First, you can use Spring to build any application in Java (e.g., stand-alone, Web, JEE applications, etc.), unlike many other frameworks such as Apache Struts, which is limited to web applications. Pro Spring 3


  3. The framework combines best practices for Java Enterprise Edition (JEE) development from the industry and integration with the best-of-breed third-party frameworks. Pro Spring MVC


  4. In this section we will look at an example of how to deploy your Java web or JEE application to an application server such as Tomcat, JBoss, or GlassFish. Jenkins The Definitive Guide


  5. In this respect, Spring is like JEE and even its earlier J2EE incarnation. Spring in Practice


  6. Bnd scans most of the elements that have attributes relative to class or interface (such as bean@class) in most of the Spring namespaces (beans, aop, context, jee, lang, osgi, util, and webflow-config) Spring Dynamic Modules in Action


  7. That’s understandable considering it’s the predominant Java-based API for messaging and sits among the standards of the Java Enterprise Edition (JEE) Spring Integration in Action


  8. Although J2EE (now known as JEE) was able to catch up with Spring, Spring never stopped moving forward. Spring in Action 4th Edition



557. Matching text for the word func:


  1. The IsUseragentBot class extends the abstract class FilterFunc, which allows us to override the exec(Tuple t) method. Hadoop RealWorld Solutions Cookbook


  2. Note that by-name parameters are implemented as scala.Function0 objects. Programming Scala


  3. Discover how to use the REGISTER and DEFINE statements to register and define an alias for your LoadFunc, and how to use your LoadFunc in combination with the LOAD statement. Hadoop in Practice


  4. Filter UDFs are all subclasses of FilterFunc, which itself is a subclass of EvalFunc. Hadoop:The Definitive Guide 3rd Edition


  5. Version 0.7.0 of Pig introduced the LoadFunc/StoreFunc classes and functionality, which allows you to load and store data from sources other than the usual HDFS. HBase:The Definitive Guide


  6. For pushing the data into the Kafka cluster, the AvroKafkaStorage class (extends Pig''s StoreFunc class) takes the Avro schema as its first argument and connects to the Kafka URI. Apache Kafka


  7. To create an eval UDF you make a Java class that extends the abstract EvalFunc class. Hadoop in Action


  8. The Pig integration code (a LoadFunc implementation) is found in the contrib section of Cassandra’s source download. Cassandra The Definitive Guide


  9. Because the group operator returns a record for each group, with a bag containing all the records in that group, your eval func still takes one record and returns one record. Programming Pig


  10. For instance, suppose that a function named DUMMY_FUNC accepted one parameter and returned a value. Java 7 Recipes A Problem-Solution Approach
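
Examples 1 and 4 above refer to Pig's FilterFunc and EvalFunc classes. A minimal, illustrative Pig filter UDF (the class name and field layout are made up):

    import java.io.IOException;
    import org.apache.pig.FilterFunc;
    import org.apache.pig.data.Tuple;

    // FilterFunc is the EvalFunc subclass used in FILTER ... BY expressions;
    // exec() returns whether the input tuple should be kept.
    public class IsBot extends FilterFunc {
        @Override
        public Boolean exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return false;
            }
            String userAgent = input.get(0).toString().toLowerCase();
            return userAgent.contains("bot") || userAgent.contains("crawler");
        }
    }

    // Used from Pig Latin roughly like:
    //   REGISTER my-udfs.jar;
    //   bots = FILTER logs BY IsBot(user_agent);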



558. Matching text for the word denoted:


  1. The units denoted by the value vary according to the metric module. Monitoring with Ganglia


  2. As we’ve seen, design documents are normal JSON documents, denoted by the fact that their DocID is prefixed with _design/ CouchDB The Definitive Guide


  3. Remember that paths are denoted by a colon character (:) Gradle in Action


  4. The root project is denoted by a colon (:) and has no explicit name. Gradle Effective Implementation Guide


  5. This recipe will use the MapReduce API and the distributed cache to mark any lines in the news archive dataset that contain one or more keywords denoted in a list. Hadoop RealWorld Solutions Cookbook


  6. To evaluate the effectiveness of the Contraction phase, we consider the two different levels of memoization in Incoop: (i) the coarse-grained, task-level memoization performed in the implementation, denoted as Task, and (ii) the fine-grained approach that adds the Contraction phase in the implementation, denoted as Contraction. Incoop MapReduce for Incremental Computations


  7. Mahout didn't do a great job of picking initial random cluster locations (denoted with gray crosses) Hadoop in Practice


  8. A region is denoted by the table it belongs to, its first row, inclusive, and last row, exclusive. Hadoop:The Definitive Guide 3rd Edition


  9. Finally the phone attribute has an array value, which is denoted by the surrounding square brackets and the fact that the type attribute is repeated. Cassandra The Definitive Guide


  10. Note that we expect the end of the file to be denoted by the end of the stream (or -1) ActiveMQ in Action



559. Matching text for the word se5:


  1. Many of the Java SE5 language changes were designed to improve the experience of the programmer. Thinking in Java 4th Edition



560. Matching text for the word md5:


  1. If you are a cautious person, go to the backup site and get the PGP checksum or the MD5 checksum of the download file. Pro Hadoop


  2. The revision number is the md5 hash of the transport representation of a document with an N- prefix denoting the number of times a document got updated. CouchDB The Definitive Guide


  3. By generating an MD5 check-sum from the key, we randomize which node the data for that key will be stored on. Cassandra High Performance Cookbook


  4. If the preceding example had used the argument --update-key md5, the generated code would have run like the following: Hadoop RealWorld Solutions Cookbook


  5. MD5 is a good choice because you want twits to be stored in groups by user. HBase in Action


  6. MD5 hash, so that later you can visually compare that your. Hadoop in Practice


  7. The MD5 digests are the same, showing that the file survived its trip to HDFS and is back intact. Hadoop:The Definitive Guide 3rd Edition


  8. The row key is the MD5 hash of the line to generate a random key. HBase:The Definitive Guide


  9. It uses a BigIntegerToken with an MD5 hash applied to it to determine where to place the keys on the node ring. Cassandra The Definitive Guide


  10. After starting the HDFS cluster, we wait for a moment and then use MD5 cryptographic checksums to verify that both locations contain the identical fsimage. Hadoop Beginners Guide



561. Matching text for the word centric:


  1. The vertex-centric approach is reminiscent of MapReduce in that users focus on a local action, processing each item independently, and the system composes these actions to lift computation to a large dataset. Pregel  -  A System for Large-Scale Graph Processing


  2. For example, an admin over 20,000 systems is far more likely to be running a configuration management engine such as Puppet/Chef or CFEngine and will therefore have fewer qualms about host-centric configuration. Monitoring with Ganglia


  3. Yet another benefit is that the shell script can interact with services, systems, and tools that are not Java centric. MapReduce Design Patterns


  4. Because some of Hadoop’s tools (such as Hive and real-time Hadoop queries) provide a low entry barrier to Hadoop for people more familiar with database queries, some people limit their knowledge to only a few database-centric tools in the Hadoop ecosystem. Professional Hadoop Solutions


  5. This recipe does not require a full understanding of the Giraph API, but it does assume some familiarity with Bulk Synchronous Parallel (BSP) and the design goals of vertex-centric APIs including Apache Giraph and Google Pregel. Hadoop RealWorld Solutions Cookbook


  6. Let’s wrap all the user-centric HBase interactions in a single class. HBase in Action


  7. The best_fields type is also field centric and suffers from similar problems. Elasticsearch The Definitive Guide


  8. We focus on practices unique to data-centric programming within Hadoop. Hadoop in Action


  9. It may appear strange to accuse a project fully implemented in Java of being Java-centric, but the consideration is from a client's perspective. Hadoop Beginners Guide


  10. One can take two complementary views of any OO system, object-centric and activity-centric: Concurrent Programming in Java Design Principles and Pattern



562. Matching text for the word multiproject:


  1. Gradle provides powerful support for defining and organizing multiproject builds, as well as modeling dependencies between projects. Gradle in Action


  2. For multiproject setup, the build.scala file is used to define common settings and tasks for multiple projects. Scala in Action


  3. Role-based strategy is relatively new in Jenkins, but it is an excellent way to simplify the task of managing permissions in large, multiteam and multiproject organizations. Jenkins The Definitive Guide


  4. The beforeEvaluate() hook is not particularly useful in this case, since there is no way in a single build file to advise the build to do something before it is evaluated. Before hooks are only useful in the case of a multiproject build. Gradle Beyond The Basics


  5. Multiproject Build Structure A multiproject is generally hierarchical in nature: it typically has a master project with one or more subprojects. Building and Testing with Gradle



563. Matching text for the word serializing:


  1. Since multiple proposers may submit values with proposal number zero, serializing at this replica ensures only one value corresponds with that proposal number for a particular log position. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. By working with the grain and letting CouchDB do this for us, we save ourselves a tremendous amount of CPU cycles that would otherwise have been spent serializing object graphs from SQL, converting them into domain objects, and using those objects to do application-level validation. CouchDB The Definitive Guide


  3. The nested data model backs a platform-neutral, extensible mechanism for serializing structured data at Google. Dremel - Interactive Analysis of WebScale Datasets


  4. Because a BloomFilter is also a Writable object, serializing it is fairly trivial. MapReduce Design Patterns


  5. Recipes include techniques for serializing data, storing large objects, time series, normalized, and de-normalized data. Cassandra High Performance Cookbook


  6. Our primary-copy caching strategy makes a single authoritative MDS responsible for managing cache coherence and serializing updates for any given piece of metadata. Ceph A Scalable, High-Performance Distributed File System


  7. In the preceding example, we use the Writable types as the fields of our custom Writable type and use the readFields() method of the fields for de-serializing the data from the DataInput object. Hadoop MapReduce Cookbook


  8. The DataOutput and DataInput interfaces have a rich set of methods for serializing and deserializing Java primitives, so, in general, you have complete control over the wire format of your Writable object. Hadoop:The Definitive Guide 3rd Edition


  9. This is because the MapReduce framework has a certain defined way of serializing the key/value pairs to move them across the cluster’s network, and only classes that support this kind of serialization can function as keys or values in the framework. Hadoop in Action


  10. As we'll see, Flume makes assumptions about the data format and provides no ways of serializing data outside of these. Hadoop Beginners Guide
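
Example 8 above describes Writable types serializing themselves through DataOutput and DataInput. A minimal, illustrative custom Writable (the PageView type is invented for this sketch):

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;

    // write() defines the wire format; readFields() must read the fields back
    // in exactly the same order.
    public class PageView implements Writable {
        private Text url = new Text();
        private long hits;

        public PageView() { }                          // required no-arg constructor

        public PageView(String url, long hits) {
            this.url = new Text(url);
            this.hits = hits;
        }

        @Override
        public void write(DataOutput out) throws IOException {
            url.write(out);                            // delegate to the nested Writable
            out.writeLong(hits);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            url.readFields(in);
            hits = in.readLong();
        }

        @Override
        public String toString() {
            return url + "\t" + hits;
        }
    }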



564. Matching text for the word remoting:


  1. Under the covers, Akka uses JBoss Netty and Google Protocol Buffers libraries to make the remoting happen seamlessly. Programming Concurrency on the JVM


  2. Akka implements transparent remoting, where the remoteness of the actor is completely configured at deployment time. Scala in Action


  3. Remoting Support Accessing or exposing remote components in Java has never been the simplest of jobs. Pro Spring 3


  4. This handler adapter is mostly used by Spring Remoting to support some of the HTTP remoting options. Pro Spring MVC


  5. Remote scaling is more complex: it requires a remoting technology like Java Messaging Service (JMS) or GridGain, and you must plug in scaling to batch processing using Spring Batch hooks. Spring Batch in Action


  6. As emphasized on the Direct Web Remoting (DWR) website (http://directwebremoting. Spring Dynamic Modules in Action


  7. This allows the application developer to focus on the business logic and not worry about lowlevel transaction and state management details, remoting, concurrency, multithreading, connection pooling, or other complex low-level APIs. Java EE 7 Essentials


  8. With certain environments that have high latency or lower bandwidths, rendering internal frames across the network can be too expensive (for example, remoting using a VPN) Java 7 Recipes A Problem-Solution Approach


  9. Business methods for components are intercepted by the application server in order to apply services such as concurrency, transaction management, security, and remoting. Pro JPA2


  10. Even though Spring integrates with several popular MVC frameworks, its web and remoting module comes with a capable MVC framework that promotes Spring’s loosely coupled techniques in the web layer of an application. Spring in Action 4th Edition



565. Matching texts for the word har:


  1. The HTTP Archive (HAR) format is the standard for capturing this information; Testable JavaScript


  2. The disadvantages of HAR files are that they can’t be optimized for local disk access in MapReduce, and they can’t be compressed. Hadoop in Practice


  3. Now we can run the archive command: % hadoop archive -archiveName files.har /my/files /my. Hadoop:The Definitive Guide 3rd Edition


  4. A HAR file is similar to a familiar tar, JAR, or Zip file, but file system operation can address the individual files for the archive, and a HAR file can be used transparently as the input to a MapReduce job. The Hadoop Distributed File System


  5. It can capture the performance data from a web application in an HTML Archive (HAR) format, as well as manipulate the browser behavior and traffic, such as whitelisting and blacklisting content, simulating network traffic and latency, and rewriting HTTP requests and responses. Selenium Testing Tools Cookbook


  6. The ALTER TABLE … ARCHIVE PARTITION statement captures the partition files into a Hadoop archive (HAR) file. Programming Hive


  7. The way that BrowserMob Proxy does this is by capturing the network traffic and pushing it into a format called HTTP Archive, or most commonly known as HAR. Selenium 2 Testing Tools Beginner's Guide



566. Matching texts for the word watcher:


  1. Watcher: The watcher watches watchables! In our examples, the watcher is WatchService and it monitors the file system changes (the file system is a FileSystem instance) Pro Java 7 NIO.2


  2. Most of the action is in the modelWatcher object B. Mahout in Action


  3. Getting the watcher data from GitHub requires a web service call to the GitHub API (http://developer.github.com/v3/) Spring in Practice


  4. In this case, once the data is written to HDFS, a data watcher (HDFS watcher) invokes a MapReduce JobTracker, informing it about the availability of a new data. Professional Hadoop Solutions


  5. The Watcher object receives callbacks from ZooKeeper to inform it of various events. Hadoop:The Definitive Guide 3rd Edition


  6. All the master cluster region servers keep a watcher on one another to be notified when one dies (just like the master does) HBase:The Definitive Guide


  7. A producer maintains a pool of broker connections, and constantly updates it using ZooKeeper watcher callbacks. Apache Kafka


  8. It does a few more interesting things: (a) one can register a watcher on a path and get notified when the children of a path or the value of a path has changed; (b) a path can be created as ephemeral (as oppose to persistent), which means that if the creating client is gone, the path is automatically removed by the Zookeeper server; (c) zookeeper replicates its data to multiple servers, which makes the data highly reliable and available. Kafka a Distributed Messaging System for Log Processing


  9. The pool consists of master and worker nodes, plus an overall gatekeeper called the master watcher. Tenzing - A SQL Implementation On The MapReduce Framework


  10. Other HotSpot VM helper threads such as watcher threads and stat sampler are started. Java Performance
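
As a concrete illustration of the file-system flavor of a watcher mentioned in the first quote, here is a minimal NIO.2 WatchService sketch (not taken from the books above; the directory /tmp/watched is a hypothetical path):

```java
import java.nio.file.*;

public class DirWatcher {
    public static void main(String[] args) throws Exception {
        // Register a watcher on a directory (hypothetical path, adjust as needed).
        Path dir = Paths.get("/tmp/watched");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
                              StandardWatchEventKinds.ENTRY_MODIFY);

        // Block until events arrive, then print what changed.
        WatchKey key = watcher.take();
        for (WatchEvent<?> event : key.pollEvents()) {
            System.out.println(event.kind() + ": " + event.context());
        }
        key.reset(); // re-arm the key for further events
    }
}
```

ZooKeeper watchers (quotes 5 to 8) follow the same idea: register interest in a node and receive a one-shot callback when it changes.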



567. Matching texts for the word varargs:


  1. Note You can use the @SafeVarargs annotation to defeat the restriction against generic array creation, using this method: Core Java Volume I Fundamentals 9th Edition


  2. It provides three Files.copy() methods to accomplish this task and provides a set of options for controlling the copy process—the methods take a varargs argument represented by these options. Pro Java 7 NIO.2


  3. The ACC_VARARGS flag indicates that this method takes a variable number of arguments at the source code level. The Java Virtual Machine Specification Java SE 7 Edition


  4. The second form provides a convenient syntax to create and call methods that can produce an effect similar to C’s variable argument lists (known as “varargs” in C) Thinking in Java 4th Edition


  5. Applying a SafeVarargs annotation to the method declaration prevents generation of these unchecked warnings at the method invocation expressions. The Java Language Specification Java SE 7 Edition


  6. One could pass in an array, iterable, or varargs of any object. Getting Started with Google Guava


  7. A minor advantage of builders over constructors is that builders can have multiple varargs parameters. Effective Java 2nd Edition


  8. The @SafeVarargs annotation has been introduced to flag a method as safe. Java 7 New Features Cookbook


  9. Varargs methods to define and invoke methods that accept an arbitrary number of arguments. Java in a Nutshell 5th Edition


  10. The of method has a varargs parameter, so you can construct a stream from any number of arguments: Java SE 8 for the Really Impatient
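
A minimal Java sketch of a varargs method (illustrative only, not from the quoted books):

```java
public class VarargsDemo {
    // The int... parameter accepts zero or more arguments, exposed as an int[].
    static int sum(int... values) {
        int total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum());                  // 0
        System.out.println(sum(1, 2, 3));           // 6
        System.out.println(sum(new int[]{4, 5}));   // an explicit array also works: 9
    }
}
```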



568. Matching texts for the word troubleshoot:


  1. As a provider of a website, 10 years ago, your application logs were only used to help you troubleshoot your website. Apache Flume Distributed Log Collection for Hadoop


  2. Useful Resources In this section, we will describe helpful online/offline resources to help you troubleshoot issues with Ganglia. Monitoring with Ganglia


  3. In the following section, I’ll show how to troubleshoot version conflict and tweak Gradle’s dependency resolution strategy to your specific use case. Gradle in Action


  4. We will also describe steps to verify and troubleshoot a Hadoop cluster configuration. Hadoop Operations and Cluster Management Cookbook


  5. In addition to that, it covers functionalities that allow us to troubleshoot ElasticSearch, such as describing how segments merging works, how to see what ElasticSearch does beneath its high-level interface, and how to limit the I/O operations. Mastering ElasticSearch


  6. Doing this is helpful when trying to troubleshoot errors that users may be reporting. Cassandra High Performance Cookbook


  7. It likely will not prevent you from having to troubleshoot issues dealing with scale or data validity, but it will eliminate a large majority of the compile time and control flow issues. Hadoop RealWorld Solutions Cookbook


  8. If you’re somewhat new to Linux systems and you want to run Cassandra on Linux (which is recommended), you may want to check out Jonathan Ellis’s blog entry on using a variety of Linux performance monitoring tools to help you understand the performance of your underlying platform so that you can troubleshoot in the right place. Cassandra The Definitive Guide


  9. The interaction between these layers is what makes a system like this so complex and so difficult to troubleshoot. Hadoop Operations


  10. If you were unable to successfully run the Java examples, then you’ll need to troubleshoot the situation. ActiveMQ in Action



569. Matching texts for the word idioms:


  1. You may want to change the behavior of an existing task or add your own idioms for describing your business domain. Gradle in Action


  2. In a nutshell, such code, called functions in the functional programming community, can be passed around and combined in a way to produce powerful programming idioms that you’ll see in Java 8 guise throughout this book. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. In contrast, writing this query in the Java MapReduce API is challenging, because implementation of the ORDER BY clause might require knowledge of the specialized programming idioms. Professional Hadoop Solutions


  4. However, using generic classes requires an understanding of the pre-Java 5 idioms for generic programming. Data Structures and Algorithm Analysis in Java 3rd Edition


  5. Chapter 3, Functional Programming with Guava introduces the functional programming idioms provided by Guava. Getting Started with Google Guava


  6. The platform is much bigger now than it was in 2001 and more complex, but once you learn the patterns and idioms for using the new features, they make your programs better and your life easier. Effective Java 2nd Edition


  7. There are only a few common Java idioms for which parentheses are required. Java in a Nutshell 5th Edition


  8. There’s a real opportunity to eliminate reinventing these idioms by letting “higher-level” tools handle them automatically. Programming Hive


  9. The solutions provided by lock utility classes come at the price of more awkward coding idioms and less automatic enforcement of correct usage. Concurrent Programming in Java Design Principles and Pattern


  10. It’s easy to show nesting relationships in XML, but it’s hard to express program flow and data access the way most common programming language idioms express them. Building and Testing with Gradle



570. Matching texts for the word scalaz:


  1. The best example of unicode and non-unicode operators comes from the Scalaz library. Scala in Depth


  2. In chapter 6 you’ll learn how to create a simple web application using the SBT and the Scalaz HTTP module. Scala in Action


  3. It’s possible to program in a purely functional way using the excellent Scalaz library,a. Functional Programming Patterns in Scala and Clojure


  4. Scalaz Scalaz is a library that fills in gaps in the standard library. Programming Scala



571. Matching texts for the word pluggable:


  1. Centralized configuration was replaced with a simple on-disk configuration file (although the configuration provider is pluggable so that it can be replaced) Apache Flume Distributed Log Collection for Hadoop


  2. In the current version of gmond, there are two types of pluggable modules, C/C++ and Python. Monitoring with Ganglia


  3. The authentication system is pluggable, so you can integrate with existing services to authenticate users to CouchDB using an HTTP layer, using LDAP integration, or through other means. CouchDB The Definitive Guide


  4. Finally, an administrator may configure the NM with a set of pluggable, auxiliary services. Apache Hadoop YARN Yet Another Resource Negotiator


  5. The good news is that Hadoop scheduler is pluggable, and it comes with two other schedulers. Hadoop MapReduce Cookbook


  6. To reduce the probability of data loss when multiple simultaneous nodes fail, we implemented a pluggable block placement policy that constrains the placement of block replicas into smaller, configurable node groups. Apache Hadoop Goes Realtime at Facebook


  7. Fortunately, starting with version 0.19, Hadoop has a pluggable architecture for the scheduler, and two new schedulers have become available for resolving job contention. Hadoop in Action


  8. You can also use your own pluggable comparator for sorting if you wish. Cassandra The Definitive Guide


  9. Linux supports, via Pluggable Authentication Modules (PAM) system, the ability to control resources such as file descriptors and virtual memory at the user level. Hadoop Operations


  10. Pluggable persistence and security—ActiveMQ provides multiple flavors of persistence and you can choose between them. ActiveMQ in Action



572. Matching texts for the word recommenders:


  1. Chapter 3 discusses how you can represent the data that recommenders use in an efficient way. Mahout in Action


  2. In this chapter we’ll look at the Mahout MapReduce implementations of recommenders, classifiers, and clusterers. Hadoop in Practice


  3. Mahout provides implementations of such common ML techniques as clustering and recommenders. Hadoop Beginners Guide



573. Matching texts for the word swf:


  1. The Flash bytecode format is known as SWF, and SWF files are produced by the Flex compiler. Thinking in Java 4th Edition


  2. In SWF, you model each process with flows (see chapter 5 for more details) Spring in Practice


  3. The dyn-swf sub-Workflow includes the clustering and ranging actions, and e-mail notification in case of failure. Professional Hadoop Solutions


  4. It captures a screen using the VNC protocol and generates a Shockwave Flash (SWF) movie file. Selenium Testing Tools Cookbook



574. Matching texts for the word whirr:


  1. Using Apache Whirr to automate the provisioning and deployment of. Storm Blueprints Patterns for Distributed Real-time Computation


  2. If you’re deploying in the cloud, Apache Whirr (http://whirr.apache.org) is a framework that can come to your rescue and make spinning up and configuring instances easy. HBase in Action


  3. We will also use Apache Whirr, a cloud neutral library for deploying services on cloud environments, to provision Apache Hadoop and Apache HBase clusters on cloud environments. Hadoop MapReduce Cookbook


  4. Apache Whirr Amazon Elastic Compute Cloud (EC2) is a computing service that allows customers to rent computers (instances) on which they can run their own applications. Hadoop:The Definitive Guide 3rd Edition


  5. Apache Whirr Recently, we have seen an increase in the number of users who want to run their cluster in dynamic environments, such as the public cloud offerings by Amazon’s EC2, or Rackspace Cloud Servers, as well as in private server farms, using open source tools like Eucalyptus. HBase:The Definitive Guide



575. Matching texts for the word cardinality:


  1. The maximum number of dimensions possible is the cardinality of the vector. Mahout in Action


  2. Each data point in these graphs was obtained by averaging the measurements over 30 runs, in each of which a set of columns of a given cardinality was chosen at random. Dremel - Interactive Analysis of WebScale Datasets


  3. Index Cardinality Cardinality refers to how many distinct values there are for a field in a collection. MongoDB The Definitive Guide 2nd Edition


  4. Using this codec for high cardinality fields can speed up queries on such fields. Mastering ElasticSearch


  5. The performance of building an inverted index depends mostly on the computational cost of parsing the content in the mapper, the cardinality of the index keys, and the number of content identifiers per key. MapReduce Design Patterns


  6. An instance template specifies a cardinality and a set of roles that run on each instance of. Hadoop:The Definitive Guide 3rd Edition


  7. The partial key scan mechanism is quite powerful, as you can use it as a lefthand index, with each added field adding to its cardinality. HBase:The Definitive Guide


  8. Clearly, this can consume a lot of memory, especially for high cardinality. Elasticsearch The Definitive Guide


  9. A FetchType of LAZY benefits entities with one-to-many or many-to-many relationships, where the cardinality of the relationship is high, and the attributes are not accessed immediately after the entity is loaded. Java Performance


  10. Cardinality It isn’t very often that a project has only a single employee working on it. Pro JPA2
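
In the "number of distinct values" sense used by several quotes above, cardinality can be sketched in plain Java (illustrative example, not from the books):

```java
import java.util.*;

public class CardinalityDemo {
    public static void main(String[] args) {
        // Cardinality of a field = how many distinct values it takes.
        List<String> countries = Arrays.asList("US", "DE", "US", "FR", "DE", "US");
        Set<String> distinct = new HashSet<>(countries);
        System.out.println("cardinality = " + distinct.size()); // 3
    }
}
```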



576. Matching texts for the word thresholds:


  1. Monitoring Ganglia Metrics with Nagios Because Nagios has no built-in means of polling data from remote hosts, Nagios users have historically employed various remote execution schemes to collect a litany of metrics with the goal of comparing them against static thresholds. Monitoring with Ganglia


  2. This means that Sonar not only gives you the option to apply specific metric rule sets and thresholds per project, but also allows you to decide whether you want to use the analysis tools provided by Sonar to generate metrics. Gradle in Action


  3. This decider allows us to configure thresholds when a shard can be allocated to a node and when ElasticSearch should try to relocate shard to another node. Mastering ElasticSearch


  4. When thresholds are reached, a Memtable is flushed to disk in a sorted format called an SSTable. Cassandra High Performance Cookbook


  5. Add to these graphs things like thresholds, and you can easily find out if the system isn’t performing in the expected range of operation. HBase in Action


  6. Region servers are responsible for all read and write requests for all regions they serve, and also split regions that have exceeded the configured region size thresholds. HBase:The Definitive Guide


  7. If tasks fail due to software errors, Hadoop will retry them within configurable thresholds. Hadoop Beginners Guide


  8. Alert thresholds and performance data of MapReduce is inherently coupled to that of HDFS when services are stacked in this manner, making it difficult to detect the root cause of a failure across service and host boundaries. Hadoop Operations


  9. When using Threads, granularity thresholds must in general be substantially higher than when using lightweight executable classes (although still substantially lower than those needed in distributed parallel designs) Concurrent Programming in Java Design Principles and Pattern


  10. You may want to create thresholds for components to expand and contract. Java 7 Recipes A Problem-Solution Approach



577. Matching texts for the word cypher:


  1. Neo4J also provides the Cypher query language to query the graph. NoSQL Distilled


  2. In the example, I will also introduce the option to exchange password encoders in order to use different algorithms to cypher the passwords, instead of storing them in plain text. Pro Spring Security


  3. In the next chapter, we’ll look at Cypher, which is a pattern-matching language tuned to the way we tend to describe graphs using diagrams. Graph Databases


  4. With the declarative Cypher query language, Neo4j makes it easier to get started for everyone who knows SQL from working with relational databases. Spring Data Modern Data Access for Enterprise Java


  5. Query Declares a property to automatically provide an iterable that’s built by executing a given Cypher query. Spring in Action 4th Edition



578. Matching texts for the word spam:


  1. If clicking a link or visiting a web page could install arbitrary code on the user’s computer, criminals would have an easy time stealing confidential information, accessing financial data, or taking over users’ machines to send spam. Core Java Volume I Fundamentals 9th Edition


  2. In addition to extremely large-scale pipelines for Yahoo! Search, scientists optimizing advertising analytics, spam filtering, and content optimization drove many of its early requirements. Apache Hadoop YARN Yet Another Resource Negotiator


  3. Classifiers have widespread applications such as e-mail spam filtering, finding most promising products, selecting customers for closer interactions, and taking decisions in machine learning situations, and so on. Hadoop MapReduce Cookbook


  4. Examples of classification include email spam filtering and detection of fraudulent credit card transactions. Hadoop in Practice


  5. A low clustering coefficient could indicate widely dispersed interest (as it does with @THE_REAL_SHAQ), or it could indicate the kind of inorganic community that a spam account would engender. Hadoop:The Definitive Guide 3rd Edition


  6. As an example, the following query looks for emails that contain "business opportunity" and must either be starred, or both be in the Inbox and not marked as spam: Elasticsearch The Definitive Guide


  7. However, in these days of spam floods, most servers have built-in checks and only accept requests from users or IP address ranges that they trust. Core Java Volume II Advanced Features 9th Edition


  8. For example, you want to build a spam filter   to know the words frequently used in the millions of spam emails you’ve received. Hadoop in Action


  9. Somewhat more subtly, an applet could take advantage of the user’s system to forge email, generate spam, or launch hacking attempts on other systems. Java in a Nutshell 5th Edition


  10. Examples of such use cases include: Responding to customer requests, spam analysis, dealing with alerts in highly critical revenue systems, or monitoring and assessing changes to production systems. PowerDrill - Processing a Trillion Cells per Mouse Click



579. Matching texts for the word martin:


  1. To circumvent this problem, Mahout implements techniques to identify groups of words that have an unusually high probability of occurring together, such as Martin Luther King Jr or Coca Cola. Mahout in Action


  2. NoSQL distilled : a brief guide to the emerging world of polyglot persistence / Pramod J Sadalage, Martin Fowler. NoSQL Distilled


  3. Knut Ola Hellan (creator of Pillow) and Martin Brown (from Couchbase) both provided valuable feedback which helped to make this book better. Scaling CouchDB


  4. In Scala, we use a simplified version of a technique introduced in a paper written by Scala’s designer, Martin Odersky. Functional Programming Patterns in Scala and Clojure


  5. Martin Gardner, the mathematics and science writer, once said in an interview: Beyond calculus, I am lost. Hadoop:The Definitive Guide 3rd Edition


  6. The name of this design comes from Martin Fowler’s book Patterns of. SQL Antipatterns


  7. Martin Odersky’s amazing balance between theory and reality make Scala possible, and the people he has attracted into the community are brilliant. Beginning Scala


  8. Martin Porter, and it’s best defined in his own words: Lucene in Action 2nd Edition


  9. The text of the book is set in 10/11 point FF Scala and Scala Sans, designed by Martin Majoor, and distributed by FSI FontShop International. Fundamental Networking in Java


  10. Special thanks also go to Brian Aker, Martin “MC” Brown, and Paul Kinzelman for their reviews and suggestions. Understanding MySQL Internals



580. Matching texts for the word idempotent:


  1. The PUT operation is idempotent; that is, the content of the URL specifies the name of the object we are creating through the HTTP request. Getting Started with CouchDB


  2. Combiners also must be idempotent, as they may be run an arbitrary number of times by the Hadoop framework over a given map task’s output. Pro Hadoop


  3. The key feature of infrastructure automation is that deployment scripts are idempotent—that is, running your scripts multiple times will have the same intended effect as running them once, something that becomes critical when a script fails halfway through. CouchDB The Definitive Guide


  4. Applying plugins is an idempotent operation, and therefore is only executed once for a specific plugin. Gradle in Action


  5. We rely on the application writer to make such side-effects atomic and idempotent. MapReduce - Simplied Data Processing on Large Clusters


  6. If the mutations fail, it is safe to submit the entire list again as mutations are idempotent due to the time stamp associated with them. Cassandra High Performance Cookbook


  7. An idempotent operation produces the same result no matter how many times it’s executed. Hadoop in Practice


  8. Another reason for turning off speculative execution is for tasks that are not idempotent. Hadoop:The Definitive Guide 3rd Edition


  9. The records in this application log are idempotent, and can be applied multiple times without any data loss. Apache Hadoop Goes Realtime at Facebook


  10. Your tasks are no longer guaranteed to be idempotent and you’ll need to understand how various failure scenarios can affect your tasks. Hadoop in Action
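
A minimal Java sketch contrasting an idempotent update with a non-idempotent one (illustrative only, not from the quoted books):

```java
import java.util.*;

public class IdempotentDemo {
    public static void main(String[] args) {
        // A keyed overwrite is idempotent: replaying it leaves the same state.
        Map<String, String> store = new HashMap<>();
        store.put("user:1", "alice");
        store.put("user:1", "alice");
        System.out.println(store); // {user:1=alice}

        // A blind append is not idempotent: every replay changes the state.
        List<String> log = new ArrayList<>();
        log.add("alice");
        log.add("alice");
        System.out.println(log);   // [alice, alice]
    }
}
```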



581. Matching texts for the word intellij:


  1. JetBrains IntelliJ Idea continues to forge creative new paths in development tools. Thinking in Java 4th Edition


  2. This chapter explains Gradle’s capabilities for generating project files for popular IDEs like Eclipse, IntelliJ, and NetBeans. Gradle in Action


  3. In the next chapter, we will learn how we can integrate Gradle with Integrated Development Environments (IDEs) Eclipse and JetBrains IntelliJ. Gradle Effective Implementation Guide


  4. Don’t be turned off by these issues though; there’s good news! Most integrated development environments (IDEs) such as NetBeans and IntelliJ support this refactoring and will automatically ensure these gotchas don’t arise. Java 8 in Action Lambdas, Streams and Functional-style Programming


  5. Modern IDEs have great support for many code quality tools—Checkstyle, PMD, and FindBugs all have plugins for Eclipse, NetBeans, and IntelliJ, which provide rapid feedback for developers on code quality issues. Jenkins The Definitive Guide


  6. The application server is easy to use (ZIP installer and NetBeans/Eclipse/IntelliJ integration), lightweight (downloads starting at 37 MB, small disk/memory footprint), and modular (OSGi-based, containers start on demand) Java EE 7 Essentials


  7. Once the JAR file is downloaded, add it as an external library to your IDE (IntelliJ, NetBeans, or Eclipse) Getting Started with Google Guava


  8. The other big change in the platform is the widespread adoption of modern Integrated Development Environments (IDEs), such as Eclipse, IntelliJ IDEA, and NetBeans, and of static analysis tools, such as FindBugs. Effective Java 2nd Edition


  9. IDEs like Eclipse and IntelliJ IDEA communicate with source code repositories like. Java Network Programming 4th Edition


  10. Although the steps described here are for NetBeans, you can also develop using other IDEs such as Eclipse, IntelliJ, or vi. Java 7 Recipes A Problem-Solution Approach



582. Matching texts for the word snippets:


  1. Many of the code snippets used in the following sections were taken from the mod_example gmond metric module source code. Monitoring with Ganglia


  2. With flexibility comes a lot of duplicated code snippets that are copied from one project to another. Gradle in Action


  3. They typically support only a single tag, whereas in real life, a user can be interested in getting XML snippets for multiple tags. Professional Hadoop Solutions


  4. Those code snippets evolve and grow over the course of a section or chapter. HBase in Action


  5. Previous chapters showed you some MapReduce snippets in action when working with data serialization. Hadoop in Practice


  6. Please note that the provided code snippets have been simplified due to space constraints; for download details for the full code listings, refer to the Preface. Hadoop:The Definitive Guide 3rd Edition


  7. The snippets extracted from Hush show you how the feature is used in context, and since it is part of the publicly available repository accompanying the book, you have the full source available as well. HBase:The Definitive Guide


  8. Many applications like to highlight snippets of text from each search result so that the user can see why the document matched their query. Elasticsearch The Definitive Guide


  9. For other events, you write short snippets of event-handling code. Core Java Volume II Advanced Features 9th Edition


  10. As we walk through the different tuning options for ActiveMQ, we’ll demonstrate them with example snippets of code, finally pulling all the tuning techniques together in an example data feed application. ActiveMQ in Action



583. Matching texts for the word alpha:


  1. And although release Version 2.0.x is the most feature-rich Hadoop release, it is still in alpha state requiring further improvements. Hadoop Operations and Cluster Management Cookbook


  2. When used (in order to use it, we need to use the laplace value as its name), a constant value equal to the value of the alpha parameter (which is by default 0.5) will be added to counts to balance the weights of frequent and infrequent n-grams. Mastering ElasticSearch


  3. Note, however, that some processor architectures (specifically, Alpha and DSM architectures) may have more implementation issues for correct handling of accesses to final fields. JSR-133 Java Memory Model and Thread


  4. CouchDB is still in the alpha stage of development, and as a result its security features are incomplete and constantly evolving. Beginning CouchDB


  5. Overlaying a partially transparent rectangle on an image In the Java 2D API, transparency is described by an alpha channel. Core Java Volume II Advanced Features 9th Edition


  6. The alpha value of zero means our window will be completely transparent. Java 7 New Features Cookbook


  7. It includes the major HDFS improvements such as namenode high availability, as well as also a forward port of the battle-tested MRv1 daemons (in addition to the alpha version of YARN) so as to be production-ready. Hadoop Operations


  8. There are several free packages available for parsing class files, including CFParse from the IBM alphaWorks site, http://www.alphaworks.ibm.com/tech/cfparse. Java Performance Tuning 2nd Edition


  9. The t-statistic is compared to a value obtained from a Student’s t-distribution for an a (alpha) and degrees of freedom. Java Performance


  10. This powerful API allows you to do things such as drawing geometric shapes, image processing, alpha compositing (combining images), text font rendering, antialiasing, clipping, creating transformations, stroking, filling, and printing. Java 7 Recipes A Problem-Solution Approach



584. Matching texts for the word jsps:


  1. These technologies are suited to mediumto large-scale application development, and include topics such as Enterprise Java Beans (EJBs), Java Server Pages (JSPs), and much more. Java Network Programming and Distributed Computing


  2. The introductory portion ends with the Error Handling with Exceptions chapter, but you may also want to supplement this with an introduction to JDBC, Servlets and JSPs. Thinking in Java 4th Edition


  3. Directory holding dynamic scripting view components in form of JSPs. Gradle in Action


  4. Develop JavaServer Pages (JSPs) to display both static and dynamic content in a web browser. Jenkins Continuous Integration Cookbook


  5. For the web pages, the primary technology used is Java Server Pages (JSPs) Pro Spring MVC


  6. Hadoop provides several JavaServer Pages (JSPs), enabling you to visualize MapReduce execution. Professional Hadoop Solutions


  7. It's similar to JSPs, ASPs, PHPs, and so on, but simpler with a syntax consisting of just a handful of commands. Solr 3 Enterprise Search Server


  8. For example, an application server generating plenty of Java Server Pages (JSPs) on the fly reduces a static compiler to a JIT compiling JVM, only slower and less adaptive. Oracle JRockit The Definitive Guide


  9. For this reason, Java Performance Tuning, Second Edition includes four new chapters: a new chapter on J2EE application tuning in general followed by chapters on tuning JDBC, servlets and JSPs, and EJBs. Java Performance Tuning 2nd Edition


  10. Each application can contain one or more Java Servlets and JSPs. Java Performance



585. Matching texts for the word fifo:


  1. If all jobs in the waiting queue have the same priority, the Fair Scheduler can be configured to schedule these jobs with either Fair Scheduler or FIFO Scheduler. Hadoop Operations and Cluster Management Cookbook


  2. Hadoop is by default configured with a First in First out (FIFO) scheduler, which executes jobs in the same order as they arrive. Hadoop MapReduce Cookbook


  3. By default MapReduce uses a FIFO scheduler to determine how tasks should be scheduled if there are multiple jobs running at the same time. Hadoop in Practice


  4. Job Scheduling Early versions of Hadoop had a very simple approach to scheduling users’ jobs: they ran in order of submission, using a FIFO scheduler. Hadoop:The Definitive Guide 3rd Edition


  5. Another storage data structure that is often required at scale is a FIFO queue, like that provided by GFS [7] RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  6. Under Hadoop’s default FIFO scheduler , as soon as a job is sent to Hadoop for execution, the JobTracker will assign as many TaskTrackers as necessary to process that job. Hadoop in Action


  7. Each worker constantly monitors a common area for new tasks and picks up new tasks as they arrive on a FIFO basis. Tenzing - A SQL Implementation On The MapReduce Framework


  8. This has hidden the fact that, by default, Hadoop places subsequent job submissions into a First In, First Out (FIFO) queue. Hadoop Beginners Guide


  9. None of the three scheduler plug-ins (including the default FIFO scheduler) provide access control over which jobs are submitted to which. Hadoop Operations


  10. The indexes maintain the FIFO data structure for queues and the durable subscriber pointers to their topic messages. ActiveMQ in Action
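
A minimal Java sketch of FIFO ordering, in the spirit of the default Hadoop job queue described above (illustrative only):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class FifoDemo {
    public static void main(String[] args) {
        // Elements leave the queue in the same order they were submitted.
        Queue<String> jobs = new ArrayDeque<>();
        jobs.offer("job-1");
        jobs.offer("job-2");
        jobs.offer("job-3");

        while (!jobs.isEmpty()) {
            System.out.println("running " + jobs.poll()); // job-1, job-2, job-3
        }
    }
}
```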



586. Matching texts for the word persisting:


  1. For instance, persisting Jabber instant messages to CouchDB might use ejabberd and Hovercraft. CouchDB The Definitive Guide


  2. Most of this saving can be attributed to scheduling and launching overheads of multiple MapReduce jobs and avoiding the unnecessary steps of persisting outputs of the intermediate MapReduce jobs to HDFS. Apache Hadoop YARN Yet Another Resource Negotiator


  3. This comes in handy if you are persisting the results of the concatenation and want a very descriptive column header. Hadoop RealWorld Solutions Cookbook


  4. Persisting data into structures that arrange themselves based on data values can result in worst-case data distribution. HBase in Action


  5. Gateway is the component of Elasticsearch responsible for persisting your data to disk, so that you don't lose it when the node goes down. Elasticsearch in Action MEAP V07 Chapter 1


  6. Use of a BackupNode provides the option of running the NameNode without persistent storage, delegating responsibility for the namespace state persisting to the BackupNode. The Hadoop Distributed File System


  7. So, after it has opened all the storage files, the HLog is initialized to reflect where persisting ended and where to continue. HBase:The Definitive Guide


  8. In addition to persisting all of these messages, a new threading model also requires messages to be stored for each participating user. Apache Hadoop Goes Realtime at Facebook


  9. Although one of the most important aspects of message persistence is that the messages will survive in long-term storage, there are a number of cases where messages are required to be available for consumers that were disconnected from the broker, but persisting the messages in a database is too slow. ActiveMQ in Action


  10. It works by persisting output to a file which could be stored on HDFS or locally. Programming Hive



587. Matching texts for the word iterators:


  1. There is an important conceptual difference between iterators in the Java collections library and iterators in other libraries. Core Java Volume I Fundamentals 9th Edition


  2. How many times do you find yourself reimplementing these operations using iterators? Most databases let you specify such operations declaratively. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. The recipes leverage many of its unique features, including iterators, combiners, scan authorizations, and constraints. Hadoop RealWorld Solutions Cookbook


  4. But since scans are similar to iterators, you do not have a scan() call, but rather a getScanner(), which returns the actual scanner instance you need to iterate over. HBase:The Definitive Guide


  5. Similarly, implementations of the collection interfaces, such as Set and List, typically use nonstatic member classes to implement their iterators: Effective Java 2nd Edition


  6. In addition, the second section will examine the use of empty iterators to avoid null pointer exceptions. Java 7 New Features Cookbook


  7. A third approach to traversal is for a collection class to support fast-fail iterators that throw an exception if the collection is modified in the midst of a traversal. Concurrent Programming in Java Design Principles and Pattern


  8. A HashMap does not guarantee the order of the elements stored in the map, and this extends to its iterators. Java 7 Recipes A Problem-Solution Approach


  9. In fact, Percolator presents the user with a database-like interface: it supports transactions, iterators, and secondary indexes. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications


  10. The MapReduce jobs consist of DAGs of relational operators (such as filter, select (project), join, aggregation) that operate as iterators: each operator forwards a data tuple to the next operator after processing it. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
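
A minimal Java sketch of an explicit iterator, including the fail-fast behavior mentioned in quote 7 (illustrative only, not from the books):

```java
import java.util.*;

public class IteratorDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("ant", "bee", "cat"));

        // Traverse with an explicit iterator so elements can be removed safely.
        Iterator<String> it = names.iterator();
        while (it.hasNext()) {
            if (it.next().startsWith("b")) {
                // Removing through the list itself during traversal would trigger
                // the fail-fast ConcurrentModificationException; it.remove() is safe.
                it.remove();
            }
        }
        System.out.println(names); // [ant, cat]
    }
}
```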



588. Matching texts for the word monads:


  1. The map method is associated with functors, and the flatMap method is associated with monads. Scala in Depth


  2. That’s because Scala is looking for the IOMonad type in the io.monads package, not in another top-level package called monads. Scala in Action


  3. Because for expressions are equivalent to applications of these three methods, they can be seen as syntax for monads. Programming in Scala 2nd Edition


  4. All monads are functors, but not all functors are monads. Functional Programming in Scala


  5. In fact, the sequence comprehensions we went over are both examples of a technique popularized by the highly functional language Haskell that is known as monadic transformations, which rely on a concept from category theory known as monads. Functional Programming Patterns in Scala and Clojure


  6. Monads in Scala Method map (see Section 2.13) has some really interesting properties. Steps in Scala An Introduction to Object-Functional Programming


  7. NOTE: These operations are important in the theory of monads, but you don’t need to know the theory to understand map and flatMap. Java SE 8 for the Really Impatient
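
The map/flatMap pairing mentioned in the last quote can also be seen in Java's Optional; a minimal sketch (illustrative only, not from the quoted books):

```java
import java.util.Optional;

public class FlatMapDemo {
    public static void main(String[] args) {
        Optional<Integer> n = Optional.of(4);

        // map re-wraps the result, producing a nested Optional;
        // flatMap expects the function to return an Optional and flattens it.
        Optional<Optional<Double>> nested = n.map(FlatMapDemo::sqrt);
        Optional<Double> flat = n.flatMap(FlatMapDemo::sqrt);

        System.out.println(nested); // Optional[Optional[2.0]]
        System.out.println(flat);   // Optional[2.0]
    }

    static Optional<Double> sqrt(int x) {
        return x >= 0 ? Optional.of(Math.sqrt(x)) : Optional.empty();
    }
}
```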



589. Matching texts for the word jaxb:


  1. Classes for abstraction of XML marshaling and unmarshaling and support for popular tools like Castor, JAXB, XMLBeans, XStream, and so on, are packed into this module. Pro Spring 3


  2. The OXM module provides an abstraction layer that offers simplified and consistent support for popular Object/XML mapping tools such as Castor, the Java Architecture for XML Binding (JAXB), JiBX, XMLBeans, and XStream. Spring in Practice


  3. The simplest way to do this is to use Java Architecture for XML Binding (JAXB) to generate Java binding (based on the Workflow schema), and use these bindings as a Java API to create Oozie Workflows. Professional Hadoop Solutions


  4. To provide the web services support, the JAX-RPC technology has evolved into the JAX-WS technology, which makes heavy use of the JAXB technology to bind Java to XML data. Java EE 7 Specification


  5. We use Castor here, but XStream, JAXB 2, or XMLBeans would work as well. Spring Batch in Action


  6. The @XmlRootElement annotation ensures that an automatic mapping from Java to XML occurs following JAXB mapping and an XML representation of the resource is returned. Java EE 7 Essentials


  7. From case to case, this can be achieved in different ways, such as through Java serialization or using XML marshalling through mechanisms such as Java Architecture for XML Binding (JAXB) Spring Integration in Action


  8. For the purpose of discussion, the Jackson serializer for JSON and JAXB for XML were selected. Java Performance


  9. However, in Java 7, you will find that the Java API for XML Processing (JAXP), Java API for XML Binding (JAXB), and even the Java API for XML Web Services (JAXWS) have been included in the core runtime libraries. Java 7 Recipes A Problem-Solution Approach


  10. Registered if JAXB v2 libraries are present on the classpath. Spring in Action 4th Edition
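
A minimal JAXB marshalling sketch (assuming the javax.xml.bind classes are available, as they are in Java 7/8; the Person class is purely illustrative):

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class Person {
    public String name = "Ada";
    public int age = 36;

    public static void main(String[] args) throws Exception {
        // Bind the Java object to XML and write it to standard output.
        JAXBContext ctx = JAXBContext.newInstance(Person.class);
        Marshaller m = ctx.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(new Person(), System.out);
    }
}
```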



590. Matching texts for the word columnar:


  1. By combining multi-level execution trees and columnar data layout, it is capable of running aggregation queries over trillion-row tables in seconds. Dremel - Interactive Analysis of WebScale Datasets


  2. Column families were originally introduced by columnar databases for fast analytical queries. Professional Hadoop Solutions


  3. This book covers unloading/loading to and from HDFS, graph analytics with Giraph, batch data analysis using Hive, Pig, and MapReduce, machine-learning approaches with Mahout, debugging and troubleshooting MapReduce jobs, and columnar storage and retrieval of structured data using Apache Accumulo. Hadoop RealWorld Solutions Cookbook


  4. Although there is some overlap, making a columnar data store like HBase hum involves a different bag of tricks from those you use to tweak, say, MySQL. HBase in Action


  5. Installing HBase HBase is a highly scalable NoSQL data store that supports columnar-style data storage. Hadoop MapReduce Cookbook


  6. Hive provides another binary storage format called RCFile, short for Record Columnar File. Hadoop:The Definitive Guide 3rd Edition


  7. This is also where the majority of similarities end, because although HBase stores data on disk in a column-oriented format, it is distinctly different from traditional columnar databases: whereas columnar databases excel at providing real-time analytical access to data, HBase excels at providing key-based access to a specific cell of data, or a sequential range of cells. HBase:The Definitive Guide


  8. A manager who wants to understand the advantages (and disadvantages) of Cassandra and related columnar databases to help make decisions about technology strategy. Cassandra The Definitive Guide


  9. Tenzing supports a mostly complete SQL implementation (with several extensions) combined with several key characteristics such as heterogeneity, high performance, scalability, reliability, metadata awareness, low latency, support for columnar storage and structured data, and easy extensibility. Tenzing - A SQL Implementation On The MapReduce Framework


  10. It combines the advantages of columnar data layout with other known techniques (such as using composite range partitions) and extensive algorithmic engineering on key data structures. PowerDrill - Processing a Trillion Cells per Mouse Click



591. Matching texts for the word ttl:


  1. This is done by commenting out the multicast address and time to live (TTL) settings, and adding a dedicated master Gmond node (master2) in the gmond.conf file. HBase Administration Cookbook


  2. Every time when packet crosses route, the TTL is decreased. Mastering ElasticSearch


  3. A compromise is achieved by implementing a configurable cache time-to-live (TTL) parameter, thus limiting cache presence in each Lucene instance. Professional Hadoop Solutions


  4. Time To Live (TTL) is a setting that makes a column self-delete a specified number of seconds after the insertion time. Cassandra High Performance Cookbook


  5. We talk about Time To Live (TTL) later, as a part of advanced column family configurations. HBase in Action


  6. The time-to-live (or TTL) sets a threshold based on the timestamp of a value and the internal housekeeping is checking automatically if a value exceeds its TTL. HBase:The Definitive Guide


  7. Usually it is straightforward to pick a suitable TTL value, but if prompt replacement of failed services is desired, the TTL can become small enough to overload the DNS servers. Chubby - The lock service for loosely-coupled distributed systems


  8. Cassandra 0.7 introduced an optional time to live (TTL) value, which allows columns to expire a certain amount of time after creation. Cassandra The Definitive Guide


  9. As far as the application itself, you need to pay attention to an additional header field in the datagrams called the Time-To-Live (TTL) value. Java Network Programming 4th Edition


  10. The networkTTL property denotes the maximum number of remote brokers a message can pass through before being discarded. ActiveMQ in Action
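
A minimal sketch of the time-to-live idea in plain Java (illustrative only; real systems such as Cassandra or HBase implement TTL inside their storage layer):

```java
import java.util.concurrent.TimeUnit;

public class TtlDemo {
    // A value paired with an expiry deadline (hypothetical helper class).
    static class Entry {
        final String value;
        final long expiresAtNanos;

        Entry(String value, long ttlSeconds) {
            this.value = value;
            this.expiresAtNanos = System.nanoTime() + TimeUnit.SECONDS.toNanos(ttlSeconds);
        }

        boolean isExpired() {
            return System.nanoTime() >= expiresAtNanos;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Entry e = new Entry("session-token", 1); // TTL of one second
        System.out.println("expired? " + e.isExpired()); // false
        Thread.sleep(1500);
        System.out.println("expired? " + e.isExpired()); // true
    }
}
```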



592. Matching texts for the word appends:


  1. Log appends are acknowledged at the master in parallel with. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. If it finds it, it appends the value to the location of the design document itself to build the URL. CouchDB The Definitive Guide


  3. You may also encounter a project version that appends a SNAPSHOT designator to indicate that the built project artifact is in the state of development. Gradle in Action


  4. The key idea is this: we create a finite state machine (FSM) that reads the field values and levels for each field, and appends the values sequentially to the output records. Dremel - Interactive Analysis of WebScale Datasets


  5. Similarly, the Google File System [7] is optimized for very large files and a workload consisting largely of reads and file appends. Ceph A Scalable, High-Performance Distributed File System


  6. As HFile writer appends a block, an in-memory index of the first key of each block is formed, along with its in-file offset. Hadoop in Action


  7. As Cassandra receives updates, every write value is written immediately to the commit log in the form of raw sequential file appends. Cassandra The Definitive Guide


  8. Sqoop exports by default do a series of appends, adding each record as a new row in the table. Hadoop Beginners Guide


  9. Random and sequential writes perform better than random reads since each tablet server appends all incoming writes to a single commit log and uses group commit to stream these writes efficiently to GFS. Bigtable - A Distributed Storage System for Structured Data


  10. Each succeeding call appends its closure to the end of the list to be executed in order (Example 2-18) Building and Testing with Gradle
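
A minimal Java sketch of a file append, in the spirit of the sequential log appends described above (illustrative only; commit.log is a hypothetical file name):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class AppendDemo {
    public static void main(String[] args) throws IOException {
        // Append a record to the log, creating the file on first use.
        Path log = Paths.get("commit.log");
        Files.write(log, "row-42 inserted\n".getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```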



593. Matching texts for the word terabytes:


  1. This feature allows you to grow storage incrementally, regardless of whether you need terabytes or petabytes of space. Monitoring with Ganglia


  2. Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. MapReduce - Simplied Data Processing on Large Clusters


  3. The uncompressed dataset is tens of terabytes, which is small compared to many NoSQL instances, but was large enough to cause difficulties with sharded MySQL. Spanner - Google’s Globally-Distributed Database


  4. Google has thousands of machines working on hundreds of terabytes of data, spread across the World Wide Web. Beginning CouchDB


  5. Such a configuration might have a hundred terabytes of data to be processed, distributed across some or all of the machines. Interpreting the Data Parallel Analysis with Sawzall


  6. They needed to go through terabytes   and petabytes   of data to figure out which websites were popular, what books were in demand, and what kinds of ads appealed to people. Hadoop in Action


  7. It performs blazingly fast writes, can store hundreds of terabytes of data, and is decentralized and symmetrical so there’s no single point of failure. Cassandra The Definitive Guide


  8. Picture a much larger input file measured in terabytes, and the split model makes more sense. Hadoop Beginners Guide


  9. The Bigtable clusters used by these products span a wide range of configurations, from a handful to thousands of servers, and store up to several hundred terabytes of data. Bigtable - A Distributed Storage System for Structured Data


  10. The search process is challenging because of the large amount of data: The raw documents comprise several tens of terabytes of uncompressed data, and the inverted index resulting from this raw data is itself many terabytes of data. Web search for a planet the google cluster architecture



594. Matching texts for the word invariant:


  1. Spanner depends on the following disjointness invariant: for each Paxos group, each Paxos leader’s lease interval is disjoint from every other leader’s. Spanner - Google’s Globally-Distributed Database


  2. All nodes on joining the cluster contact the leader who tells them for what ranges they are replicas for and leader makes a concerted effort to maintain the invariant that no node is responsible for more than N-1 ranges in the ring. Cassandra A Decentralized Structured Storage System


  3. Cuckoo hashing maintains the invariant that an item is always stored in one of these two locations. Data Structures and Algorithm Analysis in Java 3rd Edition


  4. Multiupdate is very useful for building structures in ZooKeeper that maintain some global invariant. Hadoop:The Definitive Guide 3rd Edition


  5. As we know that this job’s correctness is invariant to the ordering, we can ignore the differences. Hadoop in Action


  6. The exception’s detail method should indicate which invariant is violated (Item 63) Effective Java 2nd Edition


  7. A related use of the assert statement is to verify a class invariant. Java in a Nutshell 5th Edition


  8. But it is still surprising to see that in these cases the final size almost seems like an invariant. PowerDrill - Processing a Trillion Cells per Mouse Click


  9. Invariant state constraints, such as the facts that the currentVolume always remains between zero and capacity, and that capacity is nonnegative and never changes after construction. Concurrent Programming in Java Design Principles and Pattern


  10. When the selection result is zero, the set of selected keys is empty, so the following invariant holds: Fundamental Networking in Java
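
A minimal Java sketch of a class invariant checked with assert, echoing the currentVolume/capacity wording of the last quote (illustrative only, not the book's code):

```java
public class BoundedCounter {
    private final int capacity;
    private int currentVolume;

    public BoundedCounter(int capacity) {
        if (capacity < 0) {
            throw new IllegalArgumentException("capacity must be non-negative");
        }
        this.capacity = capacity;
    }

    public void add(int amount) {
        if (amount < 0 || currentVolume + amount > capacity) {
            throw new IllegalArgumentException("would violate the capacity bound");
        }
        currentVolume += amount;
        // The class invariant: currentVolume stays between zero and capacity.
        assert invariantHolds() : "invariant violated: volume=" + currentVolume;
    }

    private boolean invariantHolds() {
        return currentVolume >= 0 && currentVolume <= capacity;
    }
}
```

Run with java -ea to enable assertion checking.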



595. Matching texts for the word substring:


  1. You can extract a substring from a larger string with the substring method of the String. Core Java Volume I Fundamentals 9th Edition


  2. The String.substring method is used for extracting the actual IP address data from the raw string. Pro Hadoop


  3. Using a String data type works great for simple use cases, but what if you want to know the exact minor version of your project? You’ll have to parse the string value, search for the dot character, and filter out the substring that identifies the minor version. Gradle in Action


  4. You can do exact matches, substring matches, or regular-expression matches and filter out data that doesn’t match. HBase in Action


  5. We convert the Text value containing the line of input into a Java String, then use its substring() method to extract the columns we are interested in. Hadoop:The Definitive Guide 3rd Edition


  6. The second filter does a regular expression match, while the third uses a substring match approach. HBase:The Definitive Guide


  7. The substring method returned a new filename starting with the first character and ending with the character immediately preceding the period. Java 7 New Features Cookbook


  8. Instead, therefore, we have an opportunity to introduce several of Hive's built-in functions, in particular, those to convert a string to lowercase (LOWER), to extract a substring from a string (SUBSTR) and to return the number of characters in a string (LENGTH) Hadoop Beginners Guide


  9. Returns the substring for the index’s match using the regex_pattern. Programming Hive


  10. We observe a reversal of the general rule that Hive adds an overhead cost to hand-coded Hadoop in the “small” (substring) aggregation task (the time taken by Hive is represented by the lower part of the Hadoop bar in Fig. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
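
A minimal Java sketch of String.substring, extracting columns from a line much like quote 5 describes (illustrative only):

```java
public class SubstringDemo {
    public static void main(String[] args) {
        String line = "2014-03-01T12:30:45Z,200,/index.html";

        // substring(begin, end) includes begin and excludes end.
        String date = line.substring(0, 10);   // "2014-03-01"
        // substring(begin) runs to the end of the string.
        String rest = line.substring(21);      // "200,/index.html"

        System.out.println(date);
        System.out.println(rest);
    }
}
```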



596. Matching texts for the word posix:


  1. In a regular Portable Operating System Interface (POSIX) style filesystem, if you open a file and write data, it still exists on disk before the file is closed. Apache Flume Distributed Log Collection for Hadoop


  2. Redis is written in ANSI C and works in most POSIX systems, such as Linux, without any external dependencies. MapReduce Design Patterns


  3. Data-Oriented Access Control Other than ACLs and POSIX-based file permissions (read and write) for users and groups on HDFS, Hadoop does not natively keep track of the access control policies for its data. Professional Hadoop Solutions


  4. The Ceph file system has three main components: the client, each instance of which exposes a near-POSIX file system interface to a host or process; a cluster of OSDs, which collectively stores all data and metadata; and a metadata server cluster, which manages the namespace (file names and directories) while coordinating security, consistency and coherence (see Figure 1) Ceph A Scalable, High-Performance Distributed File System


  5. Alternatively, CouchDB can be built from source on virtually any POSIX system. Beginning CouchDB


  6. The client presents a filesystem interface similar to a Portable Operating System Interface (POSIX), so the user code does not need to know about the namenode and datanode to function. Hadoop:The Definitive Guide 3rd Edition


  7. Permission settings for directories also closely follow the POSIX   model. Hadoop in Action


  8. If you’re on Windows, you’ll first have to get the Cygwin POSIX emulator and then install Git. Cassandra The Definitive Guide


  9. It also offers native integration with NFS file systems, which makes processing of existing data much easier; MapR replaced HDFS with a full POSIX-compliant filesystem that can easily be mounted remotely. Hadoop Beginners Guide


  10. These systems are fantastic as the backing store for relational databases, content delivery systems, and similar types of data storage needs because they can support full-featured POSIX semantics, scale to meet the size requirements of these systems, and offer low-latency access to data. Hadoop Operations



597. Matching texts for the word jtext:


  1. Each user interface component has a wrapper class (such as JButton or JTextField) that stores the model and the view. Core Java Volume I Fundamentals 9th Edition


  2. But what should the result of pressing that button be? We’d like to see something change on the screen, so a new Swing component will be introduced: the JTextField. Thinking in Java 4th Edition


  3. A DefaultCellEditor can be constructed with a JTextField, a JCheckBox, or a JComboBox. Core Java Volume II Advanced Features 9th Edition


  4. Otherwise, you mostly use the methods it inherits from the JTextField superclass. Java Network Programming 4th Edition


  5. In the inheritance hierarchy, all Swing components such as the JLabel, JTextField and JPanel classes extend from the JComponent class. Java 7 Recipes A Problem-Solution Approach



598. Matching texts for the word rrd:


  1. It is usually installed on the same physical hardware as gmetad, because it needs access to the RRD databases created by the poller. Monitoring with Ganglia


  2. Many of these are built on top of RRDTool (http://www.rrdtool.org), which stores time-series data in round-robin database (RRD) files. High Performance MySQL 3rd Edition


  3. Next, the rrdgraph report processor that converts transaction reports into RRD-based graphs. Pro Puppet


  4. Each gmond process relays data to a central gmetad process that records data in a series of RRD, or round-robin database files, which are fixed-size files that efficiently store time series data. Hadoop Operations



599. Matching texts for the word sstables:


  1. SSTables are not written to again after they are flushed; if there are changes to the data, a new SSTable is written. NoSQL Distilled


  2. It does this by isolating the disk traffic for Commit Log activity from the traffic used for reads as well as flush Memtables and compact SSTables. Cassandra High Performance Cookbook


  3. It also has scripts for converting SSTables (the datafiles) to JSON and back. Cassandra The Definitive Guide


  4. Of these updates, the recently committed ones are stored in memory in a sorted buffer called a memtable; the older updates are stored in a sequence of SSTables. Bigtable - A Distributed Storage System for Structured Data


  5. SSTables are stored in GFS; Bigtable relies on GFS to preserve data in the event of disk loss. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



600. Matching texts for the word completable:


  1. As you’ll see in this chapter, the Future interface and particularly its new CompletableFuture implementation are your best tools in such circumstances. Java 8 in Action Lambdas, Streams and Functional-style Programming


  2. The CompletableFuture class that we will discuss in Chapter 6 has an operation that does just what map should do, but it is called thenApply. Java SE 8 for the Really Impatient
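
A minimal CompletableFuture sketch showing supplyAsync followed by the thenApply transformation mentioned in the second quote (illustrative only; fetchPrice is a hypothetical stand-in for a slow call):

```java
import java.util.concurrent.CompletableFuture;

public class CompletableDemo {
    public static void main(String[] args) {
        // Start the lookup asynchronously and transform its result without blocking.
        CompletableFuture<Integer> price =
            CompletableFuture.supplyAsync(CompletableDemo::fetchPrice) // runs on a pool thread
                             .thenApply(p -> p * 2);                   // map-like transformation

        System.out.println("doing other work...");
        System.out.println("price = " + price.join()); // blocks only here; prints 84
    }

    static int fetchPrice() {
        return 42; // stand-in for a remote call
    }
}
```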



601. Matching texts for the word jnlp:


  1. Then, prepare a descriptor file in the Java Network Launch Protocol (JNLP) format. Core Java Volume I Fundamentals 9th Edition


  2. If you are in a situation where you can ensure that users have a JRE installed (such as inside a corporate environment), then applets (or JNLP/Java Web Start, described later in this chapter) might be the perfect way to distribute client programs and automatically update everyone’s machine without the usual cost and effort of distributing and installing new software. Thinking in Java 4th Edition


  3. When you click on the Launch button on either of these sites in Firefox, the browser will ask if you want to open a file called jenkins.jnlp using Java Web Start. Jenkins The Definitive Guide


  4. A future version of this ation may require that Java EE products be able to deploy application clients that allows them to be launched by a JNLP client, and that application ntainers be able to launch application clients deployed using the JNLP. Java EE 7 Specification


  5. The Embedding a JNLP file in an HTML page recipe provides a demonstration of this technique. Java 7 New Features Cookbook


  6. If the URL of a JNLP (Java Network Launching Protocol) is specified on the command line, javaws launches the specified application. Java in a Nutshell 5th Edition


  7. Underneath the covers, Java Web Start provides a network launching protocol called JNLP. Java 7 Recipes A Problem-Solution Approach



602. Matching text for the word auth:


  1. The token lifetime can be configured with the timeout (in seconds) setting in the couch_httpd_auth configuration section. CouchDB The Definitive Guide


  2. Getting ready Install the Twitter plugin (http://wiki.hudson-ci.org/display/HUDSON/Twitter+Plugin) and download auth.jar from the same wiki page. Jenkins Continuous Integration Cookbook


  3. And if your Jenkins server requires authentication, just pass in the -auth username:password option: Jenkins The Definitive Guide


  4. Hadoop Auth provides HTTP SPNEGO authentication for protected web application resources, and sets a signed HTTP cookie that contains a Kerberos authentication token (which can be used until the cookie expires) Professional Hadoop Solutions


  5. App annotat in whic authent Users”) roles, w environ all auth method. Java EE 7 Specification


  6. To do this, the server uses the setNeedClientAuth( ) method on its server socket as follows: Java Security 2nd Edition


  7. This ensures that the getUserPrincipal, getRemoteUser, and getAuthType methods return valid values. Java EE 7 Essentials


  8. The AuthTest program authenticates a user and runs a simple action that retrieves a system property. Core Java Volume II Advanced Features 9th Edition


  9. If your server desires or requires authentication by its clients, call setWantClientAuth( ) or setNeedClientAuth( ) Java in a Nutshell 5th Edition


  10. Next, we set the needClientAuth parameter in the SSL transport URI, which instructs the broker to check connecting client certificates and allow access only to those that are found in the truststore. ActiveMQ in Action



603. Matching text for the word arg:


  1. Inspect the local variables and check the value of the arg variable: Core Java Volume I Fundamentals 9th Edition


  2. One can validly pop types matching the argument types given in Descriptor and an uninitialized type, UninitializedArg, off the incoming operand stack, yielding OperandStack. The Java Virtual Machine Specification Java SE 7 Edition


  3. That parameter will now be available in the Oozie application execution context, and can be used, for example, as a parameter for java action invocation inside the <arg> element. Professional Hadoop Solutions


  4. Entities A POJO with a no-arg public constructor is used to define the mapping with one or more relational database tables. Java EE 7 Essentials


  5. Implement a class with a method public static void premain(String arg, Instrumentation instr) Core Java Volume II Advanced Features 9th Edition


  6. Do this by using one of the constructors other than the no-arg constructor. Java in a Nutshell 5th Edition


  7. Although the Serializable interface requires serializable objects to have no-arg constructors, deserialized objects do not actually use that (or any) constructor. Java Performance Tuning 2nd Edition


  8. Also, when discussing invocations of the form obj.msg(arg), the recipient (that is, the object bound to variable obj) is called the target object. Concurrent Programming in Java Design Principles and Pattern


  9. Note You will need an empty (no-arg) constructor for classes that implement the Externalizable interface. Java 7 Recipes A Problem-Solution Approach


  10. The standard idiom is to wrap the code in a no-arg lambda: Java SE 8 for the Really Impatient
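
Several of the excerpts refer to the no-arg constructor required for externalizable classes. A small sketch of why that constructor must exist, using a hypothetical Point class:

```java
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class Point implements Externalizable {
    private int x;
    private int y;

    // The public no-arg constructor lets deserialization create an empty
    // instance before readExternal fills in the state.
    public Point() { }

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(x);
        out.writeInt(y);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        x = in.readInt();
        y = in.readInt();
    }
}
```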



604. Matching text for the word subtypes:


  1. Note that the last four subtypes are interfaces—the virtual machine instantiates suitable classes that implement these interfaces. Core Java Volume I Fundamentals 9th Edition


  2. If you had a hundred different lambdas for expressing a Comparator, that would mean a hundred different subtypes of Comparator. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. There are actually several subtypes of FileCollection that may be in use in any of these cases. Gradle Beyond The Basics


  4. Finally, all the key and value types must be subtypes of Writable, which ensures a serialization interface for Hadoop to send the data around in a distributed cluster. Hadoop in Action


  5. The Liskov substitution principle says that any important property of a type should also hold for its subtypes, so that any method written for the type should work equally well on its subtypes [Liskov87] Effective Java 2nd Edition


  6. The other two types were both of the type application, but had different subtypes. Java 7 New Features Cookbook


  7. When defining an abstract type (e.g., Shape) that you expect to have many subtypes (e.g., Circle, Rectangle, Square), you are often faced with a choice between interfaces and abstract classes. Java in a Nutshell 5th Edition


  8. In addition, nonstandard custom types and subtypes can be freely defined as long as they begin with x- Java Network Programming 4th Edition


  9. Therefore, the following would be valid assignments because both Integer and Double are subtypes of Number: Java 7 Recipes A Problem-Solution Approach


  10. In general, reading is covariant (subtypes are okay) and writing is contravariant (supertypes are okay) Java SE 8 for the Really Impatient



605. Matching text for the word ceph:


  1. Being a part of Hadoop core and serving as a storage layer for the Hadoop MapReduce framework, HDFS is also a stand-alone distributed file system like Lustre, GFS, PVFS, Panasas, GPFS, Ceph, and others. HDFS scalability the limits to growth


  2. MooseFS (Moose File System) is a shared file system; it implements the same rough concepts of shared storage solutions - such as Ceph, Lustre or even GlusterFS. bk-compute-adminguide-grizzly


  3. Rados block device (RBD) Stores images inside of a Ceph storage cluster using Ceph's RBD interface. bk-admin-guide-cloud-latest


  4. Abstract We have developed Ceph, a distributed file system that provides excellent performance, reliability, and scalability. Ceph A Scalable, High-Performance Distributed File System


  5. Ceph [17] has a cluster of namespace servers (MDS) and uses a dynamic subtree partitioning algorithm in order to map the namespace tree to MDSs evenly. The Hadoop Distributed File System


  6. Built as part of the Ceph distributed file system [27], RADOS facilitates an evolving, balanced distribution of data and workload across a dynamic and heterogeneous storage cluster while providing applications with the illusion of a single logical object store with well-defined safety semantics and strong consistency guarantees. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  7. Ceph was originally developed by one of the founders of DreamHost and is currently used in production there. OpenStack Operations Guide



606. Matching text for the word geospatial:


  1. On the other hand, only MyISAM supports geospatial search inside MySQL. High Performance MySQL 3rd Edition


  2. It combines the ability to scale out with features such as secondary indexes, range queries, sorting, aggregations, and geospatial indexes. MongoDB The Definitive Guide 2nd Edition


  3. A typical example would be geospatial processing based on the bounding box discussed in the following example. Professional Hadoop Solutions


  4. There are also examples for building an efficient geospatial row key and performing batch analysis using MapReduce. Hadoop RealWorld Solutions Cookbook


  5. Recently, David has been focusing his attention on the intersection of geospatial technologies with Lucene and Solr. Solr 3 Enterprise Search Server


  6. In chapter 8, you get a glimpse at using HBase for geospatial data. HBase in Action


  7. It supports powerful full-text search, hit highlighting, faceted search, dynamic clustering, database integration, rich document handling (for example, Word and PDF), and geospatial search. Hadoop MapReduce Cookbook


  8. In that time, graph databases have helped solve important problems in the areas of social networking, master data management, geospatial, recommendations, and more. Graph Databases


  9. Leveraging Geospatial Indexes One form of indexing worthy of special mention is geospatial indexing. The Definitive Guide to MongoDB


  10. The kinds of indexes supported include all the ones you’d find in an RDBMS; ascending, descending, unique, compound-key, and even geospatial indexes are supported. MongoDB in Action



607. Matching text for the word postfix:


  1. Install the Postfix e-mail server to let Nagios send out e-mails: HBase Administration Cookbook


  2. There are actually two forms of these operators; you’ve just seen the postfix form of the operator that is placed after the operand. Core Java Volume I Fundamentals 9th Edition


  3. There are two versions of each type of operator, often called the prefix and postfix versions. Thinking in Java 4th Edition


  4. You can also use a postfix of other criteria to cover queries with the index. MongoDB The Definitive Guide 2nd Edition


  5. Exec and Notify Subscribing a service to a file is very convenient, but what if we need to do something more explicit when a file resource changes? I’ll use a postfix transport map as an example. Managing Infrastructure with Puppet


  6. A postfix expression followed by a ++ operator is a postfix increment expression. The Java Language Specification Java SE 7 Edition


  7. You’ll discover how to use Puppet to manage Postfix, Apache and MySQL servers, as well as how to load-balance your Puppet Masters. Pro Puppet


  8. Two of our highest-volume log formats are produced by the Postfix mail transfer agent and Microsoft Exchange Server. Hadoop:The Definitive Guide 3rd Edition


  9. The columns form the actual counters, and their name is a combination of the date, plus an optional dimensional postfix—for example, the country code. HBase:The Definitive Guide


  10. These operators, in both prefix and postfix forms, are most commonly used to increment or decrement the counter that controls a loop. Java in a Nutshell 5th Edition



608. Matching text for the word scalar:


  1. Not only can you construct an index with scalar values like strings and integers, you can also use JSON structures as keys for your views. CouchDB The Definitive Guide


  2. Each scalar expression in the SELECT clause emits a value at the same level of nesting as the most-repeated input field used in that expression. Dremel - Interactive Analysis of WebScale Datasets


  3. Removing low scoring data If you can score your data with some sort of scalar value, you can filter out records that don’t meet a certain threshold. MapReduce Design Patterns


  4. Unstructured text works well when you’re working with scalar or tabular data. Hadoop in Practice


  5. Some eval functions are aggregate functions, which means they operate on a bag of data to produce a scalar value; MAX is an example of an aggregate function. Hadoop:The Definitive Guide 3rd Edition


  6. The simplest form of the update request accepts a partial document as the "doc" parameter which just gets merged with the existing document—objects are merged together, existing scalar fields are overwritten. Elasticsearch The Definitive Guide


  7. A scalar function is a function that returns a single value. Core Java Volume II Advanced Features 9th Edition


  8. These include filter functions (eval functions that return a Boolean) and aggregate functions (eval functions that take a bag and return a scalar value) Hadoop in Action


  9. This pattern fuses together two scalar values with a separator to create an aggregate. Cassandra The Definitive Guide


  10. Similarly, array objects holding scalar elements possess locks, but their individual scalar elements do not. Concurrent Programming in Java Design Principles and Pattern



609. Matching text for the word pojos:


  1. In the next section we will look into the various way to serialize Pojos and how Netty allows you to do so by make use of different serialization libraries. Netty in Action MEAP V08


  2. In Java, classes that follow the bean conventions are called plain-old Java objects (POJOs) Gradle in Action


  3. JavaBeans (also known as POJOs, for Plain Old Java Objects) provide a standard mechanism for creating Java resources that are configurable in a number of ways. Pro Spring 3


  4. Indexing POJOs POJOs (Plain Old Java Objects) typically follow the JavaBean naming pattern for properties that each have a getter and setter method. Solr 3 Enterprise Search Server


  5. You can also define listeners for all these events as annotated POJOs. Spring Batch in Action


  6. Thanks to this mechanism, simple POJOs can be exposed as OSGi services, and OSGi services can be injected into regular Spring beans by using the appropriate elements. Spring Dynamic Modules in Action


  7. Transactional annotation brings transactional semantics to POJOs (plain old Java objects), outside of an EJB (Enterprise JavaBean) Java EE 7 Essentials


  8. These are Plain Old Java Objects (POJOs) that act as if they’re message driven. ActiveMQ in Action


  9. This makes it easy to cache the results of method calls in your POJOs without having to explicitly manage the process of checking for the existence of a cache entry, loading new ones, and expiring old cache entries. Spring Data Modern Data Access for Enterprise Java


  10. It empowered plain old Java objects (POJOs) with powers previously only available using EJB and other enterprise Java specifications. Spring in Action 4th Edition



610. Matching text for the word deserialization:


  1. Passing by reference eliminates a serialization and deserialization for the key/value, a potential speed increase. Pro Hadoop


  2. If the versions are different you may run into issues, for example, not being able to connect or problems with response deserialization. Mastering ElasticSearch


  3. Deserialization of this Bloom filter is demonstrated in the setup method of the following Mapper code. MapReduce Design Patterns


  4. No serialization or deserialization of objects (both memory and processing) Professional Hadoop Solutions


  5. Serialization and deserialization take data from an in-memory form and persist it to a back-end data store. Cassandra High Performance Cookbook


  6. You can also review a Google Code project called hive-json-serde, which can support both serialization and deserialization. Hadoop in Practice


  7. Deserialization is the reverse process of turning a byte stream back into a series of structured objects. Hadoop:The Definitive Guide 3rd Edition


  8. The constructor without any parameters is only for deserialization purposes and should not be used directly. HBase:The Definitive Guide


  9. However, this proved to be inefficient because of the serialization and deserialization costs associated with translating to and from Sawzall’s native type system. Tenzing - A SQL Implementation On The MapReduce Framework


  10. The main purpose of this interface is to provide mechanisms for the serialization and deserialization of data as it is passed across the network or read and written from the disk. Hadoop Beginners Guide
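
As a plain-Java illustration of the serialization/deserialization round trip described above (the Event class here is made up for the example):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class RoundTrip {
    static class Event implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        Event(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        // Serialization: turn the object into a byte stream.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Event("login"));
        }

        // Deserialization: turn the byte stream back into a structured object.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            Event copy = (Event) in.readObject();
            System.out.println(copy.name); // prints "login"
        }
    }
}
```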



611. Matching text for the word nontrivial:


  1. It does, however, add a nontrivial amount of load to the Nagios server. Monitoring with Ganglia


  2. To bring everything together, I take you through the step-by-step development of a nontrivial MapReduce application. Pro Hadoop


  3. Building a system for general-purpose, distributed, parallel computation is nontrivial. HBase in Action


  4. The objective of this part of the book is to present techniques to address nontrivial questions asked about your data, and to create new insights into your data. Hadoop in Practice


  5. A natural question to ask is: can you do anything useful or nontrivial with it? The answer is yes. Hadoop:The Definitive Guide 3rd Edition


  6. Selecting one server type over the others is a nontrivial task, as it depends on your use case. HBase:The Definitive Guide


  7. Decomposing a data processing application into mappers and reducers is sometimes nontrivial. Hadoop in Action


  8. Not to mention the administration overhead, which can be nontrivial if things start breaking in the early hours of the morning. Hadoop Beginners Guide


  9. To demonstrate the use of CLUSTER BY, let’s see a nontrivial example: another way to perform the Word Count algorithm that we introduced in Chapter 1 Programming Hive


  10. Yet all of the parallel databases mentioned above have a nontrivial cost, often coming with seven figure price tags for large installations. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



612. Matching text for the word datacenter:


  1. The combined volume of events is written together in HDFS in datacenter 1 Apache Flume Distributed Log Collection for Hadoop


  2. The underlying data is stored in a scalable NoSQL datastore in each datacenter (see Figure 1) Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  3. The official NVML module is a valuable tool for cluster administrators who manage GPUs in an HPC cluster or datacenter environment. Monitoring with Ganglia


  4. Zones are also the unit of physical isolation: there may be one or more zones in a datacenter, for example, if different applications’ data must be partitioned across different sets of servers in the same datacenter. Spanner - Google’s Globally-Distributed Database


  5. Chapter 8, Multiple Datacenter Deployments: Cassandra is designed to work both when nodes are deployed in a local area network and when nodes are separated by larger geographical distances such as a wide area network. Cassandra High Performance Cookbook


  6. All messages that users write or read are stored in HBase. The system supporting Facebook messages needs to deliver high write throughput, extremely large tables, and strong consistency within a datacenter. HBase in Action


  7. Cluster mirroring in Kafka The Kafka mirroring feature is used for creating the replica of an existing cluster, for example, for the replication of an active datacenter into a passive datacenter. Apache Kafka


  8. Each datacenter has multiple racks (R), and each rack has multiple machines. Hadoop in Action


  9. Figure: Kafka cluster co-located with each datacenter where our user-facing services run. Kafka a Distributed Messaging System for Log Processing


  10. Also, the “peaky” nature of the MapReduce workload made it hard to fully utilize the resources of a datacenter compared to Percolator’s much smoother resource usage. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



613. Matching text for the word hudson:


  1. Hudson started out as an open source project in 2004 at Sun Microsystems. Gradle in Action


  2. The second script iterates through instances of slave objects (http://javadoc.jenkins-ci.org/hudson/slaves/SlaveComputer.htm) Jenkins Continuous Integration Cookbook


  3. For a CI (Continuous Integration) system, we advise you to check out Jenkins4 (which was called Hudson before they changed the name for legal reasons) Pro Spring MVC


  4. Seven years ago, I wrote the first line of code that started this whole project that is now known as Jenkins, and was originally called Hudson. Jenkins The Definitive Guide


  5. The sole purpose of this module is to output test results, in JUnit XML format, to the console upon test completion (YUI supports other output formats that you can use instead; use whatever your build tool understands—for example, Hudson/Jenkins understands JUnit XML) Testable JavaScript


  6. In most cases, the platform was sitting next to a Hudson/Jenkins build server in a continuous integration setup. Sonar Code Quality Testing Essentials


  7. This approach is ideal when a large number of developers are sharing a project, or with continuous integration tools such as Hudson that build and test code in a completely automated and unattended manner. Cassandra High Performance Cookbook


  8. If you want to download the most cutting-edge builds, you can get the source from Hudson, which the Cassandra project uses as its Continuous Integration tool. Cassandra The Definitive Guide


  9. His current topics of research center around project automation, including: build tools (Gradle, Leiningen, Maven, Ant), distributed version control (Git, Mercurial), testing frameworks (Geb, Spock, JUnit, TestNG, Mockito), continuous integration (Jenkins, Hudson, Bamboo) and code quality metrics (Sonar, CodeNarc, PMD) Building and Testing with Gradle


  10. The first book to include detailed screenshots and recipes for using Jenkins continuous integration server (formerly known as Hudson) Selenium 2 Testing Tools Beginner's Guide



614. Matching text for the word fibonacci:


  1. Programming tasks such as calculating factorial, Fibonacci numbers, anagrams, and Sierpinski carpet are just a few of the well-known tasks that can be accomplished through the recursive programming technique. Pro Java 7 NIO.2


  2. Here’s a second implementation of Generator, this time to produce Fibonacci numbers: Thinking in Java 4th Edition


  3. The Fibonacci series is famous as a classic programming exercise. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Another set of problems that are not applicable for MapReduce are recursive problems (for example, a Fibonacci problem) Professional Hadoop Solutions


  5. We chose to calculate a Fibonacci number in our test program because we used a. Java Threads 3rd Edition


  6. The second measures function invocation using a recursive function to calculate the first 35 Fibonacci numbers. Interpreting the Data Parallel Analysis with Sawzall


  7. The running time is analyzed in much the same way as for the program that computes the Fibonacci numbers. Data Structures and Algorithm Analysis in Java 3rd Edition


  8. Let’s start with a very simple example: calculating Fibonacci numbers: Beginning Scala


  9. Fibonacci, also known as Leonardo di Pisa, in his thirteenth century book Liber abaci posed the following question: Two young rabbits, one of each sex, are placed on an island. Java Performance


  10. We'll illustrate the basic steps with a very boring and unrealistic, but very simple classic example: recursively computing fib, the Fibonacci function. Concurrent Programming in Java Design Principles and Pattern
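
The classic recursive definition most of these excerpts refer to, as a small Java sketch (exponential time, so only suitable for small n):

```java
public class Fibonacci {
    // fib(0) = 0, fib(1) = 1, fib(n) = fib(n-1) + fib(n-2)
    static long fib(int n) {
        return n <= 1 ? n : fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        for (int i = 0; i <= 10; i++) {
            System.out.print(fib(i) + " "); // 0 1 1 2 3 5 8 13 21 34 55
        }
        System.out.println();
    }
}
```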



615. Matching text for the word ints:


  1. Finally, you see the use of the += operator to append a string to s, and the use of parentheses to control the order of evaluation of the expression so that the ints are actually summed before they are displayed. Thinking in Java 4th Edition


  2. Mark the channel finish • Read all ints out of the outbound output of the EmbeddedChannel and check if it only. Netty in Action MEAP V08


  3. For example, you can convert an array of primitive ints into an IntStream as follows: Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. These particular tables are sum tables; they add up the values emitted to them, ints or floats as appropriate. Interpreting the Data Parallel Analysis with Sawzall


  5. But the most serious problem with this technique is that when you access an array that is indexed by an enum’s ordinal, it is your responsibility to use the correct int value; ints do not provide the type safety of enums. Effective Java 2nd Edition


  6. Therefore, if you want to do anything with the bytes returned by getAddress(), you need to promote the bytes to ints and make appropriate adjustments. Java Network Programming 4th Edition


  7. Any data type manipulation other than with ints or booleans produced highly varying and misleading times. Java Performance Tuning 2nd Edition


  8. A TermFreqVector instance has several methods for retrieving the vector information, primarily as matching arrays of Strings and ints (the term value and frequency in the field, respectively) Lucene in Action 2nd Edition


  9. By defining these constants as ints, it is possible to assign an invalid value to a variable that is supposed to only be allowed to hold one of the constant values: Java 7 Recipes A Problem-Solution Approach


  10. NOTE: The Random class has methods ints, longs, and doubles that return primitive type streams of random numbers. Java SE 8 for the Really Impatient
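
A short sketch of two of the uses mentioned above: turning an array of primitive ints into an IntStream, and the Random.ints method (the array contents are illustrative):

```java
import java.util.Arrays;
import java.util.Random;

public class IntsDemo {
    public static void main(String[] args) {
        // An array of primitive ints becomes an IntStream via Arrays.stream.
        int[] values = {3, 1, 4, 1, 5};
        System.out.println(Arrays.stream(values).sum()); // 14

        // Random.ints returns a primitive stream of random numbers:
        // here, five values in the range [0, 100).
        new Random().ints(5, 0, 100).forEach(System.out::println);
    }
}
```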



616. Matching text for the word abort:


  1. If a test fails, the project will abort, but if all tests pass, the project continues. Gradle Effective Implementation Guide


  2. The coordinator leader then logs a commit record through Paxos (or an abort if it timed out while waiting on the other participants) Spanner - Google’s Globally-Distributed Database


  3. The do attribute value of abort tells Tsung to abort the session if the match condition is true. Scaling CouchDB


  4. You don’t want to abort processing the file if a line is malformed, so you update a counter and. Hadoop in Practice


  5. Use the timeout because it is important to your SLA, not because you want to abort the execution of long running queries. Elasticsearch The Definitive Guide


  6. This is particularly useful in very long running jobs where the errors may cause you to abort the job. Hadoop Beginners Guide


  7. If the shared edits path is not writable or becomes unavailable for any reason, the namenode process will abort. Hadoop Operations


  8. After you’re finished with the transaction, you can either commit it or abort it by sending the appropriate frame (COMMIT or ABORT), of course with the transaction ID passed as the transaction header. ActiveMQ in Action


  9. Since cleanup is synchronized on the primary lock, it is safe to clean up locks held by live clients; however, this incurs a performance penalty since rollback forces the transaction to abort. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications


  10. This approach is different from most parallel databases which abort unfinished queries upon a node failure and restart the entire query processing (using a replica node instead of the failed node) HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



617. Matching text for the word logn:


  1. There is a recursive and relatively complicated O(N logN) solution to this problem, which we now describe. Data Structures and Algorithm Analysis in Java 3rd Edition



618. Matching text for the word pagination:


  1. As we’ll see in Part III, once you have a working list, adding pagination is trivial. CouchDB The Definitive Guide


  2. Repeat this 11 times, and verify that pagination is displayed. Gradle in Action


  3. Run the script and pipe the output to the more command to control pagination: Cassandra High Performance Cookbook


  4. It allows you to create, modify, and delete databases and documents as you please, and it has advanced pagination and sorting functionality on every screen. Beginning CouchDB


  5. A filter that uses this functionality is the PageFilter, checking if the number of rows to be returned for one iteration in the pagination process is reached, returning true afterward. HBase:The Definitive Guide


  6. And you can use the from and size parameters for pagination: Elasticsearch The Definitive Guide


  7. Always be careful when working with textual data such as capitalization, punctuation, hyphenation, pagination, and other aspects, as they can skew how the data is perceived. Hadoop Beginners Guide


  8. Use pagination to restrict the number of entities retrieved from the database. Java Performance


  9. Java code is presented in various formats as dictated by space and pagination considerations. Fundamental Networking in Java


  10. Each increment of results displayed is called a page, and the process of navigating through the results is called pagination. Pro JPA2



619. Matching text for the word bmi:


  1. Here is an example of a BMI Calculator application page and HTML code written to render this page in a browser as displayed in the following screenshots: Selenium Testing Tools Cookbook



620. Matching text for the word functionalities:


  1. The current mailing-lists are hosted at SourceForge and the search functionalities are subpar. Monitoring with Ganglia


  2. For this purpose, we discussed two useful functionalities: deploying a WAR file to a CloudBees web container and retrieving runtime information about this application. Gradle in Action


  3. A plugin basically adds extra functionalities such as tasks and properties to our project. Gradle Effective Implementation Guide


  4. In addition to that, it covers functionalities that allow us to troubleshoot ElasticSearch, such as describing how segments merging works, how to see what ElasticSearch does beneath its high-level interface, and how to limit the I/O operations. Mastering ElasticSearch


  5. In addition, similar new functionalities ought to be straightforward to implement and maintainable in the long term. Java 8 in Action Lambdas, Streams and Functional-style Programming


  6. We are going to demonstrate the basic functionalities provided by Guava and how it can help with some of the common everyday tasks encountered in programming. Getting Started with Google Guava


  7. To make it work across a cluster of distributed machines, we find that we need to add a number of functionalities: Hadoop in Action


  8. The failover transport connector plays an important role in achieving advanced functionalities such as high availability and load balancing as will be explained in chapter 12 ActiveMQ in Action


  9. With each version, Java increases the functionalities offered to programmers to facilitate the development of concurrent programs. Java 7 Concurrency Cookbook


  10. Via the use of the Calendar add() method, each of these two functionalities can be easily performed. Java 7 Recipes A Problem-Solution Approach



621. Matching text for the word etl:


  1. Chapter 6, Interceptors, ETL, and Routing, explains how to transform data in flight as well as extract information from the payload to use with channel selectors to make routing decisions. Apache Flume Distributed Log Collection for Hadoop


  2. Chukwa: A Monitoring Service Chukwa’s goal is to provide extract, transform, and load (ETL) services for cluster logging data, thereby providing end users with a simple and efficient way to find the logging events that are actually important. Pro Hadoop


  3. They might use a combination of Pig and MapReduce for extract, transform, load (ETL) implementation; MapReduce for the actual data processing; and Hive for analytic queries. Professional Hadoop Solutions


  4. The next recipe, Setting up a "Shadow" data center for running only MapReduce Jobs shows how to use Cassandra's built-in replication to partition a cluster with nodes dedicated for ETL-type workloads and others dedicated for serving low-latency requests. Cassandra High Performance Cookbook


  5. Chapter 3, Extracting and Transforming Data, includes recipes that show basic Hadoop ETL over several different types of data sources. Hadoop RealWorld Solutions Cookbook


  6. Introduction Hadoop MapReduce together with the supportive set of projects makes for a good framework choice to process large text datasets and to perform ETL-type operations. Hadoop MapReduce Cookbook


  7. DataBee An in-house Extract, Transform, Load (ETL) workflow software that is used to provide a common framework for reliable batch processing across all data processing jobs. Hadoop:The Definitive Guide 3rd Edition


  8. For example, we often use MapReduce   in the ETL-like process of taking humongous log files and computing a much smaller and more manageable set of statistics for analysts to look at. Hadoop in Action


  9. There are many ways of building workflows; a common approach is with Extract Transform and Load (ETL) tools such as Pentaho Kettle (http://kettle.pentaho.com) and Spring Batch (http://static.springsource.org/spring-batch) Hadoop Beginners Guide


  10. Those that primarily run ETL-style or other high input to output data ratio MapReduce jobs may prefer the additional bandwidth of a 10 Gb network. Hadoop Operations



622. Matching text for the word summarizes:


  1. It knows what hosts exist, and what metrics are available for those hosts, but it doesn’t make you click through hierarchal lists of metrics to see graphs; rather, it graphically summarizes the entire grid using graphs that combine metrics by cluster and provides sane click-throughs for increased specificity. Monitoring with Ganglia


  2. Figure 10 summarizes the results graphically, considering the impact of scaling ttask as an additional dimension. Omega - flexible, scalable schedulers for large compute clusters


  3. Table 2 summarizes the name-node throughput with respect to the two internal operations. HDFS scalability the limits to growth


  4. For reference, see Figure 3 which summarizes the repetition and definition levels for all atomic fields in our sample records. Dremel - Interactive Analysis of WebScale Datasets


  5. As a result, Spanner requires a scope expression for every read-only transaction, which is an expression that summarizes the keys that will be read by the entire transaction. Spanner - Google’s Globally-Distributed Database


  6. Table 6-5 summarizes the main elements and attributes of the XML Coordinator language. Professional Hadoop Solutions


  7. One of the most fundamental Hadoop jobs at Last.fm takes the incoming listening data and summarizes it into a format that can be used for display purposes on the Last.fm website as well as for input to other Hadoop programs. Hadoop:The Definitive Guide 3rd Edition


  8. Figure 9-2 summarizes the effects of using the KeyValue fields. HBase:The Definitive Guide


  9. Table 10.2 summarizes the read and write operators in Pig Latin. Hadoop in Action


  10. Google Analytics summarizes this data and makes it available to webmasters. Bigtable - A Distributed Storage System for Structured Data



623. Matching text for the word vmware:


  1. The libvirt project has created a common set of tools for managing virtualization resources on different virtualization platforms, currently including: Xen, QEMU, KVM, LXC, OpenVZ, User Mode Linux, VirtualBox, and VMware ESX and GSX. Monitoring with Ganglia


  2. Configuration Files for the Sample Cluster The examples provided in this section were run using the VMware images provided by Cloudera as part of its boot camp (http://www.cloudera.com/hadoop-training-basic) Pro Hadoop


  3. VMware, the company behind Spring and Grails, made significant investments in choosing Gradle. Gradle in Action


  4. Another class of related technologies comes from the world of cloud infrastructures such as EC2, Azure, Eucalyptus and VMWare offerings. Apache Hadoop YARN Yet Another Resource Negotiator


  5. It also includes Nectar-only functionality such as automatic scaling to VMWare virtual machines. Jenkins The Definitive Guide


  6. Cloudera makes the distribution available in a number of different formats: source and binary tar files, RPMs, Debian packages, VMware images, and scripts for running CDH in the cloud. HBase:The Definitive Guide


  7. For the purpose of producing JRockit examples, the author has used the hosted hypervisor VMware Fusion to run a Linux version of JRockit on his home computer. Oracle JRockit The Definitive Guide


  8. In March of 2010, VMWare took over as project sponsor of Redis. Cassandra The Definitive Guide


  9. For performance or scalability testing, you would use real hardware in a cluster configuration.) You run the VM in VMWare, KVM, or VirtualBox, start the Impala service through the Cloudera Manager web interface, and then interact with Impala through the impalashell interpreter or the ODBC and JDBC interfaces. Cloudera Impala


  10. Most of the preconfigured virtual machines (VMs) available are only designed for VMWare, but if you prefer VirtualBox you may find instructions on the Web that explain how to import a particular VM into VirtualBox. Programming Hive



624. Matching text for the word liveness:


  1. Specifically, a per-cluster ResourceManager (RM) tracks resource usage and node liveness, enforces allocation invariants, and arbitrates contention among tenants. Apache Hadoop YARN Yet Another Resource Negotiator


  2. Concurrent programming involves a fundamental tradeoff between safety (avoiding errors such as update conflicts) and liveness (responding quickly to clients) NoSQL Distilled


  3. In most cases, existing replication traffic serves as a passive confirmation of liveness, with no additional communication overhead. Ceph A Scalable, High-Performance Distributed File System


  4. Cascading aims to help developers build powerful applications quickly and simply, through a well-reasoned API, without needing to think in MapReduce, and while leaving the heavy lifting of data distribution, replication, distributed process management, and liveness to Hadoop. Hadoop:The Definitive Guide 3rd Edition


  5. Though it is not important to understand what the code generator has done here in detail, the main idea is that several program points in the assembly code are annotated with liveness information. Oracle JRockit The Definitive Guide


  6. The result is a liveness failure: the program fails to make progress. Effective Java 2nd Edition


  7. In particular, multithreaded servers (and other multithreaded programs) require developers to address concerns that aren’t issues for single-threaded programs, particularly issues of safety and liveness. Java Network Programming 4th Edition


  8. The HotSpot VM’s JIT compilers use a bytecode analysis called Method Liveness that computes for every Java local field whether there’s a bytecode later in the method that might use its value. Java Performance


  9. Liveness failures lead to no behavior — things just stop running. Concurrent Programming in Java Design Principles and Pattern


  10. Percolator uses simple mechanisms to determine the liveness of another transaction. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



625. Matching text for the word derby:


  1. Start the Oozie web application (with an internal Tomcat web server and Derby database) Professional Hadoop Solutions


  2. By default Hive uses Derby to store the metastore, which is an embedded Java relational database. Hadoop in Practice


  3. By default, the metastore service runs in the same JVM as the Hive service and contains an embedded Derby database instance backed by the local disk. Hadoop:The Definitive Guide 3rd Edition


  4. If this is your first experience with databases, we recommend that you use the Apache Derby database that is a part of most versions of JDK 7 Core Java Volume II Advanced Features 9th Edition


  5. Out-of-the-box Hive comes with an open source, lightweight, embedded SQL database called Derby, which is installed and run on the client machine along with Hive. Hadoop in Action


  6. The Derby database engine will be used for the database examples. Java 7 New Features Cookbook


  7. Similarly, the JDBC channel uses an embedded Derby database to store events in a recoverable fashion. Hadoop Beginners Guide


  8. When using the JDBC message store, the default JDBC driver used in ActiveMQ is Apache Derby. ActiveMQ in Action


  9. By default, Hive uses a built-in Derby SQL server, which provides limited, singleprocess storage. Programming Hive


  10. Solution Use an embedded database such as the Derby database. Java 7 Recipes A Problem-Solution Approach



626. Matching text for the word encapsulation:


  1. Encapsulation (sometimes called information hiding) is a key concept in working with objects. Core Java Volume I Fundamentals 9th Edition


  2. Code sources An encapsulation of the location from which certain Java classes were obtained. Java Security 2nd Edition


  3. JavaServer Faces • Faces Flow provides an encapsulation of related views/pages with applicationdefined entry and exit points. Java EE 7 Essentials


  4. On the other hand, most scripting languages lack features that are beneficial for programming complex applications, such as strong typing, encapsulation, and modularity. Core Java Volume II Advanced Features 9th Edition


  5. This concept, known as information hiding or encapsulation, is one of the fundamental tenets of software design [Parnas72] Effective Java 2nd Edition


  6. When applied to fields, this ability to hide members enables an object-oriented design technique known as data encapsulation. Java in a Nutshell 5th Edition


  7. It’s a picture that makes some NoSQL solutions seem perhaps less radical and less scary than we may have thought at first, and more like a natural expression and encapsulation of some of the work that was already being done to manage very large databases. Cassandra The Definitive Guide


  8. These specifications enable encoding of XML content as MIME body parts and encapsulation of those parts within SOAP 1.2 envelopes. Java Performance


  9. Object-oriented concurrent programming differs from multithreaded systems programming in languages such as C mainly due to the encapsulation, modularity, extensibility, security, and safety features otherwise lacking in C. Concurrent Programming in Java Design Principles and Pattern


  10. This encapsulation can help to make code more readable and reusable. Java 7 Recipes A Problem-Solution Approach



627. Matching text for the word subsets:


  1. A common pattern in most languages, frameworks, and databases is using types to distinguish between subsets of your data. CouchDB The Definitive Guide


  2. Filtering is the way to pull subsets back out and deliver them to analysis shops that are interested in just that subset. MapReduce Design Patterns


  3. These subsets (or partitions) are the inputs to the reduce tasks. Professional Hadoop Solutions


  4. Instead of passing the whole dataset, similar to how the EvalFunc class works, the Accumulator interface allows for subsets of the data to be passed to the UDF. Hadoop RealWorld Solutions Cookbook


  5. They don’t need to have all the tuples in a group at one time to perform their operations, and they can work on subsets of the data and produce intermediate values that can be aggregated in the reducer. Hadoop in Practice


  6. Views may also be used to restrict users’ access to particular subsets of tables that they are authorized to see. Hadoop:The Definitive Guide 3rd Edition


  7. If you only need subsets of the data, narrowing the scan’s scope is playing into the strengths of HBase, since data is stored in column families and omitting entire families from the scan results in those storage files not being read at all. HBase:The Definitive Guide


  8. Furthermore, the equivalent transformation property must hold when the combiner is applied to arbitrary subsets of the intermediate data. Hadoop in Action


  9. Through the use of multiple partition columns, it is possible to create a rich hierarchical structure and for large tables with queries that require only small subsets of data it is worthwhile spending some time deciding on the optimal partitioning strategy. Hadoop Beginners Guide


  10. Also, whether you move old data to an “archive” location or delete it outright, the risk of tampering with newer data is reduced since the data subsets are in separate directories. Programming Hive



628. Matching text for the word validly:


  1. An aaload instruction is type safe iff one can validly replace types matching int and an array type with component type ComponentType where ComponentType is a subtype of Object, with ComponentType yielding the outgoing type state. The Java Virtual Machine Specification Java SE 7 Edition


  2. If a client holds a proxy to an object in the server, it is important that the server does not garbage-collect that object until the client releases the proxy (and it can be validly garbage-collected) Java Performance Tuning 2nd Edition



629. Matching text for the word ejbs:


  1. These technologies are suited to mediumto large-scale application development, and include topics such as Enterprise Java Beans (EJBs), Java Server Pages (JSPs), and much more. Java Network Programming and Distributed Computing


  2. Much of the extra work in Enterprise JavaBeans (EJBs), for example, is eliminated through the use of annotations in EJB3.0 Thinking in Java 4th Edition


  3. On the reverse side, Spring also provides simple annotation for injecting Spring beans into EJBs. Pro Spring 3


  4. … a default application name of "bigapp", and defines four modules with names "ejbs/inventory", "ejbs/accounts", "ui/store", and "ui/admin". Java EE 7 Specification


  5. You can use managed beans to capture the data, session scope variables to pass information between pages, button clicks to invoke the business logic in backing EJBs, and (conditional) navigation rules to go from one page to another. Java EE 7 Essentials


  6. For this reason, Java Performance Tuning, Second Edition includes four new chapters: a new chapter on J2EE application tuning in general followed by chapters on tuning JDBC, servlets and JSPs, and EJBs. Java Performance Tuning 2nd Edition


  7. Just as with other APIs such as JDBC, JNDI, EJBs, and so on, it’s common to abstract the use of JMS APIs so as to not intermingle the JMS code with the business logic. ActiveMQ in Action


  8. One general tuning guideline is to set the steady state pool size to the number of hardware threads and maximum size to be equal to the maximum size of the HTTP worker thread pool (plus ORB thread pool size if remote EJBs or MDBs are invoked) Java Performance


  9. As a result, many web applications today sidestep EJBs entirely, going directly from servlets to the database. Pro JPA2


  10. But even though early EJBs were such an easy target, invasive programming could easily be found in earlier versions of Struts, WebWork, Tapestry, and countless other Java specifications and frameworks. Spring in Action 4th Edition



630. Matching text for the word supertype:


  1. In a nutshell, the compiler autoboxed the parameters into a Double and two Integer objects, and then it tried to find a common supertype of these classes. Core Java Volume I Fundamentals 9th Edition


  2. That is, v is a subtype of X if the direct supertype of v is a subtype of X. The Java Virtual Machine Specification Java SE 7 Edition


  3. It’s also possible to go the other way, and use supertype wildcards. Thinking in Java 4th Edition


  4. The behavior of these supertypes may be important in some cases, but we’ll confine our attention to the common supertype here. Gradle Beyond The Basics


  5. Here, the referent can be inserted into any queue whose element type is a supertype of the type T of the referent; T is the lower bound for the wildcard. The Java Language Specification Java SE 7 Edition


  6. In object-oriented polymorphism, two subtypes can be referenced similarly because they implicitly share a common supertype. SQL Antipatterns


  7. Clients of an anonymous class can’t invoke any members except those it inherits from its supertype. Effective Java 2nd Edition


  8. Specifically, we’ll examine the Comparable interface and the Enum class (the supertype of enumerated types, described later in this chapter) and will learn about an important but infrequently used feature of generics known as lowerbounded wildcards. Java in a Nutshell 5th Edition


  9. In this example, ? super Integer means any type that is either an Integer or any supertype of Integer. Java 7 Recipes A Problem-Solution Approach


  10. If the supertype is abstract and has only one abstract method, you don’t even have to give the method name. Java SE 8 for the Really Impatient
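
A minimal sketch of the lower-bounded wildcard (? super Integer) idea from the excerpts; the addNumbers helper is made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class SupertypeDemo {
    // Writing is contravariant: the target list may hold Integer
    // or any supertype of Integer, so adding Integers is always safe.
    static void addNumbers(List<? super Integer> target) {
        for (int i = 1; i <= 3; i++) {
            target.add(i);
        }
    }

    public static void main(String[] args) {
        List<Number> numbers = new ArrayList<>();
        List<Object> objects = new ArrayList<>();
        addNumbers(numbers); // Number is a supertype of Integer
        addNumbers(objects); // so is Object
        System.out.println(numbers + " " + objects);
    }
}
```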



631. Matching text for the word xmpp:


  1. Finally, we describe interfacing directly with CouchDB from Erlang, which can be a useful technique if you want to integrate CouchDB storage with a server for non-HTTP protocols, like SMTP (email) or XMPP (chat) CouchDB The Definitive Guide


  2. By 2005, Extensible Messaging and Presence Protocol (XMPP) had become a respected standard for open instant messaging, and one of the foremost implementations was the Erlang-based ejabberd server package by Alexey Shchepin. RabbitMQ in Action


  3. Most did not stick, and those that did, like SOAP and XMPP, are on the complex side of things. ZeroMQ


  4. Using the XMPP protocol with Storm to send alerts and notifications Use case. Storm Blueprints Patterns for Distributed Real-time Computation


  5. Tip  Other ideas for Puppet report processors include RSS feeds for new reports, IRC, XMPP or instant messaging, or SMS notifications of new reports. Pro Puppet


  6. The chapter begins with the XMPP adapters that enable plugging into instant messaging systems. Spring Integration in Action


  7. As you can see from the output in the following listing, a number of different adapter options are available, from ftp, jdbc, and jms to some new adapters that were added in recent releases of Roo, such as twitter and xmpp. Spring Roo in Action


  8. There is also coverage of command-line tools, the command agent, use of XMPP, JConsole, and the web console. ActiveMQ in Action



632. Matching text for the word getters:


  1. Basic Getters and Setters The methods in this section get and set basic types: Pro Hadoop


  2. You have to manually define the constructor that initializes all fields, two getters, and two setters. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. This object has two private member variables and getters and setters for each variable. MapReduce Design Patterns


  4. This is another data container class (getters/setters methods are omitted for brevity) Professional Hadoop Solutions


  5. The first part of the implementation is straightforward: there are two Text instance variables, first and second, and associated constructors, getters, and setters. Hadoop:The Definitive Guide 3rd Edition


  6. Note that the getters listed in Table 3-1 for the Put class only retrieve what you have set beforehand. HBase:The Definitive Guide


  7. Add getters/setters by right-clicking on the editor pane and selecting Insert Code (use the Control + I shortcut on Mac) Java EE 7 Essentials


  8. This tells the encoder to call the getName, getSalary, and getHireDay property getters and write the resulting values into the object expression. Core Java Volume II Advanced Features 9th Edition


  9. Hard-line object-oriented programmers feel that such classes are anathema and should always be replaced by classes with private fields and public accessor methods (getters) and, for mutable classes, mutators (setters): Effective Java 2nd Edition


  10. Encapsulate the private fields by making getters and setters to access them. Java 7 Recipes A Problem-Solution Approach
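
A bare-bones example of the getter/setter convention the excerpts describe (the class and field names are illustrative):

```java
public class Employee {
    // Private state exposed only through JavaBean-style accessors.
    private String name;
    private double salary;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public double getSalary() { return salary; }
    public void setSalary(double salary) { this.salary = salary; }
}
```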



633. Matching text for the word jframe:


  1. The Swing version of this class is called JFrame and extends the Frame class. Core Java Volume I Fundamentals 9th Edition


  2. Swing basics Most Swing applications will be built inside a basic JFrame, which creates the window in whatever operating system you’re using. Thinking in Java 4th Edition


  3. Suppose you want to save a JFrame object to a file so that you can retrieve it later. Core Java Volume II Advanced Features 9th Edition


  4. All the recipes of this chapter use a JFrame-based application. Java 7 New Features Cookbook


  5. After expanding the references, you will eventually see that they are being held by an Icon, which in turn is held by a JFrame. Java 7 Recipes A Problem-Solution Approach
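
A minimal Swing window built around JFrame, roughly the shape the excerpts assume (title and label text are illustrative):

```java
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class FrameDemo {
    public static void main(String[] args) {
        // Swing components should be created on the event dispatch thread.
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Hello Swing");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new JLabel("A minimal JFrame"));
            frame.setSize(300, 100);
            frame.setVisible(true);
        });
    }
}
```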



634. Matching text for the word threadsafe:


  1. AtomicLong provides threadsafe methods to modify and retrieve the values of a simple long variable. Programming Concurrency on the JVM


  2. This receives the message and stuffs it into a threadsafe SharedQueue instance inside of the consumer object. RabbitMQ in Action


  3. A set of classes that provide threadsafe operations without synchronization. Java Threads 3rd Edition


  4. As a result, you do have to worry about whether message objects are threadsafe. Programming in Scala 2nd Edition


  5. An entity manager and persistence context are not required to be threadsafe. Java EE 7 Essentials


  6. When writing code that must be threadsafe, you must obtain locks (using a synchronized method or statement) when required. Java in a Nutshell 5th Edition


  7. As long as the containing class accesses the unsafe class only in a threadsafe fashion and as long as it never lets a reference to the private field leak out into another object, the class is safe. Java Network Programming 4th Edition


  8. If you are accessing the values of the map from different threads, you should make sure that the values are threadsafe. Java 7 Recipes A Problem-Solution Approach


  9. However, the object is mutated, and that is not threadsafe. Java SE 8 for the Really Impatient


  10. If the appropriate threadsafe counterpart is not detected, the protective mutex is enabled as the last resort. Understanding MySQL Internals
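
A small sketch of threadsafe counting with AtomicLong, along the lines of the first excerpt (the Counter class is illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

public class Counter {
    // AtomicLong gives threadsafe increments without explicit locking.
    private final AtomicLong hits = new AtomicLong();

    long record() {
        return hits.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        Runnable task = () -> { for (int i = 0; i < 1000; i++) counter.record(); };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.hits.get()); // always 2000
    }
}
```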



635. Matching text for the word serde:


  1. It allows you to use a custom serializer and deserializer (known as SerDe) for different file formats. Professional Hadoop Solutions


  2. We explicitly tell the SerDe that a tab character separates each field and a newline character separates each record. Hadoop RealWorld Solutions Cookbook


  3. As we will describe in the later sections, this can be achieved by providing a jar that implements the SerDe java interface to Hive. Hive A Petabyte Scale Data Warehouse Using Hadoop


  4. You can also review a Google Code project called hive-json-serde, which can support both serialization and deserialization. Hadoop in Practice


  5. When acting as a deserializer, which is the case when querying a table, a SerDe will deserialize a row of data from the bytes in the file to objects used internally by Hive to operate on that row of data. Hadoop:The Definitive Guide 3rd Edition


  6. Hive uses an input format to split input streams into records, an output format to format records into output streams (i.e., the output of queries), and a SerDe to parse records into columns, when reading, and encodes columns into records, when writing. Programming Hive



636. Matching text for the word rsa:


  1. For example, suppose that you need to factor a large integer—a computation that is required for breaking the RSA cipher. Core Java Volume I Fundamentals 9th Edition


  2. The NOW team had clustered together Sun workstations using Myrinet switches and specialized software to win RSA key-cracking challenges and break a number of sort benchmark records. Monitoring with Ganglia


  3. In addition, the rsa public key generated in step 2 of the previous recipe must be distributed and installed on every node in the cluster using the ssh-copy-id command. Hadoop RealWorld Solutions Cookbook


  4. Pick the default file in which to save the key, likely ~/.ssh/id_rsa. Scaling CouchDB


  5. First, generate an RSA key pair by typing the following in the hadoop user account: Hadoop:The Definitive Guide 3rd Edition


  6. For example, the RSA algorithm (the encryption algorithm invented by Rivest, Shamir, and Adleman) depends on the difficulty of factoring large numbers. Core Java Volume II Advanced Features 9th Edition


  7. Having verified that SSH is correctly installed on all nodes of the cluster, we use sshkeygen on the master node to generate an RSA key pair. Hadoop in Action


  8. This is accomplished from the command prompt by executing the following command: keytool -genkey -keystore mySrvKeystore -keyalg RSA. Java 7 New Features Cookbook


  9. If the key is an RSA key, the default signature algorithm is “MD5withRSA”. Java in a Nutshell 5th Edition


  10. Warning: Permanently added 'localhost' (RSA) to the list of known hosts. Hadoop Beginners Guide
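
A short sketch of generating an RSA key pair with the standard JCA API (the 2048-bit key size is chosen here just for illustration):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class RsaKeys {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048); // 2048-bit modulus
        KeyPair pair = generator.generateKeyPair();
        System.out.println("Algorithm: " + pair.getPublic().getAlgorithm());
        System.out.println("Private key format: " + pair.getPrivate().getFormat());
    }
}
```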



637. Matching text for the word grams:


  1. This could simply be the measured weight in grams or kilograms. Mahout in Action


  2. Whenever a collection of data is sent by TCP using data grams, a timer is started. Java Network Programming and Distributed Computing


  3. The n-gram approach divides terms in the index into grams: word fragments are built of one or more letters. Mastering ElasticSearch


  4. This recipe will demonstrate how to use the Java MapReduce API to calculate n-grams over news archives. Hadoop RealWorld Solutions Cookbook


  5. The size of the n-grams (the number of contiguous words used to create features) Hadoop in Practice


  6. Query time search-as-you-type, Index time optimizations, Ngrams for partial matching, Index time search-as-you-type, Preparing the index, Querying the field, Edge n-grams and postcodes, Ngrams for compound words. Elasticsearch The Definitive Guide


  7. For example, a workflow consisting of careful extraction of a person’s name from the title followed by name-specific variant generation is likely to yield much higher-quality answers than a workflow that only generates all possible n-grams of the title of a page. Hadoop in Action


  8. Tokenizers that create shingles (n-grams from multiple tokens) from another TokenStream. Lucene in Action 2nd Edition


  9. Like ngrams, but looks for n-grams that begin with the second array of words in each outer array. Programming Hive


  10. Please remember that using n-grams will make your index a bit larger. Apache Solr 4 Cookbook
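
The examples describe building n-grams, that is, contiguous sequences of n words or letters. Here is a minimal Java sketch of word n-grams, assuming simple whitespace tokenization; real analyzers such as Lucene's shingle filter are more involved.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NGrams {
    // Build word n-grams: contiguous groups of n tokens, as in the examples above.
    static List<String> ngrams(String text, int n) {
        String[] tokens = text.split("\\s+");
        List<String> grams = new ArrayList<>();
        for (int i = 0; i + n <= tokens.length; i++) {
            grams.add(String.join(" ", Arrays.copyOfRange(tokens, i, i + n)));
        }
        return grams;
    }

    public static void main(String[] args) {
        // Bigrams (n = 2) over a short sentence.
        System.out.println(ngrams("to be or not to be", 2));
        // -> [to be, be or, or not, not to, to be]
    }
}
```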



638. Matched text for the word hadoopdb:


  1. Recently, several hybrid approaches (e.g., HadoopDB and Hive) between parallel databases and Hadoop have been introduced to the database community. Tajo A Distributed Data Warehouse System on Large Clusters


  2. HadoopDB [3] is a research system in this hybrid category. Dremel - Interactive Analysis of WebScale Datasets


  3. HadoopDB: an architectural hybrid of MapReduce and DBMS technologies for analytical workloads” Spanner - Google’s Globally-Distributed Database


  4. HadoopDB[15] reuses most of Hive’s system, except, it uses traditional database instances in each of the nodes to store data instead of using a distributed file system. Hive A Petabyte Scale Data Warehouse Using Hadoop


  5. HadoopDB: an architectural hybrid of MapReduce and DBMS technologies for analytical workloads. Tenzing - A SQL Implementation On The MapReduce Framework


  6. In this paper, we describe our implementation of and experience with HadoopDB, whose goal is to serve as exactly such a hybrid system. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



639. Matched text for the word factorial:


  1. Listing 11.1 prints the stack trace of a recursive factorial function. Core Java Volume I Fundamentals 9th Edition


  2. Programming tasks such as calculating factorial, Fibonacci numbers, anagrams, and Sierpinski carpet are just a few of the well-known tasks that can be accomplished through the recursive programming technique. Pro Java 7 NIO.2


  3. Unfortunately, a join over n tables will have n-factorial combinations of join orders to examine. High Performance MySQL 3rd Edition


  4. But if you’ve read the earlier chapters of this book, then you know that Java 8 streams provide an even simpler declarative way of defining factorial, as the next listing shows. Java 8 in Action Lambdas, Streams and Functional-style Programming


  5. Suppose you have a Java program to compute the factorial of. Java Threads 3rd Edition


  6. Here is the definition of a method using that type, which calculates the factorial of a passed integer value: Programming in Scala 2nd Edition


  7. But to start, suppose we wanted to adapt our program to print out both the absolute value of a number and the factorial of another number. Functional Programming in Scala


  8. Here is an implementation of a factorial calculator, where we use a conventional technique of calling a second, nested method to do the work: Programming Scala


  9. The factorial of an integer is the product of the number and all positive integers less than the number. Java in a Nutshell 5th Edition


  10. I will take the factorial example used in the text. Java Performance Tuning 2nd Edition
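
Since several of the examples contrast recursive and stream-based factorial implementations, here is a minimal Java sketch of both; long overflows quickly for larger inputs, so this is illustrative only.

```java
import java.util.stream.LongStream;

public class Factorial {
    // Classic recursive definition, as in the stack-trace example above.
    static long factorialRecursive(long n) {
        return n <= 1 ? 1 : n * factorialRecursive(n - 1);
    }

    // Declarative Java 8 stream version mentioned in the examples.
    static long factorialStream(long n) {
        return LongStream.rangeClosed(1, n).reduce(1, (a, b) -> a * b);
    }

    public static void main(String[] args) {
        System.out.println(factorialRecursive(5)); // 120
        System.out.println(factorialStream(5));    // 120
    }
}
```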



640. Matched text for the word virtualized:


  1. In the cloud, virtualized GPUs will provide thin clients such as smartphones and tablets and access to a high-performance graphics experience. Monitoring with Ganglia


  2. Virtualized and shared resources can be harder to troubleshoot, especially because you don’t have access to the underlying physical hardware to inspect and measure what’s happening. High Performance MySQL 3rd Edition


  3. Handling Network Disk IO Issues One of the biggest problems with using virtualized hardware is that you are generally sharing a disk with other tenants, which exacerbates the disk slowness mentioned previous because everyone is competing for disk IO. MongoDB The Definitive Guide 2nd Edition


  4. Volumes are persistent virtualized block devices independent of any particular instance. bk-admin-guide-cloud-latest


  5. One example, although very uncommon, is based on virtualized servers. HBase:The Definitive Guide


  6. Removing the OS layer from a virtualized Java setup is less problematic than one might think. Oracle JRockit The Definitive Guide


  7. Now along comes Hadoop and everything you read says commodity, scale-out, share-nothing hardware, but what about the existing investment in blades, shared storage systems, and virtualized infrastructure? Hadoop, generally speaking, does not benefit from virtualization. Hadoop Operations


  8. How It Works Java Swing’s JDesktopPane is a container component similar to a JPanel except that it manages mini internal frames (JInternalFrame) similar to a virtualized desktop. Java 7 Recipes A Problem-Solution Approach


  9. If you’re running on virtualized hardware (such as Amazon’s EC2), then you may find that a sufficiently large instance isn’t available. MongoDB in Action


  10. Many enterprises are shifting away from deploying their analytical databases on high-end proprietary machines, and moving towards cheaper, lower-end, commodity hardware, typically arranged in a shared-nothing MPP architecture, often in a virtualized environment inside public or private “clouds” HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



641. Matched text for the word dag:


  1. A DAG is a data structure from computer science and contains the following two elements: Gradle in Action


  2. Gradle builds up a Directed Acyclic Graph (DAG) with all the tasks before any task is executed. Gradle Effective Implementation Guide


  3. Efforts to combat these artifacts achieved mixed results; while spreading TaskTrackers across racks made intra-rack reads of shared datasets more likely, the shuffle of records between map and reduce tasks would necessarily cross racks, and subsequent jobs in the DAG would have fewer opportunities to account for skew in their ancestors. Apache Hadoop YARN Yet Another Resource Negotiator


  4. Compile to Dryad data flow plan (DAG), and executed by the runtime job manager. MapReduce and Parallel DBMSs Friends or Foes


  5. In mathematics and computer science, a Directed Acyclic Graph (DAG) is a directed graph with no cycles. Professional Hadoop Solutions


  6. If it is a multi-project build, there is also a DAG of project objects, one for each project. Gradle Beyond The Basics


  7. We will also be discussing other advanced Hadoop features such as using DistributedCache for distribute data, using Hadoop Streaming for quick prototyping of Hadoop computations, and using Hadoop counters to report custom metrics for your computation as well as adding job dependencies to manage simple DAG-based workflows of Hadoop MapReduce computations. Hadoop MapReduce Cookbook


  8. JobControl When there is more than one job in a MapReduce workflow, the question arises: how do you manage the jobs so they are executed in order? There are several approaches, and the main consideration is whether you have a linear chain of jobs or a more complex directed acyclic graph (DAG) of jobs. Hadoop:The Definitive Guide 3rd Edition


  9. Configuration is the phase in which those task objects are assembled into an internal object model, usually called the DAG (for directed acyclic graph) Building and Testing with Gradle


  10. The logical plan generator then creates a DAG of relational operators, the query plan. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
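
The examples define a DAG as a directed graph with no cycles, used for example to order tasks before execution. Below is a minimal sketch, assuming made-up task names, of a DAG as adjacency lists plus a Kahn-style topological sort, which is roughly how a build tool can decide a valid task order.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DagTopoSort {
    public static void main(String[] args) {
        // A tiny task DAG: an edge a -> b means "a must run before b".
        Map<String, List<String>> edges = new HashMap<>();
        edges.put("compile", Arrays.asList("test", "jar"));
        edges.put("test", Arrays.asList("assemble"));
        edges.put("jar", Arrays.asList("assemble"));
        edges.put("assemble", Collections.<String>emptyList());

        // Kahn's algorithm: repeatedly take a node with no remaining incoming edges.
        Map<String, Integer> inDegree = new HashMap<>();
        for (String node : edges.keySet()) {
            inDegree.putIfAbsent(node, 0);
        }
        for (List<String> targets : edges.values()) {
            for (String target : targets) {
                inDegree.merge(target, 1, Integer::sum);
            }
        }

        Deque<String> ready = new ArrayDeque<>();
        inDegree.forEach((node, degree) -> { if (degree == 0) ready.add(node); });

        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String node = ready.poll();
            order.add(node);
            for (String next : edges.get(node)) {
                if (inDegree.merge(next, -1, Integer::sum) == 0) {
                    ready.add(next);
                }
            }
        }
        System.out.println(order); // [compile, test, jar, assemble], one valid topological order
    }
}
```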



642. Matched text for the word utils:


  1. When the fillCup() method finds there’s not enough coffee, rather than returning an error code, it invokes StmUtils’s retry() method. Programming Concurrency on the JVM


  2. Depending on your platform and installation type, your browser may have already been opened with the correct link; if not, you can access Futon, the built-in administration interface to CouchDB, by going to the URL: http://127.0.0.1:5984/_utils/index.html. Futon is a web-based interface to the main functionality in CouchDB and provides support for editing the configuration information, creating databases, documents, design documents (and therefore views, shows and lists) and starting and requesting replication. Getting Started with CouchDB


  3. Visit http://127.0.0.1:5984/_utils/database.html?blog/_all_docs and you should see a document with an ID corresponding to the one you just saved. CouchDB The Definitive Guide


  4. The perfect match is the class CharUtils from the Apache Commons Lang library. Gradle in Action


  5. Nonetheless, the user could modify the method Utils.paint in his game to use the method setRelativeSize because the paint method expects a list of Resizable objects as argument. Java 8 in Action Lambdas, Streams and Functional-style Programming


  6. Open your favorite web browser, and navigate to the URL http://127.0.0.1:5984/_utils. Beginning CouchDB


  7. We have an implementation available for you in the provided code under the utils package. HBase in Action


  8. You’ll make frequent use of the StringUtils class for tokenization. Hadoop in Practice


  9. This is evident in MaxTemperatureReducer, where we have to convert the input value into an integer (using a convenience method in HadoopUtils) and then the maximum value back into a string before it’s written out. Hadoop:The Definitive Guide 3rd Edition


  10. Raise a DTrace USDT probe from BTrace action method • DTraceProbe method in BTraceUtils class. BTrace Java Platform Observability by Bytecode Instrumentation



643. Matched text for the word verifier:


  1. It must either reject class files that do not verify via type checking, or else consistently failover to the type inferencing verifier whenever type checking fails. The Java Virtual Machine Specification Java SE 7 Edition


  2. Hence any program that passes the verifier must have defined semantics. JSR-133 Java Memory Model and Thread


  3. Chapter 3 This chapter discusses the memory protections built into the Java language, how those protections provide a measure of security, and how they are enforced by the bytecode verifier. Java Security 2nd Edition


  4. The reason it all works is that the Java bytecode verifier does not check the declarations, anyway! The Java compiler checks, but not the verifier. Programming in Scala 2nd Edition


  5. The verifier objects because the result of new Super() cannot be passed as an argument in place of a formal parameter of type Hyper, because Super is not a subclass of Hyper. The Java Language Specification Java SE 7 Edition


  6. If the verifier reports the content of the component to be invalid, the component immediately regains focus. Core Java Volume II Advanced Features 9th Edition


  7. For example, the casting operation in Java compiles down to a bytecode that checks that the class can actually be cast to the right thing and the verifier ensures that there’s no code path that could put an unchecked value into a variable. Beginning Scala


  8. In this situation, the verify( ) method of the HostnameVerifier is called to determine whether the connection should proceed or not. Java in a Nutshell 5th Edition



644. Matched text for the word decoupling:


  1. Standardizing the metrics simplifies monitoring by decoupling agents from performance monitoring applications, allowing measurements to be made once and shared among different monitoring applications. Monitoring with Ganglia


  2. The main motivation for decoupling the namespace from the data is the scalability of the system. HDFS scalability the limits to growth


  3. Complete decoupling: Whenever a method works with a class instead of an interface, you are limited to using that class or its subclasses. Thinking in Java 4th Edition


  4. Such an abstraction provides decoupling between how data is stored (HDFS files, HBase tables) from how it is seen by an application (table format) Professional Hadoop Solutions


  5. A service provider framework is a system in which multiple service providers implement a service, and the system makes the implementations available to its clients, decoupling them from the implementations. Effective Java 2nd Edition


  6. The first general idea is that failure detection should be flexible, which is achieved by decoupling it from the application being monitored. Cassandra The Definitive Guide


  7. Many enterprise filesystems support advanced features such as snapshots to minimize the window of time required to get a consistent capture of the data on disk and decoupling it from the time required to copy said data elsewhere. Hadoop Operations


  8. Decoupling the application processes from the I/O operations means that, ideally, your application does not spend time waiting for I/O. Java Performance Tuning 2nd Edition


  9. Without enterprise messaging, developers wouldn’t have an option beyond synchronous calls for application development, and the concept of decoupling an application design wouldn’t exist in nearly the same form. ActiveMQ in Action


  10. It may seem strange at first, but decoupling the transaction logic from the operation logic makes this class more portable to the Java EE environment. Pro JPA2



645. Matched text for the word lambdas:


  1. JRuby allows us to fluently pass lambdas where closures are expected, so we took advantage of that here. Programming Concurrency on the JVM


  2. Scala also provides placeholder notation when defining anonymous functions (aka, lambdas) Scala in Depth


  3. Lambdas and closures are different concepts, but they’re closely related. Scala in Action


  4. Linear interpolation Linear interpolation is the last smoothing model that takes the values of the lambdas provided in the configuration and uses them to calculate weights of trigrams, bigrams and unigrams. Mastering ElasticSearch


  5. It introduces a new notation (anonymous functions, or lambdas) that enables you to write just. Java 8 in Action Lambdas, Streams and Functional-style Programming


  6. Ruby has a collection of overlapping features that allow passing blocks, Procs, and lambdas as parameters to methods. Beginning Scala


  7. Clearly, you can repeat that process if you have more methods or lambdas that yield Optional values. Java SE 8 for the Really Impatient
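
Because the examples revolve around passing lambdas (anonymous functions) where a piece of behavior is expected, here is a minimal Java 8 sketch comparing an anonymous class with the lambda notation the examples mention; the word list is made up.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LambdaDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("stream", "lambda", "closure");

        // Pre-Java-8 style: an anonymous class supplies the comparison behavior.
        words.sort(new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Java 8 lambda: the same behavior as a concise anonymous function.
        words.sort((a, b) -> Integer.compare(a.length(), b.length()));

        System.out.println(words); // [stream, lambda, closure]
    }
}
```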



646. Matched text for the word inlining:


  1. A more sophisticated optimization is the elimination (or “inlining”) of function calls. Core Java Volume I Fundamentals 9th Edition


  2. Of course, if a method is big, then your code begins to bloat, and you probably wouldn’t see any performance gains from inlining, since any improvements will be dwarfed by the amount of time spent inside the method. Thinking in Java 4th Edition


  3. Inlining requires all attribute values to be stored in the index many times, once for each time a document ID is stored. High Performance MySQL 3rd Edition


  4. Better performance could be achieved by generating new code to represent the appropriate composition of the user’s functions, and then applying traditional optimizations such as inlining to the resulting code. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  5. The inlining process must preserve the semantics of the method invocation. The Java Language Specification Java SE 7 Edition


  6. A method may even contain parts of other methods due to inlining. Oracle JRockit The Definitive Guide


  7. These methods are also candidates for inlining at runtime by a just-in-time compiler ( JIT) or similar optimization tool. Java in a Nutshell 5th Edition


  8. Compiler inlining is another frequently used technique that can significantly improve compiled code. Java Performance Tuning 2nd Edition


  9. The most basic classes of optimizations are simple identity transformations, constant folding, common subexpression elimination, and inlining of functions. Java Performance


  10. Support for inlining the name of the schema with the table name is nonstandard. Pro JPA2



647. Matched text for the word incrementing:


  1. Metrics submitted using a “positive” slope generate RRDs that are created as COUNTERs; however, this mechanism is not ideal for situations involving incrementing values that submit on each iteration (i.e., Apache httpd page serving counts without log-scraping) Monitoring with Ganglia


  2. Figure 14.12 Setting a dynamic, incrementing version number at the initial phase of the build pipeline. Gradle in Action


  3. Other options are incrementing the previous value or use an auto-increment value from another source. Cassandra High Performance Cookbook


  4. The distinct IP count is calculated by incrementing a counter whenever we see an IP that does not equal a previously seen IP. Hadoop RealWorld Solutions Cookbook


  5. First, CouchDB looks at the incrementing part of the revision number (the part before the "-") and the document with the highest number wins. Scaling CouchDB


  6. Sqoop supports two types, append, which works for numerical data that’s incrementing over time, such as auto-increment keys; and lastmodified, which works on timestamped data. Hadoop in Practice


  7. Job, Task, and Task Attempt IDs The format of a job ID is composed of the time that the jobtracker (not the job) started and an incrementing counter maintained by the jobtracker to uniquely identify the job to that instance of the jobtracker. Hadoop:The Definitive Guide 3rd Edition


  8. Periodically the leader will initiate a map update by incrementing the map epoch and using the Paxos update protocol to distribute the update proposal to other monitors, simultaneously revoking leases. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  9. On subsequent runs the document already exists so the script update is applied instead, incrementing the views counter. Elasticsearch The Definitive Guide


  10. In reality, the primary key would much more likely be an auto-incrementing, numeric employee ID. Hadoop Beginners Guide
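
Most of the examples are about incrementing a counter of some kind (view counts, distinct IPs, auto-increment keys). A minimal Java sketch using AtomicLong, which increments safely even when several threads update the same counter; the "views" framing is only for illustration.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterDemo {
    // An incrementing counter that is safe to update from multiple threads.
    private static final AtomicLong views = new AtomicLong();

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            // incrementAndGet() adds one and returns the new value atomically.
            System.out.println("views = " + views.incrementAndGet());
        }
    }
}
```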



648. Matched text for the word hostnames:


  1. Note that we use IP addresses, and not hostnames in the topology. HBase Administration Cookbook


  2. Such a poller needs only a list of hostnames that specifies at least one host per cluster. Monitoring with Ganglia


  3. I’ve seen DNS resolution failures transiently appear, and as a general rule, I now use IP addresses instead of hostnames in the configuration files. Pro Hadoop


  4. The dfs-exclude.txt file contains the DataNode hostnames, one per line, that are to be decommissioned from the cluster. Hadoop Operations and Cluster Management Cookbook


  5. Add the hostnames of the nodes that are to be decommissioned to the exclude file. Hadoop MapReduce Cookbook


  6. A web server log is a good example of a set of records that is not normalized (for example, the client hostnames are specified in full each time, even though the same client may appear many times), and this is one reason that logfiles of all kinds are particularly well-suited to analysis with MapReduce. Hadoop:The Definitive Guide 3rd Edition


  7. After you have started the cluster, you should verify that all the region servers have registered themselves with the master and appear in the appropriate table with the expected hostnames (that a client can connect to) HBase:The Definitive Guide


  8. Client List The most straightforward way to connect to a cluster is to maintain a list of the addresses or hostnames of the servers in the cluster and cycle through them on the client side. Cassandra The Definitive Guide


  9. With the block IDs and datanode hostnames, the client can now contact the most appropriate datanode directly and read the block data it needs. Hadoop Operations


  10. The simplest way to get up and running is through the Cloudera Manager application, where you can bootstrap the whole process of setting up a Hadoop cluster with Impala just by specifying a list of hostnames for the cluster. Cloudera Impala
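
The examples repeatedly contrast hostnames with IP addresses (DNS resolution failures, configuration files listing hostnames). A minimal Java sketch that resolves a hostname with the standard InetAddress API; the hostname used here is only an example.

```java
import java.net.InetAddress;

public class ResolveHostname {
    public static void main(String[] args) throws Exception {
        // Resolve a hostname to an IP address via DNS (or the local hosts file).
        InetAddress address = InetAddress.getByName("localhost");
        System.out.println(address.getHostName() + " -> " + address.getHostAddress());
    }
}
```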



649. Matched text for the word mike:


  1. The blog post http://bit.ly/flumePerfPt1 from Flume committer Mike Percy should give you some good starting points. Apache Flume Distributed Log Collection for Hadoop


  2. O’Reilly, including, but not limited to, Mike Loukides, Julie Steele, and Melanie Yarbrough, who gave me the opportunity and helped me turn the raw text into a good looking book. Getting Started with CouchDB


  3. After you show it to your coworker, Mike, he says he wants to join forces and bring the application to the next level by adding more advanced features. Gradle in Action


  4. More information about this structure can be found in a great post by Mike McCandless at http://blog.mikemccandless. Mastering ElasticSearch


  5. A sincere thanks to the many people who read the early drafts of these chapters, and provided comments, edits, insights, and ideas — specifically Mike Daconta, Ralph Perko, Praveena Raavichara, Frank Tyler, and Brian Uri. Professional Hadoop Solutions


  6. Mike Loukides, this book’s editor, and the rest of the team at O’Reilly Media were very responsive and helpful. Scaling CouchDB


  7. I am grateful to my editor, Mike Loukides, and his colleagues at O’Reilly for their help in the preparation of this book. Hadoop:The Definitive Guide 3rd Edition


  8. Thanks to my editor, Mike Loukides, for being a charming conversationalist at dinner in San Francisco. Cassandra The Definitive Guide


  9. There’s No Place Like 127.0.0.1 Mike (not his real name) was tasked with setting up a new Hadoop cluster. Hadoop Operations


  10. Thanks to Mike Loukides for his confidence in Gradle as a technology and his help in publishing this book under O’Reilly’s name. Building and Testing with Gradle



650. Matched text for the word preprocessing:


  1. Some analyses join data from multiple input sources, often after a Sawzall preprocessing step or two. Interpreting the Data Parallel Analysis with Sawzall


  2. Chapter 9, Mass Text Data Processing, explains how to use Hadoop and Mahout to process large text datasets, and how to perform data preprocessing and loading operations using Hadoop. Hadoop MapReduce Cookbook


  3. If your source data is multiline XML or JSON form, for example, you may want to consider a preprocessing step. Hadoop in Practice


  4. Parameter substitution occurs as a preprocessing step before the script is run. Hadoop:The Definitive Guide 3rd Edition


  5. A lot of data processing tasks involve record-oriented preprocessing and postprocessing. Hadoop in Action


  6. In such a situation, it is preferable to either use a splitable format such as SequenceFile, or, if you cannot avoid receiving the file in the other format, do a preprocessing step that converts it into a splitable format. Hadoop Beginners Guide


  7. The preprocessing pipeline uses one table to store raw imagery. Bigtable - A Distributed Storage System for Structured Data


  8. The answer is that our encodings are ready to use without any preprocessing and are even designed to allow random access; both to the elements describing the columns and the dictionaries. PowerDrill - Processing a Trillion Cells per Mouse Click


  9. In this more complex example of Example 3-9, we first establish a configuration for myPmd which is a preprocessing step prior to task executions. Building and Testing with Gradle


  10. While grep data is randomly generated and requires no preprocessing, UserVisits needs to be repartitioned by destinationURL and indexed by visitDate for all databases during the load in order to achieve better performance on analytical queries (Hadoop would not benefit from such repartitioning) HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



651. Matched text for the word hexadecimal:


  1. The body is shown in hexadecimal along with a String representation (for us humans to read, in this case the Hello World message) Apache Flume Distributed Log Collection for Hadoop


  2. In hexadecimal notation, you use a p, not an e, to denote the exponent. Core Java Volume I Fundamentals 9th Edition


  3. If all IP addresses are encoded as a zero-padded, fixedlength hexadecimal string, the primary lexical ordering issue is addressed. Pro Hadoop


  4. We need the hexadecimal representation of the public key, the path to the secret key ring file with our private key, and the passphrase used to protect the private key. Gradle Effective Implementation Guide


  5. If you are using binary keys or values and need to enter them in the shell, use double-quote'd hexadecimal representation. HBase in Action


  6. Accessing the counter directly gives you the byte array representation, with the shell printing the separate bytes as hexadecimal values. HBase:The Definitive Guide


  7. We use hexadecimal numbers throughout this section to denote bytes.) Then, it contains a sequence of objects, in the order in which they were saved. Core Java Volume II Advanced Features 9th Edition


  8. Instead of using a hexadecimal literal, for example, the literal bit pattern can be used. Java 7 New Features Cookbook


  9. Integer literals can also be expressed in hexadecimal or octal notation. Java in a Nutshell 5th Edition


  10. Also, there are functions for converting integers to strings in different bases (e.g., hexadecimal) Programming Hive
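
Given that the examples cover hexadecimal literals, hexadecimal string conversion, and hexadecimal floating-point notation with a p exponent, here is a minimal Java sketch touching each of those.

```java
public class HexDemo {
    public static void main(String[] args) {
        int fromLiteral = 0xCAFE;                        // hexadecimal integer literal
        System.out.println(fromLiteral);                 // 51966
        System.out.println(Integer.toHexString(255));    // ff
        System.out.println(Integer.parseInt("ff", 16));  // 255

        // Hexadecimal floating point: 'p' marks the binary exponent, as noted above.
        double hexFloat = 0x1.8p1;                       // 1.5 * 2^1
        System.out.println(hexFloat);                    // 3.0
    }
}
```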



652. Matched text for the word extensibility:


  1. For specialized environments, gmond’s extensibility and extensive library of modules are likely to be a better option. Monitoring with Ganglia


  2. Our solution is a new parallel scheduler architecture built around shared state, using lock-free optimistic concurrency control, to achieve both implementation extensibility and performance scalability. Omega - flexible, scalable schedulers for large compute clusters


  3. Either you choose full flexibility and extensibility but get weak project standardization, tons of boilerplate code, and no support for dependency management by picking Ant; or you go with Maven, which offers a convention over configuration approach and a seamlessly. Gradle in Action


  4. In the future, we plan to cover in more depth such areas as formal algebraic specification, joins, extensibility mechanisms, etc. Dremel - Interactive Analysis of WebScale Datasets


  5. Although many publications emphasize the fact that Hadoop hides infrastructure complexity from business developers, you should understand that Hadoop extensibility is not publicized enough. Professional Hadoop Solutions


  6. There are some components that are designed for user extensibility such as custom types and partitioners. Cassandra High Performance Cookbook


  7. In terms of extensibility, Java Serialization has some support for evolving a type, but it is brittle and hard to use effectively (Writables have no support; the programmer has to manage them himself) Hadoop:The Definitive Guide 3rd Edition


  8. Fundamental to Pig Latin’s design philosophy is its extensibility through user-defined functions (UDFs ), and there’s a well-defined set of APIs   for writing UDFs. Hadoop in Action


  9. Tenzing supports a mostly complete SQL implementation (with several extensions) combined with several key characteristics such as heterogeneity, high performance, scalability, reliability, metadata awareness, low latency, support for columnar storage and structured data, and easy extensibility. Tenzing - A SQL Implementation On The MapReduce Framework


  10. Gradle DSL, not through lots of literal Groovy code—is core to Gradle’s strategy of providing high extensibility with low complexity. Building and Testing with Gradle



653. Matched text for the word resize:


  1. As you can see in the figure, the title bar and the surrounding decorations, such as resize corners, are drawn by the operating system and not the Swing library. Core Java Volume I Fundamentals 9th Edition


  2. When sending a smaller amount of data, you can adjust the length to fit—you do not need to resize the data buffer. Java Network Programming and Distributed Computing


  3. For example, it would be nice if the interface had a setRelativeSize method that takes as argument a growth factor to resize a shape. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. This is a good time to resize your data file sizes. MapReduce Design Patterns


  5. The only problem with this implementation is that an estimate of the maximum heap size is required in advance, but typically this is not a problem (and we can resize if necessary) Data Structures and Algorithm Analysis in Java 3rd Edition


  6. Resize the table vertically until the scroll bar shows up. Core Java Volume II Advanced Features 9th Edition


  7. If the setShape method's argument is null, the window will resize to the default for the given system, typically a rectangle. Java 7 New Features Cookbook


  8. These tasks to resize running job flows are not currently available from the AWS Console and need to be performed through the API or command line tools. Hadoop Beginners Guide


  9. The capacity is the number of elements the collection can hold before that collection needs to resize its internal memory to be larger. Java Performance Tuning 2nd Edition


  10. Because the paintComponent() method will be called during a resizing of the window, the clearRect() method will dynamically fill the background to gray as we resize the window. Java 7 Recipes A Problem-Solution Approach



654. Matched text for the word histogram:


  1. The master also maintains statistics about the progress of computation and the state of the graph, such as the total size of the graph, a histogram of its distribution of out-degrees, the number of active vertices, the timing and message traffic of recent supersteps, and the values of all user-defined aggregators. Pregel  -  A System for Large-Scale Graph Processing


  2. The count must be the textual representation of an object that the aggregator service expects: a number for DoubleValueSum, a whole number for the LongValue series, an arbitrary string for the StringValue series, and a whole number for UniqueValueCount and ValueHistogram. Pro Hadoop


  3. During query execution, the query dispatcher computes a histogram of tablet processing times. Dremel - Interactive Analysis of WebScale Datasets


  4. We've created the filter facet as an example, but as you may have guessed the API provides multiple possibilities of using faceting, just like the REST API does from the terms facet, through query, range, geo, statistical to histogram faceting. Mastering ElasticSearch


  5. If you group logins by the hour of the day and perform a count of the number of records in each group, you can plot these counts on a histogram and recognize times when your website is more active. MapReduce Design Patterns


  6. Hadoop aggregate package provides reducer and combiner implementations for simple aggregate operations such as sum, max, unique value count, and histogram. Hadoop MapReduce Cookbook


  7. Dapper user interface presents a simple frequency histogram over that metric space. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  8. You’re using the Linux graphing utility gnuplot4 to plot your data in a stacked histogram form. Hadoop in Practice


  9. Familiarity with basic statistical concepts (e.g., histogram, correlation) will help the reader appreciate the more advanced data processing examples. Hadoop in Action


  10. The tenuring distribution is a histogram showing the ages of objects in the young generation’s survivor spaces. Java Performance
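
Several of the examples build a histogram by grouping records into buckets and counting them (for instance, logins by hour of day). A minimal Java sketch using the streams groupingBy/counting collectors; the login hours are made up.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class LoginHistogram {
    public static void main(String[] args) {
        // Hour of day for a handful of logins (illustrative data only).
        List<Integer> loginHours = Arrays.asList(9, 9, 10, 14, 14, 14, 23);

        // Group by hour and count occurrences per bucket: a simple histogram.
        Map<Integer, Long> histogram = loginHours.stream()
                .collect(Collectors.groupingBy(h -> h, TreeMap::new, Collectors.counting()));

        histogram.forEach((hour, count) -> System.out.println(hour + "h: " + count));
        // prints 9h: 2, 10h: 1, 14h: 3, 23h: 1 (one per line)
    }
}
```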



655. Matched text for the word parallelized:


  1. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. MapReduce - Simplied Data Processing on Large Clusters


  2. The second motivator is that such operations can often be parallelized. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. Distributed grep: Grep, a very powerful tool that uses regular expressions for finding lines of text of interest, is easily parallelized by applying a regular expression match against each line of input and only outputting lines that match. MapReduce Design Patterns


  4. Pig offers a high-level language (Pig Latin) to specify highly parallelized data processing on Hadoop clusters, thus leveraging the MapReduce framework. Professional Hadoop Solutions


  5. That means this input file can be parallelized by any number of lines, processed independently, and aggregated to produce exactly the same result. HBase in Action


  6. Doing so would give constant-time lookups in the worst case, and like cuckoo hashing, the lookup could be parallelized to simultaneously check the bounded set of possible locations. Data Structures and Algorithm Analysis in Java 3rd Edition


  7. MapReduce decomposes work submitted by a client into small parallelized map and reduce workers, as shown in figure 1.4 Hadoop in Practice


  8. This was the slowest part of the recovery process and because there are many HLogs per server, it could be parallelized. Apache Hadoop Goes Realtime at Facebook


  9. Not all algorithms can be parallelized There are entire classes of problems that cannot easily be parallelized. Hadoop Operations


  10. Ideally, the data analysis system should also have a robust mechanism for allowing the user to write user defined functions (UDFs) and queries that utilize UDFs should automatically be parallelized across the processing nodes in the shared-nothing cluster. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
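
The distributed-grep example above (matching a regular expression against each line independently) is exactly the kind of work that is easily parallelized. A minimal single-machine Java sketch using a parallel stream; it only illustrates the per-line independence, not a full MapReduce job, and the log lines are invented.

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class ParallelGrep {
    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
                "error: disk failure on node 3",
                "info: checkpoint complete",
                "error: timeout talking to namenode");

        Pattern pattern = Pattern.compile("^error:");

        // Each line is matched independently, so the work can run in parallel.
        List<String> matches = lines.parallelStream()
                .filter(line -> pattern.matcher(line).find())
                .collect(Collectors.toList());

        matches.forEach(System.out::println);
    }
}
```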



656. Matched text for the word doug:


  1. Hadoop was created by Doug Cutting, who now works as an architect at Cloudera and serves as chair of the Apache Software Foundation. Monitoring with Ganglia


  2. Hadoop rides on the back, sweat, and love of Doug Cutting, and many people of Yahoo! Inc. Pro Hadoop


  3. Shortly after the release of the paper, a free and open source software pioneer by the name of Doug Cutting started working on a MapReduce implementation to solve scalability in another project he was working on called Nutch, an effort to build an open source search engine. MapReduce Design Patterns


  4. In 2002, while developing a project called Nutch (a search engine project focused on crawling, indexing, and searching Internet web pages), Doug Cutting and Mike Cafarella were struggling with a solution for processing a vast amount of information. Professional Hadoop Solutions


  5. Okay, maybe you wouldn’t, and we surely didn’t; but Doug Cutting and Mike Cafarella did. HBase in Action


  6. My team was making discoveries similar to those that Doug Cutting and others at Nutch had made several years earlier regarding how to efficiently store and manage terabytes of crawled and analyzed data. Hadoop in Practice


  7. The project’s creator, Doug Cutting, explains how the name came about: Hadoop:The Definitive Guide 3rd Edition


  8. Doug Cutting got it all started, and Yahoo had the foresight to support it early on. Hadoop in Action


  9. Avro was created by Doug Cutting, most famous perhaps for creating Apache Hadoop, the implementation of Google’s MapReduce algorithm. Cassandra The Definitive Guide


  10. Thanks, Doug At the same time, Doug Cutting was working on the Nutch open source web search engine. Hadoop Beginners Guide



657. Matched text for the word unsynchronized:


  1. You have to carefully study the source code and hope that future versions do not introduce unsynchronized mutators. Core Java Volume I Fundamentals 9th Edition


  2. That way, there’s no chance of accidentally exposing the unsynchronized version. Thinking in Java 4th Edition


  3. How does a synchronized method behave in conjunction with an unsynchronized. Java Threads 3rd Edition


  4. Unsynchronized Persistence Contexts mean a persistence context does not have to be enlisted in a transaction. Java EE 7 Essentials


  5. A reader/writer lock is a lock that allows unsynchronized reads from the data it protects, but enforces exclusiveness for writes to the data. Oracle JRockit The Definitive Guide


  6. The StringBuilder class, added in release 1.5, is an unsynchronized replacement for StringBuffer, which is now obsolete.) Effective Java 2nd Edition


  7. An example of this technique might be a web server that uses an unsynchronized LogFile class but gives each separate thread its own separate log so no resources are shared between the individual threads. Java Network Programming 4th Edition


  8. A non-static increment( ) method is synchronized separately on each this object, so the updates remain unsynchronized across the two threads. Java Performance Tuning 2nd Edition


  9. A synchronized method typically requires greater call overhead than an unsynchronized method. Concurrent Programming in Java Design Principles and Pattern


  10. Methods can be completely unsynchronized for performance, or automatically locked and managed by the container. Pro JPA2
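
To make the synchronized-versus-unsynchronized contrast in the examples concrete, here is a minimal Java sketch: two threads bump the same counter, once through an unsynchronized method (updates can be lost) and once through a synchronized one.

```java
public class SyncVsUnsync {
    private int unsafeCount = 0;
    private int safeCount = 0;

    void incrementUnsynchronized() { unsafeCount++; }           // updates may be lost
    synchronized void incrementSynchronized() { safeCount++; }  // one thread at a time

    public static void main(String[] args) throws InterruptedException {
        SyncVsUnsync demo = new SyncVsUnsync();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                demo.incrementUnsynchronized();
                demo.incrementSynchronized();
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println("unsynchronized: " + demo.unsafeCount); // often less than 200000
        System.out.println("synchronized:   " + demo.safeCount);   // always 200000
    }
}
```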



658. Matched text for the word notifies:


  1. Each time a window needs to be redrawn, no matter what the reason, the event handler notifies the component. Core Java Volume I Fundamentals 9th Edition


  2. TestListener implementation that notifies user about test suite execution time. Gradle in Action


  3. Each participant then notifies the coordinator of its prepare timestamp. Spanner - Google’s Globally-Distributed Database


  4. If the feature evaluates to be true, it emits the feature name as the key and notifies whether the product is within the first 10,000 products as the value. Hadoop MapReduce Cookbook


  5. When each DataNode has completed writing the block locally, the blocks are moved from temporary storage into permanent storage, and each DataNode asynchronously notifies the NameNode of their block storage (steps A, B, and C) Hadoop in Practice


  6. The flag is checked in a separate thread every three seconds, and if set, it notifies the tasktracker of the current task status. Hadoop:The Definitive Guide 3rd Edition


  7. If not, the client notifies the NameNode of the corrupt replica and then fetches a different replica of the block from another DataNode. The Hadoop Distributed File System


  8. The model object then notifies the list of the changes, and the list repaints itself. Core Java Volume II Advanced Features 9th Edition


  9. This notifies the phaser that this thread has finished the actual phase, but it won't participate in the future phases, so the phaser won't have to wait for it to continue. Java 7 Concurrency Cookbook


  10. Ensure that every method causing state changes that affect the truth value of any waited-for condition notifies threads waiting for state changes, causing them to wake up and recheck their guard conditions. Concurrent Programming in Java Design Principles and Pattern
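
The last example states the classic rule: every state change that could make a waited-for condition true notifies the waiting threads. A minimal Java wait/notifyAll sketch of that pattern follows; the one-slot mailbox is only an illustration.

```java
public class Mailbox {
    private String message; // null means "empty"

    // Wait until a message is present, then take it.
    public synchronized String take() throws InterruptedException {
        while (message == null) {   // recheck the guard condition after waking up
            wait();
        }
        String result = message;
        message = null;
        notifyAll();                // state changed: the "empty" condition is now true
        return result;
    }

    // Wait until the mailbox is empty, then put a message into it.
    public synchronized void put(String newMessage) throws InterruptedException {
        while (message != null) {
            wait();
        }
        message = newMessage;
        notifyAll();                // state changed: the "present" condition is now true
    }

    public static void main(String[] args) throws Exception {
        Mailbox box = new Mailbox();
        new Thread(() -> {
            try { box.put("hello"); } catch (InterruptedException ignored) { }
        }).start();
        System.out.println(box.take()); // hello
    }
}
```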



659. Matched text for the word predef:


  1. The scala.Predef object is automatically imported into scope by Scala. Scala in Depth


  2. Scala Predef (part of the standard library) maps println to Console.println for you so you don’t have to prefix it with Console when using it. Scala in Action


  3. Note Scala implicitly imports members of packages java.lang and scala, as well as the members of a singleton object named Predef, into every Scala source file. Programming in Scala 2nd Edition


  4. In addition, by default Scala imports two packages, the scala.Predef. Programming Scala Tackle Multi-Core Complexity on the Java Virtual Machine


  5. In the next chapter, we’ll learn about the Scala type hierarchy—in particular, the Predef object that includes many useful definitions. Programming Scala


  6. These could equally well have been placed into the scala package object, but Predef was introduced before Scala had package objects.) Scala for the Impatient


  7. Because println is used so frequently, it’s part of Scala’s Predef, the predefined stuff that’s automatically part of every program. Beginning Scala



660. Matched text for the word parallelize:


  1. The issues of how to parallelize the computation, distribute the data, and handle failures conspire to obscure the original simple computation with large amounts of complex code to deal with these issues. MapReduce - Simplied Data Processing on Large Clusters


  2. We’d like to parallelize the regular expression search across a larger body of text. MapReduce Design Patterns


  3. If all the pictures are stored in Hadoop in the form of a sequence file, then you can use a simple map-only job to parallelize execution. Professional Hadoop Solutions


  4. In return, there is the possibility to parallelize the processing within a given record without. Interpreting the Data Parallel Analysis with Sawzall


  5. It’s a distributed system made up of a distributed filesystem and it offers a way to parallelize and execute programs on a cluster of machines (see figure 1.1) Hadoop in Practice


  6. A critical advantage of declustered replication is the ability to parallelize failure recovery. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  7. The Tenzing scheduler can intelligently parallelize operations to make hash joins run faster. Tenzing - A SQL Implementation On The MapReduce Framework


  8. To effectively parallelize the query across all the mappers, you need a strategy to partition the table into segments each mapper will retrieve. Hadoop Beginners Guide


  9. Tuning the Number of Mappers and Reducers Hive is able to parallelize queries by breaking the query into one or more MapReduce jobs. Programming Hive


  10. Furthermore, data analysis workloads tend to consist of many large scan operations, multidimensional aggregations, and star schema joins, all of which are fairly easy to parallelize across nodes in a shared-nothing network. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



661. Matched text for the word emulator:


  1. A disk format supported by VirtualBox virtual machine monitor and the QEMU emulator. bk-compute-adminguide-grizzly


  2. However, if you want to play with Android apps in a sandbox on your PC, consider downloading the Android SDK (http://developer.android.com/sdk/index.html), and use an emulator and a tool such as adb (http://developer.android.com/guide/developing/ tools/adb.html) to upload and install apps. Jenkins Continuous Integration Cookbook


  3. In the example, the emulator gradually forms a more and more complex expression on the stack, and when it is time to pop it and return it, the expression in its entirety can be used to form code. Oracle JRockit The Definitive Guide


  4. We can run automated tests on a simulator/emulator or on a real device using iOS and Android drivers. Selenium Testing Tools Cookbook


  5. If you’re on Windows, you’ll first have to get the Cygwin POSIX emulator and then install Git. Cassandra The Definitive Guide


  6. Replacing the external entity with an emulator that is deployed within benchmark setup is often required to address this issue. Java Performance


  7. Use the Eclipse IDE to build and compile your Android project, and then deploy the application to the emulator. Java 7 Recipes A Problem-Solution Approach


  8. TPC-E traditionally has three components – a customer emulator, a market emulator, and a DBMS running stored SQL procedures. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications


  9. Quick EMUlator (QEMU) One of the hypervisors supported by OpenStack, generally used for development purposes. OpenStack Operations Guide


  10. Important preliminary points: While you can use the Android emulator for the Android parts of the chapter, it is highly recommended that you have a real device that you can use. Selenium 2 Testing Tools Beginner's Guide



662. Matched text for the word cgi:


  1. Get the latest, stable, HBase-supported Hadoop release from Hadoop's official site, http://www.apache.org/dyn/closer.cgi/hadoop/common/ HBase Administration Cookbook


  2. The Nagios UI also supports custom CGI headers and footers, which make it possible to accomplish rollover popups of the action_url icon containing graphs from the Ganglia graph.php. Monitoring with Ganglia


  3. The arrival of dynamically generated Web pages changed completely what could be done with a Web site, through Common Gateway Interface (CGI) scripts written in languages like PERL. Java Network Programming and Distributed Computing


  4. To get started, grab the latest binary release by pointing your browser to http://jmeter.apache.org/download_jmeter.cgi. Performance Testing With JMeter 2.9


  5. This submission passes through the Common Gateway Interface (CGI) provided on all Web servers. Thinking in Java 4th Edition


  6. Visit the link http://www.apache.org/dyn/closer.cgi/ hbase/, and select the nearest mirror (the suggested mirror on the top is the optimal choice) Hadoop Operations and Cluster Management Cookbook


  7. When form data are sent to a web server, it does not matter whether the data are interpreted by a servlet, a CGI script, or some other server-side technology. Core Java Volume II Advanced Features 9th Edition


  8. The Open Directory interface is a simple form with one input field named search; input typed in this field is sent to a program at http://search.dmoz.org/cgi-bin/search, which does the actual search. Java Network Programming 4th Edition


  9. For example, Common Gateway Interface (CGI) scripts for HTTP servers typically need to be started for every message sent to the server. Java Performance Tuning 2nd Edition


  10. Ant is available from the Apache Ant website at the following URL: http://ant.apache.org/ bindownload.cgi. ActiveMQ in Action



663. Matched text for the word functor:


  1. The functor would be something that converts cats into dogs. Scala in Depth


  2. Individual bytecode instructions are represented in Prolog as terms whose functor is the name of the instruction and whose arguments are its parsed operands. The Java Virtual Machine Specification Java SE 7 Edition


  3. So we just wrote a generic unzip that works not just for lists, but for any functor! Functional Programming in Scala


  4. A functor is a way to go from one category to another that preserves the categorical structure of its domain. Steps in Scala An Introduction to Object-Functional Programming



664. Matched text for the word kdc:


  1. The installation includes setting up a Kerberos admin server and a Key Distribution Center (KDC) HBase Administration Cookbook


  2. The Kerberos KDC also has copies of everyone’s secret key, and facilitates all communications. Professional Hadoop Solutions


  3. We can get one by authenticating to the KDC, using kinit: Hadoop:The Definitive Guide 3rd Edition


  4. Each realm can have its own settings including the location of the KDC on the network and supported encryption algorithms. Hadoop Operations



665. Matched text for the word renderer:


  1. Add to this the fact that we still cannot change the renderer easily; doing so means changing the class that launches the application. Pro Spring 3


  2. The basic mechanism for display is contained in the rendererComponent method of class Table. Programming in Scala 2nd Edition


  3. This is a perfect job for a Copy task with a filter attached to it, but that filter has a lot of work to do! It has to be a full-featured Markdown parser and HTML renderer. Gradle Beyond The Basics


  4. The code below declares a class that has a constructor with a single argument, viewRenderer, which sets an immutable field of the same name. Functional Programming Patterns in Scala and Clojure


  5. The set of available attributes supported by a JSF component is determined by the combination of the UIComponent and Renderer for that tag. Java EE 7 Essentials


  6. Although the JList class can display strings and icons automatically, you need to install a list cell renderer into the JList object for all custom drawing. Core Java Volume II Advanced Features 9th Edition


  7. To set this up, suppose there is a generic Pic interface for images, and a Renderer interface describing services that accept a URL pointing to image data and ultimately return a Pic. Concurrent Programming in Java Design Principles and Pattern


  8. This time the Jackson renderer complains about the Address class exposing a copy property, which in turn causes a recursion. Spring Data Modern Data Access for Enterprise Java
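
One of the examples notes that a JList needs a list cell renderer installed for custom drawing. Below is a minimal Swing sketch, purely illustrative, that installs a renderer which merely upper-cases each entry.

```java
import javax.swing.*;
import java.awt.Component;

public class ListRendererDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JList<String> list = new JList<>(new String[] {"alpha", "beta", "gamma"});

            // A custom renderer controls how each cell is drawn; here it just upper-cases the text.
            list.setCellRenderer(new DefaultListCellRenderer() {
                @Override
                public Component getListCellRendererComponent(JList<?> l, Object value,
                        int index, boolean selected, boolean focused) {
                    Component c = super.getListCellRendererComponent(l, value, index, selected, focused);
                    setText(String.valueOf(value).toUpperCase());
                    return c;
                }
            });

            JFrame frame = new JFrame("Renderer demo");
            frame.add(new JScrollPane(list));
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.pack();
            frame.setVisible(true);
        });
    }
}
```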



666. Matched text for the word decoupled:


  1. As with most contemporary distributed file systems, HDFS is based on an architecture with the namespace decoupled from the data. HDFS scalability the limits to growth


  2. The beauty of systems that use the changes API is that they are decoupled. CouchDB The Definitive Guide


  3. By using a plugin, functionality is decoupled from the core Gradle build logic. Gradle Effective Implementation Guide


  4. Ceph directly addresses the issue of scalability while simultaneously achieving high performance, reliability and availability through three fundamental design features: decoupled data and metadata, dynamic distributed metadata management, and reliable autonomic distributed object storage. Ceph A Scalable, High-Performance Distributed File System


  5. That means your client code is entirely decoupled from your cluster deployment. HBase in Action


  6. In effect, the objects no longer know how to compare themselves; instead, this information is completely decoupled from the objects in the array. Data Structures and Algorithm Analysis in Java 3rd Edition


  7. The collection mechanism for metrics is decoupled from the component that receives the updates, and there are various pluggable outputs, including local files, Ganglia, and JMX. Hadoop:The Definitive Guide 3rd Edition


  8. Because each row is served by exactly one server, HBase is strongly consistent, and using its multiversioning can help you to avoid edit conflicts caused by concurrent decoupled processes or retain a history of changes. HBase:The Definitive Guide


  9. Accrual failure detection is based on two primary ideas: that failure detection should be flexible by being decoupled from the application being monitored, and outputting a continuous level of “suspicion” regarding how confident the monitor is that a node has failed. Cassandra The Definitive Guide


  10. As the backbone of an event-driven architecture—The decoupled, asynchronous style of architecture described in the previous point allows the broker itself to scale much further and handle considerably more clients via tuning, additional memory allocation, and so on (known as vertical scalability) instead of only relying upon the ability of the number of broker nodes to be increased to handle many more clients (known as horizontal scalability) ActiveMQ in Action



667. Matched text for the word nonstandard:


  1. Eclipse is written in Java, but since it uses a nonstandard windowing library, it is not quite as portable as Java itself. Core Java Volume I Fundamentals 9th Edition


  2. The following settings, in a separate configuration file, configure the second gmond instance to listen on the nonstandard port: Monitoring with Ganglia


  3. The build tool’s response is to add scripting functionality through nonstandard extension mechanisms. Gradle in Action


  4. Relational database vendors often look to nonstandard solutions for this problem, such as storing the time-series data as an opaque blob and providing proprietary query extensions for its introspection. HBase in Action


  5. We can extend binomial queues to support some of the nonstandard operations that. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. As they are implemented, MapWritable and SortedMapWritable use positive byte values for custom types, so a maximum of 127 distinct nonstandard Writable classes can be used in any particular MapWritable or SortedMapWritable instance. Hadoop:The Definitive Guide 3rd Edition


  7. Next, set the ensemble locations and client port, if nonstandard, in hbase-site.xml, or add a suitably configured zoo.cfg to HBase’s CLASSPATH. HBase:The Definitive Guide


  8. JAX-WS also facilitates, using a nonstandard programming model, the publishing and invoking of a web service that uses WS-* specifications such as WS-Security, WS-Secure Conversation, and WS-Reliable Messaging. Java EE 7 Essentials


  9. If a field does not implement Comparable, or you need to use a nonstandard ordering, you can use an explicit Comparator instead. Effective Java 2nd Edition


  10. This relieves much of the frustration developers have faced in lacking control flow in Ant or being forced into plug-in development in Maven to accomplish nonstandard tasks. Building and Testing with Gradle



668. Matched text for the word robustness:


  1. The AM scalability benchmark measures single-job robustness by saturating the AM with container bookkeeping duties. Apache Hadoop YARN Yet Another Resource Negotiator


  2. The tradeoff of being confined to the MapReduce framework is the ability to process your data with distributed computing, without having to deal with concurrency, robustness, scale, and other common challenges. MapReduce Design Patterns


  3. Chapter 3: Application Programmer Interface: Introduction; Connecting to a Cassandra server; Creating a keyspace and column family from the client; Using MultiGet to limit round trips and overhead; Writing unit tests with an embedded Cassandra server; Cleaning up data directories before unit tests; Generating Thrift bindings for other languages (C++, PHP, and others); Using the Cassandra Storage Proxy "Fat Client"; Using range scans to find and remove old data; Iterating all the columns of a large key; Slicing columns in reverse; Batch mutations to improve insert performance and code robustness; Using TTL to create columns with self-deletion times; Working with secondary indexes. Cassandra High Performance Cookbook


  4. The handling of undefined values in Sawzall adds a similar level of robustness. Interpreting the Data Parallel Analysis with Sawzall


  5. Apache Nutch offers features such as politeness, robustness, and scalability. Hadoop MapReduce Cookbook


  6. There are, however, several notable exceptions to this progress such as disk seek time and operating system robustness. The anatomy of a large-scale hypertextual Web search engine


  7. Though the sync() operation is designed to not unduly tax HDFS, it does have some overhead, so there is a trade-off between data robustness and throughput. Hadoop:The Definitive Guide 3rd Edition


  8. On the other hand, its robustness and scalability make it suitable for even the most demanding jobs at Yahoo and Facebook. Hadoop in Action


  9. This means far more robustness for your applications by simply utilizing a feature of ActiveMQ. ActiveMQ in Action


  10. While simple applications of monitor methods are indeed simple, the chances for error (for example slipped conditions) can increase dramatically when additional factors are addressed, especially performance and robustness in the face of thread cancellation. Concurrent Programming in Java Design Principles and Pattern



669. Matched text for the word speedup:


  1. You’ve run it several times on a single core, but you’re eager to see the speedup on the new machine. Programming Concurrency on the JVM


  2. The disadvantages are that the speedup isn’t nearly as large as it sounds, and your learning chain will require more maintenance. Mahout in Action


  3. Our comparison point, however, is a single monolithic scheduler, so even a single-digit speedup is helpful. Omega - flexible, scalable schedulers for large compute clusters


  4. We refer to the ratio of the Hadoop run to the incremental run (Incoop dynamic update) as speedup (in work and in time) Incoop MapReduce for Incremental Computations


  5. No reduce step means an enormous speedup! To leave only edges where both source and destination are neighbors of the seed node, repeat the join: Hadoop:The Definitive Guide 3rd Edition


  6. The speedup graph shows that for the early part of the curve, we get linear scaling; the benefits of adding more nodes decreases after this point. Hadoop in Action


  7. At worst, buffering has no impact on performance, while at best it can give you an order of magnitude speedup or more. Java Network Programming 4th Edition


  8. I have seen all of these yield huge improvements to applications, sometimes a 20-fold speedup. Java Performance Tuning 2nd Edition


  9. Only when you blow past the data volume that a single server could reasonably handle or saturate the I/O channels of your storage array can you fully appreciate the performance speedup of Impala over competing solutions and the effects of the various tuning techniques. Cloudera Impala


  10. Because individual shards don’t need to communicate with each other, the resulting speedup is nearly linear. Web search for a planet the google cluster architecture
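
One example claims that buffering I/O can yield an order-of-magnitude speedup or more. Here is a minimal, rough Java timing sketch comparing unbuffered and buffered reads of the same file; the file path is a placeholder and the numbers will vary by machine.

```java
import java.io.*;

public class BufferingSpeedup {
    // Read a stream one byte at a time and report how long it took, in milliseconds.
    static long time(InputStream in) throws IOException {
        long start = System.nanoTime();
        while (in.read() != -1) { /* consume */ }
        in.close();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws IOException {
        File file = new File("some-large-file.bin"); // placeholder path

        long unbuffered = time(new FileInputStream(file));
        long buffered   = time(new BufferedInputStream(new FileInputStream(file)));

        System.out.println("unbuffered: " + unbuffered + " ms");
        System.out.println("buffered:   " + buffered + " ms");
        System.out.println("speedup:    " + (double) unbuffered / Math.max(1, buffered) + "x");
    }
}
```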



670. Matching texts for the word fsck:


  1. The name hbck comes from the HDFS fsck command, which is the tool to check HDFS for inconsistencies. HBase Administration Cookbook


  2. Use the following steps to check the status of an HDFS cluster with hadoop fsck: Hadoop Operations and Cluster Management Cookbook


  3. File systems come with a file-system check utility like fsck that checks for the consistency of a file system. HBase in Action


  4. You can use the fsck command to find the block size and block locations of a particular file path in the HDFS. Hadoop MapReduce Cookbook


  5. There are a few other operations that can be performed via the HTTP interface, such as fsck for retrieving any issues with the filesystem, and contentSummary, which returns statistical information about a directory, such as quota limits, size, and more: Hadoop in Practice


  6. However, there are tools to perform filesystem maintenance, such as df and fsck, that operate on the filesystem block level. Hadoop:The Definitive Guide 3rd Edition


  7. But that was not the case: the append in Hadoop 0.19.0 was so badly suited that a hadoop fsck / would report the HDFS as being corrupt because of the open logfiles HBase kept. HBase:The Definitive Guide


  8. By default fsck will ignore files still open for writing by a client. Hadoop in Action


  9. The tool is hadoop fsck, which can be used to examine and fix filesystem problems. Hadoop Beginners Guide


  10. Checking Filesystem Integrity with fsck There are a few pathological conditions that can occur in HDFS. Hadoop Operations



671. Matching texts for the word vals:


  1. Make value classes final, and mark all their fields final (vals in Scala) Programming Concurrency on the JVM


  2. This can sometimes boil down to using objects and vals instead of classes and defs. Scala in Depth


  3. Yes, you can share the settings and vals from the build.scala file to build.sbt files. Scala in Action


  4. Scala encourages you to lean towards vals, but ultimately reach for the best tool given the job at hand. Programming in Scala 2nd Edition


  5. We can assign the elements of a tuple into multiple vars or vals in parallel, as shown in this example: Programming Scala Tackle Multi-Core Complexity on the Java Virtual Machine


  6. All we need to do is define a class that defines some vals as constructor arguments, which will cause the passed-in values to be assigned to public vals. Functional Programming Patterns in Scala and Clojure


  7. There are a few exceptions to the rule that you must initialize vals and vars when they are declared. Programming Scala


  8. Because sumExpr, prodExpr, and factor reference each other and, thus, are recursive, we must define the type of at least one of the three vals so the type inferencer can do its work. Beginning Scala



672. Matching texts for the word subtasks:


  1. Suppose you have a processing task that naturally decomposes into subtasks, like this: Core Java Volume I Fundamentals 9th Edition


  2. Subtasks generated in tasks run by a given worker thread are pushed onto that workers own deque. A Java ForkJoin Framework


  3. It’s an implementation of the ExecutorService interface, which distributes those subtasks to worker threads in a thread pool, called ForkJoinPool. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. We propose techniques for controlling the granularity of tasks by dividing large tasks into smaller subtasks, which can be reused even when the large tasks cannot. Incoop MapReduce for Incremental Computations


  5. As you get more comfortable writing MapReduce programs and take on more ambitious data processing tasks, you’ll find that many complex tasks need to be broken down into simpler subtasks, each accomplished by an individual MapReduce job. Hadoop in Action


  6. The ForkJoinPool class differs from other executors, in that its threads attempt to find and execute subtasks created by other currently running tasks. Java 7 New Features Cookbook


  7. Another key underlying concept is that of "divide and conquer", where a single problem is broken into multiple individual subtasks. Hadoop Beginners Guide


  8. Unlike the Executor framework, when a task is waiting for the finalization of the subtasks it has created using the join operation, the thread that is executing that task (called worker thread) looks for other tasks that have not been executed yet and begins its execution. Java 7 Concurrency Cookbook


  9. The main object that commences processing normally waits for all subtasks to finish before proceeding. Concurrent Programming in Java Design Principles and Pattern


  10. If you need to apply a lambda many times, you often have a chance to split up the work into subtasks that execute concurrently. Java SE 8 for the Really Impatient
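
Several of the quotes above describe the fork/join model: a large task is split into subtasks, and worker threads in a ForkJoinPool execute them, stealing work from each other's deques. The hypothetical sketch below (array summing chosen purely for illustration) shows the usual split-fork-join shape.

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class SumTask extends RecursiveTask<Long> {
        private static final int THRESHOLD = 10_000;
        private final long[] data;
        private final int from, to;

        SumTask(long[] data, int from, int to) {
            this.data = data; this.from = from; this.to = to;
        }

        @Override
        protected Long compute() {
            if (to - from <= THRESHOLD) {            // small enough: compute directly
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            }
            int mid = (from + to) >>> 1;             // otherwise split into two subtasks
            SumTask left = new SumTask(data, from, mid);
            SumTask right = new SumTask(data, mid, to);
            left.fork();                             // run the left half asynchronously
            return right.compute() + left.join();    // compute the right half here, then join the left
        }

        public static void main(String[] args) {
            long[] data = new long[1_000_000];
            java.util.Arrays.fill(data, 1L);
            long total = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
            System.out.println(total);               // prints 1000000
        }
    }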



673. Matching texts for the word sla:


  1. The service level agreement (SLA) for jobs is long and can sustain recovery from machine failure. Pro Hadoop


  2. Configuring service-level authentication The purpose of service-level authentication (SLA) is to ensure that Hadoop users have the proper permission to access certain services. Hadoop Operations and Cluster Management Cookbook


  3. We phrase our SLA as the time to reach 99.9% completeness at each tier below the producer. Building LinkedIn’s Real-time Activity Data1


  4. If you take your performance metrics seriously, then the build should fail if the required SLA is not met. Jenkins The Definitive Guide


  5. Oozie Service Level Agreement (SLA) — This provides support for tracking the execution of Workflow applications. Professional Hadoop Solutions


  6. Other configuration parameters are generally used to optimize the cluster’s performance; you’ll probably tackle them while tuning the system based on your use case and SLA definitions. HBase in Action


  7. The response times of these requests should stay within specific limits to guarantee a positive user experience—also commonly referred to as a service-level agreement (SLA) HBase:The Definitive Guide


  8. Use the timeout because it is important to your SLA, not because you want to abort the execution of long running queries. Elasticsearch The Definitive Guide


  9. Primarily, these problems center on the issues of resource starvation and (lack of) SLA guarantees, as described earlier. Hadoop Operations


  10. In 2009, we started to investigate systems that would allow us to scale out, as the amount of data continued to grow and still meet our SLA with the business users. Programming Hive



674. Matching texts for the word tion:


  1. You can write various complicated queries against the INFORMA TION_SCHEMA tables, but there are two easier techniques. High Performance MySQL 3rd Edition


  2. If your program is using a connection object called Mongo or Connec tion or something else, you are using the old, default-unsafe API. MongoDB The Definitive Guide 2nd Edition


  3. This includes (i) developing applications intended to run on an tation of the Specification, provided that such applications do not themselves implement any of the Specification, and (ii) discussing the Specification with any third party; and (iii) g brief portions of the Specification in oral or written communications which discuss the tion provided that such excerpts do not in the aggregate constitute a significant portion of the tion. Java EE 7 Specification


  4. Here is an example of a filter function that would cause only documents with a collec tion value of author to be replicated: Scaling CouchDB


  5. It turns out that files() returns a FileCollec tion, which is a foundational interface for file programming in Gradle. Gradle Beyond The Basics


  6. The Configura tion instance points these classes at the cluster the code is to work against. Hadoop:The Definitive Guide 3rd Edition


  7. The region servers, and more precisely, each region, keeps a list of transactions, which are initiated with a beginTransac tion() call, and are finalized with the matching commit() call. HBase:The Definitive Guide


  8. Errors in transmission are wrapped in the Execu tionException thrown when the Future object is queried. Java EE 7 Essentials


  9. For example, to get the value of the Content-type and Content-encoding header fields of a URLConnec tion object uc, you could write: Java Network Programming 4th Edition


  10. If you have a lot of partitions and you want to see if partitions have been defined for particular partition keys, you can further restrict the command with an optional PARTI TION clause that specifies one or more of the partitions with specific values: Programming Hive



675. Matching texts for the word felix:


  1. Spring DM is tested on three of these, which appear to be the most popular because they're feature-rich, reliable, and open source: Eclipse Equinox, Apache Felix, and Knopflerfish. Spring Dynamic Modules in Action


  2. All valid Felix command activities are valid here as well. Spring Roo in Action



676. Matching texts for the word aggregators:


  1. Pregel aggregators are a mechanism for global communication, monitoring, and data. Pregel  -  A System for Large-Scale Graph Processing


  2. This can help online aggregators, like news websites, to automatically add articles to a cluster as and when they arrive. Mahout in Action


  3. The deaf and mute parameters exist to allow some gmond nodes to act as special-purpose aggregators and relays for other gmond nodes. Monitoring with Ganglia


  4. Chapter 8, Advanced and Alternate MapReduce Techniques: This chapter demonstrates how to use several advanced features of Hadoop Core: map-side joins, chain mapping, streaming, pipes, and aggregators. Pro Hadoop


  5. This type of grouping is useful for implementation of stateful bolts, such as counters, aggregators, joins, and so on. Professional Hadoop Solutions


  6. Solid arcs represent data flowing from the analysis machines to the aggregators; dashed arcs represent the aggregated data being merged, first into one file per aggregation machine and then to a single final, collated output file. Interpreting the Data Parallel Analysis with Sawzall


  7. The flat file item writer can use a custom LineAggregator to delegate aggregation to dedicated LineAggregators (one for each product subclass) Spring Batch in Action


  8. For the second issue, Cascading departs from map and reduce operations directly by introducing higher-level abstractions as alternatives: Functions, Filters, Aggregators, and Buffers. Hadoop:The Definitive Guide 3rd Edition


  9. Our system incorporates ideas from existing log aggregators and messaging systems, and is suitable for both offline and online message consumption. Kafka a Distributed Messaging System for Log Processing


  10. These first level aggregators then send the events to one of the series of agents that access the Hadoop cluster. Hadoop Beginners Guide



677. Matching texts for the word ios:


  1. Instead of relying on one massive database you can access over the Internet, you can have a copy of the data you need on your laptop, iOS, or Android mobile phone, and then synchronize the information back to your big database. Getting Started with CouchDB


  2. If it is not in RAM, there is a read IO before the write IO, a total of two IOs for one write. Monitoring with Ganglia


  3. In addition to being an expert in Java, he also develops for the iOS platform, and when the weather's nice, he goes flying. Jenkins Continuous Integration Cookbook


  4. The iOS Selenium driver runs either in the iPhone simulator or on an iPhone device itself. Testable JavaScript


  5. Chapter 7, Testing on Mobile Browsers, introduces you to testing mobile web applications with the Apple iOS and Android platform. Selenium Testing Tools Cookbook


  6. Mark is currently an independent software consultant focusing on developing mobile web and native applications on Android and iOS. Java 7 Recipes A Problem-Solution Approach


  7. A real iOS device is not needed as that simulates a device and therefore is significantly faster. Selenium 2 Testing Tools Beginner's Guide



678. Matching texts for the word irc:


  1. If not, or if the folks on IRC cannot answer your questions, you can post them to the mailing lists described next. Monitoring with Ganglia


  2. Notifications can include emails, IMs, IRC messages, SMS, and many more. Gradle in Action


  3. If you are planning to use a development release in production, ask about it first on the mailing list or IRC to get the developers’ advice. MongoDB The Definitive Guide 2nd Edition


  4. Juven wrote the section on IRC notifications in Chapter 8, Notification. Jenkins The Definitive Guide


  5. Communicating with the Cassandra developers and users through IRC and e-mail. Cassandra High Performance Cookbook


  6. I joined the mailing lists and the IRC channel and started asking questions. HBase in Action


  7. Not long after, Lars, the author of the book you are now reading, showed up on the #hbase IRC channel. HBase:The Definitive Guide


  8. Sinks In addition to the logger and file roll sinks we used previously, Flume also supports sinks for HDFS, HBase (two types), Avro (for agent chaining), null (for testing), and IRC (for an Internet Relay Chat service) Hadoop Beginners Guide


  9. It’s easy to understand the logical path if you think about an IRC chat session. Java Network Programming 4th Edition


  10. He is a contributor to the Apache Hive project and is active helping others on the Hive IRC channel. Programming Hive



679. Matching texts for the word iptables:


  1. In addition, network traffic monitoring using iptables/ULOG is supported on Linux platforms. Monitoring with Ganglia


  2. It accepts networking tasks from the queue and then performs tasks to manipulate the network (such as setting up bridging interfaces or changing iptables rules) bk-compute-adminguide-grizzly


  3. Configuration of a passwordless login failure can be caused by many reasons, for example, the configuration of the firewall (or iptables, to be more specific), SELinux, and even the SSHD server itself. Hadoop Operations and Cluster Management Cookbook


  4. In addition, it demands an understanding of Linux systems administration, such as installing servers, networking with iptables, and basic virtualization technologies. Deploying OpenStack


  5. Similar to nova-compute, it accepts networking tasks from the queue and performs tasks to manipulate the network, such as setting up bridging interfaces or changing iptables rules. bk-admin-guide-cloud-latest


  6. Using IPTables to simulate complex network scenarios in a local environment. Cassandra High Performance Cookbook


  7. Agents exist for all sorts of behaviors, such as managing running services; running Puppet; managing packages, processes, and files; and even banning IP addresses with iptables. Managing Infrastructure with Puppet


  8. We’re going to use the iptables firewall to block traffic to the primary Puppet CA worker, effectively simulating a failure of the service. Pro Puppet


  9. He decided to temporarily disable iptables while working on the new configuration: Hadoop Operations


  10. You should be familiar with concepts such as DHCP, Linux bridges, VLANs, and iptables. OpenStack Operations Guide



680. Matching texts for the word centroids:


  1. The algorithm does multiple rounds of processing and refines the centroid locations until the iteration max-limit criterion is reached or until the centroids converge to a fixed point from which they don’t move very much. Mahout in Action


  2. Then, the map task compares the information about the customer against each of the cluster's centroids, and assigns each customer to the cluster that is closest to that customer. Hadoop MapReduce Cookbook


  3. The K-means process starts with the initial placement of the K cluster centroids. Hadoop in Practice
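
The k-means flow described in these quotes — compare each point against every centroid, assign it to the nearest one, then recompute the centroid positions — fits in a few lines of plain Java. This is only an illustrative single-iteration sketch (2-D points, squared Euclidean distance), not Mahout's or Hadoop's implementation.

    import java.util.Arrays;

    public class KMeansStep {
        // One assignment + update iteration over 2-D points.
        static double[][] updateCentroids(double[][] points, double[][] centroids) {
            double[][] sums = new double[centroids.length][2];
            int[] counts = new int[centroids.length];

            for (double[] p : points) {
                int nearest = 0;
                double best = Double.MAX_VALUE;
                for (int c = 0; c < centroids.length; c++) {   // compare against each centroid
                    double dx = p[0] - centroids[c][0], dy = p[1] - centroids[c][1];
                    double d = dx * dx + dy * dy;
                    if (d < best) { best = d; nearest = c; }
                }
                sums[nearest][0] += p[0];                      // accumulate for the nearest cluster
                sums[nearest][1] += p[1];
                counts[nearest]++;
            }

            double[][] next = new double[centroids.length][2];
            for (int c = 0; c < centroids.length; c++) {       // new centroid = mean of its points
                next[c] = counts[c] == 0 ? centroids[c]
                        : new double[] { sums[c][0] / counts[c], sums[c][1] / counts[c] };
            }
            return next;
        }

        public static void main(String[] args) {
            double[][] points = { {1, 1}, {1.5, 2}, {8, 8}, {9, 9} };
            double[][] centroids = { {0, 0}, {10, 10} };
            System.out.println(Arrays.deepToString(updateCentroids(points, centroids)));
        }
    }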



681. Matching texts for the word decouple:


  1. Tip We strongly suggest that you decouple try/catch and try/finally blocks. Core Java Volume I Fundamentals 9th Edition


  2. To avoid that, actor libraries in general decouple actors from threads. Programming Concurrency on the JVM


  3. An object-oriented program contains some upcasting somewhere, because that’s how you decouple yourself from knowing about the exact type you’re working with. Thinking in Java 4th Edition


  4. We will see how we can decouple the definition and usage of a custom task and plugin into separate source files. Gradle Effective Implementation Guide


  5. Java is a core part of the Hadoop stack’s DNA, and you can’t decouple the two easily. HBase in Action


  6. Wouldn't it make sense to decouple the Java source code from the bytecode format so that one might write Java compilers for other languages that in turn can run on Java Virtual Machines? Of course it would, and it was probably argued that this would further help the spread and adoption of Java early in its design stage. Oracle JRockit The Definitive Guide


  7. In practice, it can be difficult to completely decouple the I/O, but usually some reads can be anticipated and some writes can be run asynchronously without the program requiring immediate confirmation of success. Java Performance Tuning 2nd Edition


  8. In addition to the cross-language capabilities provided by ActiveMQ, the ability to integrate such applications without the use of RPC is definitely a big benefit because messaging truly helps to decouple the applications. ActiveMQ in Action


  9. By not coding directly against members of another class, you also decouple the code, which helps to ensure that if one object changes, others that depend upon it are not adversely affected. Java 7 Recipes A Problem-Solution Approach


  10. Perhaps it suits the development process to even have a file for each entity, either to decouple them from each other or to reduce conflicts resulting from the version control and configuration management system. Pro JPA2
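
The recurring theme in these quotes is decoupling by programming against an abstraction rather than a concrete class. A tiny hypothetical Java sketch (the interface and class names are invented for illustration):

    // The caller depends only on this abstraction...
    interface Notifier {
        void send(String message);
    }

    class EmailNotifier implements Notifier {
        public void send(String message) { System.out.println("email: " + message); }
    }

    class SmsNotifier implements Notifier {
        public void send(String message) { System.out.println("sms: " + message); }
    }

    class OrderService {
        private final Notifier notifier;
        OrderService(Notifier notifier) { this.notifier = notifier; }   // implementation injected

        void placeOrder(String item) {
            // ...business logic...
            notifier.send("order placed: " + item);   // no compile-time tie to a concrete notifier
        }
    }

    public class DecoupleDemo {
        public static void main(String[] args) {
            new OrderService(new EmailNotifier()).placeOrder("book");
            new OrderService(new SmsNotifier()).placeOrder("coffee");  // swapped without touching OrderService
        }
    }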



682. Matching texts for the word msg:


  1. Write a protocol message both to the network socket and to // the screen protected void writeMsg(String msg) throws Exception; Java Network Programming and Distributed Computing


  2. Suppose your summary table is called msg_per_hr and is defined as follows: High Performance MySQL 3rd Edition


  3. Rule msg: If we see a PASSWORD we should flag. Jenkins Continuous Integration Cookbook


  4. Finally, you may want to let the IRC plugin use the /notice command instead of the default /msg command. Jenkins The Definitive Guide


  5. You have several logging levels, like error, info, and warning, and at the same time your application is separated into modules like user-profile, image-gallery, msg-inbox, and so forth. RabbitMQ in Action


  6. To do this, we define a member instance msg just for messaging. Hadoop RealWorld Solutions Cookbook


  7. As shown in the figure, mapper task will emit all message sizes under the key msgSize, and they are all sent to a one-reducer job. Hadoop MapReduce Cookbook


  8. Member has encountered a Fatal Error; the errMsg field in the members array for this server should show more details about the problem. The Definitive Guide to MongoDB


  9. Also, when discussing invocations of the form obj.msg(arg), the recipient (that is, the object bound to variable obj) is called the target object. Concurrent Programming in Java Design Principles and Pattern


  10. Do something, in this case the IN parameter value is assigned to the OUT parameter msg :=text; end; Java 7 Recipes A Problem-Solution Approach



683. Matching texts for the word compacting:


  1. We will start by manually flushing and compacting regions, and then split the regions manually. HBase Administration Cookbook


  2. Less flushing and therefore less compacting should make the VFS Cache (Virtual File System Cache) more effective. Cassandra High Performance Cookbook


  3. The main area of the CouchDB interface is where you will spend most of your time interacting with CouchDB databases—creating databases and documents, uploading attachments, modifying documents, compacting databases, and deleting databases. Beginning CouchDB


  4. The way to get good write performance is by not flushing, compacting, or splitting too often because the I/O load goes up during that time, slowing the system. HBase in Action


  5. Compaction metrics When the region server has to perform the asynchronous (or manually invoked) housekeeping task of compacting the storage files, it reports its status in a different metric. HBase:The Definitive Guide


  6. The solution was to stop unconditionally compacting files above a certain size and skip compactions if enough candidate files could not be found. Apache Hadoop Goes Realtime at Facebook


  7. Compacting them prevents the database from having to perform a seek to pull the data from each SSTable. Cassandra The Definitive Guide


  8. If it weren’t for invocation chaining, these six methods could have been replaced by one method in the common Buffer superclass.) Compacting shifts any remaining data in the buffer to the start of the buffer, freeing up more space for elements. Java Network Programming 4th Edition


  9. Instead, there is usually a series of attempts to free up space by reclaiming objects, compacting objects, defragmenting the heap, and so on. Java Performance Tuning 2nd Edition


  10. Each concrete buffer implementation for a primitive type 0 exports a method for compacting buffers: Fundamental Networking in Java



684. Matching texts for the word unapply:


  1. The compiler adds a method called unapply, which allows the class name to be used as an extractor for pattern matching (more on this later) Scala in Action


  2. An extractor in Scala is an object that has a method called unapply as one of its members. Programming in Scala 2nd Edition


  3. The extractor has one method named unapply( ) that accepts the value. Programming Scala Tackle Multi-Core Complexity on the Java Virtual Machine


  4. In fact, the information returned by unapply might be a transformation of the actual information in the type. Programming Scala


  5. Here method unapply implicitly introduces case classes since Some and None are a case class and a case object, respectively. Steps in Scala An Introduction to Object-Functional Programming



685. Matching texts for the word applicative:


  1. Instead of relying on a flatMap operation, a function can curried and values fed into it in an applicative style. Scala in Depth


  2. Since all monads are applicative functors, and we've already implemented Monad instances for a great many data types, we don't have to write Applicative instances for them. Functional Programming in Scala



686. Matching texts for the word rabbitmqctl:


  1. You can check to make sure this is running with the rabbitmqctl: Deploying OpenStack


  2. If you installed from an RPM or Ubuntu/Debian package, you may need to run rabbitmqctl as root. RabbitMQ in Action


  3. The operator uses the rabbitmqctl command to add the mcollective account with a password. Pro Puppet


  4. RabbitMQ Web Management Interface or rabbitmqctl Aside from connection failures, RabbitMQ log files are generally not useful for debugging OpenStack related issues. OpenStack Operations Guide



687. Matching texts for the word finalization:


  1. This appendix details changes to Section 12.6 of the Java language specification, which deals with finalization. JSR-133 Java Memory Model and Thread


  2. Direct buffers Can sometimes achieve zero-copy transfer But have setup and finalization overhead Best for applications with long-lived connections. Scalable IO in Java


  3. The package java.lang.ref describes weak references, which interact with garbage collection and finalization. The Java Language Specification Java SE 7 Edition


  4. Phantom references Phantom references are the preferred way of implementing finalization. Oracle JRockit The Definitive Guide


  5. Unfortunately, the finalizer thread was running at a lower priority than another application thread, so objects weren’t getting finalized at the rate they became eligible for finalization. Effective Java 2nd Edition


  6. While a constructor method performs initialization for an object, a finalizer method can be used to perform cleanup or “finalization” for the object. Java in a Nutshell 5th Edition


  7. There is no way to avoid this overhead apart from minimizing the use of finalization methods in the application. Java Performance Tuning 2nd Edition


  8. One peculiarity of this mechanism is that Thread has to check if it has been interrupted or not, and it can decide if it responds to the finalization request or not. Java 7 Concurrency Cookbook


  9. When you have a choice between them, it is almost always preferable to perform cleanup in finally clauses rather than relying on finalization (i.e., overriding Object.finalize) Concurrent Programming in Java Design Principles and Pattern


  10. Finalization of a servlet can be very important, especially if the servlet is using some resources that may lead to a memory leak, making use of a reusable resource such as a database connection, or in order to persist some values for another session. Java 7 Recipes A Problem-Solution Approach
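
As the quotes note, cleanup in finally blocks (or try-with-resources) is almost always preferable to relying on finalization, whose timing is left to the garbage collector. A minimal sketch contrasting the two; note that finalize() is deprecated in recent Java releases and is shown here only to illustrate the mechanism.

    import java.io.FileWriter;
    import java.io.IOException;

    public class CleanupDemo {
        // Discouraged: cleanup whose timing depends on garbage collection.
        static class LegacyResource {
            @Override
            protected void finalize() {
                System.out.println("finalize() ran - eventually, maybe");
            }
        }

        public static void main(String[] args) throws IOException {
            // Preferred: deterministic cleanup with try-with-resources (or try/finally).
            try (FileWriter out = new FileWriter("demo.txt")) {
                out.write("hello");
            } // out.close() is guaranteed to run here

            new LegacyResource();
            System.gc();   // only a hint; finalization may or may not happen before exit
        }
    }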



688. Matching texts for the word semaphores:


  1. Semaphores were invented by Edsger Dijkstra in 1968, for use as a synchronization primitive. Core Java Volume I Fundamentals 9th Edition


  2. Internal concurrency issues Contention for resources such as semaphores, access to pages in the InnoDB buffer pool, and so on. High Performance MySQL 3rd Edition


  3. Generally, counting semaphores are used to limit the amount of resources that can be used at one time. Redis in Action


  4. Threading encourages shared state concurrency, and it’s hard to make programs run in parallel because of locks, semaphores, and dependencies between threads. Scala in Action


  5. Class exposes synchronization and semaphores in its public interface. The Findbugs explanation on this rule is pretty clear: Sonar Code Quality Testing Essentials


  6. The sleeping barber problem is usually solved with semaphores and mutexes, but we’ve got better tools at our disposal. Programming Scala


  7. Not only does it have explicit constructs that can be used for threads, locks, and semaphores but it was also designed so that every object in a Java program can conveniently be used as the limiting resource, or monitor object, constraining access to code in a critical section. Oracle JRockit The Definitive Guide


  8. Semaphore The Semaphore class models semaphores, a traditional concurrent programming construct. Java in a Nutshell 5th Edition


  9. Semaphores are a generic synchronization mechanism that you can use to protect any critical section in any problem. Java 7 Concurrency Cookbook


  10. There are other ways to describe semaphores as well, including those based on their original motivating metaphor: the signaling flags used to prevent railroad collisions. Concurrent Programming in Java Design Principles and Pattern
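
A counting semaphore limits how many threads can use a resource at once, as several of the quotes above describe. A small hypothetical Java sketch using java.util.concurrent.Semaphore (the permit count and task loop are illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    public class SemaphoreDemo {
        public static void main(String[] args) {
            Semaphore permits = new Semaphore(3);          // at most 3 threads in the section at once
            ExecutorService pool = Executors.newFixedThreadPool(10);

            for (int i = 0; i < 10; i++) {
                final int id = i;
                pool.submit(() -> {
                    try {
                        permits.acquire();                 // blocks until a permit is free
                        System.out.println("task " + id + " using the limited resource");
                        Thread.sleep(200);                 // simulate work
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        permits.release();                 // always give the permit back
                    }
                });
            }
            pool.shutdown();
        }
    }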



689. Matching texts for the word bayes:


  1. Mahout classification algorithms include naive Bayes, complementary naive Bayes, stochastic gradient descent (SGD), and random forests. Mahout in Action


  2. In this recipe, we will use a naive Bayes classifier from Apache Mahout to determine if a set of terms found in a movie review mean the movie had a negative or positive reception. Hadoop RealWorld Solutions Cookbook


  3. Classification using Naive Bayes Classifier A classifier assigns inputs into one of the N classes based on some properties (features) of inputs. Hadoop MapReduce Cookbook


  4. Bayes theorem comes into play once each word has had a spam and ham probability calculated, and combines them together to form the overall email probability of ham or spam. Hadoop in Practice
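
The Hadoop in Practice quote sketches the spam-filter use of Bayes' theorem: each word gets a spam probability, and the per-word probabilities are combined into one email-level probability. Below is a hedged, heavily simplified Java sketch of that combination step under the naive independence assumption (illustrative probabilities, no smoothing, not any book's actual implementation):

    public class NaiveBayesCombine {
        // Combine independent per-word spam probabilities into one score:
        // P(spam) = prod(p_i) / (prod(p_i) + prod(1 - p_i))
        static double combine(double[] wordSpamProbs) {
            double spam = 1.0, ham = 1.0;
            for (double p : wordSpamProbs) {
                spam *= p;
                ham *= (1.0 - p);
            }
            return spam / (spam + ham);
        }

        public static void main(String[] args) {
            // Illustrative per-word probabilities, e.g. "viagra"=0.97, "meeting"=0.10, "free"=0.80
            double[] probs = { 0.97, 0.10, 0.80 };
            System.out.printf("P(spam) = %.3f%n", combine(probs));   // about 0.935
        }
    }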



690. Matching texts for the word synchronously:


  1. In a traditional SQL database hosting thousands or millions of users, using a synchronously replicated log would risk interruptions of widespread impact [11] Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. Journaled file systems are not needed for the NameNode, as the critical data is written synchronously. Pro Hadoop


  3. Activity tasks participating in SWF can be executed synchronously or asynchronously. Professional Hadoop Solutions


  4. This should be quick: writes should be visible as soon as possible, particularly when multiple writers or mixed readers and writers force clients to operate synchronously. Ceph A Scalable, High-Performance Distributed File System


  5. Using this code will result in the index file being created synchronously: Hadoop in Practice


  6. For each binding, there is a choice between performing operations synchronously or asynchronously. Hadoop:The Definitive Guide 3rd Edition


  7. Another problem is that the NameNode synchronously updates the BackupNode on every transaction, thus the reliability of the entire system could now be lower than the reliability of the standalone NameNode. Apache Hadoop Goes Realtime at Facebook


  8. To achieve a strict consistency, all update operations will be performed synchronously, meaning that they must block, locking all replicas until the operation is complete, and forcing competing clients to wait. Cassandra The Definitive Guide


  9. Client applications written against the Hadoop APIs can submit jobs either synchronously and block for the result, or asynchronously and poll the master for job status. Hadoop Operations


  10. The MessageConsumer can consume messages either synchronously by using one of the receive() methods or asynchronously by providing a MessageListener implementation to the consumer. ActiveMQ in Action
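
The last quote contrasts synchronous consumption (a blocking receive) with asynchronous consumption (a listener callback). The same distinction can be shown with plain java.util.concurrent — a hypothetical sketch, not the ActiveMQ API itself:

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutionException;

    public class SyncVsAsync {
        static String fetchMessage() {
            try { Thread.sleep(300); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return "hello";
        }

        public static void main(String[] args) throws ExecutionException, InterruptedException {
            // Synchronous: block the calling thread until the result is available.
            CompletableFuture<String> f1 = CompletableFuture.supplyAsync(SyncVsAsync::fetchMessage);
            String result = f1.get();                       // blocks here
            System.out.println("sync got: " + result);

            // Asynchronous: register a callback and carry on; no blocking on this thread.
            CompletableFuture<String> f2 = CompletableFuture.supplyAsync(SyncVsAsync::fetchMessage);
            f2.thenAccept(msg -> System.out.println("async callback got: " + msg));
            System.out.println("main thread keeps going...");
            f2.join();                                      // only so the demo doesn't exit early
        }
    }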



691. Matching texts for the word unidirectional:


  1. Ganglia’s lightweight protocol, which functions reliably as unidirectional UDP traffic, has proven itself to be ready for business at this level, too, as demonstrated by its presence as an embedded agent in the Lumicall app for Android. Monitoring with Ganglia


  2. Channel adapters and gateways are alike in that they’re both interfaces to external systems, but not alike in that channel adapters are unidirectional (fire-and-forget) while gateways support request/reply communications. Spring in Practice


  3. In the graph, however, it is implemented using a unidirectional relationship. Graph Databases


  4. A unidirectional relationship requires the owning side to specify the annotation. Java EE 7 Essentials


  5. In this case, you must break it into two or more runs of unidirectional text. Java in a Nutshell 5th Edition


  6. The master acts as the authoritative source of the data, and operates in a unidirectional relationship with the slave nodes, which must synchronize their copies. Cassandra The Definitive Guide


  7. Depending on the type of graph, the edges may be bidirectional or unidirectional and may have weights associated with them. Hadoop Beginners Guide


  8. By default, a network only operates in a unidirectional manner and logically pushes messages across its network connection, as shown in figure 10.4 ActiveMQ in Action


  9. But the listed constraints serve to eliminate large classes of safety and liveness problems while also satisfying common reusability and performance goals: unidirectional flow avoids deadlock, connectivity management avoids unwanted interleavings across different flows, transfer protocols avoid safety problems due to inadvertent sharing without the need for extensive dynamic synchronization, and interface conformance assures type safety while still permitting interoperability among components. Concurrent Programming in Java Design Principles and Pattern


  10. If only one entity has a pointer to the other, the relationship is said to be unidirectional. Pro JPA2
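
In the JPA sense used in several of these quotes, a relationship is unidirectional when only one entity holds a pointer to the other. A minimal hedged JPA sketch (entity and column names are made up; whether the annotations live under javax.persistence or jakarta.persistence depends on your JPA version):

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.ManyToOne;

    @Entity
    public class Employee {
        @Id
        private long id;

        // Owning side: Employee points to Department...
        @ManyToOne
        @JoinColumn(name = "DEPT_ID")
        private Department department;
    }

    @Entity
    class Department {
        @Id
        private long id;
        // ...but Department holds no collection of employees,
        // so the relationship is unidirectional.
    }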



692. Matching texts for the word jcr:


  1. In it, we continue our exploration of NoSQL data stores, this time involving the Java Content Repository (JCR) and MongoDB/Spring Data MongoDB. Spring in Practice


  2. We’ll start by briefly describing the key features of Apache Jackrabbit and the Content Repository for Java technology (JCR) API (http://www.jcp.org/en/jsr/ detail?id=170) that it implements. Tika in Action



693. Matching texts for the word joe:


  1. Joe, you have a bright future Bob, you have a bright future Bob and Fred are keeping their fingers crossed Fred, you have a bright future Bob and Fred are happy campers. Programming Concurrency on the JVM


  2. Joe’s quote is the one that we find to be the most accurate description of scaling. CouchDB The Definitive Guide


  3. Tom and Joe work as software developers for Acme Enterprises, a startup company that offers a free online service for finding the best deals in your area. Gradle in Action


  4. It returns the documents that have the joe value in the name field and the 1981 value in the year field. Mastering ElasticSearch


  5. I am grateful to my company, Novetta Solutions, and I would especially like to thank Joe Pantella and the rest of the Novetta executive team for being supportive of me in writing this book. Professional Hadoop Solutions


  6. Joe Lennon is a web applications and database developer from Cork, Ireland. Beginning CouchDB


  7. Your goal is to find the shortest path between Dee and Joe. Hadoop in Practice


  8. That variable has a type Employee.class, a name "joe", and a value—the value of the object reference joe. Core Java Volume II Advanced Features 9th Edition


  9. The cluster administrator pointed out that he could perform an HDFS transfer with the command hadoop -put some_file.log /user/joe and with some luck, hit the issue. Hadoop Operations


  10. That is, when I buy a book from Amazon using its secure server, it has to prove to my browser’s satisfaction that it is indeed Amazon and not Joe Random Hacker. Java Network Programming 4th Edition



694. Matching texts for the word cloneable:


  1. For example, the Java programming language has an important interface built into it, called Cloneable. Core Java Volume I Fundamentals 9th Edition


  2. Instead of creating a new bootstrap for each of these channels, Netty has made the AbstractBootstrap cloneable. Netty in Action MEAP V08


  3. If a digest implementation does not override this method and does not implement the Cloneable interface, the getDigestLength( ) method will not operate correctly. Java Security 2nd Edition


  4. In this regard, it is similar to the Cloneable interface that we discussed in Volume I, Chapter 6 Core Java Volume II Advanced Features 9th Edition


  5. You cannot, without resorting to reflection (Item 53), invoke the clone method on an object merely because it implements Cloneable. Effective Java 2nd Edition


  6. Since arrays extend Object and implement the Cloneable and Serializable interfaces, any array type can be widened to any of these three types. Java in a Nutshell 5th Edition


  7. You can create new instances of classes that implement the Cloneable interface using the clone( ) method. Java Performance Tuning 2nd Edition


  8. Implement the Cloneable interface within the class that you want to clone; then call that object’s clone method to make a copy of it. Java 7 Recipes A Problem-Solution Approach
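
As the Effective Java and Core Java quotes explain, Cloneable is a marker interface: a class opts in by implementing it and overriding clone(), usually delegating to super.clone(). A minimal illustrative sketch (class name invented):

    public class Point implements Cloneable {
        private int x, y;

        public Point(int x, int y) { this.x = x; this.y = y; }

        @Override
        public Point clone() {
            try {
                return (Point) super.clone();   // field-by-field copy; fine for primitive fields
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e);    // cannot happen: we implement Cloneable
            }
        }

        public static void main(String[] args) {
            Point p = new Point(1, 2);
            Point copy = p.clone();
            System.out.println(p != copy);      // true: a distinct object with the same state
        }
    }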



695. Matching texts for the word vmstat:


  1. Run the vmstat command to show the virtual memory statistics: HBase Administration Cookbook


  2. The output of vmstat confirmed what we saw in iostat and showed that the CPUs were basically idle except for some I/O wait during the spike of writes (ranging up to 9% wait) High Performance MySQL 3rd Edition


  3. Meanwhile, you can monitor what is happening on the server side using your favorite statistics generator (vmstat, iostat, uptime, ps, etc.), using nodeload to graph and persist the values. Testable JavaScript


  4. Solution Use the Linux tool vmstat to observe the CPU context switches. Hadoop in Practice


  5. The complete battery usually includes OS utilities such as top, vmstat, sar, iostat, and netstat, but also Hadoop-specific tools such as hadoop dfsadmin, and hadoop fsck. Hadoop Operations


  6. System performance measuring utilities (vmstat, iostat, sar, top on Unix; the task manager and performance monitor on Windows) Java Performance Tuning 2nd Edition


  7. Linux and Solaris have vmstat, which shows combined CPU utilization across all virtual processors. Java Performance


  8. You can monitor your operating system by yourself by using different operating system commands such as vmstat, dstat, and iostat. Apache Solr 4 Cookbook



696. Matching texts for the word tdd:


  1. There has been a big push for developers to embrace test-driven development (TDD) Scala in Depth


  2. Using TDD, you write tests that fail first, and then build the functionality needed to pass the tests. Jenkins Continuous Integration Cookbook


  3. Your first steps will be getting started with practices like test-driven development (TDD) and continuous integration for your Scala project. Scala in Action


  4. One of the most efficient ways to write high quality tests is to write them first, using techniques such as Test-Driven Development (TDD) or Behavior-Driven Development (BDD) Jenkins The Definitive Guide


  5. That’s why code testing is becoming an integral part of code development, and programmers are moving more and more toward testdriven development (TDD) Professional Hadoop Solutions


  6. These days, the goals of Design by Contract are largely met by Test-Driven Development (TDD) Programming Scala


  7. When it comes to writing Java code, I’m a big proponent of test-driven development (TDD),1 and with MapReduce things are no different. Hadoop in Practice


  8. Introduction Behavior-driven Development (BDD) is an agile software development practice that enhances the paradigm of Test Driven Development (TDD) and acceptance tests, and encourages the collaboration between developers, quality assurance, domain experts, and stakeholders. Selenium Testing Tools Cookbook


  9. For developers who use test-driven development (TDD), in which tests are written before code and the full unit test suite is executed after every development iteration (which can be as small as a change to a single method), any kind of interaction with the application server is a problem. Pro JPA2


  10. Take a test-driven design (TDD) approach to software development utilizing the Yii testing framework. Selenium 2 Testing Tools Beginner's Guide
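
Test-driven development, as described in the quotes above, means writing a failing test before the code that makes it pass. A tiny hypothetical JUnit 4 example (assumes junit is on the classpath; the class names are invented):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class CalculatorTest {
        // Step 1: write the test first - it fails (or does not even compile)
        // because Calculator.add does not exist yet.
        @Test
        public void addsTwoNumbers() {
            assertEquals(5, new Calculator().add(2, 3));
        }
    }

    // Step 2: write just enough production code to make the test pass,
    // then refactor with the test as a safety net.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }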



697. Matching texts for the word iterated:


  1. Internal iteration— In contrast to collections, which are iterated explicitly using an iterator, stream operations do the iteration behind the scenes for you. Java 8 in Action Lambdas, Streams and Functional-style Programming


  2. The purpose of this sort is to group equivalent keys together so that their values can be iterated over easily in the reduce task. MapReduce Design Patterns


  3. This content is returned by HBase in the form of a navigable map, which can be iterated to get the values of the individual columns with their names. Professional Hadoop Solutions


  4. Since the views are themselves Collections, the views can be iterated. Data Structures and Algorithm Analysis in Java 3rd Edition


  5. Iterative process where all the unvisited nodes are iterated, and the distance from the start node is. Hadoop in Practice


  6. Now it becomes clear why we had to preserve all URLs in SelectorEntry records, because now we can extract them from the iterated values. Hadoop:The Definitive Guide 3rd Edition


  7. During the same time frame, we have iterated rapidly on the core software (HBase/HDFS) as well as the application logic running against HBase. Apache Hadoop Goes Realtime at Facebook


  8. In Java 5.0 and later, arrays can also be iterated with the for/in loop. Java in a Nutshell 5th Edition


  9. This technique works because the termination test iterated many times can have a higher cost than producing and catching an Exception once. Java Performance Tuning 2nd Edition


  10. In the second example, the length() method is used to find the number of characters in the String so that they can be iterated over using the charAt() method. Java 7 Recipes A Problem-Solution Approach
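
The first quote contrasts external iteration (you drive an Iterator yourself) with internal iteration (the collection or stream drives it for you). Both styles in a short illustrative Java snippet:

    import java.util.Arrays;
    import java.util.Iterator;
    import java.util.List;

    public class IterationStyles {
        public static void main(String[] args) {
            List<String> words = Arrays.asList("fork", "join", "map", "reduce");

            // External iteration: the caller pulls elements through an Iterator.
            Iterator<String> it = words.iterator();
            while (it.hasNext()) {
                System.out.println(it.next());
            }

            // Internal iteration: the stream iterates behind the scenes.
            words.stream()
                 .filter(w -> w.length() > 3)
                 .forEach(System.out::println);
        }
    }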



698. Matching texts for the word fsimage:


  1. Web encryption — To secure all communications using HTTP (Hadoop web user interfaces, MapReduce Shuffle, and FSImage operations), the Hadoop web server must be configured to use SSL, setting up the proper keystores. Professional Hadoop Solutions


  2. The fsimage file is a persistent checkpoint of the filesystem metadata. Hadoop:The Definitive Guide 3rd Edition


  3. At startup time, the HDFS NameNode reads filesystem metadata from a file called the fsimage file. Apache Hadoop Goes Realtime at Facebook


  4. The single most important piece of data in the cluster – fsimage The NameNode process stores two data structures to disk, the fsimage file and the edits log of changes to it. Hadoop Beginners Guide


  5. Managing Filesystem Metadata The namenode stores its filesystem metadata on local filesystem disks in a few different files, the two most important of which are fsimage and edits. Hadoop Operations



699. Matching texts for the word committer:


  1. The blog post http://bit.ly/flumePerfPt1 from Flume committer Mike Percy should give you some good starting points. Apache Flume Distributed Log Collection for Hadoop


  2. In addition to being a committer on the Apache HTTPD and APR projects, Brad is also a developer as well as one of the administrators of the Ganglia project. Monitoring with Ganglia


  3. As of Hadoop 0.19.0, this directory is a function of the OutputCommitter the job is using. Pro Hadoop


  4. If this is unacceptable, consider using a custom OutputCommitter to write temporary output to the file system. MapReduce Design Patterns


  5. Edward Capriolo is a member of the Apache Software Foundation and a committer for the Hadoop-Hive project. Cassandra High Performance Cookbook


  6. Srinath is also a committer of Apache open source projects Axis, Axis2, and Geronimo. Hadoop MapReduce Cookbook


  7. Tasks and jobs can fail, they can be restarted, and they can also be speculatively executed.45 To allow OutputFormats to correctly handle these scenarios, MapReduce has the notion of an OutputCommitter, which is a mechanism by which MapReduce invokes a callback when an individual task as well as the overall job have completed. Hadoop in Practice


  8. In short order, Tom earned the role of Hadoop committer and soon thereafter became a member of the Hadoop Project Management Committee. Hadoop:The Definitive Guide 3rd Edition


  9. This term was coined by Cassandra committer Eric Evans of Rackspace, but he has since distanced himself from the term in preference of the term “Big Data” to highlight the fact that this family of nonrelational databases is defined not by what they’re not (implementations of SQL), but rather by what they do (handle huge data loads) Cassandra The Definitive Guide


  10. Bruce Snyder is a co-founder of Apache Geronimo, a committer for ActiveMQ, Camel, and ServiceMix, and a member of various JCP expert groups. ActiveMQ in Action



700. Matching texts for the word wsdl:


  1. To explore the more formal SOAP-based web service API that’s available, access http://localhost:8080/RecommenderService.jws?wsdl to see the WSDL (Web Services Definition Language) file that defines the input and output of this web service. Mahout in Action


  2. Due to a bug in vSphere ESX 4.1 default wsdl. bk-compute-adminguide-grizzly


  3. The tool can read Web Service Definition Language (WSDL) files publicized by web services, using the information to generate the skeleton for functional tests. Jenkins Continuous Integration Cookbook


  4. A plug-in that creates WSDL files from an annotated Java web service implementation should not include its own code for scanning for annotations and generating content in a specialized XML vocabulary, but should instead declare a dependency on an existing library to do that work, and provide a mechanism for that library to be fetched from an online repository if it is not already present on the build system. Gradle Beyond The Basics


  5. Spring WS’s contract-first approach The contract-first approach used by Spring WS consists of first implementing web service contracts using the Web Services Description Language (WSDL) independent of the classes used to implement the services. Spring Dynamic Modules in Action


  6. The JAX-WS runtime publishes the web service and its capabilities using Web Services Description Language (WSDL) Java EE 7 Essentials


  7. These descriptions are formatted in a special language, called Interface Definition Language (IDL) for CORBA and Web Services Description Language (WSDL) for web services. Core Java Volume II Advanced Features 9th Edition


  8. As with all distributed computing, caching is especially important and should be applied to data and metadata such as WSDL (Web Services Description Language) files. Java Performance Tuning 2nd Edition


  9. It is a platform and programming language neutral system that exposes standardized interfaces accessible over a set of standard Internet communication protocols; typically described using Web Services Description Language (WSDL) Java Performance


  10. Unlike other XML-based remoting technologies (such as SOAP and XML-RPC), Burlap’s message structure is as simple as possible and doesn’t require an external definition language (such as WSDL or IDL) Spring in Action 4th Edition



701. Matching texts for the word uncompressed:


  1. Data stream If you do not want to output a SequenceFile because your data doesn't have a natural key, you can use a DataStream to output only the value, uncompressed. Apache Flume Distributed Log Collection for Hadoop


  2. The uncompressed archive file will contain the following files and folders: Hadoop Operations and Cluster Management Cookbook


  3. In uncompressed, nonreplicated form the datasets occupy about a petabyte of space. Dremel - Interactive Analysis of WebScale Datasets


  4. The uncompressed dataset is tens of terabytes, which is small compared to many NoSQL instances, but was large enough to cause difficulties with sharded MySQL. Spanner - Google’s Globally-Distributed Database


  5. This file format is shown, along with uncompressed SequenceFiles, in figure 3.9 Hadoop in Practice


  6. Consider an uncompressed file stored in HDFS whose size is 1 GB. Hadoop:The Definitive Guide 3rd Edition


  7. All the input files we’ve seen so far are uncompressed text files where each record is a line. Hadoop in Action


  8. What just happened? After downloading the latest stable Hive release, we copied it to the desired location and uncompressed the archive file. Hadoop Beginners Guide


  9. Interestingly, the memory usage of the uncompressed, basic data-structures is about the same as the memory usage of Dremel’s compressed format. PowerDrill - Processing a Trillion Cells per Mouse Click


  10. The search process is challenging because of the large amount of data: The raw documents comprise several tens of terabytes of uncompressed data, and the inverted index resulting from this raw data is itself many terabytes of data. Web search for a planet the google cluster architecture



702. Matching texts for the word centos:


  1. You'd be hard pressed to find such a company using the bleeding-edge Linux kernel rather than something like Red Hat Enterprise Linux (RHEL), CentOS, Ubuntu LTS, or any of the other distributions that aim for stability and compatibility. Apache Flume Distributed Log Collection for Hadoop


  2. If you are using a Linux distribution other than CentOS, such as Debian, please refer to corresponding documentation. Hadoop Operations and Cluster Management Cookbook


  3. There are also excellent native packages for Mac OS X and most of the major Linux distributions, including Ubuntu, RedHat (including CentOS and Fedora) and OpenSolaris. Jenkins The Definitive Guide


  4. Binary installers are available for Ubuntu, Fedora, CentOS, FreeBSD, and Mac OS X systems through each system’s individual package manager. Beginning CouchDB


  5. There have been successful deployments on both Red Hat-based (Red Hat Enterprise Linux [RHEL], CentOS) and Debian-based (Ubuntu and so on) flavors of Linux. HBase in Action


  6. Let’s look at the instructions for installation on a RedHat-based Linux system (in this case you’ll use CentOS) Hadoop in Practice


  7. Here is a short list of operating systems that are commonly found as a basis for HBase clusters: CentOS. HBase:The Definitive Guide


  8. Coming back to installing Kafka, as a first step, we need to download the available stable/beta release (all the commands are tested on CentOS 5.5 OS and may differ on other kernel-based OS) Apache Kafka


  9. A significant number of production clusters run on RedHat Enterprise Linux or its freely available sister, CentOS. Hadoop Operations


  10. If you happen to be running Debian, Ubuntu, CentOS, or Fedora, you’ll always have access to the latest versions. MongoDB in Action



703. Matching texts for the word telnet:


  1. We'll use the nc command (you can use telnet or something similar) to send the String "Hello World" and click on  to mark the end of the event: Apache Flume Distributed Log Collection for Hadoop


  2. On top of IP, we have other common protocols, such as TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) (this chapter’s applications exploit these protocols), and on top of these we have even more, including HTTP, TELNET, DNS, and so on. Pro Java 7 NIO.2


  3. To test whether gmond is operational on a given host, telnet to gmond’s TCP port: Monitoring with Ganglia


  4. Telnet is a service that allows users to open a remote-terminal session to a specific machine. Java Network Programming and Distributed Computing


  5. Install telnet with the following command: sudo yum install telnet. Hadoop Operations and Cluster Management Cookbook


  6. The tsd process exposes a simple telnet-like protocol for receiving observations. HBase in Action


  7. Before writing our first network program, let’s discuss a great debugging tool for network programming that you already have —namely, telnet. Core Java Volume II Advanced Features 9th Edition


  8. In another window, we open a telnet connection to port 3000 on the local machine using the curl utility. Hadoop Beginners Guide


  9. For instance, FTP packets that seem to contain Telnet data can be rejected. Java Network Programming 4th Edition


  10. It’s even possible to communicate with the broker through the telnet session using STOMP, as we’ll see in a moment. ActiveMQ in Action
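
Several quotes use telnet simply as a way to open a raw TCP connection and poke at a text protocol. The same debugging trick in a few lines of hypothetical Java (the host and port below are placeholders, e.g. a gmond-style service that dumps text and closes the connection):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class MiniTelnet {
        public static void main(String[] args) throws Exception {
            String host = "localhost";   // placeholder host
            int port = 8649;             // placeholder port

            try (Socket socket = new Socket(host, port);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {

                out.println("");                     // many text protocols answer on connect or a blank line
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);        // dump whatever the service sends back
                }
            }
        }
    }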



704. Matching texts for the word eden:


  1. Usually, objects are allocated in the Eden space of the young generation. HBase Administration Cookbook


  2. Eden space: It is the part of the heap memory where the JVM initially allocates most of the object types. Mastering ElasticSearch


  3. Older objects still have some reference, and have therefore survived a few garbage collections, so the Survivor Ratio is the ratio of eden space to survivor space in the young object part of the heap. Cassandra The Definitive Guide


  4. The "new" space (also called "Eden" or "the young generation") is the heap space where new objects are created. Java Performance Tuning 2nd Edition


  5. The eden is almost always empty after a minor garbage collection. Java Performance



705. Matching texts for the word authenticating:


  1. We can be sure that the user is who he says he is because he has been authenticated against one of the authenticating schemes in CouchDB. CouchDB The Definitive Guide


  2. It achieves this by authenticating on the underlying Remote Procedure Calls (RPC) Hadoop Operations and Cluster Management Cookbook


  3. As the central arbiter of the compute cluster, the JT was also responsible for admission control, tracking the liveness of TTs (to re-execute running tasks or tasks whose output becomes unavailable), launching tasks speculatively to route around slow nodes, reporting job status to users through a web server, recording audit logs and aggregate statistics, authenticating users and many other functions; each of these limited its scalability. Apache Hadoop YARN Yet Another Resource Negotiator


  4. Tasks of the job also use the Delegation token when authenticating to NameNode, and they utilize Block Access tokens to enable them to perform the necessary operations on the data block. Professional Hadoop Solutions


  5. The container may authenticate the user e application is started, or it may use lazy authentication, authenticating when a protected resource is accessed. Java EE 7 Specification


  6. We can get one by authenticating to the KDC, using kinit: Hadoop:The Definitive Guide 3rd Edition


  7. Providing Your Own Authentication You can provide your own method of authenticating to Cassandra if you want to make special requirements, such as a Kerberos ticket or encryption, or if you want to store passwords in a different location, such as an LDAP directory. Cassandra The Definitive Guide


  8. It never makes sense for a system to support authorization without first authenticating users; a person could simply lie about who they are to gain privileges they wouldn’t otherwise have. Hadoop Operations


  9. Earlier in this chapter, we described ActiveMQ plug-ins used to secure the broker by authenticating the clients and authorizing the access to destinations. ActiveMQ in Action


  10. Unless you have enabled a non-authenticating cipher suite,8 the server will authenticate itself to the client during the handshake. Fundamental Networking in Java



706. Matching texts for the word gnu:


  1. Debian GNU/Linux You can install the CouchDB package by running: CouchDB The Definitive Guide


  2. Generate GNU Privacy Guard (GPG) signatures for all artifacts you wish to publish (shown in figure 14.7 with the file extension .asc) Gradle in Action


  3. Note that you will need to have a GNU C/C++ compiler collection installed to compile the protocol buffer source. Hadoop RealWorld Solutions Cookbook


  4. Soon after IBM got involved, CouchDB’s license was changed to use the Apache License rather than the restrictive and cumbersome GNU General Public License (GPL) Beginning CouchDB


  5. Plotting the Hadoop results using GNU Plot Although Hadoop jobs can generate interesting analytics, making sense of those results and getting a detailed understanding about the data often require us to see the overall trends in the data. Hadoop MapReduce Cookbook


  6. The comparison order is defined by a specification that is like the one used for GNU sort. Hadoop:The Definitive Guide 3rd Edition


  7. For example, with the Gnu C compiler on Linux, use these commands: Core Java Volume II Advanced Features 9th Edition


  8. If you use a Windows system, you might use Notepad or WordPad, if you don’t have a more specialized programmer’s editor (versions of GNU Emacs, for example, are available for Windows) Java in a Nutshell 5th Edition


  9. First we’ll use GNU Wget (http://mng.bz/DMf6), a popular tool for retrieving files using HTTP, to subscribe to the desired destination: ActiveMQ in Action


  10. Solution 2 Analyze your program’s memory consumption by using VisualVM (an external GNU Open Source Java profiler and visualization tool).To use and install VisualVM, go to http://visualvm.java.net/, download and follow the steps for installation (usually uncompressing in the folder of your choice and then clicking the provided executable) Java 7 Recipes A Problem-Solution Approach



707. Matching texts for the word tasktrackers:


  1. There are two types of nodes that control the job execution process: a jobtracker and a number of tasktrackers. Hadoop:The Definitive Guide 3rd Edition


  2. The jobtracker accepts new jobs, breaks them into map and reduce tasks, and assigns those tasks to tasktrackers in the cluster. Cassandra The Definitive Guide


  3. Just like the relationship between datanodes and the namenode in HDFS, tasktrackers inform the jobtracker as to their current health and status by way of regular heartbeats. Hadoop Operations



708. Matching texts for the word concatenated:


  1. Each entity is mapped into a single Bigtable row; the primary key values are concatenated to form the Bigtable row key, and each remaining property occupies its own Bigtable column. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. The toString method is ubiquitous for an important reason: Whenever an object is concatenated with a string by the “+” operator, the Java compiler automatically invokes the toString method to obtain a string representation of the object. Core Java Volume I Fundamentals 9th Edition


  3. The following listing shows how to print out the full, concatenated path of all dependencies assigned to the configuration cargo. Gradle in Action


  4. The identifiers can either be concatenated by some unique delimiter, leading to the output of one key/value pair per group, or each input value can be written with the input key, known as the identity reducer. MapReduce Design Patterns


  5. Alias your concatenated field Like most Hive UDFs, you can alias the output of concat_ws() Hadoop RealWorld Solutions Cookbook


  6. In this technique it was assumed that you were working with a file format (such as image files) that couldn’t be concatenated together. Hadoop in Practice


  7. The part files contain the contents of a number of the original files concatenated together, and the indexes make it possible to look up the part file that an archived file is contained in, as well as its offset and length. Hadoop:The Definitive Guide 3rd Edition


  8. The row key, placed as the first field in the tuple, is the concatenated representation created during the initial copying of the data from the file into HBase. HBase:The Definitive Guide


  9. In our reduce() method, we have concatenated the list of values for each key in the order Hadoop has given them to us. Hadoop in Action


  10. This parser is executed on each node in parallel, parsing the concatenated documents file and writing the found URLs into a file on the local disk. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
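
The Core Java excerpt above (item 2) explains that concatenating an object with a string via the + operator implicitly calls toString. A minimal sketch in plain Java; the Point class here is invented purely for illustration:

```java
public class ConcatDemo {
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        // Invoked automatically when the object is concatenated with a String.
        @Override public String toString() { return "Point(" + x + ", " + y + ")"; }
    }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        // The compiler rewrites this as "location=" + p.toString().
        String message = "location=" + p;
        System.out.println(message); // prints: location=Point(3, 4)
    }
}
```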



709. Matching text for the word bootstrapping:


  1. Because I was very busy bootstrapping Percona, Vadim’s and my consulting company, and because English is not my first language, we all had different roles. High Performance MySQL 3rd Edition


  2. Having discussed the core business logic implemented by the EchoServerHandler, all that remains is to examine the bootstrapping of the server itself. Netty in Action MEAP V08


  3. Bootstrapping your test environment in your build is very diverse, product-specific, and tailored to the requirements of your project. Gradle in Action


  4. When we discussed public and private key pairs, we talked about the bootstrapping issue involved with key distribution: the problem of obtaining the public key of a trusted certificate authority. Java Security 2nd Edition


  5. You can see an example of how to do that at http://projects.puppetlabs.com/projects/1/wiki/Bootstrapping_With_Puppet. Pro Puppet


  6. The HBase master is responsible for bootstrapping a virgin install, for assigning regions to registered regionservers, and for recovering regionserver failures. Hadoop:The Definitive Guide 3rd Edition


  7. A great advantage of XFS during bootstrapping a server is the fact that it formats the entire drive in virtually no time. HBase:The Definitive Guide


  8. The client uses the bootstrapping API from the javax.ejb package to start the container and identify the set of enterprise bean components for execution. Java EE 7 Essentials


  9. Bootstrapping The "brain" of the JRockit JVM is the runtime system itself. Oracle JRockit The Definitive Guide


  10. Again, the second node sleeps for 30 seconds and then starts bootstrapping. Cassandra The Definitive Guide



710. Matching text for the word aggregations:


  1. Many Dremel queries are one-pass aggregations; therefore, we focus on explaining those and use them for experiments in the next section. Dremel - Interactive Analysis of WebScale Datasets


  2. The Shuffle and Reduce phases support cross-element computations, such as aggregations and grouping. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  3. The set of aggregations is limited but the query phase can involve more general computations, which we express in a new interpreted, procedural programming language called Sawzall. (An interpreted language is fast enough: most of the programs are small and on large data sets the calculation tends to be I/O bound, as is discussed in the section on performance.) Interpreting the Data Parallel Analysis with Sawzall


  4. Data aggregations based on user demographics (such as differences in user habits between teenagers and users in their 30s) Hadoop in Practice


  5. The aggregations that we get back are a bit more complicated, but still fairly. Elasticsearch The Definitive Guide


  6. To support the existing insights functionality, time and demographic-based aggregations would be necessary. Apache Hadoop Goes Realtime at Facebook


  7. The majority of queries in our workload are analytic queries that have aggregations and/or joins. Tenzing - A SQL Implementation On The MapReduce Framework


  8. If you are working primarily on large aggregations where you care mostly about trends, individual records likely don't matter. Hadoop Beginners Guide


  9. For this to work, we need to execute the aggregations on multiple levels. PowerDrill - Processing a Trillion Cells per Mouse Click


  10. Furthermore, data analysis workloads tend to consist of many large scan operations, multidimensional aggregations, and star schema joins, all of which are fairly easy to parallelize across nodes in a shared-nothing network. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
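
Most of the excerpts above treat aggregations (group-bys, counts, sums) as the core analytical workload. As a purely illustrative single-machine analogue, not the code of any system quoted here, the same one-pass aggregation idea can be written with Java streams; the PageView record is hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class AggregationDemo {
    // Hypothetical record type: one page view with a country code.
    record PageView(String country) {}

    public static void main(String[] args) {
        List<PageView> views = List.of(
                new PageView("US"), new PageView("DE"), new PageView("US"));

        // One-pass aggregation: group by country and count occurrences,
        // analogous to a map-side extract plus reduce-side count.
        Map<String, Long> viewsPerCountry = views.stream()
                .collect(Collectors.groupingBy(PageView::country, Collectors.counting()));

        System.out.println(viewsPerCountry); // e.g. {US=2, DE=1}
    }
}
```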



711. Matching text for the word journaling:


  1. Such a large difference in performance is attributed to get block locations (read workload) being a memory-only operation, while creates (write workload) require journaling, which is bounded by the local hard drive performance. HDFS scalability the limits to growth


  2. However, in some cases it can improve performance because the journaling lets the filesystem delay the writes to the data’s final location. High Performance MySQL 3rd Edition


  3. Thus, they must be set up first and the data they hold is extremely important: make sure that they are running with journaling enabled and that their data is stored on non-ephemeral drives. MongoDB The Definitive Guide 2nd Edition


  4. Synchronous writes or journaling provide the desired safety, but only with a heavy latency and performance penalty. Ceph A Scalable, High-Performance Distributed File System


  5. We turn off logging or journaling, which frequently is not a desirable (or, depending on your situation, legal) option. Cassandra The Definitive Guide


  6. The most notable feature of ext3 is support for journaling, which records changes in a journal or log prior to modifying the actual data structures that make up the filesystem. Hadoop Operations


  7. If this potentially locks parts of the application for too long, an alternative, such as journaling, might be specified. Java Performance Tuning 2nd Edition


  8. Creating Snapshots with a Journaling Filesystem Many modern volume managers have the ability to create snapshots of the state of the drive at any particular point-in-time. The Definitive Guide to MongoDB


  9. In MongoDB’s case, users control the speed and durability trade-off by choosing write semantics and deciding whether to enable journaling. MongoDB in Action



712. Matching text for the word john:


  1. The template library we’re using in the example blog is by John Resig and was chosen for simplicity. CouchDB The Definitive Guide


  2. John Wiley & Sons, Inc., is not associated with any product or vendor mentioned in this book. Professional Hadoop Solutions


  3. Now let’s update this document with John Smith’s actual contact details. Beginning CouchDB


  4. “A difficult topic lucidly explained.”—John Griffin, coauthor of Hibernate Search in Action. HBase in Action


  5. And the response will be John Smith’s original JSON document: Elasticsearch The Definitive Guide


  6. John Drum is a bicoastal software engineer with over 20 years of experience in industries ranging from e-commerce to financial services. Getting Started with Google Guava


  7. It has been said, though it is hard to verify, that the 17th-century English poet John Milton had actually read every published book on the face of the earth. Cassandra The Definitive Guide


  8. About the Author John Russell is a software developer and technical writer, and he’s currently the documentation lead for the Cloudera Impala project. Cloudera Impala


  9. John was also instrumental in driving through some of the newer features in Hive like StorageHandlers and Indexing Support. Programming Hive


  10. John O''Conner is a husband, father, and sleep-deprived technology enthusiast. Java 7 Recipes A Problem-Solution Approach



713. Matching text for the word spittr:


  1. Before we dive into the details of listing 5.1, you may wonder what the word spittr has to do with anything. Spring in Action 4th Edition



714. Matching text for the word distcp:


  1. A full shutdown backup has to stop HBase (or disable all tables) at first, then use Hadoop's distcp command to copy the contents of an HBase directory to either another directory on the same HDFS, or to a different HDFS. HBase Administration Cookbook


  2. For writes we consider a different distcp-like job load, which produces a lot of writes. HDFS scalability the limits to growth


  3. Distcp This runs a distributed copy job (as a MapReduce job) Professional Hadoop Solutions


  4. Hadoop Distributed Copy (distcp) is a tool for efficiently copying large amounts of data within or in between clusters. Hadoop RealWorld Solutions Cookbook


  5. But copying over this directory (using distcp) doesn’t make for a great backup solution, especially in a running system. HBase in Action


  6. Parallel Copying with distcp The HDFS access patterns that we have seen so far focus on single-threaded access. Hadoop:The Definitive Guide 3rd Edition


  7. You can now, for example, use the hadoop distcp command to move the directory from one cluster to another, and perform the import there. HBase:The Definitive Guide


  8. The first is the distributed copy tool, or distcp for short, which copies HDFS data in parallel either to another location within the same cluster, or between clusters. Hadoop Operations


  9. For example, a user may want to have a process in which step one is a custom MapReduce job, step two uses the output of step one and processes it using Hive, and finally step three uses distcp to copy the output from step 2 to a remote cluster. Programming Hive


  10. The distcp utility is used for copying large amounts of files within a single Hadoop cluster or between Hadoop clusters, and leverages Hadoop itself to execute the copy in a distributed manner. Spring Data Modern Data Access for Enterprise Java



715. Matching text for the word ddl:


  1. The group for managing tables is called Data Definition Language (DDL) HBase Administration Cookbook


  2. A new Apache project (HCatalog) extends the reach of Hive’s metastore while preserving components from the Hive DDL for table definition. Professional Hadoop Solutions


  3. Note that the onus of ensuring that the bucket files are properly created and named are a responsibility of the application and HiveQL DDL statements do not currently try to bucket the data in a way that it becomes compatible to the table properties. Hive A Petabyte Scale Data Warehouse Using Hadoop


  4. Protocol buffers are described by a data description language (DDL) that defines the content of the messages. Interpreting the Data Parallel Analysis with Sawzall


  5. MasterObserver—For hooking into DDL events, such as table creation or schema modifications, HBase provides the MasterObserver. HBase in Action


  6. MasterObserver This can be used to react to administrative or DDL-type operations. HBase:The Definitive Guide


  7. Returns the number of rows affected, or 0 for DDL statements such as CREATE TABLE. Core Java Volume II Advanced Features 9th Edition


  8. Starting with version 0.7 of Cassandra, you can use API operations to make changes to your schemas, much like you would in SQL by issuing Data Definition Language (DDL) statements, such as CREATE TABLE or ALTER TABLE. Cassandra The Definitive Guide


  9. When you come to Impala from a background with a traditional relational database product, you find the same familiar SQL query language and DDL statements. Cloudera Impala


  10. In any case, a side benefit of using a file to store Hive queries for initialization is that you can track the changes to your DDL via version control. Programming Hive
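
The Core Java excerpt above (item 7) notes that executeUpdate returns 0 for DDL statements such as CREATE TABLE. A minimal JDBC sketch of that behavior; the in-memory jdbc:h2:mem URL is a placeholder and assumes an H2 (or comparable) driver on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class DdlDemo {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL; substitute your own driver and credentials.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement stmt = conn.createStatement()) {
            // A DDL statement: executeUpdate returns 0 rather than a row count.
            int result = stmt.executeUpdate(
                    "CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(64))");
            System.out.println("rows affected (0 for DDL): " + result);
        }
    }
}
```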



716. Matching text for the word dtd:


  1. Executing the last Maven command is a fantastic first step for creating our example because it creates a simple, functional Struts 2 web application with the components we need to create our test, including Spring integration (although it uses an old DTD version of the configuration XML file) Pro Spring Security


  2. The appropriate schema or DTD is chosen by analyzing the deployment descriptor to determine the version it claims to conform to. Java EE 7 Specification


  3. The DTD bean and property mechanism can’t hide complex bean creation, which is a shortcoming in configuring advanced features like aspect-oriented programming (AOP) and security. Spring Batch in Action


  4. In this code, an XML prologue is followed by a document type declaration (DTD) Java EE 7 Essentials


  5. The header can be followed by a document type definition (DTD). Core Java Volume II Advanced Features 9th Edition


  6. It searches for attributes whose XML type (as declared in the document’s DTD) is ID. Java in a Nutshell 5th Edition


  7. Processing and validating against a DTD is normally cheaper than processing and validating against a W3C Schema. Java Performance


  8. XmlQueryParser parses such XML and produces a Query object for you, and the sandbox includes a full DTD to formally specify the out-of-the-box tags, as well as full HTML documentation, including examples, for all tags. Lucene in Action 2nd Edition



717. Matching text for the word javax:


  1. All standard Java packages are inside the java and javax package hierarchies. Core Java Volume I Fundamentals 9th Edition


  2. Note that the URLName is a new class introduced as part of the javax.mail class, and differs only from a URL object in that it does not support methods to open a connection to the resource. Java Network Programming and Distributed Computing


  3. For the purposes of this Agreement: "Independent Implementation" shall mean an implementation of the Specification that neither derives from any of Specification Lead's source code or binary code materials nor, except with an appropriate and separate license from Specification Lead, includes any of Specification Lead's source code or binary code materials; "Licensor Name Space" shall mean the public class or interface declarations whose names begin with "java", "javax", "com.sun" and "oracle" or their equivalents in any subsequent naming convention adopted by Oracle America, Inc. Java EE 7 Specification


  4. Standard javax.json.* APIs are used to generate the JSON representation from a string: Java EE 7 Essentials


  5. The DataSource interface is defined in the javax.sql standard extension package. Core Java Volume II Advanced Features 9th Edition


  6. The standard libraries and optional packages, whose names begin with java and javax, are exceptions to this rule. Effective Java 2nd Edition


  7. Using the ExtendedSSLSession interface The javax.net.ssl package provides a series of classes used to effect secure socket communication. Java 7 New Features Cookbook


  8. The documentation of the javax.crypto package and its subpackages was also reviewed by Jon Eaves. Java in a Nutshell 5th Edition


  9. This technique is used extensively in the java.awt and javax.swing packages, which define interfaces and abstract classes associated with different kinds of event-handling methods. Concurrent Programming in Java Design Principles and Pattern


  10. They are represented in the javax.mail API by the MimeMessage class. Java 7 Recipes A Problem-Solution Approach



718. Matching text for the word libvirt:


  1. Hypervisor Metrics The standard set of sFlow hypervisor and virtual machine metrics (Table 8-2) are based on metrics defined by the open source libvirt project. Monitoring with Ganglia


  2. The nova-compute process is primarily a worker daemon that creates and terminates virtual machine instances via hypervisor's APIs (XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware, etc.) bk-compute-adminguide-grizzly


  3. An example of this would be nova-compute accepting a message from the queue to create a new instance and then using the libvirt library to start a new KVM instance. Deploying OpenStack


  4. Flavor customization can be limited by the hypervisor in use, for example the libvirt driver enables quotas on CPUs available to a VM, disk tuning, bandwidth I/O, and instance VIF traffic control. bk-admin-guide-cloud-latest


  5. Data can be collected by notifications sent by existing OpenStack components (for example, usage events emitted from Compute) or by polling the infrastructure (for example, libvirt) admin-user-guide-trunk


  6. The valid model values depend on the libvirt_type setting, as shown in the following tables: user-guide


  7. Compute nodes are where the computing resources are held, and in our example architecture they run the hypervisor (KVM), libvirt (the driver for the hypervisor, which enables live migration node to node), nova-compute, nova-api-metadata (generally only used when running in multi-host mode, it retrieves instance-specific metadata), nova-vncproxy, and nova-network. OpenStack Operations Guide



719. Matching text for the word memtable:


  1. When a write is received by Cassandra, the data is first recorded in a commit log, then written to an in-memory structure known as memtable. NoSQL Distilled


  2. Using a dedicated Commit Log disk Write operations are done sequentially to a commit log on disk and modify a sorted structure in memory called a Memtable. Cassandra High Performance Cookbook


  3. For example, a merging compaction also includes a memtable flush. HBase:The Definitive Guide


  4. A write will not count as successful until it’s written to the commit log, to ensure that if a write operation does not make it to the in-memory store (the memtable, discussed in a moment), it will still be possible to recover the data. Cassandra The Definitive Guide


  5. Of these updates, the recently committed ones are stored in memory in a sorted buffer called a memtable; the older updates are stored in a sequence of SSTables. Bigtable - A Distributed Storage System for Structured Data
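
The excerpts above describe the usual LSM-style write path: a mutation is appended to a commit log first and only then applied to an in-memory sorted structure, the memtable, which is later flushed to SSTables. The following is a deliberately simplified, single-threaded sketch of that idea, not Cassandra's or Bigtable's actual code:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.TreeMap;

public class TinyMemtable {
    private final Path commitLog;
    // Sorted in-memory buffer of recent writes (the "memtable").
    private final TreeMap<String, String> memtable = new TreeMap<>();

    TinyMemtable(Path commitLog) { this.commitLog = commitLog; }

    void put(String key, String value) throws IOException {
        // 1. Record the mutation durably in the commit log first.
        try (BufferedWriter log = Files.newBufferedWriter(commitLog,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
            log.write(key + "\t" + value);
            log.newLine();
        }
        // 2. Only then apply it to the in-memory sorted structure.
        memtable.put(key, value);
        // A real system would flush the memtable to an SSTable once it grows too large.
    }

    public static void main(String[] args) throws IOException {
        TinyMemtable store = new TinyMemtable(Path.of("commitlog.txt"));
        store.put("user:1", "alice");
        store.put("user:2", "bob");
        System.out.println(store.memtable);
    }
}
```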



720. Matching text for the word instantiates:


  1. This method also instantiates a local instance of the key and value classes, Pro Hadoop


  2. The TaskTracker loads the token in memory and instantiates tasks, which should be run as the user that initiated the MapReduce process (you will need to configure that) Professional Hadoop Solutions


  3. The class SecurityScanMain reads the required arguments to connect to Accumulo and instantiates a BatchWriter instance to write out test data to the acled table. Hadoop RealWorld Solutions Cookbook


  4. The nested var element actually instantiates the variable, using the variable name defined in the name attribute. Scaling CouchDB


  5. This is particularly useful because it handles importing the necessary libraries and instantiates all the classes you’ll need. HBase in Action


  6. This method instantiates a new ZooKeeper object, which is the central class of the client API and the one that maintains the connection between the client and the ZooKeeper service. Hadoop:The Definitive Guide 3rd Edition


  7. Also note that a bigger buffer takes more memory—on both the client and server side since the server instantiates the passed write buffer to process it. HBase:The Definitive Guide


  8. The driver instantiates, configures, and passes a JobConf   object named job to JobClient. Hadoop in Action


  9. The second daemon, the tasktracker, accepts task assignments from the jobtracker, instantiates the user code, executes those tasks locally, and reports progress back to the jobtracker periodically. Hadoop Operations


  10. As you can see, listing 7.2 instantiates the BrokerService and configures the brokerName and dataDirectory properties. ActiveMQ in Action



721. Matching text for the word webapp:


  1. This approach requires you to install patched versions of whisper and the Graphite webapp on your gweb server. Monitoring with Ganglia


  2. The default convention for web application sources is the directory src/main/webapp. Gradle in Action


  3. The generated .war file is pointed to by the webapp tag, and Jetty runs the application. Jenkins Continuous Integration Cookbook


  4. We set the root of the WAR file contents to the directory src/main/webapp. Gradle Effective Implementation Guide


  5. If you are running Jenkins in a Java EE container such as Tomcat or JBoss, you can configure the webapp to expose its own environments variables. Jenkins The Definitive Guide


  6. These enterprise beans are referenced by se beans in ejb3.jar and by the servlets packaged in webapp.war. Java EE 7 Specification


  7. Finally, there are service accounts, e.g., the apache and mysql users and groups required by multiple Puppet classes, such as the apache, mysql, and webapp classes. Pro Puppet


  8. These are available in the example source code and are named jms-webapp-local and jms-webapp-global. ActiveMQ in Action


  9. The following command shows how to get a few interesting request processing statistics for a servlet in the Web application named TestWebapp. Java Performance


  10. When you cannot use Luke, because the index is on a remote server and not accessible to your local computer, LIMO is a good fallback since it runs as a webapp on the server. Lucene in Action 2nd Edition



722. Matching text for the word invoker:


  1. When it returns, its int return value is pushed onto the operand stack of the frame of the invoker, the add12and13 method. The Java Virtual Machine Specification Java SE 7 Edition


  2. In addition to these remoting protocols, Spring also provides its own HTTP-based invoker that is based on standard Java serialization. Pro Spring 3


  3. With the properties file in place, a simple JobInvoker class such as the one shown in Listing 11-17 can start the required cluster and optionally wait for its completion. Professional Hadoop Solutions


  4. If this were not considered an error, an invoker of the method move on a reference of type Point could find the contract between it and Point broken if this exception were thrown. The Java Language Specification Java SE 7 Edition


  5. In addition to the Command interface and its implementations, there’s a client, which is responsible for creating the Command; an invoker, which is responsible for running it; and a receiver, on which the Command performs its action. Functional Programming Patterns in Scala and Clojure


  6. The InvokeForString invoker is used because the return type of the method is a String. Hadoop:The Definitive Guide 3rd Edition


  7. If an incoming request needs to be serviced by multiple threads but the invoker needs to remain unaware of the messaging system, an inbound gateway provides the solution. Spring Integration in Action


  8. Starting in version 0.8, Pig offers invoker methods that allow you to treat certain static Java functions as if they were Pig UDFs. Programming Pig


  9. Stack trace shows from the most specific place where the exception happened (the line where the exception occurred) to the top-level invoker of the offending code (and everything in-between) Java 7 Recipes A Problem-Solution Approach


  10. Let’s look at Spring’s HTTP invoker, which offers RPC over HTTP (like Hessian/Burlap) while at the same time using Java serialization of objects (like RMI) Spring in Action 4th Edition



723. Matching text for the word xmx:


  1. Remember if you increase this value you may also have to increase your Java heap space using the -Xmx and optionally the -Xms parameters. Apache Flume Distributed Log Collection for Hadoop


  2. In addition, there may be a danger here of running out of memory if your byte buffer grows too large, so you must be mindful of memory consumption (perhaps adjust Java heap parameters such as Xms and Xmx) Pro Java 7 NIO.2


  3. Xmx This selects the maximum size that Java will allow the heap to grow to. Mahout in Action


  4. A Java process will never use more heap memory than we've specified with the Xmx parameter (or the ES_MAX_MEM property) Mastering ElasticSearch


  5. Heap memory controlled with the Xmx JVM option should only be a portion of your total system memory. Cassandra High Performance Cookbook


  6. For example, the option for setting the maximum heap size, -Xmx, is the same on most JVMs, JRockit included. Oracle JRockit The Definitive Guide


  7. Try setting the -Xmx and -Xms options to the same value to keep Java from having to manage heap growth. Cassandra The Definitive Guide


  8. This method simply gives the -Xmx value, and is of no use to monitor heap usage. Java Performance Tuning 2nd Edition


  9. Note, -XX:MaxHeapSize is the same as the command line option -Xmx. Java Performance


  10. The option -Xms size sets the starting size of the heap and the option -Xmx size sets the maximum allowed size of the heap. Lucene in Action 2nd Edition
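
All of the excerpts above concern the -Xms/-Xmx heap-size flags. From inside the JVM the configured ceiling can be observed via Runtime, whose maxMemory() value roughly corresponds to the -Xmx setting:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects the ceiling set with -Xmx (approximately).
        System.out.printf("max heap:   %d MB%n", rt.maxMemory() / (1024 * 1024));
        System.out.printf("total heap: %d MB%n", rt.totalMemory() / (1024 * 1024));
        System.out.printf("free heap:  %d MB%n", rt.freeMemory() / (1024 * 1024));
    }
}
// Example launch: java -Xms256m -Xmx512m HeapInfo
```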



724. Matching text for the word kanban:


  1. Shantonu Sarker is a proactive software test engineer with seven years of experience in test automation, development (C# and Java), and project management with Agile (Scrum and Kanban) Performance Testing With JMeter 2.9


  2. But the application you set out to build in chapter 6 won’t be complete by chapter’s end because to build a functional Kanban application, your application needs to store information such as stories and its status information into persistent storage. Scala in Action


  3. This approach is comparable to the philosophy of continual improvement promoted by lean processes such as Kanban. Jenkins The Definitive Guide



725. Matching text for the word denormalized:


  1. Our schema language lets applications control the placement of hierarchical data, storing data that is accessed together in nearby rows or denormalized into the same row. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. Therefore, creating the denormalized records for MapReduce has to be done in a batch fashion periodically. MapReduce Design Patterns


  3. As a result, most HBase-based implementations are using highly denormalized data. Professional Hadoop Solutions


  4. It primarily retrieves events by looking up all logs containing a keyword from a range of time.To make this efficient, the event timeline is denormalized to produce one timeline per keyword. Cassandra High Performance Cookbook


  5. Any form of slave replication or external caching introduces weak consistency into your now denormalized data. Hadoop:The Definitive Guide 3rd Edition


  6. The denormalized requirement would further increase the number of writes to the system as each message could be written several times. Apache Hadoop Goes Realtime at Facebook


  7. If you have designed a data model and find that you need something like a join, you’ll have to either do the work on the client side, or create a denormalized second column family that represents the join results for you. Cassandra The Definitive Guide


  8. That is why data warehouses (and also Impala) are more tolerant of denormalized data, with values that are duplicated and possibly stored in raw string form rather than condensed to numeric IDs. Cloudera Impala


  9. Obviously, in denormalized datasets with often several thousands of columns this can make a huge difference compared to the row-wise storage used by most database systems. PowerDrill - Processing a Trillion Cells per Mouse Click


  10. Joins should be avoided in favor of denormalized data, when feasible. Programming Hive



726. Matching text for the word newline:


  1. Usually, text files on a computer make assumptions like a newline character terminates each record. Apache Flume Distributed Log Collection for Hadoop


  2. If you add another query parameter, heartbeat=N, where N is a number, CouchDB will send you a newline character each N milliseconds. CouchDB The Definitive Guide


  3. These could have non-printable characters inside them such as a tab or newline that would affect the CLI output. Cassandra High Performance Cookbook


  4. We explicitly tell the SerDe that a tab character separates each field and a newline character separates each record. Hadoop RealWorld Solutions Cookbook


  5. Note that we had to split this command across two lines, so you use the “\” character to escape the newline. Hadoop in Practice


  6. The value is the contents of the line, excluding any line terminators (e.g., newline or carriage return), and is packaged as a Text object. Hadoop:The Definitive Guide 3rd Edition


  7. This format is like a stream of valid one-line JSON documents joined together by newline"\n" characters. Elasticsearch The Definitive Guide


  8. The newline character (\n) can trivially be thought of as the sync marker pointing out to both splittable boundaries and record boundaries. Hadoop in Action


  9. Redis Redis is not a “plain” key-value store, as it supports a variety of values in different data structures such as binary-safe strings (strings that don’t contain a space or newline character), lists and sets of binary-safe strings, and sorted sets, which contain a floatingnumber score. Cassandra The Definitive Guide


  10. The format of the file specified by dfs.hosts is a newline separated list of hostnames or IP addresses, depending on how machines identify themselves to the cluster. Hadoop Operations
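
Several excerpts above describe newline-delimited records, with \n acting as the record boundary (and often \t separating fields). A trivial Java illustration of splitting such input; the sample data is made up:

```java
public class NewlineRecords {
    public static void main(String[] args) {
        // Three newline-terminated records, as a text input format would see them.
        String input = "alice\t42\nbob\t17\ncarol\t99\n";

        // Split on the record boundary; the trailing terminator yields no empty
        // record because split() drops trailing empty strings by default.
        String[] records = input.split("\n");
        for (String record : records) {
            String[] fields = record.split("\t"); // tab-separated fields within a record
            System.out.println(fields[0] + " -> " + fields[1]);
        }
    }
}
```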



727. Matching text for the word sparc:


  1. The storage format is platformneutral, meaning you can copy the data and index files from an Intel-based server to a PowerPC or Sun SPARC without any trouble. High Performance MySQL 3rd Edition


  2. On SPARC systems running Solaris, the maximum size of a process is 4 GB, leaving us. Java Threads 3rd Edition


  3. We can't readily move the program, optimizations and all, to SPARC. Oracle JRockit The Definitive Guide


  4. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. Java Performance


  5. This usually isn’t difficult to comply with, but users running SPARC, PowerPC, PA-RISC, and other big-endian architectures will have to hold off. Most of the drivers support both little- and big-endian byte orderings, so clients of MongoDB can usually run on either architecture. MongoDB in Action
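
The MongoDB in Action excerpt mentions big-endian architectures such as SPARC and PowerPC. Java code itself is byte-order neutral, but java.nio lets you inspect the native order and serialize in either order explicitly; a small hedged illustration:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        // Native order of the underlying hardware (BIG_ENDIAN on SPARC/PowerPC,
        // LITTLE_ENDIAN on x86).
        System.out.println("native order: " + ByteOrder.nativeOrder());

        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.order(ByteOrder.BIG_ENDIAN).putInt(0, 1);
        System.out.printf("big-endian bytes:    %d %d %d %d%n",
                buf.get(0), buf.get(1), buf.get(2), buf.get(3));

        buf.order(ByteOrder.LITTLE_ENDIAN).putInt(0, 1);
        System.out.printf("little-endian bytes: %d %d %d %d%n",
                buf.get(0), buf.get(1), buf.get(2), buf.get(3));
    }
}
```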



728. Matching text for the word parameterization:


  1. To understand and benefit from Scala collections, you need to know two concepts: type parameterization and higher-order functions. Scala in Action


  2. It gives you a new concise way to express behavior parameterization. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. We will take a closer look at the parameterization of each method and ways to run it. Sonar Code Quality Testing Essentials


  4. But this form demonstrates that while the type parameterization portion (the type names in square brackets) forms part of the type of the instance, the value parameterization part (the values in parentheses) does not. Programming in Scala 2nd Edition


  5. Oozie uses the JSP expression language to perform parameterization, as you’ll see in the following code. Hadoop in Practice


  6. We can create data-driven Selenium WebDriver tests using the JUnit 4 parameterization feature. Selenium Testing Tools Cookbook


  7. If you choose to implement your own matchers, you should expect to invest some of your time in fine-tuning parameterization. Spring Integration in Action


  8. Because generics are implemented by erasure (Item 25), you can use a single object for all required type parameterizations, but you need to write a static factory method to repeatedly dole out the object for each requested type parameterization. Effective Java 2nd Edition


  9. The same is not generally true for generic methods: the compiler can almost always figure out the correct parameterization of a generic method based on the arguments you pass to the method. Java in a Nutshell 5th Edition


  10. The first point to take into consideration is that the type of data used for the parameterization of the Future interface in the declaration of the list that stores the result objects must be compatible with the one used to parameterized the Callable objects. Java 7 Concurrency Cookbook
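
The Java in a Nutshell excerpt above notes that for generic methods the compiler can usually infer the correct parameterization from the arguments. A tiny example showing both inferred and explicit parameterization:

```java
import java.util.List;

public class Parameterization {
    // A generic method: <T> is the type parameter.
    static <T> T first(List<T> list) {
        return list.get(0);
    }

    public static void main(String[] args) {
        List<String> names = List.of("ada", "grace");

        // Inferred: the compiler parameterizes first() as <String> from the argument.
        String inferred = first(names);

        // Explicit parameterization is also legal, but rarely needed.
        String explicit = Parameterization.<String>first(names);

        System.out.println(inferred + " " + explicit);
    }
}
```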



729. Matching text for the word grep:


  1. In the namenode_status() function, we use the jps command to show all running Java processes owned by the hadoop user, and the grep name of the NameNode daemon to see whether it has started: HBase Administration Cookbook


  2. Here are some common netcat/grep commands against gmetad port 8651 Monitoring with Ganglia


  3. Distributed Grep: The map function emits a line if it matches a supplied pattern. MapReduce - Simplied Data Processing on Large Clusters


  4. Distributed grep Grep, a very powerful tool that uses regular expressions for finding lines of text of interest, is easily parallelized by applying a regular expression match against each line of input and only outputting lines that match. MapReduce Design Patterns


  5. Od and strings can be combined with other command-line tools such as pipelines or grep. Cassandra High Performance Cookbook


  6. Verify the installation by listing the processes through the ps | grep java command (if you are using Linux) or via Task Manager (if you are in Windows), in the master node and slave nodes. Hadoop MapReduce Cookbook


  7. Google, in its MapReduce paper, indicated that it used its version of MapReduce to create its web index from crawl data.10 Google also highlights applications of MapReduce to include activities such as a distributed grep, URL access frequency (from log data), and a term-vector algorithm, which determines popular keywords for a host. Hadoop in Practice


  8. Also use a different string to grep; we use a portion of the IP address we know isn't shared with any other host in the cluster. Hadoop Beginners Guide


  9. The text file format is convenient for sharing data with other tools, such as Pig, Unix text tools like grep, sed, and awk, etc. Programming Hive


  10. We report load times for two data sets, Grep and UserVisits in. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
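
The MapReduce excerpts above describe distributed grep as emitting a line whenever it matches a supplied pattern. Stripped of the distribution machinery, the per-line logic is just a regular-expression filter; a plain Java sketch with invented sample lines:

```java
import java.util.List;
import java.util.regex.Pattern;

public class GrepDemo {
    public static void main(String[] args) {
        Pattern pattern = Pattern.compile("ERROR");   // the supplied pattern
        List<String> lines = List.of(
                "2024-01-01 INFO  startup complete",
                "2024-01-01 ERROR disk full",
                "2024-01-01 WARN  retrying");

        // Map step of a distributed grep: emit only the lines that match.
        lines.stream()
             .filter(line -> pattern.matcher(line).find())
             .forEach(System.out::println);
    }
}
```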



730. Matching text for the word petabytes:


  1. If the spikes are in your network graphs and are in the range of petabytes/sec, consider rebuilding Ganglia using the following flag: Monitoring with Ganglia


  2. Basic arithmetic shows us that by applying the same process to create a cluster with three layers of proxies, we can manage 262 petabytes on thousands of machines. CouchDB The Definitive Guide


  3. The system scales to thousands of CPUs and petabytes of data, and has thousands of users at Google. Dremel - Interactive Analysis of WebScale Datasets


  4. Our system is in active use by many developers, and has processed petabytes of data. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  5. The primary goals of the architecture are scalability (to hundreds of petabytes and beyond), performance, and reliability. Ceph A Scalable, High-Performance Distributed File System


  6. A note from the trenches: "You know you have a large storage system when you get paged at 1 AM because you only have a few petabytes of storage left." Storage Architecture and Challenges


  7. As systems grow to petabytes and beyond, however, it becomes increasingly doubtful that a global snapshot schedule or policy will be appropriate for all data. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  8. Thanks to MapReduce, Tenzing scales to thousands of cores and petabytes of data on cheap, unreliable hardware. Tenzing - A SQL Implementation On The MapReduce Framework


  9. Bigtable is designed to reliably scale to petabytes of data and thousands of machines. Bigtable - A Distributed Storage System for Structured Data


  10. Databases do not meet the storage or throughput requirements of these tasks: Google’s indexing system stores tens of petabytes of data and processes billions of updates per day on thousands of machines. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



731. Matching text for the word gweb:


  1. As depicted in Figure 1-1, Ganglia is architecturally composed of three daemons: gmond, gmetad, and gweb. Monitoring with Ganglia



732. Matching text for the word opcode:


  1. Chapter 6 specifies the instruction set of the Java Virtual Machine, presenting the instructions in alphabetical order of opcode mnemonics. The Java Virtual Machine Specification Java SE 7 Edition


  2. An Opcode tells Memcached which operations you wish to perform. Netty in Action MEAP V08


  3. Operations are always encoded with one byte for the opcode, followed by an optional number of operands of variable length. Oracle JRockit The Definitive Guide


  4. An opcode is an enumerated type whose elements represent operations on some machine, such as the Operation type in Item 30, which represents the functions on a simple calculator. Effective Java 2nd Edition


  5. The Opcode enumerated type defines an abstract method perform(), which is then implemented by the class body of each value of the type. Java in a Nutshell 5th Edition


  6. Returning to the oplog entry, the second field, op, specifies the opcode. MongoDB in Action
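
The Effective Java and Java in a Nutshell excerpts describe an opcode as an enum whose constants each implement an abstract operation. A short version of that pattern, using made-up arithmetic opcodes:

```java
public enum Opcode {
    ADD { public double perform(double a, double b) { return a + b; } },
    SUB { public double perform(double a, double b) { return a - b; } },
    MUL { public double perform(double a, double b) { return a * b; } };

    // Each constant supplies its own implementation.
    public abstract double perform(double a, double b);

    public static void main(String[] args) {
        for (Opcode op : values()) {
            System.out.printf("%s(6, 3) = %.1f%n", op, op.perform(6, 3));
        }
    }
}
```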



733. Matching text for the word encodes:


  1. This is rather clever, because you can immediately tell whether a code unit encodes a single character or it is the first or second part of a supplementary character. Core Java Volume I Fundamentals 9th Edition


  2. The setToRaw method, in Listing 9-4, is used to create and store a value in a key object that correctly encodes either a search request or a search space. Pro Hadoop


  3. Data compression in Hadoop is done by a compression codec, which is a program that encodes and decodes data streams. Hadoop Operations and Cluster Management Cookbook


  4. After building a model of its requirements, the AM encodes its preferences and constraints in a heartbeat message to. Apache Hadoop YARN Yet Another Resource Negotiator


  5. Note that you don’t need to supply schema information, because Avro encodes that in. Hadoop in Practice


  6. ByteBuffer encode(String str) encodes the given string into a sequence of bytes. Core Java Volume II Advanced Features 9th Edition


  7. First of all, the citation data encodes a graph   in the same vein that web links and social networks   are also graphs. Hadoop in Action


  8. Protocol independence Thrift encodes and decodes the data types for use across protocols. Cassandra The Definitive Guide


  9. Hive uses an input format to split input streams into records, an output format to format records into output streams (i.e., the output of queries), and a SerDe to parse records into columns, when reading, and encodes columns into records, when writing. Programming Hive


  10. The other dependency is an API that encodes arbitrary strings first with the Metaphone algorithm, then the Base64 format. Building and Testing with Gradle
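
The Core Java excerpt above cites Charset.encode(String), which encodes a string into a sequence of bytes; decode() performs the reverse. A minimal round trip using the standard java.nio.charset API:

```java
import java.nio.ByteBuffer;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodeDemo {
    public static void main(String[] args) {
        Charset utf8 = StandardCharsets.UTF_8;

        // Encode: String -> bytes.
        ByteBuffer bytes = utf8.encode("héllo");
        System.out.println("encoded length in bytes: " + bytes.remaining());

        // Decode: bytes -> String.
        String roundTripped = utf8.decode(bytes).toString();
        System.out.println(roundTripped); // héllo
    }
}
```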



734. Matching text for the word specifier:


  1. The conversion character that ends a format specifier indicates the type of the value to be formatted: f is a floating-point number, s a string, and d a decimal integer. Core Java Volume I Fundamentals 9th Edition


  2. To resolve an unresolved symbolic reference to a call site specifier involves three steps: The Java Virtual Machine Specification Java SE 7 Edition


  3. Each access specifier only controls the access for that particular definition. Thinking in Java 4th Edition


  4. Both "title" and "content" are still returned, even though they weren’t explicitly included in the key specifier. MongoDB The Definitive Guide 2nd Edition


  5. When a FlatSpec is executed, it will run each specifier clause as a ScalaTest test. Programming in Scala 2nd Edition


  6. Here is an example of where a type declaration specifier is distinct from a type name: The Java Language Specification Java SE 7 Edition


  7. Otherwise, the visibility of class members is the same as private without a scope specifier. Programming Scala


  8. The %d format specifier is for formatting integers in decimal format: Java in a Nutshell 5th Edition


  9. Thus, all calls to remove take an optional query specifier for selecting exactly which documents to delete. MongoDB in Action


  10. A data field consists of a length specifier sequence followed by the actual data value. Understanding MySQL Internals
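
The Core Java and Java in a Nutshell excerpts list the common conversion characters used in format specifiers: %d for decimal integers, %s for strings, %f for floating-point values. A small example with invented values:

```java
public class SpecifierDemo {
    public static void main(String[] args) {
        // %d = decimal integer, %s = string, %.2f = floating point with 2 decimals.
        String line = String.format("order #%d for %s totals %.2f EUR", 42, "Alice", 19.5);
        System.out.println(line); // order #42 for Alice totals 19.50 EUR

        // printf uses the same format specifiers directly.
        System.out.printf("%-10s|%5d%n", "left", 7);
    }
}
```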



735. Matching text for the word outputting:


  1. For example, what if you wanted to support accessing recipe information by using the ingredients from the recipe? The emit() function we used in the original view can be called multiple times within your map function, each time outputting a row. Getting Started with CouchDB


  2. A debug_level greater than zero will result in gmond running in the foreground and outputting debugging information. Monitoring with Ganglia


  3. Popular uses of show functions are for outputting HTML page, CSV files, or XML needed for compatibility with a particular interface. CouchDB The Definitive Guide


  4. For instance, outputting HI THERE to a numeric field will insert NULL for the field value for that row. Hadoop RealWorld Solutions Cookbook


  5. You are not restricted to just outputting the values in the object; you can also manipulate them and perform calculations on them where relevant. Beginning CouchDB


  6. We used the text outputting invert indexing MapReduce program in step 6 for the clarity of understanding the algorithm. Hadoop MapReduce Cookbook


  7. Your MapReduce job is outputting a different Avro object (StockAvg) from the job input. Hadoop in Practice


  8. You can at least check if the mapper is outputting the proper data and if the proper data is being sent to the reducer. Hadoop in Action


  9. There are similar classes for outputting data to Cassandra in the Hadoop package, but at the time of this writing, those classes are still being finalized. Cassandra The Definitive Guide


  10. The reduce function then operates on the intermediate key-value pairs, processing all values that have the same key together and outputting the result. Hadoop Operations



736. Matching text for the word ufo:


  1. Getting the UFO sighting dataset We will use a public domain dataset of over 60,000 UFO sightings. Hadoop Beginners Guide


  2. Gluster UFO uses a customized version of Swift that uses Gluster as the back-end. OpenStack Operations Guide



737. Matching text for the word initializers:


  1. All field initializers and initialization blocks are executed, in the order in which they occur in the class declaration. Core Java Volume I Fundamentals 9th Edition


  2. As Dog.class is loaded (creating a Class object, which you’ll learn about later), all of its static initializers are run. Thinking in Java 4th Edition


  3. In this version of Rational, we added a private field, g, and modified the initializers for numer and denom. Programming in Scala 2nd Edition


  4. Declaration order is significant only for local variables, local classes, and the order of initializers of fields in a class or interface. The Java Language Specification Java SE 7 Edition


  5. For example, invocations might come from background threads or static initializers. Effective Java 2nd Edition


  6. Variable names and optional initializers are separated from each other with commas: Java in a Nutshell 5th Edition


  7. These are also the actual names for constructors and static initializers in the class file.) FloatingDecimal is private to the java.lang package, which handles most of the logic involved in converting floating-point numbers. Java Performance Tuning 2nd Edition


  8. Next, class initialization runs the class static initializers, and initializers for static fields. Java Performance
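
The Core Java and Thinking in Java excerpts describe when field initializers and initializer blocks run: static ones during class initialization, instance ones before the constructor body, all in declaration order. A small demonstration:

```java
public class InitOrder {
    static int counter = initStep("static field initializer");

    static { initStep("static initializer block"); }

    int id = initStep("instance field initializer");

    { initStep("instance initializer block"); }

    InitOrder() { initStep("constructor body"); }

    static int initStep(String what) {
        System.out.println(what);
        return 0;
    }

    public static void main(String[] args) {
        new InitOrder();
        // Output order: static field initializer, static initializer block,
        // instance field initializer, instance initializer block, constructor body.
    }
}
```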



738. Matching text for the word geronimo:


  1. The Spring Project One of the most endearing things about the Spring project is the level of activity currently present in the community and the amount of cross-pollination between other projects such as CGLIB, Apache Geronimo, and AspectJ. Pro Spring 3


  2. Srinath is also a committer of Apache open source projects Axis, Axis2, and Geronimo. Hadoop MapReduce Cookbook


  3. He contributes to open source projects—Apache OpenEJB and Apache Geronimo. Java 7 New Features Cookbook


  4. Jeff Bleiel, our development editor, for his work liaising with Manning—without Jeff’s input, the book would not be what it is today; Gary Tully for his tireless technical reviews of the entire book; the Manning staff for their arduous work on the book; Filip Hanik for his assistance with Tomcat; Jan Bartel and Greg Wilkins for their assistance with Jetty; David Jencks and Kevan Miller for their assistance with Geronimo; and Jaikiran Pai for his assistance with JBoss. ActiveMQ in Action



739. Matching text for the word daos:


  1. Two Different Service Layers Implementation Traditionally, in the data access layer, developers will implement data access objects (DAOs) to separate the data access logic from the business logic. Pro Spring 3


  2. However, references to objects that hold no state (e.g., services, facades, DAOs, and so on) should be declared as transient. Pro Spring MVC


  3. Another potential problem is that it’s likely that many DAOs may need to share this connection information. Spring in Practice


  4. Most of the time, you secure the business services; some other times, you might need to secure the DAOs. Pro Spring Security


  5. Spring Batch bases this class on a set of Data Access Objects (DAOs) used for dedicated interactions and data management. Spring Batch in Action


  6. If DAOs and business services follow the same development and deployment cycles, there is no point in splitting them into different bundles. Spring Dynamic Modules in Action


  7. Rather than relying on repositories or DAOs to wrap access to your entities, you use the static finder methods to locate them, and methods such as persist() and merge() to manipulate them. Spring Roo in Action


  8. DAOs instead of inheriting from the DAO class, then you can design. SQL Antipatterns


  9. Spring’s ORM support builds on the DAO support, providing a convenient way to build DAOs for several ORM solutions. Spring in Action 4th Edition
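
The excerpts above present DAOs as the layer that keeps data access logic out of the business logic. A minimal hand-rolled sketch of the pattern; the interface, the in-memory implementation, and all names are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The DAO interface: business code depends only on this abstraction.
interface UserDao {
    Optional<String> findNameById(long id);
    void save(long id, String name);
}

// One possible implementation; a JDBC- or JPA-backed DAO could replace it
// without touching the business logic.
class InMemoryUserDao implements UserDao {
    private final Map<Long, String> table = new HashMap<>();
    public Optional<String> findNameById(long id) { return Optional.ofNullable(table.get(id)); }
    public void save(long id, String name) { table.put(id, name); }
}

public class DaoDemo {
    public static void main(String[] args) {
        UserDao dao = new InMemoryUserDao();
        dao.save(1L, "alice");
        System.out.println(dao.findNameById(1L).orElse("<missing>"));
    }
}
```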



740. Matching text for the word keypair:


  1. We can now connect to the box using our keypair file and retrieve the results file. Performance Testing With JMeter 2.9


  2. Create a keypair so you can ssh to the instance: bk-compute-adminguide-grizzly


  3. But before we start spinning up instances, we need to make a keypair so that we will be able to log in to the new instance via ssh. Deploying OpenStack


  4. If this feature is enabled, a user can ssh to an instance without an ssh keypair. bk-admin-guide-cloud-latest


  5. For example, to connect as the "ubuntu" user using the my-keypair.pem private key file: Storm Blueprints Patterns for Distributed Real-time Computation


  6. However, if you do this you must generate the initial keypair with a special value in the CN field; Thawte will tell you what that value is when you follow the links to request the certificate. Java Security 2nd Edition


  7. Whirr uses SSH to communicate with machines running in the cloud, so it’s a good idea to generate an SSH keypair for exclusive use with Whirr. Hadoop:The Definitive Guide 3rd Edition


  8. When configured, you create a new public/private keypair using the gpg command: Spring Roo in Action


  9. The following command generates an SSH public/private key pair and registers the public key to Amazon EC2 under the key name gsg-keypair. Hadoop in Action


  10. If you have generated a keypair with an external tool, you can import it into OpenStack. user-guide



741. Matching text for the word vars:


  1. Vars that we will not cover here; please refer to Clojure documentation or books for more details. Programming Concurrency on the JVM


  2. OpenStack can be used by corporations, service providers, VARS, SMBs, researchers, and global data centers looking to deploy large-scale cloud deployments for private or public clouds. bk-compute-adminguide-grizzly


  3. EnvVars even has a method to get the environment variables from remote Jenkins nodes. Jenkins Continuous Integration Cookbook


  4. One telltale sign is that if code contains any vars, it is probably in an imperative style. Programming in Scala 2nd Edition


  5. We can assign the elements of a tuple into multiple vars or vals in parallel, as shown in this example: Programming Scala Tackle Multi-Core Complexity on the Java Virtual Machine


  6. The Puppet master process running inside Passenger will check the environment variables set by the SSLOptions +StdEnvVars configuration in order to authorize the Puppet agent. Pro Puppet


  7. If expr is Var, we extract the name and return the lookup of the name in the vars Map. Beginning Scala


  8. The valid keywords are: source, which specifies sourcefile information; lines, which specifies line number information; and vars, which specifies local variable debugging information. Java in a Nutshell 5th Edition


  9. In the following example, the Java class EnvVars accepts an environment variable name and prints out its value. Java 7 Recipes A Problem-Solution Approach


  10. Rather than waste the CPU by initializing them all the time, this macro exists to initialize them when the use of one of those tools is detected, or when FORCE_INIT_OF_VARS is defined. Understanding MySQL Internals



742. Matching text for the word hbck:


  1. Chapter 3, Using Administration Tools: In this chapter, we describe the usage of various administration tools such as HBase web UI, HBase Shell, HBase hbck, and others. HBase Administration Cookbook


  2. We’ll cover the hbck, hlog, and hfile commands in future sections. HBase in Action


  3. The hbase hbck tool uses this to generate missing meta table entries, for example. HBase:The Definitive Guide



743. Matching text for the word datafiles:


  1. Once you start the database and it accesses the datafiles for the first time, you cannot delete any of the data files. MongoDB The Definitive Guide 2nd Edition


  2. Avro datafiles support compression and are splittable, which is crucial for a MapReduce data input format. Hadoop:The Definitive Guide 3rd Edition


  3. It does this by keeping the old datafiles around, so that it can preserve its “view” on what the index looked like at the time it started. Elasticsearch The Definitive Guide


  4. It also has scripts for converting SSTables (the datafiles) to JSON and back. Cassandra The Definitive Guide


  5. What just happened? Firstly, we created the datafiles to be used in this example. Hadoop Beginners Guide


  6. So, if you had a directory input with two datafiles today and yesterday under it, and you specified input as your file to load, Pig will read both today and yesterday as input. Programming Pig


  7. By default, MongoDB places all of its datafiles in a single directory. The Definitive Guide to MongoDB


  8. MyISAM index files are much more complex than the datafiles. Understanding MySQL Internals



744. Matching text for the word dbms:


  1. Conceptually, such user defined functions (UDFs) can be combined with SQL queries, but the experience reported in the comparison paper indicates that UDF support is either buggy (in DBMS-X) or missing (in Vertica) MapReduce a flexible data processing tool


  2. It has DBMS-like features, is very fast, supports distributed searching, and scales well. High Performance MySQL 3rd Edition


  3. Limited by the page-level data manipulation inside a traditional DBMS engine, PAX uses a fixed page as the basic unit of data record organization. RCFile A Fast and Space-efficient Data Placement Structure in MapReduce-based Warehouse Systems


  4. It allows accessing the data without a time-consuming loading phase, which is a major impedance to database usage in analytical data processing [13], where it is often possible to run dozens of MR analyses before a DBMS is able to load the data and execute a single query. Dremel - Interactive Analysis of WebScale Datasets


  5. HadoopDB: an architectural hybrid of MapReduce and DBMS technologies for analytical workloads” Spanner - Google’s Globally-Distributed Database


  6. Hadoop and HBase make up an open source DBMS inspired by. SQL Antipatterns


  7. At the same time, distributed DBMS vendors have integrated the MapReduce execution model in their engines [13] to provide. Tenzing - A SQL Implementation On The MapReduce Framework


  8. PreparedStatements send a precompiled SQL statement to the DBMS rather than a String. Java 7 Recipes A Problem-Solution Approach


  9. The indexing system could store the repository in a DBMS and update individual documents while using transactions to maintain invariants. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications


  10. Given the exploding data problem, all but three of the above mentioned analytical database start-ups deploy their DBMS on a shared-nothing architecture (a collection of independent, possibly virtual, machines, each with local disk and local main memory, connected together on a high-speed network) HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
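
The Java 7 Recipes excerpt above notes that a PreparedStatement sends a precompiled SQL statement to the DBMS rather than a raw string. A minimal sketch; the in-memory jdbc:h2:mem URL is a placeholder and assumes an H2 (or comparable) driver on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PreparedStatementDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            // Set up a throwaway table so the query below has something to read.
            try (Statement setup = conn.createStatement()) {
                setup.executeUpdate("CREATE TABLE users (id BIGINT PRIMARY KEY, name VARCHAR(64))");
                setup.executeUpdate("INSERT INTO users VALUES (42, 'alice')");
            }
            // The SQL is sent to the DBMS precompiled and reused with bound parameters.
            try (PreparedStatement ps =
                         conn.prepareStatement("SELECT name FROM users WHERE id = ?")) {
                ps.setLong(1, 42L); // bind the parameter, no string splicing
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }
    }
}
```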



745. Matching text for the word poller:


  1. The problem is that even using roll-up schemes, a central poller can poll an individual agent only so fast, and there’s only so much polling you can do before the network traffic becomes burdensome. Monitoring with Ganglia


  2. If your provider doesn’t support IMAP IDLE, then you can use a standard IMAP inbound channel adapter with a poller: Spring in Practice


  3. For example, imagine you have a method called poll in a custom FilePoller class. Spring Integration in Action


  4. A poller for each cluster can now run asynchronously, scanning only its directory for new files to upload. Hadoop Operations


  5. The adapter uses a poller to check the directory since the filesystem is not an event-driven source. Spring Data Modern Data Access for Enterprise Java



746. Matching text for the word isbn:


  1. Its items are books, of course, and it might naturally define a book by its ISBN number (a unique product identifier) Mahout in Action


  2. The exception to this rule, of course, is if you have a separate identifier (such as a product code, ISBN, or other information) that you want to use in addition to your main document ID because it provides reference or connectivity information. Getting Started with CouchDB


  3. You can also search for the book at www.wrox.com by ISBN (the ISBN for this book is 978-1-11861193-7) to find the code. Professional Hadoop Solutions


  4. Right now, this data isn’t exactly very useful; it merely tells you the ISBN of some book. Beginning CouchDB


  5. For example, the book with ISBN 0-201-96426-0 has two authors with codes DATE and DARW. Core Java Volume II Advanced Features 9th Edition


  6. Consider a Book class having four fields: title, author, publisher, and isbn. Getting Started with Google Guava


  7. An attribution usually includes the title, author, publisher, and ISBN. Cassandra The Definitive Guide


  8. For instance, ISBN numbers may be URIs even if the book cannot be retrieved over the Internet. Java Network Programming 4th Edition


  9. For example, given our book test data, the following code retrieves the single document matching the ISBN provided: Lucene in Action 2nd Edition


  10. The function will return three properties and ISBN as an integer, a title as a string, and authors as an array of strings. Programming Hive



747. Matching text for the word cdh:


  1. Oozie code was tested on the latest version of Oozie (available, for example, as part of Cloudera CDH 4.1 distribution) Professional Hadoop Solutions


  2. Put this recipe into a file that you can pass as a configuration to the Whirr script, such as my_cdh_recipe. HBase in Action


  3. If you’re running on a CDH distribution, the scripts will run configuration-free. Hadoop in Practice


  4. Alternatively, Cloudera’s Distribution Including Apache Hadoop (CDH) contains an installation package for Sqoop alongside compatible editions of Hadoop and other tools such as Hive. Hadoop:The Definitive Guide 3rd Edition


  5. Cloudera makes the distribution available in a number of different formats: source and binary tar files, RPMs, Debian packages, VMware images, and scripts for running CDH in the cloud. HBase:The Definitive Guide


  6. The Enterprise Edition is actually quite different from the Basic Edition; it is more of a layer atop Hadoop, and in fact, can be used with other distributions such as CDH or HDP. Hadoop Beginners Guide


  7. Both Apache Hadoop and CDH come with support for FUSE HDFS which, as you may have guessed, allows you to mount the Hadoop distributed filesystem as you would any other device. Hadoop Operations



748. Matching text for the word dih:


  1. Data Import Handler (DIH)  provides a mechanism for integrating different data sources with Apache Solr for indexing. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  2. Another way to ask Solr to pull data is to use the DataImportHandler (DIH) which can pull data from a database and other sources. Solr 3 Enterprise Search Server


  3. We cover Solr’s data import handler (DIH) in chapter 12 Solr in action


  4. One more thing before the indexing – you should copy an appropriate JDBC driver to the lib directory of your Solr installation or the dih directory we created before. Apache Solr 4 Cookbook



749、单词 dsa 的匹配文本:


  1. The following command generates a dsa key with an empty password in the file ~/.ssh/id_dsa: Pro Hadoop


  2. If we need to generate a DSA key, the security provider returns to us a class associated with the Sun provider class since the Sun provider, at position 1, is the first class that says that it can perform DSA key generation. Java Security 2nd Edition


  3. The rsa and dsa keys from both the mail host and the web host are now being exported and stored in the configuration database. Pro Puppet


  4. For example, here is a matching pair of public and private Digital Signature Algorithm (DSA) keys. Core Java Volume II Advanced Features 9th Edition


  5. In the default Java implementation shipped from Sun, the only supported algorithm is “DSA”; this is the default if this option is omitted. Java in a Nutshell 5th Edition



750、单词 david 的匹配文本:


  1. Of course, the real story lies in the people behind the projects—people such as Berkeley Professor David Culler, who had the vision of building powerful clusters out of commodity hardware long before it was common industry practice. Monitoring with Ganglia


  2. The first person I showed the draft to was David James, the organizer of the Washington DC–area Groovy user group. Gradle in Action


  3. For example, a security policy file might contain a statement like the following to allow permission to read the private KerberosKey credentials of a KerberosPrincipal named “david” Java in a Nutshell 5th Edition


  4. David Gruzman is a Hadoop and big data architect with more than 18 years of hands-on experience, specializing in the design and implementation of scalable high-performance distributed systems. Hadoop Beginners Guide


  5. Jeff Bleiel, our development editor, for his work liaising with Manning—without Jeff’s input, the book would not be what it is today; Gary Tully for his tireless technical reviews of the entire book; the Manning staff for their arduous work on the book; Filip Hanik for his assistance with Tomcat; Jan Bartel and Greg Wilkins for their assistance with Jetty; David Jencks and Kevan Miller for their assistance with Geronimo; and Jaikiran Pai for his assistance with JBoss. ActiveMQ in Action


  6. We thank the anonymous reviewers, David Nagle, and our shepherd Brad Calder, for their feedback on this paper. Bigtable - A Distributed Storage System for Structured Data


  7. David Ha and Rumit Patel, at M6D, contributed the case study and code on the Rank function. Programming Hive


  8. Very special thanks also go to David Holmes for, among many contributions, helping to develop and extend material for tutorials that in turn became included in the second edition. Concurrent Programming in Java Design Principles and Pattern


  9. For more than 30 years, David's expertise has been in multiplatform network integration and systems programming. Java 7 Recipes A Problem-Solution Approach


  10. The continued growth in software engineering maturity today can be seen in the DevOps movement, climaxing with the publication last year of Jez Humble and David Farley’s book Continuous Delivery (AddisonWesley) Building and Testing with Gradle



751、单词 deserialized 的匹配文本:


  1. We discuss serialization in Chapter 1 of Volume II.) To display the applet, the object is deserialized from the file to return it to its previous state. Core Java Volume I Fundamentals 9th Edition


  2. Values that are objects are serialized and then deserialized by each MapReduce task when tasks start. Pro Hadoop


  3. Actually, according to lazy decompression, a column will not be decompressed until one of its fields is being deserialized. RCFile A Fast and Space-efficient Data Placement Structure in MapReduce-based Warehouse Systems


  4. This Bloom filter can later be deserialized from HDFS just as easily as it was written. MapReduce Design Patterns


  5. The tables created in the manner described above are serialized and deserialized using default serializers and deserializers already present in Hive. Hive A Petabyte Scale Data Warehouse Using Hadoop


  6. What’s more, the deserialization procedure creates a new instance for each object deserialized from the stream. Hadoop:The Definitive Guide 3rd Edition


  7. If the readResolve method is defined, it is called after the object is deserialized. Core Java Volume II Advanced Features 9th Edition


  8. Otherwise, each time a serialized instance is deserialized, a new instance will be created, leading, in the case of our example, to spurious Elvis sightings. Effective Java 2nd Edition


  9. One of the most important features of the java.io package is the ability to serialize objects: to convert an object into a stream of bytes that can later be deserialized back into a copy of the original object. Java in a Nutshell 5th Edition


  10. A class that has been saved with a prior SUID cannot be deserialized on the new SUID. Java 7 Recipes A Problem-Solution Approach
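
As a brief illustration of the serialize/deserialize round trip described in the sentences above, here is a minimal Java sketch. The Book class and the file name books.ser are hypothetical examples, not taken from any of the quoted sources.

    import java.io.*;

    public class SerializationDemo {
        // Hypothetical example class; Serializable marks it as eligible for Java serialization.
        static class Book implements Serializable {
            private static final long serialVersionUID = 1L;
            String title;
            Book(String title) { this.title = title; }
        }

        public static void main(String[] args) throws IOException, ClassNotFoundException {
            // Serialize: convert the object into a stream of bytes.
            try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("books.ser"))) {
                out.writeObject(new Book("Effective Java"));
            }
            // Deserialize: rebuild a new, equivalent instance from those bytes.
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("books.ser"))) {
                Book copy = (Book) in.readObject();
                System.out.println(copy.title);
            }
        }
    }

Note that, as several of the quotes point out, each deserialization creates a new instance rather than returning the original object.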



752、单词 linearly 的匹配文本:


  1. The results are not surprising: in the single-path baseline case, the scheduler busyness is low as long as scheduling is quick, but scales linearly with increased tjob (Figure 6a) Omega - flexible, scalable schedulers for large compute clusters


  2. The way the B-tree storage works means that if you don’t actually reduce your data in the reduce function, you end up having CouchDB copy huge amounts of data around that grow linearly, if not faster with the number of rows in your view. CouchDB The Definitive Guide


  3. Retrieval time for columnar nested data grows linearly with the number of fields. Dremel - Interactive Analysis of WebScale Datasets


  4. Snapshot reads can execute at any up-to-date replicas, so their throughput increases almost linearly with the number of replicas. Spanner - Google’s Globally-Distributed Database


  5. Ceph’s data performance scales nearly linearly in the number of OSDs. Ceph A Scalable, High-Performance Distributed File System


  6. The most important aspect of our review is to determine whether use of any of Chubby’s resources (RPC rate, disc space, number of files) grows linearly (or worse) with number of users or amount of data processed by the project. Chubby - The lock service for loosely-coupled distributed systems


  7. Scalable—Hadoop scales linearly to handle larger data by adding more nodes to the cluster. Hadoop in Action


  8. It’s fault-tolerant and scales linearly, and it’s intended for use in web applications. Cassandra The Definitive Guide


  9. This can be strange at first because when we think about processing a file, we tend to think of it occurring linearly from byte zero through to the end of the file. Hadoop Operations


  10. We chose an architecture that scales linearly over many orders of magnitude on commodity machines, but we’ve seen that this costs a significant 30fold overhead compared to traditional database architectures. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



753、单词 lvm 的匹配文本:


  1. This in turn can increase the time it takes to run FLUSH TABLES WITH READ LOCK for a logical volume manager (LVM) snapshot or other backup operation. High Performance MySQL 3rd Edition


  2. As of Grizzly, the following block storage drivers are compatible: iSCSI-based, LVM, and Ceph. bk-compute-adminguide-grizzly


  3. This is a twostep process: creating the volume group with Linux Volume Manager (LVM) and starting iSCSI services. Deploying OpenStack


  4. From the SAN to the cloud controller, we have an active iscsi session (used for the "cindervolumes" LVM's VG) bk-admin-guide-cloud-latest


  5. Normally, this would be an LVM Logical Volume specifically set aside for this purpose. high-availability-guide


  6. As an aside, the Linux Logical Volume Manager (LVM) should never be used for Hadoop data disks. Hadoop Operations


  7. Users typically manage a RAID cluster using Linux’s logical volume manager, LVM, with a RAID level of 10 MongoDB in Action


  8. You can partition this large array with the boot, root, swap, and LVM areas. OpenStack Operations Guide



754、单词 mds 的匹配文本:


  1. Ceph [9] has a cluster of namespace servers (MDS) and uses a dynamic subtree partitioning algorithm in order to map the namespace tree to MDSes evenly. HDFS scalability the limits to growth


  2. A metadata server (MDS), also called master server, which manages the file repartition, their access and the namespace. bk-compute-adminguide-grizzly


  3. Clients typically interact with a metadata server (MDS) to perform metadata operations (open, rename), while communicating directly with OSDs to perform file I/O (reads and writes), significantly improving overall scalability. Ceph A Scalable, High-Performance Distributed File System


  4. Ceph [17] has a cluster of namespace servers (MDS) and uses a dynamic subtree partitioning algorithm in order to map the namespace tree to MDSs evenly. The Hadoop Distributed File System


  5. Each lease grants active monitors permission to distribute copies of the cluster map to OSDs or clients who request it. (This is implemented as a generic service and used to manage a variety of other global data structures in Ceph, including the MDS cluster map and state for coordinating client access to the system.) RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.



755、单词 serializer 的匹配文本:


  1. Event serializers An event serializer is the mechanism by which a Flume event is converted into another format for output. Apache Flume Distributed Log Collection for Hadoop


  2. The SGD models share a helper class called ModelSerializer that handles the serialization and deserialization of all SGD models. Mahout in Action


  3. If you just call toXML( ) you’ll get everything run together, so the Serializer is a convenient tool. Thinking in Java 4th Edition


  4. It allows you to use a custom serializer and deserializer (known as SerDe) for different file formats. Professional Hadoop Solutions


  5. The role of a Serializer is to take the encoding burden away from the user. Cassandra High Performance Cookbook


  6. Hadoop comes with four serializers: Avro, Java, Tether (for binary data contained within a TetherData class), and Writable10 (the default serializer) Hadoop in Practice


  7. A Serialization defines a mapping from types to Serializer instances (for turning an object into a byte stream) and Deserializer instances (for turning a byte stream into an object) Hadoop:The Definitive Guide 3rd Edition


  8. ProducerConfig: The ProducerConfig class encapsulates the values required for establishing the connection with brokers such as the broker list, message partition class, serializer class for the message, and partition key. Apache Kafka


  9. The choice of the serializer depends on several factors including ease of use, extensibility, and, of course, performance. Java Performance


  10. You can check this assumption using the Ruby driver’s BSON serializer: MongoDB in Action



756、单词 rdd 的匹配文本:


  1. This allows them to efficiently provide fault tolerance by logging the transformations used to build a dataset (its lineage) rather than the actual data. If a partition of an RDD is lost, the RDD has enough information about how it was derived from other RDDs to recompute. Resilient Distributed Datasets - A Fault-Tolerant Abstraction for In-Memory Cluster Computing


  2. The central idea of resilient distributed datasets (RDD) is leveraged to achieve significant performance improvements over classic MapReduce for this class of applications. Apache Hadoop YARN Yet Another Resource Negotiator



757、单词 sms 的匹配文本:


  1. Find a particular metric, which is especially useful if a metric is rare, such as outgoing_sms_queue. Monitoring with Ganglia


  2. Notifications can include emails, IMs, IRC messages, SMS, and many more. Gradle in Action


  3. Other alerting methods such as SMS messages and paging, are also available. Hadoop Operations and Cluster Management Cookbook


  4. Facebook Messaging —  This is a unified system that combines all communication capabilities (including Facebook messages, e-mail, chat, and SMS) Professional Hadoop Solutions


  5. With the click of a button on a web page or browser plug-in, ShareThis allows users to seamlessly access their contacts and networks from anywhere online and share the content through email, IM, Facebook, Digg, mobile SMS, etc., without ever leaving the current page. Hadoop:The Definitive Guide 3rd Edition


  6. Obviously, this is much shorter, and easier to copy into an email or send through a restricted medium, like Twitter or SMS. HBase:The Definitive Guide


  7. Facebook Messages gives every Facebook user a facebook.com email address, integrates the display of all e-mail, SMS and chat messages between a pair or group of users, has strong controls over who users receive messages from, and is the foundation of a Social Inbox. Apache Hadoop Goes Realtime at Facebook


  8. A branch company of China Mobile can have more than 20 million subscribers, leading to more than SMS every day. Hadoop in Action


  9. If a Short Message Service (SMS) message is received, it will be displayed within the top menu bar while the application is running. Java 7 Recipes A Problem-Solution Approach


  10. Figure 2: (a) MapReduce job generated by Hive (b) MapReduce job generated by SMS assuming sales is partitioned by YEAR(saleDate) HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



758、单词 spdy 的匹配文本:


  1. In turn, Netty has benefited from interaction with these projects, enhancing both its scope and flexibility through implementations of protocols such as FTP, SMTP, HTTP, WebSocket and SPDY as well as others, both binary and text-based. Netty in Action MEAP V08


  2. Though modern technologies like WebSockets and SPDY can offer incremental data production, or even server-side push messages, the protocols involved are still in the process of being finalized, and client-side support in many programming languages is incomplete. Redis in Action



759、单词 autowiring 的匹配文本:


  1. Autowiring Your Bean In all the examples so far, we have had to define explicitly, via the configuration file, how the individual beans are wired together. Pro Spring 3


  2. This allows you to inject dependencies directly in the test using, for, example autowiring. Pro Spring MVC


  3. We use autowiring and component scanning often in this book because they’re much more compact (although less explicit). Spring in Practice


  4. In Example 6-30, we access the Converter instances by enabling component scanning and autowiring them into the configuration class to eventually wrap them into the CustomConversions instance. Spring Data Modern Data Access for Enterprise Java


  5. Working together, component scanning and autowiring are a powerful force and can help keep explicit configuration to a minimum. Spring in Action 4th Edition
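
To make the idea in the sentences above concrete, here is a minimal Java sketch of Spring autowiring by type, assuming spring-context is on the classpath; the Car and Engine classes are hypothetical examples rather than anything from the quoted books.

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.annotation.AnnotationConfigApplicationContext;

    public class AutowiringDemo {

        static class Engine { }

        static class Car {
            final Engine engine;

            @Autowired   // ask Spring to inject the Engine bean instead of wiring it explicitly
            Car(Engine engine) { this.engine = engine; }
        }

        public static void main(String[] args) {
            // Register both classes as beans; Car's Engine dependency is autowired by type.
            AnnotationConfigApplicationContext ctx =
                    new AnnotationConfigApplicationContext(Engine.class, Car.class);
            System.out.println(ctx.getBean(Car.class).engine != null);   // prints true
            ctx.close();
        }
    }

The same wiring can be driven by component scanning instead of explicit registration, which is the combination several of the quotes describe.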



760、单词 quicksort 的匹配文本:


  1. We then recursively call the Quicksort method on each partition. Scala in Depth


  2. The sorting algorithm that’s used in the Java standard library is designed to be optimal for the particular type you’re sorting—a Quicksort for primitives, and a stable merge sort for objects. Thinking in Java 4th Edition


  3. If the values to be sorted will fit into the sort buffer, MySQL can perform the sort entirely in memory with a quicksort. High Performance MySQL 3rd Edition


  4. As its name implies, quicksort is a fast sorting algorithm in practice and is especially useful in C++, or for sorting primitive types in Java. Data Structures and Algorithm Analysis in Java 3rd Edition


  5. The quicksort function sorts a list by turning it into a mutable array, sorting the array in place using the well-known Quicksort algorithm, and then turning the array back into a list. Functional Programming in Scala


  6. Imagine a SortStrategy interface with a single sort operation and different implementations for Quicksort, Mergesort, and so on. Cassandra The Definitive Guide


  7. For this, I use a standard quicksort algorithm (the quicksort implementation can be found in Section 11.9) Java Performance Tuning 2nd Edition


  8. For example, the following function implements the quicksort sorting algorithm of Tony Hoare. Steps in Scala An Introduction to Object-Functional Programming


  9. Familiar sequential examples include quicksort, mergesort, and many data structure, matrix, and image processing algorithms. Concurrent Programming in Java Design Principles and Pattern



761、单词 rsync 的匹配文本:


  1. It reads every region server's hostname from the file, and then invokes the rsync command to sync the HBase installation directory from master node to that region server. HBase Administration Cookbook


  2. The prospect of firing up rsync or kludging together some Perl is altogether different when 20,000 hosts are involved. Monitoring with Ganglia


  3. All of the core configuration files can be identical, which allows the use of the Unix rsync command to distribute the configuration files. Pro Hadoop


  4. Much like rsync synchronizes two directories locally or over a network, replication synchronizes two databases locally or remotely. CouchDB The Definitive Guide


  5. Several recipes in this chapter will use password-less access to the OpenSSL tools such as ssh, scp, and rsync. Cassandra High Performance Cookbook


  6. It was a .NET shop, and we had no operational help, so we learned to combine bash with rsync and managed the cluster ourselves. HBase in Action


  7. You can use rsync to copy the Hadoop configuration from another node. Hadoop MapReduce Cookbook


  8. You need a mechanism to automate the process of copying files of any format into HDFS, similar to the Linux tool rsync. Hadoop in Practice


  9. If you are familiar with rsync, you can think of the -overwrite or -update options as adding an implicit trailing slash to the source.) If you are unsure of the effect of a distcp operation, it is a good idea to try it out on a small test directory tree first. Hadoop:The Definitive Guide 3rd Edition


  10. Next, using a secure copy utility like scp or rsync tunnelled over ssh, copy the keytabs to the proper machines and place them in the Hadoop configuration directory. Hadoop Operations



762、单词 bitwise 的匹配文本:


  1. Note that on primitives, the identity comparison does bitwise comparison but on objects it does reference equality. Java 8 in Action Lambdas, Streams and Functional-style Programming


  2. Because M is a power of 2, we can use bitwise operators. Data Structures and Algorithm Analysis in Java 3rd Edition


  3. The unions are performed with the bitwise or method exposed by the BloomFilter class. Hadoop in Practice


  4. The key’s hash code is turned into a nonnegative integer by bitwise ANDing it with the largest integer value. Hadoop:The Definitive Guide 3rd Edition


  5. BitComparator Performs a bitwise comparison, providing a BitwiseOp class with AND, OR, and XOR operators. HBase:The Definitive Guide


  6. This representation lets you use the bitwise OR operation to combine several constants into a set, known as a bit field: Effective Java 2nd Edition


  7. In a bitwise AND operation, each bit of the operands are Anded with each other. Java 7 New Features Cookbook


  8. The arithmetic, increment and decrement, bitwise, and shift operators return a double if at least one of the operands is a double. Java in a Nutshell 5th Edition


  9. Therefore, if a channel needs to register for multiple operations in the same selector (e.g., for both reading and writing on a socket), combine the constants with the bitwise or operator (|) when registering: Java Network Programming 4th Edition


  10. StrictMath provides bitwise consistency across platforms; earlier versions of Math used platform-specific native functions that were not identical across all platforms. Java Performance Tuning 2nd Edition
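
A small Java example of the bitwise operators mentioned above, including the "combine constants with |" bit-field idiom; the flag names READ, WRITE, and EXEC are hypothetical.

    public class BitwiseDemo {
        // Hypothetical flag constants, each occupying its own bit.
        static final int READ  = 1 << 0;   // 0b001
        static final int WRITE = 1 << 1;   // 0b010
        static final int EXEC  = 1 << 2;   // 0b100

        public static void main(String[] args) {
            int perms = READ | WRITE;                 // combine flags with bitwise OR
            System.out.println((perms & READ) != 0);  // test a flag with bitwise AND -> true
            System.out.println((perms & EXEC) != 0);  // -> false
            System.out.println(Integer.toBinaryString(perms ^ EXEC)); // XOR sets the missing bit -> 111
            System.out.println(6 & 3);                // 0b110 & 0b011 = 0b010 -> 2
        }
    }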



763、单词 sstable 的匹配文本:


  1. Writes are batched in memory and periodically written out to structures known as SSTable. NoSQL Distilled


  2. When thresholds are reached, a Memtable is flushed to disk in a sorted format called an SSTable. Cassandra High Performance Cookbook


  3. They are based on Hadoop’s TFile class, and mimic the SSTable format used in Google’s Bigtable architecture. HBase:The Definitive Guide


  4. It might be fixed in a future release, but the change is pending an update to the underlying storage file (the SSTable) Cassandra The Definitive Guide


  5. This is useful for Bigtable, ColumnIO, RecordIO and SSTable files in protocol buffer format. Tenzing - A SQL Implementation On The MapReduce Framework


  6. The Google SSTable file format is used internally to store Bigtable data. Bigtable - A Distributed Storage System for Structured Data


  7. In either case, Bigtable must read the entire SSTable block from the file system and decompress it. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



764、单词 ndb 的匹配文本:


  1. MySQL AB acquired the NDB database from Sony Ericsson in 2003 and built the NDB Cluster storage engine as an interface between the SQL used in MySQL and the native NDB protocol. High Performance MySQL 3rd Edition


  2. Note that the MEMORY storage engine used to be called HEAP, and some of the file and directory names in the source tree still reflect the earlier name.) However, the list grew rapidly with the addition of BerkeleyDB, MERGE, InnoDB, and more recently, NDB for the MySQL Cluster. Understanding MySQL Internals



765、单词 mockito 的匹配文本:


  1. To help mock the behavior of the service layer, we will use Mockito (http://code.google.com/p/mockito), which is a popular mocking framework. Pro Spring 3


  2. Throughout this section we will make you more familiar how you can use Mockito for all your mocking needs. Pro Spring MVC


  3. At C you create the mock object using the Mockito framework. Spring in Practice


  4. It uses the object-mocking library, Mockito, to mock most of the essential Hadoop objects so the user only needs to focus on the map and reduce logic. Hadoop RealWorld Solutions Cookbook


  5. In this section, we introduce how to write basic unit tests with JUnit, and then we look at the Mockito mock framework. Spring Batch in Action


  6. In addition to Hamcrest matchers, Spring Integration tests can make use of Mockito extensions. Spring Integration in Action


  7. We cover unit testing and Mockito, mocking the persistence tier, integration testing in-container against entities, repositories and services, and how to write functional, black box tests with Selenium, both using Roo’s support for HTML table-based tests as well as using the JUnit API. Spring Roo in Action


  8. Here you use a mock object framework known as Mockito to create a mock implementation of the Quest interface. Spring in Action 4th Edition


  9. His current topics of research center around project automation, including: build tools (Gradle, Leiningen, Maven, Ant), distributed version control (Git, Mercurial), testing frameworks (Geb, Spock, JUnit, TestNG, Mockito), continuous integration (Jenkins, Hudson, Bamboo) and code quality metrics (Sonar, CodeNarc, PMD) Building and Testing with Gradle



766、单词 preconfigured 的匹配文本:


  1. An AMI is a preconfigured operating system and software, which is used to create a virtual machine within EC2 HBase Administration Cookbook


  2. It is preconfigured to serve the correct MIME type for JNLP files. Core Java Volume I Fundamentals 9th Edition


  3. Applying the plugin to your project added preconfigured tasks and a standardized project structure wrapped by an opinionated framework. Gradle in Action


  4. When a slave node was assigned a task by the JobTracker, its TaskTracker will fork a JVM for that task, allocating a preconfigured amount of computing resources. Hadoop Operations and Cluster Management Cookbook


  5. Gradle provides some preconfigured repositories, but it is also very easy to use a custom Maven or Ivy repository. Gradle Effective Implementation Guide


  6. To make this install even simpler, you can use plenty of preconfigured public Amazon Machine Image (AMI) images containing a preinstalled instance of Hadoop. Professional Hadoop Solutions


  7. You can download the latest Whirr release from the aforementioned site and find preconfigured configuration files in the recipes directory. HBase:The Definitive Guide


  8. More demanding users can create their own images, but most users are well served by one of the many preconfigured ones. Hadoop in Action


  9. Also, JNDI is widely used for the storage of preconfigured JMS administrative objects. ActiveMQ in Action


  10. Most of the preconfigured virtual machines (VMs) available are only designed for VMWare, but if you prefer VirtualBox you may find instructions on the Web that explain how to import a particular VM into VirtualBox. Programming Hive



767、单词 mrunit 的匹配文本:


  1. He’s contributed to several projects in the open source community and is a committer and PMC member on Sqoop, Flume, and MRUnit. Apache Sqoop Cookbook


  2. MRUnit is a unit testing framework designed specifically for Hadoop. Professional Hadoop Solutions


  3. There are examples that use MRUnit and local mode for ease of testing. Hadoop RealWorld Solutions Cookbook


  4. MRUnit is a test framework you can use to unit test MapReduce code. Hadoop in Practice


  5. For running unit tests we use junit, as well as a couple of helper libraries: hamcrest-all provides useful matchers for writing test assertions, and mrunit is used for writing MapReduce tests. Hadoop:The Definitive Guide 3rd Edition


  6. MRUnit The final Apache Hadoop project we will mention also highlights the wide range of what is available. Hadoop Beginners Guide



768、单词 syntactically 的匹配文本:


  1. Suppose, for example, that a user asks to connect to a URL that is syntactically wrong. Core Java Volume I Fundamentals 9th Edition


  2. Pig Ordering in Pig is syntactically pretty easy, but it’s a very expensive operation. MapReduce Design Patterns


  3. When a saw job request is received by the system, a Sawzall processor is invoked to verify that the program is syntactically valid. Interpreting the Data Parallel Analysis with Sawzall


  4. When the Pig Latin interpreter sees the first line containing the LOAD statement, it confirms that it is syntactically and semantically correct, and adds it to the logical plan, but it does not load the data from the file (or even check whether the file exists) Hadoop:The Definitive Guide 3rd Edition


  5. Update model values Reaching this phase means that the request values are syntactically valid. Java EE 7 Essentials


  6. Syntactically, in the FROM clause you add the JOIN keyword between the tables and then specify the join columns after the ON keyword. Hadoop in Action


  7. Syntactically, the only difference between static and nonstatic member classes is that static member classes have the modifier static in their declarations. Effective Java 2nd Edition


  8. Syntactically, annotations are used like modifiers and can be applied to the declarations of packages, types, constructors, methods, fields, parameters, and local variables. Java in a Nutshell 5th Edition


  9. As long as the URI is syntactically correct, Java does not need to understand its protocol in order to create a representative URI object. Java Network Programming 4th Edition


  10. Some concurrent OO programming languages (see Further Readings) syntactically require separation between non-public methods defining functionality and public methods defining concurrency control policies; that is, they mandate the kind of separation seen in the template method version of class ReadWrite. Concurrent Programming in Java Design Principles and Pattern



769、单词 parameterize 的匹配文本:


  1. We can leverage this column to parameterize all requests containing account numbers to pick the right accounts for each user that is logged in. Performance Testing With JMeter 2.9


  2. We will learn how to work with files and directories, apply logging to our build scripts, and use properties to parameterize our build scripts. Gradle Effective Implementation Guide


  3. What you could do is add a parameter to your method to parameterize the color and be more flexible to such changes: Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. First, you parameterize the input and output types of the mapper, and the input and output types of the reducer. Professional Hadoop Solutions


  5. It’s easy enough to parameterize imperative task activity with a System property, an environment variable, or even a variable in the build script itself, but those solutions also founder on complexity considerations. Gradle Beyond The Basics


  6. The example lists all the different calls you can use to parameterize the delete() operation. HBase:The Definitive Guide


  7. It is generally not too difficult to parameterize your collection declarations and make use of the generic types and methods provided by the JDK. Effective Java 2nd Edition


  8. As we saw earlier, Flume''s ability to parameterize the location to which it writes on HDFS is a great tool to aid this problem. Hadoop Beginners Guide


  9. In this example, as your task is a Runnable object that is not parameterized, you have to parameterize them with the ? symbol as a parameter. Java 7 Concurrency Cookbook


  10. All the type information that you specify when you parameterize a generic type gets “erased” by the compiler when the code is compiled down to byte code. Java 7 Recipes A Problem-Solution Approach
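
In the Java-generics sense used by several quotes above, parameterizing a type or method lets the compiler check element types and removes casts. A minimal, hypothetical sketch:

    import java.util.ArrayList;
    import java.util.List;

    public class ParameterizeDemo {
        // A generic method parameterized on T: the element type is supplied by the caller.
        static <T> T first(List<T> items) {
            return items.get(0);
        }

        public static void main(String[] args) {
            List<String> titles = new ArrayList<>();   // the list is parameterized with String
            titles.add("Hadoop: The Definitive Guide");
            String t = first(titles);                  // no cast needed; the compiler checks the type
            System.out.println(t);
        }
    }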



770、单词 jpanel 的匹配文本:


  1. Note Instead of extending JComponent, some programmers prefer to extend the JPanel class. Core Java Volume I Fundamentals 9th Edition


  2. This is seen in makeBPanel( ), which creates a button group in a JPanel. Thinking in Java 4th Edition


  3. To add multiple components into the same tab, first pack them up in a container, such as a JPanel. Core Java Volume II Advanced Features 9th Edition


  4. Within the JPanel, we overrode the paintComponent method and created a new GradientPaint object. Java 7 New Features Cookbook


  5. In Swing, a common container that is often used is the JPanel class, which is not only a container but also a component (JComponent) Java 7 Recipes A Problem-Solution Approach



771、单词 keytool 的匹配文本:


  1. As the user that Tomcat will run under, create a self-signed certificate via: keytool -genkey -alias tomcat -keyalg RSA. Jenkins Continuous Integration Cookbook


  2. To generate a self-signed certificate we can use a Java tool called keytool. Pro Spring MVC


  3. Creating a client certificate key pair The self-signed client certificate is created in the same way as the self-signed server certificate is created, by generating a key pair using the keytool command. Spring Security 3.1


  4. If you now execute mvn keytool:genkey followed by mvn jetty:deploy-war from the directory where you have this new pom.xml file, the default demo application of the CAS server will execute. Pro Spring Security


  5. Administration of the keystore is handled by the keytool utility (see Chapter 10) Java Security 2nd Edition


  6. The JDK comes with the keytool program, which is a command-line tool to generate and manage a set of certificates. Core Java Volume II Advanced Features 9th Edition


  7. This is accomplished from the command prompt by executing the following command: keytool -genkey -keystore mySrvKeystore -keyalg RSA. Java 7 New Features Cookbook


  8. Assuming that a keystore exists on your system (see the keytool documentation in Chapter 8), you can get one with code like the following: Java in a Nutshell 5th Edition


  9. It was built with the keytool program that’s bundled with the JDK like this: Java Network Programming 4th Edition


  10. For that purpose the keytool will be used—the command-line tool for managing keystores that’s distributed with Java. ActiveMQ in Action
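
Once a keystore has been generated with keytool (for example with the -genkey commands quoted above), it can be read from Java. A minimal sketch; the file name mySrvKeystore, the password, and the "JKS" store type are assumptions.

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.util.Collections;

    public class KeystoreDemo {
        public static void main(String[] args) throws Exception {
            // Assumed keystore created beforehand, e.g. with:
            //   keytool -genkey -alias tomcat -keyalg RSA -keystore mySrvKeystore
            char[] password = "changeit".toCharArray();              // assumed password
            KeyStore ks = KeyStore.getInstance("JKS");               // assumed store type
            try (FileInputStream in = new FileInputStream("mySrvKeystore")) {
                ks.load(in, password);
            }
            // List the certificate/key entries stored in it.
            for (String alias : Collections.list(ks.aliases())) {
                System.out.println(alias + " -> " + ks.getCertificate(alias));
            }
        }
    }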



772、单词 jra 的匹配文本:


  1. The next four blocks of text define protection domains: code that is loaded from O'Reilly's web site has permission to read files in /tmp and to start a print job; code that is signed by sdo and loaded from Sun's web site has permission to do anything it wants to; code that is signed by jra is able to operate on any nonprivileged socket; and all code is allowed to read the java.vendor system property. Java Security 2nd Edition


  2. The recorded profile can later be analyzed offline, using the JRA Mission Control plugin. Oracle JRockit The Definitive Guide



773、单词 reuters 的匹配文本:


  1. The data was originally collected and labeled by Carnegie Group and Reuters in the course of developing the CONSTRUE text categorization system. Mahout in Action


  2. Applying AJAX Solr on Reuters' data Reuters is actually a dataset on which we are going to apply the AJAX Solr mechanism. Web Crawling and Data Mining with Apache Nutch


  3. Reuters Financial Software Thomson Reuters is a well-known provider of real-time financial data, news reporting, business data, and related software products to help customers make best use of Reuters data. Monitoring with Ganglia


  4. He is a lead developer at Thomson Reuters where he’s responsible for a number of activities, some of which are directly related to Maven, Groovy, and build tools such as Artifactory and Jenkins. Jenkins The Definitive Guide


  5. That’s why mega news outfit Reuters purchased Teknekron in 1994 RabbitMQ in Action


  6. The AJAX Solr project homepage is at http://evolvingweb.github.com/ajaxsolr/ and provides a great demo of searching Reuters business news wire results: Solr 3 Enterprise Search Server


  7. Because K-means works with vectorized data, if you want to use K-means with text data (such as clustering on the Reuters news collection) you need to vectorize that data prior to using it with Mahout. Hadoop in Practice


  8. In the benchmark framework there are several built-in document sources we could choose from, including the Reuter's corpus (ReutersDocMaker), Wikipedia articles (EnwikiDocMaker) and a simple document source that recursively finds all *.txt files under a directory (DirDocMaker) Lucene in Action 2nd Edition



774、单词 github 的匹配文本:


  1. If it is not available, you can get it from the github repository. Monitoring with Ganglia


  2. If you prefer, a TAR file is available as well: http://github.com/couchapp/couchapp/tarball/master. CouchDB The Definitive Guide


  3. Testing frameworks like Jasmine (http://pivotal.github.io/jasmine/) let you write test code in JavaScript. Gradle in Action


  4. You can find a majority of his contributions via http://github.com/davisp or on his web site at http://www.davispj.com. Beginning CouchDB


  5. As of this writing, there are at least two projects up on github that add secondary indices to HBase. Hadoop:The Definitive Guide 3rd Edition


  6. Building the Examples The examples you will see throughout this book can be found in full detail in the publicly available GitHub repository at http://github.com/larsgeorge/hbase-book. HBase:The Definitive Guide


  7. Quick Start If you’re looking to set up a cluster to play around with as fast as possible, you can set one up in a minute or two using the mongo-snippets repository on Github. Scaling MongoDB


  8. There’s a good help section on how to get set up at http://help.github .com, and the site http://gitref.org offers a really good reference for beginners. Cassandra The Definitive Guide


  9. To download the driver, go to the github website (http://github.com) The Definitive Guide to MongoDB


  10. MacPorts (http://www.macports.org) and Homebrew (http://mxcl.github.com/homebrew/) are two package managers for Mac OS X known to maintain up-to-date versions of MongoDB. MongoDB in Action



775、单词 nonstatic 的匹配文本:


  1. The Ganglia chart in Figure 9-7 trends overall request rates and types to a web server cluster serving nonstatic content for the Tagged.com site. Monitoring with Ganglia


  2. There are two types of data in the real world: static and nonstatic data. Hadoop Operations and Cluster Management Cookbook


  3. From the joinpoint perspective, Spring AOP only supports pointcuts matching on the execution of public nonstatic methods. Pro Spring 3


  4. For complex synchronization that involves both static and nonstatic methods, Java Threads 3rd Edition


  5. This method saves the class of the object, the signature of the class, and the values of any nonstatic, nontransient fields of the class and its superclasses. Core Java Volume II Advanced Features 9th Edition


  6. There are four kinds of nested classes: static member classes, nonstatic member classes, anonymous classes, and local classes. Effective Java 2nd Edition


  7. Nonstatic member classes may be defined within other classes or enumerated types and are analogous to instance methods or fields. Java in a Nutshell 5th Edition


  8. Although nonstatic, these methods do set and get a static field that determines the default behavior for all instances of the URLConnection class created after the change. Java Network Programming 4th Edition


  9. In some VMs, synchronizing static methods takes significantly longer than synchronizing nonstatic methods, suggesting that code is global in these VMs for the static synchronizations. Java Performance Tuning 2nd Edition


  10. Only one execution thread will access one of the static methods declared with the synchronized keyword, but another thread can access other nonstatic methods of an object of that class. Java 7 Concurrency Cookbook
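
To illustrate the static/nonstatic distinction drawn in the quotes above, here is a minimal, hypothetical Java example: a nonstatic member class carries a reference to its enclosing instance, while a static member class does not.

    public class Outer {
        private final String name = "outer";

        // Static member class: no reference to an enclosing instance.
        static class StaticMember {
            String describe() { return "no enclosing instance"; }
        }

        // Nonstatic member class: each instance is tied to an Outer instance
        // and can read its fields directly.
        class NonstaticMember {
            String describe() { return "enclosed by " + name; }
        }

        public static void main(String[] args) {
            System.out.println(new StaticMember().describe());
            Outer outer = new Outer();
            System.out.println(outer.new NonstaticMember().describe()); // needs an Outer instance
        }
    }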



776、单词 classifiers 的匹配文本:


  1. Step-by-step examples give you experience with each step to help you build classifiers and fine-tune them for best performance. Mahout in Action


  2. Not all classes in the hierarchy should be instantiated; in this case Filter and Belt are just classifiers so you do not create an instance of either one, but only of their subclasses. Thinking in Java 4th Edition


  3. Version of a dependency, usually consisting of classifiers like minor and major version. Gradle in Action


  4. In a Maven repository, we can use classifiers for a dependency. Gradle Effective Implementation Guide


  5. There are a number of ways in which classifiers can be trained and validated. Hadoop RealWorld Solutions Cookbook


  6. Classifiers have widespread applications such as e-mail spam filtering, finding most promising products, selecting customers for closer interactions, and taking decisions in machine learning situations, and so on. Hadoop MapReduce Cookbook


  7. Tip  External node classifiers are also one of the means by which tools like the Puppet Dashboard and Foreman can be integrated into Puppet and provide node information, as you will see in Chapter 7 Pro Puppet


  8. In this chapter we’ll look at the Mahout MapReduce implementations of recommenders, classifiers, and clusterers. Hadoop in Practice



777、单词 workaround 的匹配文本:


  1. In this case you are still on the hook to come up with a workaround or to dive into the code, fix it, and hopefully submit that patch back to the open source community (where at some future point it'll make it into an update of your distribution or the next version) Apache Flume Distributed Log Collection for Hadoop


  2. This is a very unusual workaround that you should not emulate in your programs. Core Java Volume I Fundamentals 9th Edition


  3. Again, even though you can make this happen with a workaround, you can’t shake off the feeling that your build process will need to adapt to the tool, not the tool to your build process. Gradle in Action


  4. A workaround would be to create an SSL proxy using nginx or Apache and mod_proxy. Beginning CouchDB


  5. The Java code for this method contains additional complexity due to error handling and a workaround for a feature not yet implemented in the client API. HBase in Action


  6. The workaround to this is to have the ad hoc Node forward to a local Flume Agent Node, which can have these reliability and failover properties. Hadoop in Practice


  7. For earlier releases, the combiner had to be written in Java, so as a workaround it was common to do manual combining in the mapper without having to resort to Java. Hadoop:The Definitive Guide 3rd Edition


  8. Fortunately, there is a simple workaround for cases like these, called. Elasticsearch The Definitive Guide


  9. So if performance is a concern, you’re best to stick to the TCP transport and find some other workaround for the firewall issues. ActiveMQ in Action


  10. Another workaround is to define a “temporary” table with the storage configured to match the desired output format (e.g., tab-delimited fields) Programming Hive



778、单词 delimiters 的匹配文本:


  1. This is a common Hadoop wrapper that consists of a key and value field separated by binary field and record delimiters. Apache Flume Distributed Log Collection for Hadoop


  2. Careful with those delimiters! With so many Nagios plug-ins, written by so many different authors, it’s important to carefully choose your delimiter and avoid using the same one returned by a plug-in. Monitoring with Ganglia


  3. Be considerate of follow on data parsing to ensure proper field delimiters. MapReduce Design Patterns


  4. This type removes the pipe characters used as delimiters for a display that is easier to read. Cassandra High Performance Cookbook


  5. We can omit the fields delimited by ',' and rows delimited by '\n' since they are the default field and row delimiters assumed by ROW FORMAT, and the ACLED Nigeria data is in that format. Hadoop RealWorld Solutions Cookbook


  6. In addition, delimiters can be specified to delimit the serialized keys and values of maps and different delimiters can also be specified to delimit the various elements of a list (collection) Hive A Petabyte Scale Data Warehouse Using Hadoop


  7. Variable lengths mean you need delimiters and escaping logic in your client code to figure out the composite attributes that form the key. HBase in Action


  8. Problem How do you write data to relational databases, and at the same time ensure that writes are idempotent? Solution This technique covers how Sqoop can be used to export text files to a relational database, and also looks at how it can be configured to work with files with custom field and record delimiters. Hadoop in Practice


  9. The line has been split into multiple lines to show each field; in the real file, fields are packed into one line with no delimiters. Hadoop:The Definitive Guide 3rd Edition


  10. The task addressing scheme is similar to directory paths in the file system, except it uses colons instead of slashes as delimiters. Building and Testing with Gradle



779、单词 datafile 的匹配文本:


  1. Observant readers would have noticed that our input datafile also contains an account_id column. Performance Testing With JMeter 2.9


  2. Alternatively, we can use the command hadoop fs -put /data/datafile /user/hduser/data. Hadoop Operations and Cluster Management Cookbook


  3. Use the following command to generate an HBase bulkload datafile: Hadoop MapReduce Cookbook


  4. An Avro datafile has a metadata section where the schema is stored, which makes the file self-describing. Hadoop:The Definitive Guide 3rd Edition


  5. Finally, datafile.jude is a string that is passed to that main( ) method as the single element of an array of String objects. Java in a Nutshell 5th Edition


  6. Once the key is found in the Bloom filter, it is used to consult the corresponding datafile and find the column values. Cassandra The Definitive Guide


  7. While developing these scripts the author noticed that the last set of records in his UFO datafile had data in a better structured manner than those at the start of the file. Hadoop Beginners Guide


  8. The binary formats used are primarily intended for exchanging data between two different Java programs through a network connection, a datafile, a pipe, or some other intermediary. Java Network Programming 4th Edition


  9. The option will tell you if there are any problems with either the datafile or the indexes. The Definitive Guide to MongoDB


  10. DATAFILE-BASED BACKUPS Most users opt for a file-based backup, where the raw data files are copied to a new location. MongoDB in Action



780、单词 splittable 的匹配文本:


  1. SequenceFiles are also splittable which makes for better locality and parallelism when running MapReduce jobs on your data, especially on large files. Apache Flume Distributed Log Collection for Hadoop


  2. We will use the hadoop-lzo library, which brings splittable LZO compression to Hadoop. HBase Administration Cookbook


  3. With that in mind, a Cartesian product does not fit into the MapReduce paradigm very well because the operation is not intuitively splittable, cannot be parallelized very well, and thus requires a lot of computation time and a lot of network traffic. MapReduce Design Patterns


  4. Using SequenceFiles is often a best option, because they are both compressable and splittable. Professional Hadoop Solutions


  5. In addition, thanks to the hard work of the Hadoop community, LZO compressed files are splittable. Hadoop RealWorld Solutions Cookbook


  6. SequenceFiles are a splittable binary file format that stores data in the form of key/value pairs. Hadoop in Practice


  7. Split the file into chunks in the application, and compress each chunk separately using any supported compression format (it doesn’t matter whether it is splittable) Hadoop:The Definitive Guide 3rd Edition


  8. This compression format is unique in that it’s splittable for Hadoop, even when used outside the sequence file format. Hadoop in Action


  9. The most efficient format is SequenceFile, a binary splittable container that supports compression. Hadoop Beginners Guide


  10. Since map output need not be splittable, it is not important to select a compression codec that exhibits such a property. Hadoop Operations



781、单词 nulls 的匹配文本:


  1. You could have avoided the wildcard type by turning hasNulls into a generic method: Core Java Volume I Fundamentals 9th Edition


  2. NULLs are not stored explicitly as they are determined by the definition levels: any definition level smaller than the number of repeated and optional fields in a field’s path denotes a NULL. Dremel - Interactive Analysis of WebScale Datasets


  3. The reducer groups the nulls together by key, so we’ll have one null per key. MapReduce Design Patterns


  4. Any unexpected variable’s values (for example, nulls) should be logged during execution. Professional Hadoop Solutions


  5. The SELECT statement uses the Hive TRANSFORM operator to separate each column by tabs and to cast all columns as String with nulls as '\n' Hadoop RealWorld Solutions Cookbook


  6. Every document can have as many fields as required, and these can be of any valid JSON type, such as strings, numbers, booleans, nulls, objects, or arrays. Beginning CouchDB


  7. The column-oriented architecture allows for huge, wide, sparse tables as storing NULLs is free. HBase:The Definitive Guide


  8. We see that there’s a row with nulls, indicating that a record violated the schema. Hadoop in Action


  9. That would mean you might have to supply empty placeholder values (nulls) in order to keep the uniform size in case you didn’t have a value for an optional attribute (such as a fax number or apartment number) Cassandra The Definitive Guide


  10. If some fields are numbers and Hive encounters nonnumeric strings, it will return nulls for those fields. Programming Hive



782、单词 pregel 的匹配文本:


  1. The synchronicity of this model makes it easier to reason about program semantics when implementing algorithms, and ensures that Pregel programs are inherently free of deadlocks and data races common in asynchronous systems. Pregel  -  A System for Large-Scale Graph Processing


  2. However, we show that RDDs are expressive enough to capture a wide class of computations, including recent specialized programming models for iterative jobs, such as Pregel, and new applications that these models do not capture. Resilient Distributed Datasets - A Fault-Tolerant Abstraction for In-Memory Cluster Computing


  3. One of Apache’s projects, Giraph (which is discussed later in this section), is the open source counterpart to Pregel that uses Hadoop. Professional Hadoop Solutions


  4. PageRank with Apache Giraph This recipe is primarily aimed at building and testing the default Apache Giraph PageRank example, modeled after the Google Pregel implementation. Hadoop RealWorld Solutions Cookbook


  5. There is also a Google project called Pregel, which they’ve been using internally for a couple of years and which they might open source. Cassandra The Definitive Guide



783、单词 ack 的匹配文本:


  1. Nonetheless, if the use of UDP is called for, one solution is for the party receiving packets to send an acknowledgment packet (also referred to as an ACK) back to the sender. Java Network Programming and Distributed Computing


  2. Either the consumer must explicitly send an acknowledgement to RabbitMQ using the basic.ack AMQP command, or it can set the auto_ack parameter to true when it subscribes to the queue. RabbitMQ in Action


  3. Storm's ISpout interface defines three methods involved in the reliability API: nextTuple, ack, and fail. Storm Blueprints Patterns for Distributed Real-time Computation


  4. Figure 4: RADOS responds with an ack after the write has been applied to the buffer caches on all OSDs replicating the object. Ceph A Scalable, High-Performance Distributed File System


  5. The bolt then sends ack, fail, emit, or log messages back via STDOUT. Storm Real-Time Processing Cookbook


  6. After each tuple is processed, the collector’s ack() method is called to indicate that processing has completed successfully. Getting Started with Storm


  7. DFSOutputStream also maintains an internal queue of packets that are waiting to be acknowledged by datanodes, called the ack queue. Hadoop:The Definitive Guide 3rd Edition


  8. During writes, HDFS sets up a pipeline connection to each replica and all replicas must ACK any data sent to them. Apache Hadoop Goes Realtime at Facebook


  9. Parallel writes to write-quorum Bookies and wait for ack-quorum (see BOOKKEEPER-208) Namenode High Availability


  10. In this example, note that we’ve passed the additional ack header, which defines the way messages are acknowledged. ActiveMQ in Action



784、单词 upcast 的匹配文本:


  1. But, since the container held only Objects, when you added an object reference into the container it was upcast to Object, thus losing its character. Thinking in Java 4th Edition
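
A minimal Java sketch of the upcast-into-a-container situation the quote describes: the element's specific type is lost on the way in and a downcast is needed on the way out. The raw-type list here is a deliberately old-style, hypothetical example.

    import java.util.ArrayList;
    import java.util.List;

    public class UpcastDemo {
        public static void main(String[] args) {
            // Upcast: a String reference is assigned to an Object reference implicitly.
            Object o = "hello";

            // Pre-generics containers held Objects, so the element's specific type was lost
            // and a downcast was needed to get it back.
            List rawList = new ArrayList();          // raw type, as in old container code
            rawList.add("world");                    // upcast to Object on the way in
            String s = (String) rawList.get(0);      // explicit downcast on the way out
            System.out.println(o + " " + s);
        }
    }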



785、单词 cglib 的匹配文本:


  1. The Spring Project One of the most endearing things about the Spring project is the level of activity currently present in the community and the amount of cross-pollination between other projects such as CGLIB, Apache Geronimo, and AspectJ. Pro Spring 3


  2. Note  Spring can also apply AOP on class-based proxies; however, this requires the cglib library (http://cglib.sourceforge.net) on the classpath. Pro Spring MVC


  3. This will instruct Spring to create a class-based proxy using cglib instead of using an interface-based proxy. Spring Security 3.1


  4. In general, Spring favors standard interface-based proxies instead of CGLIB, and that is what I will use in this book. Pro Spring Security


  5. Adding this proxy is supported for both interface-based and concrete classes, but for the latter you’ll need to use CGLib. Spring Dynamic Modules in Action


  6. Instead, it must use CGLib to generate a class-based proxy. Spring in Action 4th Edition



786、单词 hypervisors 的匹配文本:


  1. In many ways, hypervisors have more in common with switches than they do with a general-purpose server. Monitoring with Ganglia


  2. Hypervisors OpenStack Compute requires a hypervisor and Compute controls the hypervisors through an API server. bk-compute-adminguide-grizzly


  3. However, the majority of OpenStack development is done on systems using KVM and Xen-based hypervisors. bk-admin-guide-cloud-latest


  4. Some hypervisors even allow the guests to overcommit memory within their limited allocation space. Oracle JRockit The Definitive Guide


  5. Many virtualization vendors are aware of this and are working toward more intelligent hypervisors, but ultimately, it’s still slower than being directly on bare metal. Hadoop Operations


  6. The following hypervisors support the configuration drive: libvirt, xenserver, hyper-v, and vmware. user-guide


  7. It is also possible to run multiple hypervisors in a single deployment using Host Aggregates or Cells. OpenStack Operations Guide



787、单词 facilitates 的匹配文本:


  1. The serialized data is coupled with the data schema, which facilitates its processing with different programming languages. Hadoop Operations and Cluster Management Cookbook


  2. The Kerberos KDC also has copies of everyone’s secret key, and facilitates all communications. Professional Hadoop Solutions


  3. The actual sampling probability used is recorded along with the trace itself; this facilitates accurate accounting of trace frequencies in analytical tools built around Dapper data. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  4. Writing unit tests prior to writing your code forces your code to be structured in a way that easily facilitates testing. Hadoop in Practice


  5. ZooKeeper facilitates loosely coupled interactions ZooKeeper interactions support participants that do not need to know about one another. Hadoop:The Definitive Guide 3rd Edition


  6. Built as part of the Ceph distributed file system [27], RADOS facilitates an evolving, balanced distribution of data and workload across a dynamic and heterogeneous storage cluster while providing applications with the illusion of a single logical object store with well-defined safety semantics and strong consistency guarantees. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  7. We said before that the input to the reducer is a key and a corresponding list of values, and there is some magic that happens between the map and reduce methods to collect together the values for each key that facilitates this, which we'll not describe right now. Hadoop Beginners Guide


  8. Sqoop is a tool that facilitates bidirectional exchange of data between HDFS and relational databases. Hadoop Operations


  9. By providing a MOM that utilizes the JMS spec, ActiveMQ facilitates application architectures that support such reliability and scalability. ActiveMQ in Action


  10. This facilitates compiled tests being able to reference JUnit or TestNG annotations or base classes. Building and Testing with Gradle



788、单词 iostat 的匹配文本:


  1. Divide the IOPS count by the value of $PERIOD to find the actual IOPS rate for a system running in real time (the simulation runs faster than real time, so the rate reported by iostat is meaningless) Monitoring with Ganglia


  2. We captured iostat output for 30 seconds, one sample per second. High Performance MySQL 3rd Edition


  3. After a while, by using the iostat operating system command (we were working on Linux based nodes), we confirmed that something strongly uses I/O operations: both reads and writes. Mastering ElasticSearch


  4. The iostat command uses counter information from the /proc file system to calculate system utilization. Cassandra High Performance Cookbook


  5. System- and network-level information can be seen from Ganglia (figure 10.3) and from several Linux tools such as lsof, top, iostat, netstat, and so on. HBase in Action


  6. Solution The Linux tool iostat can be used to look at drive request queues and IO wait times. Hadoop in Practice


  7. The complete battery usually includes OS utilities such as top, vmstat, sar, iostat, and netstat, but also Hadoop-specific tools such as hadoop dfsadmin, and hadoop fsck. Hadoop Operations


  8. System performance measuring utilities (vmstat, iostat, sar, top on Unix; the task manager and performance monitor on Windows) Java Performance Tuning 2nd Edition


  9. Disk I/O utilization along with system or kernel CPU utilization can be monitored using iostat on Linux and Solaris. Java Performance


  10. Modeled after iostat and other similar tools, mongostat polls the server at a fixed interval and displays an array of statistics, from the number of inserts per second to the amount of resident memory, to the frequency of B-tree page misses. MongoDB in Action
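
iostat itself is an operating-system utility rather than a Java API. If monitoring code needs its output, one option is simply to shell out to it; the sketch below is my own illustration (it assumes iostat is installed and on the PATH, and the "-x 1 3" arguments are just one example invocation: extended statistics, one-second interval, three samples).

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class IostatDemo {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Run "iostat -x 1 3" and merge its stderr into stdout.
            Process process = new ProcessBuilder("iostat", "-x", "1", "3")
                    .redirectErrorStream(true)
                    .start();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(process.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);   // raw iostat output, one line at a time
                }
            }
            System.out.println("iostat exited with " + process.waitFor());
        }
    }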



789. Matching sentences for the word reentrant:


  1. The Java language provides a synchronized keyword for this purpose, and Java SE 5.0 introduced the ReentrantLock class. Core Java Volume I Fundamentals 9th Edition


  2. Java 5 introduced the ReentrantLock class and the ability to create a condition. Getting Started with Google Guava


  3. Because locks in the Java programming language are reentrant, such calls won’t deadlock. Effective Java 2nd Edition


  4. This package defines Lock and associated Condition interfaces as well as concrete implementations (such as ReentrantLock) that provide an alternative to locking with synchronized blocks and methods and to waiting with the wait( ), notify( ), and notifyAll( ) methods of Object. Java in a Nutshell 5th Edition


  5. In particular, most of the well-known algorithms often execute in an incremental or reentrant fashion, building up structures representing processed and pending nodes, and working through the latter while reducing the former. Hadoop Beginners Guide


  6. Remember that Connection objects are not reentrant: only one client of the pool can use the Connection object at a time. Java Performance Tuning 2nd Edition


  7. It's based on the Lock interface and classes that implement it (as ReentrantLock) Java 7 Concurrency Cookbook


  8. This reentrant or recursive locking differs from the default policy used for example in POSIX threads.) Among other effects, this allows one synchronized method to make a self-call to another synchronized method on the same object without freezing up. Concurrent Programming in Java Design Principles and Pattern


  9. Using a reentrant lock, you can prevent multiple threads accessing the same critical area of the code. Java 7 Recipes A Problem-Solution Approach


  10. When you use a ReentrantLock, you are required to lock and unlock with the idiom. Java SE 8 for the Really Impatient
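
As a small illustration of the lock/unlock idiom mentioned in the last quote (my own sketch, not code from the cited books), a java.util.concurrent.locks.ReentrantLock can be re-acquired by the thread that already holds it, which is what makes the self-call below safe:

    import java.util.concurrent.locks.ReentrantLock;

    public class ReentrantDemo {
        private final ReentrantLock lock = new ReentrantLock();
        private int counter = 0;

        // The standard idiom: acquire, then always release in a finally block.
        public void increment() {
            lock.lock();
            try {
                counter++;
                log();           // self-call re-acquires the same lock without deadlocking
            } finally {
                lock.unlock();
            }
        }

        // Also locks; because the lock is reentrant, calling this while already
        // holding the lock simply increases the hold count instead of blocking.
        public void log() {
            lock.lock();
            try {
                System.out.println("counter=" + counter
                        + " holdCount=" + lock.getHoldCount());
            } finally {
                lock.unlock();
            }
        }

        public static void main(String[] args) {
            new ReentrantDemo().increment();
        }
    }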



790. Matching sentences for the word giraph:


  1. The port to YARN of Giraph is very natural, the execution coordinator role is taken by the ApplicationMaster, and resources are requested dynamically. Apache Hadoop YARN Yet Another Resource Negotiator


  2. One of Apache’s projects, Giraph (which is discussed later in this section), is the open source counterpart to Pregel that uses Hadoop. Professional Hadoop Solutions


  3. This book covers unloading/loading to and from HDFS, graph analytics with Giraph, batch data analysis using Hive, Pig, and MapReduce, machine-learning approaches with Mahout, debugging and troubleshooting MapReduce jobs, and columnar storage and retrieval of structured data using Apache Accumulo. Hadoop RealWorld Solutions Cookbook


  4. Most notably there are inmemory/single machine graph compute engines like Cassovary, and distributed graph compute engines like Pegasus or Giraph. Graph Databases


  5. You crafted your own MapReduce code in the techniques—if you wanted to leverage a framework you could have used Giraph (http://incubator.apache.org/giraph/), which provides a vertex message-passing mechanism. Hadoop in Practice



791. Matching sentences for the word geohash:


  1. In practice, it is convenient to use several geotile systems in parallel with different tile sizes (geohash levels) Professional Hadoop Solutions


  2. In the area of geospatial search, David open sourced his geohash prefix/ grid based work to the Solr community tracked as SOLR-2155 Solr 3 Enterprise Search Server


  3. We’ll spend a major portion of this chapter explaining an algorithm called the geohash, which is a solution to this problem. HBase in Action


  4. See http://en.wikipedia.org/wiki/Geohash for a good description of what a geohash is. Lucene in Action 2nd Edition
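
To make the idea behind a geohash concrete, here is a compact encoder sketch of my own (not code from the cited sources): it repeatedly bisects the longitude and latitude ranges, interleaves the resulting bits starting with longitude, and maps every five bits to one character of the geohash base32 alphabet.

    public class GeohashDemo {
        private static final String BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

        // Encodes a latitude/longitude pair into a geohash of the given length.
        static String encode(double lat, double lon, int precision) {
            double latLo = -90, latHi = 90, lonLo = -180, lonHi = 180;
            StringBuilder hash = new StringBuilder();
            boolean evenBit = true;   // even bits encode longitude, odd bits latitude
            int bit = 0, ch = 0;

            while (hash.length() < precision) {
                if (evenBit) {
                    double mid = (lonLo + lonHi) / 2;
                    if (lon >= mid) { ch = (ch << 1) | 1; lonLo = mid; }
                    else            { ch = ch << 1;       lonHi = mid; }
                } else {
                    double mid = (latLo + latHi) / 2;
                    if (lat >= mid) { ch = (ch << 1) | 1; latLo = mid; }
                    else            { ch = ch << 1;       latHi = mid; }
                }
                evenBit = !evenBit;
                if (++bit == 5) {               // every 5 bits become one base32 character
                    hash.append(BASE32.charAt(ch));
                    bit = 0;
                    ch = 0;
                }
            }
            return hash.toString();
        }

        public static void main(String[] args) {
            // Should print "u4pruydqqvj" for this commonly used geohash example point.
            System.out.println(encode(57.64911, 10.40744, 11));
        }
    }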



792. Matching sentences for the word stdout:


  1. In this example, we want to record the system load via the Linux uptime command, which prints out some system information to stdout and exits: Apache Flume Distributed Log Collection for Hadoop


  2. The script is invoked with the additional arguments of the paths to the stdout, stderr, and syslog output files for the task. Pro Hadoop


  3. For example, the current implementation includes a log aggregation service that will upload data written by the application to stdout and stderr to HDFS once the application completes. Apache Hadoop YARN Yet Another Resource Negotiator


  4. These counter values are also output when the job completes, so writing to stdout may be redundant if you are obtaining these values by scraping log files. MapReduce Design Patterns


  5. Both mappers and reducers receive their input on stdin and emit output (key/value pairs) to stdout. Professional Hadoop Solutions


  6. Script to be run as the map function; records passed via STDIN/STDOUT. Hadoop RealWorld Solutions Cookbook


  7. Depending on the problem, you may find additional useful information in the logs, or in the standard out (stdout) or standard error (stderr) of the task process. Hadoop in Practice


  8. MapReduce task logs Users Each tasktracker child process produces a logfile using log4j (called syslog), a file for data sent to standard out (stdout), and a file for standard error (stderr) Hadoop:The Definitive Guide 3rd Edition


  9. Inputs come in through STDIN and outputs go to STDOUT. Hadoop in Action


  10. So the only action to take is to print the message to standard output (stdout) ActiveMQ in Action
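
Several of the quotes describe the Hadoop Streaming convention of reading records on stdin and emitting tab-separated key/value pairs on stdout, with diagnostics going to stderr. The following word-count mapper is a minimal sketch of that convention (the class name and logic are made up for illustration, not taken from the books above):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    // Reads lines from stdin, writes "word<TAB>1" records to stdout,
    // and sends progress messages to stderr, as a streaming mapper would.
    public class StdoutMapper {
        public static void main(String[] args) throws IOException {
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            String line;
            long count = 0;
            while ((line = in.readLine()) != null) {
                for (String word : line.trim().split("\\s+")) {
                    if (!word.isEmpty()) {
                        System.out.println(word + "\t1");   // key/value pair on stdout
                    }
                }
                if (++count % 100000 == 0) {
                    System.err.println("processed " + count + " lines"); // diagnostics on stderr
                }
            }
        }
    }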



793. Matching sentences for the word comprehensions:


  1. Loops in Scala have all the usual suspects like the while loop and do-while, but the most interesting looping construct is for or for-comprehensions. Scala in Action


  2. There are many features such as pattern matching, for comprehensions and implicits supported in Scala but not in Java, that we won’t discuss. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. Notice we've added IO and  functions so  can be used in for-comprehensions. Functional Programming in Scala


  4. Replacing Iterator gives us a way to access items in a collection sequentially—here we’ll see how we can solve many of the problems we’d solve with Iterator using higher-order functions and sequence comprehensions, which give us solutions that are more declarative. Functional Programming Patterns in Scala and Clojure


  5. Scala for Comprehensions Another familiar control structure that’s particularly feature-rich in Scala is the for loop, referred to in the Scala community as a for comprehension or for expression. Programming Scala


  6. You don’t have multiple for comprehensions nested in order to have nested expressions, thus there’s an entire iteration of the inner variable, j, for each value of the outer variable, i. Beginning Scala


  7. Examples of monads List comprehensions, that is,for comprehensions that create lists, can be expressed in terms of the monad presented above: Steps in Scala An Introduction to Object-Functional Programming



794. Matching sentences for the word hardcoded:


  1. On a small scale, you can use simplistic approaches such as hardcoded hostnames or round-robin DNS (which points a single hostname to multiple IP addresses) High Performance MySQL 3rd Edition


  2. As a result, we highly recommend you use the all setting for x-ha-policy whenever possible so that your mirrored queue declarations aren’t hardcoded to specific node names that may change. RabbitMQ in Action


  3. Also, this job is hardcoded to find the top ten records, but could easily be configured to find the top K records using a variable captured in the setup method. MapReduce Design Patterns


  4. For code comprehensibility you hardcoded the BloomFilter parameters; in reality you’ll want to either calculate them dynamically or move them into a configuration file. Hadoop in Practice


  5. The last line in the preceding excerpt has a hardcoded server name and port. HBase:The Definitive Guide


  6. For example, our joining program in chapter 5 is hardcoded to take the. Hadoop in Action


  7. Hardcoded filenames A portable program contains no hardcoded file or directory names. Java in a Nutshell 5th Edition


  8. The implementation of the topology script is entirely up to the administrator and may be as simple as a shell script that has a hardcoded list of machines and rack names, or as sophisticated as a C executable that reads data from a relational database. Hadoop Operations


  9. My editor Mike Loukides points out that a variation on hardcoded values, used by state-of-the-art high-performance mathematical functions, is a partial table of values together with an interpolation method to calculate intermediate values. Java Performance Tuning 2nd Edition


  10. Looking at our previous example, we see that we are selecting data from a hardcoded day FROM BCO WHERE dt=20120426 Programming Hive



795. Matching sentences for the word usernames:


  1. The usernames you use must be the ones provided by whatever authentication system you are using in Apache. Monitoring with Ganglia


  2. Caution Usernames may be overwritten with a different username by any user. Pro Hadoop


  3. In this step, some services such as Hive and Nagios may ask you to enter administrative usernames and passwords, which are required for service installation. Hadoop Operations and Cluster Management Cookbook


  4. This data set also contains human-generated natural language text as well as “structured” elements like usernames and dates. MapReduce Design Patterns


  5. You may have users coming in from multiple domains, which means that you may want to do some inventive mapping of Kerberos principals to usernames. Professional Hadoop Solutions


  6. Alternatively, you can configure default usernames in your users’ SSH configuration files. Scaling CouchDB


  7. The format for an ACL is a comma-separated list of usernames, followed by whitespace, followed by a comma-separated list of group names. Hadoop:The Definitive Guide 3rd Edition


  8. Your output should be similar to the following, except the PC and usernames should be different. Java 7 New Features Cookbook


  9. This indicates that there are three users allowed to access Keyspace1, and spaces are allowed in the usernames. Cassandra The Definitive Guide


  10. In Linux, we use usernames, whereas the relational database MySQL, for instance, has its own notion of a user. Hadoop Operations



796. Matching sentences for the word berkeley:


  1. In 1999, I packed everything I owned into my car for a cross-country trip to begin my new job as Staff Researcher at the University of California, Berkeley Computer Science Department. Monitoring with Ganglia


  2. We would like to thank the Mesos team at UC Berkeley for many fruitful and interesting discussions about Mesos, and Joseph Hellerstein for his early work on modeling scheduler interference in Omega. Omega - flexible, scalable schedulers for large compute clusters


  3. Spark is an open-source research project from UC Berkeley [32], that targets machine learning and interactive querying workloads. Apache Hadoop YARN Yet Another Resource Negotiator


  4. The standard Lucene distribution contains several Directory implementations, including filesystembased and memory-based, Berkeley DB-based (in the Lucene contrib module), and several others. Professional Hadoop Solutions


  5. You could either use one of the free databases, such as MySQL or PostgreSQL, or a pure key/value store like Berkeley DB. HBase in Action


  6. Systems like Berkeley DB, Coherence, GT.M, and object-oriented database systems have been around for years, with some dating back to the early 1980s, and they fall into the NoSQL group by definition as well. HBase:The Definitive Guide


  7. The first version of Chubby used the replicated version of Berkeley DB [20] as its database. Chubby - The lock service for loosely-coupled distributed systems


  8. Oracle Berkeley XML DB Berkeley XML DB is an open source database written in Java that started as a research project at Harvard and is now supported by Oracle. Cassandra The Definitive Guide


  9. Ganglia was built by a group at the University of California, Berkeley, specifically to collect, aggregate, and plot a large number of metrics from large clusters of machines. Hadoop Operations


  10. There is also a project to port Hive to Spark, called Shark (http://shark.cs.berkeley.edu/) Programming Hive



797. Matching sentences for the word idl:


  1. The interface is defined in Thrift IDL as shown here: Mahout in Action


  2. Such schemas are expressed using the Interface Definition Language (IDL) Java Network Programming and Distributed Computing


  3. In addition to pure serialization Avro also supports Avro RPCs, allowing you to define Avro Interactive Data Language (IDL), which is based on the Avro schema definitions. Professional Hadoop Solutions


  4. Java EE applications may use Java IDL to act as clients of CORBA services, but only Java EE application clients are required to be allowed to use Java IDL directly to present CORBA services themselves. Java EE 7 Specification


  5. Thrift has an Interface Definition Language (IDL) for describing services and objects. HBase in Action


  6. There are a number of other serialization frameworks that approach the problem in a different way: rather than defining types through code, you define them in a languageneutral, declarative fashion, using an interface description language (IDL) Hadoop:The Definitive Guide 3rd Edition


  7. Protocol Buffers Similar to Avro, you can use the Protocol Buffer’s IDL to define an external schema, which is then used to serialize complex data structures into HBase columns. HBase:The Definitive Guide


  8. These descriptions are formatted in a special language, called Interface Definition Language (IDL) for CORBA and Web Services Description Language (WSDL) for web services. Core Java Volume II Advanced Features 9th Edition


  9. Notable omissions are the RMI and IDL tools that are specific to enterprise programming and which are documented in Java Enterprise in a Nutshell (O’Reilly) Java in a Nutshell 5th Edition


  10. Just add one method, which sets all three instance variables in one call in the following IDL definition: Java Performance Tuning 2nd Edition



798. Matching sentences for the word cartesian:


  1. It is incredible what a wide range of problems can be solved with such a straightforward paradigm, from simple numerical aggregations to complex join operations and Cartesian products. MapReduce Design Patterns


  2. The first-level search is based on a Cartesian Grid search. Professional Hadoop Solutions


  3. Traditional SQL features like from clause subqueries, various types of joins – inner, left outer, right outer and outer joins, cartesian products, group bys and aggregations, union all, create table as select and many useful functions on primitive and complex types make the language very SQL like. Hive A Petabyte Scale Data Warehouse Using Hadoop


  4. After the reducer has read all of the input records for the join value and partitioned them in memory, it performs a Cartesian product across all partitions and emits the results of each join. Hadoop in Practice


  5. Pig Latin includes the cross-product operator (also known as the cartesian product), which joins every tuple in a relation with every tuple in a second relation (and with every tuple in further relations if supplied) Hadoop:The Definitive Guide 3rd Edition


  6. For example, you might define a class named Point to represent a data point in the two-dimensional Cartesian coordinate system. Java in a Nutshell 5th Edition


  7. Lexicographic Ordering Lexicographic ordering is the natural (alphabetic) ordering of the product of two ordered Cartesian sets. Cassandra The Definitive Guide


  8. Cartesian Product JOINs A Cartesian product is a join where all the tuples in the left side of the join are paired with all the tuples of the right table. Programming Hive


  9. Problem You want to create points that are similar to points on a Cartesian coordinate system. Java 7 Recipes A Problem-Solution Approach


  10. In the absence of join conditions between two entities, queries will produce a Cartesian product. Pro JPA2
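
The reduce-side join in quote 4 ultimately pairs every record of one partition with every record of the other. A tiny sketch of that Cartesian product in plain Java (illustrative only, not code from the cited books):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class CartesianDemo {
        // Pairs every element of the left list with every element of the right list,
        // which is what a reducer does after partitioning records by join key.
        static <A, B> List<String> cartesian(List<A> left, List<B> right) {
            List<String> pairs = new ArrayList<>();
            for (A a : left) {
                for (B b : right) {
                    pairs.add(a + "," + b);
                }
            }
            return pairs;
        }

        public static void main(String[] args) {
            System.out.println(cartesian(Arrays.asList("u1", "u2"),
                                         Arrays.asList("orderA", "orderB", "orderC")));
        }
    }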



799. Matching sentences for the word subclassing:


  1. Writing a Pregel program involves subclassing the predefined Vertex class (see Figure 3) Pregel  -  A System for Large-Scale Graph Processing


  2. For example, let us suppose we want to prevent others from subclassing the Executive class. Core Java Volume I Fundamentals 9th Edition


  3. In the FJTask framework, these tasks subclass FJTask rather than subclassing Thread, both of which implement Runnable. A Java ForkJoin Framework


  4. Different branches can now provide different implementations of the method makeCustomerHappy by subclassing the OnlineBanking class. Java 8 in Action Lambdas, Streams and Functional-style Programming


  5. All implementations of Tool need to implement Configurable (since Tool extends it), and subclassing Configured is often the easiest way to achieve this. Hadoop:The Definitive Guide 3rd Edition


  6. The Java SE documentation states that the DocumentFilter class was invented to avoid subclassing. Core Java Volume II Advanced Features 9th Edition


  7. Unlike The Java Language Specification, this book uses inheritance as a synonym for subclassing. Effective Java 2nd Edition


  8. Method overriding involves subclassing and is covered in Chapter 3 Java in a Nutshell 5th Edition


  9. A class or object is reusable to the extent that it can be readily employed across different contexts, either as a black-box component or as the basis of white-box extension via subclassing and related techniques. Concurrent Programming in Java Design Principles and Pattern


  10. Lots of functionality is obtained by subclassing the Applet or JApplet class. Java 7 Recipes A Problem-Solution Approach



800. Matching sentences for the word drbd:


  1. Other installations use a tool such as DRBD (http://www.drbd.org/) to replicate the host file system in near real time to a separate physical machine. Pro Hadoop


  2. A variation on this procedure is to use a reliable way to store the master’s binary log files, such as a SAN or a distributed replicated block device (DRBD) High Performance MySQL 3rd Edition


  3. At the time of writing, however, the Pacemaker/DRBD based approach remains the recommended one for OpenStack environments. high-availability-guide


  4. There have also been applications of specialized Linux software such as DRBD 5 Hadoop in Action



Number of works cited: 164
1、A Java ForkJoin Framework
2、ActiveMQ in Action
3、Apache Flume Distributed Log Collection for Hadoop
4、Apache Hadoop Goes Realtime at Facebook
5、Apache Hadoop YARN Yet Another Resource Negotiator
6、Apache Kafka
7、Apache Solr 4 Cookbook
8、Apache Sqoop Cookbook
9、BTrace Java Platform Observability by Bytecode Instrumentation
10、Beginning CouchDB
11、Beginning Scala
12、Bigtable - A Distributed Storage System for Structured Data
13、Building LinkedIn’s Real-time Activity Data
14、Building and Testing with Gradle
15、Cassandra A Decentralized Structured Storage System
16、Cassandra High Performance Cookbook
17、Cassandra The Definitive Guide
18、Ceph A Scalable, High-Performance Distributed File System
19、Chubby - The lock service for loosely-coupled distributed systems
20、Cloudera Impala
21、Concurrent Programming in Java Design Principles and Pattern
22、Core Java Volume I Fundamentals 9th Edition
23、Core Java Volume II Advanced Features 9th Edition
24、CouchDB The Definitive Guide
25、Dapper a Large-Scale Distributed Systems Tracing Infrastructure
26、Data Structures and Algorithm Analysis in Java 3rd Edition
27、Deploying OpenStack
28、Dremel - Interactive Analysis of WebScale Datasets
29、Effective Java 2nd Edition
30、Elasticsearch The Definitive Guide
31、Elasticsearch in Action MEAP V07 Chapter 1
32、FlumeJava - Easy, Efficient Data-Parallel Pipelines
33、Functional Programming Patterns in Scala and Clojure
34、Functional Programming in Scala
35、Fundamental Networking in Java
36、Getting Started with CouchDB
37、Getting Started with Google Guava
38、Getting Started with Storm
39、Gradle Beyond The Basics
40、Gradle Effective Implementation Guide
41、Gradle in Action
42、Graph Databases
43、HBase Administration Cookbook
44、HBase in Action
45、HBase:The Definitive Guide
46、HDFS scalability the limits to growth
47、Hadoop Beginners Guide
48、Hadoop MapReduce Cookbook
49、Hadoop Operations
50、Hadoop Operations and Cluster Management Cookbook
51、Hadoop RealWorld Solutions Cookbook
52、Hadoop in Action
53、Hadoop in Practice
54、HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
55、Hadoop:The Definitive Guide 3rd Edition
56、High Performance MySQL 3rd Edition
57、Hive A Petabyte Scale Data Warehouse Using Hadoop
58、Incoop MapReduce for Incremental Computations
59、Interpreting the Data Parallel Analysis with Sawzall
60、JSR-133 Java Memory Model and Thread
61、Java 7 Concurrency Cookbook
62、Java 7 New Features Cookbook
63、Java 7 Recipes A Problem-Solution Approach
64、Java 8 in Action Lambdas, Streams and Functional-style Programming
65、Java EE 7 Essentials
66、Java EE 7 Specification
67、Java Network Programming 4th Edition
68、Java Network Programming and Distributed Computing
69、Java Performance
70、Java Performance Tuning 2nd Edition
71、Java SE 8 for the Really Impatient
72、Java Security 2nd Edition
73、Java Threads 3rd Edition
74、Java in a Nutshell 5th Edition
75、Jenkins Continuous Integration Cookbook
76、Jenkins The Definitive Guide
77、Kafka a Distributed Messaging System for Log Processing
78、Lucene in Action 2nd Edition
79、Mahout in Action
80、Managing Infrastructure with Puppet
81、MapReduce - Simplied Data Processing on Large Clusters
82、MapReduce Design Patterns
83、MapReduce a flexible data processing tool
84、MapReduce and Parallel DBMSs Friends or Foes
85、Mastering ElasticSearch
86、Megastore - Providing Scalable, Highly Available Storage for Interactive Services
87、MongoDB The Definitive Guide 2nd Edition
88、MongoDB in Action
89、Monitoring with Ganglia
90、Namenode High Availability
91、Netty in Action MEAP V08
92、NoSQL Distilled
93、Omega - flexible, scalable schedulers for large compute clusters
94、OpenStack Operations Guide
95、Oracle JRockit The Definitive Guide
96、Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications
97、Performance Testing With JMeter 2.9
98、PowerDrill - Processing a Trillion Cells per Mouse Click
99、Pregel  -  A System for Large-Scale Graph Processing
100、Pro Hadoop
101、Pro JPA2
102、Pro Java 7 NIO.2
103、Pro Puppet
104、Pro Spring 3
105、Pro Spring MVC
106、Pro Spring Security
107、Professional Hadoop Solutions
108、Programming Concurrency on the JVM
109、Programming Hive
110、Programming Pig
111、Programming Scala
112、Programming Scala Tackle Multi-Core Complexity on the Java Virtual Machine
113、Programming in Scala 2nd Edition
114、RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.
115、RCFile A Fast and Space-efficient Data Placement Structure in MapReduce-based Warehouse Systems
116、RabbitMQ in Action
117、Redis in Action
118、Resilient Distributed Datasets - A Fault-Tolerant Abstraction for In-Memory Cluster Computing
119、SQL Antipatterns
120、Scala for the Impatient
121、Scala in Action
122、Scala in Depth
123、Scalable IO in Java
124、Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr
125、Scaling CouchDB
126、Scaling MongoDB
127、Selenium 2 Testing Tools Beginner's Guide
128、Selenium Testing Tools Cookbook
129、Solr 3 Enterprise Search Server
130、Solr 3.1 Cookbook
131、Solr in action
132、Sonar Code Quality Testing Essentials
133、Spanner - Google’s Globally-Distributed Database
134、Spring Batch in Action
135、Spring Data Modern Data Access for Enterprise Java
136、Spring Dynamic Modules in Action
137、Spring Integration in Action
138、Spring Roo in Action
139、Spring Security 3.1
140、Spring in Action 4th Edition
141、Spring in Practice
142、Steps in Scala An Introduction to Object-Functional Programming
143、Storage Architecture and Challenges
144、Storm Blueprints Patterns for Distributed Real-time Computation
145、Storm Real-Time Processing Cookbook
146、Tajo A Distributed Data Warehouse System on Large Clusters
147、Tenzing - A SQL Implementation On The MapReduce Framework
148、Testable JavaScript
149、The Definitive Guide to MongoDB
150、The Hadoop Distributed File System
151、The Java Language Specification Java SE 7 Edition
152、The Java Virtual Machine Specification Java SE 7 Edition
153、The anatomy of a large-scale hypertextual Web search engine
154、Thinking in Java 4th Edition
155、Tika in Action
156、Understanding MySQL Internals
157、Web Crawling and Data Mining with Apache Nutch
158、Web search for a planet the google cluster architecture
159、ZeroMQ
160、admin-user-guide-trunk
161、bk-admin-guide-cloud-latest
162、bk-compute-adminguide-grizzly
163、high-availability-guide
164、user-guide

2000 High-Frequency Special Words in the Software Development Field, with Selected Example Sentences (Part 2)



superword is an English-word analysis tool implemented in Java; it mainly studies how English words transform between similar sounds and shapes, prefix and suffix patterns, and similarity patterns between words.

276. Matching sentences for the word tokenizer:


  1. Analysis is done by the analyzer, which is built of tokenizer and zero or more filters, and can also have zero or more character mappers. Mastering ElasticSearch


  2. Up until we start using the StringTokenizer towards the bottom of the code, we’re just cleaning up the string. MapReduce Design Patterns


  3. The mapper takes a column and breaks it into tokens (individual words) using StringTokenizer, a class that splits strings on common tokens such as spaces and columns. Cassandra High Performance Cookbook


  4. Ironically, the next step may be to fall back on the StringTokenizer class, use of which is discouraged in the class Javadocs comments in favor of the newer regular expression-based tokenizers. Hadoop in Practice


  5. TokenCounterMapper A mapper that tokenizes the input value into words (using Java’s StringTokenizer) and emits each word along with a count of one. Hadoop:The Definitive Guide 3rd Edition


  6. Tokenizer Next, the string is tokenized into individual terms by a tokenizer. Elasticsearch The Definitive Guide


  7. We see that WordCount uses Java's StringTokenizer in its default setting, which tokenizes based only on whitespaces. Hadoop in Action


  8. It has a more complicated API and has more powerful features than StringTokenizer. Java in a Nutshell 5th Edition


  9. Note that better tokenization could be provided by use of regular expressions or the StringTokenizer class, but for our purposes this simple approach will suffice. Hadoop Beginners Guide


  10. For example, if a low-level IO exception occurs in the midst of a StreamTokenizer or ObjectInputStream operation, there is no sensible. Concurrent Programming in Java Design Principles and Pattern
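
As the quotes note, StringTokenizer splits on whitespace by default, while its own Javadocs steer new code toward regular-expression-based splitting. A short comparison of the two (my own sketch, not code from the cited books):

    import java.util.StringTokenizer;

    public class TokenizerDemo {
        public static void main(String[] args) {
            String text = "the quick  brown fox";

            // Legacy approach: StringTokenizer splits on whitespace by default.
            StringTokenizer st = new StringTokenizer(text);
            while (st.hasMoreTokens()) {
                System.out.println("token: " + st.nextToken());
            }

            // Newer approach favored by the Javadocs: regular-expression split.
            for (String token : text.split("\\s+")) {
                System.out.println("split: " + token);
            }
        }
    }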



277. Matching sentences for the word accumulo:


  1. By the end of this chapter, you will be able to integrate Apache Nutch with Apache Accumulo as well as with MySQL. Web Crawling and Data Mining with Apache Nutch


  2. Adam Shook is a software engineer at ClearEdge IT Solutions, LLC, working with a number of big data technologies such as Hadoop, Accumulo, Pig, and ZooKeeper. MapReduce Design Patterns


  3. Providing column and table-level authorization works well for situations that require column-based security, but it does not does address fine-grained access controls at the cell level, such as what is provided by Apache Accumulo (discussed in Chapter 12) Professional Hadoop Solutions


  4. This book covers unloading/loading to and from HDFS, graph analytics with Giraph, batch data analysis using Hive, Pig, and MapReduce, machine-learning approaches with Mahout, debugging and troubleshooting MapReduce jobs, and columnar storage and retrieval of structured data using Apache Accumulo. Hadoop RealWorld Solutions Cookbook



278. Matching sentences for the word jvms:


  1. With sFlow instrumentation of the JVM, data is pushed from the JVMs to gmond, with no polling necessary. Monitoring with Ganglia


  2. There will be multiple instances of your map function running, potentially in multiple Java Virtual Machines (JVMs), and potentially on multiple machines. Pro Hadoop


  3. The number of map and reduce slots determines the number of concurrent map/reduce tasks for a TaskTracker, which forks multiple JVMs to run these tasks. Hadoop Operations and Cluster Management Cookbook


  4. In the case of very small, fast-running map tasks (where the order of execution time is in seconds), Hadoop allows you to enable several tasks to reuse JVMs by specifying the job configuration mapreduce.job.jvm.numtasks. Professional Hadoop Solutions


  5. However, the licensing and maturity of these other JVMs vary. Cassandra High Performance Cookbook


  6. If multiple reduce tasks are specified, cityA and cityB could be processed independently on separate reduce JVMs. Hadoop RealWorld Solutions Cookbook


  7. Bear in mind, too, that tasktrackers in a mini-cluster launch separate JVMs to run tasks in, which can make debugging more difficult. Hadoop:The Definitive Guide 3rd Edition


  8. Although there is a single TaskTracker per slave node, each TaskTracker can spawn multiple JVMs   to handle many map or reduce tasks in parallel. Hadoop in Action


  9. The Hadoop community keeps a list of tested JVMs at http://wiki.apache.org/hadoop/HadoopJavaVersions where users can post their experiences with various Java VMs and versions. Hadoop Operations


  10. Gradle provides an intelligent compromise in that it offers a maxParallelForks that governs the maximum simultaneous JVMs that are spawned. Building and Testing with Gradle



279. Matching sentences for the word programmatically:


  1. Being able to control the versioning scheme programmatically will become a necessity the more you want to automate your project lifecycle. Gradle in Action


  2. Configuration changes, which we would normally make in Eclipse, can now be done programmatically in a Gradle build file. Gradle Effective Implementation Guide


  3. The ElasticSearch API has some methods to programmatically create JSON documents which we will show. Mastering ElasticSearch


  4. To rename files programmatically, we can pass a closure to the rename method (Example 1-7) Gradle Beyond The Basics


  5. This will generate object allocation stacks sizes that are too small to be useful, so instead you’ll programmatically set custom HPROF parameters: Hadoop in Practice


  6. Section 3.1.2 covers Hadoop Java libraries for handling HDFS files programmatically. Hadoop in Action


  7. Programmatically Defining Keyspaces and Column Families You can create keyspaces and column families through the API as well. Cassandra The Definitive Guide


  8. It is possible to use the JobConf class to programmatically set configuration properties on the executing job. Hadoop Beginners Guide


  9. If you want to embed an ActiveMQ broker inside an application, the message store can also be configured programmatically. ActiveMQ in Action


  10. Gradle also implements a strong domain model that is easily programmatically accessible. Building and Testing with Gradle



280. Matching sentences for the word ping:


  1. If a worker does not receive a ping message after a specified interval, the worker process terminates. Pregel  -  A System for Large-Scale Graph Processing


  2. We can use this plugin to send a ping to a Hadoop/HBase daemon''s RPC port, to check whether the daemon is alive. HBase Administration Cookbook


  3. Imagine a check_ping plug-in that, when executed by the Nagios scheduler, pings a host and then return the following output: Monitoring with Ganglia


  4. The counter values from individual worker machines are periodically propagated to the master (piggybacked on the ping response) MapReduce - Simplied Data Processing on Large Clusters


  5. The master node will send the ping requests to other nodes and wait for the response. Mastering ElasticSearch


  6. If an OSD has not heard from a peer recently, an explicit ping is sent. Ceph A Scalable, High-Performance Distributed File System


  7. Open a shell or Terminal window on each machine, and use the ping command to see whether it can find the other. Beginning CouchDB


  8. Creating a Rule Let’s create a rule to ping an arbitrary server and store the results of the ping. Gradle Beyond The Basics


  9. Sessions are kept alive by the client sending ping requests (also known as heartbeats) whenever the session is idle for longer than a certain period. Hadoop:The Definitive Guide 3rd Edition


  10. When a RPC client detects a tcp-socket timeout, instead of declaring a RPC timeout, it sends a ping to the RPC server. Apache Hadoop Goes Realtime at Facebook
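
The "ping" in most of these quotes is an application-level heartbeat rather than a literal ICMP echo. As a rough Java analogue (a sketch of my own; InetAddress.isReachable may use ICMP or fall back to a TCP probe depending on privileges, and example.com is just a placeholder host):

    import java.io.IOException;
    import java.net.InetAddress;

    public class PingDemo {
        public static void main(String[] args) throws IOException {
            String host = args.length > 0 ? args[0] : "example.com";  // placeholder host
            InetAddress address = InetAddress.getByName(host);

            // Best-effort reachability check with a 3-second timeout.
            boolean alive = address.isReachable(3_000);
            System.out.println(host + (alive ? " responded" : " did not respond"));
        }
    }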



281. Matching sentences for the word ascii:


  1. For example, the following command uses sflowtool to convert the binary sFlow HTTP operation data into ASCII CLF so that the operations can be visually inspected or exported to a web log analyzer such as Webalizer: Monitoring with Ganglia


  2. If they are the same, the _rev values are compared in ASCII sort order, and the highest wins. CouchDB The Definitive Guide


  3. Lowercase filter: It makes all the tokens lowercased • ASCII folding filter: It removes non ASCII parts from tokens. Mastering ElasticSearch


  4. Use assume so CLI formats the columns as ASCII text, and then fetch all the columns of the 'mynewcar' row: Cassandra High Performance Cookbook


  5. If both documents have the same number of revisions, then CouchDB simply does an ASCII comparison of the revision number and the document with the highest sort order wins. Scaling CouchDB


  6. The data is stored using a line-oriented ASCII format, in which each line is a record. Hadoop:The Definitive Guide 3rd Edition


  7. Without this clause, Hive defaults to the newline character as the row delimiter and an ASCII value. Hadoop in Action


  8. This sorts by directly comparing the bytes, validating that the input can be parsed as US-ASCII. Cassandra The Definitive Guide


  9. Our file is tab separated, but Hive, by default, expects its input files to have fields separated by the ASCII code 00 (control-A) Hadoop Beginners Guide


  10. Finally the frame is ended by the ASCII null character (^@) ActiveMQ in Action



282. Matching sentences for the word wikipedia:


  1. You may be bound by other regulatory limitations on access to data such as Payment Card Industry (PCI—http://en.wikipedia. Apache Flume Distributed Log Collection for Hadoop


  2. The Wikipedia article on B-trees is a good starting point for further investigations. CouchDB The Definitive Guide


  3. For more information about SPNEGO, please refer to its wiki page at http://en.wikipedia.org/wiki/SPNEGO. Hadoop Operations and Cluster Management Cookbook


  4. From the client point of view, document is a JSON object (see more about JSON format at http://en.wikipedia.org/wiki/JSON) Mastering ElasticSearch


  5. Suppose we want to add StackOverflow links to each Wikipedia page that is referenced in a StackOverflow comment. MapReduce Design Patterns


  6. A more in-depth explanation can be found in Wikipedia at http://en.wikipedia.org/wiki/Z-order_curve. Hadoop RealWorld Solutions Cookbook


  7. You can find more information about Bayesian classifier from http://en.wikipedia.org/wiki/Naive_Bayes_classifier and Bayes theorem from http://betterexplained.com/articles/an-intuitiveand-short-explanation-of-bayes-theorem/ Hadoop MapReduce Cookbook


  8. Wikipedia provides some good background articles on XML and JSON, if needed. Hadoop in Practice


  9. The reader is configured by setting job configuration properties to tell it the patterns for the start and end tags (see the class documentation for details).5 To take an example, Wikipedia provides dumps of its content in XML form, which are appropriate for processing in parallel with MapReduce using this approach. Hadoop:The Definitive Guide 3rd Edition


  10. The original Google paper and the Wikipedia entry use the CamelCase version MapReduce. Hadoop in Action



283. Matching sentences for the word enumeration:


  1. When you use the switch statement with enumerated constants, you need not supply the name of the enumeration in each label—it is deduced from the switch value. Core Java Volume I Fundamentals 9th Edition


  2. When the argument is create (or an equivalent enumeration value), then the index operation will fail if a document with a given identifier already exists. Mastering ElasticSearch


  3. The interface provides a public enumeration type, named ReturnCode, that is used by the filterKeyValue() method to indicate what the execution framework should do next. HBase:The Definitive Guide


  4. Obviously, this is not what we want to happen when dealing with identification codes, or any kind of precise enumeration. Elasticsearch The Definitive Guide


  5. Enumeration entries() returns an Enumeration object that enumerates the ZipEntry objects that describe the entries of the ZipFile. Core Java Volume II Advanced Features 9th Edition


  6. The basic idea behind Java’s enum types is simple: they are classes that export one instance for each enumeration constant via a public static final field. Effective Java 2nd Edition


  7. The getStatus operation is not an enumeration with some set list of possible statuses. Cassandra The Definitive Guide


  8. What just happened? The first thing we need to do in order to add new counters is to create a standard Java enumeration that will hold them. Hadoop Beginners Guide


  9. The getPropertyNames() method returns an Enumeration of all the properties on a given message to easily iterate through all of them. ActiveMQ in Action


  10. Another possibility is to use the sleep() method of an element of the TimeUnit enumeration. Java 7 Concurrency Cookbook
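
To make the Java usage concrete (a small sketch of my own, not from the books quoted above): an enum declares one constant per value, and inside a switch on that enum the case labels omit the enum type name, as the first quote describes. The RecordCounter name mimics the kind of enumeration used for custom Hadoop counters in quote 8.

    public class EnumDemo {
        // A simple enumeration, e.g. the kind used to name custom job counters.
        enum RecordCounter { VALID, MALFORMED, SKIPPED }

        static String describe(RecordCounter c) {
            switch (c) {                 // labels omit the "RecordCounter." prefix
                case VALID:     return "record parsed cleanly";
                case MALFORMED: return "record failed to parse";
                case SKIPPED:   return "record ignored by a filter";
                default:        return "unknown";
            }
        }

        public static void main(String[] args) {
            for (RecordCounter c : RecordCounter.values()) {
                System.out.println(c + " -> " + describe(c));
            }
        }
    }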



284. Matching sentences for the word percona:


  1. Two of the three (Percona Server and MariaDB) are drop-in replacements, so the effort involved in trying them out is not large. High Performance MySQL 3rd Edition


  2. In fact, for some systems, such as MariaDB or Percona, this may be your only option.) For example, to check the status of the cluster, open the MySQL client and check the status of the various parameters: high-availability-guide



285. Matching sentences for the word deletes:


  1. Each calendar receiving the message will process the invitation in its own transaction which updates the invitee''s state and deletes the message. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. The merge cost is calculated in a way that merges reclaiming more deletes and having a smaller size is favored. Mastering ElasticSearch


  3. FlumeJava automatically deletes any temporary intermediate files it creates when they are no longer needed by later operations in the execution plan. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  4. Deletes in Cassandra are implemented as special writes known as tombstones. Cassandra High Performance Cookbook


  5. Description: Deletes documents marked as deleted and old revisions of documents. Beginning CouchDB


  6. Whenever CouchDB deletes a document, it creates a new revision with _deleted field set to true. Scaling CouchDB


  7. After some interval (a minute, say), the master deletes ephemeral files that have no open file handles. Chubby - The lock service for loosely-coupled distributed systems


  8. Each segment is almost a self sufficient index by itself, immutable up to deletes. elasticsearch. The Road to a Distributed, (Near) Real Time, Search Engine


  9. It is still a common design requirement to store IDs related to other entities in your tables, but operations such as cascading deletes are not available. Cassandra The Definitive Guide


  10. Irrelevant details were elided to keep the example short.) The call to Apply performs an atomic mutation to the Webtable: it adds one anchor to www.cnn.com and deletes a different anchor. Bigtable - A Distributed Storage System for Structured Data



286. Matching sentences for the word callable:


  1. A Callable is similar to a Runnable, but it returns a value. Core Java Volume I Fundamentals 9th Edition


  2. In practice, a command named runMyAwesomeTask would be callable with the abbreviation rMAT. Gradle in Action


  3. We can also use a closure or an instance of the Callable interface to define a list of files: Gradle Effective Implementation Guide


  4. Functions in Pig are callable units of work that can be referenced in the context of specific operators. Hadoop in Practice


  5. Endpoints are dynamic extensions to the RPC protocol, adding callable remote procedures. HBase:The Definitive Guide


  6. In our example here, we are submitting a Callable object to the ExecutorService instance. Getting Started with Google Guava


  7. There are two kinds of tasks: Runnable and its close cousin, Callable (which is like Runnable, except that it returns a value) Effective Java 2nd Edition


  8. Another important advantage of the Executor framework is the Callable interface. Java 7 Concurrency Cookbook


  9. In the same way that interface Runnable describes any pure action, a Callable interface can be used to describe any service method that accepts an Object argument, returns an Object result, and may throw an Exception: Concurrent Programming in Java Design Principles and Pattern


  10. The CallableStatement in the following code executes this stored procedure that is contained within the database, passing the necessary parameters. Java 7 Recipes A Problem-Solution Approach
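
A compact illustration of the distinction the quotes draw between Runnable and Callable (my own sketch, using only the standard java.util.concurrent API): a Callable returns a value and may throw a checked exception, and submitting it to an ExecutorService yields a Future for that value.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class CallableDemo {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(2);

            // Unlike Runnable, a Callable returns a result and may throw.
            Callable<Integer> task = () -> {
                int sum = 0;
                for (int i = 1; i <= 100; i++) sum += i;
                return sum;
            };

            Future<Integer> future = pool.submit(task);   // submit returns a Future
            System.out.println("sum = " + future.get());  // get() blocks for the result
            pool.shutdown();
        }
    }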



287. Matching sentences for the word solrconfig:


  1. File name Description solrconfig.xml This is the  main configuration file of your Solr install. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  2. Solr's solrconfig.xml file contains lots of parameters that can be tweaked. Solr 3 Enterprise Search Server


  3. I'm talking about files like schema.xml, solrconfig.xml, and so on. Solr 3.1 Cookbook


  4. We'll learn more about the main Solr configuration file for a core, named solrconfig.xml, in chapter 4 Solr in action


  5. I'm talking about files such as schema.xml, solrconfig.xml, solr.xml, and so on. Apache Solr 4 Cookbook



288. Matching sentences for the word resolver:


  1. In SBT, a resolver is mapped to a URL that hosts dependencies (like Maven repositories) Scala in Action


  2. This resolver handles certain standard Spring MVC exceptions by setting a specific response status code. Pro Spring 3


  3. NameResolver’s act method can be rewritten to use loop as shown in Listing 32.4 Programming in Scala 2nd Edition


  4. If you must parse from an input stream, supply an entity resolver (see the following note) Core Java Volume II Advanced Features 9th Edition


  5. The DefaultHandler class provides an empty implementation of all the methods of the ContentHandler, ErrorHandler, DTDHandler, and EntityResolver interfaces. Java in a Nutshell 5th Edition


  6. Hive uses Java Reflection to select the correct evaluation method, and if you require a finer-grained selection, you can develop your own utility class that implements the UDFMethodResolver interface. Hadoop Beginners Guide


  7. Hostname canonicalization is the process of finding the complete, official, hostname according to the resolution system, in this case, the host’s resolver library. Hadoop Operations


  8. As an example, we will look at a new UDF, MetroResolver. Programming Pig


  9. One way to improve the performance is to load the entities into memory using an entity resolver. Java Performance


  10. You can catch these exceptions and handle the conflicts manually using a SyncResolver object. Java 7 Recipes A Problem-Solution Approach



289. Matching sentences for the word perl:


  1. He has worked with Perl/ PHP/Python, primarily for coding and MySQL/Oracle as the backend, for several years prior to moving into Hadoop fulltime. Apache Flume Distributed Log Collection for Hadoop


  2. The prospect of firing up rsync or kludging together some Perl is altogether different when 20,000 hosts are involved. Monitoring with Ganglia


  3. The multiple language code generation allows programs written in C++ or Perl to call the same methods as a Java client. Cassandra High Performance Cookbook


  4. Most of these packages (Perl, gcc, GNU make, and so on) are preinstalled on most Linux distributions. Beginning CouchDB


  5. The results, shown in Table 1, demonstrate Sawzall is significantly faster than Python, Ruby, or Perl, at least for these microbenchmarks. Interpreting the Data Parallel Analysis with Sawzall


  6. Hadoop has a contrib project (contributed in JIRA ticket HADOOP-3754) that contains a Thrift server and bindings for various client languages including Python, Ruby, and Perl. Hadoop in Practice


  7. We wrote a naive single-node Perl   hash-based program as an example of a typical quick solution a sysadmin may create. Hadoop in Action


  8. It’s increasingly popular and is used by projects such as Android, Fedora, Ruby on Rails, Perl, and many Cassandra clients (as we’ll see in Chapter 8) Cassandra The Definitive Guide


  9. If you know Perl, Python, Ruby, JavaScript, or even shell script, you can learn Pig’s syntax in the morning and be running MapReduce jobs by lunchtime. Hadoop Operations


  10. Although ActiveMQ is written in Java, APIs for many languages other than Java are provided, including C/C++, .NET, Perl, PHP, Python, Ruby, and many more. ActiveMQ in Action



290. Matching sentences for the word urlconnection:


  1. The type of object returned will vary, depending on the MIME content type of the remote resource and the available content handlers (classes responsible for processing and retrieving objects from a URLConnection) Java Network Programming and Distributed Computing


  2. ContentHandler objects are typically d indirectly using the getContent method of URL and URLConnection. Java EE 7 Specification


  3. The HttpURLConnection class that uses this method is abstract, so the actual behavior of this class may be overridden in a particular implementation. Java Security 2nd Edition


  4. The URL and URLConnection classes encapsulate much of the complexity of retrieving information from a remote site. Core Java Volume II Advanced Features 9th Edition


  5. At a slightly more complex level, a URLConnection object can be obtained from a given URL object. Java in a Nutshell 5th Edition


  6. Chapter 7, URLConnections, shows you how to use the URLConnection and HttpURLConnection classes not just to download data from web servers, but to upload documents and configure connections. Java Network Programming 4th Edition
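
A minimal download sketch using the classes these quotes describe (my own example; http://example.com/ is a placeholder URL):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class UrlConnectionDemo {
        public static void main(String[] args) throws IOException {
            URL url = new URL("http://example.com/");            // placeholder URL
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setConnectTimeout(5_000);

            System.out.println("Status: " + conn.getResponseCode());
            System.out.println("Content-Type: " + conn.getContentType());

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);                    // response body
                }
            } finally {
                conn.disconnect();
            }
        }
    }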



291. Matching sentences for the word validator:


  1. Use the CLI to see that the data is stored in a single column: [default@parking] assume parking validator as ascii; Cassandra High Performance Cookbook


  2. ValidatorFactory for ValidatorFactory, or by requesting the injection of an object of the appropriate type via the Resource annotation. Java EE 7 Specification


  3. This command runs the SortValidator program, which performs a series of checks on the unsorted and sorted data to check whether the sort is accurate. Hadoop:The Definitive Guide 3rd Edition


  4. This is achieved by way of a nested f:convert er and f:validator, just like with any h:inputText, and can be done as shown: Java EE 7 Essentials


  5. Several HTML validity checkers are available for download and you can validate HTML online [W3C-validator] Effective Java 2nd Edition


  6. Obtain a Validator from the Schema, and then use the Validator to validate your XML document. Java in a Nutshell 5th Edition


  7. Once the rows have been added to the validator, if the column family needs to be validated, it will create the Merkle tree and broadcast it to the neighboring nodes. Cassandra The Definitive Guide


  8. The example has two UserValidator objects that return a random boolean value. Java 7 Concurrency Cookbook


  9. The following code snippet shows how to use a validator to validate a document using SAX. Java Performance


  10. You can use the Schema instance to retrieve a Validator instance that understands this grammar. Java 7 Recipes A Problem-Solution Approach
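
Quotes 6 and 10 refer to the javax.xml.validation API: obtain a Schema, ask it for a Validator, and validate a document. The sketch below follows that sequence; the file names record.xsd and record.xml are placeholders of my own.

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;
    import org.xml.sax.SAXException;

    public class XmlValidatorDemo {
        public static void main(String[] args) throws Exception {
            File schemaFile = new File("record.xsd");   // placeholder schema file
            File xmlFile = new File("record.xml");      // placeholder document

            // Build a Schema, obtain a Validator from it, then validate the document.
            SchemaFactory factory =
                    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(schemaFile);
            Validator validator = schema.newValidator();
            try {
                validator.validate(new StreamSource(xmlFile));
                System.out.println(xmlFile + " is valid against " + schemaFile);
            } catch (SAXException e) {
                System.out.println("Validation failed: " + e.getMessage());
            }
        }
    }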



292. Matching sentences for the word hypervisor:


  1. The hypervisor acts as a virtual switch, connecting virtual machines to each other and to the physical network. Monitoring with Ganglia


  2. This refers to the time during which something was runnable on the virtual machine, but the hypervisor chose to run something else instead. High Performance MySQL 3rd Edition


  3. The nova-compute process is primarily a worker daemon that creates and terminates virtual machine instances via hypervisor''s APIs (XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware, etc.) bk-compute-adminguide-grizzly


  4. A worker daemon that creates and terminates virtual machine instances through hypervisor APIs. bk-admin-guide-cloud-latest


  5. The Vagrant file simply defines the hypervisor-level configuration and provisioning; the remaining provisioning is done through Puppet and is defined at two levels. Storm Real-Time Processing Cookbook


  6. The piece of software that enables multiple guests, for example operating systems, to run on a single system is called a hypervisor. Oracle JRockit The Definitive Guide


  7. Virtualization works by running a hypervisor either in a host OS or directly on bare metal, replacing the host OS entirely. Hadoop Operations


  8. To view the list of valid compute hosts, use the nova hypervisor-list command, as follows: admin-user-guide-trunk


  9. The actual device name might differ due to hypervisor settings. user-guide


  10. Probably the most important factor in your choice of hypervisor is your current usage or experience. OpenStack Operations Guide



293. Matching sentences for the word pojo:


  1. Imagine you want to have a codec that decodes bytes to some kind of message (a POJO) and encodes the message back to bytes. Netty in Action MEAP V08


  2. You’ll create a new Java class called ToDoItem, a plain old Java object (POJO) acting as a model. Gradle in Action


  3. This is a simple POJO (plain old Java object) with three properties—nothing much to it. Scala in Action


  4. There is little else to configure: just add a dependency to the test framework (or frameworks) your tests are using, and Maven will automatically look for and execute the JUnit, TestNG, or even Plain Old Java Objects (POJO) tests contained in this directory structure. Jenkins The Definitive Guide


  5. Enables a WebSocket client and server endpoint to be defined declaratively via annotations on a POJO, or programmatically via interface implementation. Java EE 7 Essentials


  6. A configurable POJO is associated with the ActiveMQ connection that you can tune to set different policies. ActiveMQ in Action


  7. Arcane interfaces for using persistent entities have been removed, creating a simplified Plain Old Java Object (POJO) based persistence model. Java Performance


  8. I created a constraint object called MyCellConstraint which is a plain old Java object (POJO) used when calling the add(JComponent comp, Object constraint) method of a JPanel component. Java 7 Recipes A Problem-Solution Approach


  9. Because the objects were regular Java objects, this persistence model came to be known as Plain Old Java Object (POJO) persistence. Pro JPA2


  10. Formally, each of these kind of fields is known as a property in Gradle since we are saving state in POJO objects under the hood. Building and Testing with Gradle



294. Matching sentences for the word traversal:


  1. When you add many components into a window, you need to give some thought to the traversal order. Core Java Volume I Fundamentals 9th Edition


  2. There is also a version of parallelDo() that allows multiple output PCollections to be produced simultaneously from a single traversal of the input PCollection. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  3. For example, Cascalog adds a logic-based query capability inspired by Datalog, while Scalding adds math libraries that are useful for graph-traversal problems and many machine-learning algorithms. Professional Hadoop Solutions


  4. For example, owner and mode are required for the security check during path traversal but rarely change, requiring very few states, while the file lock reflects a wider range of client access modes as it controls an MDS’s ability to issue client capabilities. Ceph A Scalable, High-Performance Distributed File System


  5. Because of the limited set of methods available in the Iterator interface, it is hard to use the Iterator for anything more than a simple traversal through the Collection. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. Computer scientists also call this postorder traversal because the search process visits the children before visiting the parents. Core Java Volume II Advanced Features 9th Edition


  7. The serialization logic has no knowledge of the topology of the object graph, so it must go through an expensive graph traversal. Effective Java 2nd Edition


  8. It performs a depth first traversal, starting at a specific root directory. Java 7 New Features Cookbook


  9. The most fundamental graph traversal algorithms are depth-first search (DFS) and breadth-first search (BFS).The difference between the algorithms is the ordering in which a node is processed in relationship to its neighbors. Hadoop Beginners Guide


  10. However, this strategy doesn't work for another common usage of collections, traversal. Concurrent Programming in Java Design Principles and Pattern
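
Quote 9 contrasts depth-first and breadth-first traversal. The sketch below (my own, not from the cited books) shows a breadth-first traversal over a small adjacency-list graph; swapping the queue for a stack would give the depth-first ordering instead.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Queue;
    import java.util.Set;

    public class TraversalDemo {
        // Breadth-first traversal: visit a node's neighbors before their neighbors.
        static List<String> bfs(Map<String, List<String>> graph, String start) {
            List<String> order = new ArrayList<>();
            Set<String> seen = new HashSet<>();
            Queue<String> queue = new ArrayDeque<>();
            seen.add(start);
            queue.add(start);
            while (!queue.isEmpty()) {
                String node = queue.remove();
                order.add(node);
                for (String next : graph.getOrDefault(node, List.of())) {
                    if (seen.add(next)) {      // enqueue each node only once
                        queue.add(next);
                    }
                }
            }
            return order;
        }

        public static void main(String[] args) {
            Map<String, List<String>> graph = new HashMap<>();
            graph.put("a", List.of("b", "c"));
            graph.put("b", List.of("d"));
            graph.put("c", List.of("d"));
            System.out.println(bfs(graph, "a"));   // [a, b, c, d]
        }
    }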



295. Matching sentences for the word pdf:


  1. Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub. Apache Flume Distributed Log Collection for Hadoop


  2. Such a problem often occurs when Hadoop is used to store and retrieve large items, such as PDF files, large data samples, images, movies, or other multimedia data. Professional Hadoop Solutions


  3. Aside from his work at Core, Joe has worked on several successful web development projects, including Learn French by Podcast, a language-learning web site that offers its members free highquality MP3 podcasts and allows them to purchase accompanying professional PDF guides for a small fee. Beginning CouchDB


  4. This example uses PNG, but GNU plot supports many other terminals like SCREEN, PDF, EPS, and so on. Hadoop MapReduce Cookbook


  5. Nutch provides parsers for documents in many popular formats, such as HTML, PDF, Open Office and Microsoft Office, RSS, and others. Hadoop:The Definitive Guide 3rd Edition


  6. See Eric Brewer’s original paper on this topic and the follow-up post by Coda Hale, as well as this PDF by Gilbert and Lynch. HBase:The Definitive Guide


  7. The MIME standard defines common data formats such as "image/jpeg" and "application/pdf" Core Java Volume II Advanced Features 9th Edition


  8. Because the Times had stored its older articles as scanned TIFF images , they needed image processing to combine different pieces of each article together into a single file in the desired PDF format. Hadoop in Action


  9. You can read the original Phi Accrual Failure Detection paper on which Cassandra’s design is based at http://ddg.jaist.ac.jp/pub/HDY+04.pdf. Cassandra The Definitive Guide


  10. This book was written and typeset by the author using Adobe FrameMaker, Acrobat, and Distiller on Macintosh and PC platforms, and supplied to the publisher and printer as an Adobe Portable Document Format (PDF) file. Fundamental Networking in Java



296. Matching sentences for the word operands:


  1. When two values are combined with a binary operator (such as n + f where n is an integer and f is a floating-point value), both operands are converted to a common type before the operation is carried out. Core Java Volume I Fundamentals 9th Edition


  2. Italic is used for Java Virtual Machine "assembly language", its opcodes and operands, as well as items in the Java Virtual Machine''s run-time data areas. The Java Virtual Machine Specification Java SE 7 Edition


  3. The most common use for operators that modify their operands is to generate the side effect, but you should keep in mind that the value produced is available for your use, just as in operators without side effects. Thinking in Java 4th Edition


  4. The leaves of an expression tree are operands, such as constants or variable names, and the other nodes contain operators. Data Structures and Algorithm Analysis in Java 3rd Edition


  5. Numeric promotions are used to convert the operands of a numeric operator to a common type where an operation can be performed. The Java Language Specification Java SE 7 Edition


  6. All bytecode operations, with few exceptions, are computed on an evaluation stack by popping operands from the stack, executing the operation and pushing the result back to the stack. Oracle JRockit The Definitive Guide


  7. In a bitwise AND operation, each bit of the operands are Anded with each other. Java 7 New Features Cookbook


  8. An operator is a key part of Java syntax: it performs a computation on one or two operands to produce a new value. Java in a Nutshell 5th Edition


  9. Other constraints that check the type and number of arguments for other instructions requires dynamic analysis of the code to determine which operands will be present on the expression stack during execution. Java Performance


  10. If the operands are integer types, the quotient of the division is returned. Programming Hive
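
Several of these quotes describe how the operands of a binary operator are converted (promoted) to a common type before the operation is carried out, and how integer operands give integer division. A tiny self-contained illustration; the variable names and values are arbitrary.

```java
public class OperandPromotion {
    public static void main(String[] args) {
        int n = 7;
        float f = 2.5f;

        // Before n + f is evaluated, the int operand n is converted to float,
        // so the whole expression is computed in floating point.
        float sum = n + f;          // 9.5f

        // With two integer operands the division is integer division:
        int quotient = 7 / 2;       // 3, the fractional part is discarded

        // Promoting one operand changes the result type and value:
        double precise = 7 / 2.0;   // 3.5

        System.out.println(sum + " " + quotient + " " + precise);
    }
}
```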



297. Matching text for the word gmetad:


  1. These metrics are then subsequently polled to Ganglia Meta Daemon (Gmetad) servers, where the metrics are computed and saved in round-robin, time-series databases using round-robin database tool (RRDtool) HBase Administration Cookbook


  2. I should mention here that Federico Sacerdoti was heavily involved in the implementation of gmetad and wrote a nice academic paper2 highlighting the strength of its design. Monitoring with Ganglia


  3. Ganglia is composed of two servers: the gmetad server, which provides historical data and collects current data, and the gmond server, which collects and serves current statistics. Pro Hadoop


  4. Add all the hostnames in the cluster to the gmetad configuration /etc/ganglia/ gmetad.conf file, for example, this file should contain the following: Hadoop Operations and Cluster Management Cookbook


  5. You will also need the Ganglia meta daemon (gmetad) running on at least one node, and another node running the Ganglia web frontend. Hadoop RealWorld Solutions Cookbook


  6. Ganglia meta daemon (gmetad) The meta daemon is installed on a central node and acts as the federation node to the entire cluster. HBase:The Definitive Guide


  7. Each gmond process relays data to a central gmetad process that records data in a series of RRD, or round-robin database files, which are fixed-size files that efficiently store time series data. Hadoop Operations



298. Matching text for the word hashes:


  1. In this way, the hash table is just a mapping from hashes to backend database URIs. CouchDB The Definitive Guide


  2. The Random Partitioner hashes the user supplied key and uses the result to place this data on nodes in the cluster. Cassandra High Performance Cookbook


  3. When the high write levels are detected, Ceph hashes the shared directory and relaxes the directory’s mtime coherence to distribute the workload across all MDS nodes. Ceph A Scalable, High-Performance Distributed File System


  4. Redis supports many other data structures, including hashes, lists, and sorted sets. Hadoop RealWorld Solutions Cookbook


  5. Gradle relies heavily on hashes to optimize download performance; when resolving a dependency, it will first attempt to download the much-smaller SHA from the repo, skipping the download if it determines that it already has that content in the artifact cache. Gradle Beyond The Basics


  6. The hashes are the same as those you generated with the write. Hadoop in Practice


  7. The default partitioner is HashPartitioner, which hashes a record’s key to determine which partition the record belongs in. Hadoop:The Definitive Guide 3rd Edition


  8. The last native JSON datatype that we need to discuss is the object—known in other languages as hashes, hashmaps, dictionaries or associative arrays. Elasticsearch The Definitive Guide


  9. By default, Hadoop will use a strategy that hashes the output key to perform the partitioning. Hadoop Beginners Guide


  10. Note that the field has many distinct values, leading to large internal hash-tables; computing the hashes themselves on possibly large strings is already computationally quite expensive. PowerDrill - Processing a Trillion Cells per Mouse Click
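
Two of the quotes note that Hadoop's default partitioner hashes a record's key to decide which partition receives it. The sketch below shows that idea as a plain static method rather than the actual Hadoop Partitioner API; the class and method names are illustrative.

```java
public class HashPartitioning {

    // Map a key to one of numPartitions buckets using its hash code.
    // Masking with Integer.MAX_VALUE keeps the value non-negative even
    // when hashCode() returns a negative number.
    static int partitionFor(Object key, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        String[] keys = {"alpha", "beta", "gamma", "alpha"};
        for (String key : keys) {
            // Identical keys always land in the same partition.
            System.out.println(key + " -> partition " + partitionFor(key, 4));
        }
    }
}
```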



299. Matching text for the word virtualization:


  1. The libvirt project has created a common set of tools for managing virtualization resources on different virtualization platforms, currently including: Xen, QEMU, KVM, LXC, OpenVZ, User Mode Linux, VirtualBox, and VMware ESX and GSX. Monitoring with Ganglia


  2. In contrast, a SAN can be great when you don’t demand a lot of high-performance random I/O, but you are interested in features such as snapshots, storage consolidation, data deduplication, and virtualization. High Performance MySQL 3rd Edition


  3. Server virtualization is the partitioning of a physical server into smaller virtual servers to help maximize the server resources. Gradle in Action


  4. Virtualization Virtualization is a great way to get cheap hardware and be able to expand fast. MongoDB The Definitive Guide 2nd Edition


  5. It may seem strange, but think about virtualization and the modern world, where the operating system cannot determine on which physical machine it works. Mastering ElasticSearch


  6. Some virtualization types are better than others, but none come without an impact on performance. HBase in Action


  7. Billing for resources consumed, including the cost of hardware, virtualization, network provision, and even the costs of software development and operations (since these are a simply logical extension of the system we see here) Graph Databases


  8. Since 2008, he works for Oracle on a fast virtualization platform. Oracle JRockit The Definitive Guide


  9. Blades, SANs, and Virtualization The large-scale data storage and processing industry moves in cycles. Hadoop Operations


  10. It is also a good practice in the setup and configuration of a benchmark to consider leveraging operating system capabilities such as processor sets, or partitioning off portions of a system so they are viewed as distinct systems, such as virtualization or Oracle Solaris zones. Java Performance



300. Matching text for the word hfile:


  1. This includes putting data directly into an HBase table and writing to HFile format files on Hadoop Distributed File System (HDFS) HBase Administration Cookbook


  2. A storage file format, HFile, is under development and due for Hbase version 0.20.0, and is expected to solve many of the performance and reliability issues. Pro Hadoop


  3. The implementation of HFile in a region server is responsible for reading and writing HFiles to and from HDFS. Professional Hadoop Solutions


  4. Its contents are flushed to disk to form an HFile when the MemStore fills up. HBase in Action


  5. All columns in a column family are stored together in the same lowlevel storage file, called an HFile. HBase:The Definitive Guide


  6. When the MemStore reaches a certain threshold it is written out as an HFile. Apache Hadoop Goes Realtime at Facebook


  7. The HFile writer has a straightforward write path, with four elements: Hadoop in Action



301. Matching text for the word jconsole:


  1. While JMX is fine for casual browsing of metric values, the number of eyeballs looking at JConsole doesn't scale when you have hundreds or even thousands of servers sending data all over the place. Apache Flume Distributed Log Collection for Hadoop


  2. It is also possible to change logging levels in a running program by using the jconsole program. Core Java Volume I Fundamentals 9th Edition


  3. Use the following command to start the monitor user interface: jconsole. Hadoop Operations and Cluster Management Cookbook


  4. Also described are the Java profiling tools such as JConsole. Cassandra High Performance Cookbook


  5. The JDK comes with a tool called JConsole for viewing MBeans in a running JVM. Hadoop:The Definitive Guide 3rd Edition


  6. JConsole Java ships with a helper application called JConsole, which can be used to connect to local and remote Java processes. HBase:The Definitive Guide


  7. The jconsole application that is available as part of the JDK, provides an alternative technique for determining which MXBeans are available. Java 7 New Features Cookbook


  8. You may start jconsole with no local or remote process specified and use its Connection menu to establish connections. Java in a Nutshell 5th Edition


  9. To do this, open a console, start the jconsole application, and connect to Cassandra via JMX. Cassandra The Definitive Guide


  10. It shows how to configure ActiveMQ for JMX monitoring and demonstrates this using JConsole. ActiveMQ in Action



302. Matching text for the word tasktracker:


  1. You can also check the detailed statistics of your tasks per job by opening the Tasktracker component of Apache Hadoop. Web Crawling and Data Mining with Apache Nutch


  2. Jobtracker and tasktracker tunable parameters The JobTracker is the server that handles the management of the queued and executing jobs. Pro Hadoop


  3. Another two EC2 small instances are used with roles hadoop-datanode and hadoop-tasktracker in each instance. Hadoop MapReduce Cookbook


  4. There are two types of client: external clients (run by users) and internal clients (run on the cluster as a part of the system, e.g., datanode and tasktracker daemons) Hadoop:The Definitive Guide 3rd Edition


  5. Tasktracker The process responsible for running map or reduce tasks from the jobtracker. Cassandra The Definitive Guide


  6. Daemons There are two major daemons in Hadoop MapReduce: the jobtracker and the tasktracker. Hadoop Operations


  7. Core Instance Group The nodes in the core instance group have the same function as Hadoop slave nodes that run both the datanode and tasktracker daemons. Programming Hive



303. Matching text for the word subdirectory:


  1. This option, in turn, defaults to a subdirectory named Ganglia in the directory where libganglia will be installed. Monitoring with Ganglia


  2. Cluster Administration Tools The Hadoop Core installation provides a number of scripts in the bin subdirectory of the installation that are used to stop and start the entire cluster or various pieces of the cluster. Pro Hadoop


  3. In addition, you can specify a directory and CouchApp will load all the files in this directory and any subdirectory. CouchDB The Definitive Guide


  4. Because I don’t want to make this example too complex to grasp, you’ll just copy it to the subdirectory build/backup. Gradle in Action


  5. I will now show how to change to the source subdirectory inside this new directory and change the permissions of a few files to make sure they are ready for the build process: Beginning CouchDB


  6. Whirr generates a script that we can use to start this proxy, under a subdirectory named after your Hadoop cluster inside the ~/.whirr directory. Hadoop MapReduce Cookbook


  7. The extra trailing /foo subdirectory is needed on the destination, because now the contents of the source directory are copied to the contents of the destination directory. Hadoop:The Definitive Guide 3rd Edition


  8. A task can have more than one attempt if the first one fails.) These user log files are under the /logs/userlogs subdirectory. Hadoop in Action


  9. Now we have a subdirectory named after the Git project so that we can build the project and start using it. Cassandra The Definitive Guide


  10. If you include in a JAR file class the files from a subdirectory, the class may not be stored with the path you expect. Hadoop Beginners Guide



304. Matching text for the word parses:


  1. A user request comes in, and the web server takes care of the networking and parses the HTTP request. CouchDB The Definitive Guide


  2. The compiler parses the Groovy source files and generates stubs for them. Gradle in Action


  3. It parses key/value pairs out of the input data and passes each pair to the user-defined Map function. MapReduce - Simplied Data Processing on Large Clusters


  4. ElasticSearch parses the input as lines, where every pair of lines contains information about the target index along with additional parameters and a query itself. Mastering ElasticSearch


  5. The program parses the input record and then does a relatively sophisticated emit statement. Interpreting the Data Parallel Analysis with Sawzall


  6. It parses out all the links in every Web page and stores important information about them in an anchors file. The anatomy of a large-scale hypertextual Web search engine


  7. The query server parses the query, applies optimizations and sends the plan to the master for execution. Tenzing - A SQL Implementation On The MapReduce Framework


  8. As we know there is only one file in the cache we feel safe in using the first index in this array, and pass that to a utility method that parses the file and uses the contents to populate the state abbreviation lookup map. Hadoop Beginners Guide


  9. We can do this using the receive() method, which reads frames from the TCP socket and parses them. ActiveMQ in Action


  10. In the background, the “engine” provides an abstraction layer for the UI based on SQL: the user constructs charts via drag’n’drop operations, they get translated to group-by SQL queries, which the engine parses and processes. PowerDrill - Processing a Trillion Cells per Mouse Click
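
The MapReduce quote above describes parsing key/value pairs out of input data and passing each pair onward. A small sketch of that kind of line parsing, assuming tab-separated "key<TAB>value" input; the format and names are assumptions for illustration only.

```java
import java.util.AbstractMap;
import java.util.Map;

public class KeyValueParser {

    // Parses a single "key<TAB>value" line; returns null for malformed input.
    static Map.Entry<String, String> parseLine(String line) {
        int tab = line.indexOf('\t');
        if (tab < 0) {
            return null; // no separator found
        }
        String key = line.substring(0, tab);
        String value = line.substring(tab + 1);
        return new AbstractMap.SimpleEntry<>(key, value);
    }

    public static void main(String[] args) {
        Map.Entry<String, String> pair = parseLine("user42\tclicked=3");
        System.out.println(pair.getKey() + " -> " + pair.getValue());
    }
}
```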



305. Matching text for the word recursively:


  1. To recursively list files in the home directory, we can use the command hadoop fs -lsr ... Hadoop Operations and Cluster Management Cookbook


  2. The procedure is invoked recursively on nested records (Line 18) Dremel - Interactive Analysis of WebScale Datasets


  3. There are convenience methods for converting paths into project-relative java.io.File objects, making collections of files, and recursively turning directory trees into file collections. Gradle Beyond The Basics


  4. The –R option recursively sets the replication factor for files and directories within a directory. Hadoop MapReduce Cookbook


  5. PageRank handles both these cases and everything in between by recursively propagating weights through the link structure of the Web. The anatomy of a large-scale hypertextual Web search engine


  6. Both files and directories (which are scanned recursively for LZOP files) are supported: Hadoop in Practice


  7. In practice we don’t need to check on all files recursively, and we may restrict ourselves to what’s in our own working directory. Hadoop in Action


  8. This can be done by using the -setrep command, which takes a replication factor and an optional flag (-R) to indicate it should operate recursively (see Example 2-5) Hadoop Operations


  9. For example, STOCKS.> means the entry applies to all destinations in the STOCKS path recursively. ActiveMQ in Action


  10. This rewrite can be applied recursively, to support deeper trees. PowerDrill - Processing a Trillion Cells per Mouse Click
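
Several quotes apply operations recursively to directory trees (hadoop fs -lsr, -setrep -R, scanning subdirectories for LZOP files). A minimal local-filesystem sketch of the same pattern using a plain recursive method, not any Hadoop API; the class name is illustrative.

```java
import java.io.File;

public class RecursiveListing {

    // Prints every entry under the given directory, descending into subdirectories.
    static void listRecursively(File dir, String indent) {
        File[] children = dir.listFiles();
        if (children == null) {
            return; // not a directory, or not readable
        }
        for (File child : children) {
            System.out.println(indent + child.getName());
            if (child.isDirectory()) {
                listRecursively(child, indent + "  "); // recurse into the subdirectory
            }
        }
    }

    public static void main(String[] args) {
        listRecursively(new File(args.length > 0 ? args[0] : "."), "");
    }
}
```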



306. Matching text for the word subflow:


  1. However, in the subflow, when selecting the customer, the information about the order should be available because it contains some filter criteria (for example, an international order can be placed only by customers living in certain countries) Pro Spring 3


  2. The other states are decision state, action state, subflow state, and end state. Pro Spring MVC


  3. The start state is a marker state that designates another state (view, action, decision, end, subflow) defined in the flow as being the flow’s starting point. Spring in Practice


  4. The boxes represent various states (the View state, Action state, Decision state, Subflow state, and others), and the arrows represent transitions. Pro Spring Security


  5. Subflow A subflow state starts a new flow in the context of a flow that is already underway. Spring in Action 4th Edition



307. Matching text for the word configures:


  1. The globals section configures the general characteristics of the daemon itself. Monitoring with Ganglia


  2. Configures each subproject by project path; actual configuration happens in the closure. Gradle in Action


  3. In addition, the master node is a critical failure point in a default cluster configuration, which configures only one master node. Hadoop Operations and Cluster Management Cookbook


  4. The build script configures the Project object, and finally the set of tasks to be executed is determined and executed. Gradle Effective Implementation Guide


  5. After validating the authenticity of the lease [R7], the NM configures the environment for the container, including initializing its monitoring subsystem with the resource constraints specified in the lease. Apache Hadoop YARN Yet Another Resource Negotiator


  6. To recover from a failed namenode in this situation, an administrator starts a new primary namenode with one of the filesystem metadata replicas and configures datanodes and clients to use this new namenode. Hadoop:The Definitive Guide 3rd Edition


  7. When Elasticsearch detects a new string field in your documents, it automatically configures it as a full text string field and analyzes it with the standard analyzer. Elasticsearch The Definitive Guide


  8. The driver instantiates, configures, and passes a JobConf object named job to JobClient. Hadoop in Action


  9. Since the tasktracker runs as an unprivileged user (user mapred, by default, in the case of CDH and whatever user the administrator configures in Apache Hadoop), it isn’t possible for it to launch task JVMs as a different user. Hadoop Operations


  10. As you can see, listing 7.2 instantiates the BrokerService and configures the brokerName and dataDirectory properties. ActiveMQ in Action



308. Matching text for the word lang:


  1. Whenever you use a class that is not defined in the basic java.lang package, you need to use an import directive. Core Java Volume I Fundamentals 9th Edition


  2. The assumptions are that the hadoopprobook and commons-lang JARs are in the current working directory. Pro Hadoop


  3. Your Java source code has a dependency on a class from the external library Apache Commons Lang. Gradle in Action


  4. In the preceding example, ElasticSearch will take the value of the lang field and will use it as the analyzer for that document. Mastering ElasticSearch


  5. The following snippet shows how we could define and use a trim UDF that uses the Apache Commons Lang StringUtils class. Hadoop:The Definitive Guide 3rd Edition


  6. Minimally, a Java client needs the following JAR files specified in its CLASSPATH, when connecting to HBase: hbase, hadoop-core, zookeeper, log4j, commons-logging, and commons-lang. HBase:The Definitive Guide


  7. For more sophisticated caches, you may need to use java.lang.ref directly. Effective Java 2nd Edition


  8. Using a string reference variable that is assigned a null value will result in a java.lang. Java 7 New Features Cookbook


  9. Primitive wrapper classes These classes are conceptually similar to the primitive wrapper classes, such as Integer and Long found in java.lang. Hadoop Beginners Guide


  10. The lang and country arguments are optional; if omitted, the default locale is used. Programming Hive
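
The first quote points out that classes outside the basic java.lang package need an import directive, while later quotes pull in the external Apache Commons Lang library. A tiny example of the first point, using only java.lang types and therefore no imports at all; the values are arbitrary.

```java
public class LangDemo {
    public static void main(String[] args) {
        // String, Integer, and Math all live in java.lang, so no import directive is needed.
        String raw = "  42  ";
        Integer parsed = Integer.valueOf(raw.trim());
        System.out.println(Math.max(parsed, 0));
    }
}
```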



309. Matching text for the word subquery:


  1. There were rumors or promises of many new features, such as online backups and server-level foreign keys for all storage engines, subquery improvements, and thread pooling. High Performance MySQL 3rd Edition


  2. The CLUSTER BY clause in the sub-query specifies the output columns that are hashed on to distributed the data to the reducers and finally the REDUCE clause specifies the user program to invoke (python wc_reduce.py  in this case) on the output columns of the subquery. Hive A Petabyte Scale Data Warehouse Using Hadoop


  3. As mentioned earlier, if the job is to return all the twits for all the users you follow, you can’t rely on a join or subquery to do this, as you can in SQL: HBase in Action


  4. The second subquery calculates the second leg of the route, which comprises the path from the end location to another delivery center elsewhere in the network. Graph Databases


  5. Consider this IN subquery, which finds all the items in the things table that are in the sales table: Hadoop:The Definitive Guide 3rd Edition


  6. A correlated subquery contains a reference to the outer query and so. SQL Antipatterns


  7. A clause is a subquery that can be optional, required, or prohibited. Lucene in Action 2nd Edition


  8. Each subquery of the union query must produce the same number of columns, and for each column, its type must match all the column types in the same position. Programming Hive


  9. A subquery is a complete select query inside a pair of parentheses that is embedded within a conditional expression. Pro JPA2


  10. The optimizer needs quick access to various parts of the WHERE clause, table, field, and key lists, ORDER BY and GROUP BY expressions, subquery structuring, and other data. Understanding MySQL Internals



310. Matching text for the word tasklet:


  1. In the job definition, only one step is defined, which contains one tasklet and chunk process. Pro Spring 3


  2. The chunk element C configures the chunk-processing step, in a step element, which is itself in a tasklet element. Spring Batch in Action


  3. The most common way to implement a step is delegating to a tasklet, which defines the activity that must be performed, leaving the proper step implementation to deal with the boilerplate aspects of the execution (maintaining state, sending events, and so on) Spring Integration in Action


  4. Processing and Loading Data from a Database To process and load data from a relational database to HDFS, we need to configure a Spring Batch tasklet with a JdbcItemReader and a HdfsTextItemWriter. Spring Data Modern Data Access for Enterprise Java



311. Matching text for the word aggregated:


  1. A graph mining algorithm might simply output aggregated statistics mined from the graph. Pregel  -  A System for Large-Scale Graph Processing


  2. The gweb Aggregated Graphs Tab Aggregate graphs (Figure 4-11) allow you to create composite graphs combining different metrics. Monitoring with Ganglia


  3. That is, an integer; a string; or a small, fixed-size list or object that includes an aggregated value (or values) from the values argument. CouchDB The Definitive Guide


  4. After running gradle build, you’ll find the aggregated HTML test report under the directory build/reports/test. Gradle in Action


  5. Enterprise-level switches often support Link Aggregation Groups (LAG), which bundle multiple interfaces together in an active/active fashion to make a single logical interface that is as fast as the sum of the links aggregated. Cassandra High Performance Cookbook


  6. In all the examples so far, you have aggregated by department. Beginning CouchDB


  7. Solid arcs represent data flowing from the analysis machines to the aggregators; dashed arcs represent the aggregated data being merged, first into one file per aggregation machine and then to a single final, collated output file. Interpreting the Data Parallel Analysis with Sawzall


  8. Good keyword search is often not enough: you need some aggregated data, so you can narrow the results down to what you're interested in. Elasticsearch in Action MEAP V07 Chapter 1


  9. However, shared storage services like those described in Section 6.5 require aggregated information as soon as possible during a sudden spike in user activity. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  10. Read requests are issued on these multiple HFiles in parallel & aggregated for a final result. Apache Hadoop Goes Realtime at Facebook



312. Matching text for the word ivy:


  1. Make sure that the HBasegora-hbase dependency is available in ivy.xml. Web Crawling and Data Mining with Apache Nutch


  2. It will be helpful to have some knowledge of tools like Ant, Ivy, and Maven; practices like continuous integration and delivery; and concepts like dependency management. Gradle in Action


  3. Maven and Ivy repositories are supported to publish or fetch dependencies. Gradle Effective Implementation Guide


  4. Apache Ivy is a dependency manager with flexibility and configurability. Scala in Action


  5. They also play a key role for non-Maven projects using tools like Ivy and Gradle, both of which rely on standard Maven repositories. Jenkins The Definitive Guide


  6. There are three kinds of repositories currently supported by Gradle: Maven repositories, Ivy repositories, and static directories. Gradle Beyond The Basics


  7. It is straightforward to use another build tool, such as Gradle or Ant with Ivy, as long as you use the same set of dependencies defined here.) For building MapReduce jobs you only need to have the hadoop-core dependency, which contains all the Hadoop classes. Hadoop:The Definitive Guide 3rd Edition


  8. The better approach is to let Ivy load the appropriate version from the remote repositories, and then compile Hive normally. HBase:The Definitive Guide


  9. For example, Apache Ivy is used to build Cassandra, and a popular client (Hector) is available via Git. Cassandra The Definitive Guide


  10. Apache Ivy later added badly needed dependency management to Java builds, but still didn’t address the lack of conventions inherent in Ant. Building and Testing with Gradle



313. Matching text for the word matcher:


  1. Without such an element, significant time could be spent nailing down the right pattern matcher, as it would involve rerunning your test plan several times with various inaccurate expressions, hoping it eventually matches. Performance Testing With JMeter 2.9


  2. HBase solves the problem by using a QueryMatcher in combination with a ColumnTracker, which comes in a few variations: one for explicit matching, for when you specify a list of columns to retrieve, and another that includes all columns. HBase:The Definitive Guide


  3. Then, get a Matcher object from the pattern and call its matches method: Core Java Volume II Advanced Features 9th Edition


  4. Working with strings; specifically, how to perform common operations such as removing parts of a string, matching strings, and more using the CharMatcher and Strings classes. Getting Started with Google Guava


  5. These are defined in the documentation for the Files class' getPathMatcher method. Java 7 New Features Cookbook


  6. Regular expression support is provided by the Pattern and Matcher. Java in a Nutshell 5th Edition


  7. By running the output through a pattern matcher to extract the GC times and amount freed and totalling those values, you can identify the cost of GC to your application. Java Performance Tuning 2nd Edition


  8. Use the regular expression Pattern and Matcher classes for a better performing and more versatile matching solution than the String matches() method. Java 7 Recipes A Problem-Solution Approach


  9. In Java 8, you can use the names in the start, end, and group methods of Matcher: Java SE 8 for the Really Impatient


  10. The power of Groovy’s operators can also be leveraged inside AntBuilder calls, such as using the regular expression matcher inside an evaluation of all System properties in Example 3-19 Building and Testing with Gradle
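
Several quotes recommend compiling a Pattern and reusing a Matcher instead of calling String.matches() repeatedly. A minimal sketch of that usage; the date pattern and input string are arbitrary examples.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MatcherDemo {
    public static void main(String[] args) {
        // Compile the pattern once; reuse it for every input string.
        Pattern datePattern = Pattern.compile("(\\d{4})-(\\d{2})-(\\d{2})");

        Matcher matcher = datePattern.matcher("released on 2013-05-29");
        if (matcher.find()) {
            // group(0) is the whole match; group(1..3) are the captured parts.
            System.out.println("year=" + matcher.group(1)
                    + " month=" + matcher.group(2)
                    + " day=" + matcher.group(3));
        }

        // matches() requires the *entire* input to match the pattern.
        System.out.println(datePattern.matcher("2013-05-29").matches()); // true
    }
}
```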



314. Matching text for the word ioexception:


  1. The code that tries to process the information in a file therefore will need to notify the compiler that it can throw some sort of IOException. Core Java Volume I Fundamentals 9th Edition


  2. If no path can be constructed, an IOException will be thrown. Pro Hadoop


  3. You need to create one that matches the signature BufferedReader -> String and that may throw an IOException. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Calling seek() with a position that is greater than the length of the file will result in an IOException. Hadoop:The Definitive Guide 3rd Edition


  5. First, the method that is used to retrieve specific values from an HBase table: Result get(Get get) throws IOException. HBase:The Definitive Guide


  6. FileVisitResult visitFileFailed(T path, IOException exc) is called if an exception was thrown in an attempt to get information about the given file. Core Java Volume II Advanced Features 9th Edition


  7. Thus, catching the IOException may be sufficient as illustrated in the following code: Java 7 New Features Cookbook


  8. An IOException that signals that a specified file cannot be found. Java in a Nutshell 5th Edition


  9. If you try to reset too far back, an IOException is thrown. Java Network Programming 4th Edition


  10. If one thread performs s.close() on an IO object (for example, an InputStream) s, then any other thread attempting to use s (for example, s.read()) will receive an IOException. Concurrent Programming in Java Design Principles and Pattern
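
The quotes show IOException being declared, thrown, and caught around file and stream operations. A small sketch using try-with-resources that both declares the exception to callers and handles it at the call site; the file name is an arbitrary example.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class IoExceptionDemo {

    // The method advertises to its callers that it may throw IOException.
    static String firstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine(); // readLine() itself may throw IOException
        }
    }

    public static void main(String[] args) {
        try {
            System.out.println(firstLine("example.txt"));
        } catch (IOException e) {
            // Catching IOException also covers subclasses such as FileNotFoundException.
            System.err.println("Could not read file: " + e.getMessage());
        }
    }
}
```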



315. Matching text for the word neo4j:


  1. The core framework is large, and dozens of portfolio projects extend that core, covering things like security, web flow, SOAP web services (REST web services are part of the core), enterprise integration, batch processing, mobile, various flavors of social (Facebook, LinkedIn, Twitter, GitHub, and so on), various flavors of NoSQL (MongoDB, Neo4j, Riak, and so on), BlazeDS/Flex, AMQP/Rabbit, and many more. Spring in Practice


  2. It’s generally applied to a number of recent nonrelational databases such as Cassandra, Mongo, Neo4J, and Riak. NoSQL Distilled


  3. He has given talks at PyCon India on Storm and Neo4J and has published. Storm Blueprints Patterns for Distributed Real-time Computation


  4. One of the biggest logistics companies in the world uses a graph database in real time to route physical parcels; a major airline is leveraging graphs for its media content metadata; and a toptier financial services firm has rewritten its entire entitlements infrastructure on Neo4j. Graph Databases


  5. Unlike single node databases like Neo4j or PostgreSQL, Riak does not support ACID transactions. A Little Riak Book


  6. Graph databases such as Neo4J and the set of databases derived at least in part from the design of Google’s Bigtable database (such as MongoDB, HBase, Hypertable, and Redis) all are focused slightly less on Availability and more on ensuring Consistency and Partition Tolerance. Cassandra The Definitive Guide


  7. They were trying to integrate the Neo4j graph database with the Spring Framework and evaluated different approaches. Spring Data Modern Data Access for Enterprise Java


  8. Whether you’re using a document database like MongoDB, a graph database such as Neo4j, or even a traditional relational database, Spring Data offers a simplified programming model for persistence. Spring in Action 4th Edition



316. Matching text for the word cron:


  1. Set up cron jobs to prepare static HTML reports at desired intervals. Monitoring with Ganglia


  2. You could use this feature to synchronize database servers within a cluster or between data centers using a job scheduler such as cron, or you could use it to synchronize data with your laptop for offline work as you travel. CouchDB The Definitive Guide


  3. Think of scheduled automation as a time-based job scheduler (in the context of a Unix-based operation system, also known as a cron job) Gradle in Action


  4. It is commonly used to run commands on remote computers typically from non-interactive jobs such as rsync-based backups through cron. Cassandra High Performance Cookbook


  5. One way of doing this would be to set up a cron job that runs at a frequent interval, calling a script that uses curl to replicate your primary database to a database on your backup server. Beginning CouchDB


  6. This can be done with a cron job or through another automated process. Scaling CouchDB


  7. Summary I showed you one example of the use of the Oozie coordinator, which offers cron-like capabilities to launch periodic Oozie workflows. Hadoop in Practice


  8. For example, you may want all cron jobs to go to a single pool rather than have them run under each individual user’s pool. Hadoop in Action


  9. Obvious tools such as cron may be good enough, but as system complexity increases, you may need to investigate more sophisticated scheduling systems. Hadoop Beginners Guide


  10. AMQ_SCHEDULED_CRON String Use a cron entry to set the schedule. ActiveMQ in Action



317. Matching text for the word chubby:


  1. A quorum of Chubby locks detects most network partitions and node unavailability. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. However the GFS master is now made fault tolerant using the Chubby[3] abstraction. Cassandra A Decentralized Structured Storage System


  3. Bigtable in turn uses both Chubby[7] (a distributed lock system) and GFS. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  4. ZooKeeper is the comparable system to Google’s use of Chubby for Bigtable. HBase:The Definitive Guide


  5. Abstract We describe our experiences with the Chubby lock service, which is intended to provide coarse-grained locking as well as reliable (though low-volume) storage for a loosely-coupled distributed system. Chubby - The lock service for loosely-coupled distributed systems


  6. In Bigtable there is a master server that controls operations using the Chubby persistent distributed locking mechanism; in Cassandra, all the nodes are on even par with no centralized control, and they communicate using a gossip model. Cassandra The Definitive Guide


  7. Bigtable relies on a highly-available and persistent distributed lock service called Chubby [8] Bigtable - A Distributed Storage System for Structured Data


  8. Running workers write a token into the Chubby lockservice [8] to indicate they belong to the system; other workers can use the existence of this token as a sign that the worker is alive (the token is automatically deleted when the process exits) Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



318. Matching text for the word english:


  1. In plain English, if you flip the parameters of compareTo, the sign (but not necessarily the actual value) of the result must also flip. Core Java Volume I Fundamentals 9th Edition


  2. Now if our user wants to narrow down its results to the English-language films category we need to send the following query: Mastering ElasticSearch


  3. The description provides a useful English description of what the metric actually measures. Beginning CouchDB


  4. You don’t want words that commonly appear in the English language (referred to in text mining circles as stopwords) to be used as features because they’ll appear in every single email. Hadoop in Practice


  5. Then create a new index, specifying that the tweet field should use the english analyzer: Elasticsearch The Definitive Guide


  6. We will close this chapter with a complete example—a retirement calculator with a user interface in English, German, and Chinese. Core Java Volume II Advanced Features 9th Edition


  7. The English language has about one million words, a size that fits comfortably into an iPod, but our word counting program will deal with many unique words not found in any standard English dictionary. Hadoop in Action


  8. It has been said, though it is hard to verify, that the 17th-century English poet John Milton had actually read every published book on the face of the earth. Cassandra The Definitive Guide


  9. Apache Camel uses simple English prepositions, such as from and to, to denote a route. ActiveMQ in Action


  10. The easyB specification file reads like plain English in its execution of web browser driving events and querying of the page responses. Building and Testing with Gradle



319. Matching text for the word formatter:


  1. You need to extend the Formatter class and override the method. Core Java Volume I Fundamentals 9th Edition


  2. A StringBuilder and Formatter are ThreadLocal instance variables, making this class thread-safe. Pro Hadoop


  3. You can now represent multiple formatting behaviors by implementing the Apple-Formatter interface: Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Therefore, we will write a Hadoop input formatter to process the e-mail archive. Hadoop MapReduce Cookbook


  5. There are several formatter classes that take these differences into account. Core Java Volume II Advanced Features 9th Edition


  6. As we’ll see below, a Formatter object can send its output to any Appendable object. Java in a Nutshell 5th Edition


  7. The final component in log4j is the layout which acts as a formatter for log events. Hadoop Operations


  8. First of all, you have implemented the MyFormatter class to give a format to the log messages. Java 7 Concurrency Cookbook


  9. Some properties are good candidates for being transient, like cached calculations, or a date formatter that you always instantiate to the same value. Java 7 Recipes A Problem-Solution Approach


  10. When this log level is selected, Gradle automatically provides a richer log formatter, including the timestamp, log level, and logger name of each message. Building and Testing with Gradle
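
The first quote describes extending the java.util.logging Formatter class and overriding its method. A minimal sketch of such a custom formatter; the one-line output format and logger name are illustrative choices, not taken from any of the cited books.

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Formatter;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class FormatterDemo {

    // A custom formatter: extend Formatter and override format().
    static class OneLineFormatter extends Formatter {
        @Override
        public String format(LogRecord record) {
            return String.format("%tT %s: %s%n",
                    record.getMillis(), record.getLevel(), record.getMessage());
        }
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        ConsoleHandler handler = new ConsoleHandler();
        handler.setFormatter(new OneLineFormatter());
        logger.addHandler(handler);
        logger.setUseParentHandlers(false); // avoid duplicate output via the root handler
        logger.info("custom formatter in action");
    }
}
```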



320. Matching text for the word keyspace:


  1. Because Lounge allocates a portion of the hash (known as a keyspace) to each node, you can add as many nodes as you like. CouchDB The Definitive Guide


  2. Many cryptographic algorithms suffer from the problem that a small number of keys in the keyspace are considered weak. Pro Spring 3


  3. This randomly distributes the keyspace evenly over the reducers, but still ensures that keys with the same value in different mappers end up at the same reducer. MapReduce Design Patterns


  4. A keyspace is similar to a database in RDBMS where all column families related to the application are stored. NoSQL Distilled


  5. These need to be created so data can be stored in them: [default@unknown] create keyspace testkeyspace. Cassandra High Performance Cookbook


  6. There are more keywords that may be used in the USING clause, including "skewed" (for large datasets with a skewed keyspace) and "merge" (to effect a merge join for inputs that are already sorted on the join key) Hadoop:The Definitive Guide 3rd Edition


  7. Schema-Free Cassandra requires you to define an outer container, called a keyspace, that contains column families. Cassandra The Definitive Guide


  8. The following is an example of creating an external Hive table that maps to an existing Cassandra keyspace and column family: Programming Hive


  9. The channel names are not related to the Redis keyspace so you don’t have to worry about conflicts with existing keys. Redis Cookbook



321. Matching text for the word dfs:


  1. The Hadoop DFS metrics provide information about data block operations (read, removed, replicated, verify, written), verification failures, bytes read and written, volume failures, and local/remote client reads and writes. Monitoring with Ganglia


  2. The dfs.data.dir property specifies the location of the data directory on the host Linux filesystem. Hadoop Operations and Cluster Management Cookbook


  3. The number of mappers launched is roughly equal to the input size divided by dfs.block.size (the default block size is 64 MB) Hadoop RealWorld Solutions Cookbook


  4. Finally, shut down the HDFS cluster using the following command: >bin/stop-dfs.sh. Hadoop MapReduce Cookbook


  5. Check the value of dfs.block.size to see what it’s set to in your cluster. Hadoop in Practice


  6. We set the second property, dfs.replication, to 1 so that HDFS doesn’t replicate filesystem blocks by the default factor of three. Hadoop:The Definitive Guide 3rd Edition


  7. The dfs stands for Distributed Frequency Search and it tells Elasticsearch to first retrieve the local IDF from each shard in order to calculate the global IDF. Elasticsearch The Definitive Guide


  8. Some older documentation shows file utilities in the form of hadoop dfs -cmd  Hadoop in Action


  9. The dfs.replication variable specifies how many times each HDFS block should be replicated. Hadoop Beginners Guide


  10. For this reason, the underlying disks need not be part of a RAID group, although some administrators choose to do so and forego specifying multiple directories in dfs.name.dir (although an NFS mount should still be used, no matter what) Hadoop Operations
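
The quotes refer to HDFS settings such as dfs.block.size and dfs.replication. The sketch below reads and overrides such properties through Hadoop's Configuration API; it assumes the Hadoop client libraries are on the classpath, and the values shown are only examples.

```java
import org.apache.hadoop.conf.Configuration;

public class DfsSettings {
    public static void main(String[] args) {
        // Reads any Hadoop configuration resources found on the classpath.
        Configuration conf = new Configuration();

        // Read the configured block size, falling back to 64 MB if unset.
        long blockSize = conf.getLong("dfs.block.size", 64L * 1024 * 1024);
        System.out.println("dfs.block.size = " + blockSize);

        // Override the replication factor for files written with this configuration.
        conf.set("dfs.replication", "1");
        System.out.println("dfs.replication = " + conf.get("dfs.replication"));
    }
}
```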



322. Matching text for the word namespaces:


  1. In addition, federated NameNodes can isolate namespaces for different applications or situations. Hadoop Operations and Cluster Management Cookbook


  2. With multiple NameNodes, different categories of applications and users can be isolated to different namespaces. Professional Hadoop Solutions


  3. Compared to P2P storage systems that only support flat namespaces, distributed file systems typically support hierarchical namespaces. Cassandra A Decentralized Structured Storage System


  4. Hadoop version 2.x supports HDFS Federation, where HDFS is partitioned into multiple distinct namespaces, with each independently managed by a separate NameNode. Hadoop in Practice


  5. Namespace volumes are independent of each other, which means namenodes do not communicate with one another, and furthermore the failure of one namenode does not affect the availability of the namespaces managed by other namenodes. Hadoop:The Definitive Guide 3rd Edition


  6. Our near-term solution to scalability is to allow multiple namespaces (and NameNodes) to share the physical storage within a cluster. The Hadoop Distributed File System


  7. Data is organized into buckets, which are little more than flat namespaces for logically grouping key-value pairs. Cassandra The Definitive Guide


  8. In Example 5-16, we generate a one gigabyte file and copy it into the two separate filesystem namespaces. Hadoop Operations


  9. You’ll see how to configure ActiveMQ using both pure Spring XML and custom namespaces, as well as the standard ActiveMQ XML via Apache XBean (http://mng.bz/EAfX) ActiveMQ in Action


  10. When you use this feature, Hive puts the key-value pair in the hivevar “namespace” to distinguish these definitions from three other built-in namespaces, hiveconf, system, and env. Programming Hive



323. Matching text for the word oplog:


  1. The way MongoDB accomplishes this is by keeping a log of operations, or oplog, containing every write that a primary performs. MongoDB The Definitive Guide 2nd Edition


  2. We’ll also look at the oplog and how it fits in with the communication between members of a replication topology. The Definitive Guide to MongoDB


  3. Each member of a replica set logs all its writes to a special capped collection called oplog.rs. MongoDB in Action



324. Matching text for the word checksum:


  1. You'll see two compressed tar archives, available along with checksum and gpg signature files used to verify the archives. Apache Flume Distributed Log Collection for Hadoop


  2. When it finds a footer pair, it makes some checks: if the first 2k are corrupt (a footer includes a checksum), CouchDB replaces it with the second footer and all is well. CouchDB The Definitive Guide


  3. Gradle detects if an artifact was changed in the repository by comparing its local and remote checksum. Gradle in Action


  4. Current doc status, a pointer into the repository, a doc checksum, and various statistics. The anatomy of a large-scale hypertextual Web search engine short


  5. The information stored in each entry includes the current document status, a pointer into the repository, a document checksum, and various statistics. The anatomy of a large-scale hypertextual Web search engine


  6. Chubby also exposes a 64-bit file-content checksum so clients may tell whether files differ. Chubby - The lock service for loosely-coupled distributed systems


  7. To avoid this problem we had to change the format of the edits log to have a transaction length, transaction id and the checksum per each transaction written to the file. Apache Hadoop Goes Realtime at Facebook


  8. This performs a checksum against the database and compares checksums of peers; if the checksums differ, then the data is exchanged. Cassandra The Definitive Guide


  9. By default, there is a checksum for each 512-byte chunk within the block. Hadoop Beginners Guide


  10. Figure 2: Example usage of the Percolator API to perform basic checksum clustering and eliminate documents with the same content. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications
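
The quotes describe checksums computed per fixed-size chunk and compared later to detect corruption. A minimal sketch of that general idea using java.util.zip.CRC32; the chunking scheme and data are illustrative and not HDFS's actual on-disk format.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class ChunkChecksums {

    // Computes a CRC32 checksum for each fixed-size chunk of the data.
    static long[] checksumChunks(byte[] data, int chunkSize) {
        int chunks = (data.length + chunkSize - 1) / chunkSize;
        long[] sums = new long[chunks];
        for (int i = 0; i < chunks; i++) {
            int from = i * chunkSize;
            int to = Math.min(from + chunkSize, data.length);
            CRC32 crc = new CRC32();
            crc.update(data, from, to - from);
            sums[i] = crc.getValue();
        }
        return sums;
    }

    public static void main(String[] args) {
        byte[] block = "some block contents to verify".getBytes(StandardCharsets.UTF_8);
        long[] sums = checksumChunks(block, 512);
        // On read, recompute the checksums and compare; any mismatch signals corruption.
        System.out.println("chunk 0 checksum = " + sums[0]);
    }
}
```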



325. Matching text for the word compilers:


  1. After all, what can go wrong? Unfortunately, with modern processors and compilers, there is plenty of room for error. Core Java Volume I Fundamentals 9th Edition


  2. In these times of nearly continuous performance improvements of compilers and JVMs, performance measurements are only of transient value. A Java ForkJoin Framework


  3. It’s typically useful when building compilers or engines for processing business rules. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Similar to compilers in traditional databases, the Hive compiler processes HiveQL statements in the following steps: Hive A Petabyte Scale Data Warehouse Using Hadoop


  5. Balancing Symbols Compilers check your programs for syntax errors, but frequently a lack of one symbol (such as a missing brace or comment starter) will cause the compiler to spill out a hundred lines of diagnostics without identifying the real error. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. Avro compilers come in two flavors: they can either compile a protocol file or a schema file. Hadoop in Practice


  7. As with many modern compilers, the Pig compiler   can reorder the execution sequence to optimize performance, as long as the execution plan remains logically equivalent to the original program. Hadoop in Action


  8. Reflection is also appropriate for use in remote procedure call (RPC) systems to eliminate the need for stub compilers. Effective Java 2nd Edition


  9. To achieve this goal, it is essential to use an IDE that integrates under one common interface several tools (compilers and debuggers) that facilitate the development of applications. Java 7 Concurrency Cookbook


  10. This eliminates the possibility of some common errors, and also enables compilers to automatically and safely perform some optimizations that would need to be performed manually in C. Concurrent Programming in Java Design Principles and Pattern



326. Matching text for the word refactoring:


  1. What started as a refactoring branch eventually became the main line of development as Flume 1.X. Apache Flume Distributed Log Collection for Hadoop


  2. At Acme, developers do all their coding within the IDE, from navigating through the source code, implementing new features, and compiling and refactoring code, to running unit and integration tests. Gradle in Action


  3. The HFile refactoring into a reader and writer hierarchy allows for significant improvements in code maintainability. Professional Hadoop Solutions


  4. The API is constantly being enhanced with new features, bug fixes, and occasional refactoring. Hadoop RealWorld Solutions Cookbook


  5. The Example Plug-In In this chapter, we will create a Gradle plug-in to automate the use of the open-source database refactoring tool, Liquibase. Gradle Beyond The Basics


  6. Therefore, the WebMap applications would not require extensive refactoring to fit into MapReduce. Hadoop:The Definitive Guide 3rd Edition


  7. This refactoring better aligns the configuration settings to the subsystem of Hadoop that they control. Hadoop in Action


  8. If a JMS client uses such additional features, this client may not be portable to another JMS provider without a refactoring effort. ActiveMQ in Action


  9. When the representations and behavior of one class can be partitioned into independent, noninteracting, or just non-conflicting subsets, it is almost always worth refactoring the class to use distinct finer-granularity helper objects whose actions are delegated by the host. Concurrent Programming in Java Design Principles and Pattern


  10. Only the modest addition of the useTestNG() call to the test closure is needed in addition to the refactoring of the unit test class. Building and Testing with Gradle



327. Matching text for the word mbeans:


  1. The default configuration provides for only local access to the managed beans (MBeans) Pro Hadoop


  2. From the window, we can check the memory usage, threads, classes, summary of JVM, and details of MBeans. Hadoop Operations and Cluster Management Cookbook


  3. Cassandra provides access to this information through standard JMX MBeans (Java Management eXtension Managed Bean) Cassandra High Performance Cookbook


  4. Hadoop includes several managed beans (MBeans), which expose Hadoop metrics to JMX-aware applications. Hadoop:The Definitive Guide 3rd Edition


  5. There is a loose overlap between the metric context, as provided by the metrics framework, and the MBeans exposed over JMX. HBase:The Definitive Guide


  6. MBeans and the platform MBean server will be discussed in the next chapter. Oracle JRockit The Definitive Guide


  7. Overview of JMX and MBeans In this section, we explore how Cassandra makes use of Java Management Extensions (JMX) to enable remote management of your servers. Cassandra The Definitive Guide


  8. Internal MBeans in Hadoop rely on a metric plug-in that has an update thread running to collect data from the system. Hadoop Operations


  9. These interfaces consist of management beans, usually called MBeans, which expose resources of your application to external management applications. ActiveMQ in Action


  10. Some applications also build-in monitoring and management capabilities using MBeans via Java SE’s monitoring and management APIs. Java Performance
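
The quotes describe MBeans registered with the platform MBean server so that tools such as JConsole can browse them. A minimal sketch that registers one standard MBean in-process; the interface, class, and object names are illustrative.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanDemo {

    // Standard MBean convention: the interface name is the class name plus "MBean".
    public interface RequestCounterMBean {
        long getCount();
    }

    public static class RequestCounter implements RequestCounterMBean {
        private volatile long count;
        public void increment() { count++; }
        @Override public long getCount() { return count; }
    }

    public static void main(String[] args) throws Exception {
        RequestCounter counter = new RequestCounter();

        // Register the MBean so JMX clients (for example JConsole) can browse its attributes.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(counter, new ObjectName("demo:type=RequestCounter"));

        counter.increment();
        System.out.println("registered; current count = " + counter.getCount());
        // A real application would keep running here so a JMX client can connect.
    }
}
```

The pairing of the nested class and interface follows the standard MBean naming rule the quotes allude to, which is what lets registerMBean expose getCount as a readable attribute.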



328. Matching text for the word datanodes:


  1. The responsibility of Namenode is to transfer data to Datanodes. Web Crawling and Data Mining with Apache Nutch


  2. Since MapReduce servers (task-trackers) share nodes with HDFS datanodes, map output inevitably competes with HDFS reads. HDFS scalability the limits to growth


  3. Namenodes and Datanodes An HDFS cluster has two types of nodes operating in a master-worker pattern: a namenode (the master) and a number of datanodes (workers) Hadoop:The Definitive Guide 3rd Edition


  4. It has locations of data blocks stored in several datanodes and often runs on the same server as the jobtracker in smaller clusters. Cassandra The Definitive Guide


  5. Storage can be added to a cluster by adding more datanodes with additional disk capacity, or even adding disks to existing datanodes. Hadoop Operations



329. Matching text for the word codebase:


  1. It is also one of the biggest contributors of the Hadoop codebase. Hadoop Operations and Cluster Management Cookbook


  2. To tell the truth, it was much harder not only because of more advanced topics covered in this book, but also because of the constantly introduced changes in the ElasticSearch codebase. Mastering ElasticSearch


  3. Because of high performance, a small codebase, and compact resulting data, there is a wide adoption of Avro not only in the Hadoop community, but also by many other NoSQL implementations (including Cassandra) Professional Hadoop Solutions


  4. Familiarity with the Pregel design will help to explain many components found in the Giraph codebase. Hadoop RealWorld Solutions Cookbook


  5. Due to the number of moving parts and the size and scope of the codebase and deployment, debugging services like universal search (described earlier in Section 1) is very challenging. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  6. That begs the question: what's the best approach to integrate sampling support into an existing codebase? One approach would be to add a configurable option that would toggle the use of the sampling InputFormat, similar to the following code: Hadoop in Practice


  7. The shuffle is an area of the codebase where refinements and improvements are continually being made, so the following description necessarily conceals many details (and may change over time; this is for version 0.20) Hadoop:The Definitive Guide 3rd Edition


  8. These benchmarks are available as part of the Hadoop codebase. The Hadoop Distributed File System


  9. Powerset was acquired in July 2008, and after a couple of months during which Powerset employees were disallowed from contributing while Microsoft’s legal department vetted the HBase codebase to see if it impinged on SQLServer patents, we were allowed to resume contributing (I was a Microsoft employee working near full time on an Apache open source project) HBase:The Definitive Guide


  10. By unifying the code paths, the codebase was simplified and files were kept as small as possible. Apache Hadoop Goes Realtime at Facebook



330. Matching text for the word benchmarking:


  1. This is not exactly how Craigslist works (because we don’t know how Craigslist works), but it is a close enough approximation to illustrate problems with benchmarking. CouchDB The Definitive Guide


  2. Benchmarking of a Hadoop cluster is the first step to tune the performance of a Hadoop cluster. Hadoop Operations and Cluster Management Cookbook


  3. We demonstrate through benchmarking that our system is effective at transforming logical computations into efficient programs. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  4. Stress is an easy-to-use command-line tool for stress testing and benchmarking Cassandra. Cassandra High Performance Cookbook


  5. Generic benchmarking can be of some use, but a test specifically designed for your system will be more useful. Scaling CouchDB


  6. Benchmarking HDFS Running benchmarks is a good way to verify whether your HDFS cluster is set up properly and performs as expected. Hadoop MapReduce Cookbook


  7. Benchmarking a Hadoop Cluster Is the cluster set up correctly? The best way to answer this question is empirically: run some jobs and confirm that you get the expected results. Hadoop:The Definitive Guide 3rd Edition


  8. Only after real-world usage in your environment and some performance benchmarking with the aid of heap analysis tools and observation of your specific application’s behavior should you dive into tuning the more advanced JVM. Cassandra The Definitive Guide


  9. The results of benchmarking LLVM vs vector on some typical aggregation queries is shown in table 3 Tenzing - A SQL Implementation On The MapReduce Framework


  10. After benchmarking a particular cluster size, we deleted the data directories on each node, reformatted and reloaded HDFS to ensure uniform data distribution across all nodes. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
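
The quotes stress verifying a setup empirically by running benchmarks. A very small single-JVM timing sketch with a warm-up phase, purely illustrative; real cluster benchmarking uses the dedicated tools mentioned in the quotes rather than ad-hoc timing like this.

```java
public class MicroBenchmark {

    // The workload being measured; kept deliberately trivial.
    static long workload(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i * 31L;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Warm-up runs give the JIT compiler a chance to optimize the code first.
        for (int i = 0; i < 10_000; i++) {
            workload(1_000);
        }

        long start = System.nanoTime();
        long result = workload(10_000_000);
        long elapsedMicros = (System.nanoTime() - start) / 1_000;

        System.out.println("result=" + result + ", elapsed=" + elapsedMicros + " microseconds");
    }
}
```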



331. Matching text for the word dhcp:


  1. If the system obtains an IP address dynamically, it is possible that DHCP is not completed before the attempt to start gmond, and so gmond fails to run. Monitoring with Ganglia


  2. In an intranet, a network administrator may assign a specific address to your machine, or you may have a dynamically assigned address allocated by a Dynamic Host Control Protocol (DHCP) server. Java Network Programming and Distributed Computing


  3. Only first nic of vm will get default gateway from dhcp server. bk-compute-adminguide-grizzly


  4. We will learn steps to configure this server, including the configuration of the following two services: DHCP and FTP. Hadoop Operations and Cluster Management Cookbook


  5. FlatDHCP builds upon the Flat manager by providing DHCP services to handle instance addressing and creation of bridges. Deploying OpenStack


  6. Tracks IP address leases and records them in the database by using the dnsmasq dhcp-script facility. bk-admin-guide-cloud-latest


  7. When booting, the kernel will try to use the DHCP protocol, if available, to configure networking. Oracle JRockit The Definitive Guide


  8. High Availability for the DHCP agent is achieved by adopting Pacemaker. high-availability-guide


  9. Others, especially clients on local area networks and wireless connections, receive a different address every time they boot up, often provided by a DHCP server. Java Network Programming 4th Edition


  10. You should be familiar with concepts such as DHCP, Linux bridges, VLANs, and iptables. OpenStack Operations Guide



332. Matching text for the word metastore:


  1. We recommend you to limit this user's privileges to prevent it from creating or altering tables in the metastore database schema. HBase Administration Cookbook


  2. Solution You can take advantage of the built-in Sqoop metastore that allows you to save all parameters for later reuse. Apache Sqoop Cookbook


  3. To support this abstraction, Hive provides a metastore in the form of a relational database that enables you to capture dependencies between actual physical files (and HBase tables) and tables (virtual) used to access this data. Professional Hadoop Solutions


  4. Following this, the script issues a CREATE command with the EXTERNAL keyword, which tells the Hive Metastore that the data is not managed by the Hive warehouse in HDFS. Hadoop RealWorld Solutions Cookbook


  5. Hive also includes a system catalog - Metastore – that contains schemas and statistics, which are useful in data exploration, query optimization and query compilation. Hive A Petabyte Scale Data Warehouse Using Hadoop


  6. This merely saves the notion of this command as a job in something called the Sqoop metastore. Hadoop in Practice


  7. The Apache HCatalog project (http://incubator.apache.org/hcatalog/) solves this problem by providing a table metadata service, based on Hive’s metastore, so that Pig queries can reference schemas by name, rather than specifying them in full each time. Hadoop:The Definitive Guide 3rd Edition


  8. To support these additional features, a new and important component of Hive is a metastore for storing schema information. Hadoop in Action


  9. Another option is to use Sqoop’s saved job feature, which stores both connection credentials and incremental state information in a directory on the local filesystem of the client machine in what is called the metastore. Hadoop Operations


  10. The Metastore is a separate relational database (usually a MySQL instance) where Hive persists table schemas and other system metadata. Programming Hive



333. Matching text for the word debian:


  1. To install gweb on a Debian-based Linux distribution, execute the following command as either root or user with high privilege: Monitoring with Ganglia


  2. The Cloudera folks have Amazon machine images (AMIs), Debian and RPM installer files, and an online configuration tool to generate configuration files. Pro Hadoop


  3. Debian GNU/Linux You can install the CouchDB package by running: CouchDB The Definitive Guide


  4. If you are using a Linux distribution other than CentOS, such as Debian, please refer to corresponding documentation. Hadoop Operations and Cluster Management Cookbook


  5. At the time of writing this book, the current release is Debian package. Gradle Effective Implementation Guide


  6. Fuse development files can be found in fusedevel RPM for Redhat/Fedora and in libfuse-dev package for Debian/Ubuntu. Hadoop MapReduce Cookbook


  7. The Apache Bigtop project (http://incubator.apache.org/bigtop/) runs interoperability tests on stacks of Hadoop components and provides Linux packages (RPMs and Debian packages) for easy installation. Hadoop:The Definitive Guide 3rd Edition


  8. When installing Elasticsearch in production, you can use the method described above, or the Debian or RPM packages provided on the downloads page. Elasticsearch The Definitive Guide


  9. You can simplify your deployment of Hadoop by using their free Hadoop distribution, in either RPM or Ubuntu/Debian Packages. Hadoop in Action


  10. Starting around version 1.0, support for building RPM and Debian packages was added to the build system, and later releases provide these artifacts for download. Hadoop Operations



334. Matching text for the word impl (a short code sketch follows the examples):


  1. The next step is to create an instance of the RMI lightbulb service defined by the RMILightBulbImpl class. Java Network Programming and Distributed Computing


  2. Putting it all together, the complete RelationCountImpl is shown next. HBase in Action


  3. To do this globally in an application, it suffices to remap the implementation for file URIs by setting the property fs.file.impl to the value org.apache. Hadoop:The Definitive Guide 3rd Edition


  4. This code returns PartitionPlanImpl, a convenient basic implementation of the PartitionPlan interface. Java EE 7 Essentials


  5. The WarehouseImpl constructor is declared to throw a RemoteException because the superclass constructor can throw that exception. Core Java Volume II Advanced Features 9th Edition


  6. SocketImplFactory objects may be registered to create SocketImpl objects for the Socket and ServerSocket classes. Java in a Nutshell 5th Edition


  7. This depends heavily on the native UDP software in the OS and the native code that interfaces between this and Java’s DatagramSocketImpl class. Java Network Programming 4th Edition


  8. You can almost do this generically using the SocketImplFactory, but if you install your own SocketImplFactory, there is no protocol to allow you to access the default socket implementation, so another way must be used. Java Performance Tuning 2nd Edition


  9. For example, assuming that we have an implementation class, say TankImpl, we can write the following AdaptedTank class. Concurrent Programming in Java Design Principles and Pattern


  10. First, a new JoinRowSet is created by instantiating a JoinRowSetImpl() object: Java 7 Recipes A Problem-Solution Approach
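
Across the examples above, Impl is the conventional suffix for a concrete class that implements an interface (RMILightBulbImpl, SocketImpl, WarehouseImpl, TankImpl). Below is a minimal sketch of that convention; the Warehouse interface shown here is invented for illustration.

```java
// Interface/implementation pair illustrating the *Impl naming convention.
// Warehouse and this WarehouseImpl are invented for illustration only.
public interface Warehouse {
    int itemCount();
}

class WarehouseImpl implements Warehouse {
    private final int count;

    WarehouseImpl(int count) {
        this.count = count;
    }

    @Override
    public int itemCount() {
        return count; // the concrete behaviour lives in the Impl class
    }
}
```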



335. Matching text for the word riak:


  1. Alternatively, you can also use Redis as a cache, storing recently posted messages in Redis, and older (rarely read) messages in a primarily on-disk storage server (like PostgreSQL, MySQL, Riak, MongoDB, and so on) Redis in Action


  2. Riak is a Dynamo-inspired key/value store that offers fault tolerance out of the box. RabbitMQ in Action


  3. The core framework is large, and dozens of portfolio projects extend that core, covering things like security, web flow, SOAP web services (REST web services are part of the core), enterprise integration, batch processing, mobile, various flavors of social (Facebook, LinkedIn, Twitter, GitHub, and so on), various flavors of NoSQL (MongoDB, Neo4j, Riak, and so on), BlazeDS/Flex, AMQP/Rabbit, and many more. Spring in Practice


  4. It’s generally applied to a number of recent nonrelational databases such as Cassandra, Mongo, Neo4J, and Riak. NoSQL Distilled


  5. Links and Walking The Riak key-value store allows each of its stored values to be augmented with link metadata. Graph Databases


  6. Riak is an open-source, distributed key/value database for high availability, fault-tolerance, and nearlinear scalability. A Little Riak Book


  7. This design is observed by Cassandra as well as by straight key/value stores such as Project Voldemort and Riak. Cassandra The Definitive Guide


  8. Although Dynamo is a proprietary system, the ideas used to build it have inspired many systems falling under the NoSQL umbrella, including Cassandra, Project Voldemort, and Riak. MongoDB in Action


  9. What it lacks in complex querying functionality (like that found in Riak or MongoDB), it makes up for in speed and efficiency. Spring Data Modern Data Access for Enterprise Java



336. Matching text for the word whitespace (a short code sketch follows the examples):


  1. Fortunately, JSON doesn’t care about whitespace, and the compact nature of the output keeps the size of the responses down, but isn’t very human-readable.: Getting Started with CouchDB


  2. Now, the actual result is not as nicely formatted and doesn’t include any superfluous whitespace or newlines, but this is better for you (and us!) to read and understand. CouchDB The Definitive Guide


  3. For example, the following code will result in a request that checks how the analysis will be performed for the ElasticSearch Servers phrase in the library index using the whitespace tokenizer and the nGram filter: Mastering ElasticSearch


  4. The call to strip() method removes any leading/trailing whitespace, and then we tokenize it into an array of named variables. Hadoop RealWorld Solutions Cookbook


  5. In the previous function, I am taking the content field of the document, lowering it, and replacing any nonalphabetical characters with a space, before tokenizing the document into an array of words, split by a single whitespace token. Beginning CouchDB


  6. We already covered this topic in section 13.2.4, but I want to reemphasize that a high percentage of problems you’ll encounter in your job are due to unexpected input, and can be as simple an issue as leading or trailing whitespace characters that cause parsing issues. Hadoop in Practice


  7. The script must accept a variable number of arguments that are the hostnames or IP addresses to be mapped, and it must emit the corresponding network locations to standard output, separated by whitespace. Hadoop:The Definitive Guide 3rd Edition


  8. A simple tokenizer might split the text up into terms whenever it encounters whitespace or punctuation. Elasticsearch The Definitive Guide


  9. Tokenization is based purely on whitespace characters and not punctuation marks, making States, States., and States: separate words. Hadoop in Action


  10. Unlike a property file, however, whitespace is not permitted between the equals sign and its key or value. Hadoop Operations
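
Several of the examples describe tokenizing text on whitespace. A minimal Java sketch of that kind of split, using a made-up sample line:

```java
import java.util.Arrays;

public class WhitespaceTokenizer {
    public static void main(String[] args) {
        String line = "  United   States\tof America ";
        // Trim leading/trailing whitespace, then split on runs of
        // whitespace characters (spaces, tabs, newlines).
        String[] tokens = line.trim().split("\\s+");
        System.out.println(Arrays.toString(tokens)); // [United, States, of, America]
    }
}
```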



337. Matching text for the word paging:


  1. HBase servers, particularly the version using memcached, are memory intensive and generally require at least a gigabyte of real memory per server; any paging will drastically affect performance. Pro Hadoop


  2. Slow Paging (Do Not Use) Don’t use this method! We just show it because it might seem natural to use, and you need to know why it is a bad idea. CouchDB The Definitive Guide


  3. Other alerting methods such as SMS messages and paging, are also available. Hadoop Operations and Cluster Management Cookbook


  4. Paging When the result list is large, it is necessary to use paging to limit the number of documents fetched in a single request and go through the subsequent ones. Mastering ElasticSearch


  5. Create a scanner over all users whose username is less than I, paging one cell at a time. HBase in Action


  6. Beware of paging too deep or requesting too many results at once. Elasticsearch The Definitive Guide


  7. In extreme cases, such memory leaks can cause disk paging and even program failure with an OutOfMemoryError, but such failures are relatively rare. Effective Java 2nd Edition


  8. Adding RAM will help you if you find the OS doing a lot of paging on reads (in general, it is better to enable the various caches Cassandra has) Cassandra The Definitive Guide


  9. The default queue configuration uses a separate thread for paging messages from the message store into the queue to be dispatched to interested message consumers. ActiveMQ in Action


  10. Improved Cache Heuristics It is a known problem in disk-cache / paging algorithms. PowerDrill - Processing a Trillion Cells per Mouse Click



338. Matching text for the word iterating (a short code sketch follows the examples):


  1. The view definition below shows a new view that emits a row of data based on the ingredient text by iterating over the ingredient list from each recipe document, and emitting each ingredient name. Getting Started with CouchDB


  2. If your module requires a simple string value, referencing the module_params string from the mmodule structure is much more convenient than iterating through an APR array of name/value pairs. Monitoring with Ganglia


  3. Perhaps it is programmed to stop iterating when it sees a particular user’s document or is based on a tally it’s been keeping of some resource allocated in the configuration. CouchDB The Definitive Guide


  4. This is because you are now iterating over all values that have that key, instead of just one at a time. MapReduce Design Patterns


  5. Iterating all the columns of a large key In some designs, a particular row key may have a large number of associated columns. Cassandra High Performance Cookbook


  6. Running your MapReduce application against the dataset may take hours, and constantly iterating with code refinements and rerunning against it isn’t an optimal workflow. Hadoop in Practice


  7. And in this case, the output types of the reduce function are Text and IntWritable, for a year and its maximum temperature, which we find by iterating through the temperatures and comparing each with a record of the highest found so far. Hadoop:The Definitive Guide 3rd Edition


  8. When iterating over super columns in your mapper, each IColumn would need to be cast to a SuperColumn, and it would contain nested column information. Cassandra The Definitive Guide


  9. In other words, when “synchronously” iterating over all columns, the original rows can be reconstructed. PowerDrill - Processing a Trillion Cells per Mouse Click


  10. Iterating through all dependencies can take simultaneous advantage of the dependencies object and the Groovy each method that accepts a closure. Building and Testing with Gradle
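
The Hadoop example above finds a year's maximum temperature by iterating through a group of values and tracking the highest seen so far. A small self-contained sketch of the same pattern, with made-up numbers:

```java
import java.util.Arrays;
import java.util.List;

public class IteratingExample {
    public static void main(String[] args) {
        List<Integer> temperatures = Arrays.asList(111, 78, 22, 101);
        // Iterate through the values, keeping a record of the highest so far.
        int max = Integer.MIN_VALUE;
        for (int temperature : temperatures) {
            max = Math.max(max, temperature);
        }
        System.out.println("max = " + max); // max = 111
    }
}
```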



339. Matching text for the word granularity:


  1. Another set of properties you may find useful allow for rounding down event times at an hour, minute, or second granularity while still maintaining those elements in file paths. Apache Flume Distributed Log Collection for Hadoop


  2. The granularity of the data blocks can affect the distribution and parallel execution of the tasks. Hadoop Operations and Cluster Management Cookbook


  3. We can also set the maximum and minimum granularity of the log messages with the minGranularity and maxGranularity properties. Gradle Effective Implementation Guide


  4. Ultimately, HoD had too little information to make intelligent decisions about its allocations, its resource granularity was too coarse, and its API forced users to provide misleading constraints to the resource layer. Apache Hadoop YARN Yet Another Resource Negotiator


  5. Although, technically, updates can be implemented as overwrites, the granularity of such an approach (overwrite will work only on the file level) can be cost-prohibitive in most cases. Professional Hadoop Solutions


  6. To achieve efficiency, Incoop relies on memoization, but goes beyond the straightforward task-level application of this technique by performing a stable partitioning of the input and by reducing the granularity of tasks to maximize result reuse. Incoop MapReduce for Incremental Computations


  7. However, they tend to be at the wrong level of granularity for creating sophisticated, highly composable code that can be shared among different developers. Hadoop:The Definitive Guide 3rd Edition


  8. We are currently implementing an object-granularity clone operation to create object copies with copy-on-write behavior for efficient storage utilization, and are extending the RADOS client interface to allow transparent versioning for logical point-in-time copies across sets of objects (i. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  9. Hadoop is unable to efficiently divide this task at a finer granularity. Hadoop in Action


  10. Users and groups may be granted permission to perform any combination of the above operations at the table, column family, or column qualifier granularity. Hadoop Operations



340. Matching text for the word jax (a short code sketch follows the examples):


  1. End-to-end application which makes use of the new clientjar option in JAX-WS wsimport tool. Gradle Effective Implementation Guide


  2. The jaxws-api.jar file is required for JAX-WS support in Spring remoting, and jta.jar is used for JTA transaction support. Pro Spring 3


  3. For the APIs exposed via the public Internet, you can implement an HBase-based real-time application by using REST APIs and leveraging one of the JAX-RS frameworks (for example, RestEasy or Jersey) Professional Hadoop Solutions


  4. The Java API for XML Web Services (JAX-WS) and the Java API for XML-based RPC (JAX-RPC) both provide support for web service calls using the HTTP protocol. Java EE 7 Specification


  5. Using JAX-RS annotations, we decorate each extension class to indicate to the server which HTTP requests it handles. Graph Databases


  6. Without the Client API (introduced in JAX-RS 2), developers are required to use basic HttpUrlConnection APIs and write all the surrounding code. Java EE 7 Essentials


  7. The second part of this chapter focuses on Web services performance, which starts with a brief look at the implementation of the Java API for XML Web Services (JAX-WS) Reference Implementation. Java Performance


  8. In doing so, you’ll explore various remoting technologies, including RMI, Hessian/ Burlap, and SOAP web services with JAX-WS. Spring in Action 4th Edition
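
Some of the examples mention exposing HTTP resources with JAX-RS annotations. Below is a minimal, hypothetical resource sketch; it assumes the javax.ws.rs API is on the classpath and that a JAX-RS runtime (Jersey, RESTEasy, etc.) deploys it, and the path and payload are invented.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical JAX-RS resource: the annotations tell the runtime which
// HTTP requests this class handles.
@Path("/books")
public class BookResource {

    // Handles GET /books/{id} and returns a plain-text response.
    @GET
    @Path("/{id}")
    @Produces(MediaType.TEXT_PLAIN)
    public String getBook(@PathParam("id") String id) {
        return "book " + id;
    }
}
```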



341. Matching text for the word jsr:


  1. To prove the point: The Java date library is a bit muddled, and a major redesign is underway; see http://jcp.org/en/jsr/detail?id=310.) Core Java Volume I Fundamentals 9th Edition


  2. If tryItOut returns without raising an exception, control is transferred to the finally block using a jsr instruction. The Java Virtual Machine Specification Java SE 7 Edition


  3. This document is the public review version of the JSR-133 specification, the Java Memory Model (JMM) and Thread Specification. JSR-133 Java Memory Model and Thread


  4. This scenario was extensively discussed in the Servlet expert group (JSR-53) but we were unable to reach consensus on the appropriate solution. Java EE 7 Specification


  5. No one deserves more credit for this version of the Java programming language than he - but any blame for its deficiencies should be directed at myself and the members of the many JSR Expert Groups! The Java Language Specification Java SE 7 Edition


  6. Thanks to my colleagues at Oracle and the different JSR specification leads for explaining the intended use cases of different technologies. Java EE 7 Essentials


  7. For more information about using Java bytecode for dynamic languages, please refer to Java Specification Request (JSR) 292 on the Internet. Oracle JRockit The Definitive Guide


  8. It's important to mention that Guava has only one dependency, JSR-305 Getting Started with Google Guava


  9. Customize Liferay portal using JSR-286 portlets, hooks, themes, layout templates, webs plugins, and diverse portlet bridges. Java 7 New Features Cookbook


  10. One of the issues addressed with JSR-133 is the use of the Java keyword volatile. Java Performance



342. Matching text for the word timestamps (a short code sketch follows the examples):


  1. Remember to include standard fields such as timestamps (created, updated), and status so that you can organize and identify the content. Getting Started with CouchDB


  2. Because validation functions are run at replication time as well as during normal client access, we can’t require that timestamps be set close to the server’s system time. CouchDB The Definitive Guide


  3. Data is stored in schematized semi-relational tables; data is versioned, and each version is automatically timestamped with its commit time; old versions of data are subject to configurable garbage-collection policies; and applications can read data at old timestamps. Spanner - Google’s Globally-Distributed Database


  4. When two columns for a row key have the same name, the timestamps of the columns are compared and the value of the column with the highest timestamp is the final value. Cassandra High Performance Cookbook


  5. A write operation in Dynamo also requires a read to be performed for managing the vector timestamps. Cassandra A Decentralized Structured Storage System


  6. In this way, we have a lower and upper bound for the span timestamps on the server side of RPCs. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. This information is stored as metadata in each HFile and queries that ask for a specific timestamp or range of timestamps will check if the request intersects with the ranges of each file, skipping those which do not overlap. Apache Hadoop Goes Realtime at Facebook


  8. Instead of relying on timestamps, causal consistency instead takes a more semantic approach, attempting to determine the cause of events to create some consistency in their order. Cassandra The Definitive Guide


  9. In Webtable, we would use URLs as row keys, various aspects of web pages as column names, and store the contents of the web pages in the contents: column under the timestamps when they were fetched, as illustrated in Figure 1 Bigtable - A Distributed Storage System for Structured Data


  10. The timestamp oracle provides strictly increasing timestamps: a property required for correct operation of the snapshot isolation protocol. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications
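
One of the examples notes that when two writes collide, the value with the highest timestamp wins. A tiny Java sketch of that reconciliation rule, with made-up timestamps:

```java
import java.util.TreeMap;

public class TimestampedValues {
    public static void main(String[] args) {
        // Versions of a value keyed by write timestamp (ms since the epoch);
        // the entry with the highest timestamp is taken as the final value.
        TreeMap<Long, String> versions = new TreeMap<>();
        versions.put(1_400_000_000_000L, "old value");
        versions.put(1_500_000_000_000L, "new value");

        System.out.println(versions.lastEntry().getValue()); // new value
        System.out.println("now = " + System.currentTimeMillis());
    }
}
```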



343. Matching text for the word timeline:


  1. Alternatively, you can overlay a timeline to indicate the duration of a particular event. Monitoring with Ganglia


  2. For more information on its timeline, please refer to the roadmap’s dashboard (http://www.gradle.org/roadmap) Gradle in Action


  3. You can now view trend reports and see a timeline of success and failure. Jenkins Continuous Integration Cookbook


  4. From a machine point of view, the most natural format to model time is with a single large number representing a point on a continuous timeline. Java 8 in Action Lambdas, Streams and Functional-style Programming


  5. Sorted by time, it can provide a timeline view on the data. MapReduce Design Patterns


  6. It primarily retrieves events by looking up all logs containing a keyword from a range of time. To make this efficient, the event timeline is denormalized to produce one timeline per keyword. Cassandra High Performance Cookbook


  7. Figure 6.13 shows the reduce task timeline with the units of work and potential areas impacting performance. Hadoop in Practice


  8. Start by summarizing the problem and the end result, and then walk through the timeline of events, calling out anything interesting along the way. Hadoop Operations


  9. If you observe a wide distribution of pause times, you should look at the GC Timeline tab to identify peaks in garbage collection activity. Java Performance


  10. Quite similar to a cartoon flip book, each page represents a frame or picture that will be displayed on the timeline for a period of time. Java 7 Recipes A Problem-Solution Approach



344. Matching text for the word nonblocking (a short code sketch follows the examples):


  1. This API and its implementation are critical to supporting external consistency and a variety of powerful features: nonblocking reads in the past, lock-free read-only transactions, and atomic schema changes, across all of Spanner. Spanner - Google’s Globally-Distributed Database


  2. Use Job.isComplete(), a nonblocking job completion check, to constantly poll to see whether all of the jobs are complete. MapReduce Design Patterns


  3. Twisted provides a Deferred object for building chains of nonblocking event handlers. HBase in Action


  4. Thrift also includes an RPC layer with various types of servers, including a nonblocking implementation. Hadoop in Practice


  5. Updates are a nonblocking operation, so a client that loses an update (because another process updated the znode in the meantime) can decide whether to try again or take some other action, and it can do so without blocking the progress of any other process. Hadoop:The Definitive Guide 3rd Edition


  6. From the command-line help you can see that, for example, using the nonblocking server implies the framed transport. HBase:The Definitive Guide


  7. If you do synchronize your class internally, you can use various techniques to achieve high concurrency, such as lock splitting, lock striping, and nonblocking concurrency control. Effective Java 2nd Edition


  8. This flushing is a nonblocking operation; multiple memtables may exist for a single column family, one current and the rest waiting to be flushed. Cassandra The Definitive Guide


  9. The most prominent features of NIO are selectors and nonblocking I/O programming, allowing developers to use the same resources to handle more network clients and generally heavier loads on their servers. ActiveMQ in Action


  10. With event-based programming in mind, you will discover nonblocking or callback behaviors when invoking functions. Java 7 Recipes A Problem-Solution Approach
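
Several examples refer to nonblocking I/O, such as Java NIO selectors and nonblocking servers. A minimal sketch of a nonblocking accept loop follows; the port is arbitrary and error handling is omitted.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonblockingServerSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        // Nonblocking mode: accept() returns immediately instead of blocking.
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // wait until at least one channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept(); // may be null
                    if (client != null) {
                        client.close(); // a real server would register it for reads
                    }
                }
            }
            selector.selectedKeys().clear();
        }
    }
}
```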



345. Matching text for the word modifiers (a short code sketch follows the examples):


  1. The keyword public is called an access modifier; these modifiers control the level of access other parts of a program have to this. Core Java Volume I Fundamentals 9th Edition


  2. Other features such as class inheritance and visibility modifiers (the public, private, and protected keywords of Java) make object-oriented languages much easier and safer to work with than older procedural languages. Java Network Programming and Distributed Computing


  3. Term modifiers In addition to the standard field query with a simple term or clause, Lucene allows us to modify the terms we pass in the query with modifiers. Mastering ElasticSearch


  4. Like the access modifiers that started this discussion, bounds checking is generally thought of in terms other than security: the prime benefit to bounds checking is that it leads to fewer bugs and more robust programs. Java Security 2nd Edition


  5. Documented annotations should be treated just like other modifiers such as protected or static for documentation purposes. Core Java Volume II Advanced Features 9th Edition


  6. In order to make a class cloneable, it may be necessary to remove final modifiers from some fields. Effective Java 2nd Edition


  7. In this release of Java, there are no defined modifiers. Java 7 New Features Cookbook


  8. To return to line 5, the public and static keywords are modifiers. Java in a Nutshell 5th Edition


  9. When using the -O option with javac prior to SDK 1.3, the compiler considered inlining methods defined with any of the following modifiers: private, static, or final. Java Performance Tuning 2nd Edition


  10. In any case, constants are typically defined with the static and final modifiers: Java 7 Recipes A Problem-Solution Approach
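
The examples above discuss access modifiers and other modifiers such as static and final. A small illustrative class that puts the common Java modifiers side by side:

```java
public class ModifiersDemo {

    // public: visible everywhere; static: one copy per class; final: constant.
    public static final int MAX_RETRIES = 3;

    // private: visible only inside this class.
    private int attempts;

    // protected: visible to subclasses and classes in the same package.
    protected int attempts() {
        return attempts;
    }

    public static void main(String[] args) {
        System.out.println("MAX_RETRIES = " + MAX_RETRIES);
    }
}
```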



346. Matching text for the word iterative:


  1. It is similar in concept to MapReduce [14], but with a natural graph API and much more efficient support for iterative computations over the graph. Pregel  -  A System for Large-Scale Graph Processing


  2. A friend has told him about agile development, a time-boxed iterative approach for implementing and releasing software. Gradle in Action


  3. So it is ideal for iterative and responsive Big Data applications. Hadoop Operations and Cluster Management Cookbook


  4. Examples of (ab)uses include forking web servers and gang-scheduled computation of iterative workloads. Apache Hadoop YARN Yet Another Resource Negotiator


  5. Most of these problems require iterative steps and/or knowledge of complex algorithms, which can be cumbersome to implement in MapReduce. Hadoop RealWorld Solutions Cookbook


  6. Finally, although not fully transparent, Haloop [4] provides task-level memoization techniques for memoization in the context of iterative data processing applications. Incoop MapReduce for Incremental Computations


  7. How it works… Mahout CVB version of LDA implements the Collapse Variable Bayesian inference algorithm using an iterative MapReduce approach: Hadoop MapReduce Cookbook


  8. A basic implementation uses a sequential iterative process to traverse the entire graph from the starting node, as seen in the algorithm presented in figure 7.7 Hadoop in Practice


  9. Iterative application If we apply this algorithm once, we will get node 1 marked as Done, several more (its immediate neighbors) as Current, and a few others as Pending. Hadoop Beginners Guide


  10. In order to reassemble the files into a single sorted file, we open some number of files (defined by io.sort.fac tor), and begin the iterative process of comparing the heads of the streams, picking the lowest key, and writing the result to a new file. Hadoop Operations



347. Matching text for the word acls (a short code sketch follows the examples):


  1. Notice that in case of conflicting ACLs, the first match wins. Monitoring with Ganglia


  2. If no ACLs have been configured, we will get an output similar to the following: Hadoop Operations and Cluster Management Cookbook


  3. What is not covered by this LDIF example is the adding of objectClasses and Access Control Lists (ACLs) Jenkins Continuous Integration Cookbook


  4. Service-level authorization is a capability that provides access control lists (ACLs) of which users have permission to access particular services. Professional Hadoop Solutions


  5. By default, all ACLs are set to *, which means that all users have permission to access each service, but on a real cluster you should lock the ACLs down to only those users and groups that should have access. Hadoop:The Definitive Guide 3rd Edition


  6. Each node has various meta-data, including three names of access control lists (ACLs) used to control reading, writing and changing the ACL names for the node. Chubby - The lock service for loosely-coupled distributed systems


  7. We displayed the list of current ACLs only to see what they were, and to eventually verify that the attributes for the file have been changed. Java 7 New Features Cookbook


  8. The metadata server is also responsible for storing ACLs (Access Control Lists) and other security related information about the tables. Tenzing - A SQL Implementation On The MapReduce Framework


  9. When enabling security, this feature should be enabled as well and meaningful ACLs configured. Hadoop Operations


  10. For example, does user X have the necessary permissions to execute program Y on system Z? Such privileges are often referred to as access control lists (ACLs) and determine who or what can access a given resource to perform a given operation. ActiveMQ in Action
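
The Java 7 example above lists a file's current ACLs. A minimal sketch using the java.nio.file ACL attribute view; the file name is invented, and the view is only available on file systems that support ACLs (it is null otherwise).

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.AclEntry;
import java.nio.file.attribute.AclFileAttributeView;
import java.util.List;

public class ListAcls {
    public static void main(String[] args) throws Exception {
        Path file = Paths.get("example.txt"); // hypothetical file
        AclFileAttributeView view =
                Files.getFileAttributeView(file, AclFileAttributeView.class);
        if (view != null) {
            List<AclEntry> acl = view.getAcl();
            for (AclEntry entry : acl) {
                // Each entry names a principal and the operations it may perform.
                System.out.println(entry.principal() + " " + entry.permissions());
            }
        }
    }
}
```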



348. Matching text for the word nfs:


  1. In the latest Ganglia versions, some of these metrics now have modules that report their values including NFS. Monitoring with Ganglia


  2. There are other installation methods other than FTP, for example, we can also use NFS and HTTP. Hadoop Operations and Cluster Management Cookbook


  3. Traditional solutions, exemplified by NFS [20], provide a straightforward model in which a server exports a file system hierarchy that clients can map into their local name space. Ceph A Scalable, High-Performance Distributed File System


  4. This time is mostly dominated by disk IO over NFS (since our disks are spread over a number of machines) The anatomy of a large-scale hypertextual Web search engine


  5. Scribe supports multiple data sinks, including HDFS, regular filesystems, and NFS. Hadoop in Practice


  6. Previously, data generated by Scribe was stored in expensive and hard to manage NFS servers. Apache Hadoop Goes Realtime at Facebook


  7. NFS:  May not be a better fit for many deployments as NFS is an external device, costly, less control on timeouts etc. Namenode High Availability


  8. As the dfs.name.dir directory on the SNN is unused, we can expose it to the NameNode via the Network File System (NFS) Hadoop in Action


  9. Be it block storage via a Storage Area Network (SAN) or file-based via Network File System (NFS) or similar protocols, these approaches constrain Hadoop by introducing unnecessary bottlenecks and additional shared devices that would have a critical impact on failure. Hadoop Beginners Guide


  10. Currently, an NFS filer is required as the shared storage, although there are plans to remove this dependency.2 As the active namenode writes to the edits log, the standby namenode is constantly replaying transactions to ensure it is up to date and ready to take over in the case of failure. Hadoop Operations



349. Matching text for the word rss:


  1. You will pull in the Jenkins RSS feeds using a Firefox add-on. Jenkins Continuous Integration Cookbook


  2. You can also subscribe to RSS feeds for various levels of log messages. Jenkins The Definitive Guide


  3. RabbitMQ development RSS feed—The RabbitMQ Mercurial repository offers an RSS feed with the latest code changes to the server. RabbitMQ in Action


  4. Tip  Other ideas for Puppet report processors include RSS feeds for new reports, IRC, XMPP or instant messaging, or SMS notifications of new reports. Pro Puppet


  5. It’s employed by many standards today such as SOAP and RSS, and used as an open data format for products such as Microsoft Office. Hadoop in Practice


  6. Streamy crawls thousands of RSS feeds and aggregates hundreds of millions of items from them. Hadoop:The Definitive Guide 3rd Edition


  7. To emulate this use case, the book makes use of a freely available data set on the Internet: the Delicious RSS feed. HBase:The Definitive Guide


  8. You can create Atom or RSS feeds in just a few lines that transform database records into the appropriate XML. Beginning Scala


  9. Then, in the early part of this century, RSS feeds, blogs, and wikis started cropping up, adding automation and linking of reference data in new ways that stretched the relational model. Cassandra The Definitive Guide


  10. How It Works If you’ve ever seen a simple RSS reader or the Javadocs, you’ll notice that the screen is divided into sections with dividers that allow the user to adjust. Java 7 Recipes A Problem-Solution Approach



350. Matching text for the word datagrams (a short code sketch follows the examples):


  1. Through the membership key, you can block/unblock datagrams from different addresses, drop membership, get the channel and/or multicast group for which this membership key was created, and more. Pro Java 7 NIO.2


  2. Troubleshooting Most problems with sFlow deployments occur because the sFlow datagrams are dropped somewhere between the sFlow agent and gmond. Monitoring with Ganglia


  3. Information is exchanged between two hosts in the form of IP packets, also known as IP datagrams. Java Network Programming and Distributed Computing


  4. In this book, we cover only the Transmission Control Protocol (TCP). The Java platform also supports the User Datagram Protocol (UDP), which can be used to send packets (also called datagrams) with much less overhead than that of TCP. Core Java Volume II Advanced Features 9th Edition


  5. This class implements a communication channel based on network datagrams. Java in a Nutshell 5th Edition


  6. The internet layer fragments the segments into IP datagrams of the necessary size for the local network and passes them to the host-to-network layer for transmission onto the wire. Java Network Programming 4th Edition


  7. The purpose of these two protocols is identical—to send and receive data packets (datagrams) over the network. ActiveMQ in Action


  8. Its sequencing with respect to other datagrams between the same two endpoints is not guaranteed. Fundamental Networking in Java


  9. Problem You would like to broadcast datagrams to zero or more hosts identified by a single address. Java 7 Recipes A Problem-Solution Approach
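
The examples describe datagrams as UDP packets sent without delivery or ordering guarantees. A minimal Java sketch that sends a single datagram; the host and port are placeholders.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class SendDatagram {
    public static void main(String[] args) throws Exception {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        InetAddress address = InetAddress.getByName("localhost"); // placeholder host
        try (DatagramSocket socket = new DatagramSocket()) {
            // One UDP datagram; the protocol gives no delivery or ordering guarantee.
            DatagramPacket packet =
                    new DatagramPacket(payload, payload.length, address, 9876);
            socket.send(packet);
        }
    }
}
```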



351. Matching text for the word visualvm:


  1. In this chapter, we will use VisualVM (http://visualvm.java.net/index.html) as the application monitoring tool. Pro Spring 3


  2. You start it in this manner so you can connect a local JMX console (like JConsole or Java VisualVM) to it. Spring in Practice


  3. You can try to reproduce the problem locally by downloading the file that the task is failing on and running the job locally, possibly using a debugger such as Java’s VisualVM. Hadoop:The Definitive Guide 3rd Edition


  4. This is what we’ll look at in the next section on how to monitor the Course Manager application using tools like VisualVM. Spring Roo in Action


  5. The following graphical tools can be used to monitor the HotSpot VM: JConsole, VisualGC, and VisualVM. Java Performance


  6. Solution 2 Analyze your program’s memory consumption by using VisualVM (an external GNU Open Source Java profiler and visualization tool). To use and install VisualVM, go to http://visualvm.java.net/, download and follow the steps for installation (usually uncompressing in the folder of your choice and then clicking the provided executable) Java 7 Recipes A Problem-Solution Approach


  7. As illustrated in figure 20.1, exporting Spring beans as JMX MBeans makes it possible for a JMX-based management tool such as JConsole or VisualVM to peer inside a running application to view the beans’ properties and invoke their methods. Spring in Action 4th Edition



352. Matching text for the word mesos:


  1. Mesos uses resource hoarding to achieve gang scheduling, and can potentially deadlock as a result. Omega - flexible, scalable schedulers for large compute clusters


  2. The system runs over the Mesos cluster manager [17], allowing it to share resources with Hadoop, MPI and other applications. Resilient Distributed Datasets - A Fault-Tolerant Abstraction for In-Memory Cluster Computing


  3. HoD proved itself as a versatile platform, anticipating some qualities of Mesos[17], which would extend. Apache Hadoop YARN Yet Another Resource Negotiator


  4. Paco is an evangelist for the Mesos and Cascading open source projects. Storm Real-Time Processing Cookbook


  5. Abstract We present Mesos, a platform for sharing commodity clusters between multiple diverse cluster computing frameworks, such as Hadoop and MPI. Mesos A Platform for Fine-Grained Resource Sharing in the Data Center



353. Matching text for the word tweets:


  1. The screenshot in Figure 10-10 shows the word cloud generated from a MapReduce view of CouchDB’s archived tweets. CouchDB The Definitive Guide


  2. You need to be able to provide credentials so that your plugin can send tweets. Jenkins Continuous Integration Cookbook


  3. In order to get the data we needed, we decided to use the Twitter river plugin to index some public tweets from Twitter. Mastering ElasticSearch


  4. Let’s say you have a list of strings representing tweets people are sending to you. Java 8 in Action Lambdas, Streams and Functional-style Programming


  5. By monitoring, collecting, and aggregating data from various Internet sources such as blogs, boards, news feeds, tweets, and social media, companies are using Hadoop to extract and aggregate information about their products, services, and competitors, discovering patterns and revealing upcoming trends important for understanding their business. Professional Hadoop Solutions


  6. HBase stores structured and semistructured data naturally so you can load it with tweets and parsed log files and a catalog of all your products right along with their customer reviews. HBase in Action


  7. Imagine you wanted to join that set of users with all their tweets, which is a much larger dataset than the set of users. Hadoop in Practice


  8. For example, emitting a stream of trend analysis by processing a stream of tweets. Apache Kafka


  9. Something seems to be broken: we have 12 tweets in our indices, and only one. Elasticsearch The Definitive Guide


  10. You can use what’s likely a familiar data model (Twitter’s) and see how users, time lines, and tweets all fit into a simple Cassandra data model. Cassandra The Definitive Guide



354. Matching text for the word bson:


  1. Each document has to be converted from BSON to a JavaScript object and then run through the "$where" expression. MongoDB The Definitive Guide 2nd Edition


  2. The format of the document in which the information is stored in MongoDB is BSON (binary JSON) Scala in Action


  3. Finally, you don’t need to worry about mapping the Java Article object to MongoDB’s native BSON format,8 because Spring Data MongoDB also handles that for you. Spring in Practice


  4. In addition to standard relational data, Drill supports (using ANSI SQL extensions) hierarchical data, including XML, JavaScript Object Notation (JSON), Binary JSON (BSON), Avro, protocol buffers, and so on. Professional Hadoop Solutions


  5. The database stores and retrieves documents, which can be XML, JSON, BSON, and so on. NoSQL Distilled


  6. MongoDB uses these BSON objects to efficiently serialize, transfer, and store data. Hadoop RealWorld Solutions Cookbook


  7. MongoDB doesn’t actually use JSON to store the data; rather, it uses an open data format developed by the MongoDB team called BSON (pronounced Bee-Son), which is short for Binary-JSON. The Definitive Guide to MongoDB


  8. Chapter 3 introduces the MongoDB drivers and MongoDB’s data format, BSON. MongoDB in Action


  9. MongoDB stores these documents in BSON, a binary derivative of JSON. Spring Data Modern Data Access for Enterprise Java



355. Matching text for the word batis:


  1. The support for Hibernate, MyBatis, JDO, and JPA makes already simple APIs even simpler, thus easing the burden on developers. Pro Spring 3


  2. For example, it provides a consistent and transparent way to work with a variety of different data access frameworks (e.g., JDBC, JPA, Hibernate, iBATIS, and JDO) Pro Spring MVC


  3. A quick note about iBATIS Apache iBATIS was retired in 2010 and has been superseded by MyBatis (mybatis.org) Spring in Practice


  4. There are many other tools for such upgrades, including Liquibase [Liquibase], MyBatis. NoSQL Distilled


  5. As an example, consider the ORM module in the Spring Framework: this module includes support for popular ORM tools such as Hibernate, iBATIS, and OpenJPA. Spring Dynamic Modules in Action


  6. Just configure your Spring application with the appropriate JDBC driver and data source, and then start using the Spring JdbcTemplate API, or MyBatis (a common SQL-mapping API), or even a non-SQL database platform. Spring Roo in Action


  7. It significantly simplified the implementation of data access layers, regardless of whether JDBC, Hibernate, TopLink, JDO, or iBatis was used as persistence technology. Spring Data Modern Data Access for Enterprise Java


  8. Spring doesn’t attempt to implement its own ORM solution but does provide hooks into several popular ORM frameworks, including Hibernate, Java Persistence API, Java Data Objects, and iBATIS SQL Maps. Spring in Action 4th Edition



356. Matching text for the word iff:


  1. Iff the predicate classIsTypeSafe is not true, the type checker must throw the exception VerifyError to indicate that the class file is malformed. The Java Virtual Machine Specification Java SE 7 Edition


  2. For every other kind of expression, the expression can throw an exception class E iff one of its immediate subexpressions can throw E. The Java Language Specification Java SE 7 Edition


  3. Apply a predicate p to all elements of this iterable object and // return true, iff there is at least one element for which p yields true. Programming Scala



357. Matching text for the word compiles:


  1. If it does not find that file, it automatically searches for Employee.java and compiles it. Core Java Volume I Fundamentals 9th Edition


  2. This code compiles and executes on Android without any conspicuous error. Monitoring with Ganglia


  3. The build task compiles your code, runs your tests, and assembles the JAR file, all in the correct order. Gradle in Action


  4. The apiClasses task is the task that compiles the Java source files, and processes the resources into the build directory, so Gradle will first invoke the apiClasses task before the apiJar task. Gradle Effective Implementation Guide


  5. Query Compiler – The component that compiles HiveQL into a directed acyclic graph of map/reduce tasks. Hive A Petabyte Scale Data Warehouse Using Hadoop


  6. For example, the build task in a Java project compiles and tests all of the code. Gradle Beyond The Basics


  7. When we run the Pig script, Pig internally compiles Pig commands to MapReduce jobs in an optimized form and runs it in a MapReduce cluster. Hadoop MapReduce Cookbook


  8. Pig compiles all the statements together and can locate the dependency and redundancy. Hadoop in Action


  9. The command compiles the Java examples and starts up a simple JMS consumer. ActiveMQ in Action


  10. All commands and queries go to the Driver, which compiles the input, optimizes the computation required, and executes the required steps, usually with MapReduce jobs. Programming Hive



358. Matching text for the word cloudera:


  1. In the past, you had to come up with a solution for this problem, but no more! Flume started as a project out of Cloudera when their integration engineers had to keep writing tools over and over again for their customers to import data automatically. Apache Flume Distributed Log Collection for Hadoop


  2. Hadoop was created by Doug Cutting, who now works as an architect at Cloudera and serves as chair of the Apache Software Foundation. Monitoring with Ganglia


  3. Prerequisites For those of you who are new to Hadoop, I strongly urge you to try Cloudera’s open source Distribution for Hadoop (http://www.cloudera.com/hadoop) Pro Hadoop


  4. The success of MR led to a wide range of third-party implementations (notably open-source Hadoop [15]), and a number of hybrid systems that combine parallel DBMSs with MR, offered by vendors like Aster, Cloudera, Greenplum, and Vertica. Dremel - Interactive Analysis of WebScale Datasets


  5. Prior to the release of MRUnit by Cloudera, even the simplest tests running in local mode would have to read from the disk and take several seconds each to set up and run. Hadoop RealWorld Solutions Cookbook


  6. After tackling Hadoop’s architecture and its weaknesses you’re probably ready to roll up your sleeves and get hands-on with Hadoop, so let’s take a look at how to get the Cloudera Distribution for Hadoop (CDH)12 up and running on your system, which you can use for all the examples in this book. Hadoop in Practice


  7. There are Hadoop distributions from the large, established enterprise vendors, including EMC, IBM, Microsoft, and Oracle, as well as from specialist Hadoop companies such as Cloudera, Hortonworks, and MapR. Hadoop:The Definitive Guide 3rd Edition


  8. Cloudera is now bringing Hadoop to a broader enterprise audience. Hadoop in Action


  9. Though Doug has now moved on to Cloudera (another prominent startup supporting the Hadoop community) and much of the Yahoo's Hadoop team has been spun off into a startup called Hortonworks, Yahoo remains a major Hadoop contributor. Hadoop Beginners Guide


  10. Special thanks to Christophe Bisciglia for giving me an opportunity at Cloudera and for the advice along the way. Hadoop Operations



359. Matching text for the word xen:


  1. The libvirt project has created a common set of tools for managing virtualization resources on different virtualization platforms, currently including: Xen, QEMU, KVM, LXC, OpenVZ, User Mode Linux, VirtualBox, and VMware ESX and GSX. Monitoring with Ganglia


  2. The nova-compute process is primarily a worker daemon that creates and terminates virtual machine instances via hypervisor's APIs (XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware, etc.) bk-compute-adminguide-grizzly


  3. If our system is running within Xen virtualization, those numbers can be closer to 200–300ms per gigabyte of memory used by Redis, depending on the Xen configuration. Redis in Action


  4. While libvirt may be the most common, novacompute also uses the Xen API, vSphere API, Windows Management Interface, and others to support other virtualization technologies. Deploying OpenStack


  5. However, the majority of OpenStack development is done on systems using KVM and Xen-based hypervisors. bk-admin-guide-cloud-latest


  6. Hardware device drivers can be provided by the hypervisor, either in a special virtual machine for isolation (for example in Oracle VM- and Xen-based solutions) or as part of the hypervisor itself (for example in VMware ESX) Oracle JRockit The Definitive Guide


  7. To boot a Xen image from a volume, the image you launch in must be the same type, fully virtualized or paravirtualized, as the one on the volume. user-guide



360. Matching text for the word initializer (a short code sketch follows the examples):


  1. However, C++ has a special initializer list syntax, such as. Core Java Volume I Fundamentals 9th Edition


  2. A compiler writer can work around this bug by limiting the maximum size of the generated Java Virtual Machine code for any method, instance initialization method, or static initializer (the size of any code array) to 65534 bytes. The Java Virtual Machine Specification Java SE 7 Edition


  3. The value of the variable Point.origin can never change, so it always refers to the same Point object, the one created by its initializer. The Java Language Specification Java SE 7 Edition


  4. The delegate injection point follows the normal rules for injection and therefore must be an injected field, initializer method parameter, or bean constructor method parameter: Java EE 7 Essentials


  5. This statement causes the driver class to be loaded, thereby executing a static initializer that registers the driver. Core Java Volume II Advanced Features 9th Edition


  6. The version that follows avoids this inefficiency with a static initializer: Effective Java 2nd Edition


  7. A variable declaration can also include an initializer: an expression that specifies an initial value for the variable. Java in a Nutshell 5th Edition


  8. Rather than initialize explicitly in the constructor (or class static initializer), it is left until access time for the variable to be initialized, using a test for null to determine if it has been initialized. Java Performance Tuning 2nd Edition


  9. As of this writing, the JLS does not yet clearly state that the visible initial value read for an initialized final field is the value assigned in its initializer or constructor. Concurrent Programming in Java Design Principles and Pattern


  10. The initGrades2() method combines array creation and initialization in one line using the special array initializer syntax: Java 7 Recipes A Problem-Solution Approach
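
The examples mention static initializers and the array initializer syntax. A small sketch showing both, with made-up contents:

```java
import java.util.HashMap;
import java.util.Map;

public class InitializerDemo {

    private static final Map<String, Integer> GRADES = new HashMap<>();

    // Static initializer: runs once when the class is loaded, before main().
    static {
        GRADES.put("math", 90);
        GRADES.put("history", 80);
    }

    public static void main(String[] args) {
        // Array initializer syntax: creation and initialization in one line.
        int[] grades = {90, 80, 70};
        System.out.println(GRADES + ", " + grades.length + " grades");
    }
}
```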



361. Matching text for the word ftp:


  1. Java has an extensive library of routines for coping with TCP/IP protocols like HTTP and FTP. Core Java Volume I Fundamentals 9th Edition


  2. The backup server could either be accessible over a mounted shared drive or you could transfer the file over FTP. Gradle in Action


  3. We will learn steps to configure this server, including the configuration of the following two services: DHCP and FTP. Hadoop Operations and Cluster Management Cookbook


  4. Once a jar file is created, you can FTP it to the cluster’s edge node, and executed using the command shown in Listing 3-4 Professional Hadoop Solutions


  5. This recipe requires an SSH Server and SSH Client, but any method of transferring binary data such as FTP is sufficient. Cassandra High Performance Cookbook


  6. Using curl curl is a lightweight command-line tool for transferring data to or from a server using various protocols, such as HTTP, HTTPS, and FTP. Beginning CouchDB


  7. These techniques are particularly helpful in situations where the data you’re working with is in a binary or semistructured form, or is only accessible via interfaces such as HTTP or FTP. Hadoop in Practice


  8. It functions much like a GUI-based FTP client in managing remote storage. Hadoop in Action


  9. The act of copying files to or from HDFS is more like SCP or FTP than working with an NFS mounted filesystem, for example. Hadoop Operations


  10. The blob itself is transferred out of bounds, by either FTP or HTTP. ActiveMQ in Action



362. Matching text for the word statically:


  1. Part of the transition from a single executable also included splitting out other components such as the Apache Portable Runtime (APR) library, which was previously being statically linked with gmond as well. Monitoring with Ganglia


  2. You may prefer Java for its statically typed nature and library support to implement your business logic. Gradle in Action


  3. Structural conditions — These are statically defined in a Workflow DAG (transitions and fork-join construct) Professional Hadoop Solutions


  4. The mapper class defines a statically initialized position to read the country from each line, as well as the regex pattern to split each line. Hadoop RealWorld Solutions Cookbook


  5. Many US hotels and conference centers are still woefully unprepared for a few hundred software developers, each with two or three devices on the wireless network and a seemingly insatiable appetite for bandwidth.) While I strongly prefer dependencies to be managed by my build tool, it might make sense for me to prepare lab materials with all of the dependencies statically located in the project in the style of old Ant builds.4 For some Java frameworks and APIs, chasing all of these JARs down by hand can be a burden. Gradle Beyond The Basics


  6. In contexts where the Writable is defined by type, such as in SequenceFile keys or values or as input to MapReduce in general, you need to subclass ArrayWritable (or TwoDAr rayWritable, as appropriate) to set the type statically. Hadoop:The Definitive Guide 3rd Edition


  7. In more basic designs, the ID is hashed statically to one of a fixed number of database machines. Hadoop in Action


  8. Keyspaces Keyspaces used to be defined statically in an XML configuration file, but as of 0.7, you can use the API to create keyspaces and column families. Cassandra The Definitive Guide


  9. When you want to configure a network of brokers, the first obvious question is, do you know the exact network address of each broker in the network? If the answer is yes, then you can proceed configuring your network statically and also connect your clients to predefined broker URIs. ActiveMQ in Action


  10. The concrete nature (and ideally, number) of all objects and threads comprising the subsystem are statically known. Concurrent Programming in Java Design Principles and Pattern



363. Matching text for the word zookeeper:


  1. It consisted of a federation of worker daemons (agents) configured from a centralized master (or masters) via Zookeeper (a federated configuration and coordination system) Apache Flume Distributed Log Collection for Hadoop


  2. The examples at http://hadoop.apache.org/zookeeper/docs/current/recipes.html provide ZooKeeper recipes for two-phase commit, leader election, barriers, queues, and locks. Pro Hadoop


  3. HBase uses Zookeeper for its management to ensure that all of its components are up and running. Professional Hadoop Solutions


  4. In Chapter 10, Third-party Libraries and Applications, the recipes Setting up Zookeeper to support Cages for transactional locking and Using Cages to implement an atomic read and set. Cassandra High Performance Cookbook


  5. The shell script in this recipe assumes that Zookeeper is running on the host localhost on port 2181; you can change this to suit your environment needs. Hadoop RealWorld Solutions Cookbook


  6. Cassandra system elects a leader amongst its nodes using a system called Zookeeper[13] Cassandra A Decentralized Structured Storage System


  7. Create two directories to store the HDFS data and Zookeeper data. Hadoop MapReduce Cookbook


  8. Windows is supported only as a development platform, not as a production platform.) Download a stable release of ZooKeeper from the Apache ZooKeeper releases page at http://zookeeper.apache.org/releases.html, and unpack the tarball in a suitable location: Hadoop:The Definitive Guide 3rd Edition


  9. Our plan is to use Zookeeper, Yahoo’s distributed consensus technology to build an automated failover solution. The Hadoop Distributed File System


  10. Zookeeper, also part of the Hadoop project, is a centralized service for maintaining configuration information and distributed synchronization across nodes in a cluster. Cassandra The Definitive Guide



364. Matching text for the word descriptors:


  1. Use the ulimit command to increase the permitted number of open files or file descriptors for the rrdcached process. Monitoring with Ganglia


  2. File Descriptors Hadoop Core uses large numbers of file descriptors for MapReduce, and the DFSClient uses a large number of file descriptors for communicating with the HDFS NameNode and DataNode server processes. Pro Hadoop


  3. All this takes processing time and enough sockets (or file descriptors) must be available. CouchDB The Definitive Guide


  4. Writing a plugin is surprisingly easy and doesn’t require a lot of additional descriptors. Gradle in Action


  5. But those descriptor files can be misconfigured, and so we may want to override the descriptors ourselves to ensure the dependencies are correct. Gradle Effective Implementation Guide


  6. Because of this, HBase tends to see issues not normally encountered by MapReduce clients: Running out of file descriptors. Hadoop:The Definitive Guide 3rd Edition


  7. Clients open nodes to obtain handles that are analogous to UNIX file descriptors. Chubby - The lock service for loosely-coupled distributed systems


  8. The new intermediate file, under output/test, is missing the quoted field descriptors. Hadoop in Action


  9. Linux supports, via Pluggable Authentication Modules (PAM) system, the ability to control resources such as file descriptors and virtual memory at the user level. Hadoop Operations


  10. This is common when working with a Java EE application server, since each one requires its own custom configuration file and possibly more deployment descriptors. ActiveMQ in Action



365. Matching text for the word delimited (a short code sketch follows the examples):


  1. For each named sink group, you need to specify the sinks it contains using the sinks property consisting of a space-delimited list of sink names: Apache Flume Distributed Log Collection for Hadoop


  2. Because the output is just a text string, that means you can reformat the content as XML, a tab delimited string, even as a UUencoded string if that is what you need. Getting Started with CouchDB


  3. Then add a list of checks that are delimited with a colon. Monitoring with Ganglia


  4. This recipe will use delimited text as an alternative to storing multiple columns under the same key. Cassandra High Performance Cookbook


  5. Note that the expression passed to the expand() method is a Groovy map literal—it is enclosed by square brackets, and a series of key/value pairs are delimited by commas, with the key and the value themselves separated by colons. Gradle Beyond The Basics


  6. It reads records that are delimited by a specific XML begin and end tag. Hadoop in Practice


  7. Popular formats, such as tab-delimited text files, are natively supported. Hadoop in Action


  8. For example, for delimited text format, the user can specify the delimiter, encoding, quoting, escaping, headers, etc. Tenzing - A SQL Implementation On The MapReduce Framework


  9. For simplicity, we keep to our previous validation strategy of looking for the number of fields and discarding lines that don’t break into exactly six tab-delimited fields. Hadoop Beginners Guide


  10. The users.properties file defines each user in a line-delimited manner along with its password, as shown: ActiveMQ in Action
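
As a small illustration of the tab-delimited validation described in quote 9, the following sketch splits a line on tabs and discards it unless it has exactly six fields; the sample record and its field layout are invented for the example.

    public class TabDelimited {
        public static void main(String[] args) {
            String line = "1\t2013-07-01\tUK\tlogin\tweb\tok";   // hypothetical record
            // Split on tabs; the -1 limit keeps trailing empty fields instead of dropping them.
            String[] fields = line.split("\t", -1);
            if (fields.length != 6) {
                System.out.println("discarding malformed line");
            } else {
                System.out.println("country = " + fields[2]);
            }
        }
    }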



366. Matching texts for the word rowkey:


  1. Using specific rowkey filtering, scanning will be faster over a subset, rather than scanning over the entire table. Web Crawling and Data Mining with Apache Nutch


  2. Within the enterprise, all applications can write their events to Cassandra with their own columns and the rowkey of the form appname:timestamp. NoSQL Distilled


  3. It’s often described as a sparse, distributed, persistent, multidimensional sorted map, which is indexed by rowkey, column. HBase in Action


  4. Iterate through all values for the rowkey and scan criteria (defined in the main method) Hadoop in Practice
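
In the classic (pre-1.0) HBase client API, the rowkey also drives range scans, which is what quote 1 alludes to. The sketch below is only an illustration: the table name events and the appname:timestamp key layout (loosely echoing quote 2) are assumptions, not a real schema.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RowkeyRangeScan {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "events");              // hypothetical table
            // Scan only rowkeys in [appname:1400000000, appname:1500000000), not the whole table.
            Scan scan = new Scan(Bytes.toBytes("appname:1400000000"),
                                 Bytes.toBytes("appname:1500000000"));
            ResultScanner scanner = table.getScanner(scan);
            for (Result r : scanner) {
                System.out.println(Bytes.toString(r.getRow()));
            }
            scanner.close();
            table.close();
        }
    }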



367. Matching texts for the word memcached:


  1. Originally targeted at embedded network hardware, sFlow collectors now exist for general-purpose operating systems as well as popular applications such as Tomcat, memcached, and the Apache Web Server. Monitoring with Ganglia


  2. More recent versions of HBase also provide a memcached-based intermediate layer between the user and the MapFiles (http://www.danga.com/memcached/) Pro Hadoop


  3. BigTable, Hadoop, SimpleDB, and memcached restrict object lookups by key for exactly these reasons. CouchDB The Definitive Guide


  4. Cache documents and query results using memcached or another caching system. Scaling CouchDB


  5. Service becomes more popular; too many reads hitting the database Add memcached to cache common queries. Hadoop:The Definitive Guide 3rd Edition


  6. Even projects like memcached are lumped into the NoSQL category, as if anything that is not an RDBMS is automatically NoSQL. HBase:The Definitive Guide


  7. Instead, we have used a combination of large clusters of MySQL databases and caching tiers built using memcached[9] Apache Hadoop Goes Realtime at Facebook


  8. Caching in Guava gives us more power and flexibility than using plain HashMap but is not as robust as EHCache or Memcached. Getting Started with Google Guava


  9. This was true of many tools and techniques for scaling data processing, including caching (for example, memcached), replication, sharding, and, of course, MapReduce/ Hadoop. Hadoop in Action


  10. For larger systems, this might include distributed caches such as memcached, EHCache, Oracle Coherence, or other related products. Cassandra The Definitive Guide
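
Quote 8 contrasts memcached with Guava's in-process cache. As a rough sketch of the latter (the key names, size bound, and expiry are arbitrary), CacheBuilder gives a bounded, expiring map without any external cache server:

    import java.util.concurrent.TimeUnit;

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    public class LocalCacheSketch {
        public static void main(String[] args) {
            Cache<String, String> cache = CacheBuilder.newBuilder()
                    .maximumSize(10_000)                       // bound the entry count
                    .expireAfterWrite(10, TimeUnit.MINUTES)    // drop stale entries
                    .build();
            cache.put("user:42", "Alice");
            System.out.println(cache.getIfPresent("user:42")); // Alice
            System.out.println(cache.getIfPresent("user:99")); // null (cache miss)
        }
    }

Unlike memcached, this cache lives inside a single JVM, so it avoids network hops but cannot be shared across processes.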



368. Matching texts for the word stm:


  1. He’s then taught the modern concurrency tools provided directly within the Java SDK before embarking upon an adventure through the exciting realms of STM and actors. Programming Concurrency on the JVM


  2. Instead of working with tables and rows, STM controls the access to shared memory. Scala in Action



369. Matching texts for the word charset:


  1. This method gets the path to the file, an iterable object over the char sequence, a charset to use for encoding, and options specifying how the file is opened. Pro Java 7 NIO.2


  2. Passing this to Charset.forName( ) produces the Charset object that can be used to decode the string. Thinking in Java 4th Edition


  3. You can use the CHARSET(), COLLATION(), and COERCIBILITY() functions to help debug errors related to character sets and collations. High Performance MySQL 3rd Edition


  4. Each supports the same attributes (other than a charset attribute that''s only present on the string transformer), so the primary difference is the type of the payload of the outgoing message. Spring Integration in Action


  5. To obtain a Charset, call the static forName method with either the official name or one of its aliases: Core Java Volume II Advanced Features 9th Edition


  6. The third parameter of this method defines the Charset to use. Java 7 New Features Cookbook


  7. Charset and its associated classes, CharsetEncoder and CharsetDecoder, define methods for encoding strings of characters into sequences of bytes and decoding sequences of bytes into strings of characters. Java in a Nutshell 5th Edition


  8. When you create a String instance from a byte array, you must provide a Charset argument to the String constructor so that the platform knows how to perform the mapping from the legacy encoding to Unicode. Java 7 Recipes A Problem-Solution Approach


  9. In the (hopefully unlikely) case that you need a different encoding, you can supply a Charset argument. Java SE 8 for the Really Impatient


  10. THR_LOCK_charset Protects the variables and data structures relevant to character set operations. Understanding MySQL Internals
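
Pulling the quotes together, here is a short example of Charset.forName, encode/decode, and the String constructor that accepts a Charset; the sample text and the ISO-8859-1 byte are illustrative only.

    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.Charset;

    public class CharsetDemo {
        public static void main(String[] args) {
            Charset utf8 = Charset.forName("UTF-8");
            // Encode a string to bytes and decode it back with the same Charset.
            ByteBuffer bytes = utf8.encode("café");
            CharBuffer chars = utf8.decode(bytes);
            System.out.println(chars);
            // The String constructor also accepts a Charset for legacy byte data.
            byte[] latin1 = { (byte) 0xE9 };                    // 'é' in ISO-8859-1
            System.out.println(new String(latin1, Charset.forName("ISO-8859-1")));
        }
    }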



370. Matching texts for the word yahoo:


  1. In addition to extremely large-scale pipelines for Yahoo! Search, scientists optimizing advertising analytics, spam filtering, and content optimization drove many of its early requirements. Apache Hadoop YARN Yet Another Resource Negotiator


  2. The next recipe, Running the Yahoo! Cloud Serving Benchmark for a more sophisticated load testing system. Cassandra High Performance Cookbook


  3. Also, pages that have perhaps only one citation from something like the Yahoo! homepage are also generally worth looking at. The anatomy of a large-scale hypertextual Web search engine


  4. You’ve most likely come across Hadoop as it’s been adopted by technology giants like Yahoo!, Facebook, and Twitter to address their big data needs, and it’s making inroads across all industrial sectors. Hadoop in Practice


  5. Around that time, Yahoo! got interested, and quickly put together a team that I joined. Hadoop:The Definitive Guide 3rd Edition


  6. BookKeeper is in production use at Yahoo! for guaranteed delivery of log messages to Hedwig Servers. Namenode High Availability


  7. Doug Cutting got it all started, and Yahoo had the foresight to support it early on. Hadoop in Action


  8. Pig Pig (http://hadoop.apache.org/pig) is a platform for data analytics developed at Yahoo! Cassandra The Definitive Guide


  9. Thanks, Yahoo Yahoo hired Doug Cutting in 2006 and quickly became one of the most prominent supporters of the Hadoop project. Hadoop Beginners Guide


  10. The team at Yahoo! ran into a number of scalability limitations that were difficult to overcome given Hadoop’s existing architecture and design. Hadoop Operations



371. Matching texts for the word programmatic:


  1. CouchDB’s JSON documents are great for programmatic access in most environments. CouchDB The Definitive Guide


  2. The Project instance gives you programmatic access to all Gradle features in your build, like task creation and dependency management. Gradle in Action


  3. Introduction Programmatic access to a cluster of Cassandra servers is done though the Application Programmer Interface. Cassandra High Performance Cookbook


  4. Dapper is used widely at Google, both directly through the Dapper user interface and indirectly through the programmatic APIs or applications built on top of those APIs. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  5. You have programmatic access to counters, and the job history command will also include the counters: Hadoop in Practice


  6. The listStatus() and globStatus() methods of FileSystem take an optional PathFilter, which allows programmatic control over matching: Hadoop:The Definitive Guide 3rd Edition


  7. Programmatic Authentication If you have set up authentication on your keyspace, your client application code will need to log in. Cassandra The Definitive Guide


  8. Using web, commandline, or programmatic interfaces to create objects, which can be everything from text files to images to MP3s, you can store and retrieve your data based on a hierarchical model. Hadoop Beginners Guide


  9. Oozie presents a REST service for programmatic management of workflows and status retrieval. Hadoop Operations


  10. This kind of programmatic modification of that otherwise inaccessible build logic can be very powerful. Building and Testing with Gradle



372. Matching texts for the word cms:


  1. HBase uses CMS GC as its default garbage collector for the old generation. HBase Administration Cookbook


  2. For instance, if you are using the blog.posts collection in the cms database, the namespace of that collection would be cms.blog.posts. MongoDB The Definitive Guide 2nd Edition


  3. In the XML world he is known as the developer of DB Generator for the Apache Cocoon project, the open source projects DBPrism and DBPrism CMS, the Lucene-Oracle integration by using Oracle JVM Directory implementation and in the Restlet.org project the Oracle XDB Restlet Adapter, an alternative to write native REST web services inside the database-resident JVM. Mastering ElasticSearch


  4. A content management system (CMS) allows for storing and serving content, as well as managing everything from a central location. HBase in Action


  5. This is mitigated by using the Concurrent Mark-Sweep Collector (CMS), enabled with the latter option shown earlier. HBase:The Definitive Guide


  6. Use the name or id attributes to name these variables as follows: public WebElement heightCMS; public WebElement weightKg; public WebElement Calculate; public WebElement bmi; public WebElement bmi_category; Selenium Testing Tools Cookbook


  7. Use Liferay tools (CMS, WCM, collaborative API and social API) to create your own Web sites and WAP sites with hands-on examples. Java 7 New Features Cookbook


  8. The CMS API (described later in this chapter) defines an API similar to JMS, and since the pyactivemq is just a wrapper around a CMS API implementation, we can expect a JMS-like API for Python. ActiveMQ in Action


  9. Figure 3-8a illustrates how a garbage collection cycle works in CMS. Java Performance


  10. In the XML world, he is known as the developer of the DB Generator for the Apache Cocoon project, the open source projects DBPrism and DBPrism CMS, the Lucene-Oracle integration by using Oracle JVM Directory implementation, and the Restlet.org project – the Oracle XDB Restlet Adapter, an alternative to writing native REST web services inside the database resident JVM. Apache Solr 4 Cookbook



373. Matching texts for the word extensible:


  1. Flume was created to meet this need and create a standard, simple, robust, flexible, and extensible tool for data ingestion into Hadoop. Apache Flume Distributed Log Collection for Hadoop


  2. Ganglia 1.0 was not extensible, whereas Ganglia today can publish custom metrics via Python and C modules or a simple command-line tool. Monitoring with Ganglia


  3. The nested data model backs a platform-neutral, extensible mechanism for serializing structured data at Google. Dremel - Interactive Analysis of WebScale Datasets


  4. In fact, the AM can run arbitrary user code, and can be written in any programming language since all communication with the RM and NM is encoded using extensible communication protocols6—as an example consider. Apache Hadoop YARN Yet Another Resource Negotiator


  5. Attain a more flexible and extensible framework with the world class Axis2 architecture. Mastering ElasticSearch


  6. It has extensible rules that can permit or deny traffic based on many attributes, including, but not limited to, source IP, destination IP, source port, and destination port. Cassandra High Performance Cookbook


  7. The DDL forms a clear, compact, extensible notation describing the layout of the binary records and naming the fields. Interpreting the Data Parallel Analysis with Sawzall


  8. One of the main design goals driving toward Hadoop’s major 1.0 release is a stable and extensible MapReduce API. Hadoop in Action


  9. First we’ve started the XMPP transport connector on port 61222 to expose the broker to clients via XMPP (the Extensible Messaging and Presence Protocol) ActiveMQ in Action


  10. Ant provided a way to describe a Java build that was extensible and platform independent, which was a large part of its success. Building and Testing with Gradle



374. Matching texts for the word customizing:


  1. If your view of the world is different, Gradle gives you the option of customizing the conventions. Gradle in Action


  2. Next, we will go to a web page for customizing services, for example, configuring the location for the NameNode directory. Hadoop Operations and Cluster Management Cookbook


  3. Customizing the Gradle wrapper We can change the names of the script files that are generated with the scriptFile property of the Wrapper task. Gradle Effective Implementation Guide


  4. Our books and publications share the experiences of your fellow IT professionals in adapting and customizing today’s systems, applications, and frameworks. Cassandra High Performance Cookbook


  5. CouchDB’s configuration file includes many options for customizing your CouchDB installation. Beginning CouchDB


  6. While this is the fundamental role of configurations, they also provide methods for querying the declared dependencies and customizing the resolution process. Gradle Beyond The Basics


  7. The site XML files (those with site in their filenames) will grow as you start customizing your Hadoop cluster, and it can quickly become challenging to keep track of what changes you’ve made, and how they relate to the default configuration values. Hadoop in Practice


  8. Inverted index Analysis and analyzers Built-in analyzers When analyzers are used Testing analyzers Specifying analyzers Mapping Core simple field types Viewing the mapping Customizing field mappings Updating a mapping Testing the mapping Complex core field types Multi-value fields Empty fields Multi-level objects Mapping for inner objects How inner objects are indexed Arrays of inner objects Empty search Query DSL Structure of a query clause Combining multiple clauses Queries and Filters Performance differences When to use which Most important queries and filters Combining queries with filters Filtering a query Just a filter A query as a filter Validating queries Understanding errors Understanding queries Conclusion Sorting Sorting by field values Multi-level sorting Sorting on multi-value fields String sorting and multi-fields What is relevance? Understanding the score. Elasticsearch The Definitive Guide


  9. In using FileInputFormat you focus on customizing RecordReader, which is responsible for parsing an input split into records and then parsing each record into a key/value pair. Hadoop in Action


  10. To begin customizing the ActiveMQ resource adapter to start up an embedded ActiveMQ instance, you need to make some changes to META-INF/ra.xml as shown in the following steps: ActiveMQ in Action



375. Matching texts for the word hiveql:


  1. Hive supports a SQL-like query language called HiveQL (HQL) to access data in its table. HBase Administration Cookbook


  2. Similar to Apache Pig, the Hive runtime engine translates HiveQL statements into a sequence of MapReduce jobs for execution. Hadoop Operations and Cluster Management Cookbook


  3. As will be seen throughout this book in the resemblances sections, Pig and SQL (or HiveQL) can be significantly more terse than the raw Hadoop implementations in Java. MapReduce Design Patterns


  4. HiveQL supports ad-hoc queries, joins, summarization, and so on, by transparently converting them into MapReduce execution. Professional Hadoop Solutions


  5. Hive lets you build analytics and manage data using a declarative, SQL-like dialect known as HiveQL. Hadoop RealWorld Solutions Cookbook


  6. Discussion The following HiveQL creates a table which you’ll partition by date as well as by country. Hadoop in Practice


  7. Its query language, HiveQL, is based on SQL, and anyone who is familiar with SQL would have little trouble writing queries in HiveQL. Hadoop:The Definitive Guide 3rd Edition


  8. Its target users remain data analysts who are comfortable with SQL and who need to do ad hoc queries , summarization , and data analysis on Hadoop-scale data.3 You interact with Hive by issuing queries in a SQL-like language called HiveQL. Hadoop in Action


  9. In particular, it provides a query language called HiveQL that closely resembles the common Structured Query Language (SQL) standard. Hadoop Beginners Guide


  10. Developers, analysts, and existing third-party packages already know and speak SQL (Hive’s dialect of SQL is called HiveQL and implements only a subset of any of the common standards) Hadoop Operations
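
HiveQL is usually entered in the Hive shell, but it can also be submitted from Java over JDBC. The sketch below assumes a HiveServer2 instance listening on localhost:10000 and a hypothetical weblogs table; only the driver class name org.apache.hive.jdbc.HiveDriver and the jdbc:hive2:// URL scheme are standard.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQlSketch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://localhost:10000/default", "", "");
            Statement stmt = conn.createStatement();
            // A HiveQL aggregation; "weblogs" is a hypothetical table.
            ResultSet rs = stmt.executeQuery(
                    "SELECT country, COUNT(*) FROM weblogs GROUP BY country");
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
            conn.close();
        }
    }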



376. Matching texts for the word wildcards:


  1. Intuitively speaking, wildcards with supertype bounds let you write to a generic object, while wildcards with subtype bounds let you read from a generic object. Core Java Volume I Fundamentals 9th Edition


  2. The most common modifiers, which you are surely familiar with, are wildcards. Mastering ElasticSearch


  3. Java supports wildcards as a flexible mechanism to allow subtyping for generics (more generally referred to as use-site variance) Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. We can specify all the part prefixed subdirectories as input to the dependent job using wildcards. Hadoop MapReduce Cookbook


  5. Bash by default performs globbing, meaning that it’ll expand wildcards like “*” Hadoop in Practice


  6. Fieldnames can be specified with wildcards—any field which matches the wildcard pattern will be included in the search. Elasticsearch The Definitive Guide


  7. If an input parameter is both a producer and a consumer, then wildcard types will do you no good: you need an exact type match, which is what you get without any wildcards. Effective Java 2nd Edition


  8. Lower-bounded wildcards are much less commonly used than upperbounded wildcards, and we discuss them later in the chapter. Java in a Nutshell 5th Edition


  9. Chapter 11 presents advanced features provided by ActiveMQ such as wildcards and composite destinations, advisory messages, virtual topics, some info about ActiveMQ plug-ins, and an introduction to message routing with Apache Camel. ActiveMQ in Action


  10. Wildcards are used to declare parameterized types that are either bounded or unbounded. Java 7 Recipes A Problem-Solution Approach
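
The producer/consumer split described in quotes 1 and 7 is often summarised as PECS (producer extends, consumer super). A minimal, self-contained sketch:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class WildcardCopy {
        // src is a producer of T (? extends T); dest is a consumer of T (? super T).
        static <T> void copy(List<? super T> dest, List<? extends T> src) {
            for (T item : src) {
                dest.add(item);
            }
        }

        public static void main(String[] args) {
            List<Integer> ints = Arrays.asList(1, 2, 3);
            List<Number> numbers = new ArrayList<>();
            copy(numbers, ints);            // read from the producer, write to the consumer
            System.out.println(numbers);
        }
    }

Here List<? extends T> can only be read from and List<? super T> can only be written to, which is exactly the read/write intuition in quote 1.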



377. Matching texts for the word sphinx:


  1. And finally, Appendix F is an introduction to Sphinx, a high-performance, full-text indexing system that can complement MySQL’s own abilities. High Performance MySQL 3rd Edition


  2. Indexing and searching is fast in Sphinx Search, and it supports distributed queries as well. SQL Antipatterns



378. Matching texts for the word versioning:


  1. It’s common practice that a version control system (VCS) manages the versioning of the build definition and source code files. Gradle in Action


  2. ElasticSearch tries to reduce these issues and provide additional mechanisms as versioning. Mastering ElasticSearch


  3. Things to think about: Major impact on storage design and performance Are these storage- or application-level features? Versioning of collaborative documents. Storage Architecture and Challenges


  4. If you are developing an application that requires revision history, it is highly recommended that you build an additional versioning layer rather than use CouchDB’s internal layer. Beginning CouchDB


  5. This is a very efficient way to create a prototype, but of course a properly managed plug-in will need a build of its own, so it can be decorated with all of the appropriate testing, versioning, and release processes that attend mature software development. Gradle Beyond The Basics


  6. Versioning—The ability for the file format to support backward or forward compatibility. Hadoop in Practice


  7. We are currently implementing an object-granularity clone operation to create object copies with copy-on-write behavior for efficient storage utilization, and are extending the RADOS client interface to allow transparent versioning for logical point-in-time copies across sets of objects (i. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  8. Versioning support The data types are capable of being versioned to support updates to the client API. Cassandra The Definitive Guide


  9. A note on versioning Flume has gone through some major changes in recent times. Hadoop Beginners Guide


  10. Versioning As newer versions of Hadoop are related, the process of upgrading the software on disk can be deferred to the package manager. Hadoop Operations



379. Matching texts for the word https:


  1. In order to get your authentication details you need to log in to https://dev. Mastering ElasticSearch


  2. This specification defines the mapping of servlets and JSP pages to the HTTP and HTTPS protocols, and the mapping of EJB components to IIOP and SOAP protocols. Java EE 7 Specification


  3. Using curl curl is a lightweight command-line tool for transferring data to or from a server using various protocols, such as HTTP, HTTPS, and FTP. Beginning CouchDB


  4. You can find more details about Hive commands from https://cwiki. Hadoop MapReduce Cookbook


  5. The NameNode has an embedded Jetty HTTP/HTTPS web server, which is used for the SecondaryNameNode to read images and merge them back. Hadoop in Practice


  6. For that reason, the URL class only works with schemes that the Java library knows how to handle, such as http:, https:, ftp:, the local file system (file:), and JAR files (jar:) Core Java Volume II Advanced Features 9th Edition


  7. As of Java 1.4, secure HTTP is also supported with the https:// protocol. Java in a Nutshell 5th Edition


  8. It is valid to specify the wild card IP 0.0.0.0 to indicate the HTTPS server should listen on all interfaces. Hadoop Operations


  9. Standalone Java applications can indicate the proxy server to use by setting the socksProxyHost and socksProxyPort properties (if you’re using a SOCKS proxy server), or http.proxySet, http.proxyHost, http.proxyPort, https.proxySet, https.proxy Host, https.proxyPort, ftpProxySet, ftpProxyHost, ftpProxyPort, gopherProxy Set, gopherProxyHost, and gopherProxyPort system properties (if you’re using protocol-specific proxies) Java Network Programming 4th Edition


  10. Secure HTTP (HTTP over SSL or HTTPS) is also supported by this transport: ActiveMQ in Action
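
In plain Java, an https:// URL goes through the same URL/URLConnection machinery as http://, with the TLS handshake handled underneath; the host below is a placeholder, and the cast to HttpsURLConnection is only needed to read TLS-specific details.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    import javax.net.ssl.HttpsURLConnection;

    public class HttpsFetch {
        public static void main(String[] args) throws Exception {
            URL url = new URL("https://example.com/");              // placeholder host
            HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
            conn.connect();                                          // perform the TLS handshake
            System.out.println("TLS cipher: " + conn.getCipherSuite());
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                System.out.println(in.readLine());
            }
        }
    }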



380. Matching texts for the word incremented:


  1. Databases in CouchDB have a sequence number that gets incremented every time the database is changed. CouchDB The Definitive Guide


  2. They’ll change less often (for example, to indicate a new feature) and have to be incremented manually. Gradle in Action


  3. For each record, numWords is incremented by the number of terms in input.txtField returned by the CountWords function. Dremel - Interactive Analysis of WebScale Datasets


  4. The cluster map also includes a list of down or inactive devices and an epoch number, which is incremented each time the map changes. Ceph A Scalable, High-Performance Distributed File System


  5. Generate a new vector with all the values incremented by one. Hadoop in Practice


  6. In the response, we can see that Elasticsearch has incremented. Elasticsearch The Definitive Guide


  7. For each word, its corresponding entry in a multiset called wordCount is incremented by one. Hadoop in Action


  8. Each time a process experiences an event, its clock is incremented by 1 Cassandra The Definitive Guide


  9. Each reduce task writes to a single file with the filename part-r-nnnnn, where nnnnn starts at 00000 and is incremented. Hadoop Beginners Guide


  10. When a data log is full, a new one will be created, and the log number incremented. ActiveMQ in Action
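
When a counter such as a change sequence number must be incremented safely from several threads, java.util.concurrent.atomic is the usual Java tool; this is a generic sketch, not how any of the systems quoted above implement their counters.

    import java.util.concurrent.atomic.AtomicLong;

    public class SequenceCounter {
        private static final AtomicLong sequence = new AtomicLong();

        public static void main(String[] args) {
            // Each call is incremented atomically by one, even under concurrent access.
            System.out.println(sequence.incrementAndGet());   // 1
            System.out.println(sequence.incrementAndGet());   // 2
        }
    }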



381. Matching texts for the word filesystems:


  1. The problem with HDFS and streaming data/logs HDFS isn’t a real filesystem, at least not in the traditional sense, and many of the things we take for granted with normal filesystems don’t apply here, for example being able to mount it. Apache Flume Distributed Log Collection for Hadoop


  2. The scan and DFSIO jobs are canonical benchmarks used to evaluate HDFS and other distributed filesystems run under Hadoop MapReduce; the results in table 1 are a coarse measure of the effect attributable to HDFS in our experiments. Apache Hadoop YARN Yet Another Resource Negotiator


  3. Its implementation addresses a number of problems that are present in a number of distributed filesystems such as Network File System (NFS) Professional Hadoop Solutions


  4. Scribe supports multiple data sinks, including HDFS, regular filesystems, and NFS. Hadoop in Practice


  5. Chapter 3 looks at Hadoop filesystems, and in particular HDFS, in depth. Hadoop:The Definitive Guide 3rd Edition


  6. Other developments include HBase running on filesystems other than Apache HDFS, such as MapR. HBase:The Definitive Guide


  7. You can store a big data set of (say) 100 TB as a single file in HDFS , something that would overwhelm most other filesystems. Hadoop in Action


  8. Like most filesystems, Hadoop has the concept of a home directory for each user. Hadoop Beginners Guide


  9. This is similar to common Linux filesystems such as ext3 Hadoop Operations


  10. Programming Hive introduces Hive, an essential tool in the Hadoop ecosystem that provides an SQL (Structured Query Language) dialect for querying data stored in the Hadoop Distributed Filesystem (HDFS), other filesystems that integrate with Hadoop, such as MapR-FS and Amazon’s S3 and databases like HBase (the Hadoop database) and Cassandra. Programming Hive
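
The quotes treat HDFS as one implementation behind Hadoop's generic FileSystem abstraction. As a sketch, the listing below resolves to HDFS, the local filesystem, or another implementation depending on the fs.defaultFS setting; the /user/hadoop path is a hypothetical example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListHome {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Resolves to HDFS, the local filesystem, S3, etc., depending on fs.defaultFS.
            FileSystem fs = FileSystem.get(conf);
            for (FileStatus status : fs.listStatus(new Path("/user/hadoop"))) {   // hypothetical path
                System.out.println(status.getPath() + "\t" + status.getLen());
            }
            fs.close();
        }
    }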



382. Matching texts for the word awt:


  1. Chapter 8 is a detailed discussion of the event model of the AWT, the abstract window toolkit. Core Java Volume I Fundamentals 9th Edition


  2. The AWT event-handling model requires application developers to implement a listener interface and have that listener register with each component that needs to be monitored. Java Network Programming and Distributed Computing


  3. A number of classes in the java.awt.geom package, such as Point2D.Double, are not serializable. Core Java Volume II Advanced Features 9th Edition


  4. Prominent examples include the Point and Dimension classes in the java.awt package. Effective Java 2nd Edition


  5. However, they were implemented as part of the private com.sun.awt. Java 7 New Features Cookbook


  6. For the second edition, John Zukowski reviewed my Java 1.1 AWT quick reference material, and George Reese reviewed most of the remaining new material. Java in a Nutshell 5th Edition


  7. In comparison to the AWT or I/O, there have been almost no changes and only a few additions. Java Network Programming 4th Edition


  8. This technique is used extensively in the java.awt and javax.swing packages, which define interfaces and abstract classes associated with different kinds of event-handling methods. Concurrent Programming in Java Design Principles and Pattern


  9. For example, you can’t serialize any Swing components (like JButton, JSpinner), and you can’t serialize File objects, but you can serialize the Color class (awt.color, to be more precise) Java 7 Recipes A Problem-Solution Approach


  10. NOTE: This method uses the Color and Image classes from JavaFX, not from java.awt. Java SE 8 for the Really Impatient



383. Matching texts for the word dev:


  1. Packaged distributions (particularly on Linux and Debian) typically distribute those headers in a dev package and install them into the directory /usr/include. Monitoring with Ganglia


  2. To push your application to a development database, you could use couchapp push dev. CouchDB The Definitive Guide


  3. Special thanks to Spencer Allain, Jonathan Keam, and Robert Wenner for thoroughly reading every chapter of the book and providing me with line-by-line edits and comments at different stages of development; Michael McGarr and Samuel Brown for bouncing around ideas that involved content on continuous delivery and DevOps; and Baruch Sadogursky from JFrog for the technical review of chapter 14 and for promoting the book even before it was released. Gradle in Action


  4. In the sample build file, we only want to sign the artifacts if the uploadArchives task is part of the Gradle task graph to be executed and if the version of the project doesn’t end with the String value DEV: Gradle Effective Implementation Guide


  5. In order to get your authentication details you need to log in to https://dev. Mastering ElasticSearch


  6. For those interested in developing or fixing features in Cassandra, it is common to join the dev mailing list and chat room. Cassandra High Performance Cookbook


  7. Fuse development files can be found in fusedevel RPM for Redhat/Fedora and in libfuse-dev package for Debian/Ubuntu. Hadoop MapReduce Cookbook


  8. Replace /dev/md1 with the name of your disk device (consult /etc/fstab for device names): Hadoop in Practice


  9. Download the JDBC driver for your database; for MySQL, we find it at http://dev. Hadoop Beginners Guide


  10. The continued growth in software engineering maturity today can be seen in the DevOps movement, climaxing with the publication last year of Jez Humble and David Farley’s book Continuous Delivery (AddisonWesley) Building and Testing with Gradle



384. Matching texts for the word combinators:


  1. You can easily combine these functions with the orElse and andThen combinators you learned at the beginning of this section: Scala in Action


  2. In the next chapter, we look more closely at functional programming techniques such as higher-order functions, currying, persistent data structures, lazy lists, pattern matching, and combinators. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. The MapReduce model originates from the map and reduce combinators concept in functional programming languages such as Lisp. Professional Hadoop Solutions


  4. Examples can be found in standard libraries such as parser combinators and actors. Programming in Scala 2nd Edition


  5. We want to start writing combinators that let us avoid explicitly passing along the  state. Functional Programming in Scala


  6. External DSLs with Parser Combinators When you write a parser for an external DSL, you can use a parser generator tool like Antlr (see [Antlr]) Programming Scala


  7. Let’s explore advanced CSS selectors such as adjacent sibling combinators and pseudoclasses as described in the following sections. Selenium Testing Tools Cookbook


  8. In this chapter, we’re going to explore combinators and Scala’s parser combinatory library. Beginning Scala
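
Java 8's java.util.function interfaces provide the same flavour of small composable combinators as the orElse/andThen pair mentioned in quote 1; the two functions below are arbitrary examples.

    import java.util.function.Function;

    public class CombinatorDemo {
        public static void main(String[] args) {
            Function<Integer, Integer> doubler = x -> x * 2;
            Function<Integer, Integer> addOne = x -> x + 1;
            // andThen builds a new function out of the two existing ones.
            Function<Integer, Integer> combined = doubler.andThen(addOne);
            System.out.println(combined.apply(5));   // (5 * 2) + 1 = 11
        }
    }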



385. Matching texts for the word oop:


  1. Object-oriented programming (OOP) is now in the mainstream of programming practice, and Java is an object-oriented programming language. Core Java Volume I Fundamentals 9th Edition


  2. As Java programmers, we’re well versed in object-oriented programming (OOP) Programming Concurrency on the JVM


  3. However, I will be introducing the concepts of object-oriented programming (OOP) and Java’s basic control mechanisms. Thinking in Java 4th Edition


  4. This approach borrowed from an OOP world solves the problem of maintaining jobs sharing a number of identical properties. Jenkins The Definitive Guide


  5. Because in the client you have to create a JSON object, you’ll import the JSONStringer object that lets you create JSON strings in an OOP way. RabbitMQ in Action


  6. An OOP is normally the same size as the machine pointer. Cassandra High Performance Cookbook


  7. Other languages that have integrated some elements of functional programming into OOP include Ruby, Smalltalk, and Python. Programming in Scala 2nd Edition


  8. Smalltalk captured the essence of object-oriented programming (OOP) as that model of programming entered the mainstream. Programming Scala


  9. But in fact, most of the Java classes we define have getters and setters, so there is data exposed in OOP. Beginning Scala


  10. The HotSpot VM internally accesses the instanceKlass using an internal data structure called a klassOop. Java Performance



386. Matching texts for the word filenames:


  1. It also assumes that filenames never change; otherwise, the source would loose its place on restarts as to which files have been sent and which have not. Apache Flume Distributed Log Collection for Hadoop


  2. As of Ganglia 3.3, hostnames are converted to lowercase, RRD files are created with lowercase filenames, and lowercase hostnames should be used in URLs to access the metrics. Monitoring with Ganglia


  3. This section will explain how to configure your project to use custom build filenames. Gradle in Action


  4. The matching pattern style is described as an ANT-style matching pattern because the ANT build tool uses this style to define a syntax for matching filenames in file trees. Gradle Effective Implementation Guide


  5. Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "What we would like to do is, use the BM25 similarity model for the name field and the contents field." Mastering ElasticSearch


  6. A real-world build routinely copies files from place to place, recursing directory trees, pattern-matching filenames, and performing string operations on file content. Gradle Beyond The Basics


  7. By Hadoop convention filenames for these files end in .deflate. Hadoop in Action


  8. Indicates new terms, URLs, email addresses, filenames, and file extensions. Cassandra The Definitive Guide


  9. On a computer filesystem, filenames (keys) allow access to any sort of data, such as text, images, and sound (values) Hadoop Beginners Guide


  10. Italic Indicates new terms, URLs, email addresses, filenames, and file extensions. Building and Testing with Gradle



387. Matching texts for the word jta:


  1. However, for those applications that were built on EJB or need to deploy the Spring-based applications in a JEE container and utilize the application server’s enterprise services (e.g., JTA Transaction Manager, data source connection pooling, JMS connection factories, etc.), Spring also provides simplified support for those technologies. Pro Spring 3


  2. There is a transaction manager that handles the transactions, and it even has support for JTA (Java Transaction API), which enables distributed or global transactions. Pro Spring MVC


  3. Additional threads that are created for any purpose should not attempt to start JTA transactions. Java EE 7 Specification


  4. You can use the Java Transaction API (JTA) to make transactions span both databases: the batch tables and the business tables will always be synchronized, but you’ll add overhead because managing multiple transactional resources is more expensive than managing just one. Spring Batch in Action


  5. Spring deals with common resource management “plumbing” code and allows for declarative transaction management in various situations ranging from managed environments with JTA (Java Transaction API, the transaction standard in Java) to native database transaction with plain JDBC. Spring Dynamic Modules in Action


  6. TransactionScoped is a new CDI scope that defines bean instances whose life cycle is scoped to the currently active JTA transaction. Java EE 7 Essentials


  7. It’s transactional, supporting JTA/JTS, two-phase commit, deadlock detection, and MVCC. Cassandra The Definitive Guide


  8. Note that the JTA transaction manager has been added and a reference to it has been added to the listener-container. ActiveMQ in Action


  9. The typical situation when running inside the Java EE container environment is that the standard Java Transaction API (JTA) is used. Pro JPA2


  10. It is written predominantly in Java and leverages a custom storage format and the facilities of the Java Transaction Architecture (JTA) to provide XA transactions. Spring Data Modern Data Access for Enterprise Java
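
Inside a Java EE container, the programmatic face of JTA is UserTransaction, looked up under the standard JNDI name java:comp/UserTransaction. The sketch below only shows the begin/commit/rollback shape and assumes the actual work against the two transactional resources is filled in elsewhere; it will not run outside a container.

    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;

    public class JtaSketch {
        public void transfer() throws Exception {
            UserTransaction tx = (UserTransaction) new InitialContext()
                    .lookup("java:comp/UserTransaction");
            tx.begin();
            try {
                // ... work against two XA resources, e.g. batch tables and business tables ...
                tx.commit();
            } catch (Exception e) {
                tx.rollback();      // keep both resources consistent on failure
                throw e;
            }
        }
    }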



388. Matching texts for the word finalizer:


  1. Gradle’s answer to such a scenario is finalizer tasks, which are regular Gradle tasks scheduled to run even if the finalized task fails. Gradle in Action


  2. If you don’t want to return this key, you can use a finalizer to change the final accumulator document into anything, even a nondocument (e.g., a number or string) MongoDB The Definitive Guide 2nd Edition


  3. The particular definition of finalize that can be invoked for an object is called the finalizer of that object. JSR-133 Java Memory Model and Thread


  4. If an object declares a finalizer, the finalizer is executed before the object. The Java Language Specification Java SE 7 Edition


  5. However, since a finalize method can contain arbitrary code, there are several potentially bad scenarios to consider as well, for example, the case that a finalizer resurrects the object, or clones a new object from the dying one, in effect preventing an object from being garbage collected. Oracle JRockit The Definitive Guide


  6. It can take arbitrarily long between the time that an object becomes unreachable and the time that its finalizer is executed. Effective Java 2nd Edition


  7. A finalizer in Java is the opposite of a constructor. Java in a Nutshell 5th Edition


  8. Finalizers postpone garbage collection until the finalizer is run, adding yet more overhead to the GC algorithm. Java Performance Tuning 2nd Edition


  9. These arise from a combination of internal HotSpot VM threads and HotSpot VM library related threads such as the reference handler and finalizer threads. Java Performance


  10. In this case, you use the finalizer to calculate the average number of votes per review. MongoDB in Action



389. Matching texts for the word sflow:


  1. In fact, in cooperation with the sFlow agent, Ganglia may be used to monitor network gear such as routers and switches (see Chapter 8 for more information) Monitoring with Ganglia



390. Matching texts for the word subprojects:


  1. The Maven Reactor lays out the build order: the parent pom, then the subprojects. Netty in Action MEAP V08


  2. You’ll start by deconstructing the existing project structure and break out individual, functional subprojects. Gradle in Action


  3. In this case you have only one project, but multiproject builds will declare all the subprojects here. Scala in Action


  4. Example 3-2 has three builds, the parent project and two subprojects. Gradle Beyond The Basics


  5. Over the course of three years, the Hadoop project has blossomed and spun off half a dozen subprojects. Hadoop:The Definitive Guide 3rd Edition


  6. The most prominent and well-supported ones have officially become subprojects under the umbrella of the Apache Hadoop   project.1 These subprojects include. Hadoop in Action


  7. Parts of Hadoop The top-level Hadoop project has many component subprojects, several of which we’ll discuss in this book, but the two main ones are Hadoop Distributed File System (HDFS) and MapReduce. Hadoop Beginners Guide


  8. Instead, you’ll find three subprojects called Common, HDFS, and MapReduce shown as tabs along the top of the page. Hadoop Operations


  9. We’ve also seen how NMS and CMS subprojects help create more complex, JMS-like APIs for environments that deserve this kind of support. ActiveMQ in Action


  10. However, for a given build, not all tasks must be present in the top-level build file, since Gradle supports dependent subprojects, or nested builds. Building and Testing with Gradle



391. Matching texts for the word iterates:


  1. The rest of the code block defines the parameters that will be passed to the metric init function and iterates through each metric definition, calls the specified callback function with the metric name as a parameter, and finally prints out the metric value that was gathered. Monitoring with Ganglia


  2. Iterates through all subprojects accessible through root projectSets custom build filename for a subproject by using subprojects’ names, Gradle in Action


  3. The while-loop of the algorithm (Line 5) iterates over all atomic and record-valued fields contained in a given record. Dremel - Interactive Analysis of WebScale Datasets


  4. The reduce worker iterates over the sorted intermediate data and for each unique intermediate key encountered, it passes the key and the corresponding set of intermediate values to the user’s Reduce function. MapReduce - Simplied Data Processing on Large Clusters


  5. When an active OSD receives an updated cluster map, it iterates over all locally stored placement groups and calculates the CRUSH mapping to determine which ones it is responsible for, either as a primary or replica. Ceph A Scalable, High-Performance Distributed File System


  6. The process iterates until the skip range is within the acceptable size. Hadoop in Action


  7. Our engine iterates on input rows and uses generated procedures that do both. Tenzing - A SQL Implementation On The MapReduce Framework


  8. The example created a Ruby script that opens an Avro datafile, iterates through each datum, and displays it based on explicitly named fields. Hadoop Beginners Guide


  9. Next it iterates through all available queues and prints their current size and number of consumers subscribed to them. ActiveMQ in Action


  10. This Groovy code declares a list of four strings (each of which is the name of a poet), then iterates over that list. Building and Testing with Gradle



392. Matching texts for the word connectors:


  1. Two Jetty connectors are defined for HTTP and for the secure TLS traffic. Jenkins Continuous Integration Cookbook


  2. This specification also describes standard ways to extend Java EE with connectors to other non-Java EE application systems, such as mainframe systems and ERP systems. Java EE 7 Specification


  3. Sqoop has the notion of Connectors, which contain the specialized logic to read and write to external systems. Hadoop in Practice


  4. Applications using the Thrift, JDBC, and ODBC connectors need to run a Hive server to communicate with Hive. Hadoop:The Definitive Guide 3rd Edition


  5. To add a user-defined connection, simply click on the Connectors folder and click on the Create Connection button. Oracle JRockit The Definitive Guide


  6. Many of these connectors are open source, while others are free or available from commercial vendors at a cost. Hadoop Operations


  7. It discusses ActiveMQ URIs and all the transport connectors for both client-to-broker and broker-to-broker communications including TCP, NIO, STOMP, failover, SSL, HTTP/S, and much more. ActiveMQ in Action


  8. The server consists of one or more connectors that share a single engine component. Java Performance


  9. Such tools are typically designed to make it easy to load content stored in various systems, and sometimes provide pre-built connectors to common content stores, such as web sites, databases, popular content management systems, filesystems, etc. Lucene in Action 2nd Edition


  10. Many NoSQL databases have implemented Hive connectors using custom adapters. Programming Hive



393. Matching texts for the word buf:


  1. If it is successful create a ByteBuf to hold the data. Netty in Action MEAP V08


  2. The default implementation of this method simply calls the engineDigest( ) method and then copies the result into buf. Java Security 2nd Edition


  3. The “buf.type” that appears in the interpreter responses in this and several other examples in this section is a singleton type. Programming in Scala 2nd Edition


  4. Abbreviations, as in buf holding a pointer to a buffer of some kind. The Java Language Specification Java SE 7 Edition


  5. The buf argument is the address of the String buffer that stores the resulting message. Understanding MySQL Internals
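
Most of the quotes use buf as the name of some byte buffer. In standard Java NIO the closest analogue is ByteBuffer, whose write-then-flip-then-read cycle looks like this (the payload string and buffer size are arbitrary):

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class BufDemo {
        public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocate(64);
            buf.put("hello".getBytes(StandardCharsets.UTF_8));   // write into the buffer
            buf.flip();                                           // switch from writing to reading
            byte[] out = new byte[buf.remaining()];
            buf.get(out);
            System.out.println(new String(out, StandardCharsets.UTF_8));
        }
    }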



394. Matching texts for the word subproject:


  1. Gradle figures out for you which of the subproject dependencies need to be rebuilt, without having to store a subproject’s artifact in the local cache. Gradle in Action


  2. We apply the Java plugin to each subproject and add a testCompile dependency on the JUnit libraries. Gradle Effective Implementation Guide


  3. This causes Gradle to look explicitly in the individual subproject’s directory for the grammars directory, rather than only at the root project level. Gradle Beyond The Basics


  4. Chukwa is an Apache subproject of Hadoop that also offers a large-scale mechanism to collect and store data in HDFS. Hadoop in Practice


  5. If you need to know which package a class is in, you can easily look it up in Hadoop’s Java API documentation for the relevant subproject, linked to from the Apache Hadoop home page at http://hadoop.apache.org/ Hadoop:The Definitive Guide 3rd Edition


  6. Hadoop started out as a subproject of Nutch , which in turn was a subproject of Apache Lucene. Hadoop in Action


  7. HBase HBase is a clone of Google’s Bigtable, originally created for use with Hadoop (it’s actually a subproject of the Apache Hadoop project) Cassandra The Definitive Guide


  8. Doug started work on the implementations of these Google systems, and Hadoop was soon born, firstly as a subproject of Lucene and soon was its own top-level project within the Apache open source foundation. Hadoop Beginners Guide


  9. The Common subproject was to contain code shared by both HDFS and MapReduce, while HDFS and MapReduce would be home to their respective components. Hadoop Operations


  10. Ant, though the older of the JVM build tools, has been supplemented by the Ivy subproject to accomplish exactly this. Building and Testing with Gradle



395. Matching texts for the word schedulers:


  1. We compare this approach to existing cluster scheduler designs, evaluate how much interference between schedulers occurs and how much it matters in practice, present some techniques to alleviate it, and finally discuss a use case highlighting the advantages of our approach – all driven by real-life Google production workloads. Omega - flexible, scalable schedulers for large compute clusters


  2. Other administrative tasks include the management of Hadoop jobs, for example configuring job scheduling policy with schedulers. Hadoop Operations and Cluster Management Cookbook


  3. This makes it possible to tightly enforce global scheduling properties (different schedulers in YARN focus on different global properties, such as capacity or fairness), but it requires the scheduler to obtain an accurate understanding of applications’ resource requirements. Apache Hadoop YARN Yet Another Resource Negotiator


  4. When talking about merge policy and merge schedulers it would be nice to visualize it. Mastering ElasticSearch


  5. The good news is that Hadoop scheduler is pluggable, and it comes with two other schedulers. Hadoop MapReduce Cookbook


  6. Summary The amount of parallelism available to you is a function of your cluster capacity, the other jobs running at the same time as your job, and any schedulers configured for your environment. Hadoop in Practice


  7. The default in MapReduce 1 is the original FIFO queue-based scheduler, and there are also multiuser schedulers called the Fair Scheduler and the Capacity Scheduler. Hadoop:The Definitive Guide 3rd Edition


  8. Fortunately, starting with version 0.19, Hadoop has a pluggable architecture for the scheduler, and two new schedulers have become available for resolving job contention. Hadoop in Action


  9. Unless we use one of the alternative schedulers that we will discuss in later sections, the FIFO scheduler dedicates the full cluster to the sole currently running job. Hadoop Beginners Guide


  10. In Chapter 7, we’ll cover how each of the schedulers may be configured to suit different environments and use cases. Hadoop Operations



396. Matching texts for the word atomically:


  1. The WAL is used to track all input and output from the channel in an atomically safe way. Apache Flume Distributed Log Collection for Hadoop


  2. Storing all metadata in a single Bigtable row allows us to update it atomically through a single Bigtable transaction. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  3. The entry data is accessed atomically, and changes are ordered. Pro Hadoop


  4. When a reduce task completes, the reduce worker atomically renames its temporary output file to the final output file. MapReduce - Simplied Data Processing on Large Clusters


  5. When it has moved all but a nominal amount of the data, it uses a transaction to atomically move that nominal amount and update the metadata for the two Paxos groups. Spanner - Google’s Globally-Distributed Database


  6. It turns out there’s no direct analogue in the relational world! Column families exist in HBase as a way for a single row to contain disjoint sets of columns in a way that’s physically efficient but can be processed atomically. HBase in Action


  7. Files must be atomically moved into the source directory (file moves in both the Linux3 and HDFS filesystems are atomic) Hadoop in Practice


  8. Cell values can be interpreted as counters and updated atomically. HBase:The Definitive Guide


  9. The contents of a file are read atomically and in their entirety. Chubby - The lock service for loosely-coupled distributed systems


  10. In particular, composite read/write operations such as the "++" operation on volatile variables are not performed atomically. Concurrent Programming in Java Design Principles and Pattern
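
Quote 7's requirement that files be moved atomically into a source directory maps directly onto java.nio.file. The paths below are hypothetical, and ATOMIC_MOVE throws AtomicMoveNotSupportedException if source and target live on different filesystems.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class AtomicDrop {
        public static void main(String[] args) throws Exception {
            Path staged = Paths.get("/tmp/staging/data.csv");     // hypothetical paths
            Path live = Paths.get("/data/incoming/data.csv");
            // Readers of /data/incoming never see a half-written file.
            Files.move(staged, live, StandardCopyOption.ATOMIC_MOVE);
        }
    }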



397. Matching texts for the word classloader:


  1. Launch the console and issue the x command, which clears the classloader cache. Core Java Volume I Fundamentals 9th Edition


  2. If a user-defined classloader prefetches binary representations of classes and interfaces, or loads a group of related classes together, then it must reflect loading errors only at points in the program where they could have arisen without prefetching or group loading. The Java Virtual Machine Specification Java SE 7 Edition


  3. This is generally a good idea, as it can avoid classloader issues related to conflicts with Ant’s own libraries. Jenkins The Definitive Guide


  4. Classes in Java are loaded by a classloader object, which consults the directories and. Java Threads 3rd Edition


  5. It indicates that the dependencies you’re providing will be available to the classloader during the rest of the build script execution, which is exactly what we need. Gradle Beyond The Basics


  6. The JNDI naming context, classloader, and security context is propagated to the proxied object. Java EE 7 Essentials


  7. The option of splitting the classes per classloader is also available in the Type Graph. Oracle JRockit The Definitive Guide


  8. A further extension to this idea is to include your own classloader that opens the ZIP/JAR file itself and reads in all files sequentially, loading them into memory immediately. Java Performance Tuning 2nd Edition


  9. This will cause classloader issues and will prevent ActiveMQ from being deployed correctly. ActiveMQ in Action


  10. Java 7 simply adds a close method to close the classloader. Java SE 8 for the Really Impatient



398. Matching texts for the word deployer:


  1. You’ll configure the Maven deployer to upload both files to a local directory. Gradle in Action


  2. We must configure the repository for our uploadArchives task via a closure argument of the mavenDeployer() method. Gradle Effective Implementation Guide


  3. Because there’s no gold copy of the configuration for a given set of server instances, the deployer has to go onto each server and update the configuration manually. Spring in Practice


  4. The Application Assembler is responsible for providing assembly instructions describing external dependencies of the application that the Deployer must resolve in the deployment process. Java EE 7 Specification


  5. At the heart of the Spring DM web support is its web extender, which contains a WAR deployer, which we’ll take a look at to see how it differs from a standard web deployer. Spring Dynamic Modules in Action


  6. You decide what components to include via the Geronimo deployer. ActiveMQ in Action


  7. For GlassFish, it is recommended that the deployer use commit option C for entity beans that are used only in transactions or used only once or twice (hence no benefit in caching the instances) Java Performance


  8. The deployment process is optimized for these kinds of deployment scenarios to minimize the amount of metadata that a deployer has to specify. Pro JPA2



399. Matching texts for the word traversable:


  1. The file chooser calls the isTraversable method to decide whether to open a directory when a user clicks on it. Core Java Volume I Fundamentals 9th Edition


  2. Then we instantiate a variable x as an anonymous subclass of A, such that type B is stabilized at Traversable[Int] Scala in Depth


  3. Traversable is the parent trait for all the collection types in Scala, and the ++ method is only defined in this trait. Scala in Action


  4. Nodes know about INCOMING and OUTGOING relationships that are traversable both ways. NoSQL Distilled


  5. If the current vertex value is still equal to Integer.MAX, we know that the graph never received any incoming messages intended for that vertex, which implies it is not traversable by the source vertex. Hadoop RealWorld Solutions Cookbook


  6. This method selects those elements of a collection that pass a test the user sup3A foreach method is defined in trait Traversable, a common supertrait of List, Set, Array, and Map. Programming in Scala 2nd Edition


  7. But traversable data types are too numerous for us to write specialized and  methods for each of them. Functional Programming in Scala


  8. The xs parameter can be any collection with the TraversableOnce trait, the most general trait in the Scala collections hierarchy. Scala for the Impatient


  9. As described earlier in Section 2.1, Dapper traces are modelled as trees of trace spans, so the Trace data structure is consequently a simple traversable tree of individual Span structures. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  10. In addition to receiving confined references along call chains, the method invocations executing within a single thread can access the Thread object representing the thread they are running in, and any further information traversable from there. Concurrent Programming in Java Design Principles and Pattern



400. Matching texts for the word htable:


  1. In the Java source, to connect to HBase, we first create a Configuration object and then use it with the table name to create an HTable instance. HBase Administration Cookbook


  2. All programmatic data-manipulation access to HBase is done through either the HTableInterface or the HTable class that implements HTableInterface. Professional Hadoop Solutions


  3. The HTable constructor reads the default configuration information to locate HBase, similar to the way the shell did. HBase in Action


  4. Finally, we call close() on our HTable instance to flush out any write buffers not yet cleared. Hadoop:The Definitive Guide 3rd Edition


  5. Here is a summary of the points we just discussed: • Create HTable instances only once, usually when your application. HBase:The Definitive Guide


  6. The HTable class is the main way in Java to interact with HBase. Spring Data Modern Data Access for Enterprise Java



401. Matching texts for the word sudo:


  1. You can now start CouchDB using the new user: sudo -i -u couchdb couchdb -b. Getting Started with CouchDB


  2. Finally, to install gmond, type: user@host:# sudo yum install ganglia-gmond. Monitoring with Ganglia


  3. Build and install the CouchDB ebuild by running: sudo emerge couchdb. CouchDB The Definitive Guide


  4. This user should have sudo privileges to install software packages, configure system services, and so on. Hadoop Operations and Cluster Management Cookbook


  5. Please remember to run the jstat command from the same account ElasticSearch is running or if that is not possible, run the jstat command with administrator privileges (for example, using sudo command on Linux systems) Mastering ElasticSearch


  6. If you get a message informing you that curl is not available, simply use the command sudo apt-get install curl to download and install it automatically. Beginning CouchDB


  7. You’ll need sudo access for your user to run this command (it launches the Hadoop services via init.d scripts): Hadoop in Practice


  8. We also need to change the owner of the Hadoop files to be the hadoop user and group: % sudo chown -R hadoop:hadoop hadoop-x.y.z. Hadoop:The Definitive Guide 3rd Edition


  9. In order to get autoscan, which the Thrift bootstrap needs, you’ll have to run this command: $ sudo apt-get install automake. Cassandra The Definitive Guide


  10. For instance, assuming the HDFS super user is hdfs, it would normally be possible to perform administrative activities using sudo like in Example 6-3 Hadoop Operations



402、单词 bytecodes 的匹配文本:


  1. Of course, interpreting bytecodes is necessarily slower than running machine instructions at full speed, so it isn’t clear that this is even a good idea. Core Java Volume I Fundamentals 9th Edition


  2. A class file contains Java Virtual Machine instructions (or bytecodes) and a symbol table, as well as other ancillary information. The Java Virtual Machine Specification Java SE 7 Edition


  3. When a class must be loaded (typically, the first time you want to create an object of that class), the .class file is located, and the bytecodes for that class are brought into memory. Thinking in Java 4th Edition


  4. But if the program came from an unknown source, how do we know that the bytecodes we've received are actually legal? Java Security 2nd Edition


  5. Thus, the JVM is required to turn the bytecodes into native code for the CPU on which the Java application executes. Oracle JRockit The Definitive Guide


  6. That lets you carry out custom checks before you pass the bytecodes to the virtual machine. Core Java Volume II Advanced Features 9th Edition


  7. A long time ago, when Java was Oak and it was being designed as a way to distribute untrusted code into set-top boxes (and later browsers), the rules defining how a program executed and what were the means of the instruction set (bytecodes) was super-important. Beginning Scala


  8. Generating the bytecodes without the extra constructor is perfectly legal. Java Performance Tuning 2nd Edition


  9. People loved the simplicity of the language, the portability of bytecodes, and the safety of garbage collection (versus traditional malloc/free memory management of other systems) Java Performance


  10. This mapping strategy also applies at the level of the CPU when operations are compiled into native code rather than interpreted as bytecodes. Concurrent Programming in Java Design Principles and Pattern



403、单词 workflows 的匹配文本:


  1. How all of these components can be used to build complicated data workflows will become clear as the book proceeds. Apache Flume Distributed Log Collection for Hadoop


  2. With its plugin system and rapidly growing community, you’ll need to take into consideration existing JavaScript automation workflows. Gradle in Action


  3. Still worse, instead of losing a single workflow, a JobTracker failure caused an outage that would lose all the running jobs in a cluster and require users to manually recover their workflows. Apache Hadoop YARN Yet Another Resource Negotiator


  4. Hive and Pig provide a clean abstraction layer between your data flow and meaningful queries, and the complex MapReduce workflows they compile to. Hadoop RealWorld Solutions Cookbook


  5. We will also be discussing other advanced Hadoop features such as using DistributedCache for distribute data, using Hadoop Streaming for quick prototyping of Hadoop computations, and using Hadoop counters to report custom metrics for your computation as well as adding job dependencies to manage simple DAG-based workflows of Hadoop MapReduce computations. Hadoop MapReduce Cookbook


  6. For our scenario Oozie has a coordinator engine that can start workflows based on data and time triggers. Hadoop in Practice


  7. Oozie A service for running and scheduling workflows of Hadoop jobs (including MapReduce, Pig, Hive, and Sqoop jobs) Hadoop:The Definitive Guide 3rd Edition


  8. Each page in the ES2 collection is pushed through multiple logical workflows, each consisting of a local analysis phase, a global analysis phase, and an appropriate variant generation strategy. Hadoop in Action


  9. Smart software, dumb hardware If we wish to see a cluster of hardware used in as flexible a way as possible, providing hosting to multiple parallel workflows, the answer is to push the smarts into the software and away from the hardware. Hadoop Beginners Guide


  10. Workflows can be triggered by time or events such as data arriving in a directory, and job failure handling logic can be implemented so that policies are adhered to. Hadoop Operations



404、单词 keystone 的匹配文本:


  1. Identity (codenamed "Keystone") provides authentication and authorization for all the OpenStack services. bk-compute-adminguide-grizzly


  2. Keystone Provides an authentication and authorization service for other OpenStack services. bk-admin-guide-cloud-latest


  3. As a result of the influence elephants have over many plants and animals, they are often referred to as a keystone species because they are vital to the long-term survival of the ecosystems in which they live. Hadoop:The Definitive Guide 3rd Edition


  4. OpenStack services that are stateless include nova-api, nova-conductor, glance-api, keystone-api, neutron-api and nova-scheduler. high-availability-guide


  5. To see the version of the keystone client, run the following command: admin-user-guide-trunk


  6. The OpenStack Identity Service (Keystone) is the point that provides the authentication decisions and user attribute information, which is then used by the other OpenStack services to perform authorization. OpenStack Operations Guide



405、单词 env 的匹配文本:


  1. In order to inform HBase where Java is installed, we will set JAVA_HOME in hbase-env. HBase Administration Cookbook


  2. Per-process runtime environment: The file hadoop-env.sh is responsible for tailoring the per-process environment. Pro Hadoop


  3. Based on the provided environment value with the key env, Maven will pick the appropriate profile. Gradle in Action


  4. File Description hadoop-env.sh Configures the environment variable used by Hadoop core-site.xml Configures parameters for the whole Hadoop cluster hdfs-site.xml Configures parameters for HDFS and its clients mapred-site. Hadoop Operations and Cluster Management Cookbook


  5. The #!/usr/bin/env python statement is a hint to tell the shell how to execute the script. Hadoop RealWorld Solutions Cookbook


  6. You can check if it's there by using the env command on UNIX-like systems and the set command on Windows. Elasticsearch in Action MEAP V07 cHAPTER 1


  7. If you’ve set up your cluster with LZO compression, you’ll need to create a flume-env.sh file and set the directory that contains the native compression codecs: Hadoop in Practice


  8. Environment Settings In this section, we consider how to set the variables in hadoop-env.sh. Hadoop:The Definitive Guide 3rd Edition


  9. In hadoop-env.sh   define the JAVA_HOME   environment variable to point to the Java installation directory. Hadoop in Action


  10. Later, we’ll use configuration variables in hadoop-env.sh to specify alternate locations for Hadoop’s log and pid file directories. Hadoop Operations



406、单词 tmp 的匹配文本:


  1. Finally, while files are being written to the HDFS, a .tmp extension is added. Apache Flume Distributed Log Collection for Hadoop


  2. We also need to create a /tmp directory on HDFS, so that Hive can save its temporary files under it. HBase Administration Cookbook


  3. For instance, here’s a sample command for piping the output of running gmond in debug mode to the file /tmp/gmond.log: Monitoring with Ganglia


  4. In a standard Cygwin installation, the /tmp directory is also the C:\cygwin\tmp directory. Pro Hadoop


  5. Set the sticky bit on HDFS directory to prevent the directories or files from being deleted by unauthorized users with the following command: sudo -u hdfs hadoop fs -chmod 1777 /tmp. Hadoop Operations and Cluster Management Cookbook


  6. You would therefore (incorrectly) assume that the command hadoop fs -ls /tmp/* would work. Hadoop in Practice


  7. Test Drive To start a temporary instance of HBase that uses the /tmp directory on the local filesystem for persistence, type: Hadoop:The Definitive Guide 3rd Edition


  8. The default values for dfs.name.dir and dfs.data.dir point to directories under /tmp, which is intended only for temporary storage in almost all Unix systems. Hadoop in Action


  9. Regardless of the mode, Hadoop by default uses the hadoop.tmp.dir property as the base directory under which all files and data are written. Hadoop Beginners Guide


  10. Many new to Hadoop have quickly setup Hadoop, missed setting dfs.name.dir, and found themselves with an unusable filesystem after rebooting the namenode because / tmp was cleared during boot. Hadoop Operations
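
例句中的 /tmp 多指系统临时目录,重启后内容可能被清空,因此不适合存放 HDFS 元数据等持久数据。下面是一个与具体 Hadoop 配置无关的 Java 小例子,演示如何获取系统临时目录并在其中创建临时文件(文件名前缀 "demo-" 为假设值),仅作示意:

import java.io.File;
import java.io.IOException;

public class TmpDemo {
    public static void main(String[] args) throws IOException {
        // java.io.tmpdir 在多数 Unix 系统上默认指向 /tmp
        String tmpDir = System.getProperty("java.io.tmpdir");
        System.out.println("临时目录: " + tmpDir);
        // 在临时目录下创建一个临时文件
        File f = File.createTempFile("demo-", ".tmp");
        System.out.println("临时文件: " + f.getAbsolutePath());
        f.deleteOnExit();
    }
}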



407、单词 leveraging 的匹配文本:


  1. You’ve most likely created these connections between nodes by declaring a dependsOn relationship for a task or by leveraging the implicit task dependency interference mechanism. Gradle in Action


  2. It redefines the way that data is managed and processed by leveraging the power of computing resources composed of commodity hardware. Hadoop Operations and Cluster Management Cookbook


  3. We showcase this by leveraging the notion of container reuse and push-based communications provided by REEF. Apache Hadoop YARN Yet Another Resource Negotiator


  4. This chapter will introduce readers to these two frameworks, which are capable of leveraging the distributed power of MapReduce. Hadoop RealWorld Solutions Cookbook


  5. By declaring and modelling dependencies, tooling such as Gradle can automate working with dependencies by leveraging this information. Gradle Beyond The Basics


  6. Finally, by opening Dapper’s trace repositories to internal developers we have enabled the creation of many more analysis tools than the Dapper team alone could have been able to produce in isolation, greatly leveraging the design and implementation effort. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. Usage was important to us because we think some of the most interesting research will involve leveraging the vast amount of usage data that is available from modern Web systems. The anatomy of a large-scale hypertextual Web search engine


  8. Among many notable “Aha!” moments I had while working with Cynthia, the biggest one was when she steered me into leveraging visual aids to help explain some of the complex concepts in this book. Hadoop in Practice


  9. We present the design and implementation of RADOS, a reliable object storage service that can scale to many thousands of devices by leveraging the intelligence present in individual storage nodes. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  10. When considering leveraging an existing investment in Apache Ant, or perhaps using the broad set of tasks that the Ant community has created, Gradle has a great story to tell. Building and Testing with Gradle



408、单词 sawzall 的匹配文本:


  1. If you’re interested in pushing the edge of CouchDB’s incremental reduce functionality, have a look at Google’s paper on Sawzall, which gives examples of some of the more exotic reductions that can be accomplished in a system with similar constraints. CouchDB The Definitive Guide


  2. The handoptimized Sawzall version likewise intermixes logical computation with lower-level implementation details, in an effort to get better performance. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  3. The set of aggregations is limited but the query phase can involve more general computations, which we express in a new interpreted, procedural programming language called Sawzall.1 (An interpreted language is fast enough: most of the programs are small and on large data sets the calculation tends to be I/O bound, as is discussed in the section on performance.) Interpreting the Data Parallel Analysis with Sawzall


  4. HBase has coprocessors that are different from what Sawzall, the scripting language used in Bigtable to filter or aggregate data, or the Bigtable Coprocessor framework,† provides. HBase:The Definitive Guide


  5. Queries are also sometimes written in a scripting language developed at Google called Sawzall. Cassandra The Definitive Guide


  6. An increasing number of analysts were being forced to write custom code for more complex analysis, often directly against the source (such as Sawzall against logs) Tenzing - A SQL Implementation On The MapReduce Framework


  7. The scripts are written in a language developed at Google for processing data called Sawzall [28] Bigtable - A Distributed Storage System for Structured Data



409、单词 delimiter 的匹配文本:


  1. Careful with those delimiters! With so many Nagios plug-ins, written by so many different authors, it’s important to carefully choose your delimiter and avoid using the same one returned by a plug-in. Monitoring with Ganglia


  2. While these values can be crammed into a Text object with some delimiter, it is typically a better practice to create a custom Writable. MapReduce Design Patterns


  3. A standard delimiter “/” is used to split a key name.) Professional Hadoop Solutions


  4. Each line of the input data is broken into a key (text) and value (text) pair using a delimiter character. Hadoop MapReduce Cookbook


  5. The default record delimiter is newline, but it can be overridden with. Hadoop in Practice


  6. Each line is broken into fields using a configurable field delimiter (defaults to a tab character) to be stored in the tuple’s fields. Hadoop:The Definitive Guide 3rd Edition


  7. To ignore standard punctuation marks, we add them to the StringTokenizer’s list of delimiter characters: Hadoop in Action


  8. Instead of representing columns within a super column, the composite key approach means that you use a regular column family with regular columns, and then employ a custom delimiter in your key name and parse it on client retrieval. Cassandra The Definitive Guide


  9. For example, for delimited text format, the user can specify the delimiter, encoding, quoting, escaping, headers, etc. Tenzing - A SQL Implementation On The MapReduce Framework


  10. In order to access the different attributes at run time, the Map and Reduce functions split the record by the delimiter into an array of strings. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
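
例句 7 提到向 StringTokenizer 的分隔符列表中加入标点符号。下面给出一个独立的 Java 小例子,演示自定义分隔符的切分效果(示例字符串与分隔符集合均为假设值,仅作示意):

import java.util.StringTokenizer;

public class DelimiterDemo {
    public static void main(String[] args) {
        String line = "hadoop,hive;pig hbase";
        // 第二个参数是自定义的分隔符集合:逗号、分号和空格
        StringTokenizer st = new StringTokenizer(line, ",; ");
        while (st.hasMoreTokens()) {
            System.out.println(st.nextToken());
        }
    }
}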



410、单词 untrusted 的匹配文本:


  1. In the real world, Java uses the "sandbox" approach, wherein untrusted code, which includes classes downloaded over a network within a Web browser, is placed within the sandbox and required to meet certain expectations. Java Network Programming and Distributed Computing


  2. If you’re deploying CouchDB on the Web, or any place where untrusted parties can access your sever, it behooves you to research the recommended security measures for your operating system and take any additional steps needed. CouchDB The Definitive Guide


  3. When we discuss the methods below, we speak of them in terms of trusted and untrusted classes. Java Security 2nd Edition


  4. By contrast, SETI@home runs a perpetual computation on untrusted machines on the Internet with highly variable connection speeds and no data locality. Hadoop:The Definitive Guide 3rd Edition


  5. If you write a class whose security depends on the immutability of a BigInteger or BigDecimal argument from an untrusted client, you must check to see that the argument is a “real” BigInteger or BigDecimal, rather than an instance of an untrusted subclass. Effective Java 2nd Edition


  6. Chapter 6: Java Security This chapter explains the Java security architecture that allows untrusted code to run in a secure environment from which it cannot do any malicious damage to the host system. Java in a Nutshell 5th Edition


  7. If, however, you are running Hadoop in an untrusted, multitenant environment or where accidental data exposure would be catastrophic, secure mode is the appropriate option. Hadoop Operations


  8. An untrusted applet under the control of the default security manager will only be allowed to get the IP address of the host it came from (its codebase) and possibly the local host. Java Network Programming 4th Edition


  9. This is a generic error all clients will receive when trying to connect to the untrusted broker (without the proper keystore and truststore) ActiveMQ in Action


  10. These systems address concerns that do not arise for Bigtable, such as highly variable bandwidth, untrusted participants, or frequent reconfiguration; decentralized control and Byzantine fault tolerance are not Bigtable goals. Bigtable - A Distributed Storage System for Structured Data



411、单词 polymorphism 的匹配文本:


  1. In order to be truly object-oriented, a programming language must also support inheritance and polymorphism. Core Java Volume I Fundamentals 9th Edition


  2. This code calculates the address of the method body, using information stored in the object (this process is covered in great detail in the Polymorphism chapter) Thinking in Java 4th Edition


  3. For such data types, you might also use polymorphism and method overriding as an alternative to if-then-else, but there’s still language-design discussion as to which is more appropriate.[9] We’d say that both are useful tools and you should have both in your armory. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Serialization or polymorphism can be used if this is truly a problem, but adds a bit of complexity. MapReduce Design Patterns


  5. The producer methods provide runtime polymorphism where the concrete type of the bean to be injected may vary at runtime, the injected object may not even be a bean, and objects may require custom initialization. Java EE 7 Essentials


  6. In object-oriented polymorphism, two subtypes can be referenced similarly because they implicitly share a common supertype. SQL Antipatterns


  7. Creating Buffers The buffer class hierarchy is based on inheritance but not really on polymorphism, at least not at the top level. Java Network Programming 4th Edition


  8. On the other hand, good objectoriented design actually encourages many small methods and significant polymorphism in the method hierarchy. Java Performance Tuning 2nd Edition


  9. Because of polymorphism, dynamic loading, etc., it is generally impossible to identify them all beforehand; instead, their identities become known only as the action unfolds. Concurrent Programming in Java Design Principles and Pattern


  10. Inheritance is the natural way to express these relationships and allows for polymorphism in the application. Pro JPA2
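
为直观说明例句中"继承 + 方法覆盖"意义上的多态,这里补充一个最小的 Java 示例(类名均为虚构,与上述书籍无关):

public class PolymorphismDemo {
    static class Animal {
        void speak() { System.out.println("..."); }
    }
    static class Dog extends Animal {
        @Override
        void speak() { System.out.println("woof"); }
    }
    public static void main(String[] args) {
        // 父类引用指向子类对象,运行时动态绑定到 Dog.speak()
        Animal a = new Dog();
        a.speak();
    }
}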



412、单词 aggregator 的匹配文本:


  1. For instance, a sum aggregator applied to the out-degree of each vertex yields the. Pregel  -  A System for Large-Scale Graph Processing


  2. Starting Up the Processes Starting the processes in a specific order is not necessary; however, if the daemons are started in the order recommended here, there won’t be a delay waiting for metadata to be retransmitted to the UDP aggregator and users won’t get error pages or incomplete data from the web server: Monitoring with Ganglia


  3. The framework provides a set of aggregator services (see Table 8-14 for descriptions of the predefined aggregator services) Pro Hadoop


  4. Of course, a node can be configured to neither be a master or a data node and in such case, the node will be only used as aggregator node that will have user queries. Mastering ElasticSearch


  5. Pipes consist of types Each (applying a function or filter), GroupBy (which groups streams on tuple fields), CoGroup (which joins a common set of values), Every (which applies an aggregator or sliding window operation), and SubAssembly (which combines other pipe assemblies) Professional Hadoop Solutions


  6. The keyword table introduces an aggregator type; aggregators are called tables in Sawzall even though they may be singletons. Interpreting the Data Parallel Analysis with Sawzall


  7. Use Case: HBase at Streamy.com Streamy.com is a real-time news aggregator and social sharing platform. Hadoop:The Definitive Guide 3rd Edition


  8. The output string starts with the name of a value aggregator function (from the set of predefined functions available in the Aggregate package) Hadoop in Action


  9. Flume is a relatively new log aggregator developed by Cloudera. Kafka a Distributed Messaging System for Log Processing


  10. At first the value of aggregator is the initial document. MongoDB in Action



413、单词 syslog 的匹配文本:


  1. Many have already used the filesystem as a handoff point between the application creating the data (for instance, log4j) and the mechanism responsible for moving those files someplace else (for instance, syslog) Apache Flume Distributed Log Collection for Hadoop


  2. Logs gmond and gmetad do not log many things to syslog; however, these logs should always be checked when there is a problem. Monitoring with Ganglia


  3. The script is invoked with the additional arguments of the paths to the stdout, stderr, and syslog output files for the task. Pro Hadoop


  4. As shown in Figure 5-4, log pages contain stdout, stderr, and syslog logs. Professional Hadoop Solutions


  5. This recipe shows how to configure Cassandra’s logging mechanism, Log4J, to send events to its local logfile as well as a remote syslog server. Cassandra High Performance Cookbook


  6. Sending data and running queries by sending JSON objects over HTTP makes it easy for someone to extend anything, from a syslog daemon like rsyslog to a connecting framework like ManifoldCF, to interact with Elasticsearch. Elasticsearch in Action MEAP V07 cHAPTER 1


  7. You’ll want to preserve the original format of your syslog file, so you’ll need to create and edit flume-site.xml and indicate the raw output format. Hadoop in Practice


  8. Flume supports a large variety of sources; some of the more commonly used ones include tail (which pipes data from a local file being written to into Flume, just like Unix tail), syslog, and Apache log4j (allowing Java applications to write events to files in HDFS via Flume) Hadoop:The Definitive Guide 3rd Edition


  9. It supports native connectivity and support for writing directly to HDFS, and simplifies reliable, streaming data delivery from a variety of sources including RPC services, log4j appenders, syslog, and even the output from OS commands. Hadoop Operations


  10. The Unix syslog utility provides a good starting point for designing such a framework. Java Performance Tuning 2nd Edition



414、单词 implicits 的匹配文本:


  1. This book provides in-depth coverage of several of the more intricate areas of Scala, including advanced aspects of its type system, implicits, composition techniques with traits, collections, actors, functional categories. Scala in Depth


  2. There are many features such as pattern matching, for comprehensions and implicits supported in Scala but not in Java, that we won’t discuss. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. This chapter shows you how implicits work, and it presents some of the most common ways they are used. Programming in Scala 2nd Edition


  4. Trim the stream to at most a single element; see chapter code We can give eval infix syntax using implicits; see chapter code for details. Functional Programming in Scala


  5. Instead, this syntax is defined with normal definitions of types and methods, combined with a few general-purpose parsing conventions, such as support for implicits. Programming Scala


  6. They are very powerful and very dangerous, and I only invite them into my program's scope when there is a very good reason.5 Using implicits to convert to a class that has a particular method is a good reason. Beginning Scala


  7. Beware that this power comes with a price, as having too many implicits in scope can render the code not only less understandable but also incorrect. Steps in Scala An Introduction to Object-Functional Programming



415、单词 vlan 的匹配文本:


  1. They consist of a separate VLAN, volumes, instances, images, keys, and users. bk-compute-adminguide-grizzly


  2. In this mode, nova-network creates a VLAN, a subnet, and a separate bridge for each project. Deploying OpenStack


  3. They consist of an individual VLAN, and volumes, instances, images, keys, and users. bk-admin-guide-cloud-latest


  4. With VLAN networking, this allows access between instances within the same project. admin-user-guide-trunk


  5. VLANs VLAN configuration can be as simple or as complicated as desired. OpenStack Operations Guide



416、单词 println 的匹配文本:


  1. Here, we are using the System.out object and calling its println method. Core Java Volume I Fundamentals 9th Edition


  2. If the specified boolean flag is set to "true," whenever a byte array, println method, or newline character is sent, the underlying buffer will be automatically flushed. Java Network Programming and Distributed Computing


  3. So, we are defining here that we want to add the closure (with the statement println ''Hello world.'') to our task with the name helloWorld. Gradle Effective Implementation Guide


  4. This rule definition can be further expanded with actual network reachability code in place of the println() call, with that functionality comfortably factored into methods as we see fit. Gradle Beyond The Basics


  5. To write to a print writer, use the same print, println, and printf methods that you used with System.out. Core Java Volume II Advanced Features 9th Edition


  6. The toString method is automatically invoked when an object is passed to println, printf, the string concatenation operator, or assert, or printed by a debugger. Effective Java 2nd Edition


  7. The object System.out has an instance method named println( ) Java in a Nutshell 5th Edition


  8. For example, it’s relatively rare to use the write() method of PrintStream instead of one of its print() and println() methods. Java Network Programming 4th Edition


  9. Add a breakpoint in the first call to the println() method of the run() method of the Task1 class. Java 7 Concurrency Cookbook


  10. In a real-world build (still relying on println statements in place of actual testing actions), we would be more likely to structure the task as done in Example 2-16 Building and Testing with Gradle
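
例句围绕 System.out 的 print/println/printf 等方法。下面是一段示意性的 Java 代码,同时演示 PrintWriter 提供的同名方法(输出内容为任意示例值):

import java.io.PrintWriter;

public class PrintlnDemo {
    public static void main(String[] args) {
        System.out.println("Hello world.");
        System.out.printf("value = %d%n", 42);
        // PrintWriter 提供与 System.out 相同风格的 print/println/printf
        PrintWriter out = new PrintWriter(System.out, true);
        out.println("via PrintWriter");
    }
}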



417、单词 san 的匹配文本:


  1. Consequently, if there is a truly huge number of RRD files and if the disk storage system does not cope well with a random access workload (e.g., if it is not an SSD or SAN), the startup phase may take many hours or may not even complete at all. Monitoring with Ganglia


  2. There are too many to name, but all of you in Dublin, Portland, Lisbon, London, Zurich, San Francisco, Mountain View, Dortmund, Stockholm, Hamburg, Frankfurt, Salt Lake City, Blacksburg, San Diego, and Amsterdam: you know who you are—thanks! To my family, friends, and coworkers: thanks you for your support and your patience with me over the last year. CouchDB The Definitive Guide


  3. In contrast to systems like Lustre [4], which assume one can construct sufficiently reliable OSDs using mechanisms like RAID or fail-over on a SAN, we assume that in a petabyte or exabyte system failure will be the norm rather than the exception, and at any point in time several OSDs are likely to be inoperable. Ceph A Scalable, High-Performance Distributed File System


  4. Broadly, the approach in HPC is to distribute the work across a cluster of machines, which access a shared filesystem, hosted by a Storage Area Network (SAN) Hadoop:The Definitive Guide 3rd Edition


  5. Block pools are analogous to LUNs in a SAN storage system and a namespace with its pool of blocks is analogous to a file system volume. The Hadoop Distributed File System


  6. As in conventional storage systems based on local or network-attached (SAN) disk drives or those embracing the proposed T10 OSD standard, devices passively respond to read and write commands, despite their potential to encapsulate significant intelligence. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  7. Thanks to my editor, Mike Loukides, for being a charming conversationalist at dinner in San Francisco. Cassandra The Definitive Guide


  8. Be it block storage via a Storage Area Network (SAN) or file-based via Network File System (NFS) or similar protocols, these approaches constrain Hadoop by introducing unnecessary bottlenecks and additional shared devices that would have a critical impact on failure. Hadoop Beginners Guide


  9. In the classic HPC model, data is usually stored on a large shared centralized storage system such as a SAN or NAS. Hadoop Operations


  10. CPUs have become so much faster than disks that a few CPUs in a shared-memory machine can drive enough disk heads to service required loads without the complexity of distributed transactions: the top TPC-E benchmark results today are achieved on large shared-memory machines connected to a SAN. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



418、单词 middleware 的匹配文本:


  1. He has more than 7 years of experience in software and middleware (Apache, Tomcat, PostgreSQL, Hadoop eco system) engineering. HBase Administration Cookbook


  2. The use of technologies like CORBA as middleware is very important in organizations with many systems written in many languages. Java Network Programming and Distributed Computing


  3. Because the test suite is run from the browser, not only does it test that CouchDB is functioning properly, it also verifies that your browser’s connection to the database is properly configured, which can be very handy for diagnosing misbehaving proxies or other HTTP middleware. CouchDB The Definitive Guide


  4. Her interests and experiences lie across multiple tiers, from web frontend GUI to middleware, from middleware to backend SQL RDBMS and NoSQL data storage. Cassandra High Performance Cookbook


  5. Before starting the graduate studies, Thilina worked as a Senior Software Engineer at WSO2 Inc., focusing on open source middleware development. Hadoop MapReduce Cookbook


  6. We find that there are many middleware systems which return a result to their caller before all. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. Message-oriented middleware (MOM) allows sending and receiving messages between distributed systems. Java EE 7 Essentials


  8. Instead, it calls on a middleware layer on the server that in turn makes the database queries. Core Java Volume II Advanced Features 9th Edition


  9. Type 3 drivers provide a connection to a middleware product, which in turn connects to a database. Java Performance Tuning 2nd Edition


  10. Commercial message-oriented middleware (MOM) did not help to encourage adoption of the process; most of the MOMs on the market were closed source, were costly to buy and support, and required trained system administrators for proper installation. ActiveMQ in Action



419、单词 jruby 的匹配文本:


  1. The import command is a basic JRuby feature, and it is not HBase-specific. HBase Administration Cookbook


  2. I’ve written this book for experienced Java programmers who are interested in learning how to manage and make use of concurrency on the JVM, using languages such as Java, Clojure, Groovy, JRuby, and Scala. Programming Concurrency on the JVM


  3. With R, simply handling the training data volumes and getting good integration with the existing Ruby/JRuby infrastructure at Shop It To Me proved very difficult. Mahout in Action


  4. JRuby and JVM languages Those of you unfamiliar with Java may be confused by this JRuby concept. HBase in Action


  5. This will bring up a JRuby IRB interpreter that has had some HBase-specific commands added to it. Hadoop:The Definitive Guide 3rd Edition


  6. Since the shell is JRuby-based, you can mix Ruby with HBase commands, which enables you to do things like this: HBase:The Definitive Guide


  7. He envisages himself using functional languages in projects and the decision to learn Clojure (a little bit of JRuby, Scala, F#, and Dart lately) influences his current self-learning activities. Java 7 New Features Cookbook


  8. By using JRuby, you write ruby code that has access to any Java code, but will generally lose access to Ruby extensions that are implemented in C. Lucene in Action 2nd Edition


  9. All are JVM (Java Virtual Machine) libraries that can be used from programming languages like Java, Clojure, Scala, JRuby, Groovy, and Jython, as opposed to tools with their own languages, like Hive and Pig. Programming Hive


  10. Not only is Java 7 the most modern, statically typed, object-oriented language available for development, it also enhances the ability to integrate different languages that run on the JVM, such as Jython, Groovy, JRuby, and Scala to name a handful. Java 7 Recipes A Problem-Solution Approach



420、单词 hashtable 的匹配文本:


  1. The classic Hashtable class serves the same purpose as the HashMap and has essentially the same interface. Core Java Volume I Fundamentals 9th Edition


  2. Thus, read and write operations have improved performance compared to the synchronized Hashtable alternative. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. You’ll use the distributed cache4 to copy the small dataset to the nodes running the map tasks, and use the initialization method of each map task to load the small dataset into a hashtable. Hadoop in Practice


  4. The program reads this source file into a Java Hashtable called joinData that’s available throughout the mapper’s lifespan. Hadoop in Action


  5. Failure to do so will result in a violation of the general contract for Object.hashCode, which will prevent your class from functioning properly in conjunction with all hash-based collections, including HashMap, HashSet, and Hashtable. Effective Java 2nd Edition


  6. The Collections Framework was added in Java 1.2, but prior to that release you can use Vector and Hashtable, which are approximately the same as ArrayList and HashMap. Java in a Nutshell 5th Edition


  7. When using regular columns, as we saw earlier, Cassandra looks like a four-dimensional hashtable. Cassandra The Definitive Guide


  8. The java.util.Map interface is the parent of commonly used classes such as HashMap and (through some library backward reengineering) even the original Hashtable. Hadoop Beginners Guide


  9. You should also prefer the URI class if you need to store URLs in a hashtable or other data structure, since its equals() method is not blocking. Java Network Programming 4th Edition


  10. Even though the fully synchronized Hashtable class allows an Inventory class to be defined without worrying about some low-level synchronization details, we still want to place some semantic constraints on the store and retrieve operations. Concurrent Programming in Java Design Principles and Pattern
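
例句反复对比 Hashtable 与 HashMap、ConcurrentHashMap。下面的 Java 片段展示三者最基本的用法差异(Hashtable 的方法是同步的,HashMap 不是,ConcurrentHashMap 面向并发读写),仅作示意:

import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HashtableDemo {
    public static void main(String[] args) {
        Map<String, Integer> legacy = new Hashtable<>();          // 所有方法同步,属于较旧的类
        Map<String, Integer> plain  = new HashMap<>();            // 非线程安全,单线程场景更快
        Map<String, Integer> conc   = new ConcurrentHashMap<>();  // 并发场景的常用替代
        legacy.put("a", 1);
        plain.put("b", 2);
        conc.put("c", 3);
        System.out.println(legacy + " " + plain + " " + conc);
    }
}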



421、单词 nonzero 的匹配文本:


  1. So this loop need only execute for each nonzero element of the user vector. Mahout in Action


  2. IOException— reads a byte from the input stream, and returns "true" if the byte is nonzero. Java Network Programming and Distributed Computing


  3. Each of the four value sets includes not only the finite nonzero values that are ascribed to it above, but also the five values positive zero, negative zero, positive infinity, negative infinity, and NaN. The Java Virtual Machine Specification Java SE 7 Edition


  4. In C and C++ the result of this assignment will always be true if y is nonzero, and you’ll probably get an infinite loop. Thinking in Java 4th Edition


  5. If any of the executed commands return a nonzero value, the build will fail. Jenkins The Definitive Guide


  6. A compile-time error does not occur if a non-zero floating-point literal has a small value that, on rounded conversion to its internal representation, becomes a nonzero denormalized number. The Java Language Specification Java SE 7 Edition


  7. The test is very fast because it is inmemory, but it has a nonzero probability of false positives, in which case the regular get() method is called. Hadoop:The Definitive Guide 3rd Edition


  8. Most products have nonzero values for only a few of these optional fields. Effective Java 2nd Edition


  9. It sets its exit code to a nonzero value if an extension with the same name is already installed and has a specification version number equal to or greater than the version of the specified file. Java in a Nutshell 5th Edition


  10. However, if the scan rate is nonzero and the trend of reported free memory is decreasing, then swapping is likely occurring. Java Performance



422、单词 suggester 的匹配文本:


  1. With ElasticSearch 0.90.3 we have the possibility of using the prefix-based suggester which is very handy for implementing the autocomplete functionality and which we will discuss in the Completion suggester section. Mastering ElasticSearch


  2. Query term completion via the Suggester New in Solr 3 is a high-speed approach to implement term completion, called the Suggester. Solr 3 Enterprise Search Server


  3. Using the Suggester component Nowadays, it's common for web pages to give a search suggestion (or autocomplete as I tend to call it), just like many "big" search engines do—just like Google, Microsoft, and others. Solr 3.1 Cookbook


  4. The completion suggester in Elasticsearch takes a completely different approach. Elasticsearch The Definitive Guide



423、单词 logfile 的匹配文本:


  1. The source, an exec source, will tail the /var/log/app.log file and follow any rotations outside applications may perform on that logfile. Apache Flume Distributed Log Collection for Hadoop


  2. The check_log plugin greps a particular query word incrementally in a specified logfile. HBase Administration Cookbook


  3. The HTTP operation records contain a superset of the attributes in the widely supported Combined Logfile Format (CLF) commonly used in web server logging. Monitoring with Ganglia


  4. To troubleshoot the HDFS startup problem, we can open a new terminal and monitor the NameNode logfile on the master node with the following command: Hadoop Operations and Cluster Management Cookbook


  5. This recipe shows how to configure Cassandra’s logging mechanism, Log4J, to send events to its local logfile as well as a remote syslog server. Cassandra High Performance Cookbook


  6. Benchmark writes the results to the console and appends the results to a logfile similarly to the write benchmark. Hadoop MapReduce Cookbook


  7. SequenceFile Imagine a logfile where each log record is a new line of text. Hadoop:The Definitive Guide 3rd Edition


  8. Whenever you are not sure what a byte array contains you should use this method to print its content, for example, to the console, or into a logfile. HBase:The Definitive Guide


  9. Watching the logfile is a good way to determine that the node is finished bootstrapping, but to watch for progress while it’s happening, use nodetool streams. Cassandra The Definitive Guide


  10. Also, when starting a session, we’ll omit the line about the logfile. Programming Hive



424、单词 initializing 的匹配文本:


  1. The time for initializing the cluster, generating the test graphs in-memory, and verifying results is not included in the measurements. Pregel  -  A System for Large-Scale Graph Processing


  2. The framework calls the configure() method upon initializing a task, and it calls the close() method when the task has finished processing its input split: Pro Hadoop


  3. You’ll also add two more targets for initializing and cleaning up the required output directories. Gradle in Action


  4. After validating the authenticity of the lease [R7], the NM configures the environment for the container, including initializing its monitoring subsystem with the resource constraints specified in the lease. Apache Hadoop YARN Yet Another Resource Negotiator


  5. Initializing our river After we are done with installation, we can initialize our river. Mastering ElasticSearch


  6. We start by initializing the output result for each input group. MapReduce Design Patterns


  7. There's more: Initializing connections in this manner does not account for server fail-over or retries. Cassandra High Performance Cookbook


  8. Before a task starts, it shows its status as “initializing,” and then once it starts reading records, it shows the split information for the split it is reading as a filename with a byte offset and length. Hadoop:The Definitive Guide 3rd Edition


  9. Backups provide both disaster recovery and a means for initializing the database of a newly replaced replica without placing load on replicas that are in service. Chubby - The lock service for loosely-coupled distributed systems


  10. The JMS API can be tedious, as it requires you to write a lot of code for initializing all the necessary JMS objects such as connections, sessions, producers, consumers, and so forth. ActiveMQ in Action



425、单词 inet 的匹配文本:


  1. More precisely, this method will establish an association between the socket and a local address, which is usually explicitly specified as an InetSocketAddress instance (this class represents a socket address with IP (or hostname) and port, and extends the abstract SocketAddress class) Pro Java 7 NIO.2


  2. The InetAddress class is used to represent IP addresses within a Java networking application. Java Network Programming and Distributed Computing


  3. The list of attributes the record must have is defined by the objectClass inetOrgPerson. Jenkins Continuous Integration Cookbook


  4. This variable has a single use: it must be set by the security manager before the security manager calls most methods of the InetAddress class. Java Security 2nd Edition


  5. This is used to force the built-in SNMP agent to bind to the given inetAddress. Oracle JRockit The Definitive Guide


  6. However, you can use the InetAddress class if you need to convert between host names and Internet addresses. Core Java Volume II Advanced Features 9th Edition


  7. In Java 5.0 the InetAddress class has an isReachable( ) method that attempts to determine whether the host is reachable. Java in a Nutshell 5th Edition


  8. It shows how Java programs interact with the Domain Name System through the InetAddress class, the one class that’s needed by essentially all network programs. Java Network Programming 4th Edition


  9. A session can cache the IP address explicitly after the first lookup, but this needs to be done at the application level by holding on to the InetAddress object. Java Performance Tuning 2nd Edition


  10. The client is then bound to the port using a new InetSocketAddress instance. Java 7 Recipes A Problem-Solution Approach
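
例句涉及 InetAddress 与 InetSocketAddress。下面的示意性 Java 代码演示域名解析与可达性探测(isReachable 在 Java 5.0 之后可用;主机名 example.com 与端口 80 均为示例值,探测结果受防火墙与权限影响):

import java.net.InetAddress;
import java.net.InetSocketAddress;

public class InetDemo {
    public static void main(String[] args) throws Exception {
        // 通过 DNS 把主机名解析为 IP 地址
        InetAddress addr = InetAddress.getByName("example.com");
        System.out.println(addr.getHostAddress());
        // 尝试在 3 秒内探测主机是否可达
        System.out.println("reachable: " + addr.isReachable(3000));
        // InetSocketAddress 把地址与端口组合成套接字地址
        InetSocketAddress sock = new InetSocketAddress(addr, 80);
        System.out.println(sock);
    }
}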



426、单词 bidirectional 的匹配文本:


  1. If you want bidirectional replication, you need to trigger two replications with source and target swapped. CouchDB The Definitive Guide


  2. The key to making this work is called joint compilation, which allows you to freely mix Java and Groovy source code with bidirectional dependencies on each other. Gradle in Action


  3. Replication CouchDB is designed to allow bidirectional replication in an efficient and reliable manner. Beginning CouchDB


  4. Finally, having assigned the bidirectional mapping, the UID is returned. HBase in Action


  5. For example, you would use a directed graph to model relationships between users in a social network because relationships are not always bidirectional. Hadoop in Practice


  6. We can see that one of the edges is bidirectional. Cassandra The Definitive Guide


  7. Depending on the type of graph, the edges may be bidirectional or unidirectional and may have weights associated with them. Hadoop Beginners Guide


  8. Sqoop is a tool that facilitates bidirectional exchange of data between HDFS and relational databases. Hadoop Operations


  9. In some situations, you may want to create a bidirectional communication channel between brokers—a channel that communicates not only outward to the brokers on the other side of the connection, but also receives messages from other brokers on that same channel. ActiveMQ in Action


  10. A Java channel represents an open data path, possibly bidirectional, to an external data source or sink such as a file or a socket. Fundamental Networking in Java



427、单词 gridfs 的匹配文本:


  1. GridFS, a protocol for storing large files, uses subcollections to store file metadata separately from content chunks (see Chapter 6 for more information about GridFS) MongoDB The Definitive Guide 2nd Edition


  2. As we move through the book, we cover essential topics such as how to store large files using the GridFS feature and how to administer and optimize your MongoDB installation. The Definitive Guide to MongoDB


  3. Appendix C shows how to work with binary data in MongoDB and how to use GridFS, a spec implemented by all the drivers, to store especially large files in the database. MongoDB in Action



428、单词 uris 的匹配文本:


  1. Caution The paths and URIs for DistributedCache items are stored as comma-separated lists of strings in the configuration. Pro Hadoop


  2. Everything in CouchDB has an address, a URI, and you use the different HTTP methods to operate on these URIs. CouchDB The Definitive Guide


  3. This EL function is commonly used to pass the URIs of data set instances that will be consumed by a Workflow job triggered by a Coordinator action. Professional Hadoop Solutions


  4. The setup() method in the Mapper is told to pull the complete list of cache file URIs currently in the distributed cache. Hadoop RealWorld Solutions Cookbook


  5. The subsequent rewrite rule then proxies all requests to URIs starting with /api to the equivalent URI on http://couch-master.example.com:5984 (again, only if the previous rewrite condition has been met): Scaling CouchDB


  6. You’ll notice that all of the directory names are HDFS URIs. Hadoop in Practice


  7. This is managed in configuration using ViewFileSystem and the viewfs:// URIs. Hadoop:The Definitive Guide 3rd Edition


  8. Similar to the URIs used in a web browser, the file: scheme addresses local files. HBase:The Definitive Guide


  9. The JobTracker will take this list of URIs and create a local copy of the files in all the TaskTrackers when it starts the job. Hadoop in Action


  10. It discusses ActiveMQ URIs and all the transport connectors for both client-to-broker and broker-to-broker communications including TCP, NIO, STOMP, failover, SSL, HTTP/S, and much more. ActiveMQ in Action
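
例句强调了 URI 的 scheme(如 hdfs://、file:、viewfs://)。下面用 java.net.URI 解析一个假设的 HDFS 路径(namenode 主机名、端口与路径均为虚构示例),展示 scheme、host、port 与 path 的拆分:

import java.net.URI;

public class UriDemo {
    public static void main(String[] args) {
        URI uri = URI.create("hdfs://namenode:8020/user/alice/input");
        System.out.println("scheme = " + uri.getScheme());
        System.out.println("host   = " + uri.getHost());
        System.out.println("port   = " + uri.getPort());
        System.out.println("path   = " + uri.getPath());
    }
}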



429、单词 zsets 的匹配文本:


  1. As shown in table 1.1, Redis allows us to store keys that map to any one of five different data structure types; STRINGs, LISTs, SETs, HASHes, and ZSETs. Redis in Action


  2. Gets the intersection of a given number of ZSETS and store the result in a new ZSET. Redis Cookbook



430、单词 cryptographic 的匹配文本:


  1. Cryptographic techniques ensure that such a certificate cannot be forged, and that any effort to tamper with the signed file will be detected. Core Java Volume I Fundamentals 9th Edition


  2. With the addition of new cryptographic codecs, a new token-based authentication mechanism that supports more authentication protocols, a unified authorization system that supports Attribute Based Access Control (ABAC) and supports policy enforcement using open standards and XACML, and changes to HBase to allow cell-level authorization, Hadoop will be able to move from isolated cluster environments with perimeter-level security to very secure deployments that can meet the requirements of highly secure environments. Professional Hadoop Solutions


  3. They’re both faster than cryptographic hashers such as SHA-1 and produce good distributions. Hadoop in Practice


  4. More significantly, objects in PAST are immutable, facilitating cryptographic protection and simplifying consistency and caching, but limiting the systems usefulness as a general storage service. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  5. Code signing, whereby code authors can use standard cryptographic algorithms to authenticate Java code. Core Java Volume II Advanced Features 9th Edition


  6. Cryptographic hash functions While a full description of a cryptographic hash function is beyond the scope of this book, we can say that cryptographic hash functions are used for information security. Getting Started with Google Guava


  7. These include support to protect against cipher block chaining attacks and new cryptographic algorithms. Java 7 New Features Cookbook


  8. This package and its subpackages support cryptographic message digests and digital signatures. Java in a Nutshell 5th Edition


  9. When a block is first stored, there is also a hidden file written to the same HDFS directory containing cryptographic checksums for the block. Hadoop Beginners Guide


  10. The Handshake Protocol allows the server and client to authenticate each other and to negotiate an encryption algorithm and secret cryptographic keys before the application protocol transmits or receives its first byte of data. Fundamental Networking in Java
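
例句多次提到加密哈希函数(如 SHA-1/SHA-256)与校验和。下面用 JDK 自带的 MessageDigest 计算一段示例文本的 SHA-256 摘要,仅作示意:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestDemo {
    public static void main(String[] args) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest("hello".getBytes(StandardCharsets.UTF_8));
        // 将摘要字节转成十六进制字符串输出
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(hex);
    }
}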



431、单词 natively 的匹配文本:


  1. As such, this chapter is a great opportunity to showcase an example of natively serving up a dynamic web application using nothing more than CouchDB’s integrated web server, something you may wish to do with your own applications. CouchDB The Definitive Guide


  2. The use of SequenceFile provides a storage mechanism natively supporting key/value structure, thus making implementations using this data layout much simpler. Professional Hadoop Solutions


  3. For browsers that do not support JSON natively, the eval() function can be used to translate a string containing JSON code into JavaScript objects. Beginning CouchDB


  4. There are few dependency management scenarios it can’t handle natively or be adapted to through a few easy customizations. Gradle Beyond The Basics


  5. But when you’re working with big data volumes you’ll need serialization formats with compact serialized forms that natively support partitioning and have schema evolution features. Hadoop in Practice


  6. It is, however, possible to disable checksums, typically when the underlying filesystem supports checksums natively. Hadoop:The Definitive Guide 3rd Edition


  7. Keys and values can take on types beyond the basic ones which Hadoop natively supports. Hadoop in Action


  8. You can do the same thing if you’re using Windows, but Windows doesn’t include a tail program natively. Cassandra The Definitive Guide


  9. With its support for nested complex types, Avro can natively support a representation of a node that is much closer to the runtime object. Hadoop Beginners Guide


  10. Nevertheless, if you know it, or already have a monitoring system that is natively JMX-aware, it’s a perfectly valid option for integration. Hadoop Operations



432、单词 timeouts 的匹配文本:


  1. Timeouts and workers Finally, there are two miscellaneous properties related to timeouts and two for worker pools that you can change: Apache Flume Distributed Log Collection for Hadoop


  2. Prior to Hadoop 0.18, blocking operations and fixed timeouts were used for the RPC calls. Pro Hadoop


  3. It turns out that most of these overwrites came from client retries due to errors or timeouts. GFS - The Google File System


  4. By default, both types of cached artifacts expire after 24 hours, but both timeouts can be set to arbitrary periods (including zero) using the resolution Strategy block. Gradle Beyond The Basics


  5. But setting the value too high would mean that the client’s interaction with HBase would have longer pauses, and this could result in timeouts on HBase’s side. HBase in Action


  6. I felt like we’d need to precisely control how things like connections, timeouts and buffers are handled, and RMI gives you little control over those. Hadoop:The Definitive Guide 3rd Edition


  7. TCP’s back off policies pay no attention to higher-level timeouts such as Chubby leases, so TCP-based KeepAlives led to many lost sessions at times of high network congestion. Chubby - The lock service for loosely-coupled distributed systems


  8. NFS:  May not be a better fit for many deployments as NFS is an external device, costly, less control on timeouts etc. Namenode High Availability


  9. Time for action – causing task failure Let''s cause a task to fail; before we do, we will need to modify the default timeouts: Hadoop Beginners Guide


  10. In general, timeouts should be set slightly too long rather than slightly too short.19 Fundamental Networking in Java
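
例句讨论的是 RPC 与网络连接中的超时设置。下面用 java.net.Socket 演示连接超时与读超时的设定(主机 example.com 与端口 80 为示例值),仅作示意:

import java.net.InetSocketAddress;
import java.net.Socket;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        try {
            // 连接超时 2 秒
            socket.connect(new InetSocketAddress("example.com", 80), 2000);
            // 读超时 5 秒:read() 阻塞超过 5 秒会抛出 SocketTimeoutException
            socket.setSoTimeout(5000);
            System.out.println("connected: " + socket.isConnected());
        } finally {
            socket.close();
        }
    }
}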



433、单词 subtree 的匹配文本:


  1. You can export the preferences of a subtree (or, less commonly, a single node) by calling the methods. Core Java Volume I Fundamentals 9th Edition


  2. The interactive facility allows simple subtree and summation views of the grid state XML tree. Monitoring with Ganglia


  3. Ceph [9] has a cluster of namespace servers (MDS) and uses a dynamic subtree partitioning algorithm in order to map the namespace tree to MDSes evenly. HDFS scalability the limits to growth


  4. The bin/hadoop script and all of the Cygwin utilities see a file system that is a subtree of the Windows file system, with the Windows drives mapped to the /cygdrive directory. Pro Hadoop


  5. Ceph utilizes a novel metadata cluster architecture based on Dynamic Subtree Partitioning [30] that adaptively and intelligently distributes responsibility for managing the file system directory hierarchy among tens or even hundreds of MDSs. Ceph A Scalable, High-Performance Distributed File System


  6. The problem only gets more complicated as we allow more complex changes, where for example the order of records may be permuted; such changes can be common, for instance, if a crawler uses a depth-first strategy to crawl the web, and a single link changing can move an entire subtree’s position in the input file. Incoop MapReduce for Incremental Computations


  7. If we changed a file in the /foo subtree on the first cluster from the previous example, we could synchronize the change with the second cluster by running: Hadoop:The Definitive Guide 3rd Edition


  8. Ceph [17] has a cluster of namespace servers (MDS) and uses a dynamic subtree partitioning algorithm in order to map the namespace tree to MDSs evenly. The Hadoop Distributed File System


  9. A special cell, named global, contains a subtree /ls/global/master that is mirrored to the. Chubby - The lock service for loosely-coupled distributed systems


  10. The line sticking out from the handle icon points to the right when the subtree is collapsed and down when the subtree is expanded (see Figure 6.20) Core Java Volume II Advanced Features 9th Edition



434、单词 alice 的匹配文本:


  1. Alice in Wonderland has 5,909 unique words, including the copyright notice at the beginning.) The words appear in random order. Core Java Volume I Fundamentals 9th Edition


  2. To continue with our previous examples, we may wish to assign the "alice" user the "compute-user" role in the "acme" tenant: bk-compute-adminguide-grizzly


  3. Suppose that Alice, an engineer at Google, comes up with a novel idea for extracting new kinds of signals from web pages. Dremel - Interactive Analysis of WebScale Datasets


  4. You might assign the computeuser role to the alice user in the acme tenant: bk-admin-guide-cloud-latest


  5. Bob uses the algorithm specification to generate his own public and private keys; he sends the public key to Alice. Java Security 2nd Edition


  6. Based on our sample data, the answer is Alice and Zach. Graph Databases


  7. Suppose Alice wants to send Bob a message, and Bob wants to know this message came from Alice and not an impostor. Core Java Volume II Advanced Features 9th Edition


  8. The sender of a message, Alice, can compute a digest for a message and then encrypt that digest with her private key. Java in a Nutshell 5th Edition


  9. So any commands executed by a user called alice will create files with an owner named alice and will only be able to read or write files to which this user has the correct access. Hadoop Beginners Guide


  10. Consider the example of an over-supervised girl (Alice) trying to write to her boyfriend (Bob) when her outgoing mail is scrutinized by her parents. Fundamental Networking in Java



435. Matched text for the word concatenation (a short code sketch follows the examples):


  1. Indeed, it isn’t efficient to generate a new string that holds the concatenation of "Hel" and "p!" Core Java Volume I Fundamentals 9th Edition


  2. The combiner can be used to do some concatenation prior to the reduce phase. MapReduce Design Patterns


  3. When obtaining execution context, the class overwrites the output directory with a new value — concatenation of the base output with the output name. Professional Hadoop Solutions


  4. This includes string concatenation, external table mapping, simple table joins, custom functions, and dependency distribution across the cluster. Hadoop RealWorld Solutions Cookbook


  5. It consists of the concatenation of the compressed HTML of every page, preceded by a small header. The anatomy of a large-scale hypertextual Web search engine


  6. One of the oldest rules in Java has always been that string concatenation using the plus operator should be avoided at all cost, which the following code example shows: Hadoop in Practice


  7. However, there is no easy way to combine the files (by concatenation, for example, in the case of plain-text files) to produce a globally sorted file. Hadoop:The Definitive Guide 3rd Edition


  8. The row keys in the catalog tables are the region names, which are a concatenation of the region’s table name, its start row, and an ID (usually the current time in milliseconds) HBase:The Definitive Guide


  9. In addition, the number of fields in the output relation is the sum of the number of fields in the input relations, and the output relation’s schema is a concatenation of the input relations’ schemas. Hadoop in Action


  10. The key is essentially the concatenation of four dimension keys (row, column family, column qualifier, and timestamp) Cassandra The Definitive Guide
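
The quotations above contrast plain string concatenation with more efficient alternatives. A small illustrative Java sketch (an assumed example, not drawn from the cited books) shows why repeated + concatenation in a loop is usually replaced with StringBuilder:

    public class ConcatenationDemo {
        public static void main(String[] args) {
            String[] parts = {"Hel", "p!", " ", "2023"};

            // Naive concatenation: each += allocates a brand-new String object.
            String slow = "";
            for (String p : parts) {
                slow += p;
            }

            // StringBuilder appends into one mutable buffer and builds the String once.
            StringBuilder sb = new StringBuilder();
            for (String p : parts) {
                sb.append(p);
            }
            String fast = sb.toString();

            System.out.println(slow.equals(fast)); // true
        }
    }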



436. Matched text for the word znodes (a short code sketch follows the examples):


  1. Client connections dump Lists all the sessions and ephemeral znodes for the ensemble. Hadoop:The Definitive Guide 3rd Edition


  2. It offers filesystem-like access with directories and files (called znodes) that. HBase:The Definitive Guide


  3. ZooKeeper (http://zookeeper.apache.org) allows distributed processes coordinating with each other through a shared hierarchical name space of data registers (znodes), much like a file system. Apache Kafka


  4. Like files in a traditional filesystem, ZNodes can store arbitrary bytes, have access control lists associated with them, and can be organized hierarchically. Hadoop Operations


  5. Recipes include ZooKeeper operations that manipulate small data nodes, called znodes, that are organized hierarchically as a tree, just like in a file system. ZooKeeper Distributed Process Coordination
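
A minimal sketch of creating and reading a znode with the ZooKeeper Java client, assuming a ZooKeeper server is reachable at localhost:2181; the path "/demo-config" and payload are made up for illustration.

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ZnodeDemo {
        public static void main(String[] args) throws Exception {
            // Connect to a (hypothetical) local ensemble; the watcher ignores events here.
            ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> {});

            // Create a persistent znode holding a few bytes of data.
            String path = zk.create("/demo-config", "v1".getBytes(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

            // Read it back; false disables watches, null skips the Stat output parameter.
            byte[] data = zk.getData(path, false, null);
            System.out.println(path + " = " + new String(data));

            zk.close();
        }
    }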



437. Matched text for the word ips:


  1. Currently, there can only be one network, and this set up would use the max IPs available in a /24 bk-compute-adminguide-grizzly


  2. Nova defines two different types of IP addresses for an instance: Fixed IPs and Floating IPs. Deploying OpenStack


  3. HAProxy uses a single configuration file to define everything from the frontend IPs being advertised to the servers behind them. RabbitMQ in Action


  4. The templates enable you to create most OpenStack resource types, such as instances, floating IPs, volumes, security groups, users, and so on. bk-admin-guide-cloud-latest


  5. To calculate page views, we need to calculate distinct IPs by page. Hadoop RealWorld Solutions Cookbook


  6. You’ll then use sorting to order them by the number of IPs that viewed that movie category, thus giving you the most popular categories first. Hadoop in Practice


  7. This snitch helps Cassandra know for certain if two IPs are in the same data center or on the same rack—because you tell it that they are. Cassandra The Definitive Guide


  8. Global Network Usage tab to view duration and creation requests for networks, subnets, routers, ports, and floating IPs, per tenant (project) admin-user-guide-trunk


  9. The templates allow creation of most OpenStack resource types such as instances, floating ips, volumes, security groups, users, etc. user-guide


  10. Fixed IPs are assigned to instances on boot, whereas Floating IP address55 OpenStack Operations Guide



438. Matched text for the word checkstyle:


  1. In the Java space, you can choose from a wide range of open source and commercial solutions, such as Checkstyle, PMD, Cobertura, FindBugs, and Sonar. Gradle in Action


  2. This recipe shows you how to do this for checkstyle. Jenkins Continuous Integration Cookbook


  3. There are tools already available for Java and Groovy projects to analyze and check source code, such as Checkstyle, JDepend, PMD, FindBugs, CodeNarc, and Sonar. Gradle Effective Implementation Guide


  4. Modern IDEs have great support for many code quality tools—Checkstyle, PMD, and FindBugs all have plugins for Eclipse, NetBeans, and IntelliJ, which provide rapid feedback for developers on code quality issues. Jenkins The Definitive Guide


  5. Earlier we discussed the jscheckstyle npm package, which coincidentally can output its report in Checkstyle format—and wouldn’t you know it, there is a Jenkins plug-in for that format! Even though you may be checking complexity numbers pre-commit, it is still a great idea to also generate those numbers as part of your build, for two reasons. Testable JavaScript


  6. Checkstyle Checkstyle ensures that all source code adheres to coding standards. Sonar Code Quality Testing Essentials


  7. These tools, such as Checkstyle, PMD, or FindBugs have a set of predefined rules of good practices and parse the source code looking for violations of those rules. Java 7 Concurrency Cookbook


  8. For the first example, the Checkstyle Ant task will be loaded from a combination of a properties file and a set of local directory JARs. Building and Testing with Gradle



439. Matched text for the word readability:


  1. Figure 2.2 Comparing build script size and readability between Maven and Gradle. Gradle in Action


  2. Some of the major benefits of using these higher-level of abstractions include readability, maintainability, development time, and automatic optimization. MapReduce Design Patterns


  3. Adding a custom action node with a well-defined and meaningful business name can significantly improve the overall readability of the process. Professional Hadoop Solutions


  4. One basic example might be the min() operation of the Reducer class that specifically formats output in a certain way for readability. Hadoop RealWorld Solutions Cookbook


  5. The following code shows the implementations of the Serialized interface in the RegexSerDe class, with some unimportant code removed to help with readability: Hadoop in Practice


  6. Statements that have to be terminated with a semicolon can be split across multiple lines for readability: Hadoop:The Definitive Guide 3rd Edition


  7. For easier readability, the related details were broken up into groups using blank lines. HBase:The Definitive Guide


  8. In the interest of readability, we’re ignoring the fact that this doesn’t yield identically formatted output; the data is the same. Hadoop Operations


  9. Although the output has been truncated for readability, the command starts up a simple JMS producer and you can see from the output that it. ActiveMQ in Action


  10. This technique increases readability by breaking up a long complex WHERE clause into two or more UNION queries. Programming Hive



440. Matched text for the word utf (a short code sketch follows the examples):


  1. If your input is comprised of tailed logfiles, the array is most likely a UTF-8 encoded String containing a line of text. Apache Flume Distributed Log Collection for Hadoop


  2. The UTF-16 encoding represents all Unicode code points in a variable-length code. Core Java Volume I Fundamentals 9th Edition


  3. In this case, the output is assumed by CouchDB to be HTML (i.e., with a content type of text/html) and UTF-8 formatting. Getting Started with CouchDB


  4. The input bytes are considered to be in the UTF-8 character set. Pro Hadoop


  5. This recipe is only applicable when your data is ASCII or UTF-8 Cassandra High Performance Cookbook


  6. Your implementation stored them as strings, and used the DataOutput’s writeUTF method to write them out: Hadoop in Practice


  7. Because of its emphasis on using standard UTF-8, there are some differences between Text and the Java String class. Hadoop:The Definitive Guide 3rd Edition


  8. The writeUTF method writes string data using a modified version of 8-bit Unicode Transformation Format. Core Java Volume II Advanced Features 9th Edition


  9. BytesType is the default for a reason: it provides the correct sorting for most types of data (UTF-8 and ASCII included) Cassandra The Definitive Guide


  10. Regardless of operating system, you should save the file in the UTF-8 encoding if your editor allows. Java 7 Recipes A Problem-Solution Approach
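
A short Java sketch contrasting plain UTF-8 byte encoding with DataOutputStream.writeUTF, the length-prefixed "modified UTF-8" mentioned above; the file name and sample text are illustrative.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.nio.charset.StandardCharsets;

    public class UtfDemo {
        public static void main(String[] args) throws Exception {
            String text = "naïve café";   // non-ASCII characters on purpose

            // Standard UTF-8 bytes, the encoding Hadoop's Text type emphasizes.
            byte[] utf8 = text.getBytes(StandardCharsets.UTF_8);
            System.out.println("UTF-8 byte length: " + utf8.length);

            // writeUTF stores a length prefix followed by a slightly modified UTF-8 encoding.
            try (DataOutputStream out = new DataOutputStream(new FileOutputStream("utf.bin"))) {
                out.writeUTF(text);
            }
            try (DataInputStream in = new DataInputStream(new FileInputStream("utf.bin"))) {
                System.out.println("Read back: " + in.readUTF());
            }
        }
    }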



441. Matched text for the word smtp (a short code sketch follows the examples):


  1. Interactive queries are performed via a text protocol (similar to SMTP or HTTP) Monitoring with Ganglia


  2. Finally, we describe interfacing directly with CouchDB from Erlang, which can be a useful technique if you want to integrate CouchDB storage with a server for non-HTTP protocols, like SMTP (email) or XMPP (chat) CouchDB The Definitive Guide


  3. The way you want to send an email (for example, via SMTP or a third-party email service provider) and the list of recipients are very specific to your build. Gradle in Action


  4. The output tells us that the local machine has the following services running: ftp, ssh, smtp, rpcbind (service for remote procedure calls), and jpp (service for Java packaging) Hadoop Operations and Cluster Management Cookbook


  5. The JavaMail API specification does not require any protocol providers, but the JavaMail reference implementation includes an IMAP message store provider, a POP3 message store provider, and an SMTP. Java EE 7 Specification


  6. However, our core data model is not restricted to our particular RPC framework; we also trace activities such as SMTP sessions in Gmail, HTTP requests from the outside world, and outbound queries to SQL servers. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. The Exchange environment is independent by necessity, but one class of Postfix machines acts as an added layer of protection and uses SMTP to transfer messages between mailboxes hosted in each environment. Hadoop:The Definitive Guide 3rd Edition


  8. In the past, it was simple to write a program that sends e-mail by making a socket connection to port 25, the SMTP port. Core Java Volume II Advanced Features 9th Edition


  9. You’ll also get a behind-the-scenes look at HTTP, SMTP, TCP/IP, and the other protocols that support the Internet and the Web. Java Network Programming 4th Edition


  10. In the example that follows, application-wide parameters are set up to store an organization’s e-mail address and SMTP host for sending mail. Java 7 Recipes A Problem-Solution Approach
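
Several quotes mention talking to an SMTP server directly over port 25. The sketch below shows that conversation over a plain socket, assuming a hypothetical server smtp.example.com that accepts unauthenticated mail; real servers generally require TLS and authentication, for which the JavaMail API is the practical choice.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class SmtpSketch {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("smtp.example.com", 25);  // hypothetical host
                 BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {

                System.out.println(in.readLine());          // server greeting
                send(out, in, "HELO client.example.com");
                send(out, in, "MAIL FROM:<alice@example.com>");
                send(out, in, "RCPT TO:<bob@example.com>");
                send(out, in, "DATA");
                out.print("Subject: hello\r\n\r\nJust testing.\r\n.\r\n");
                out.flush();
                System.out.println(in.readLine());          // acceptance of the message body
                send(out, in, "QUIT");
            }
        }

        // SMTP commands are terminated with CRLF; print the server's reply to each one.
        private static void send(PrintWriter out, BufferedReader in, String line) throws Exception {
            out.print(line + "\r\n");
            out.flush();
            System.out.println(in.readLine());
        }
    }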



442. Matched text for the word accessor (a short code sketch follows the examples):


  1. Methods that change instance fields are called mutator methods, and those that only access instance fields without modifying them are called accessor methods. Core Java Volume I Fundamentals 9th Edition


  2. While you can write accessor functions to obtain the value of a variable and/or modify it, an easier alternative is to define an attribute. Java Network Programming and Distributed Computing


  3. But it isn’t always possible to use a method reference as a classification function, because you may wish to classify using something more complex than a simple property accessor. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. A large number of accessor methods give you this information. Core Java Volume II Advanced Features 9th Edition


  5. Note that a nonzero-length array is always mutable, so it is wrong for a class to have a public static final array field, or an accessor that returns such a field. Effective Java 2nd Edition


  6. Certain important or frequently used fields can also be public, but it is common practice to make fields non-public and encapsulate them with public accessor methods. Java in a Nutshell 5th Edition


  7. It is easy to use a method in a loop that has side effects such as making copies, or an accessor that returns a copy of some object you need only once. Java Performance Tuning 2nd Edition


  8. The HotSpot VM’s TemplateTable defines all the templates and provides accessor functions to get the template for a given bytecode. Java Performance


  9. We can again apply our default rules, which in this case lead to the creation of little synchronized get and set methods (also known as accessor and assignment methods) for particles, otherwise avoiding direct access of the particles variable itself. Concurrent Programming in Java Design Principles and Pattern


  10. The following code demonstrates the declaration of a private field, followed by accessor and mutator methods that can be used to obtain or set the value of that field from an outside class: Java 7 Recipes A Problem-Solution Approach
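
A tiny Java class illustrating the accessor/mutator convention described above; the class and field names are made up for the example.

    public class Particle {
        // The field is private; outside code must go through the accessor and mutator.
        private double velocity;

        // Accessor (getter): reads the field without modifying it.
        public double getVelocity() {
            return velocity;
        }

        // Mutator (setter): the only way outside code can change the field.
        public void setVelocity(double velocity) {
            this.velocity = velocity;
        }
    }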



443. Matched text for the word logout:


  1. We also define a logout method that takes a UserSession object and invalidates it and performs any cleanup that may be needed. Scala in Depth


  2. To log out at the end of the transaction, use the method logout, passing to it the GUID. Jenkins Continuous Integration Cookbook


  3. The logout() method sets any value that might be stored in the ThreadLocal to null. Pro Spring 3


  4. Finally, we will change the logout so it is handled by Spring Security as well. Pro Spring MVC


  5. To implement it, you’ll need to configure Spring Security, configure web.xml, and add login and logout links to the app. Spring in Practice


  6. LogoutFilter no-op interaction when the logout URL is not requested. Pro Spring Security


  7. From an authentication standpoint, you’ll use the form-login D and logout E elements to enable a login page, with a login failure URL and a security logout page, respectively, for your web application. Spring Roo in Action


  8. Invokes the logout method on the managers in the JAAS configuration file. Core Java Volume II Advanced Features 9th Edition


  9. This is one of the most important classes in the JAAS API for application programmers: it defines the login( ) method (and the corresponding logout( ) method) that allows an application to authenticate a user. Java in a Nutshell 5th Edition


  10. As it turns out, logout capability is already enabled by your configuration without you having to do anything else. Spring in Action 4th Edition



444. Matched text for the word searchable:


  1. Views are functions that take your document data and produce searchable lists of information based on the document’s contents. Getting Started with CouchDB


  2. The package must live in the directory indicated by its name, which must be a directory that is searchable starting from the CLASSPATH. Thinking in Java 4th Edition


  3. Why Subscribe? Fully searchable across every book published by Packt. Copy and paste, print and bookmark content. On demand and accessible via web browser. Hadoop Operations and Cluster Management Cookbook


  4. Lucene operates on searchable documents, which are collections of fields, each having a value. Professional Hadoop Solutions


  5. Why Subscribe? Fully searchable across every book published by Packt. Cassandra High Performance Cookbook


  6. Subscribers have access to thousands of books, training videos, and prepublication manuscripts in one fully searchable database from publishers like O’Reilly Media, Prentice Hall Professional, Addison-Wesley. Java EE 7 Essentials


  7. Not only does it store them, it also indexes the contents of each document in order to make them searchable. Elasticsearch The Definitive Guide


  8. If this is not the case, you may need to break objects and searchable data into fields that are individually compressed. Java Performance Tuning 2nd Edition


  9. Organizations would like to build this living knowledge repository of Hive assets that is easily searchable by users. Programming Hive


  10. Safari Books Online is a digital library providing searchable, on-demand access to thousands of technology, digital media, and professional development books and videos from leading publishers. Java SE 8 for the Really Impatient



445. Matched text for the word subdirectories (a short code sketch follows the examples):


  1. Over time, the hdfs.path directory will get very full so you will want to add some kind of time element into the path to partition the files into subdirectories. Apache Flume Distributed Log Collection for Hadoop


  2. The attributes group, name, and version directly map to subdirectories in the file system. Gradle in Action


  3. The views directory is structured into subdirectories, each of which is the name of the view. Beginning CouchDB


  4. The simplest use of fileTree() simply points it at a parent directory, allowing it to recurse through all subdirectories and add all of the files it finds into the resulting file collection. Gradle Beyond The Basics


  5. We can specify all the part prefixed subdirectories as input to the dependent job using wildcards. Hadoop MapReduce Cookbook


  6. I also sometimes have data that’s specific to a chapter, which exists within chapter-specific subdirectories under the same GitHub location. Hadoop in Practice


  7. If you want to see all the subdirectories, in a way similar to Unix’s ls with the -r option, you can use Hadoop’s lsr command. Hadoop in Action


  8. Double-click on the ufodata directory to open it and within it create two subdirectories called ufo and states. Hadoop Beginners Guide


  9. The AMQ message store directory contains subdirectories for all the brokers that are running on the machine. ActiveMQ in Action


  10. The master project goes in a top-level directory, with subprojects arranged in subdirectories underneath. Building and Testing with Gradle
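
A short sketch that lists every subdirectory beneath a parent directory with java.nio.file.Files.walk, similar in spirit to the recursive listings described above; the starting path "ufodata" is illustrative.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class SubdirectoryLister {
        public static void main(String[] args) throws IOException {
            Path root = Paths.get("ufodata");   // hypothetical parent directory

            // walk() recurses through every subdirectory; keep only the directories.
            try (Stream<Path> paths = Files.walk(root)) {
                paths.filter(Files::isDirectory)
                     .forEach(System.out::println);
            }
        }
    }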



446. Matched text for the word bundlor:


  1. Tools (such as Eclipse PDE, Bnd, and Bundlor) can help you easily and efficiently manage the manifest file, and they can easily be integrated with building tools like Ant or Maven. Spring Dynamic Modules in Action



447. Matched text for the word lzo:


  1. On the other hand, using snappy or lzo results in faster compression performance, but results in a lower compression ratio. Apache Flume Distributed Log Collection for Hadoop


  2. This step requires the HBase cluster to support LZO compression. HBase Administration Cookbook


  3. The framework supports the gzip, zip, sometimes LZO, and bzip2 compression codecs. Pro Hadoop


  4. Also covered is how to set the block size and replication, and enable LZO compression. Hadoop RealWorld Solutions Cookbook


  5. Various compression codecs are available to be used with HBase, including LZO, Snappy, and GZIP. HBase in Action


  6. Twitter LZO-compresses all of its data, and uses Protocol Buffers for serialization purposes, all of which are geared to optimizing the use of its storage and computing resources. Hadoop in Practice


  7. The LzopCodec is compatible with the lzop tool, which is essentially the LZO format with extra headers, and is the one you normally want. Hadoop:The Definitive Guide 3rd Edition


  8. Before Snappy was made available in 2011, the recommended algorithm was LZO, even if it did not have the best compression ratio. HBase:The Definitive Guide


  9. As result of these experiments we chose a variant of LZO for production, since it gave an about 10% better compression ratio and was up to twice as fast when decompressing compared to Zippy. PowerDrill - Processing a Trillion Cells per Mouse Click


  10. However, BZip2 and LZO provide block-level compression, where each block has complete records, so Hadoop can split these files on block boundaries. Programming Hive



448. Matched text for the word gzip (a short code sketch follows the examples):


  1. For example, if you specify the codec as follows all files written will have a .gzip extension, so you don''t need to specify a hdfs.fileSuffix property in this case: Apache Flume Distributed Log Collection for Hadoop


  2. The most notable things to do for performance are keeping data small (GZIP compression, high JPG compression) and avoiding requests all together (HTTP-level caching in the browser) CouchDB The Definitive Guide


  3. We suggest downloading a .gzip archived file with the filename ending with tar.gz. Hadoop Operations and Cluster Management Cookbook


  4. To create a TAR archive with the optional gzip or bzip2 compression, we must use the tarFiles task. Gradle Effective Implementation Guide


  5. Since files compressed using the bzip2, gzip, and DEFLATE codecs cannot be split, the whole file must be given as a single input split to the mapper. Hadoop RealWorld Solutions Cookbook


  6. Deflate Deflate is similar to zlib, which is the same compression algorithm that gzip uses without the gzip headers. Hadoop in Practice


  7. Note that the gzip file format is DEFLATE with extra headers and a footer.) The .deflate filename extension is a Hadoop convention. Hadoop:The Definitive Guide 3rd Edition


  8. This problem was solved by GZIP compressing the intermediate map output. Apache Hadoop Goes Realtime at Facebook


  9. For example, to use GZIP compression, you can set the configuration object: Hadoop in Action


  10. On Linux, GZip extraction utilities should be preinstalled; on Windows, you’ll need to get a program such as WinZip, which is commercial, or something like 7-Zip, which is freeware. Cassandra The Definitive Guide
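
A minimal Java sketch of gzip compression and decompression with the JDK's built-in streams; the file name and payload are illustrative, and readAllBytes assumes a Java 9 or newer runtime.

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    public class GzipDemo {
        public static void main(String[] args) throws Exception {
            // Compress a small payload into data.gz.
            try (GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream("data.gz"))) {
                out.write("some repetitive text, some repetitive text".getBytes(StandardCharsets.UTF_8));
            }

            // Decompress it again and print the original text (requires Java 9+ for readAllBytes).
            try (GZIPInputStream in = new GZIPInputStream(new FileInputStream("data.gz"))) {
                System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
        }
    }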



449. Matched text for the word leverages:


  1. As compared to Ganglia, which stores its data in RRDtool, OpenTSDB leverages HBase''s scalability to monitor it at a larger scale. HBase Administration Cookbook


  2. Ganglia 1.0 had no time-series support, whereas Ganglia today leverages the power. Monitoring with Ganglia


  3. The ElasticSearch point of view On top of all this is ElasticSearch which leverages Apache Lucene and thankfully allows us to change the default scoring algorithm (more about this can be found in the Altering Apache Lucene scoring section, Chapter 3, Low-level Index Control) Mastering ElasticSearch


  4. Sqoop leverages database to describe the schema for the imported/ exported data and MapReduce for parallelization operation and fault tolerance. Professional Hadoop Solutions


  5. To control granularity we propose a new Contraction phase that leverages Combiner functions, normally used to reduce network traffic by anticipating a small part of the processing done by Reduce tasks. Incoop MapReduce for Incremental Computations


  6. To support this throughput HDFS leverages unusually large (for a filesystem) block sizes and data locality optimizations to reduce network input/output (I/O) Hadoop in Practice


  7. Instead of implementing a separate RPC server, it leverages the Accept header of HTTP to send and receive the data encoded in Protocol Buffers. HBase:The Definitive Guide


  8. Although both consider only independent failures, RADOS leverages CRUSH to mitigate correlated failure risk with failure domains. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  9. HBase in turn leverages HDFS for scalable and fault tolerant storage and ZooKeeper for distributed consensus. Apache Hadoop Goes Realtime at Facebook


  10. One of these methods leverages the Bloom filter, a data structure that’s useful in many data processing tasks. Hadoop in Action



450. Matched text for the word enums (a short code sketch follows the examples):


  1. Actually, the OpenOption is an interface from the java.nio.file package and it has two implementations: the LinkOption class (remember the well-known NOFOLLOW_LINKS enum constant) and the StandardOpenOption class, which defines the following enums: Pro Java 7 NIO.2


  2. For instance, enums are compared by the order in which the symbol is defined and not by the value of the symbol string. Hadoop:The Definitive Guide 3rd Edition


  3. The property values are defined on CacheRetrieveMode and CacheStoreMode enums and explained in Table 13-5 Java EE 7 Essentials


  4. Typically, most of your string fields will be exact-value not_analyzed fields such as tags or enums, plus a handful of full-text fields which will use some default analyzer like standard or english or some other language. Elasticsearch The Definitive Guide


  5. The language supports four kinds of types: interfaces (including annotations), classes (including enums), arrays, and primitives. Effective Java 2nd Edition


  6. Arrays, enumerated types (or “enums”) and annotation types are the other three. Java in a Nutshell 5th Edition


  7. Avro supports six kinds of complex types: records, enums, arrays, maps, unions, and fixed. Cassandra The Definitive Guide


  8. Avro also supports primitive types such as integers, floats, and booleans as well as other structured types such as arrays and enums. Hadoop Beginners Guide


  9. I could have used strings or int constants to specify the kind of record to search for and the database to search in, but because there are only a small number of valid values, lookUpNames() defines enums with a fixed number of members instead. Java Network Programming 4th Edition


  10. By calling conversion methods against these enums, long values representing the duration between two dates can be converted. Java 7 Recipes A Problem-Solution Approach
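
A small Java enum in the spirit of the "fixed number of valid values" usage described above; the enum and constant names are made up for the example.

    public class EnumDemo {
        // The compiler guarantees that only these three values ever exist.
        enum RecordKind { PERSON, PLACE, ORGANIZATION }

        public static void main(String[] args) {
            RecordKind kind = RecordKind.PLACE;

            // Enums compare by identity and work naturally in switch statements.
            switch (kind) {
                case PERSON:       System.out.println("searching people"); break;
                case PLACE:        System.out.println("searching places"); break;
                case ORGANIZATION: System.out.println("searching orgs");   break;
            }

            // ordinal() reflects declaration order, the basis for Avro's enum comparison noted above.
            System.out.println(kind.name() + " has ordinal " + kind.ordinal());
        }
    }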



451. Matched text for the word predicates (a short code sketch follows the examples):


  1. Using onlyIf predicates Every task has a method onlyIf that accepts a closure as an argument. Gradle Effective Implementation Guide


  2. In addition, coprocessors can be used for implementation of new features — for example, secondary indexing, complex filtering (push down predicates), and access control. Professional Hadoop Solutions


  3. Predicate pushdown – Predicates are pushed down to the scan if possible so that rows can be filter early in the processing. Hive A Petabyte Scale Data Warehouse Using Hadoop


  4. Filters are sometimes called push-down predicates, allowing you to push data-filtering criteria down to the server (see figure 4.16) HBase in Action


  5. In addition, it provides push-down predicates, that is, filters, reducing data transferred over the network. HBase:The Definitive Guide


  6. A second important and growing use case for graph databases is the Semantic Web, where predicates are given equal status to subjects and objects in a triple. Cassandra The Definitive Guide


  7. The execution engine will ignore the file after processing the header if it can determine that the file does not contain any records of interest, based on the predicates defined for that table in the query. Tenzing - A SQL Implementation On The MapReduce Framework


  8. As with traditional relational databases, Hive allows tables to be partitioned based on the values of virtual columns and for these values to then be used in query predicates later. Hadoop Beginners Guide


  9. When we add predicates to WHERE clauses that filter on partition values, these predicates are called partition filters. Programming Hive


  10. Many formalisms are available for precisely stating predicates describing requirements (as discussed in most of the texts on concurrent design methods listed in the Further Readings) Concurrent Programming in Java Design Principles and Pattern
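
In Java 8 the idea of a predicate is captured directly by java.util.function.Predicate. The sketch below filters a list with two composed predicates; the data is invented and the composition is only conceptually similar to the push-down filters described in the quotes above.

    import java.util.Arrays;
    import java.util.List;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    public class PredicateDemo {
        public static void main(String[] args) {
            List<Integer> rows = Arrays.asList(3, 42, 7, 100, 55, 8);

            // Two simple predicates, combined with and() -- loosely analogous to
            // pushing WHERE-clause filters down to a scan.
            Predicate<Integer> bigEnough = n -> n >= 10;
            Predicate<Integer> even = n -> n % 2 == 0;

            List<Integer> filtered = rows.stream()
                    .filter(bigEnough.and(even))
                    .collect(Collectors.toList());

            System.out.println(filtered); // [42, 100]
        }
    }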



452. Matched text for the word deadlocks (a short code sketch follows the examples):


  1. The synchronicity of this model makes it easier to reason about program semantics when implementing algorithms, and ensures that Pregel programs are inherently free of deadlocks and data races common in asynchronous systems. Pregel  -  A System for Large-Scale Graph Processing


  2. If no other thread calls signal again, then the system deadlocks. Core Java Volume I Fundamentals 9th Edition


  3. To the scheduler they appeared as map-only jobs with radically different resource curves, thwarting the assumptions built into to the platform and causing poor utilization, potential deadlocks, and instability. Apache Hadoop YARN Yet Another Resource Negotiator


  4. Reads within read-write transactions use woundwait [33] to avoid deadlocks. Spanner - Google’s Globally-Distributed Database


  5. Locking, waits, and deadlocks It is a known fact that complex transactional processing, like two-phase commits, can increase the possibility of multiple clients waiting for a resource to become available. HBase:The Definitive Guide


  6. Complete non blocking, event IO based communication (no blocking threads on sockets, no deadlocks, scalable with large number of shards/replicas) elasticsearch. The Road to a Distributed, (Near) Real Time, Search Engine


  7. Depending on what an alien method does, calling it from a synchronized region can cause exceptions, deadlocks, or data corruption. Effective Java 2nd Edition


  8. Java 7 has improved the use of class loaders by modifying the locking mechanism to avoid deadlocks. Java 7 New Features Cookbook


  9. More Info You have to be very careful with the use of Locks to avoid deadlocks. Java 7 Concurrency Cookbook


  10. In addition, you will find detailed coverage of all aspects of concurrent programming, including such topics as confinement and synchronization, deadlocks and conflicts, state-dependent action control, asynchronous message passing and control flow, coordinated interaction, and structuring web-based and computational services. Concurrent Programming in Java Design Principles and Pattern
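
The classic two-lock deadlock is avoided by always acquiring locks in the same global order. A minimal illustrative sketch (the class and methods are invented, not taken from the cited sources):

    public class LockOrderingDemo {
        private final Object lockA = new Object();
        private final Object lockB = new Object();

        // Both methods take lockA before lockB. If one of them acquired the locks in the
        // opposite order, two threads could each hold one lock and wait forever for the other.
        public void transfer() {
            synchronized (lockA) {
                synchronized (lockB) {
                    // ... work that needs both resources ...
                }
            }
        }

        public void audit() {
            synchronized (lockA) {
                synchronized (lockB) {
                    // ... read-only work over both resources ...
                }
            }
        }
    }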



453. Matched text for the word metamodel:


  1. In this new criteria API, the criteria being passed into the query is based on the mapped entity classes’ metamodel. Pro Spring 3


  2. Most of the time, online store applications don’t have a static structure for the products in their catalog: they use a metamodel configured with the structure of the products and a generic engine that uses this metamodel to display products dynamically. Spring Batch in Action


  3. Criteria API The Criteria API is an object-based, type-safe API and operates on a metamodel of the entities. Java EE 7 Essentials


  4. This reflects additions to support many modeling constructs natural to Java developers, expansion of the standardized mapping options, an object-based criteria query API, a metamodel API, support for automatic validation, support for pessimistic locking, and much more. Pro JPA2


  5. Introduction to Querydsl When working with Querydsl, you will usually start by deriving a metamodel from your domain classes. Spring Data Modern Data Access for Enterprise Java



454. Matched text for the word jaas:


  1. Note: Using well-supported third-party frameworks, such as JAAS and JDBC implementations, you can connect to a much wider set of services than mentioned in the table. Jenkins Continuous Integration Cookbook


  2. Tomcat, for example, allows you to authenticate users against a relational database (using direct JDBC or a DataSource), JNDI, JAAS, or an XML configuration file. Jenkins The Definitive Guide


  3. Standards such as Java Authentication and Authorization Service (JAAS) or Java EE Security do offer some ways of performing some of the same authentication and authorization functions, but Spring Security is a winner because it packages up everything you need to implement a top-to-bottom application security solution in a concise and sensible way. Spring Security 3.1


  4. On the other hand, if you expected SASL to work, please fix your JAAS configuration. Professional Hadoop Solutions


  5. So it overlaps mainly with the JAAS Java API, although they can be used together, as you will see later in the book. Pro Spring Security


  6. Chapter 15 This chapter discusses how the Java Authentication and Authorization Service (JAAS) enables applications to authenticate users and grant them particular permissions based on their login ID or other credentials. Java Security 2nd Edition


  7. Specifies the name of a Java Authentication and Authorization Service (JAAS) login configuration entry to use when the JMX agent authenticates users. Oracle JRockit The Definitive Guide


  8. The string parameter "Login1" in the LoginContext constructor refers to an entry with the same name in the JAAS configuration file. Core Java Volume II Advanced Features 9th Edition


  9. With the addition of the JAAS API to the core Java platform in Java 1.4, policytool allows the specification of a Principal to whom a set of permissions is granted. Java in a Nutshell 5th Edition


  10. ActiveMQ also supports its own simple style of authentication and authorization using properties files as well as standard JAAS login modules. ActiveMQ in Action



455. Matched text for the word wal (a short code sketch follows the examples):


  1. This durability is provided by a combination of a Write Ahead Log (WAL) and one or more file storage directories. Apache Flume Distributed Log Collection for Hadoop


  2. So, HBase can guarantee that when a region server dies, data can be recovered and replayed on other region servers using its Write-Ahead Log (WAL) HBase Administration Cookbook


  3. It also means that if writing the record to the WAL fails, the whole operation must be considered a failure. Professional Hadoop Solutions


  4. When a write is made, by default, it goes into two places: the write-ahead log (WAL), also referred to as the HLog, and the MemStore (figure 2.1) HBase in Action


  5. When data is updated it is first written to a commit log, called a write-ahead log (WAL) in HBase, and then stored in the in-memory memstore. HBase:The Definitive Guide


  6. A common practice for highthroughput data stores, use of a write ahead log (WAL) such as the edits file reduces I/ O operations to sequential, append-only operations (in the context of the namenode, since it serves directly from RAM), which avoids costly seek operations and yields better overall performance. Hadoop Operations
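
The quotes above describe the basic WAL contract: append the mutation to a sequential log and force it to disk before applying it to the in-memory store, so that the log can be replayed after a crash. A deliberately oversimplified Java sketch of that idea (not HBase's actual implementation; class and file names are invented):

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.util.HashMap;
    import java.util.Map;

    public class TinyWal {
        private final FileOutputStream log;
        private final Map<String, String> memstore = new HashMap<>();

        public TinyWal(String logPath) throws IOException {
            this.log = new FileOutputStream(logPath, true);   // append-only, sequential writes
        }

        public void put(String key, String value) throws IOException {
            // 1. Append the edit to the write-ahead log and sync it to disk.
            log.write((key + "=" + value + "\n").getBytes(StandardCharsets.UTF_8));
            log.getFD().sync();
            // 2. Only then apply the edit in memory; after a crash the log can be replayed.
            memstore.put(key, value);
        }
    }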



456. Matched text for the word initializes:


  1. But in case you have to know, it initializes to null several other internal elements of the mmodule structure and fills in a little bit of static information. Monitoring with Ganglia


  2. At the start of a simulation, the lightweight simulator initializes cluster state using task-size data extracted from the. Omega - flexible, scalable schedulers for large compute clusters


  3. More subtly, because the JobTracker needs to allocate tracking structures for every job it initializes, its admission control logic includes safeguards to protect its own availability; it may delay allocating fallow cluster resources to jobs because the overhead of tracking them could overwhelm the JobTracker process. Apache Hadoop YARN Yet Another Resource Negotiator


  4. For each input record, Sawzall initializes the pre-defined variable input to the uninterpreted byte string of the input record. Interpreting the Data Parallel Analysis with Sawzall


  5. The only feature that sets them apart from regular POJOs4 (or POGOs) is that they must have a property called name and a constructor that accepts a String and initializes the name property with it. Gradle Beyond The Basics


  6. When we run the code, it initializes the K-means MapReduce job and executes it using the MapReduce framework. Hadoop MapReduce Cookbook


  7. The first time we run this request, the upsert value is indexed as a new document, which initializes the views field to 1 Elasticsearch The Definitive Guide


  8. When a task initializes, it calls configure(), which you override to retrieve and store your properties. Hadoop in Action


  9. This tends to be a side effect of how the jobtracker initializes jobs submitted to the cluster and how it handles memory. Hadoop Operations


  10. This simple class starts a minimal ActiveMQ broker configuration and initializes the Spring application context to start the JMS clients. ActiveMQ in Action



457. Matched text for the word regex (a short code sketch follows the examples):


  1. In order to extract one or more fields, you start by specifying the regex property with group matching parentheses. Apache Flume Distributed Log Collection for Hadoop


  2. If you’re interested in something specific, you can specify a system name, or a regex or type-glob to combine various metrics from various hosts to create a custom graph of exactly what you want to see. Monitoring with Ganglia


  3. ElasticSearch allows us to use two types of field data loading filtering: by term frequency, by using regex, or by combining both of them. Mastering ElasticSearch


  4. We use the setup function to retrieve the map regex from the job configuration. MapReduce Design Patterns


  5. The mapper class defines a statically initialized position to read the country from each line, as well as the regex pattern to split each line. Hadoop RealWorld Solutions Cookbook


  6. To rename files using regular expressions, we can simply provide a source regex and a destination filename. Gradle Beyond The Basics


  7. Keep in mind that this is a regex over the un-interpreted bytes in the rowkey. HBase in Action


  8. Solution Use the RegexSerDe bundled with Hive and define a regular expression that can be used to parse the contents of Apache log files. Hadoop in Practice


  9. RegexMapper A mapper that finds matches of a regular expression in the input value and emits the matches along with a count of one. Hadoop:The Definitive Guide 3rd Edition


  10. Returns whether the string s matches the Java regular expression regex. Hadoop in Action
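
A small Java regex example with Pattern and Matcher, in the same spirit as the log-parsing and row-key matching uses above; the pattern and input line are made up.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class RegexDemo {
        public static void main(String[] args) {
            // Group-matching parentheses capture the IP address and the request path.
            Pattern logLine = Pattern.compile("^(\\d+\\.\\d+\\.\\d+\\.\\d+) \"GET (\\S+)\"$");

            Matcher m = logLine.matcher("10.0.0.7 \"GET /index.html\"");
            if (m.matches()) {
                System.out.println("ip   = " + m.group(1));
                System.out.println("path = " + m.group(2));
            }

            // matches() is also available directly on String, as in the Hive example above.
            System.out.println("abc123".matches("[a-z]+\\d+"));  // true
        }
    }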



458. Matched text for the word gwt:


  1. However, Spring can also integrate with other popular web frameworks such as Struts, JSF, Google Web Toolkit (GWT), and so on. Pro Spring 3


  2. Google’s GWT compiler can also output source maps, and work is being done for the CoffeeScript compiler as well, so as far as you can tell you are debugging your original native code while the browser is actually running the compiled version. Testable JavaScript


  3. Google Web Toolkit (GWT) integration There are a number of ways to set up a GWT-based application to communicate with the server. Spring Security 3.1


  4. We’ll show you how to use servlet and JSP technologies with Spring DM, and how to use web frameworks like Spring MVC, JSF, and GWT to implement real web applications. Spring Dynamic Modules in Action


  5. Using a similar approach, we can also build support for other widgets in jQuery UI or other UI frameworks such as Yahoo UI, Doojo, and GWT. Selenium Testing Tools Cookbook


  6. You can use this technique when doing rapid prototyping; perhaps somebody has created a set of domain object models, and you want to try using them from various front ends such as Spring MVC, JSF, or GWT. Spring Roo in Action


  7. If you are working with GWT and would like to take advantage of Guava in your code, there is also a GWT compatible version that can be downloaded by clicking on the guava-gwt-14.jar link on the same page. Getting Started with Google Guava



459. Matched text for the word ioc (a short code sketch follows the examples):


  1. IoC is a technique that externalizes the creation and management of component dependencies. Pro Spring 3


  2. To be able to do the things mentioned here, we will explore the Inversion of Control (IoC) container; the application context. Pro Spring MVC


  3. Spring’s Core Container provides the inversion of control (IoC) and DI capabilities on which all the other modules are built. Spring in Practice


  4. Dependency Injection The basic idea of DI, a type of Inversion of Control (IoC), is simply that instead of having an object instantiate its needed dependencies, the dependencies are somehow given to the object. Pro Spring Security


  5. Dependency Injection in Scala: The Cake Pattern Dependency injection (DI), a form of inversion of control (IoC), is a powerful technique for resolving dependencies between “components” in larger applications. Programming Scala


  6. Spring application context management in OSGi environments is an example of the inversion of control (IoC) pattern: Spring DM–powered bundles wait for the extender to inspect them and create their application context; no application code is needed for this task. Spring Dynamic Modules in Action


  7. The theme of inversion of control (IoC) is central to this investigation because it’s a significant part of the Spring philosophy.3 For the purpose of this discussion, we consider IoC in broad terms. Spring Integration in Action


  8. Using the Spring IoC container, these may configured as normal Spring beans and injected into GemFire components. Spring Data Modern Data Access for Enterprise Java
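
Constructor-based dependency injection in plain Java illustrates the inversion of control the quotes describe: the service no longer instantiates its own dependency, something outside hands one in. All names here are invented; in Spring the IoC container would perform this wiring from configuration.

    public class IocDemo {
        interface MailSender {
            void send(String to, String body);
        }

        static class ConsoleMailSender implements MailSender {
            public void send(String to, String body) {
                System.out.println("to " + to + ": " + body);
            }
        }

        // The service receives its dependency instead of creating it (inversion of control).
        static class SignupService {
            private final MailSender mailSender;

            SignupService(MailSender mailSender) {
                this.mailSender = mailSender;
            }

            void register(String email) {
                mailSender.send(email, "Welcome!");
            }
        }

        public static void main(String[] args) {
            // Here the "container" is just this main method doing the wiring by hand.
            SignupService service = new SignupService(new ConsoleMailSender());
            service.register("alice@example.com");
        }
    }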



460. Matched text for the word lua:


  1. Part 3 completes the content with methods to help you scale Redis using memory-saving techniques, horizontal sharding, and Lua scripting. Redis in Action


  2. Redis 2.6 includes a built-in Lua interpreter which developers can leverage to write more advanced queries to be. The Little Redis Book


  3. Integration of Lua and R has been proposed, and work is in progress. Tenzing - A SQL Implementation On The MapReduce Framework


  4. This has been achieved by integrating the Redis server with the Lua C API. Redis Cookbook



461. Matched text for the word coprocessor:


  1. The framework for coprocessors is very flexible, and allows implementing two basic coprocessor types: Professional Hadoop Solutions


  2. Writing an observer coprocessor is tricky to get right, and such a coprocessor can be extremely difficult to debug when running at scale. HBase in Action


  3. With the coprocessor feature in HBase, you can even move part of the computation to where the data lives. HBase:The Definitive Guide


  4. HBase security is implemented as a coprocessor—a plug-in that, in many ways, works like a database trigger, firing authorization checking code before each operation is performed. Hadoop Operations



462. Matched text for the word multicore:


  1. How can we fulfill the need for computational speed? Moore’s law takes us some of the way, but multicore is the real future. Programming Concurrency on the JVM


  2. Stepping back from extremes of multicore CPU deployments, Ganglia has also proven itself to be truly adaptable and versatile in the face of dramatic change. Monitoring with Ganglia


  3. Now, with Moore’s Law running out of steam (at least for conventional chips), speed improvements are appearing in the form of multicore processors rather than faster chips. Thinking in Java 4th Edition


  4. It is effectively a single point of contention for the whole server, and it can cause severe stalls on multicore servers. High Performance MySQL 3rd Edition


  5. Given that today’s computers have blazingly fast multicore processors, you should use their computing powers to their fullest. Gradle in Action


  6. There’s also a hardware influence: commodity CPUs have become multicore—the processor in your laptop or desktop machine probably has four or more CPU cores within it. Java 8 in Action Lambdas, Streams and Functional-style Programming


  7. For production use, it is typical that you use multicore processors.* Quad-core are state of the art and affordable, while hexa-core processors are also becoming more popular. HBase:The Definitive Guide


  8. It is inherent in much of what we do, and a requirement if you are to obtain good performance from multicore processors, which are now commonplace. Effective Java 2nd Edition


  9. Being able to make efficient use of CPU cycles can be challenging for multithreaded applications running on multiprocessor and multicore. Java Performance


  10. By making concurrent programs, you take advantage of today’s multicore CPUs. Java 7 Recipes A Problem-Solution Approach



463. Matched text for the word refactor:


  1. During that incubation year, work had already begun to refactor Flume under the Star Trek Themed tag, Flume-NG (Flume the Next Generation) Apache Flume Distributed Log Collection for Hadoop


  2. We refactor the TimerTest example and extract a TalkingClock class. Core Java Volume I Fundamentals 9th Edition


  3. It’s easy to refactor the existing project structure into the identified modules. Gradle in Action


  4. At Stage 4 of the cycle (where MapReduce job should be invoked from an Oozie action), it is possible to either use an existing driver as is by using a java action, or refactor it for use as an Oozie map-reduce action. Professional Hadoop Solutions


  5. As the plug-in takes shape, we will slowly refactor it into a distributable plug-in project with a lifecycle of its own. Gradle Beyond The Basics


  6. We can refactor the MapReduce program to track the count explicitly. Hadoop in Action


  7. Once you do this, you can refactor the method to give it a single return: Effective Java 2nd Edition


  8. Lastly, we revisited the portfolio example from chapter 3 in order to refactor it to use Spring JMS. ActiveMQ in Action


  9. When locking presents liveness or performance problems for a given class or program, usually the best solution is to refactor the design to use one or more of the other approaches presented in this chapter. Concurrent Programming in Java Design Principles and Pattern


  10. While you are always free to solve problems in a locally optimized way by writing Groovy code inside your build, the most maintainable builds will eventually refactor this code into organized plug-ins that expose new build language idioms to the build files that use them. Building and Testing with Gradle



464. Matched text for the word jre:


  1. There is also a Java Runtime Environment (JRE) that contains the virtual machine but not the compiler. Core Java Volume I Fundamentals 9th Edition


  2. OSNaming JNDI service provider is a part of the Java SE 7 SDK and JRE. Java EE 7 Specification


  3. HBase needs the Java Runtime Environment (JRE) to be installed and available on the system. HBase in Action


  4. JAVA_HOME must be set to point to a JDK, not to a JRE. Hadoop MapReduce Cookbook


  5. All of the following assumes you have the Java Runtime Environment (JRE) installed. HBase:The Definitive Guide


  6. You can run into grief if you drop a JAR file into the jre/lib/ext directory and one of its classes needs to load a class that is not a system or extension class. Core Java Volume II Advanced Features 9th Edition


  7. The deployJava.js is part of the Java Deployment Toolkit and is used to detect the presence of a JRE, install one if necessary, and then run an applet. Java 7 New Features Cookbook


  8. Ivy requires Ant, and building from source requires the complete JDK, version 1.6.0_20 or better, not just the JRE. Cassandra The Definitive Guide


  9. Although the JRE (Java Runtime Environment) is all you need to run Hive, you will need the full JDK (Java Development Kit) to build examples in this book that demonstrate how to extend Hive with Java code. Programming Hive


  10. The Java platform consists of a hierarchy of components, starting with the Java Development Kit (JDK), which is composed of the Java Runtime Environment (JRE), the Java programming language, and platform tools that are necessary to develop and run Java applications. Java 7 Recipes A Problem-Solution Approach



465. Matched text for the word instantiating:


  1. You do this by instantiating implementations of AppleFormatter and giving them as arguments to prettyPrintApple: Java 8 in Action Lambdas, Streams and Functional-style Programming


  2. To do that, you must first connect to the Zookeeper instance keeping track of Accumulo by instantiating a ZookeeperInstance class, which will return a connector. Professional Hadoop Solutions


  3. Using an HTablePool is more common in practice than instantiating HTables directly: HBase in Action


  4. Instantiating a Configuration object and obtaining a FileSystem handle within a Hadoop environment will point it to the HDFS NameNode of that environment. Hadoop MapReduce Cookbook


  5. There is a cost to instantiating an HTable, so if you do this for each insert, you may have a negative impact on performance, hence our setup of HTable in the configure() step. Hadoop:The Definitive Guide 3rd Edition


  6. You must provide a row key when instantiating an Increment, which sets the row containing all the counters that the subsequent call to increment() should modify. HBase:The Definitive Guide


  7. Another difference is that Pig programs are written in Pig Latin, whereas Cascading works more like a Java framework in which you create a data processing flow through instantiating various Java classes (Each, Every, etc.) Hadoop in Action


  8. For example, the following uses a raw type when instantiating the identifier, list: Java 7 New Features Cookbook


  9. We introduce them here, however, because they are the next higher level of syntax after methods, and because the rest of this chapter requires a basic familiarity with the concept of class and the basic syntax for defining a class, instantiating it, and using the resulting object. Java in a Nutshell 5th Edition


  10. The method with this signature can be invoked without instantiating its containing class because it is marked as static. Java 7 Recipes A Problem-Solution Approach



466. Matched text for the word usr:


  1. For example, in a 32-bit Intel-compatible Linux host, the default is usually at /usr/ lib/ganglia. Monitoring with Ganglia


  2. The argument -reducer "/usr/bin/perl -w wordCount.pl" causes the Perl program wordCount.pl to be used to perform the reduce. Pro Hadoop


  3. Be sure to read through the Debian-specific system documentation that can be found under /usr/share/couchdb. CouchDB The Definitive Guide


  4. Suppose we have both OpenJDK and Oracle Java installed in /usr/openjdk and / usr/jdk respectively. Hadoop Operations and Cluster Management Cookbook


  5. One variant uses a shared /lib for moderate sharing, while the other shares /usr/include, which is very heavily read. Ceph A Scalable, High-Performance Distributed File System


  6. In this example, we connected to the MySQL server using hdp_usr. Hadoop RealWorld Solutions Cookbook


  7. This will create a new directory called otp_src_R13B01 under /usr/local/erlang. Beginning CouchDB


  8. Next you’ll need to untar the tarball that the build created under /usr/local: Hadoop in Practice


  9. Python script to filter out poor-quality weather records #!/usr/bin/env python. Hadoop:The Definitive Guide 3rd Edition


  10. This directory can be made to be read only when configured correctly and usually lives in /usr/local, /opt, or /usr when Hadoop is installed via packages. Hadoop Operations



467. Matched text for the word postgresql:


  1. He has more than 7 years of experience in software and middleware (Apache, Tomcat, PostgreSQL, Hadoop eco system) engineering. HBase Administration Cookbook


  2. The command will automatically install the PostgreSQL database, which is required by Ambari. Hadoop Operations and Cluster Management Cookbook


  3. Using HDFS in a Greenplum external table Greenplum is a parallel database that distributes data and queries to one or more PostgreSQL instances. Hadoop RealWorld Solutions Cookbook


  4. You could either use one of the free databases, such as MySQL or PostgreSQL, or a pure key/value store like Berkeley DB. HBase in Action


  5. He has more than seven years'' experience in software and middleware (Apache, Tomcat, PostgreSQL, and Hadoop eco system) engineering. Hadoop MapReduce Cookbook


  6. What are the disadvantages of fast connectors? First, only MySQL and PostgreSQL are currently supported. Hadoop in Practice


  7. For a majority of small- to medium-volume applications, there is no substitute for the ease of use, flexibility, maturity, and powerful feature set of available open source RDBMS solutions such as MySQL and PostgreSQL. Hadoop:The Definitive Guide 3rd Edition


  8. Today, Sqoop includes native connectors (called direct support) for MySQL and PostgreSQL. Hadoop Operations


  9. The first iteration of the data warehouse was built using an open source system with a Java SQL optimizer and a set of underlying PostGreSQL databases. Programming Hive


  10. Hence, we use PostgreSQL as the database layer and Hadoop as the communication layer, Hive as the translation layer, and all code we add we release as open source [2] HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



468. Matched text for the word mutability:


  1. Programming with mutability in pure Java is like working with the mother-in-law who’s just waiting for you to fail. Programming Concurrency on the JVM


  2. Another aspect of functional programming is that it doesn’t have side effects or mutability. Scala in Action


  3. The old java.util.Date class and all other classes used to model date and time in Java before Java 8 have many inconsistencies and design flaws, including their mutability and some poorly chosen offsets, defaults, and naming. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. For sets and maps, Scala models mutability in the class hierarchy. Programming in Scala 2nd Edition


  5. These patterns focus on writing code that avoids mutability and favors a declarative style, which helps us write simpler, more maintainable code. Functional Programming Patterns in Scala and Clojure


  6. If you remove mutability, then the problems essentially go away. Programming Scala


  7. Given that mutability leads to unexpected defects, minimizing mutability in code minimizes mutability-related defects. Beginning Scala


  8. If a class cannot be made immutable, limit its mutability as much as possible. Effective Java 2nd Edition


  9. Its mutability makes it suitable for processing text in place, which is not possible with the immutable String class. Java in a Nutshell 5th Edition


  10. Criteria Objects and Mutability Typical usage of the criteria API will result in many different objects being created. Pro JPA2



469. Matched text for the word thymeleaf:


  1. Thymeleaf: It is a modern, tempting framework that provides an excellent alternative to JSPs. Spring Security 3.1


  2. Chapter 6 will explore various view technologies that can be used with Spring, including JSP, Apache Tiles, and Thymeleaf. Spring in Action 4th Edition



470. Matched text for the word gfs:


  1. The Google File System (GFS [14]) is one such distributed storage layer widely used in the company. Dremel - Interactive Analysis of WebScale Datasets


  2. A GFS cluster consists of a single master and multiple. GFS - The Google File System


  3. We conserve network bandwidth by taking advantage of the fact that the input data (managed by GFS [8]) is stored on the local disks of the machines that make up our cluster. MapReduce - Simplied Data Processing on Large Clusters


  4. The Google File System (GFS)[9] is another distributed file system built for hosting the state of Google’s internal applications. Cassandra A Decentralized Structured Storage System


  5. The data sets are often stored in GFS, the Google File System [9] Interpreting the Data Parallel Analysis with Sawzall


  6. Bigtable in turn uses both Chubby[7] (a distributed lock system) and GFS. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. Another storage data structure that is often required at scale is a FIFO queue, like that provided by GFS [7] RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  8. In addition, both GFS and Bigtable use Chubby as a well-known and available location to store a small amount of meta-data; in effect they use Chubby as the root of their distributed data structures. Chubby - The lock service for loosely-coupled distributed systems


  9. Bigtable runs on top of the Google File System (GFS) Cassandra The Definitive Guide


  10. Bigtable uses the distributed Google File System (GFS) [17] to store log and data files. Bigtable - A Distributed Storage System for Structured Data



471. Matched text for the word autocomplete:


  1. You can update or remove tag cloud, autocomplete, country code, and calendar widgets from reuters.js. Web Crawling and Data Mining with Apache Nutch


  2. Additionally, Suggestor with autocomplete feature starts providing users options right at the time when user is typing search query enhancing the overall experience. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  3. Figure 4-9 shows how gweb search autocomplete allows you to find metrics across your entire deployment. Monitoring with Ganglia


  4. We’ll begin by building autocomplete functions to quickly find users in short and long lists of items. Redis in Action


  5. But the book doesn''t only pay attention to low-level aspects of ElasticSearch; it includes user search experience improvements tips, such as dealing with spelling mistakes, highly effective autocomplete feature, and a tutorial on how you can deal with query related improvements. Mastering ElasticSearch


  6. The name-to-UID rows enable support for autocomplete of tag names. HBase in Action


  7. You can customize the completion tokens by creating a file named autocomplete and placing it on Pig’s classpath (such as in the conf directory in Pig’s install directory) or in the directory you invoked Grunt from. Hadoop:The Definitive Guide 3rd Edition


  8. We can do this by using the autocomplete analyzer at index time and the standard analyzer at search time. Elasticsearch The Definitive Guide


  9. If you start typing and hit the Tab key, the CLI will autocomplete possible keywords and function names. Programming Hive


  10. Appendix, Real Life Situations, describes how to handle real-life situations such as implementing different autocomplete functionalities, using near real-time search, or improving query relevance. Apache Solr 4 Cookbook



472、单词 paxos 的匹配文本:


  1. A novel extension to Paxos, detailed in Section 4.4.1, allows local reads at any up-to-date replica. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. CouchDB differs from others by accepting eventual consistency, as opposed to putting absolute consistency ahead of raw availability, like RDBMS or Paxos. CouchDB The Definitive Guide


  3. At the highest level of abstraction, it is a database that shards data across many sets of Paxos [21] state machines in datacenters spread all over the world. Spanner - Google’s Globally-Distributed Database


  4. I suggest that you should look at Paxos and Cassandra's use of that algorithm to affect conditional updates at the following URLs: Storm Blueprints Patterns for Distributed Real-time Computation


  5. Improved primitives: Distributed cross-group transactions Synchronous replication groups (Paxos) Automatic failover of client requests. Storage Architecture and Challenges


  6. The cluster, which is based in part on the Paxos part-time parliament algorithm [14], is designed to favor consistency and durability over availability and update latency. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  7. One might argue that we should have built a library embodying Paxos, rather than a library that accesses a centralized lock service, even a highly reliable one. Chubby - The lock service for loosely-coupled distributed systems


  8. Traditional protocols to implement replicated state machines, like Paxos [2], do not enable such a feature directly, however. Zab High-performance broadcast for primary-backup systems



473、单词 mutex 的匹配文本:


  1. Because this clause produces mutual exclusion, a common name for such a mechanism is mutex. Thinking in Java 4th Edition


  2. A mutex is a data structure that guarantees mutually exclusive access to some resource, preventing concurrent access from causing problems.) High Performance MySQL 3rd Edition


  3. The volatile keyword accomplishes this: it forces reads and writes to the variable to go all the way out to main memory (where they’re visible to all threads), but there’s no mutex. Spring in Practice


  4. A mutex lock is also known as a mutually exclusive lock. Java Threads 3rd Edition


  5. Notice in the preceding code that we used a mutex to sequence the updates to scores, thereby eliminating the race condition mentioned earlier. Storm Blueprints Patterns for Distributed Real-time Computation


  6. The Monitor class that functions as a Mutex, ensuring serial access to the defined areas in our code, much like the synchronized keyword but with much easier semantics and some useful additional features. Getting Started with Google Guava


  7. This allowed for operating system tools such as Solaris mpstat to easily monitor a Java application for lock contention by observing the “smtx” (spin on mutex) column along with observing system or kernel CPU utilization. Java Performance


  8. A Mutex (short for mutual exclusion lock) class can be defined as (omitting implementation code): Concurrent Programming in Java Design Principles and Pattern


  9. Most global variables have an associated mutex that other threads will lock before accessing it. Understanding MySQL Internals
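
A minimal Java sketch of the mutex idea in the quotes above, using java.util.concurrent.locks.ReentrantLock as an explicit mutual-exclusion lock; the counter class is hypothetical and only for illustration:

    import java.util.concurrent.locks.ReentrantLock;

    // Hypothetical example: a counter protected by an explicit mutex.
    public class MutexCounter {
        private final ReentrantLock mutex = new ReentrantLock();
        private long count;

        public void increment() {
            mutex.lock();           // only one thread may hold the lock at a time
            try {
                count++;            // critical section
            } finally {
                mutex.unlock();     // always release, even if the body throws
            }
        }

        public long get() {
            mutex.lock();
            try {
                return count;
            } finally {
                mutex.unlock();
            }
        }
    }

The synchronized keyword gives the same mutual exclusion implicitly; ReentrantLock is simply the explicit form of the mutex described above.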



474、单词 jni 的匹配文本:


  1. The methods in turn use JNI to actually interact with an embedded Java Virtual Machine (JVM) which actually interacts with HDFS. Pro Hadoop


  2. The only way to write a true ping application in Java would be to use the Java Native Interface (JNI) to access native code; such a discussion is beyond the scope of this book. Java Network Programming and Distributed Computing


  3. For example, a Java version of MapReduce is implemented as a JNI veneer on top of the C++ version of MapReduce. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  4. Using JNI JNI is a programming framework that allows Java code running in a JVM to call and to be called by native applications. Professional Hadoop Solutions


  5. You also have to include the JNI header files of your Java installation. Hadoop MapReduce Cookbook


  6. It works using the Java Native Interface (JNI) to call a Java filesystem client. Hadoop:The Definitive Guide 3rd Edition


  7. Refer to the source code of the JNI library to find out what is available to you. HBase:The Definitive Guide


  8. The usual Java mechanism for accessing non-native libraries is JNI [15], but it is regarded as slow and cumbersome. Chubby - The lock service for loosely-coupled distributed systems


  9. The Java platform has an API for interoperating with native C code called the Java Native Interface (JNI) Core Java Volume II Advanced Features 9th Edition


  10. In Java 1.1, the -jni option specified that javah should generate new-style files. Java in a Nutshell 5th Edition
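
A hedged sketch of the JNI pattern referred to above: a Java class declares a native method and loads a native library when the class is initialized. The library name "hello" and the method are hypothetical, and the matching C/C++ implementation must be generated and compiled separately before this can actually run.

    // Hypothetical JNI example; requires a matching native library (libhello.so / hello.dll).
    public class HelloJni {
        // Implemented in C/C++ and bound through JNI at load time.
        private native String greet(String name);

        static {
            System.loadLibrary("hello"); // resolved against java.library.path
        }

        public static void main(String[] args) {
            System.out.println(new HelloJni().greet("JNI"));
        }
    }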



475、单词 topologies 的匹配文本:


  1. Further, it is possible to mix and match the deaf/mute, and default topologies to create a system architecture that better suits your environment. Monitoring with Ganglia


  2. Preventing infinite loops is important for some of the more useful replication topologies, such as master-master replication.5 High Performance MySQL 3rd Edition


  3. As shown in Figure 9-19, Storm applications (or topologies) are defined in the form of input streams (called spouts) and a connected graph of processing nodes (called bolts) Professional Hadoop Solutions


  4. Next in the Storm cluster, jobs are typically referred to as topologies; the only difference is that these topologies run forever. Apache Kafka


  5. A table lookup may make more sense for more complex cluster topologies. Hadoop in Action


  6. Network Usage in Hadoop: A Review Hadoop was developed to exist and thrive in real-world network topologies. Hadoop Operations


  7. This is known as a network of brokers and can support many different topologies. ActiveMQ in Action


  8. In this chapter, we will cover the following replication topologies that MongoDB supports: The Definitive Guide to MongoDB


  9. Approaches that assume particular architectures, topologies, processor capabilities, or other fixed environmental constraints are ill suited to commonly available JVM implementations. Concurrent Programming in Java Design Principles and Pattern


  10. Replica sets provide other enhancements too, such as easier recovery and more sophistical deployment topologies. MongoDB in Action



476、单词 latencies 的匹配文本:


  1. With replicas spread over a wide area, communication latencies limit overall throughput. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. Cluster allocation latency was so high, users would often share long-awaited clusters with colleagues, holding on to nodes for longer than anticipated, raising latencies still further. Apache Hadoop YARN Yet Another Resource Negotiator


  3. Our implementation of Paxos is pipelined, so as to improve Spanner’s throughput in the presence of WAN latencies; but writes are applied by Paxos in order (a fact on which we will depend in Section 4) Spanner - Google’s Globally-Distributed Database


  4. Using nodetool cfhistograms to see the distribution of query latencies. Cassandra High Performance Cookbook


  5. Larger clusters have imperfect load distributions, resulting in lower average per-MDS throughput (but, of course, much higher total throughput) and slightly higher latencies. Ceph A Scalable, High-Performance Distributed File System


  6. Since users are served from data centers that are geographically distributed, being able to replicate data across data centers was key to keep search latencies down. Cassandra A Decentralized Structured Storage System


  7. So, in this example, we can see that there’s a roughly log normal distribution of latencies for the selected execution pattern. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  8. After launching to employees, we noticed that our put and sync latencies were very high. Apache Hadoop Goes Realtime at Facebook


  9. This allows us to track down all clusters, discover how big they are, see which versions of our software they are running, how much traffic they are receiving, and whether or not there are any problems such as unexpectedly large latencies. Bigtable - A Distributed Storage System for Structured Data


  10. Here and in all further experiments we do not show the corresponding latencies, since they do not change significantly (the main goal is to reduce the memory footprint) PowerDrill - Processing a Trillion Cells per Mouse Click



477、单词 xxx 的匹配文本:


  1. This registration is performed by calling an addXXXListener( ) method in the event-firing component, in which "XXX" represents the type of event listened for. Thinking in Java 4th Edition


  2. Note that lack of _source on the list of returned fields causes the sourceXXX() methods not to be working (as discussed in the next point) Mastering ElasticSearch


  3. The super column version of these commands uses an extra ['xxx'] to represent the extra level of the map called the sub-column. Cassandra High Performance Cookbook


  4. A signature file (XXX.SF, where XXX is the name of the entity that signed the archive) that contains signature information. Java Security 2nd Edition


  5. The resource method parameters, fields, or bean properties are bound by way of xxxParam annotations added during object creation time. Java EE 7 Essentials


  6. Xxx getXxx(int columnNumber) • Xxx getXxx(String columnLabel) (Xxx is a type such as int, double, String, Date, etc.) Core Java Volume II Advanced Features 9th Edition


  7. To single out one record, look it up with "xxx", where xxx is one of the of the records displayed above. Java Network Programming 4th Edition


  8. If you printed out the class name of the Connection object or other objects that you are using, you would probably see something like XXX Connection , XXX Statement , XXX ConnectionImpl , or XXX StatementImpl , where XXX is the name of the database you are using (Oracle, for example) Java Performance Tuning 2nd Edition


  9. Lucene has a sandbox component, spellchecker, covered in section 8.XXX, that you can use. Lucene in Action 2nd Edition


  10. In addition, you can call one of the ResultSet object’s updateXXX() methods to update any value within the ResultSet. Java 7 Recipes A Problem-Solution Approach



478、单词 log4j 的匹配文本:


  1. The reason for this is the log4j configuration file would be included on the classpath. Apache Flume Distributed Log Collection for Hadoop


  2. A dependency is generally an external, reusable library in the form of a JAR file (for example, Log4J for logging support) Gradle in Action


  3. The following table shows all the logging levels provided by Log4j: Hadoop Operations and Cluster Management Cookbook


  4. Forwarding Log4j logs to a central sever The faster a problem can be diagnosed and corrected, the better. Cassandra High Performance Cookbook


  5. Within each of the directories in the previous code there are three files, corresponding to standard out, standard error, and the system log (output from both the infrastructure task code, as well as any of your own log4j logging): Hadoop in Practice


  6. Flume supports a large variety of sources; some of the more commonly used ones include tail (which pipes data from a local file being written to into Flume, just like Unix tail), syslog, and Apache log4j (allowing Java applications to write events to files in HDFS via Flume) Hadoop:The Definitive Guide 3rd Edition


  7. There are three basic functions: the storage-conf.xml file allows you to create your data store by configuring your keyspace and column families; there are files related to setting up authentication; and finally, the log4j properties let you change the logging levels to suit your needs. Cassandra The Definitive Guide


  8. More precisely, this is the default behavior as specified within the log4j property file within our conf directory. Hadoop Beginners Guide


  9. It supports native connectivity and support for writing directly to HDFS, and simplifies reliable, streaming data delivery from a variety of sources including RPC services, log4j appenders, syslog, and even the output from OS commands. Hadoop Operations


  10. By adding the custom Log4J RepositorySelector to the application, you’ll avoid this error. ActiveMQ in Action
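
For reference, a minimal Log4j 1.x usage sketch in the spirit of the quotes above; normally the levels and appenders would come from a log4j.properties file on the classpath, but BasicConfigurator gives a quick console setup for experiments:

    import org.apache.log4j.BasicConfigurator;
    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class Log4jDemo {
        private static final Logger LOG = Logger.getLogger(Log4jDemo.class);

        public static void main(String[] args) {
            BasicConfigurator.configure();   // console appender with a default pattern
            LOG.setLevel(Level.INFO);        // levels: TRACE < DEBUG < INFO < WARN < ERROR < FATAL
            LOG.info("application started");
            LOG.debug("not printed at INFO level");
        }
    }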



479、单词 lzop 的匹配文本:


  1. The following example shows how to use the --hiveoverwrite in conjunction with enabling LZOP compression. Hadoop in Practice


  2. The LzopCodec is compatible with the lzop tool, which is essentially the LZO format with extra headers, and is the one you normally want. Hadoop:The Definitive Guide 3rd Edition



480、单词 osds 的匹配文本:


  1. Ceph maximizes the separation between data and metadata management by replacing allocation tables with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clusters of unreliable object storage devices (OSDs) Ceph A Scalable, High-Performance Distributed File System


  2. Emerging clustered storage architectures constructed from storage bricks or object storage devices (OSDs) seek to distribute low-level block allocation. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.



481、单词 pmd 的匹配文本:


  1. In the Java space, you can choose from a wide range of open source and commercial solutions, such as Checkstyle, PMD, Cobertura, FindBugs, and Sonar. Gradle in Action


  2. You will also find recipes on static code review through PMD and FindBugs. Jenkins Continuous Integration Cookbook


  3. There are tools already available for Java and Groovy projects to analyze and check source code, such as Checkstyle, JDepend, PMD, FindBugs, CodeNarc, and Sonar. Gradle Effective Implementation Guide


  4. Modern IDEs have great support for many code quality tools—Checkstyle, PMD, and FindBugs all have plugins for Eclipse, NetBeans, and IntelliJ, which provide rapid feedback for developers on code quality issues. Jenkins The Definitive Guide


  5. Although JSLint does a better/more thorough job, it is relatively easy to add new rule sets to PMD for more custom checking in your environment. Testable JavaScript


  6. Head on to PMD's home page for a more comprehensive list. Sonar Code Quality Testing Essentials


  7. These tools, such as Checkstyle, PMD, or FindBugs have a set of predefined rules of good practices and parse the source code looking for violations of those rules. Java 7 Concurrency Cookbook


  8. In the dependencies closure, we once address the myPmd grouping and indicate that it should download the PMD JAR as needed. Building and Testing with Gradle



482、单词 multipart 的匹配文本:


  1. JavaMail supports MIME attachments, and makes it easy to compose and transmit a multipart message with a file attachment. Java Network Programming and Distributed Computing


  2. In addition to checking the option to make a post request multipart, you will need to specify the absolute path of the file, in cases where the file you are uploading is not within JMeter''s bin directory, or the relative path in cases where the file resides within JMeter''s bin directory. Performance Testing With JMeter 2.9


  3. Instead of integration with Apache Commons File Upload, we will discuss how we can use Spring MVC with the Servlet 3.0 container’s built-in multipart support for file upload. Pro Spring 3


  4. Next, the incoming request is checked for whether it is a multipart HTTP request (this is used when doing file uploads) Pro Spring MVC


  5. There are two flavors of a put operation — a single upload and a multipart upload. Professional Hadoop Solutions


  6. The getParts method provides a Collection of parts for this multipart request. Java EE 7 Essentials


  7. Link walking always returns a multipart/mixed, since a single key can contain any number of links, meaning any number of objects returned. A Little Riak Book


  8. An XOP package is created by placing a serialization of the XML infoset inside of an extensible packaging format (such as MIME Multipart/Related) Java Performance


  9. Solution Creating a message that contains different parts (a multipart message) is what allows you to send attachments such as files and images. Java 7 Recipes A Problem-Solution Approach


  10. In this chapter, you’ll learn how to customize Spring MVC configuration, handle multipart file uploads, deal with exceptions that may occur in a controller, and pass data between requests with flash attributes. Spring in Action 4th Edition
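
A hedged sketch of the Servlet 3.0 multipart support mentioned above (request.getParts()); it must be deployed to a Servlet 3.0+ container, and the servlet mapping, size limit, and target directory are only examples:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.MultipartConfig;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.Part;

    // Hypothetical upload servlet: accepts multipart/form-data POSTs.
    @WebServlet("/upload")
    @MultipartConfig(maxFileSize = 5 * 1024 * 1024)
    public class UploadServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            for (Part part : req.getParts()) {          // one Part per form field or file
                part.write("/tmp/" + part.getName());   // example target; adjust to your config
            }
            resp.getWriter().println("uploaded");
        }
    }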



483、单词 throwable 的匹配文本:


  1. Here, we print a stack trace by using the printStackTrace method of the Throwable class. Core Java Volume I Fundamentals 9th Edition


  2. Throwable In case the servers return an exception for the operation it is returned to the client as-is. HBase:The Definitive Guide


  3. In this code, the onComplete method is invoked when the request processing is finished, after a response is processed and is sent back to the client, or when an unmapped throwable has been propagated to the hosting I/O container. Java EE 7 Essentials


  4. This exception chain is in addition to the “cause” chain of Throwable objects that every exception has. Core Java Volume II Advanced Features 9th Edition


  5. In the event of a failure, we are getting the error message from the Throwable object. Getting Started with Google Guava


  6. There is some confusion among programmers as to when it is appropriate to use each kind of throwable. Effective Java 2nd Edition


  7. The first is the message to be associated with the AssertionError and the second is a Throwable clause. Java 7 New Features Cookbook


  8. The Throwable class and all its subclasses include a String field that stores a human-readable error message that describes the exceptional condition. Java in a Nutshell 5th Edition


  9. All exceptions inherit from Throwable, as shown in Figure 6-1 Java 7 Recipes A Problem-Solution Approach


  10. NOTE: The classes Throwable, Exception, RuntimeException, and Error have constructors with an option for disabling suppressed exceptions and for disabling stack traces. Java SE 8 for the Really Impatient
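
A small Java sketch tying the quotes together: every exception and error extends Throwable, which carries a message, an optional cause chain, and a stack trace.

    public class ThrowableDemo {
        public static void main(String[] args) {
            try {
                try {
                    throw new IllegalStateException("low-level failure");
                } catch (IllegalStateException e) {
                    // wrap the original as the "cause" of a higher-level exception
                    throw new RuntimeException("operation failed", e);
                }
            } catch (Throwable t) {                      // catches Exception and Error alike
                System.err.println("message: " + t.getMessage());
                System.err.println("cause:   " + t.getCause());
                t.printStackTrace();                     // inherited from Throwable
            }
        }
    }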



484、单词 inline 的匹配文本:


  1. If you define a method inside a class, then it is, automatically, an inline method. Core Java Volume I Fundamentals 9th Edition


  2. In this case, we’re using them to import a template and some metadata about the blog (as JSON data), as well as to include link and template rendering functions as inline code. CouchDB The Definitive Guide


  3. The special symbol @ specifies that the configuration is pointing to a file rather than inline. Hadoop Operations and Cluster Management Cookbook


  4. Script location must be set to File instead of Inline. Gradle Effective Implementation Guide


  5. This recipe will outline a shorthand technique for inline table creation when the query is executed. Hadoop RealWorld Solutions Cookbook


  6. The shell provides you with tab-completion of your commands and inline access to command documentation: HBase in Action


  7. For short scripts, you can use the -e option to specify the commands inline, in which case the final semicolon is not required: Hadoop:The Definitive Guide 3rd Edition


  8. The C++ version of the JNIEnv class has inline member functions that take care of the function pointer lookup for you. Core Java Volume II Advanced Features 9th Edition


  9. Views are simply named SELECT statements which are expanded inline during compilation. Tenzing - A SQL Implementation On The MapReduce Framework


  10. The result file is large and printing the complete results inline will kill too many trees. Programming Hive



485、单词 dsls 的匹配文本:


  1. However, Java’s syntax isn’t a natural fit for producing DSLs. Gradle in Action


  2. How can you simplify MapReduce development using domain-specific languages (DSLs)? Professional Hadoop Solutions


  3. And second, they support writing domainspecific languages (DSLs) within the language. Programming in Scala 2nd Edition


  4. The Gradle APIs are rich, the possibilities for DSLs matching your domain are abundant, and the path towards finally having a build system that conforms to your product is clear. Gradle Beyond The Basics


  5. On the other hand, we’ve got internal DSLs, also known as embedded languages. Functional Programming Patterns in Scala and Clojure


  6. Domain-Specific Languages Domain-Specific Languages, or DSLs, provide a convenient syntactical means for expressing goals in a given problem domain. Programming Scala


  7. First off, developers can create domain-specific languages (DSLs) using their favorite language, such as Groovy, JRuby, Jython, Scala, and others (see the project site for examples) Hadoop:The Definitive Guide 3rd Edition


  8. Cascading to Hive and Pig offers a Java API, as opposed to the domain-specific languages (DSLs) provided by the others. HBase:The Definitive Guide


  9. Although we refer to the Query DSL, in reality there are two DSLs: the Query. Elasticsearch The Definitive Guide


  10. Choosing implicit conversions and designing domain-specific languages (DSLs) takes time, thought, and deliberation. Beginning Scala



486、单词 ctrl 的匹配文本:


  1. The description starts with zero or more modifiers shift control ctrl meta alt altGraph and ends with either the string typed, followed by a one-character string (for example, "typed a") or an optional event specifier (pressed or released, with pressed being the default), followed by a key code. Core Java Volume I Fundamentals 9th Edition


  2. If you started in a console, then hitting Ctrl-C and rerunning the same command you used to start it is the simplest way. CouchDB The Definitive Guide


  3. Windows: Open the task manager with the keyboard shortcut Ctrl+Shift+Esc and click the Processes tab. Gradle in Action


  4. To stop the Jetty web container, we press Ctrl + C at the command line, to return to our prompt. Gradle Effective Implementation Guide


  5. Not to worry, just open another Terminal tab by hitting Shift+Ctrl+T. Beginning CouchDB


  6. This lets you see what it's doing, and you can easily stop it with CTRL+C. Elasticsearch in Action MEAP V07 Chapter 1


  7. To run the program, we specify the name of the file to write output to (pairs.avro) and send input pairs over standard in, marking the end of file by typing Ctrl-D: Hadoop:The Definitive Guide 3rd Edition


  8. Stopping the server requires a Ctrl-C to abort the start script. HBase:The Definitive Guide


  9. We type a word on each line and hit Enter then kill the session with a Ctrl + C command. Hadoop Beginners Guide


  10. You can press CTRL-C in the second terminal to shut down the Consumer at this point. ActiveMQ in Action



487、单词 rdds 的匹配文本:


  1. Abstract We present Resilient Distributed Datasets (RDDs), a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. Resilient Distributed Datasets - A Fault-Tolerant Abstraction for In-Memory Cluster Computing



488、单词 atomicity 的匹配文本:


  1. To prevent conflicts from causing starvation, Omega schedulers typically choose to use incremental transactions, which accept all but the conflicting changes (i.e., the transaction provides atomicity but not independence) Omega - flexible, scalable schedulers for large compute clusters


  2. This 5-tuple key provides atomicity, locality, uniqueness, access control, and versioning. Professional Hadoop Solutions


  3. Future works involves adding compression, ability to support atomicity across keys and secondary index support. Cassandra A Decentralized Structured Storage System


  4. The CouchDB layout and committing system adheres to the ACID properties (atomicity, consistency, isolation, and durability), which guarantee the reliable execution of database transactions. Beginning CouchDB


  5. For those who don’t know (or don’t remember), ACID is an acronym standing for atomicity, consistency, isolation, and durability. HBase in Action


  6. Designing the schema in a way to completely avoid explicit locking, combined with row-level atomicity, gives you the ability to scale your system without any notable effect on read or write performance. HBase:The Definitive Guide


  7. It also provides row-level atomicity guarantees, but no native cross-row transactional support. Apache Hadoop Goes Realtime at Facebook


  8. Record-Level Atomicity on Writes Cassandra automatically gives you record-level atomicity on every write operation. Cassandra The Definitive Guide


  9. The performance gains and atomicity of the service co-located with its broker can be an attractive architectural change, too. ActiveMQ in Action


  10. In other words, synchronized is not equivalent to atomic, but synchronization can be used to achieve atomicity. Concurrent Programming in Java Design Principles and Pattern
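
The last quote's point, that synchronization and atomicity are related but not identical, can be seen in a few lines of Java: a plain counter++ from several threads loses updates, while an AtomicInteger (or a synchronized block) does not. The thread and iteration counts below are arbitrary.

    import java.util.concurrent.atomic.AtomicInteger;

    public class AtomicityDemo {
        static int plain = 0;                               // unsynchronized, non-atomic
        static final AtomicInteger atomic = new AtomicInteger();

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    plain++;                                // read-modify-write: can interleave
                    atomic.incrementAndGet();               // single atomic operation
                }
            };
            Thread t1 = new Thread(work), t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println("plain  = " + plain);        // usually less than 200000
            System.out.println("atomic = " + atomic.get()); // always 200000
        }
    }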



489、单词 instanceof 的匹配文本:


  1. Use instanceof to check before casting from a superclass to a subclass. Core Java Volume I Fundamentals 9th Edition


  2. Even if an HttpURLConnection object is expected, it is good programming practice to test the class type using the instanceof keyword. Java Network Programming and Distributed Computing


  3. Adding Scala-style pattern matching in full generality to Java seems quite a big job, but following the recent generalization to switch to allow Strings, you can imagine a more-modest syntax extension, which allows switch to operate on objects, using the instanceof syntax. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. As a result, the typecast will generate a warning, and a corresponding instanceof test is illegal. Data Structures and Algorithm Analysis in Java 3rd Edition


  5. You have two options available: you can either determine the type (using instanceof) in your exec method, or you can specify in your UDF type-specific alternative implementations, which is neater and removes ugly type-switching code. Hadoop in Practice


  6. You can use the instanceof operator to find out whether a particular remote object implements an interface. Core Java Volume II Advanced Features 9th Edition


  7. If o is a normal Point, do a color-blind comparison if (!(o instanceof ColorPoint)) Effective Java 2nd Edition


  8. The instanceof operator requires an object or array value as its left operand and the name of a reference type as its right operand. Java in a Nutshell 5th Edition


  9. In the application layer where Java programs reside, you simply don’t need to know this (and even if you do need to know, it’s quicker to check the size of the byte array returned by getAddress() than to use instanceof to test which subclass you have) Java Network Programming 4th Edition


  10. Or you could focus on particular classes within a more general monitoring class by filtering interesting hierarchies using instanceof. Java Performance Tuning 2nd Edition
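
A compact illustration of the advice in the first quote, checking with instanceof before a downcast; the classes here are hypothetical.

    public class InstanceofDemo {
        static class Shape { }
        static class Circle extends Shape {
            double radius() { return 1.0; }
        }

        static double radiusOrZero(Object o) {
            if (o instanceof Circle) {          // safe: false for null and for a plain Shape
                Circle c = (Circle) o;          // downcast is now guaranteed to succeed
                return c.radius();
            }
            return 0.0;
        }

        public static void main(String[] args) {
            System.out.println(radiusOrZero(new Circle())); // 1.0
            System.out.println(radiusOrZero(new Shape()));  // 0.0
        }
    }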



490、单词 nodetool 的匹配文本:


  1. The nodetool program communicates with the JMX port to confirm that the server is alive. Cassandra High Performance Cookbook


  2. It also has scripts to run the nodetool, which is a utility for inspecting a cluster to determine whether it is properly configured, and to perform a variety of maintenance operations. Cassandra The Definitive Guide



491、单词 covariant 的匹配文本:


  1. Nowadays, the covariant return type feature lets you specify the correct return. Core Java Volume I Fundamentals 9th Edition


  2. Covariant return types Java SE5 adds covariant return types, which means that an overridden method in a derived class can return a type derived from the type returned by the base-class method: Thinking in Java 4th Edition


  3. The Maybe class is declared as a covariant on type A. Scala in Action


  4. Or, since it just has one type parameter, you could say simply that Queues are covariant. Programming in Scala 2nd Edition


  5. In Java, as we mentioned in Section 1.4.4, arrays are covariant. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. This rule allows for covariant return types - refining the return type of a method when overriding it. The Java Language Specification Java SE 7 Edition


  7. This is especially true for library writers, who will want to understand when to use parameterized types versus abstract types, which type parameters should be covariant, contravariant, or invariant under subtyping, and so forth. Programming Scala


  8. We’ll discuss why this happened and the very complex topic of invariant, covariant, and contravariant types later in this chapter. Beginning Scala


  9. Covariant returns to allow a subclass to override a superclass method and narrow the return type of the method. Java in a Nutshell 5th Edition


  10. If List was immutable, as it is in a functional programming language, then the problem would disappear, and one could have a covariant list. Java SE 8 for the Really Impatient
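
A short Java example of the covariant return types described above: the overriding method narrows the return type declared by the superclass, so callers need no cast. The classes are made up.

    public class CovariantDemo {
        static class Animal {
            Animal reproduce() { return new Animal(); }
        }
        static class Cat extends Animal {
            @Override
            Cat reproduce() { return new Cat(); }   // covariant return: Cat is-an Animal
        }

        public static void main(String[] args) {
            Cat kitten = new Cat().reproduce();     // no cast needed thanks to covariance
            System.out.println(kitten.getClass().getSimpleName());
        }
    }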



492、单词 ntp 的匹配文本:


  1. We will start two EC2 instances; one is a DNS/NTP server, and the other one is the client: HBase Administration Cookbook


  2. Enable the NTP server with the following command: sudo service ntpd start. Hadoop Operations and Cluster Management Cookbook


  3. Network Time Protocol (NTP) is a distributed hierarchical system used to keep system clocks in sync. Cassandra High Performance Cookbook


  4. As we described earlier, this can be achieved using NTP. HBase in Action


  5. We synchronized the two masters’ clocks using NTP and measured the mean time. Mesos A Platform for Fine-Grained Resource Sharing in the Data Center


  6. Run NTP on your cluster, or an equivalent application, to synchronize the time on all servers. HBase:The Definitive Guide


  7. One good strategy here is the use of a Network Time Protocol (NTP) server. Cassandra The Definitive Guide


  8. This contrasts with the various NTP and Time Server protocols, which do not return data that can be easily read by humans. Java Network Programming 4th Edition


  9. Clock synchronization can be achieved through the use of Network Time Protocol (NTP) or using the rdate utility available on Oracle Solaris and Linux systems. Java Performance


  10. You need to ensure that each of your servers uses NTP (Network Time Protocol) to keep their clocks synchronized. MongoDB in Action



493、单词 installer 的匹配文本:


  1. The recommended solution is the current beta project for a CouchDB installer. Getting Started with CouchDB


  2. The Cloudera folks have Amazon machine images (AMIs), Debian and RPM installer files, and an online configuration tool to generate configuration files. Pro Hadoop


  3. CouchDB intends to provide an official Windows installer at some point in the future, so this may change. CouchDB The Definitive Guide


  4. Download the distribution for your OS (at the time of writing, this is version 7.3) and run the installer. Gradle in Action


  5. We simply run the installer software to install Jenkins on our computer. Gradle Effective Implementation Guide


  6. This recipe also requires Python and the Python installer pip. Cassandra High Performance Cookbook


  7. To build the file, download the Jython java installer, run the installer, and select Standalone from the installation menu. Hadoop RealWorld Solutions Cookbook


  8. The Xcode Tools Installer will now load, and you can click the Continue button to begin the install. Beginning CouchDB


  9. RedHat uses packages called RPMs for installation, and Yum as a package installer that can fetch RPMs from remote Yum repositories. Hadoop in Practice


  10. Crucially, they also provide hooks to run scripts at the end of the process, which are invaluable for doing the final system tweaks and customization that are not covered by the standard installer. Hadoop:The Definitive Guide 3rd Edition



494、单词 packtpub 的匹配文本:


  1. Any existing errata can be viewed by selecting your title from http://www.packtpub. Apache Flume Distributed Log Collection for Hadoop


  2. If you purchased this book elsewhere, you can visit http://www.packtpub. HBase Administration Cookbook


  3. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Hadoop Operations and Cluster Management Cookbook


  4. If you find any errata, please report them by visiting http://www.packtpub. Gradle Effective Implementation Guide


  5. Downloading the example code You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. Mastering ElasticSearch


  6. If you find any errata, please report them by visiting http://www.packtpub.com/support, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Cassandra High Performance Cookbook


  7. Downloading the example code for this book You can download the example code files for all the Packt books you have purchased from your account at http://www.packtpub.com. Hadoop RealWorld Solutions Cookbook


  8. Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support. Hadoop MapReduce Cookbook


  9. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you. Hadoop Beginners Guide


  10. Get in touch with us at service@ packtpub.com for more details. Java 7 Concurrency Cookbook



495、单词 facter 的匹配文本:


  1. I’ll cover the basics of using Puppet manifests for configuration management and techniques for executing and managing those configurations with MCollective and Facter. Managing Infrastructure with Puppet


  2. It shows you how to create Puppet recipes, extend Puppet, and use Facter to gather configuration data from your servers. Pro Puppet



496、单词 uuid 的匹配文本:


  1. For each GPU discovered on the system, the NVML modules expose the maximum and current clock speeds utilization information for the GPU memory and SM, temperature, fan speeds, power draw, ECC mode, used and total GPU memory, performance state, and identifiers such as the PCI bus ID, the GPU UUID, and the brand. Monitoring with Ganglia


  2. Values generated by UUID() have different characteristics from those generated by a cryptographic hash function such as SHA1(): the UUID values are unevenly distributed and are somewhat sequential. High Performance MySQL 3rd Edition


  3. Make sure to leave the document ID blank, and CouchDB will generate a UUID for you. CouchDB The Definitive Guide


  4. Similarly, if you’re sharding on something coarsely grained, like log level, add a second field to your shard key with a lot of granularity, such as an MD5 hash or UUID. MongoDB The Definitive Guide 2nd Edition


  5. As shown in Listing 9-1, this table will have a key that contains a concatenation of the user’s Universally Unique Identification (UUID) and a timestamp for a picture that shows when the picture was taken. Professional Hadoop Solutions


  6. This unique ID can be assigned by the user or application, or it can use a universally unique identifier (UUID)—a random number generated by CouchDB that greatly reduces the chance of duplicate IDs being used. Beginning CouchDB


  7. This is a UUID generated when HBase starts with an empty storage directory. HBase:The Definitive Guide


  8. This class is an immutable representation of 128-bit Universal Unique Identifier, or UUID, which serves as an identifier that is (with very high probability) globally unique. Java in a Nutshell 5th Edition


  9. The scheme Cassandra uses is a version one UUID, which means that it is generated based on conflating the computer’s MAC address and the number of 100-nanosecond intervals since the beginning of the Gregorian calendar. Cassandra The Definitive Guide


  10. Although the correlation ID in this case uses a random UUID, just about any ID generator can be used. ActiveMQ in Action
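
The java.util.UUID class from the Java in a Nutshell quote generates and parses 128-bit identifiers; a quick sketch follows. Note that randomUUID produces version 4 (random bits), not the version 1 MAC-plus-timestamp scheme the Cassandra quote describes.

    import java.nio.charset.StandardCharsets;
    import java.util.UUID;

    public class UuidDemo {
        public static void main(String[] args) {
            UUID random = UUID.randomUUID();                         // version 4: random bits
            UUID named  = UUID.nameUUIDFromBytes(                    // version 3: MD5 of a name
                    "example-key".getBytes(StandardCharsets.UTF_8));

            System.out.println(random + " (version " + random.version() + ")");
            System.out.println(named  + " (version " + named.version()  + ")");
            System.out.println(UUID.fromString(random.toString()).equals(random)); // round-trips
        }
    }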



497、单词 mapred 的匹配文本:


  1. This file is organized into different contexts: jvm, rpc, hdfs, mapred, and hbase. Monitoring with Ganglia


  2. File Description hadoop-env.sh Configures the environment variable used by Hadoop core-site.xml Configures parameters for the whole Hadoop cluster hdfs-site.xml Configures parameters for HDFS and its clients mapred-site. Hadoop Operations and Cluster Management Cookbook


  3. The MapReduce community has made sure that applications written against 1.x can run on top of YARN in a fully binary compatible manner (mapred APIs) or just by recompiling (source compatibility for mapreduce APIs) Apache Hadoop YARN Yet Another Resource Negotiator


  4. Each Hadoop component uses information in core-site.xml and either mapred-site. Cassandra High Performance Cookbook


  5. For example, if your cluster has 10 nodes running a task tracker and the mapred. Hadoop RealWorld Solutions Cookbook


  6. Inside each node''s HADOOP_HOME/conf directory, add the following to the core-site.xml, hdfs-site.xml and mapred-site.xml. Hadoop MapReduce Cookbook


  7. Solution Working with compressed files in MapReduce involves updating the MapReduce configuration file mapred-site.xml and registering the compression codec you are using. Hadoop in Practice


  8. Clicking on the link brings one to a Hadoop Map/Reduce Tutorial (http://hadoop.apache.org/core/docs/current/mapred_ tutorial.html) explaining the Map/Reduce framework. Hadoop in Action


  9. There are many configuration files, but the ones we need to modify are core-site.xml, hdfs-site.xml and mapred-site.xml. Hadoop Beginners Guide


  10. Child tasks mapred a a In secure mode, the user that submitted the job. Hadoop Operations



498、单词 bnd 的匹配文本:


  1. Tools (such as Eclipse PDE, Bnd, and Bundlor) can help you easily and efficiently manage the manifest file, and they can easily be integrated with building tools like Ant or Maven. Spring Dynamic Modules in Action


  2. This becomes important in more complex add-ons, because the Roo shell will take BND's defaults and export all packages found in the source code. Spring Roo in Action



499、单词 combinator 的匹配文本:


  1. You can then use this to define an operation, which captures internal iteration as a combinator. Java 8 in Action Lambdas, Streams and Functional-style Programming


  2. To figure out what to do, you need to know first what the individual parsers in the combinator frameworks return as a result (provided they succeed in parsing the input) Programming in Scala 2nd Edition


  3. What we need is a new combinator intDouble doubleInt , that can combine two RNG actions into one using a binary rather than unarymap2 Functional Programming in Scala


  4. However, the Scala library includes a powerful parser combinator library that can be used for parsing most external DSLs that have a context-free grammar. Programming Scala


  5. Chapter 8 rolls the pieces together with an exploration of Scala’s parser combinator library. Beginning Scala


  6. If we want to specify repetition, we need to use the rep parser combinator. Steps in Scala An Introduction to Object-Functional Programming
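
The quotes above concern Scala's parser combinator library, but the underlying idea, small functions combined into bigger ones by generic "combinator" operations, can be sketched in plain Java with Function.compose/andThen. The functions below are hypothetical and only stand in for parsers.

    import java.util.function.Function;

    public class CombinatorDemo {
        public static void main(String[] args) {
            Function<String, String> trim   = String::trim;
            Function<String, Integer> parse = Integer::parseInt;
            Function<Integer, Integer> dbl  = n -> n * 2;

            // andThen is a combinator: it builds a new function out of existing ones.
            Function<String, Integer> pipeline = trim.andThen(parse).andThen(dbl);

            System.out.println(pipeline.apply("  21 "));   // 42
        }
    }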



500、单词 embeddable 的匹配文本:


  1. Jetty (http://www.mortbay.org/jetty/) is an embeddable servlet container, which serves a function similar to that of Tomcat or Resin. Mahout in Action


  2. G2: Low memory footprint Tika should be embeddable within Java applications at low memory cost so that it’s as easy to use Tika in a desktop-class environment with capacious network and memory as it is within a mobile PDA with limited resources on which to operate. Tika in Action


  3. Underneath, Storm makes use of zeromq (0mq, zeromq), an advanced, embeddable networking library that provides wonderful features that make Storm possible. Getting Started with Storm


  4. Embeddable API The Embeddable EJB API allows client code and its corresponding enterprise beans to run within the same JVM and class loader. Java EE 7 Essentials


  5. But even complex systems, such as an embeddable spreadsheet application, can function as individual beans. Java in a Nutshell 5th Edition


  6. Berkeley XML DB is embeddable, so it can run as a JAR within your application. Cassandra The Definitive Guide


  7. Not only is it written in 100% Java, but it’s also designed to be embeddable. ActiveMQ in Action


  8. An embedded type is marked as such by adding the @Embeddable annotation to the class definition. Pro JPA2


  9. In terms of persistence mapping, the EmailAddress class is an @Embeddable, which will cause the persistence provider to flatten out all properties of it into the table of the surrounding class. Spring Data Modern Data Access for Enterprise Java
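
A hedged sketch of the JPA usage described in the last two quotes: an @Embeddable value type whose fields are flattened into the owning entity's table. The EmailAddress name comes from the quote above; the Customer entity and its fields are made up, and a JPA provider plus persistence unit are needed to actually run this.

    import javax.persistence.Embeddable;
    import javax.persistence.Embedded;
    import javax.persistence.Entity;
    import javax.persistence.Id;

    // Value type with no identity of its own; its columns live in the owner's table.
    @Embeddable
    class EmailAddress {
        private String user;
        private String domain;
    }

    @Entity
    class Customer {
        @Id
        private Long id;

        @Embedded                       // flattens user/domain into the CUSTOMER table
        private EmailAddress email;
    }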



501、单词 async 的匹配文本:


  1. The async mode allows the client to batch small random messages into larger data chunks before sending it over the network. Building LinkedIn’s Real-time Activity Data1


  2. The first takes the String argument which can be one of the following values: sync, async, or default. Mastering ElasticSearch


  3. To achieve this you first have to turn the getPrice method into a getPriceAsync method and change its return value: Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Async allows you to build parallel data-processing pipelines by chaining successive actions onto asynchronous computations. HBase in Action


  5. The createTable() method, for example, wraps the createTableAsync() method, while adding a loop that waits for the table to be created on the remote servers before yielding control back to the caller. HBase:The Definitive Guide


  6. There is also AsyncListener, which is used to manage async events such as completed, timed out, or an error. Java EE 7 Essentials


  7. If you set replication to async, then it will return success to the client as soon as the request has been executed on the primary shard. Elasticsearch The Definitive Guide


  8. The SettableFuture, AsyncFunction, and FutureFallback classes that are useful utility classes we can use when working with Future instances and doing asynchronous transformation of objects. Getting Started with Google Guava


  9. But you can also get the same performance gain for persistent messages by setting the useAsyncSend property on the message producer’s connection factory, as shown next. ActiveMQ in Action


  10. However, the doInOrderAsync returns right away and does the work in a separate thread, so it is not possible to have the method rethrow the exception. Java SE 8 for the Really Impatient
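
A minimal CompletableFuture sketch in the spirit of the getPriceAsync example quoted from Java 8 in Action; the shop, price, and delay are hypothetical.

    import java.util.concurrent.CompletableFuture;

    public class AsyncDemo {
        // Returns immediately; the price is computed on another thread.
        static CompletableFuture<Double> getPriceAsync(String product) {
            return CompletableFuture.supplyAsync(() -> {
                sleep(500);                       // pretend to call a slow remote service
                return 99.0;
            });
        }

        public static void main(String[] args) {
            CompletableFuture<Double> price = getPriceAsync("widget");
            System.out.println("doing other work while the price is fetched...");
            System.out.println("price = " + price.join());   // blocks only at the end
        }

        static void sleep(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }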



502、单词 mixin 的匹配文本:


  1. We bring that method into our Account class using incude—JRuby’s mixin facility. Programming Concurrency on the JVM


  2. Chapter 4 covers new issues arising in object-oriented design due to Scala’s mixin inheritance. Scala in Depth


  3. Mixins The term mixin seems to have acquired numerous meanings over time, but the fundamental concept is that of mixing in capabilities from multiple classes in order to produce a resulting class that represents all the types of the mixins. Thinking in Java 4th Edition


  4. Modular mixin composition—This feature of Scala has traits in common with both Java interfaces and abstract classes. Scala in Action


  5. Objects are constructed by mixin composition, which takes the members of a class and adds the members of a number of traits to them. Programming in Scala 2nd Edition


  6. We’ll then use Scala’s self-type annotation and mixin inheritance to specify wiring in a typesafe manner. Functional Programming Patterns in Scala and Clojure


  7. Scala improves upon Java’s support for OOP with the addition of traits, a clean way of implementing classes using mixin composition. Programming Scala


  8. Ruby has a singleton mixin that provides the singleton pattern in Ruby programs. Beginning Scala


  9. For example, Comparable is a mixin interface that allows a class to declare that its instances are ordered with respect to other mutually comparable objects. Effective Java 2nd Edition


  10. Remember that all GUI related classes are actually wrappers around Java’s JFC/Swing classes, thus, trait SuperMixin is used to redirect certain calls from the peer to the wrapper and back. Steps in Scala An Introduction to Object-Functional Programming



503、单词 idf 的匹配文本:


  1. These and many other tricks are part of a popular weighting method called TF-IDF (term frequency–inverse document frequency) Mahout in Action


  2. In this section, we'll look at the default Apache Lucene scoring mechanism: the TF/IDF (term frequency/inverse document frequency) algorithm and how it affects the returned document. Mastering ElasticSearch


  3. He has been a software engineer in Intelligence Corps, Israel Defense Force (IDF), 2005-2008, working on a variety of military IT systems, and later a software engineer and a team leader in web-based startup named AnyClip, making a dream of "Find any moment from any film, instantly" to come true. Cassandra High Performance Cookbook


  4. You will get scores that are of lesser quality due to sub-optimal document frequency values, a component of the IDF part of the score. Solr 3 Enterprise Search Server


  5. Introduction This chapter will present the implementation of a very well-known data processing algorithm, Term Frequency–Inverse Document Frequency (TF-IDF), using Storm's Trident API. Storm Real-Time Processing Cookbook


  6. The Term frequency-inverse document frequency (TF-IDF) model solves this issue by utilizing the inverted document frequencies (IDF) to scale the term frequencies (TF) Hadoop MapReduce Cookbook


  7. Calculate the sum of TF/IDF values for each word, for each category, Hadoop in Practice


  8. The standard similarity algorithm used in Elasticsearch is known as TF/IDF, or Term Frequency/Inverse Document Frequency, which takes the following. Elasticsearch The Definitive Guide


  9. At a high level, the important concepts are demonstrated by the high-level formula – namely, Term Frequency (tf), Inverse Document Frequency (idf), Term Boosts (t.getBoost), the Field Normalization (norm), the Coordination Factor (coord), and the Query Normalization (queryNorm) Solr in action


  10. Very common terms have a low idf; very rare terms have a high idf. Lucene in Action 2nd Edition
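
A tiny, self-contained computation of the classic tf–idf weight referenced throughout the quotes, using the common tf × log(N / df) form; real engines such as Lucene add normalization, boosts, and smoothing on top of this, and the three-document corpus below is made up.

    import java.util.Arrays;
    import java.util.List;

    public class TfIdfDemo {
        public static void main(String[] args) {
            List<List<String>> docs = Arrays.asList(
                    Arrays.asList("big", "data", "hadoop"),
                    Arrays.asList("big", "big", "cluster"),
                    Arrays.asList("cassandra", "cluster"));

            String term = "big";
            int docIndex = 1;

            long tf = docs.get(docIndex).stream().filter(term::equals).count();   // 2 occurrences
            long df = docs.stream().filter(d -> d.contains(term)).count();        // in 2 of 3 docs
            double idf = Math.log((double) docs.size() / df);
            System.out.printf("tf=%d df=%d idf=%.3f tf-idf=%.3f%n", tf, df, idf, tf * idf);
        }
    }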



504、单词 unicast 的匹配文本:


  1. Instead of the default UDP multicast communication method we use between Gmonds, we have configured it to use unicast messages. HBase Administration Cookbook


  2. Clusters may use a unicast address instead, which is more compatible with various types of network hardware, and has performance benefits, at the cost of additional per-host configuration. Monitoring with Ganglia


  3. Hadoop supports network communication through both unicast (with normal IP addresses, which is the one we use here) and multicast, which uses multicast addresses, such as 239.2.11.71 Hadoop Operations and Cluster Management Cookbook


  4. The boostrap process When the ElasticSearch node starts, it uses multicast (or unicast, if configured) to find the other nodes in the same cluster (the key here is the cluster name defined in the configuration) and connect to them. Mastering ElasticSearch


  5. Multicast Versus Unicast While the default communication method between monitoring daemons (gmond) is UDP multicast messages, you may encounter environments where multicast is either not possible or a limiting factor. HBase:The Definitive Guide


  6. The term “unicast” refers to the fact that the remote object is located by making a call to a single IP address and port. Core Java Volume II Advanced Features 9th Edition


  7. A unicast event is a rare kind of event for which there can be only a single registered listener object. Java in a Nutshell 5th Edition


  8. The sockets in the previous chapters are unicast: they provide point-to-point communication. Java Network Programming 4th Edition


  9. For this latter scenario, a unicast-based model of message queuing, organized into a hub-and-spoke model, is more efficient than multicast. Java Performance Tuning 2nd Edition


  10. Instead of sending out unicast requests by cycling through a range of addresses where the service mught be, the client can send out a single broadcast or multicast request. Fundamental Networking in Java
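
To make the unicast/multicast contrast concrete, here is a minimal UDP sketch in Java: a plain DatagramSocket addresses one specific endpoint (unicast), whereas multicast would join a group address via MulticastSocket. The loopback address and port are arbitrary, and no receiver is shown.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class UnicastDemo {
        public static void main(String[] args) throws Exception {
            byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
            InetAddress target = InetAddress.getByName("127.0.0.1"); // one specific host
            try (DatagramSocket socket = new DatagramSocket()) {
                // Unicast: the datagram is addressed to exactly one endpoint.
                socket.send(new DatagramPacket(payload, payload.length, target, 9876));
            }
            // Multicast, by contrast, would join a group address in 224.0.0.0/4
            // via java.net.MulticastSocket so that every group member receives it.
        }
    }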



505、单词 appending 的匹配文本:


  1. Commit: Use Paxos to achieve consensus for appending that entry to the log. Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. This allows different processes to access old versions while the new version is being inserted, and also means that updating the information is really just a case of appending the new data, not reading, updating, and writing back a new version. Getting Started with CouchDB


  3. Log file rotation is a function that periodically rotates a log file to a different name, for example, by appending the date to the filename, so that the original logfile name can be used as an empty file. Hadoop Operations and Cluster Management Cookbook


  4. Otherwise, we merely adjust the record structure without appending any value—which needs to be done if empty records are present. Dremel - Interactive Analysis of WebScale Datasets


  5. The function getReverseTime() achieves this by appending a reverse timestamp for the given item to the already calculated Z-order curve, separated by an underscore token. Hadoop RealWorld Solutions Cookbook


  6. Within the group, appending the postdate orders the twits chronologically. HBase in Action


  7. Pig supports a repartition map-side join by appending USING ''replicated'' to the end of the JOIN statement. Hadoop in Practice


  8. Configuration properties apply to a context name and are specified by appending the property name to the context name (separated by a dot) Hadoop:The Definitive Guide 3rd Edition


  9. Commit Logs You can set the value for how large the commit log is allowed to grow before it stops appending new writes to a file and creates a new one. Cassandra The Definitive Guide


  10. Appending a task’s actions one at a time task hello. Building and Testing with Gradle



506、单词 args 的匹配文本:


  1. The args parameters can be used to supply formatting arguments, as described in the next section. Core Java Volume I Fundamentals 9th Edition


  2. The command-line arguments accepted in args are listed in Table 8-15 Pro Hadoop


  3. While you’re at it, you’ll also fine-tune the JVM memory settings by calling the method jvmArgs to avoid potential OutOfMemoryErrors. Gradle in Action


  4. The paths for the posts and comments data are provided via the command line, and the program retrieves them from the args array. MapReduce Design Patterns


  5. For security reasons, be sure to change this magic cookie value to a new value (using the same new value on each node) in each node’s vm.args file. Scaling CouchDB


  6. Args: peerid       Id of the peer used for verification, must match the one given for replication tablename    Name of the table to verify. HBase in Action


  7. Some older documentation shows file utilities in the form of hadoop dfs -cmd <args> Hadoop in Action


  8. You will see the following output: usage: sqoop COMMAND [ARGS] Hadoop Beginners Guide


  9. The Consumer is executed using the maven-exec-plugin (http://mng.bz/bf7g) by passing it some system properties as arguments using the exec.args property. ActiveMQ in Action


  10. The args parameter expects a List, so we use Groovy list literal syntax to wrap the poet variable. Building and Testing with Gradle
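
For completeness, the canonical Java use of args: the String array passed to main holds the command-line arguments, in order.

    public class ArgsDemo {
        public static void main(String[] args) {
            System.out.println(args.length + " argument(s) received");
            for (int i = 0; i < args.length; i++) {
                System.out.println("args[" + i + "] = " + args[i]);
            }
        }
    }

    // Example run:
    //   java ArgsDemo input.txt output.txt
    // prints:
    //   2 argument(s) received
    //   args[0] = input.txt
    //   args[1] = output.txt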



507、单词 javac 的匹配文本:


  1. In step 4, we added the HBase jar to our classpath and then run the javac command to compile the Java code. HBase Administration Cookbook


  2. You should get a display such as this one: javac 1.7.0_02 Core Java Volume I Fundamentals 9th Edition


  3. Look for the string '/bin/javac$', using the egrep program, which searches for simple regular expressions in its input stream: Pro Hadoop


  4. The following code snippet shows the usage of the javac Ant task for compiling Java source code within your build script: Gradle in Action


  5. Internally, the core Java plug-in does this with compile-time dependencies when executing the javac compiler (Example 1-21) Gradle Beyond The Basics


  6. You can specify the character encoding of your source files with the -encoding flag, for example, javac -encoding Big5 Myfile.java. Core Java Volume II Advanced Features 9th Edition


  7. First, check what's already available by opening up a terminal and typing the following: $ javac. Hadoop Beginners Guide


  8. This code can be compiled and run using javac dns.java followed by java dns. Hadoop Operations


  9. Alternately, the javac tool provides an option for specifying resources that need to be loaded for an application. Java 7 Recipes A Problem-Solution Approach


  10. When I started writing Java code in 1996, there weren’t many options for compiling and building a project outside of javac and a custom build script. Building and Testing with Gradle
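
A minimal compile-and-run round trip with javac, echoing the Welcome example and the -encoding flag mentioned in the quotes; the class itself is hypothetical.

    // Welcome.java -- compile and run from a terminal:
    //   javac -encoding UTF-8 Welcome.java   (produces Welcome.class)
    //   java Welcome                         (runs the compiled bytecode)
    public class Welcome {
        public static void main(String[] args) {
            System.out.println("Compiled with javac, running on the JVM.");
        }
    }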



508、单词 param 的匹配文本:


  1. All @param tags for one method must be kept together. Core Java Volume I Fundamentals 9th Edition


  2. Multiple parameters can be passed to the module’s initialization function by including one or more param sections. Monitoring with Ganglia


  3. Parameters can be specified when launching Pig, using the -param option, one for each parameter: Hadoop:The Definitive Guide 3rd Edition


  4. As explained earlier, @WebInitParam is used to specify init params for a servlet: Java EE 7 Essentials


  5. The third parameter can contain an ImageWriteParam object to set image writing details such as tiling and compression; use null for default values. Core Java Volume II Advanced Features 9th Edition


  6. If you run this script using the pig command, you specify the parameters using the -param name=value argument. Hadoop in Action


  7. Also, preconditions can be specified along with the affected parameters in their @param tags. Effective Java 2nd Edition


  8. For example, the @param tag lets you specify the name and meaning of a single parameter for a method. Java in a Nutshell 5th Edition


  9. The param> clause can be used to pass one or more name/value pairs as parameters to the included resource. Java Performance


  10. The @WebInitParam annotation accepts a name/value pair as an initialization parameter. Java 7 Recipes A Problem-Solution Approach
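
The Javadoc @param tag from the Core Java and Effective Java quotes, shown in context; each parameter of the documented method gets its own tag, together with any preconditions. The method is a made-up example.

    public class ParamDemo {
        /**
         * Transfers money from one account to another.
         *
         * @param from   the account number to debit; must have a sufficient balance
         * @param to     the account number to credit
         * @param amount the amount to transfer, in cents; must be positive
         * @return the new balance of the debited account, in cents
         */
        public long transfer(long from, long to, long amount) {
            // body omitted in this sketch
            return 0L;
        }
    }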



509、单词 superclasses 的匹配文本:


  1. In fact, the opposite is true: Subclasses have more functionality than their superclasses. Core Java Volume I Fundamentals 9th Edition


  2. This callback interceptor method may be defined only on interceptor classes and/or superclasses of interceptor classes and cannot be defined on the target class. Java EE 7 Essentials


  3. This method saves the class of the object, the signature of the class, and the values of any nonstatic, nontransient fields of the class and its superclasses. Core Java Volume II Advanced Features 9th Edition


  4. This is a very powerful concept because interfaces, superclasses, and interfaces implemented by superclasses are included, meaning we can easily make our event handlers as course- or fine-grained as we want, simply by changing the type accepted by the event-handling method. Getting Started with Google Guava


  5. If implementing the Cloneable interface is to have any effect on a class, the class and all of its superclasses must obey a fairly complex, unenforceable, and thinly documented protocol. Effective Java 2nd Edition


  6. In discussing nonstatic member classes, we saw that a member class can access any members inherited from superclasses and any members defined by its containing classes. Java in a Nutshell 5th Edition


  7. In this mirror image hierarchy, two abstract superclasses define the basic API for reading and writing characters. Java Network Programming 4th Edition


  8. The simplest way to handle all this is to have a flag on which objects are conditionally passed to the monitor from the Object constructor and to have this flag in a simple class with no superclasses so that classloading does not impose extra calls to superclasses. Java Performance Tuning 2nd Edition


  9. Loading a class requires loading all Java superclasses and all Java superinterfaces. Java Performance


  10. For example, self-calls in "superclasses" are not automatically bound to the versions of methods that have been "overridden" in delegation-based "subclasses" Concurrent Programming in Java Design Principles and Pattern
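
As a small illustration of the first quote, that subclasses add functionality to their superclasses rather than the other way around, here is a hedged, generic sketch:

    // Superclass: the behaviour every subclass inherits.
    class Employee {
        private final String name;
        Employee(String name) { this.name = name; }
        String describe() { return "Employee " + name; }
    }

    // Subclass: keeps everything Employee offers and adds more on top.
    class Manager extends Employee {
        private final int reports;
        Manager(String name, int reports) {
            super(name);            // delegate common initialization to the superclass
            this.reports = reports;
        }
        @Override
        String describe() { return super.describe() + ", managing " + reports + " reports"; }
    }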



510. Matching text for the word uppercase:


  1. The standard name for a Kerberos realm is your domain name in uppercase. HBase Administration Cookbook


  2. If you get a complaint about welcome (with a lowercase w), then you should reissue the java Welcome command with an uppercase W. Core Java Volume I Fundamentals 9th Edition


  3. Often the host appears twice, with the name in uppercase and lowercase, or some other differing variations of the hostname. Monitoring with Ganglia


  4. Bamboo also expects a short identifier in uppercase characters, as the project key, in the Project Key field. Gradle Effective Implementation Guide


  5. This recipe shows how to use a map-only job to locate any key with a specific column and convert the value of the column to uppercase. Cassandra High Performance Cookbook


  6. Prior experience has told you that the combination of words, and the fact that they’re uppercase, means that this email is most likely spam. Hadoop in Practice


  7. The Pig Latin functions are case-insensitive, though commonly they are written in uppercase. HBase:The Definitive Guide


  8. For example, there’s an UPPER function in PiggyBank that transforms a string to uppercase: Hadoop in Action


  9. By convention, SQL statements use uppercase for SQL language keywords and we will generally follow this when using HiveQL within files, as shown later. Hadoop Beginners Guide


  10. Throughout the book, we will follow the SQL convention of showing Hive keywords in uppercase (e.g., CREATE, TABLE, SELECT and FROM), even though case is ignored by Hive, following SQL conventions. Programming Hive
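
Most of the quotes above are about converting text to uppercase, for example in a map-only job or when following SQL keyword conventions. Stripped of any framework, the core operation is just String.toUpperCase; a tiny sketch:

    import java.util.Locale;

    public class UppercaseDemo {
        public static void main(String[] args) {
            String line = "create table logs (msg string)";
            // Locale.ROOT avoids locale-specific surprises such as the Turkish dotless 'i'.
            System.out.println(line.toUpperCase(Locale.ROOT));
        }
    }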



511. Matching text for the word logfiles:


  1. If your input is comprised of tailed logfiles, the array is most likely a UTF-8 encoded String containing a line of text. Apache Flume Distributed Log Collection for Hadoop


  2. Hadoop logging file naming conventions Hadoop logfiles are kept under the directory $HADOOP_HOME/logs. Hadoop Operations and Cluster Management Cookbook


  3. In environments with only a few systems, connection to the server over SSH and using command-line tools to examine logfiles is usually sufficient. Cassandra High Performance Cookbook


  4. We can use the debug scripts to copy all the logfiles to a single location, e-mail them to a single e-mail account, or perform some analysis. Hadoop MapReduce Cookbook


  5. A web server log is a good example of a set of records that is not normalized (for example, the client hostnames are specified in full each time, even though the same client may appear many times), and this is one reason that logfiles of all kinds are particularly well-suited to analysis with MapReduce. Hadoop:The Definitive Guide 3rd Edition


  6. The logging framework used by HBase is creating the directory and logfiles dynamically. HBase:The Definitive Guide


  7. Like regular logfiles, commit logs have a configurable rollover threshold, and once this file size threshold is reached, the log will roll over, carrying with it any extant dirty bit flags. Cassandra The Definitive Guide


  8. The reason is that the local standalone mode prints information about each individual task execution to the screen, whereas in the other modes this information is written only to logfiles on the running hosts. Hadoop Beginners Guide


  9. Most of the real work is done on the data devices, while the OS device usually only has to contend with logfiles in /var/log. Hadoop Operations


  10. Suppose our Extract, Transform, and Load (ETL) process ingests and aggregates logfiles in our environment, converting each log message to a tab-delimited record and also decomposing the timestamp into separate year, month, and day fields, and a combined hms field for the remaining hour, minute, and second parts of the timestamp, for reasons that will become clear in a moment. Programming Hive



512. Matching text for the word testable:


  1. Carla Schroer and her teams of compatibility testers in Cupertino, California, and Novosibirsk, Russia (with special thanks to Leonid Arbouzov and Alexei Kaigorodov) painstakingly wrote compatibility tests for each testable assertion in the First Edition. The Java Virtual Machine Specification Java SE 7 Edition


  2. Because you’re dealing with an actual class, any method is fully testable through unit tests. Gradle in Action


  3. Chapter 10 focuses on testing Scala applications and how you can use patterns to make your code more testable. Scala in Action


  4. Testable JavaScript, the image of a Doctor fish, and related trade dress are trademarks of O’Reilly Media, Inc. Testable JavaScript


  5. The usage of the Tool interface also makes driver implementation more testable by allowing you to inject arbitrary configurations using Configured’s setConf() method. Professional Hadoop Solutions


  6. It assumes that first is a valid word, which is an easily testable condition prior to the call. Data Structures and Algorithm Analysis in Java 3rd Edition


  7. In particular, the schema-free nature of the graph data model, coupled with the testable nature of a graph database’s application programming interface (API) and query language, empower us to evolve an application in a controlled manner. Graph Databases


  8. This makes the application more testable and conforms to accessibility standards. Selenium Testing Tools Cookbook


  9. This change in thinking has led to lower defect rates, more modular code, and more testable code. Beginning Scala


  10. The first technique we will demonstrate to make this class testable is to introduce a subclass that eliminates entity manager calls. Pro JPA2
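
The last quote describes making a class testable by introducing a seam instead of calling a collaborator directly. A hedged, generic sketch of that idea, using constructor injection of a time source so a unit test can supply a fixed value:

    import java.util.function.LongSupplier;

    // Production code asks an injected clock for the time instead of calling
    // System.currentTimeMillis() directly, which is what makes it testable.
    class SessionToken {
        private final LongSupplier clock;

        SessionToken(LongSupplier clock) { this.clock = clock; }

        boolean isExpired(long issuedAtMillis, long ttlMillis) {
            return clock.getAsLong() - issuedAtMillis > ttlMillis;
        }
    }

    class SessionTokenTest {
        void expiresAfterTtl() {
            SessionToken token = new SessionToken(() -> 10_000L); // a fixed "now" of 10 seconds
            assert token.isExpired(0L, 5_000L);                   // 10s elapsed is past the 5s TTL
        }
    }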



513. Matching text for the word xhtml:


  1. Moreover, there are many file formats that support text, such as PDF, Microsoft Word, Excel, and PowerPoint, simple text files, XML, HTML, XHTML, and so forth. Pro Java 7 NIO.2


  2. The first TestSuite.xhtml defines the suite having HTML links to the tests. Jenkins Continuous Integration Cookbook


  3. Similarly, NotFound is a case class representing HTTP status code 404; when calling the xhtml method, it implicitly gets converted to a Scalaz Response object with an HTTP header value for a content-type of "application/xhtml+xml." This is a good example of how you can use higher-order functions and combine functions to create nice APIs like the preceding example. Scala in Action


  4. I am very grateful to Dmitry Kirsanov and Alina Kirsanova who turned my manuscript from XHTML into a beautiful book, allowing me to concentrate on the content instead of fussing with the format. Scala for the Impatient


  5. By convention, web pages built with XHTML have a .xhtml extension. Java EE 7 Essentials


  6. However, most pages authored by the W3C are written in XHTML (an HTML dialect that is proper XML) Core Java Volume II Advanced Features 9th Edition


  7. Currently, no browser natively supports XForms, though plug-ins are available for Firefox, and libraries exist to allow serverside translation of XForms into XHTML. Cassandra The Definitive Guide


  8. Also, we show how to produce XHTML content with Scala. Steps in Scala An Introduction to Object-Functional Programming


  9. Tika uses the XHTML (Extensible Hypertext Markup Language) standard to model all documents, regardless of their original format. Lucene in Action 2nd Edition


  10. Dmitry Kirsanov and Alina Kirsanova once again turned an XHTML manuscript into an attractive book with amazing speed and attention to detail. Java SE 8 for the Really Impatient



514. Matching text for the word brainz:


  1. Through using a large set of metadata about artists, releases, and tracks courtesy of the MusicBrainz.org project, you will have a testing ground for Solr, and will learn how to import this data in various ways. Solr 3 Enterprise Search Server



515. Matching text for the word checkbox:


  1. Pressing the space bar when the focus is in the checkbox also toggles the checkmark. Core Java Volume I Fundamentals 9th Edition


  2. Handling checkboxes Capturing checkbox submission is similar to that of capturing textbox submissions, which we encountered earlier in Chapter 2, Recording Your First Test. Performance Testing With JMeter 2.9


  3. Make sure the checkbox next to the plugin name is checked and press the Install button to initiate the installation process. Gradle in Action


  4. When configuring a Job, click on Discard Old Builds, and then the Advanced checkbox, define the Max # of builds to keep with the artifacts. Jenkins Continuous Integration Cookbook


  5. If we want to use a locally installed instance of Gradle, we must uncheck the Install automatically checkbox. Gradle Effective Implementation Guide


  6. To choose this option, just tick the Install automatically checkbox. Jenkins The Definitive Guide


  7. If you want to present a set of choices to a user, and a radio button or checkbox set consumes too much space, you can use a combo box or a list. Core Java Volume II Advanced Features 9th Edition


  8. Starting, changes to Complete; then click its checkbox to see details of the job flow, as shown in the following screenshot: Hadoop Beginners Guide


  9. This interface has a text field to enter the name to be searched for and a checkbox to determine whether the match should be exact or partial. Java Network Programming 4th Edition


  10. For example, perhaps your documents have Country as a field, and your search interface presents a checkbox allowing the user to pick and choose which countries to include in the search. Lucene in Action 2nd Edition
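
Quote 9 describes a search form with a checkbox that decides between exact and partial matching. A minimal Swing sketch of that kind of control:

    import javax.swing.JCheckBox;
    import javax.swing.JFrame;

    public class CheckboxDemo {
        public static void main(String[] args) {
            JCheckBox exactMatch = new JCheckBox("Exact match", false);
            exactMatch.addActionListener(
                    e -> System.out.println("Exact match selected: " + exactMatch.isSelected()));

            JFrame frame = new JFrame("Search options");
            frame.add(exactMatch);
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }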



516. Matching text for the word mvn:


  1. This will ensure that the mvn command is always available. Mahout in Action


  2. Starting it up via mvn is as easy as shown in Listing 12.8 Netty in Action MEAP V08


  3. The install command will run all the tests by default; we can ignore the tests to speed up the installation process with command sudo mvn -DskipTests install. Hadoop Operations and Cluster Management Cookbook


  4. Goals bundle phases, for example, for the site goal there are four phases: pre-site, site, post-site, and site-deploy, all of which will be called in order by mvn site or directly by using the syntax mvn site:phase. Jenkins Continuous Integration Cookbook


  5. For example, in the following pom.xml file, the project URL will point to the Jenkins build job that ran the mvn site build: Jenkins The Definitive Guide


  6. For example, if you want to compile a project, you run Maven''s mvn command with the compile goal as a parameter. Sonar Code Quality Testing Essentials


  7. Change the folder into a trunk and run the build: $ mvn compile. Hadoop RealWorld Solutions Cookbook


  8. To compile all the source, run the following command: $ mvn clean install This command will compile and package the source so that it’s ready to be run. ActiveMQ in Action


  9. Then run "mvn install" from within the Tika source directory you unpacked above. Lucene in Action 2nd Edition


  10. This install process, equivalent to mvn install, is guaranteed to be compatible with Maven because it actually uses the Maven Ant Tasks produced by the Maven team and hosted at Apache. Building and Testing with Gradle



517. Matching text for the word osd:


  1. This approach allows Ceph to more effectively leverage the intelligence (CPU and memory) present on each OSD to achieve reliable, highly available object storage with linear scaling. Ceph A Scalable, High-Performance Distributed File System


  2. As in conventional storage systems based on local or network-attached (SAN) disk drives or those embracing the proposed T10 OSD standard, devices passively respond to read and write commands, despite their potential to encapsulate significant intelligence. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.



518. Matching text for the word testng:


  1. By the end of this chapter, you’ll write tests with JUnit, TestNG, and Spock and execute them as part of the build lifecycle. Gradle in Action


  2. JUnit (http://www.junit.org/) is a popular unit test framework for Java; TestNG (http://testng.org/doc/index.html) is another. Jenkins Continuous Integration Cookbook


  3. Using TestNG for testing We have written a test with the JUnit test framework. Gradle Effective Implementation Guide


  4. This format is also used by many other Java testing tools, such as TestNG, Spock and Easyb. Jenkins The Definitive Guide


  5. Some basic familiarity with the TestNG testing API will help make sense of the unit tests. Hadoop RealWorld Solutions Cookbook


  6. You have many options for unit testing in Scala, from established Java tools, such as JUnit and TestNG, to new tools written in Scala, such as ScalaTest, specs, and ScalaCheck. Programming in Scala 2nd Edition


  7. JUnit and TestNG are both good starting points for unit testing Scala. Programming Scala Tackle Multi-Core Complexity on the Java Virtual Machine


  8. You can drive your tests using the built-in Runner or use the provided integration with JUnit or TestNG. Programming Scala


  9. There are test goals for JUnit, TestNG, and so on. Beginning Scala


  10. This facilitates compiled tests being able to reference JUnit or TestNG annotations or base classes. Building and Testing with Gradle
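
The quotes compare JUnit and TestNG as test frameworks. A minimal TestNG-style test, assuming the org.testng artifact is on the test classpath:

    import org.testng.Assert;
    import org.testng.annotations.Test;

    public class UppercaseTest {

        @Test
        public void toUpperCaseConvertsEveryCharacter() {
            Assert.assertEquals("hbase".toUpperCase(), "HBASE");
        }
    }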



519. Matching text for the word corba:


  1. If you do want to write a method to change numeric parameters, you can use one of the holder types defined in the org.omg.CORBA package: IntHolder, BooleanHolder, and so on. Core Java Volume I Fundamentals 9th Edition


  2. With its choice of two distributed systems technologies (RMI and CORBA), developers can create systems that span many computers. Java Network Programming and Distributed Computing


  3. Java EE applications can use RMI-IIOP, P protocol support, to access CORBA services that are compatible with the. Java EE 7 Specification


  4. This mechanism can only be used to replace certain standard Java libraries (such as those for XML and CORBA support) with newer versions. Core Java Volume II Advanced Features 9th Edition


  5. As you can see, CORBA has more overhead than RMI, which in turn has more overhead than the proprietary system. Java Performance Tuning 2nd Edition


  6. Technologies such as those discussed in chapter 2 (COM, CORBA, DCE, and EJB) using RPC are considered to be tightly coupled. ActiveMQ in Action


  7. Request objects are used in distributed object support systems such as those in java.rmi and org.omg.corba. Concurrent Programming in Java Design Principles and Pattern


  8. Finally, entity beans were modeled as remote objects that used RMI and CORBA, introducing network overhead and restrictions that should never have been added to a persistent object to begin with. Pro JPA2


  9. Before RMI, the only remoting options available to Java programmers were CORBA (which at the time required the purchase of a third-party object request broker [ORB]) and handwritten socket programming. Spring in Action 4th Edition
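
Quote 1 refers to the holder types in org.omg.CORBA, which simulate out parameters by exposing a public value field. A brief sketch; note that the org.omg.CORBA packages shipped with the JDK only through Java 10 and were removed in Java 11:

    import org.omg.CORBA.IntHolder;

    public class HolderDemo {
        // The callee writes its result into the holder's public 'value' field.
        static void nextAccountNumber(IntHolder out) {
            out.value = 1001;
        }

        public static void main(String[] args) {
            IntHolder holder = new IntHolder();
            nextAccountNumber(holder);
            System.out.println("Assigned account number: " + holder.value);
        }
    }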



520. Matching text for the word immutability:


  1. However, great books such as Joshua Bloch’s Effective Java [Blo08] have advised us to promote immutability. Programming Concurrency on the JVM


  2. We first examine the concepts behind functional programming such as side effects, immutability, declarative programming, and referential transparency and relate these to Java 8 Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. Final fields must be used correctly to provide a guarantee of immutability. JSR-133 Java Memory Model and Thread


  4. However, changing modules explicitly disclaim immutability, so cached copies of those modules must be expired after some time. Gradle Beyond The Basics


  5. Immutability One of the most surprising problems in the original Java Memory Model was that objects that were declared final were sometimes not in fact final (immutable) at all. Oracle JRockit The Definitive Guide


  6. The functional approach may appear unnatural if you’re not familiar with it, but it enables immutability, which has many advantages. Effective Java 2nd Edition


  7. Due to the immutability of files in HDFS, it’s uncommon to place a quota on them. Hadoop Operations


  8. You can take advantage of immutability in your own classes. Java Network Programming 4th Edition


  9. Finally, the immutability of SSTables enables us to split tablets quickly. Bigtable - A Distributed Storage System for Structured Data


  10. However, selective use of immutability is a basic tool in concurrent OO programming. Concurrent Programming in Java Design Principles and Pattern
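
Several quotes point out that immutability rests on final fields and the absence of mutators. A small sketch of an immutable value class:

    // Immutable: final class, final fields, no setters; "changes" produce new objects.
    public final class Point {
        private final int x;
        private final int y;

        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }

        public int getX() { return x; }
        public int getY() { return y; }

        public Point translate(int dx, int dy) {
            return new Point(x + dx, y + dy);   // returns a new instance instead of mutating
        }
    }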



521. Matching text for the word interoperability:


  1. Further information about sFlow interoperability is provided in Chapter 8 Monitoring with Ganglia


  2. However, the CouchDB team recommends sticking with JavaScript as it provides the highest level of support and interoperability, though other options are available. CouchDB The Definitive Guide


  3. Cross-language interoperability is achieved using a standard binary on-the-wire representation of records, in which field values are laid out sequentially as they occur in the record. Dremel - Interactive Analysis of WebScale Datasets


  4. BigTop — This is a formal process and framework for packaging and interoperability testing of Hadoop’s sub-projects and related components. Professional Hadoop Solutions


  5. The Protocol Buffers format has been used heavily by Google for interoperability. Hadoop in Practice


  6. The Apache Bigtop project (http://incubator.apache.org/bigtop/) runs interoperability tests on stacks of Hadoop components and provides Linux packages (RPMs and Debian packages) for easy installation. Hadoop:The Definitive Guide 3rd Edition


  7. The long-term goal of HCatalog is to enable interoperability between tools such as Apache Hive and Pig so that they can share dataset metadata information. Hadoop Operations


  8. You create data files with a wide choice of formats for convenience, compactness, or interoperability with different Apache Hadoop components. Cloudera Impala


  9. But the listed constraints serve to eliminate large classes of safety and liveness problems while also satisfying common reusability and performance goals: unidirectional flow avoids deadlock, connectivity management avoids unwanted interleavings across different flows, transfer protocols avoid safety problems due to inadvertent sharing without the need for extensive dynamic synchronization, and interface conformance assures type safety while still permitting interoperability among components. Concurrent Programming in Java Design Principles and Pattern


  10. Importing an Entire Ant Build File Up to this point, we’ve dealt with interoperability of Ant and Gradle on a fine-grained level, either calling existing Ant tasks, creating new Ant tasks on the fly, or manipulating the results of Ant task calls with Groovy. Building and Testing with Gradle



522. Matching text for the word multithreading:


  1. Java even has built-in support for advanced programming tasks, such as network programming, database connectivity, and multithreading. Core Java Volume I Fundamentals 9th Edition


  2. This book's coverage of advanced topics such as input/output streaming and multithreading allows even the most experienced Java developers to sharpen their skills. Java Network Programming and Distributed Computing


  3. This allows the application developer to focus on the business logic and not worry about lowlevel transaction and state management details, remoting, concurrency, multithreading, connection pooling, or other complex low-level APIs. Java EE 7 Essentials


  4. On the server side, columns are immutable in order to prevent multithreading issues. Cassandra The Definitive Guide


  5. Chapter 3, Threads, explores multithreading and synchronization, with a special emphasis on how they can be used for asynchronous I/O and network servers. Java Network Programming 4th Edition


  6. In Java, the key to making an application responsive is multithreading. Java Performance Tuning 2nd Edition


  7. A note on multithreading in JMS applications The JMS spec specifically defines concurrency for various objects in the JMS API and requires that only a few objects support concurrent access. ActiveMQ in Action


  8. The introduction of the SPARC T-series processor brought chip multiprocessing and chip multithreading to Oracle’s offering of processors. Java Performance


  9. Both simultaneous multithreading (SMT) and chip multiprocessor (CMP) architectures target thread-level parallelism and should improve the performance of many of our servers. Web search for a planet the google cluster architecture


  10. With his experience in multithreading and Java Swing he has managed to create Java clients that can consume tens of thousands of market events per second. Java 7 Recipes A Problem-Solution Approach
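
As a minimal Java multithreading sketch, the snippet below hands work to a small thread pool and waits for the result, keeping the blocking computation off the calling thread:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class MultithreadingDemo {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(2);
            try {
                // The summation runs on a worker thread; get() blocks until it finishes.
                Future<Long> sum = pool.submit(() -> {
                    long total = 0;
                    for (int i = 1; i <= 1_000; i++) {
                        total += i;
                    }
                    return total;
                });
                System.out.println("Sum computed on a worker thread: " + sum.get());
            } finally {
                pool.shutdown();
            }
        }
    }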



523. Matching text for the word opentsdb:


  1. Chapter 5, Monitoring and Diagnosis: In this chapter, we will describe how to monitor and diagnose HBase cluster with Ganglia, OpenTSDB, Nagios, and other tools. HBase Administration Cookbook


  2. StumbleUpon uses OpenTSDB to monitor all of its infrastructure and software, including its HBase clusters. HBase in Action


  3. Use Case: OpenTSDB The OpenTSDB‡ project provides a time series database used to store metrics about servers and services, gathered by external collection agents. HBase:The Definitive Guide



524. Matching text for the word hfiles:


  1. Since HBase 0.94, the Import utility can generate the HBase internal files (HFiles) for bulk load. HBase Administration Cookbook


  2. This is the number of blocks of StoreFiles (HFiles) in the cache. Monitoring with Ganglia


  3. The implementation of HFile in a region server is responsible for reading and writing HFiles to and from HDFS. Professional Hadoop Solutions


  4. Data manipulation and a column family can have multiple HFiles. HBase in Action


  5. The importtsv tool supports loading data to HBase directly using the Put operations as well as by generating the HBase internal HFiles. Hadoop MapReduce Cookbook


  6. The data is stored in store files, called HFiles, which are persistent and ordered immutable maps from keys to values. HBase:The Definitive Guide


  7. HFiles are immutable HDFS files containing key/value pairs in sorted order. Apache Hadoop Goes Realtime at Facebook


  8. As most HFiles are hosted on HDFS, it would be impossible anyway because HDFS files are immutable as well. Hadoop in Action



525. Matching text for the word mng:


  1. The Simple Build Tool (http://mng.bz/2f7Q) provides the cc target for continuous compilation. Scala in Depth


  2. If you’re having difficulty understanding what’s going on with function definition or argument passing, the Python language tutorial offers a good introduction to what’s going on, and you can jump right to the particular section by visiting this shortened URL: http://mng.bz/KM5x. Redis in Action


  3. Make sure you explore it in the Akka documentation( http://mng.bz/wc7D) Scala in Action


  4. In the great book Java Concurrency in Practice (http://mng.bz/979c), Brian Goetz and coauthors give some advice to find the optimal size for a thread pool. Java 8 in Action Lambdas, Streams and Functional-style Programming


  5. You can read more about the built-in PropertyEditors at http://mng.bz/7CO9 Spring in Practice


  6. This versioning scheme is roughly outlined in the Apache Release Management guide: http://mng.bz/6uvM. HBase in Action


  7. I created a sample log file in Apache Common Log Format (see http://mng.bz/ L4S3) with some fake Class E IP addresses and some dummy resources and response codes. Hadoop in Practice


  8. That means the channel adapter not only handles the communication via the particular transport and protocol being used but also must provide a Messaging Mapper (http://mng.bz/Fl0P) so that whatever data representation is used by the external system is converted to and from simple Spring Integration messages. Spring Integration in Action


  9. The Consumer is executed using the maven-exec-plugin (http://mng.bz/bf7g) by passing it some system properties as arguments using the exec.args property. ActiveMQ in Action


  10. For an up-to-date list of MongoDB production deployments, see http://mng.bz/z2CH. MongoDB in Action



526. Matching text for the word normalization:


  1. This is because the score depends on multiple factors, not only the boosts and query structure, but also on how many terms were matched, in which fields, and the type of matching that was used on query normalization, and so on. Mastering ElasticSearch


  2. By default, a field in Lucene is indexed with its norm, a product of the document’s boost, the field’s boost, and the field’s length normalization factor. Professional Hadoop Solutions


  3. As of today, HBase doesn’t provide the features that make normalization easy to do. HBase in Action


  4. Normalization poses problems for MapReduce because it makes reading a record a nonlocal operation, and one of the central assumptions that MapReduce makes is that it is possible to perform (high-speed) streaming reads and writes. Hadoop:The Definitive Guide 3rd Edition


  5. The usual database normalization rules do not apply directly to HBase, and therefore the number of tables is usually very low. HBase:The Definitive Guide


  6. This process of tokenization and normalization is called analysis, which we discuss in the next section. Elasticsearch The Definitive Guide


  7. Denormalization In relational database design, we are often taught the importance of normalization. Cassandra The Definitive Guide


  8. These are useful for tasks such as normalization of data and doing complex computation involving groups of rows. Tenzing - A SQL Implementation On The MapReduce Framework


  9. In such cases, it's common to have a precursor MapReduce job that applies a normalization or clean-up strategy to the data set. Hadoop Beginners Guide


  10. Unique Keys and Normalization Relational databases typically use unique keys, indexes, and normalization to store data sets that fit into memory or mostly into memory. Programming Hive



527. Matching text for the word customization:


  1. It is easy to take the effects of the modern websites for granted, but the functionality of pop-up lists during searches, customization, and the in-page experience (traditionally referred to as AJAX) of a dynamic website. Getting Started with CouchDB


  2. This is a lot of configuration that the Gradle plugins provide out of the box without any additional customization from your side. Gradle in Action


  3. A rule set file contains several rules and allows customization of the rules. Gradle Effective Implementation Guide


  4. The most common use case for this is when a build needs to access a custom version of some other module, especially when that customization is itself under development and is not ready to publish to the outside world. Gradle Beyond The Basics


  5. We focused on understanding how Hive manages data in internal and external tables, and also looked at advanced topics such as join customization and user-defined functions. Hadoop in Practice


  6. Crucially, they also provide hooks to run scripts at the end of the process, which are invaluable for doing the final system tweaks and customization that are not covered by the standard installer. Hadoop:The Definitive Guide 3rd Edition


  7. But when performing this customization, quite often the administrators don’t edit the corresponding HTML meta headers that. Hadoop in Action


  8. Do not expect to understand all of these now, but do spend a little time browsing to get a flavor for the type of customization allowed by Hadoop. Hadoop Beginners Guide


  9. This allows for easy configuration and customization via the  element of the ActiveMQ XML configuration file. ActiveMQ in Action


  10. Let’s take a look at the different ways of defining a task, the two key aspects of task definitions, and the task API we can use to perform our own customization. Building and Testing with Gradle



528. Matching text for the word rcfile:


  1. Currently, it provides various row/columnar store file formats, such as CSVFile, RowFile, RCFile, and Trevni (still unstable) Introduction to Tajo


  2. In this paper, we present a big data placement structure called RCFile (Record Columnar File) and its implementation in the Hadoop system. RCFile A Fast and Space-efficient Data Placement Structure in MapReduce-based Warehouse Systems


  3. Apart from the two formats mentioned above, Hive also provides an RCFileInputFormat which stores the data in a column oriented manner. Hive A Petabyte Scale Data Warehouse Using Hadoop


  4. Hive provides another binary storage format called RCFile, short for Record Columnar File. Hadoop:The Definitive Guide 3rd Edition


  5. Some types of storage formats store their data by fields instead of by records (for example, Hive’s RCFile) Programming Pig


  6. Use any of the Hive data loading techniques, especially for tables using the Avro, SequenceFile, or RCFile formats. Cloudera Impala


  7. You can replace TEXTFILE with one of the other built-in file formats supported by Hive, including SEQUENCEFILE and RCFILE, both of which optimize disk space usage and I/O bandwidth performance using binary encoding and optional compression. Programming Hive



529. Matching text for the word apress:


  1. Note A principal named “apress" is used in the examples in this section, but this principal will not be available on your machine. Pro Java 7 NIO.2


  2. Apress and friends of ED books may be purchased in bulk for academic, corporate, or promotional use. Pro Hadoop


  3. If you need to study the basics of database design, we suggest Clare Churcher’s book Beginning Database Design (Apress) High Performance MySQL 3rd Edition


  4. For a detail description of RESTful web services, we recommend the book Ajax and REST Recipes, A Problem-Solution Approach (Apress, 2006) Pro Spring 3


  5. For information on translations, please e-mail info@apress.com, or visit http://www.apress.com. Beginning CouchDB


  6. Tip If you’re interested in Git, we strongly recommend Scott Chacon’s excellent book Pro Git – also published by Apress. Pro Puppet


  7. Although every precaution has been taken in the preparation of this work, neither the author(s) nor Apress shall have any liability to any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the information contained in this work. Beginning Scala


  8. I would also like to express my gratitude towards all the people at Apress for all the work they have done to get this book out. The Definitive Guide to MongoDB


  9. For your convenience Apress has placed some of the front matter material after the index. Java 7 Recipes A Problem-Solution Approach


  10. For more information, reference our Special Bulk Sales–eBook Licensing web page at http://www.apress.com/info/bulksales. Pro JPA2



530. Matching text for the word retries:


  1. In Flume 0.9, the input thread also did the writing to the output (except for failover retries) Apache Flume Distributed Log Collection for Hadoop


  2. This property configures the maximum number of reducer retries to fetch map outputs in case of fetch failure. Hadoop Operations and Cluster Management Cookbook


  3. It turns out that most of these overwrites came from client retries due to errors or timeouts. GFS - The Google File System


  4. The number of this retries can be defined by this method. Mastering ElasticSearch


  5. There's more Initializing connections in this manner does not account for server fail-over or retries. Cassandra High Performance Cookbook


  6. When the task fails, the tasktracker retries the task, skipping the records that caused the failure. Hadoop:The Definitive Guide 3rd Edition


  7. For example, when dealing with errors, it favors retries or wait over fast failures. Apache Hadoop Goes Realtime at Facebook


  8. Should a task fail, the JobTracker will automatically relaunch the task, possibly on a different node, up to a predefined limit of retries. Hadoop in Action


  9. When a task fails, Hadoop retries it on the same block but attempts to work around the invalid records. Hadoop Beginners Guide


  10. MapReduce is dead simple for developers: no socket programming, no threading or fancy synchronization logic, no management of retries, no special techniques to deal with enormous amounts of data. Hadoop Operations
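
The quotes describe systems that retry a failed operation up to a configured limit. The essential client-side pattern, as a hedged generic sketch (the helper name and backoff policy are made up for illustration):

    import java.util.concurrent.Callable;

    public class RetryDemo {
        // Runs the task, retrying up to maxRetries times with a simple linear backoff.
        static <T> T withRetries(Callable<T> task, int maxRetries, long backoffMillis) throws Exception {
            if (maxRetries < 1) {
                throw new IllegalArgumentException("maxRetries must be at least 1");
            }
            Exception last = null;
            for (int attempt = 1; attempt <= maxRetries; attempt++) {
                try {
                    return task.call();
                } catch (Exception e) {
                    last = e;
                    Thread.sleep(backoffMillis * attempt);
                }
            }
            throw last;   // every attempt failed
        }

        public static void main(String[] args) throws Exception {
            String result = withRetries(() -> "fetched map output", 3, 100L);
            System.out.println(result);
        }
    }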



531. Matching text for the word cobertura:


  1. In the Java space, you can choose from a wide range of open source and commercial solutions, such as Checkstyle, PMD, Cobertura, FindBugs, and Sonar. Gradle in Action


  2. This recipe uses Cobertura (http://cobertura.sourceforge.net/) to find the code that is not covered by unit tests. Jenkins Continuous Integration Cookbook


  3. We can define the location of the Clover or Cobertura coverage XML result file and much more. Gradle Effective Implementation Guide


  4. To see how the plugins work, we are going to integrate code coverage metrics using the Cobertura plugin. Jenkins The Definitive Guide


  5. Now that we have the aggregated lcov.info file, we need to convert it to the Cobertura XML format. Testable JavaScript


  6. Cobertura and Clover Cobertura, based on the jcoverage Java library, is used to calculate the percentage of code accessed by tests and identify which parts of your source code lack test coverage. Sonar Code Quality Testing Essentials


  7. Setting up a code-coverage tool like Clover, EMMA, or Cobertura can be a big help with managing this issue. Spring in Practice


  8. Install the Maven Reporting plug-in, configuring the Cobertura code coverage report as well as the Surefire Testing report. Spring Roo in Action



532. Matching text for the word tarball:


  1. As of Ganglia 3.4.0, the web interface is a separate distribution tarball maintained in a separate source code repository. Monitoring with Ganglia


  2. If you prefer, a TAR file is available as well: http://github.com/couchapp/couchapp/tarball/master. CouchDB The Definitive Guide


  3. The locations of installed files will be different from the tarball method, and we can check the file layout with the following command: Hadoop Operations and Cluster Management Cookbook


  4. The following commands will download and unpack the tarball for CouchDB 0.10.0 Beginning CouchDB


  5. Appendix A includes the installation instructions for both the CDH tarball and the Apache Hadoop tarball. Hadoop in Practice


  6. Download and extract the tarball   into a directory that we call HIVE_HOME. Hadoop in Action


  7. Just click the link on the home page to download the latest release version as a gzipped tarball. Cassandra The Definitive Guide


  8. What just happened? Setting up Avro is a little involved; it is a much newer project than the other Apache tools we'll be using, so it requires more than a single download of a tarball. Hadoop Beginners Guide


  9. For tarball-based installs, this leaves quite a bit of flexibility but also an equal amount of ambiguity. Hadoop Operations


  10. Click on the link to the 5.4.1 release and you’ll find both tarball and zip formats available (the tarball is for Linux and Unix; the zip is for Windows) ActiveMQ in Action



533. Matching text for the word maintainable:


  1. Using interfaces for the callbacks (see the next chapter as well) leads to code that runs faster and is a lot more maintainable. Core Java Volume I Fundamentals 9th Edition


  2. The build tool of choice shouldn’t stand in the way of this effort; rather, it should provide you with a flexible and maintainable way to model your automation needs. Gradle in Action


  3. Build logic is described with a powerful DSL and empowers developers to create reusable and maintainable build logic. Gradle Effective Implementation Guide


  4. With MapReduce we’re not only battling with clean and maintainable code, but also with the performance of a job that will be distributed across hundreds of nodes to compute over terabytes and even petabytes of data. MapReduce Design Patterns


  5. Dealing with Imperative Rule Code Our current rule definition relies on 18 lines of doLast code, which is probably too much imperative code for a well-factored, maintainable build. Gradle Beyond The Basics


  6. I was at that point in late 2007 when I was facing the task of storing millions of documents in a system that needed to be fault-tolerant and scalable while still being maintainable by just me. HBase:The Definitive Guide


  7. With careful design6 and a relatively moderate amount of data, such a system is flexible and maintainable. Hadoop in Action


  8. It is about writing programs that are clear, correct, usable, robust, flexible, and maintainable. Effective Java 2nd Edition


  9. How we build this complex network in a maintainable fashion is the focus of this chapter. Hadoop Beginners Guide


  10. While you are always free to solve problems in a locally optimized way by writing Groovy code inside your build, the most maintainable builds will eventually refactor this code into organized plug-ins that expose new build language idioms to the build files that use them. Building and Testing with Gradle



534. Matching text for the word subqueries:


  1. Temporary tables created to hold the results of subqueries don’t have indexes, so this seems plausible. High Performance MySQL 3rd Edition


  2. The language supports nested subqueries, inter and intra-record aggregation, top-k, joins, user-defined functions, etc; some of these features are exemplified in the experimental section. Dremel - Interactive Analysis of WebScale Datasets


  3. If necessary, any node can send subqueries parallel to other nodes and merge responses to return the full response to the user. Mastering ElasticSearch


  4. Traditional SQL features like from clause subqueries, various types of joins – inner, left outer, right outer and outer joins, cartesian products, group bys and aggregations, union all, create table as select and many useful functions on primitive and complex types make the language very SQL like. Hive A Petabyte Scale Data Warehouse Using Hadoop


  5. Hive doesn’t support IN subqueries (at the time of this writing), but you can use a LEFT SEMI JOIN to do the same thing. Hadoop:The Definitive Guide 3rd Edition


  6. Use the UNION operation only when the columns in both subqueries. SQL Antipatterns


  7. Even if you have to use several complex subqueries or join statements, this is usually true. Cassandra The Definitive Guide


  8. We use parentheses to form subqueries, enabling advanced construction of BooleanQuerys: Lucene in Action 2nd Edition


  9. A broad selection of SQL features are directly supported, including subqueries, aggregate queries, update and delete statements, numerous SQL functions, and more. Pro JPA2


  10. However, it still lacked support for transactions, subqueries, foreign keys, stored procedures, and views. Understanding MySQL Internals



535. Matching text for the word invariants:


  1. Specifically, a per-cluster ResourceManager (RM) tracks resource usage and node liveness, enforces allocation invariants, and arbitrates contention among tenants. Apache Hadoop YARN Yet Another Resource Negotiator


  2. When creating a representative dataset, we try to reproduce any domain invariants we have identified: the minimum, maximum, and average number of relationships per node, the spread of different relationship types, property value ranges, and so on. Graph Databases


  3. It is now possible to account for cases where the masters are accessed directly and guarantee important system invariants. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  4. This data uses the sequence file format, first because it’s processed sequentially, and second because we couldn’t satisfy the map file invariants of sorted keys. Hadoop:The Definitive Guide 3rd Edition


  5. Several important prerequisites for code optimization, such as identifying loop invariants and loop bodies require expensive analysis in a control flow graph. Oracle JRockit The Definitive Guide


  6. Like a constructor, a builder can impose invariants on its parameters. Effective Java 2nd Edition


  7. Both scenarios break some of the invariants assumed by the security model and can inadvertently expose data to malicious applications. Hadoop Operations


  8. You should use this pattern for debug and trace statements and assertion preconditions, postconditions, and invariants. Java Performance Tuning 2nd Edition


  9. One path is first to establish conceptual-level invariants, for example the rule that water tank volumes must always be between zero and their capacities. Concurrent Programming in Java Design Principles and Pattern


  10. The indexing system starts by crawling every page on the web and processing them while maintaining a set of invariants on the index. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications
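
Quote 6 notes that a constructor (or builder) can impose invariants on its parameters, and quote 9 gives the example of a water tank whose volume must stay between zero and its capacity. A small sketch combining the two:

    // The constructor enforces the invariant 0 <= volume <= capacity, so every
    // WaterTank instance is valid for its whole lifetime.
    public final class WaterTank {
        private final double capacity;
        private final double volume;

        public WaterTank(double capacity, double volume) {
            if (capacity <= 0) {
                throw new IllegalArgumentException("capacity must be positive: " + capacity);
            }
            if (volume < 0 || volume > capacity) {
                throw new IllegalArgumentException("volume out of range: " + volume);
            }
            this.capacity = capacity;
            this.volume = volume;
        }

        public double getCapacity() { return capacity; }
        public double getVolume()   { return volume; }
    }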



536. Matching text for the word pointcuts:


  1. Pointcuts: A pointcut is a collection of joinpoints that you use to define when advice should be executed. Pro Spring 3


  2. Consult the Spring Security reference documentation for information on using security pointcuts and AOP to secure Java methods. Spring in Practice


  3. Note that the new pointcut we added uses some more advanced AspectJ syntax, illustrating Boolean logic and the other supported pointcuts, args, that can be used to specify the type declaration of arguments. Spring Security 3.1


  4. The AspectJ expression language for pointcuts offers a lot of flexibility to determine which methods to match against. Pro Spring Security


  5. An aspect encapsulates pointcuts and advices, much the way a class encapsulates member fields and methods. Programming Scala


  6. Aspects are often described in terms of advice, pointcuts, and join points. Spring in Action 4th Edition
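
A pointcut selects the join points at which advice runs. A hedged sketch in the @AspectJ annotation style used by Spring AOP (spring-aop and aspectjweaver are assumed to be on the classpath, and the com.example.service package is made up):

    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.aspectj.lang.annotation.Pointcut;

    @Aspect
    public class LoggingAspect {

        // Pointcut: every public method of every type in the (hypothetical) service package.
        @Pointcut("execution(public * com.example.service..*.*(..))")
        public void serviceMethods() { }

        // Advice bound to the pointcut above: runs before each matched method.
        @Before("serviceMethods()")
        public void logEntry() {
            System.out.println("entering a service method");
        }
    }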



537. Matching text for the word instantiation:


  1. Upon instantiation, the class expects you to provide the account credentials as well as the API’s URL, format, and version. Gradle in Action


  2. Using an in-mapper combiner is more efficient than using actual combiners — there is no additional overhead of reading and instantiation of key/value pairs. Professional Hadoop Solutions


  3. It is reset and reused every time the vertex needs to send a message, and helps to avoid unnecessary instantiation. Hadoop RealWorld Solutions Cookbook


  4. It is instantiated in parallel on many machines, with each instantiation processing one file or perhaps GFS chunk. Interpreting the Data Parallel Analysis with Sawzall


  5. You can set the block size during table instantiation like this: HBase in Action


  6. Instantiation of Generic Types It is illegal to create an instance of a generic type. Data Structures and Algorithm Analysis in Java 3rd Edition


  7. You can create an instance of FilterList while providing various parameters at instantiation time, using one of these constructors: HBase:The Definitive Guide


  8. A Supplier interface also gives you the flexibility to use lazy instantiation by not constructing an instance until the get method is called. Getting Started with Google Guava


  9. Its default instantiation is based on the resource configuration for your HDFS system. Hadoop in Action


  10. This approach is functionally equivalent to the public field approach, except that it is more concise, provides the serialization machinery for free, and provides an ironclad guarantee against multiple instantiation, even in the face of sophisticated serialization or reflection attacks. Effective Java 2nd Edition
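
Quote 8 mentions using a Supplier to get lazy instantiation, deferring construction until get() is called. A plain-Java sketch of that idea with java.util.function.Supplier:

    import java.util.function.Supplier;

    public class LazyInstantiation {
        // Nothing is constructed here: the Supplier only describes how to build the object.
        private static final Supplier<StringBuilder> REPORT =
                () -> new StringBuilder("expensive to build");

        public static void main(String[] args) {
            System.out.println("no instantiation has happened yet");
            StringBuilder report = REPORT.get();   // instantiation happens on this call
            System.out.println(report.append(", built on first use"));
        }
    }

Each call to get() above builds a fresh object; a memoizing supplier (for example Guava's Suppliers.memoize) caches the first result if the object should only ever be constructed once.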



538. Matching text for the word sdk:


  1. You will need to include the flume-ng-sdk JAR in the classpath of your Java application at runtime to use Flume's Log4J Appender. Apache Flume Distributed Log Collection for Hadoop


  2. With the master plan in place, you’ll get started by writing some tasks to interact with the CloudBees client SDK. Gradle in Action


  3. We can also use the options property to define a lot of properties we know from the Java SDK javadoc tool. Gradle Effective Implementation Guide


  4. The examples presented here use the Java SDK to show how to implement S3 data manipulation. Professional Hadoop Solutions


  5. OSNaming JNDI service provider is a part of the Java SE 7 SDK and JRE. Java EE 7 Specification


  6. Flume: highly reliable, configurable streaming data collection; ZooKeeper: coordination service for distributed applications; Hue: user interface framework and SDK for visual Hadoop applications; Whirr: library for running Hadoop, and HBase, in the cloud. In regard to HBase, CDH solves the issue of running a truly reliable cluster setup, as it has all the required HDFS patches to enable durability. HBase:The Definitive Guide


  7. In the Java SDK documentation, you may see it referred to as Java 1.7 Java 7 New Features Cookbook


  8. Details on downloading and using the SDK are available in the Amazon EMR documentation. Programming Hive


  9. As of this writing, a similar class is scheduled to be supported in an upcoming SDK release. Concurrent Programming in Java Design Principles and Pattern


  10. Once you have launched the JavaFX SDK setup executable you will see the start of the wizard in Figure 15-2 Java 7 Recipes A Problem-Solution Approach



539. Matching text for the word kvm:


  1. The libvirt project has created a common set of tools for managing virtualization resources on different virtualization platforms, currently including: Xen, QEMU, KVM, LXC, OpenVZ, User Mode Linux, VirtualBox, and VMware ESX and GSX. Monitoring with Ganglia


  2. The nova-compute process is primarily a worker daemon that creates and terminates virtual machine instances via hypervisor's APIs (XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware, etc.) bk-compute-adminguide-grizzly


  3. Real hardware, VMWare virtualization, or KVM virtualization will generally allow us to create a fork of a Redis process at roughly 10–20ms per gigabyte of. Redis in Action


  4. An example of this would be nova-compute accepting a message from the queue to create a new instance and then using the libvirt library to start a new KVM instance. Deploying OpenStack


  5. The process by which it does so is fairly complex but the basics are simple: Accept actions from the queue and perform a series of system commands, like launching a KVM instance, to carry them out while updating state in the database. bk-admin-guide-cloud-latest


  6. For performance or scalability testing, you would use real hardware in a cluster configuration.) You run the VM in VMWare, KVM, or VirtualBox, start the Impala service through the Cloudera Manager web interface, and then interact with Impala through the impala-shell interpreter or the ODBC and JDBC interfaces. Cloudera Impala


  7. This deployment ran central services on a set of servers running KVM. OpenStack Operations Guide



540. Matching text for the word dnsmasq:


  1. Something to look at is the amount of dnsmasq processes that are running. bk-compute-adminguide-grizzly


  2. Tracks IP address leases and records them in the database by using the dnsmasq dhcp-script facility. bk-admin-guide-cloud-latest


  3. Debugging DHCP Issues One common networking problem is that an instance boots successfully but is not reachable because it failed to obtain an IP address from dnsmasq, which is the DHCP server that is launched by the nova-network service. OpenStack Operations Guide



541. Matching text for the word uid:


  1. This script comes with OpenTSDB; it creates two tables—tsdb and tsdb-uid. HBase Administration Cookbook


  2. The serialVersionUID field is used to determine whether a compatible class definition was found during deserialization that matches that of a serialized object. Java Network Programming and Distributed Computing


  3. A Solaris subject that contains a principal with a UID of 0, for example, implies a Solaris principal with any other UID. Java Security 2nd Edition


  4. The tsdb-uid table maintains an index of globally unique values for use as metric tags. HBase in Action


  5. By default, the _uid field is stored (can be retrieved) and indexed (searchable) Elasticsearch The Definitive Guide


  6. The graphical version of the serialver program All later versions of the class must define the serialVersionUID constant to the same fingerprint as the original. Core Java Volume II Advanced Features 9th Edition


  7. If you do not specify this number explicitly by declaring a static final long field named serialVersionUID, the system automatically generates it at runtime by applying a complex procedure to the class. Effective Java 2nd Edition


  8. If the specified class declares a long serialVersionUID constant, the value of that field is displayed. Java in a Nutshell 5th Edition


  9. The easiest way to accomplish this is to ensure that the uid of the user the namenode process run as is the same on all namenodes. Hadoop Operations


  10. The “UID” column reports the user id responsible for performing the disk access. Java Performance
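
Several of the quotes discuss serialVersionUID. A minimal sketch of declaring it explicitly so that compatible class changes do not invalidate previously serialized instances:

    import java.io.Serializable;

    public class Invoice implements Serializable {
        // An explicit UID keeps old serialized copies readable after compatible edits;
        // without it, the runtime derives a fingerprint that changes with the class shape.
        private static final long serialVersionUID = 1L;

        private String customer;
        private long amountCents;
    }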



542. Matching text for the word prefixed:


  1. Note that the revision ID (rev) is prefixed with an underscore. Getting Started with CouchDB


  2. As we’ve seen, design documents are normal JSON documents, denoted by the fact that their DocID is prefixed with _design/ CouchDB The Definitive Guide


  3. The mentioned properties we just saw, should be prefixed with the index.merge. Mastering ElasticSearch


  4. Note Up until this point, all UNIX commands have been prefixed with the prompt symbol ($) Beginning CouchDB


  5. Quantifier variables are declared like regular variables, but the base type (usually int) is prefixed by a keyword specifying the form of quantifier. Interpreting the Data Parallel Analysis with Sawzall


  6. We can specify all the part prefixed subdirectories as input to the dependent job using wildcards. Hadoop MapReduce Cookbook


  7. Properties in the file are prefixed with whirr., but if they are passed as arguments on the command line, the prefix is dropped. Hadoop:The Definitive Guide 3rd Edition


  8. A job ID is a string prefixed with job_, followed by the cluster ID   (which is a timestamp of when the cluster was started), followed by an auto-incremented job number. Hadoop in Action


  9. Instead, use the Thrift API calls prefixed with system_ to configure your keyspaces and column families, as shown previously. Cassandra The Definitive Guide


  10. Type the following commands; the output should look as shown next, where the commands are included on the lines prefixed by $: Hadoop Beginners Guide



543. Matching text for the word portability:


  1. Some languages give you portability, garbage collection, and the like, but they don’t have much of a library, forcing you to roll your own if you want fancy graphics or networking or database access. Core Java Volume I Fundamentals 9th Edition


  2. Ganglia will not allow IPv6=>IPv4 mapping (for portability and security reasons) Monitoring with Ganglia


  3. In addition to less portability and JVM commitment, using the REST API you can connect to different major ElasticSearch versions if you are aware of the differences in REST endpoints and response returned by ElasticSearch. Mastering ElasticSearch


  4. Portability — Your application may not require all the extra features that an RDBMS provides (for example, typed columns, secondary indexes, transactions, advanced query languages, and so on) Professional Hadoop Solutions


  5. You learned that the advantages that a 100 percent client-side application can offer in terms of flexibility and portability. Beginning CouchDB


  6. The project was created by Doug Cutting (the creator of Hadoop) to address the major downside of Hadoop Writables: lack of language portability. Hadoop:The Definitive Guide 3rd Edition


  7. Java encourages portability of entire applications at the expense of incremental adoption by making it somewhat irksome to link against other languages. Chubby - The lock service for loosely-coupled distributed systems


  8. The JMS API minimizes the amount of enterprise messaging knowledge a Java programmer is required to possess in order to develop complex messaging applications, while still maintaining a certain amount of portability across JMS provider implementations. ActiveMQ in Action


  9. For script portability, it’s typical to omit the authority, only specifying it when referring to another distributed filesystem instance (including S3 buckets) Programming Hive


  10. This enhances portability, at the expense of needing to accommodate differences in the quality of implementation of these mappings. Concurrent Programming in Java Design Principles and Pattern



544. Matching text for the word codecs:


  1. Usually, each mapper is assigned a single block of a file as input (unless you have used certain compression codecs) Apache Flume Distributed Log Collection for Hadoop


  2. The framework supports the gzip, zip, sometimes LZO, and bzip2 compression codecs. Pro Hadoop


  3. Using codecs One of the most significant changes introduced by Apache Lucene 4.0 was the ability to alter how index files are written. Mastering ElasticSearch


  4. With the addition of new cryptographic codecs, a new token-based authentication mechanism that supports more authentication protocols, a unified authorization system that supports Attribute Based Access Control (ABAC) and supports policy enforcement using open standards and XACML, and changes to HBase to allow cell-level authorization, Hadoop will be able to move from isolated cluster environments with perimeter-level security to very secure deployments that can meet the requirements of highly secure environments. Professional Hadoop Solutions


  5. Meaning, once a file is compressed using any of the codecs that Hadoop provides, the file cannot be decompressed without the whole file being read. Hadoop RealWorld Solutions Cookbook


  6. If you’ve set up your cluster with LZO compression, you’ll need to create a flume-env.sh file and set the directory that contains the native compression codecs: Hadoop in Practice


  7. Table 4-2 lists the codecs that are available for Hadoop. Hadoop:The Definitive Guide 3rd Edition


  8. Hadoop supports a number of compression codecs (see table 6.4) Hadoop in Action


  9. This leaves administrators and developers to select more or less aggressive compression codecs based on the resource consumption and state of the cluster. Hadoop Operations


  10. The code itself declares a single class called Encoder, which exposes a method that calls both the Metaphone and Base64 codecs on a string argument. Building and Testing with Gradle
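
The quotes above are about Hadoop's pluggable compression codecs (gzip, bzip2, LZO, and so on). As a codec-agnostic illustration of what such a codec does to a stream, here is plain JDK gzip compression; in Hadoop the equivalent role is played by implementations of its CompressionCodec interface:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPOutputStream;

    public class GzipDemo {
        public static void main(String[] args) throws IOException {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 50; i++) {
                sb.append("one log line, repeated many times over\n");
            }
            byte[] raw = sb.toString().getBytes(StandardCharsets.UTF_8);

            ByteArrayOutputStream compressed = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(compressed)) {
                gzip.write(raw);   // the codec transforms the bytes on their way to the sink
            }
            System.out.println(raw.length + " bytes raw, " + compressed.size() + " bytes gzipped");
        }
    }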



545. Matching text for the word contrib:


  1. If you are certain the spikes are not normal, you can remove them from the RRDtool database using the contributed script removespikes.pl, which is usually shipped in the Ganglia release tarball under contrib/ Monitoring with Ganglia


  2. I once had a cluster that was using the Filesystem in Userspace (FUSE) contrib package to export HDFS as a read-only file system on a machine, which re-exported that file system via the Common Internet File System (CIFS) to a Windows server machine. Pro Hadoop


  3. The standard Lucene distribution contains several Directory implementations, including filesystembased and memory-based, Berkeley DB-based (in the Lucene contrib module), and several others. Professional Hadoop Solutions


  4. A tool to handle this is inside a contrib sub project. Cassandra High Performance Cookbook


  5. They’re so common that Hadoop ships with a contrib JAR called hadoop-datajoin to make things easier. HBase in Action


  6. The contrib package does all of the heavy lifting and only requires a handful of methods to be implemented. Hadoop in Practice


  7. Hadoop’s Fuse-DFS contrib module allows any Hadoop filesystem (but typically HDFS) to be mounted as a standard filesystem. Hadoop:The Definitive Guide 3rd Edition


  8. Hadoop has a contrib package called datajoin that works as a generic framework for data joining in Hadoop. Hadoop in Action


  9. If you tune your JVM options and see some success using a load-testing tool or something like the Python stress test in contrib, don’t get too excited. Cassandra The Definitive Guide


  10. Enabling alternative schedulers Each of the alternative schedulers is provided as a JAR file in capacityScheduler and fairScheduler directories within the contrib directory in the Hadoop installation. Hadoop Beginners Guide



546. Matching text for the word accumulator:


  1. The multiple Modifier and Reader tasks provide contention on the Accumulator object. Thinking in Java 4th Edition


  2. This same accumulator will be used for each member of a given group, so any changes made to it can be persisted. MongoDB The Definitive Guide 2nd Edition


  3. As you can see, you don’t need to implement how to filter using the attributes of a dish (for example, using an iterator and an accumulator) Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. We first created a UDF that extended the EvalFunc abstract class and implemented the Accumulator interface. Hadoop RealWorld Solutions Cookbook


  5. If a function can’t be algebraic but can operate on subsets of the data, it’s recommended to implement the Accumulator interface, which can be seen in figure 11.6 Hadoop in Practice


  6. Pig’s Algebraic or Accumulator interfaces for more efficient processing of the bag in chunks. Hadoop:The Definitive Guide 3rd Edition


  7. Invokers do not use the Accumulator or Algebraic interfaces, and are thus likely to be much slower and to use much more memory than UDFs written specifically for Pig. Programming Pig


  8. To accomplish aggregation in an external application, declare an accumulator before the loop that reads from the input stream and output the sum after the completion of the input: Programming Hive


  9. The LongAccumulator generalizes this idea to an arbitrary accumulation operation. Java SE 8 for the Really Impatient


  10. An in-memory table can be used to store temporary results when executing a complex set of queries; as a fast data accumulator that gets periodically flushed to disk; as a fast cache for a portion of the data from some large disk-based table; and in many other ways. Understanding MySQL Internals
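
One of the quotes above mentions LongAccumulator; the minimal sketch below (the class name AccumulatorSketch is hypothetical) shows that idea with a running maximum updated from a parallel stream. It is illustrative only, not drawn from the books cited.

    import java.util.concurrent.atomic.LongAccumulator;
    import java.util.stream.LongStream;

    public class AccumulatorSketch {
        public static void main(String[] args) {
            // The accumulator applies Long::max to every value it receives, starting from the identity.
            LongAccumulator max = new LongAccumulator(Long::max, Long.MIN_VALUE);
            LongStream.rangeClosed(1, 1_000).parallel().forEach(max::accumulate);
            System.out.println(max.get()); // prints 1000
        }
    }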



547. Matching text for the word jboss:


  1. Under the covers, Akka uses JBoss Netty and Google Protocol Buffers libraries to make the remoting happen seamlessly. Programming Concurrency on the JVM


  2. He has provided consultations and training to corporations around the world on various open source technologies such as Mule ESB, Activiti BPM, JBoss jBPM and Drools, Liferay Portal, Alfresco ECM, JBoss SOA, and cloud computing. Web Crawling and Data Mining with Apache Nutch


  3. If you can take an extra dependency, JBoss Marshalling25 is the way to go. Netty in Action MEAP V08


  4. This plugin can deploy across a range of server types and version ranges including Tomcat, GlassFish, and JBoss. Jenkins Continuous Integration Cookbook


  5. You can run it either as a stand-alone application, or deployed on a conventional Java application server such as Tomcat or JBoss. Jenkins The Definitive Guide


  6. He also knew the next company he started would be open source and build on the model just proved successful by JBoss and MySQL. RabbitMQ in Action


  7. MBeans can be managed remotely by a wide variety of management consoles such as the JConsole GUI that comes with Java and the webbased JMX Console that comes with the JBoss application server. Solr 3 Enterprise Search Server


  8. In these applications, JMX is simply one way of managing the container; JBoss Application Server, on the other hand, uses JMX as the primary way of interacting with the container. Cassandra The Definitive Guide


  9. Jeff Bleiel, our development editor, for his work liaising with Manning—without Jeff’s input, the book would not be what it is today; Gary Tully for his tireless technical reviews of the entire book; the Manning staff for their arduous work on the book; Filip Hanik for his assistance with Tomcat; Jan Bartel and Greg Wilkins for their assistance with Jetty; David Jencks and Kevan Miller for their assistance with Geronimo; and Jaikiran Pai for his assistance with JBoss. ActiveMQ in Action


  10. NetBeans Profiler supports profiling of many popular application servers and Web servers right out of the box such as GlassFish, Tomcat, Weblogic, and JBoss. Java Performance



548. Matching text for the word spittles:


  1. The Spittr application has two essential domain concepts: spitters (the users of the application) and spittles (the brief status updates that users publish) Spring in Action 4th Edition



549. Matching text for the word yaml:


  1. Cloud-config format Cloud-init supports a YAML-based config format that allows the user to configure a large number of options on a system. bk-compute-adminguide-grizzly


  2. Over time, some developers became grumpy about this: it requires a lot of explicit bean wiring, and XML compares unfavorably to terser formats like JSON, YAML, and perhaps even Java. Spring in Practice


  3. The listing below provides a minimal storm.yaml file with entries that you must override: Storm Blueprints Patterns for Distributed Real-time Computation


  4. By changing options in the cassandra.yaml configuration file, Cassandra uses specific directories created. Cassandra High Performance Cookbook


  5. YAML, which stands for Yet Another Markup Language, is also supported for the same purpose, and you can activate it by adding the format=yaml parameter to the HTTP request. Elasticsearch in Action MEAP V07 cHAPTER 1


  6. It is often used as a configuration file format; for example, the database configuration file used in Ruby on Rails applications, database.yml, is a YAML file. Pro Puppet


  7. Modify the configuration file to add your Storm cluster configuration (you can see all default configurations on defaults.yaml in the Storm repository) Getting Started with Storm


  8. The output from explain can be difficult to read in JSON, but it is easier when it is formatted as YAML. Elasticsearch The Definitive Guide


  9. For this example, we’ll define the schema in YAML and then load it, although you could also use client code to define it. Cassandra The Definitive Guide


  10. What just happened? We have just got the Selenium Grid to load our original Selenium Grid item by passing in the original YAML file. Selenium 2 Testing Tools Beginner's Guide
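
For readers who have not handled YAML from Java, here is a minimal sketch that parses a cassandra.yaml-style fragment into a Map. It assumes the third-party SnakeYAML library is on the classpath (SnakeYAML is not mentioned in the quotes above), and the class name YamlSketch is hypothetical.

    import org.yaml.snakeyaml.Yaml;

    import java.util.Map;

    public class YamlSketch {
        public static void main(String[] args) {
            // A tiny fragment in the style of cassandra.yaml or storm.yaml.
            String doc = "cluster_name: Test Cluster\nnum_tokens: 256\n";
            Map<?, ?> config = (Map<?, ?>) new Yaml().load(doc);
            System.out.println(config.get("cluster_name")); // Test Cluster
        }
    }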



550. Matching text for the word compactions:


  1. Managing compactions An HBase table has the following physical storage structure: HBase Administration Cookbook


  2. These tombstones, along with the dead values, are cleaned up on major compactions. Professional Hadoop Solutions


  3. Lowering this setting to three days allows compactions to reclaim the disk space sooner. Cassandra High Performance Cookbook


  4. Future versions of CouchDB may include more advanced compaction features such as queuing compactions and processing multiple compactions with one API call. Beginning CouchDB


  5. Minor compactions are designed to be minimally detrimental to HBase performance, so there is an upper limit on the number of HFiles involved. HBase in Action


  6. There are two types of compaction: minor compactions and major compactions. HBase:The Definitive Guide


  7. To address this, we made compactions interruptible to favor responsiveness over completion. Apache Hadoop Goes Realtime at Facebook


  8. Cassandra can use a lot of memory, and garbage collection and major compactions can have a big impact on performance. Cassandra The Definitive Guide


  9. Incoming read and write operations can continue while compactions occur. Bigtable - A Distributed Storage System for Structured Data


  10. In this experiment, we configured the benchmark with increasing numbers of customers and measured both the achieved performance and the number of cores used by all parts of the system including cores used for background maintenance such as Bigtable compactions. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



Number of source works referenced: 167
1、A Java ForkJoin Framework
2、A Little Riak Book
3、ActiveMQ in Action
4、Apache Flume Distributed Log Collection for Hadoop
5、Apache Hadoop Goes Realtime at Facebook
6、Apache Hadoop YARN Yet Another Resource Negotiator
7、Apache Kafka
8、Apache Solr 4 Cookbook
9、Apache Sqoop Cookbook
10、Beginning CouchDB
11、Beginning Scala
12、Bigtable - A Distributed Storage System for Structured Data
13、Building LinkedIn’s Real-time Activity Data1
14、Building and Testing with Gradle
15、Cassandra A Decentralized Structured Storage System
16、Cassandra High Performance Cookbook
17、Cassandra The Definitive Guide
18、Ceph A Scalable, High-Performance Distributed File System
19、Chubby - The lock service for loosely-coupled distributed systems
20、Cloudera Impala
21、Concurrent Programming in Java Design Principles and Pattern
22、Core Java Volume I Fundamentals 9th Edition
23、Core Java Volume II Advanced Features 9th Edition
24、CouchDB The Definitive Guide
25、Dapper a Large-Scale Distributed Systems Tracing Infrastructure
26、Data Structures and Algorithm Analysis in Java 3rd Edition
27、Deploying OpenStack
28、Dremel - Interactive Analysis of WebScale Datasets
29、Effective Java 2nd Edition
30、Elasticsearch The Definitive Guide
31、Elasticsearch in Action MEAP V07 cHAPTER 1
32、FlumeJava - Easy, Efficient Data-Parallel Pipelines
33、Functional Programming Patterns in Scala and Clojure
34、Functional Programming in Scala
35、Fundamental Networking in Java
36、GFS - The Google File System
37、Getting Started with CouchDB
38、Getting Started with Google Guava
39、Getting Started with Storm
40、Gradle Beyond The Basics
41、Gradle Effective Implementation Guide
42、Gradle in Action
43、Graph Databases
44、HBase Administration Cookbook
45、HBase in Action
46、HBase:The Definitive Guide
47、HDFS scalability the limits to growth
48、Hadoop Beginners Guide
49、Hadoop MapReduce Cookbook
50、Hadoop Operations
51、Hadoop Operations and Cluster Management Cookbook
52、Hadoop RealWorld Solutions Cookbook
53、Hadoop in Action
54、Hadoop in Practice
55、HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
56、Hadoop:The Definitive Guide 3rd Edition
57、High Performance MySQL 3rd Edition
58、Hive A Petabyte Scale Data Warehouse Using Hadoop
59、Incoop MapReduce for Incremental Computations
60、Interpreting the Data Parallel Analysis with Sawzall
61、Introduction to Tajo
62、JSR-133 Java Memory Model and Thread
63、Java 7 Concurrency Cookbook
64、Java 7 New Features Cookbook
65、Java 7 Recipes A Problem-Solution Approach
66、Java 8 in Action Lambdas, Streams and Functional-style Programming
67、Java EE 7 Essentials
68、Java EE 7 Specification
69、Java Network Programming 4th Edition
70、Java Network Programming and Distributed Computing
71、Java Performance
72、Java Performance Tuning 2nd Edition
73、Java SE 8 for the Really Impatient
74、Java Security 2nd Edition
75、Java Threads 3rd Edition
76、Java in a Nutshell 5th Edition
77、Jenkins Continuous Integration Cookbook
78、Jenkins The Definitive Guide
79、Kafka a Distributed Messaging System for Log Processing
80、Lucene in Action 2nd Edition
81、Mahout in Action
82、Managing Infrastructure with Puppet
83、MapReduce - Simplied Data Processing on Large Clusters
84、MapReduce Design Patterns
85、Mastering ElasticSearch
86、Megastore - Providing Scalable, Highly Available Storage for Interactive Services
87、Mesos A Platform for Fine-Grained Resource Sharing in the Data Center
88、MongoDB The Definitive Guide 2nd Edition
89、MongoDB in Action
90、Monitoring with Ganglia
91、Namenode High Availability
92、Netty in Action MEAP V08
93、NoSQL Distilled
94、Omega - flexible, scalable schedulers for large compute clusters
95、OpenStack Operations Guide
96、Oracle JRockit The Definitive Guide
97、Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications
98、Performance Testing With JMeter 2.9
99、PowerDrill - Processing a Trillion Cells per Mouse Click
100、Pregel  -  A System for Large-Scale Graph Processing
101、Pro Hadoop
102、Pro JPA2
103、Pro Java 7 NIO.2
104、Pro Puppet
105、Pro Spring 3
106、Pro Spring MVC
107、Pro Spring Security
108、Professional Hadoop Solutions
109、Programming Concurrency on the JVM
110、Programming Hive
111、Programming Pig
112、Programming Scala
113、Programming Scala Tackle Multi-Core Complexity on the Java Virtual Machine
114、Programming in Scala 2nd Edition
115、RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.
116、RCFile A Fast and Space-efficient Data Placement Structure in MapReduce-based Warehouse Systems
117、RabbitMQ in Action
118、Redis Cookbook
119、Redis in Action
120、Resilient Distributed Datasets - A Fault-Tolerant Abstraction for In-Memory Cluster Computing
121、SQL Antipatterns
122、Scala for the Impatient
123、Scala in Action
124、Scala in Depth
125、Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr
126、Scaling CouchDB
127、Selenium 2 Testing Tools Beginner's Guide
128、Selenium Testing Tools Cookbook
129、Solr 3 Enterprise Search Server
130、Solr 3.1 Cookbook
131、Solr in action
132、Sonar Code Quality Testing Essentials
133、Spanner - Google’s Globally-Distributed Database
134、Spring Batch in Action
135、Spring Data Modern Data Access for Enterprise Java
136、Spring Dynamic Modules in Action
137、Spring Integration in Action
138、Spring Roo in Action
139、Spring Security 3.1
140、Spring in Action 4th Edition
141、Spring in Practice
142、Steps in Scala An Introduction to Object-Functional Programming
143、Storage Architecture and Challenges
144、Storm Blueprints Patterns for Distributed Real-time Computation
145、Storm Real-Time Processing Cookbook
146、Tenzing - A SQL Implementation On The MapReduce Framework
147、Testable JavaScript
148、The Definitive Guide to MongoDB
149、The Hadoop Distributed File System
150、The Java Language Specification Java SE 7 Edition
151、The Java Virtual Machine Specification Java SE 7 Edition
152、The Little Redis Book
153、The anatomy of a large-scale hypertextual Web search engine
154、The anatomy of a large-scale hypertextual Web search engine short
155、Thinking in Java 4th Edition
156、Tika in Action
157、Understanding MySQL Internals
158、Web Crawling and Data Mining with Apache Nutch
159、Web search for a planet the google cluster architecture
160、Zab High-performance broadcast for primary-backup systems
161、ZooKeeper Distributed Process Coordination
162、admin-user-guide-trunk
163、bk-admin-guide-cloud-latest
164、bk-compute-adminguide-grizzly
165、elasticsearch. The Road to a Distributed, (Near) Real Time, Search Engine
166、high-availability-guide
167、user-guide

2000 High-Frequency Specialized Words in Software Development with Selected Example Sentences (Part 5)


superword is an English-vocabulary analysis tool implemented in Java. It mainly studies patterns of phonetic and orthographic similarity between English words, prefix and suffix patterns, and similarity relationships between words.

1101. Matching text for the word uuids:


  1. For example, when accessing /_uuids, you get a list of UUIDs from the system. Getting Started with CouchDB


  2. The images in the replicated glance server preserve the uuids, metadata, and image data from the original. bk-compute-adminguide-grizzly


  3. When you write your first programs, we recommend assigning your own UUIDs. CouchDB The Definitive Guide


  4. Randomly distributed keys could be usernames, email addresses, UUIDs, MD5 hashes, or any other key that has no identifiable pattern in your dataset. MongoDB The Definitive Guide 2nd Edition


  5. None This is an ordered list of firewall rule uuids. bk-admin-guide-cloud-latest


  6. The columns are sorted chronologically, using unique IDs (UUIDs) for column names to avoid clashes. Cassandra High Performance Cookbook


  7. Request Parameters: count (integer, the number of UUIDs to return, default 1) Beginning CouchDB


  8. The simplest way to generate mostly monotonic document IDs is to use the default value of sequential for the algorithm option in the uuids configuration section and let CouchDB generate your document IDs. Scaling CouchDB


  9. These can be simple numbers but also UUIDs† and so on. HBase:The Definitive Guide


  10. Both row keys and column names can be strings, like relational column names, but they can also be long integers, UUIDs, or any kind of byte array. Cassandra The Definitive Guide
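
A minimal sketch of the standard java.util.UUID class, which produces the kind of identifiers the quotes above use as document IDs and row keys; the class name UuidSketch and the sample e-mail address are hypothetical, and the snippet is not drawn from any of the books cited.

    import java.util.UUID;

    public class UuidSketch {
        public static void main(String[] args) {
            UUID random = UUID.randomUUID();  // version 4, randomly generated
            UUID named = UUID.nameUUIDFromBytes("user@example.com".getBytes());  // version 3, name-based (MD5)
            System.out.println(random);
            System.out.println(named + " (version " + named.version() + ")");
        }
    }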



1102. Matching text for the word ycsb:


  1. Therefore, we will start by introducing how to use Yahoo! Cloud Serving Benchmark (YCSB) to measure (benchmark) performance of a HBase cluster. HBase Administration Cookbook


  2. The Yahoo! Cloud Serving Benchmark (YCSB) provides benchmarking for the bases of comparison between NoSQL systems. Cassandra High Performance Cookbook


  3. Although YCSB is built for comparing systems, you can use it to test the performance of any of the databases it supports, including HBase. HBase in Action


  4. Installation YCSB is available in an online repository only, and you need to compile a binary version yourself. HBase:The Definitive Guide



1103. Matching text for the word des:


  1. For the DES algorithm, there are a total of 256 possible keys. Pro Spring 3


  2. All the native SerDes and complex types supported in Hive are also implementations of these interfaces. Hive A Petabyte Scale Data Warehouse Using Hadoop


  3. This secret key can then be used for a variety of operations: it can be used directly for creating MACs, or it can be converted into a DES key for use in a cipher. Java Security 2nd Edition


  4. The simplicity of the format has a lot going for it, such as making it easy to process with other tools, including MapReduce programs or Streaming, but there are more compact and performant binary SerDes that you might consider using. Hadoop:The Definitive Guide 3rd Edition


  5. The Data Encryption Standard (DES) is a venerable block cipher with a key length of 56 bits. Core Java Volume II Advanced Features 9th Edition


  6. In order to use this package, you need to have a basic understanding of cryptographic algorithms such as DES. Java in a Nutshell 5th Edition


  7. Besides key lengths, there’s an important difference between DES/AES and RC4-based ciphers. Java Network Programming 4th Edition


  8. Third-party input and output formats and SerDes can be specified, a feature which permits users to customize Hive for a wide range of file formats not supported natively. Programming Hive



1104. Matching text for the word stax:


  1. Java can process XML input using different techniques, including using a Streaming API for XML (StAX) parser. Spring Batch in Action


  2. In this code you use Java’s built-in Streaming API for XML (StAX) parser to extract the key and value for each property and output them. Hadoop in Practice


  3. Produce/consume JSON text in a streaming fashion (similar to StAX API for XML) • Build a Java object model for JSON text (similar to DOM API for XML) Java EE 7 Essentials


  4. In the future, support for streaming through StAX and/or SAX may be added to Spring Integration. Spring Integration in Action


  5. In the following sections, we discuss the streaming parsers supplied by the Java library: the venerable SAX parser and the more modern StAX parser that was added to Java SE 6 Core Java Volume II Advanced Features 9th Edition


  6. As you can see, we’ve used a simple StAX API (http://mng.bz/0S2s) to create an XML representation of our stock data. ActiveMQ in Action


  7. If using JAXP, the first step in parsing a document is the creation of the parser, SAXParser, XMLStreamReader, or DocumentBuilder depending on the API selected (SAX, StAX, and DOM, respectively) Java Performance


  8. The newer, simpler, and more efficient model is the Streaming API for XML (StAX) Java 7 Recipes A Problem-Solution Approach
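
A minimal sketch of the streaming (pull) style the StAX quotes describe: an XMLStreamReader walks the document event by event instead of building a DOM. The class name StaxSketch and the sample XML are hypothetical.

    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    import java.io.StringReader;

    public class StaxSketch {
        public static void main(String[] args) throws Exception {
            String xml = "<property><name>fs.defaultFS</name><value>hdfs://localhost</value></property>";
            XMLStreamReader reader =
                    XMLInputFactory.newInstance().createXMLStreamReader(new StringReader(xml));
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    System.out.println(reader.getLocalName());  // property, name, value
                }
            }
            reader.close();
        }
    }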



1105. Matching text for the word frm:


  1. Thus, when you create a table named MyTable, MySQL stores the table definition in MyTable.frm. High Performance MySQL 3rd Edition


  2. The Table Manager is responsible for creating, reading, and modifying the table definition files (.frm extension), maintaining a cache of table descriptors called table cache, and managing table-level locks. Understanding MySQL Internals



1106. Matching text for the word rdbmss:


  1. Some of the biggest face-palm moments we’ve had were when someone published flawed benchmarks comparing MySQL to other relational database management systems (RDBMSs) with the default settings. High Performance MySQL 3rd Edition


  2. Similar to traditional relational database management systems (RDBMSs), data in HBase is organized in tables. Professional Hadoop Solutions


  3. Some RDBMSs ship powerful tools for managing these kinds of issues, but many don’t. HBase in Action


  4. We’ve seen similar contention with coarse-grained table-level locking in RDBMSs, where writes are latent even when there’s logically no contention. Graph Databases


  5. Examples include working with relational data in RDBMSs, structured files, and HBase. Hadoop in Practice


  6. However, there are several differences between the two languages, and between Pig and relational database management systems (RDBMSs) in general. Hadoop:The Definitive Guide 3rd Edition


  7. The HBase data model is not the relational approach we saw used in Hive and all other RDBMSs. Hadoop Beginners Guide


  8. Some applications use in-memory databases, while others use traditional relational database management systems (RDBMSs) Java 7 Recipes A Problem-Solution Approach


  9. Because MongoDB and most RDBMSs use the same data structure for their indexes, advice for managing indexes in both of these systems is compatible. MongoDB in Action


  10. Additionally, Sinfonia’s mini-transactions have limited semantics compared to the transactions provided by RDBMSs or Percolator: the user must specify a list of items to compare, read, and write prior to issuing the transaction. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



1107. Matching text for the word chris:


  1. In the 4th edition, Chris Grindstaff was very helpful during the development of the SWT section, and Sean Neville wrote the first draft of the Flex section for me. Thinking in Java 4th Edition


  2. While talking one afternoon about how to solve an unrelated problem, Chris mentioned Redis as a database that I might find interesting (given my education in theoretical computer science) Redis in Action


  3. Chris uses is that CouchDB has felt like a boulder we’ve been pushing up a hill. CouchDB The Definitive Guide


  4. Also to the authoring team, including Chris, Manuel and Brent, in providing me extraordinary support in writing this book. Pro Spring 3


  5. Another big thanks goes to Chris Nelson for keeping all of us focused and for his advice on the book; it was a pleasure working with you. Pro Spring MVC


  6. So I consulted with Chris Mattmann, another Nutch committer with whom I had worked, about the potential for refactoring all these disparate tools in a common and standardized project. Tika in Action


  7. A search for "Chris" would match on "Chris Isaak" since he is tagged with pop, but "Chris Cagle" would be filtered out since he plays only American and country music. Solr 3 Enterprise Search Server


  8. After several years of experience with the language, and significant contributions by Ed Frank, Patrick Naughton, Jonathan Payne, and Chris Warth it was retargeted to the Internet, renamed, and substantially revised to be the language specified here. The Java Language Specification Java SE 7 Edition


  9. I would like to thank my wife Karen for editing and proofreading my chapters; my children Chris, Connor, and Michael for keeping the zombies at bay; and my dog Rex for forcing me to go on walks in the snow and the rain. ActiveMQ in Action


  10. Chris Beams for his legitimizing of Gradle through SpringSource projects. Building and Testing with Gradle



1108. Matching text for the word multiprocessor:


  1. Threads in Java can take advantage of multiprocessor systems if the base operating system does so. Core Java Volume I Fundamentals 9th Edition


  2. These kinds of issues can also be exposed or amplified if a program is moved to a multiprocessor system. Thinking in Java 4th Edition


  3. In this chapter, we show a process for exploiting the power of multiprocessor. Java Threads 3rd Edition


  4. Such an approach is particularly interesting if you want to leverage multicore or multiprocessor hardware, as illustrated in figure 13.1 Spring Batch in Action


  5. Figure 10.6 A second optimal solution for the multiprocessor case. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. As with any cache mechanism, invalidation issues are a problem, especially on multiprocessor systems where we often get the situation that two processors want to access the same memory at the same time. Oracle JRockit The Definitive Guide


  7. The concurrent garbage collector should be especially effective on multiprocessor machines. Java Performance Tuning 2nd Edition


  8. Being able to make efficient use of CPU cycles can be challenging for multithreaded applications running on multiprocessor and multicore. Java Performance


  9. The cost advantages of using inexpensive, PC-based clusters over high-end multiprocessor servers can be quite substantial, at least for a highly parallelizable application like ours. Web search for a planet the google cluster architecture


  10. Each computer system may be a uniprocessor, a multiprocessor, or even a cluster of machines administered as a single unit and sharing a common operating system. Concurrent Programming in Java Design Principles and Pattern



1109. Matching text for the word cursors:


  1. Table 8.3 lists the constants to use with this method along with what the cursors look like under Windows. Core Java Volume I Fundamentals 9th Edition


  2. Create an application and change the cursor to one of the stock cursors in the Cursor class. Thinking in Java 4th Edition


  3. MySQL’s implementation of stored code is pretty limited—execution plan caches are per-connection, cursors are materialized as temporary tables, there’s very. High Performance MySQL 3rd Edition


  4. Cursors The database returns results from find using a cursor. MongoDB The Definitive Guide 2nd Edition


  5. Reading with ORM cursors implies that code responsible for managing domain classes doesn’t use a first-level cache. Spring Batch in Action


  6. Scans Now that we have discussed the basic CRUD-type operations, it is time to take a look at scans, a technique akin to cursors† in database systems, which make use of the underlying sequential, sorted storage layout HBase is providing. HBase:The Definitive Guide


  7. In most cases, you can use one of the out-of-the box reader implementations and customize it for your application’s needs so that you can avoid the critical yet tedious tasks of dealing with file line reading, database cursors, and so on. Spring Integration in Action


  8. Additionally, the system usage memory limit determines the point at which messages are offloaded to disk by the pending message cursors. ActiveMQ in Action


  9. Cursors, found in many database systems, return query result sets in batches for efficiency iteratively. MongoDB in Action



1110. Matching text for the word filesort:


  1. Just because a query does a filesort with a temporary table doesn’t mean it is a problem. High Performance MySQL 3rd Edition


  2. On the code level, the turning point is in the following lines inside sortlength( ) in sql/filesort.cc: Understanding MySQL Internals



1111. Matching text for the word runtimes:


  1. We report runtimes for binary trees (to study scaling properties) and lognormal random graphs (to study the performance in a more realistic setting) using various graph sizes with the weights of all edges implicitly set to 1 Pregel  -  A System for Large-Scale Graph Processing


  2. This trade-off is illustrated in figure 13.7, where the runtimes for hypothetical sequential and parallel scalable algorithms are compared. Mahout in Action


  3. If the runtimes of tasks are too long, a single task may hold the cluster for a long period, or retrying a failed task becomes expensive. Pro Hadoop


  4. Any preprocessing of that data that can be done to improve execution time and reduce the byte count should be done to improve runtimes. MapReduce Design Patterns


  5. From there, we use sum() to calculate the sum off all runtimes, and we divide by the number of runs to get the average: Functional Programming Patterns in Scala and Clojure


  6. This way you’ll have a good sense of the expected compression and runtimes for the codecs. Hadoop in Practice


  7. An implementation of TaskRuntimeEstimator used by Speculator instances that provides estimates for task runtimes (MapReduce 2 only) Hadoop:The Definitive Guide 3rd Edition


  8. He has a background in computer security but has worked with runtimes since 1999 Oracle JRockit The Definitive Guide


  9. All objects may have to fit into memory without any garbage collection for these runtimes. Java Performance Tuning 2nd Edition


  10. Modern JVMs continue to evolve by integrating more sophisticated optimizations, runtimes, and garbage collectors. Java Performance



1112. Matching text for the word balancers:


  1. Incoming web requests are handled by load balancers that distribute the requests to clusters of web servers. Monitoring with Ganglia


  2. This will include all facets of the application and all supporting system resources, including, but not limited to, infrastructure, network, database, servers, and load balancers. Performance Testing With JMeter 2.9


  3. A wide variety of load-balancing solutions are available, ranging from peer-based implementations such as Wackamole (http://www.backhand.org/wackamole/) to  DNS, LVS (Linux Virtual Server; http://www.linuxvirtualserver.org), hardware load balancers, TCP proxies, MySQL Proxy, and managing the load balancing in the application. High Performance MySQL 3rd Edition


  4. If there are no other balancers active, the mongos will take a cluster-wide lock from the config servers and do a balancing round. MongoDB The Definitive Guide 2nd Edition


  5. You may want to consider having multiple load balancers so that you can remove the load balancer as a single point of failure. Scaling CouchDB


  6. In this domain, several data centers support many applications on behalf of many customers using different pieces of infrastructure, from virtual machines to physical load balancers. Graph Databases


  7. As such, it is the home of the Staged Event-Driven Architecture (SEDA) classes, the gossiper, the balancers, and the classes that flush data in memtables. Cassandra The Definitive Guide


  8. Along with this, EC2 offers a suite of complimentary services, including load balancers, static IP addresses, high-performance additional virtual disk drives, and many more. Hadoop Beginners Guide


  9. The most common network problems included inadequate bandwidth somewhere along the communication route, and undersized, misconfigured, or incompatible routers, switches, firewalls, and load balancers. Java Performance Tuning 2nd Edition


  10. The load is balanced to the various systems through the use of hardware or software load balancers. Java Performance



1113. Matching text for the word spliterator:


  1. This uses the fork/join framework introduced in Java 7 and the Spliterator abstraction that you’ll learn about in the next chapter. Java 8 in Action Lambdas, Streams and Functional-style Programming
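
Since the term appears in only one quote, a minimal sketch may help: a Spliterator can hand off part of its elements via trySplit so two halves can be traversed independently, which is the basis of parallel streams. The class name SpliteratorSketch is hypothetical.

    import java.util.Arrays;
    import java.util.List;
    import java.util.Spliterator;

    public class SpliteratorSketch {
        public static void main(String[] args) {
            List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8);
            Spliterator<Integer> second = numbers.spliterator();
            Spliterator<Integer> first = second.trySplit();  // carves off the first half
            first.forEachRemaining(n -> System.out.print(n + " "));   // 1 2 3 4
            System.out.println("| split point");
            second.forEachRemaining(n -> System.out.print(n + " "));  // 5 6 7 8
        }
    }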



1114. Matching text for the word sha:


  1. When the first connection to a resource manager is established as part of the transaction, a resource manager specific local transaction is started on the connect can sha transact. Java EE 7 Specification


  2. A message digest may be implemented by a particular algorithm, such as MD5 or SHA. Java Security 2nd Edition


  3. Gradle relies heavily on hashes to optimize download performance; when resolving a dependency, it will first attempt to download the much-smaller SHA from the repo, skipping the download if it determines that it already has that content in the artifact cache. Gradle Beyond The Basics


  4. The newfunction block takes the incoming argument and we process it, first adding in support for working with SHA hashes by requiring the sha1 library, and then passing the argument to the hexdigest method. Pro Puppet


  5. They’re both faster than cryptographic hashers such as SHA-1 and produce good distributions. Hadoop in Practice


  6. The fingerprint is obtained by ordering the descriptions of the class, superclass, interfaces, field types, and method signatures in a canonical way, and then applying the so-called Secure Hash Algorithm (SHA) to that data. Core Java Volume II Advanced Features 9th Edition


  7. A popular algorithm in the past has been SHA-1, but researchers have. SQL Antipatterns


  8. Otherwise, a unique version number is computed by applying the Secure Hash Algorithm (SHA) to the API defined by the class. Java in a Nutshell 5th Edition


  9. Subclassing Thread Consider a program that calculates the Secure Hash Algorithm (SHA) digest for many files. Java Network Programming 4th Edition


  10. Even these hashing families have some theoretical vulnerabilities; however, no one has yet demonstrated a practical case of creating intentional collisions for the SHA family of hashes. The Definitive Guide to MongoDB
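
A minimal sketch of computing a digest from the SHA family discussed above, using only java.security.MessageDigest; the class name ShaSketch is hypothetical. SHA-256 is used here because SHA-1 is no longer recommended for new designs.

    import java.security.MessageDigest;

    public class ShaSketch {
        public static void main(String[] args) throws Exception {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            byte[] digest = sha.digest("hello".getBytes("UTF-8"));
            // Render the 32-byte digest as 64 hexadecimal characters.
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            System.out.println(hex);
        }
    }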



1115. Matching text for the word unary:


  1. The unary minus (-) and unary plus (+) are the same operators as binary minus and plus. Thinking in Java 4th Edition


  2. For example, the operations of an IntStream are restricted to produce primitive integers: the map operation of an IntStream takes a lambda that takes an int and produces an int (an IntUnaryOperator) Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. In this case, however, the name of the method has “unary_” prepended to the operator character. Programming in Scala 2nd Edition


  4. Here’s an example of defining unary operators on a Sample class: Programming Scala Tackle Multi-Core Complexity on the Java Virtual Machine


  5. It is also possible for a node to have only one child, as is the case with the unary minus operator. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. Some operators apply unary numeric promotion to a single operand, which must produce a value of a numeric type: • If the operand is of compile-time type Byte, Short, Character, or Integer, it. The Java Language Specification Java SE 7 Edition


  7. But the identity function is special: it returns its argument unmodified, so we know that it is typesafe to use it as a UnaryFunction<T>, whatever the value of T. Effective Java 2nd Edition


  8. Unary operators and assignment operators are evaluated from right to left. Java in a Nutshell 5th Edition


  9. The unary negative operator (-) is also supported for both integers and floating-point numbers. Programming Pig


  10. Look carefully at the return type of the compose method of UnaryOperator<T>. Java SE 8 for the Really Impatient
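
A minimal sketch of a unary operator in Java 8 terms: UnaryOperator<T> is simply a Function whose argument and result share one type, and compose (inherited from Function) chains two of them. The class name UnarySketch is hypothetical.

    import java.util.function.UnaryOperator;

    public class UnarySketch {
        public static void main(String[] args) {
            UnaryOperator<Integer> negate = x -> -x;
            UnaryOperator<Integer> increment = x -> x + 1;
            // compose applies its argument first: increment, then negate.
            System.out.println(negate.compose(increment).apply(4)); // -5
        }
    }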



1116. Matching text for the word multiuser:


  1. These benchmarks measure online transaction processing (OLTP) throughput and are most suitable for interactive multiuser applications. High Performance MySQL 3rd Edition


  2. In a multiuser environment, multiple jobs can be submitted and run simultaneously. Hadoop Operations and Cluster Management Cookbook


  3. Because it’s embedded, Derby can’t be shared between users, and as such it can’t be used in a multiuser environment where the metastore needs to be shared. Hadoop in Practice


  4. The default in MapReduce 1 is the original FIFO queue-based scheduler, and there are also multiuser schedulers called the Fair Scheduler and the Capacity Scheduler. Hadoop:The Definitive Guide 3rd Edition


  5. OS/2 and the Integer multiuser spreadsheet engine for the JVM. Beginning Scala


  6. When you get to a multiuser production environment, you’ll have to narrow down your jobs by looking for your Hadoop user name and the name of your current job. Hadoop in Action


  7. Other ecosystem projects do have additional information or higher level constructs that must be secured in multiuser environments. Hadoop Operations


  8. Multiuser response times depending on the number of users (if applicable) Java Performance Tuning 2nd Edition


  9. These requirements include: Collaborative multiuser environments Hadoop enables new classes of analysis that were prohibitive computationally and economically with traditional RDBMS technologies. Programming Hive



1117. Matching text for the word committers:


  1. Over Ganglia’s history, we’ve had nearly 40 active committers and hundreds of people who have submitted patches and bug reports. Monitoring with Ganglia


  2. Chris I would like to acknowledge all the committers of CouchDB, the people sending patches, and the rest of the community. CouchDB The Definitive Guide


  3. Code changes can be submitted as pull requests and undergo a close review process by the core committers before making it into the code base. Gradle in Action


  4. Last but not the least, Apache Hadoop YARN continues to be a community driven open source project and owes much of its success to the Apache Hadoop YARN and MapReduce communities— a big thanks to all the contributors and committers who have helped YARN in every way possible. Apache Hadoop YARN Yet Another Resource Negotiator


  5. Typically, the key committers have insights into the problem and the code base and help code get committed faster. Cassandra High Performance Cookbook


  6. Todd’s version is downstream of the master version maintained by Kevin Weil and other committers. Hadoop in Practice


  7. Output Committers Hadoop MapReduce uses a commit protocol to ensure that jobs and tasks either succeed or fail cleanly. Hadoop:The Definitive Guide 3rd Edition


  8. Code was there for more than a year in Apache and has active contributors and committers list. Namenode High Availability


  9. Cassandra was accepted into the Apache Incubator, and by the time it graduated in March 2010, it had become a true open source success story, with committers from Rackspace, Digg, Twitter, and other companies that wouldn’t have written their own database from scratch, but together built something important. Cassandra The Definitive Guide


  10. Cloudera employs many of the Apache Hadoop committers (the people who have privileges to commit code to the Apache source repositories) who work on Hadoop full-time. Hadoop Operations



1118. Matching text for the word multivalued:


  1. You can bind numeric attributes to documents, including multivalued attributes. High Performance MySQL 3rd Edition


  2. Sorting with multivalued fields With versions prior to 0.90, ElasticSearch had problems with sorting on the field that had multiple values in their contents. Mastering ElasticSearch


  3. Cardinality Prescribes whether the metadata property is multivalued, for example, metadata describing a file’s set of MimeType names. Tika in Action


  4. Lucene's document structure is flat, but it does support multivalued fields—that is a field with an array of values. Solr 3 Enterprise Search Server


  5. For example, if we had a field named title, which was multivalued, we could set this parameter to split the data like this: Solr 3.1 Cookbook


  6. With Pig, the data structures are much richer, typically being multivalued and nested, and the set of transformations you can apply to the data are much more powerful. Hadoop:The Definitive Guide 3rd Edition


  7. A Catch all field should not be stored as it is populated from another field #B Destination field must be multivalued if any of the source fields are multi-valued. Solr in action


  8. We’ll first consider how analysis is impacted by multivalued fields. Lucene in Action 2nd Edition


  9. Bidirectional many-to-one relationships are explained later as part of the discussion of multivalued bidirectional associations. Pro JPA2


  10. Please be careful when using this component on the multivalued fields as it can be a performance bottleneck. Apache Solr 4 Cookbook



1119. Matching text for the word jstatd:


  1. If a remote host is specified, it must be running the jstatd daemon. Java in a Nutshell 5th Edition


  2. The remote system must be configured to run the jstatd daemon. Java Performance



1120. Matching text for the word xcp:


  1. The nova-compute process is primarily a worker daemon that creates and terminates virtual machine instances via hypervisor''s APIs (XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware, etc.) bk-compute-adminguide-grizzly


  2. Xen Yes libvirt Most popular (along with XCP/XenServer) technology for larger scale and production deployments. Deploying OpenStack


  3. XenAPI (XenServer/XCP) Compute uses the XenAPI agent to inject passwords into guests when using the XenAPI hypervisor backend. bk-admin-guide-cloud-latest



1121. Matching text for the word jmenu:


  1. JMenuItem add(JMenuItem item) adds a menu item (or a menu) Core Java Volume I Fundamentals 9th Edition


  2. You can use an Icon inside a JLabel or anything that inherits from AbstractButton (including JButton, JCheckBox, JRadioButton, and the different kinds of JMenuItem) Thinking in Java 4th Edition


  3. You will notice the apply() method creates a JMenuItem instance with an inner class definition containing an actionPerformed() method. Java 7 Recipes A Problem-Solution Approach
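
A minimal Swing sketch of the JMenu/JMenuItem wiring the quotes refer to: a menu bar with one menu and one item whose action prints a message. The class name JMenuSketch is hypothetical and the snippet is illustrative only.

    import javax.swing.JFrame;
    import javax.swing.JMenu;
    import javax.swing.JMenuBar;
    import javax.swing.JMenuItem;
    import javax.swing.SwingUtilities;

    public class JMenuSketch {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("JMenu sketch");
                JMenu fileMenu = new JMenu("File");
                JMenuItem exitItem = new JMenuItem("Exit");
                exitItem.addActionListener(e -> System.out.println("Exit selected"));
                fileMenu.add(exitItem);          // JMenuItem add(JMenuItem item)
                JMenuBar bar = new JMenuBar();
                bar.add(fileMenu);
                frame.setJMenuBar(bar);
                frame.setSize(300, 200);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }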



1122. Matching text for the word classname:


  1. In fact, if I had my own custom source written, I would use its classname for the source's type parameter. Apache Flume Distributed Log Collection for Hadoop


  2. The classname parameter specifies the name of the class, and the codebase parameter specifies a URL to the resource (though this can also be a local file:// URL) Java Network Programming and Distributed Computing


  3. You may list their classname or their class label (the string that appears in JMeter's UI) here, and they will no longer appear in the menus. Performance Testing With JMeter 2.9


  4. Finally, the option java_outer_ classname declaration is the output class name that will be used. Hadoop RealWorld Solutions Cookbook


  5. The classname is the full classname of the login module you want to use. Java Security 2nd Edition


  6. In the previously sketched-out plug-in, we applied the plug-in directly by its classname with the line apply plugin: LiquibasePlugin at the top of the build. A properly packaged Liquibase plug-in should be applied just like the core plug-ins are, with the line apply plugin: 'liquibase' Gradle Beyond The Basics


  7. When the hadoop command is invoked with a classname as the first argument, it launches a Java Virtual Machine (JVM) to run the class. Hadoop:The Definitive Guide 3rd Edition


  8. To enable assertions within a specific class, use -ea followed by a colon and the classname: Java in a Nutshell 5th Edition


  9. The first is the full package, classname, and method name. Programming Pig


  10. Use the Services API (as detailed in the JAR specification), if available, to determine the classname. Java Performance



1123. Matching text for the word authenticates:


  1. When a user successfully authenticates, a hash is generated from the username and a secret key and is stored in a cookie and made available to the rest of gweb. Monitoring with Ganglia


  2. It authenticates container leases, manages containers’ dependencies, monitors their execution, and provides a set of services to containers. Apache Hadoop YARN Yet Another Resource Negotiator


  3. To begin a session, a client first authenticates to the AS of the KDC. Professional Hadoop Solutions


  4. The web server then authenticates the r in the specified realm (referred to as HTTP Realm in this document) Java EE 7 Specification


  5. A login module is the code that actually authenticates a user. Java Security 2nd Edition


  6. The client authenticates itself to the Authentication Server and receives a timestamped Ticket-Granting Ticket (TGT) Hadoop:The Definitive Guide 3rd Edition


  7. The meaning of these keywords is given by the following algorithm: A login authenticates a subject, which can have multiple principals. Core Java Volume II Advanced Features 9th Edition


  8. The request should be sent to a certificate authority (CA), which authenticates the requestor and sends back a signed certificate authenticating the requestor’s public key. Java in a Nutshell 5th Edition


  9. Authorization So far, we’ve discussed only how clients identify themselves and how Hadoop authenticates them. Hadoop Operations


  10. We’ll demonstrate this process by building a custom security plug-in that authenticates the clients based on their IP address. ActiveMQ in Action



1124. Matching text for the word dijkstra:


  1. Semaphores were invented by Edsger Dijkstra in 1968, for use as a synchronization primitive. Core Java Volume I Fundamentals 9th Edition


  2. The dining philosophers problem, invented by Edsger Dijkstra, is the classic demonstration of deadlock. Thinking in Java 4th Edition


  3. Many other graph algorithms can be applied to the graph at hand, such as Dijkstra’s algorithm [Dijkstra’s] for finding the shortest or cheapest path between nodes. NoSQL Distilled


  4. After all, as Edsger Dijkstra said, testing can only prove the presence of errors, never their absence.14 So the guarantees that static typing gives may be simple, but they are real guarantees of a form no amount of testing can deliver. Programming in Scala 2nd Edition


  5. Dijkstra’s algorithm proceeds in stages, just like the unweighted shortest-path algorithm. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. A call to findSinglePath on the Dijkstra path finder returns the shortest path between the two delivery bases. Graph Databases


  7. Dijkstra’s algorithm is a shortest path algorithm commonly taught in undergraduate computer science courses. Hadoop in Practice


  8. The concept of a semaphore was introduced by Edsger Dijkstra in 1965 and was used for the first time in the THEOS operating system. Java 7 Concurrency Cookbook



1125. Matching text for the word interoperate:


  1. Each operating system, server vendor, and application developer creates specific agents and software for performance monitoring, none of which interoperate. Monitoring with Ganglia


  2. For applications to interoperate, the implementation of application protocols must be precise. Java Network Programming and Distributed Computing


  3. Both Java 8 and Scala combine object-oriented and functional programming features into one programming language; both run on the JVM and to a large extent can interoperate. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. By now, you should be familiar with the main components of Oozie and the way they interoperate. Professional Hadoop Solutions


  5. This required that we enhance the Hadoop clients to be able to interoperate with Hadoop servers running different versions of the Hadoop software. Apache Hadoop Goes Realtime at Facebook


  6. Occasionally, you may need to interoperate with legacy APIs that use the File class instead of the Path class. Core Java Volume II Advanced Features 9th Edition


  7. The well-intentioned equals method in this class naively attempts to interoperate with ordinary strings. Effective Java 2nd Edition


  8. Although these are two very different network protocols that do not interoperate on the same network without special gateways and/or tunneling protocols, Java hides almost all of the differences from you. Java Network Programming 4th Edition


  9. To interoperate with pure thread-based versions, worker threads can be packaged as Executors. Concurrent Programming in Java Design Principles and Pattern


  10. Every complete source code example in this book has been compiled and executed on Windows and Solaris platforms, and in many cases has been used to interoperate between both platforms. Fundamental Networking in Java



1126. Matching text for the word tcpdump:


  1. If using multicast, check the interface where packets are sent (a packet sniffer such as tcpdump might help) Monitoring with Ganglia


  2. You can use tcpdump to save the traffic to disk, then use pt-query-digest with the --type=tpcdump option to decode and analyze the queries. High Performance MySQL 3rd Edition


  3. In the first terminal, on the host running nova-network, use tcpdump to monitor DNSrelated traffic (UDP, port 53) on the VLAN interface. bk-compute-adminguide-grizzly


  4. Another option is to use a packet-sniffing tool such as tcpdump to capture HTTP traffic in PCAP format, and to then use a tool such as pcap2har to generate the HAR. Testable JavaScript


  5. You can use tcpdump to identify if packets are being routed to the inbound interface on the compute host. bk-admin-guide-cloud-latest


  6. A common example is to query the output of the tcpdump command to analyze network behavior. Programming Hive


  7. It’s recommended to use tcpdump at several points along the network path to correlate where a problem might be. OpenStack Operations Guide



1127. Matching text for the word localization:


  1. That method formats the message part of the record, substituting parameters and applying localization. Core Java Volume I Fundamentals 9th Edition


  2. This feature has been renamed from push-down joins to adaptive query localization.) NDB used to have a completely different performance profile from other MySQL storage engines, but recent versions are more general-purpose. High Performance MySQL 3rd Edition


  3. The second trend reflects the increasing availability and use by applications of internet services accessible through public APIs, made available by known providers such as Google (for example, localization information), Facebook (for example, social information), and Twitter (for example, news) Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Commonly these are noncompiled files such as localization bundles or images, which, understandably might not have an effect on unit or integration tests. Jenkins The Definitive Guide


  5. Examples of when this may be needed include situations with changed attributes, isolation levels, character settings, and localization ration. Java EE 7 Specification


  6. Hadoop uses the standard Java localization mechanisms to load the correct properties for the locale you are running in. Hadoop:The Definitive Guide 3rd Edition


  7. The toString() method of Date produces a textual representation of a date and time but does no localization and allows no customization of which fields (day, month and year or hours and minutes, for example) are to be displayed. Java in a Nutshell 5th Edition


  8. Furthermore, sorting international Strings requires the ability to handle many kinds of localization issues, such as the sorted location for accented characters, characters that can be treated as character pairs, and so on. Java Performance Tuning 2nd Edition


  9. This book covers fundamentals of Java development such as installation of the JDK, writing classes, and running applications.It delves into essential topics such as the development of object-oriented constructs, exception handling, unit testing, and localization. Java 7 Recipes A Problem-Solution Approach


  10. Some localization mechanisms are built into the validation message interpolator, but custom message interpolation can also be plugged into the validator to perform localization in custom ways. Pro JPA2
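
A minimal sketch of the standard locale-sensitive formatting facilities that the localization quotes allude to, using java.util.Locale with the java.text formatters; the class name LocalizationSketch is hypothetical.

    import java.text.DateFormat;
    import java.text.NumberFormat;
    import java.util.Date;
    import java.util.Locale;

    public class LocalizationSketch {
        public static void main(String[] args) {
            double price = 1234.56;
            Date now = new Date();
            for (Locale locale : new Locale[] {Locale.US, Locale.GERMANY, Locale.JAPAN}) {
                // Currency and date rendering both change with the locale.
                String money = NumberFormat.getCurrencyInstance(locale).format(price);
                String date = DateFormat.getDateInstance(DateFormat.LONG, locale).format(now);
                System.out.println(locale + ": " + money + " / " + date);
            }
        }
    }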



1128. Matching text for the word springsource:


  1. For users of the Spring framework, the author of Spring Data’s Redis interface, Costin Leau, has teamed up with Redis author Salvatore Sanfilippo to produce a one-hour introduction for using Spring with Redis available at http://www.springsource.org/ spring-data/redis. Redis in Action


  2. But why install these plugins manually, when you can do it the easy and convenient way using the Spring Tool Suite (STS) (http://www.springsource.org/sts)? STS is an Eclipse-based development environment mainly targeted for building applications using the Spring framework. Gradle in Action


  3. Installing Gradle plugin To install the Gradle plugin, we use the update site, http://dist.springsource. Gradle Effective Implementation Guide


  4. So, which one is better, XML or annotations? There have been lots of debates on this topic, and you can find numerous discussions about this topic on the Internet (for example, try the Spring Community Forum at http://forum.springsource.org) Pro Spring 3


  5. Spring Security LDAP support is assisted by the Spring LDAP module (http://www.springsource.org/ldap), which is actually a separate project from the core Spring Framework and Spring Security projects. Spring Security 3.1


  6. Following is the current supported functionality of SpEL extracted directly from Spring documentation at http://static.springsource.org/spring/docs/3.0.x/reference/expressions.html: Pro Spring Security


  7. To address this problem, SpringSource has repackaged a large number of open source libraries and made them available through its EBR (http://www.springsource.com/repository/app/) Spring Dynamic Modules in Action


  8. To follow along with the book, you’ll need to download and install Spring Roo, version 1.2.1, from http://springsource.org/spring-roo. Spring Roo in Action


  9. There are many ways of building workflows; a common approach is with Extract Transform and Load (ETL) tools such as Pentaho Kettle (http://kettle.pentaho.com) and Spring Batch (http://static.springsource.org/spring-batch) Hadoop Beginners Guide


  10. Feel free to reach out at any time to the Springsource Forums, Stackoverflow, or the Neo4j Google Group for answers to your questions. Spring Data Modern Data Access for Enterprise Java



1129. Matching text for the word kahadb:


  1. Although not presented here, we also tested an alternative AMQ message store and found its performance very similar to that of KahaDB. Kafka a Distributed Messaging System for Log Processing


  2. For example, ActiveMQ offers its own style of ultra-fast message persistence via KahaDB, but also supports standard JDBC-accessible databases. ActiveMQ in Action



1130. Matching text for the word squeryl:


  1. You’ll build applications using the Simple Build Tool (SBT), connect to a database using Squeryl, build scalable and reusable components in Scala, and use actors to make concurrent programming easy. Scala in Action



1131. Matching text for the word caveats:


  1. Many of the caveats and “gotchas” you will face over time aren’t immediately obvious. CouchDB The Definitive Guide


  2. This recipe shows how to use this setting and explains the caveats that come with it. Cassandra High Performance Cookbook


  3. Within the framework of Release-to-Release Binary Compatibility in SOM (Forman, Conner, Danforth, and Raper, Proceedings of OOPSLA '95), Java programming language binaries are binary compatible under all relevant transformations that the authors identify (with some caveats with respect to the addition of instance variables) The Java Language Specification Java SE 7 Edition


  4. Pitfalls and false optimizations As in previous chapters, we will finish up with a discussion of obvious caveats. Oracle JRockit The Definitive Guide


  5. The API resulting from this refactoring is essentially identical to the state-testing method API in Item 57 and the same caveats apply: if an object is to be accessed concurrently without external synchronization or it is subject to externally induced state transitions, this refactoring is inappropriate, as the object’s state may change between the invocations of actionPermitted and action. Effective Java 2nd Edition


  6. Note that this method has two important caveats: first, it is intended to be used only once; second, it will probably be removed by version 0.8 Cassandra The Definitive Guide


  7. If we assume a table with 1 million records and four mappers, then each will process SQL statements to retrieve the data that each use the desired primary key column range as caveats. Hadoop Beginners Guide


  8. It’s not meant to supplant a complete understanding of the system, and as always, the normal caveats apply when dealing with systems that store data or serve critical functions. Hadoop Operations


  9. And, finally, a multitude of tuning parameters can be applied, each of which have benefits and caveats. ActiveMQ in Action


  10. Reduced-power servers are attractive for large-scale clusters, but you must keep some caveats in mind. Web search for a planet the google cluster architecture



1132. Matching text for the word jcomponent:


  1. To draw on a component, you define a class that extends JComponent and override the paintComponent method in that class. Core Java Volume I Fundamentals 9th Edition


  2. So, for virtually anything you place on your form, all you need to do is say (for an object j c of any JComponent-derived class): Thinking in Java 4th Edition


  3. One way to implement a cell renderer is to create a class that extends JComponent, like this: Click here to view code imageClick here to view code image. Core Java Volume II Advanced Features 9th Edition


  4. The ability to draw over a JComponent has been added. Java 7 New Features Cookbook


  5. Most of the recipes in this chapter extend the JComponent class and also containing a main() method. Java 7 Recipes A Problem-Solution Approach
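
A minimal sketch of the pattern the quotes describe: extend JComponent and override paintComponent to draw on the component. The class name JComponentSketch is hypothetical and the snippet is illustrative only.

    import javax.swing.JComponent;
    import javax.swing.JFrame;
    import javax.swing.SwingUtilities;
    import java.awt.Graphics;

    public class JComponentSketch extends JComponent {
        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            // Custom painting happens here, on the component's Graphics context.
            g.drawString("Hello from paintComponent", 20, 40);
            g.drawOval(20, 60, 80, 80);
        }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("JComponent sketch");
                frame.add(new JComponentSketch());
                frame.setSize(300, 200);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }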



1133、单词 builtin 的匹配文本:


  1. As a proof of concept, look no further than CouchDB’s builtin administrative interface. CouchDB The Definitive Guide


  2. Our partitioning function for this benchmark has builtin knowledge of the distribution of keys. MapReduce - Simplied Data Processing on Large Clusters


  3. Use one HDFS dir to store one “table”, associated with builtin serialization format. MapReduce and Parallel DBMSs Friends or Foes


  4. Log4j also has a number of builtin appenders such as the SyslogAppender used in this recipe. Cassandra High Performance Cookbook


  5. I rounded up this chapter and this part of the book by covering some of CouchDB’s builtin specialized views for retrieving all documents and some of the parameters that can be used to manipulate the resultset returned by these views. Beginning CouchDB


  6. The only builtin Writable array type in Hadoop, ArrayWritable, can only store one type of Writable, so it’s not a good fit for storing your tuple, which contains a variety of field types. Hadoop in Practice


  7. Their name is just an arbitrary number, based on the Java builtin random generator. HBase:The Definitive Guide


  8. Relational databases have some builtin capabilities to help ensure data integrity, such as primary keys (which ensure entity integrity) and foreign key constraints (which ensure referential integrity) Cassandra The Definitive Guide


  9. The shell fencing method has no builtin provision for a timeout. Hadoop Operations


  10. See Task Types for a description of the most important builtin task types.) The task declares properties (i.e., hostname, database, sql, etc.) in conventional Groovy idiom. Building and Testing with Gradle



1134、单词 ltw 的匹配文本:


  1. AspectJ supports another weaving mechanism called load-time weaving (LTW), in which it intercepts the underlying JVM class loader and provides weaving to the bytecode when it is being loaded by the class loader. Pro Spring 3


  2. OSGi environment for the chosen JPA implementations, such as the underlying mechanisms required to configure packages of classes used at runtime, and also LTW. Spring Dynamic Modules in Action


  3. AspectJ 5’s load-time weaving (LTW) support weaves aspects this way. Spring in Action 4th Edition



1135、单词 ngram 的匹配文本:


  1. Making a misspelling-proof search If you look back at the mappings, you will see that we have the title field defined as multi_field and one of the fields is analyzed with a defined ngram analyzer. Mastering ElasticSearch


  2. Create a class named NGram.java in your JAR file at whatever source package is appropriate. Hadoop RealWorld Solutions Cookbook


  3. On a technical level, David has solved challenging problems with Lucene and Solr including geospatial search, wildcard ngram query parsing, searching multiple multi-valued fields at coordinated positions, and part-of-speech search using Lucene payloads. Solr 3 Enterprise Search Server


  4. We’ll give special attention here to the snowball analyzers and shingle and ngram filters. Lucene in Action 2nd Edition


  5. In Solr, there are multiple ways to undo that: using a spellchecker component to try and correct the user's mistake, using the fuzzy query, or for example, using the ngram approach. Apache Solr 4 Cookbook
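
The quotes above all treat n-grams as overlapping character windows used for misspelling-tolerant search in Lucene, Solr, and Elasticsearch. As a concept illustration only (not the analyzers those projects ship), here is a library-free Java sketch of character n-gram generation:

```java
import java.util.ArrayList;
import java.util.List;

public class CharNGrams {
    // All character n-grams of length n, e.g. trigrams of "search" -> [sea, ear, arc, rch].
    static List<String> ngrams(String text, int n) {
        List<String> grams = new ArrayList<>();
        for (int i = 0; i + n <= text.length(); i++) {
            grams.add(text.substring(i, i + n));
        }
        return grams;
    }

    public static void main(String[] args) {
        System.out.println(ngrams("search", 3));
    }
}
```

The more grams two strings share, the more similar they are considered, which is the basis of the misspelling-tolerant matching the quotes describe.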



1136、单词 mlt 的匹配文本:


  1. The MLT capability in Solr can be used in the following three ways: Solr 3 Enterprise Search Server


  2. Next, the mlt.fl parameter specifies which fields we want to use with the more like this component. Solr 3.1 Cookbook


  3. The parameter mlt=true says that we want to add the more like this component to the result processing. Apache Solr 4 Cookbook



1137、单词 pseudodistributed 的匹配文本:


  1. The following table shows configuration files for configuring a Hadoop cluster in pseudodistributed mode: Hadoop Operations and Cluster Management Cookbook


  2. To run a Pipes job, we need to run Hadoop in pseudodistributed mode (where all the daemons run on the local machine), for which there are setup instructions in Appendix A. Hadoop:The Definitive Guide 3rd Edition


  3. A pseudodistributed mode is simply a distributed mode that is run on a single host. HBase:The Definitive Guide


  4. Now that you have all the different configurations set up and know how to put data into each configuration, let’s look at how to develop and debug in local and pseudodistributed modes. Hadoop in Action


  5. Pseudodistributed mode means that you have several instances of HBase, but they’re all running on the same host. Cassandra The Definitive Guide


  6. As we only have a single host and one DataNode in the pseudodistributed mode, we change this value to 1 Hadoop Beginners Guide


  7. Figure 7-9 shows a sample page taken from a cluster running in pseudodistributed mode on a Linux desktop. Programming Pig


  8. To address this need, a single machine can be configured to run in pseudodistributed mode, where the behavior is identical to distributed mode, namely filesystem references default to the distributed filesystem and jobs are managed by the JobTracker service, but there is just a single machine. Programming Hive



1138、单词 rna 的匹配文本:


  1. Say you want to create a new sequence type for RNA strands, which are sequences of bases A (adenine), T (thymine), G (guanine), and U (uracil) Programming in Scala 2nd Edition



1139、单词 firehose 的匹配文本:


  1. In particular, we’ll offer the ability to filter over messages with an equivalent of follow (users), track (keywords), and location filters, in addition to a randomly selected subset of messages, similar to Twitter’s firehose and sample streams. Redis in Action


  2. The Firehose Strategy If you have some servers that are more powerful than others, you might want to let them handle proportionally more load than your less-powerful servers. MongoDB The Definitive Guide 2nd Edition


  3. Twitter client application that reads a subset of the Twitter firehose and writes. Storm Blueprints Patterns for Distributed Real-time Computation



1140、单词 prepended 的匹配文本:


  1. Execute the downloaded file, which is prepended by the path to it. Web Crawling and Data Mining with Apache Nutch


  2. He is also known as mrhaki, which is simply the initials of his name prepended by mr. Gradle Effective Implementation Guide


  3. Here, chroot is a base directory that is prepended to all path operations (this effectively namespaces all Kafka znodes to allow sharing with other applications on the same ZooKeeper cluster) Apache Kafka


  4. Since no member class can have the same name as any containing class, however, the use of the enclosing class name prepended to this is a perfectly general way to refer to any containing instance. Java in a Nutshell 5th Edition


  5. This file can optionally be prepended with the private key. admin-user-guide-trunk


  6. As with join, to avoid ambiguity, the field name will have the bag’s name and :: prepended to it. Programming Pig


  7. The body of each document from the docmaker will be truncated to this size, with the leftover being prepended to the next document. Lucene in Action 2nd Edition


  8. This option is not necessary if your key is prepended to your cert file. user-guide


  9. Note When a class resides within a Java package, it is no longer referenced by only the class name, but instead the package name is prepended to the class name, which is known as the fully qualified name. Java 7 Recipes A Problem-Solution Approach


  10. When specified, the schema name will be prepended to the table name when the persistence provider goes to the database to access the table. Pro JPA2



1141、单词 hcatalog 的匹配文本:


  1. Finally, you learn about HCatalog, and the way it can be used to advertise and access data. Professional Hadoop Solutions


  2. The Apache HCatalog project (http://incubator.apache.org/hcatalog/) solves this problem by providing a table metadata service, based on Hive’s metastore, so that Pig queries can reference schemas by name, rather than specifying them in full each time. Hadoop:The Definitive Guide 3rd Edition


  3. Apache HCatalog A relatively new entry, Apache HCatalog is a service that provides shared schema and data access abstraction services to applications with the ecosystem. Hadoop Operations


  4. Metadata in Hadoop Apache HCatalog provides a metadata and table management layer for Hadoop. Programming Pig


  5. HCatalog is a separate Apache project from Hive, and is part of the Apache Incubator. Programming Hive



1142、单词 suboptimal 的匹配文本:


  1. Even if your estimate for the value k is suboptimal, the clustering quality isn’t affected much by it. Mahout in Action


  2. This leads to fragmentation and suboptimal utilization, which is not viable for us, and so we did not explore this option any further. Omega - flexible, scalable schedulers for large compute clusters


  3. For example, suboptimal configuration on data replication properties can cause a large number of data block copies over the network, which will pose a negative effect on the performance of a cluster. Hadoop Operations and Cluster Management Cookbook


  4. It focuses on potential coding problems such as unused or suboptimal code, code size and complexity, and good coding practices. Jenkins The Definitive Guide


  5. Although splitting files on the file block boundaries works well in the majority of cases, sometimes such a split might be suboptimal. Professional Hadoop Solutions


  6. Region hot-spotting typically points to another problem, though—bad key design leading to suboptimal load distribution. HBase in Action


  7. Show that this algorithm is suboptimal, regardless of the choices for Algorithms A, B, and C. Data Structures and Algorithm Analysis in Java 3rd Edition


  8. This is to make sure that objects referenced near other objects in time appear near them in memory as well, evacuating them from suboptimal NUMA nodes. Oracle JRockit The Definitive Guide


  9. These use cases would have workloads that are challenging to scale because of very high write throughput, massive datasets, unpredictable growth, or other patterns that may be difficult or suboptimal in a sharded RDBMS environment. Apache Hadoop Goes Realtime at Facebook


  10. Some VMs are intended purely for development and are highly suboptimal in terms of performance. Java Performance Tuning 2nd Edition



1143、单词 mvcc 的匹配文本:


  1. Instead, they use row-level locking in conjunction with a technique for increasing concurrency known as multiversion concurrency control (MVCC) High Performance MySQL 3rd Edition


  2. Instead of locks, CouchDB uses Multi-Version Concurrency Control (MVCC) to manage concurrent access to the database. CouchDB The Definitive Guide


  3. The existing MVCC-like read-write consistency control (RWCC) provided sufficient isolation guarantees and the HLog (write ahead log) on HDFS provided sufficient durability. Apache Hadoop Goes Realtime at Facebook


  4. CouchDB is interesting in part for what it terms Multi-Version Concurrency Control (MVCC) Cassandra The Definitive Guide


  5. Many databases choose the multi-version concurrency control (MVCC) approach, which allows multiple users to see different versions of the data. The Definitive Guide to MongoDB



1144、单词 extractors 的匹配文本:


  1. You’ll see more examples of extractors and pattern matching throughout the book. Scala in Action


  2. Programs are written in a variant of SQL, with extensions to call out to custom extractors, filters, and processors that are written in C# FlumeJava - Easy, Efficient Data-Parallel Pipelines


  3. If you want to use pattern matching on one of your classes, but you do not want to open access to your classes the way case classes do, then you can use the extractors described in Chapter 26 Programming in Scala 2nd Edition


  4. If you want to match arbitrary patterns, Scala extractors are your friend. Programming Scala Tackle Multi-Core Complexity on the Java Virtual Machine


  5. We can even introduce ** as a pattern using custom extractors, which lets us write: Functional Programming in Scala


  6. The following script demonstrates the extractors in pattern matching case statements: Programming Scala


  7. It turns out that they also get properties and extractors. Beginning Scala



1145、单词 reuses 的匹配文本:


  1. A high-fidelity simulator that replays historic workload traces from Google production clusters, and reuses much of the Google production scheduler’s code. Omega - flexible, scalable schedulers for large compute clusters


  2. As shown in the command-line output, Gradle inspects the available Ant targets, wraps them with Gradle tasks, reuses their description, and even keeps their dependencies intact. Gradle in Action


  3. HadoopDB[15] reuses most of Hive’s system, except, it uses traditional database instances in each of the nodes to store data instead of using a distributed file system. Hive A Petabyte Scale Data Warehouse Using Hadoop


  4. Hadoop starts only a single JVM per task slot and then reuses it for an unlimited number of tasks in the job. Hadoop MapReduce Cookbook


  5. The MapReduce framework reuses iterator value objects supplied to combine. Hadoop in Practice


  6. It is necessary to make a copy of the record with the highest temperature found so far, since the iterator reuses the instance for reasons of efficiency (and only the fields are updated) Hadoop:The Definitive Guide 3rd Edition


  7. The schema also reuses the same table to store various other details required to implement full Lucene support. HBase:The Definitive Guide


  8. If the same query is prepared twice, the database simply reuses the query strategy. Core Java Volume II Advanced Features 9th Edition


  9. It reuses familiar concepts from the relational database   world, such as tables, rows, columns, and schema, to ease learning. Hadoop in Action


  10. The cached thread pool you have created creates new threads if needed to execute the new tasks, and reuses the existing ones if they have finished the execution of the task they were running, which are now available. Java 7 Concurrency Cookbook



1146、单词 jlist 的匹配文本:


  1. Note If you want to show a permanently displayed list instead of a drop-down list, use the JList component. Core Java Volume I Fundamentals 9th Edition


  2. While a JComboBox box drops down when you activate it, a JList occupies some fixed number of lines on a screen all the time and doesn’t change. Thinking in Java 4th Edition


  3. The JList component has many more features, and its design is similar to that of the tree and table components. Core Java Volume II Advanced Features 9th Edition


  4. In the constructor you’ll notice it takes a reference to a JListBox; this is where to display the words as the user is typing characters. Java 7 Recipes A Problem-Solution Approach
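
Following the Core Java note that a JList stays permanently visible (unlike a drop-down JComboBox), here is a minimal usage sketch; the item strings are arbitrary:

```java
import javax.swing.JFrame;
import javax.swing.JList;
import javax.swing.JScrollPane;
import javax.swing.SwingUtilities;

public class JListDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            // A JList occupies a fixed number of lines on screen all the time.
            JList<String> list = new JList<>(new String[] {"alpha", "beta", "gamma"});
            list.setVisibleRowCount(3);

            JFrame frame = new JFrame("JList demo");
            frame.add(new JScrollPane(list));  // scroll pane adds scrolling for long lists
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```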



1147、单词 prepend 的匹配文本:


  1. For example, if the file is in the current directory, prepend it with ./ (necessary if. Web Crawling and Data Mining with Apache Nutch


  2. The simplest solution to achieve this is to prepend the arguments of the lambda expression with an additional argument for each of the captured variables, so the method generated to implement the former lambda expression will be something like this: Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. You may wonder why you prepend the new element to the list head. RabbitMQ in Action


  4. The form has prependId set to false to ensure that the id of each element is preserved as mentioned in the form. Java EE 7 Essentials


  5. You prepend the command with the exclamation mark (!) and end it with a semicolon (;) Hadoop in Action


  6. I said earlier that to invoke an instance method you must prepend a reference to the object to be operated on. Java in a Nutshell 5th Edition


  7. By default, you have to prepend the destination name with either queue:// or topic:// ActiveMQ in Action


  8. You must reside within the same directory as the source file or prepend the name of the file with the path to where it is located. Java 7 Recipes A Problem-Solution Approach


  9. To facilitate this, we will prepend the query name with the name of the entity. Pro JPA2


  10. If the code is being run from a Ruby file, prepend Ruby’s p method to print the output to the screen: MongoDB in Action



1148、单词 odbc 的匹配文本:


  1. An Impala implementation makes use of some of the existing Hadoop infrastructure, namely the Hive metastore and Hive ODBC driver. Professional Hadoop Solutions


  2. HiveServer – The component that provides a thrift interface and a JDBC/ODBC server and provides a way of integrating Hive with other applications. Hive A Petabyte Scale Data Warehouse Using Hadoop


  3. What’s more, SQL is the lingua franca in business intelligence tools (ODBC is a common bridge, for example), so Hive is well placed to integrate with these products. Hadoop:The Definitive Guide 3rd Edition


  4. It was named to be reminiscent of ODBC, a standard database API pioneered by Microsoft and since incorporated into the SQL standard. Core Java Volume II Advanced Features 9th Edition


  5. However, the JDBC package does not provide concrete implementations of those APIs (except an ODBC driver) Java Performance Tuning 2nd Edition


  6. When you do need access to API libraries for capabilities such as visualization and graphing, you can access Impala data from programs written in languages such as Java and C++ through the standard JDBC and ODBC protocols. Cloudera Impala


  7. Bundled with the Hive distribution is the CLI, a simple web interface called Hive web interface (HWI), and programmatic access through JDBC, ODBC, and a Thrift server (see Chapter 16) Programming Hive


  8. It was created as a Java-specific version of its more generic predecessor, the Object Database Connectivity (ODBC) specification, a standard for accessing any relational database from any language or platform. Pro JPA2


  9. Business Intelligence tools typically connect to databases using ODBC or JDBC, so databases that want to work with these tools must accept SQL queries through these interfaces. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads


  10. You have a legacy, proprietary database and want to give it an SQL/ODBC interface. Understanding MySQL Internals
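
Several of the quotes contrast ODBC with its Java counterpart, JDBC. For comparison, here is a minimal JDBC sketch; the URL, credentials, table, and driver are placeholders, so treat this as the shape of the API rather than a runnable endpoint:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute a real JDBC URL and credentials.
        String url = "jdbc:mysql://localhost:3306/testdb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM example_table")) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            }
        }
    }
}
```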



1149、单词 initializations 的匹配文本:


  1. Any initializations that occur at the point of field definition are executed. Thinking in Java 4th Edition


  2. It’s instructive to trace the sequence of initializations that lead to the string 1/2 to be printed in the code above: Programming in Scala 2nd Edition


  3. For convenience, and to avoid the stutter endemic in some languages’ declaration syntaxes, in initializations the appropriate conversion operation (with default parameter values) is supplied implicitly by the compiler. Interpreting the Data Parallel Analysis with Sawzall


  4. The restrictions above are designed to catch, at compile time, circular or otherwise malformed initializations. The Java Language Specification Java SE 7 Edition


  5. This seems like a good idea in general; moreover, the Scheme interpreter needs some cumbersome initializations that we did not want to include in every event handler script. Core Java Volume II Advanced Features 9th Edition


  6. It would be possible to eliminate the unnecessary initializations by lazily initializing these fields (Item 71) the first time the isBabyBoomer method is invoked, but it is not recommended. Effective Java 2nd Edition


  7. Both the initialize and update expressions of a for loop can use a comma to separate multiple initializations and update expressions. Java in a Nutshell 5th Edition


  8. You can start several asynchronous operations in parallel, then perform your local initializations, and then request the results of the network operations: Java Network Programming 4th Edition


  9. The second component in the startup time of the VM is the time taken to manage the VM runtime initializations. Java Performance Tuning 2nd Edition


  10. The code evolved over time, gradually becoming more compact and readable as various initializations involving THD variables were moved under the THD class. Understanding MySQL Internals
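
As the Java in a Nutshell quote notes, the initialize and update parts of a for loop may use commas to carry several initializations and updates at once; a tiny example:

```java
public class MultipleInitializations {
    public static void main(String[] args) {
        // Two loop variables initialized and updated in a single for statement.
        for (int i = 0, j = 10; i < j; i++, j--) {
            System.out.println("i=" + i + " j=" + j);
        }
    }
}
```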



1150、单词 ebook 的匹配文本:


  1. Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub. Apache Flume Distributed Log Collection for Hadoop


  2. Buy the Book If you liked this eBook, perhaps you''d like to have a paper copy of the book. Programming Concurrency on the JVM


  3. The purchaser may place the eBook title on any of their personal computers for their own personal reading and reference. Pro Hadoop


  4. Did you know that Packt offers eBook versions of every book published, with. Storm Blueprints Patterns for Distributed Real-time Computation


  5. This book is available in both paper and PDF eBook form. Programming in Scala 2nd Edition


  6. For more information, reference our Special Bulk Sales–eBook Licensing web page at http://www.apress.com/info/bulksales. Beginning CouchDB


  7. Also, for ebook buyers, clicking on the box above the code extracts downloads the code for that extract for you. Functional Programming Patterns in Scala and Clojure


  8. Buy the Book If you liked this eBook, perhaps you’d like to have a paper copy of the book. SQL Antipatterns


  9. While on the project home page, look for a link to download a free O’Reilly ebook by Josh Long and Steve Mayzak called Getting Started with Roo [LongMay11] Spring Data Modern Data Access for Enterprise Java


  10. Even if you have an eBook copy that didn’t go through that process, there were numerous hands on the bits and bytes that you downloaded— hands that edited it, reviewed it, typeset it, and proofread it. Spring in Action 4th Edition



1151、单词 oauth 的匹配文本:


  1. OAuth is an authentication protocol that allows users to approve applications to act on their behalf, without sharing their password. Jenkins Continuous Integration Cookbook


  2. In addition, more recent trends such as mobile, social, and NoSQL are now commonplace in both corporate and noncorporate settings, and some of the recipes in the book treat topics such as GitHub, OAuth, MongoDB, and Neo4j as well. Spring in Practice


  3. Throughout his career, he has developed hands on experience integrating Spring Security with an array of security standards (that is, LDAP, SAML, CAS, OAuth, and so on) Spring Security 3.1


  4. There is also the OAuth integration, which lives on its own as an individual project. Pro Spring Security


  5. In order to connect to the Twitter API, we must first generate a set of OAuth. Storm Blueprints Patterns for Distributed Real-time Computation


  6. Recent developments in the trunk version of CouchDB include support for cookie authentication and OAuth authentication, which should allow for a much more streamlined security solution than the basic HTTP authentication option. Beginning CouchDB


  7. We cover the timeline reading adapter shortly, but first we discuss how you should provide the configuration necessary for creating a Twitter template instance that can pass OAuth credentials. Spring Integration in Action


  8. Problem In this recipe, we’ll implement a data model and interaction to support an OAuth v1.0a API. Redis Cookbook



1152、单词 fetcher 的匹配文本:


  1. FetcherJob The job of the fetcher is to fetch the URLs which are generated by the GeneratorJob. Web Crawling and Data Mining with Apache Nutch


  2. For example, the content part is populated by a tool called Fetcher, which downloads raw data from URLs on the fetchlist (2) Hadoop:The Definitive Guide 3rd Edition


  3. We added a time-shutoff parameter that terminates the fetcher after a fixed amount of time as an engineering fix to this problem. Hadoop in Action


  4. We’ll first look at how this fetcher works and then we’ll peel back the covers to reveal its implementation. MongoDB in Action



1153、单词 rados 的匹配文本:


  1. Rados Block Device (RBD) - This backend stores images inside of a Ceph storage cluster using Ceph's RBD interface. bk-compute-adminguide-grizzly


  2. Rados block device (RBD) Stores images inside of a Ceph storage cluster using Ceph's RBD interface. bk-admin-guide-cloud-latest


  3. To maintain system availability and ensure data safety in a scalable fashion, RADOS manages its own replication of data using a variant of primary-copy replication [2], while taking steps to minimize the impact on performance. Ceph A Scalable, High-Performance Distributed File System


  4. We present the design and implementation of RADOS, a reliable object storage service that can scales to many thousands of devices by leveraging the intelligence present in individual storage nodes. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.



1154、单词 composable 的匹配文本:


  1. To provide full flexibility, each composable trait should be able to call a super method, even if that super method doesn’t do anything. Scala in Depth


  2. Monads are a great way to raise the abstraction level that is composable. Scala in Action


  3. We have developed a Java library, based on a small set of composable primitives, that is both expressive and convenient. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  4. The calculations that need to be run must be composable. Professional Hadoop Solutions


  5. An important property of averages is that they are not composable—that is, if I take two groups of orders, I can’t combine their averages alone. NoSQL Distilled


  6. A pure function is modular and composable because it separates the logic of the computation itself from "what to do with the result" and. Functional Programming in Scala


  7. Mathematical functions are side-effect-free, and they are composable from other functions. Programming Scala


  8. However, they tend to be at the wrong level of granularity for creating sophisticated, highly composable code that can be shared among different developers. Hadoop:The Definitive Guide 3rd Edition


  9. This forces you to architect the pre- and postprocessing steps in a modular and composable manner. Hadoop in Action


  10. In Chapter 6, you learn about improvements in atomic counters, concurrent hash maps, parallel array operations, and composable futures. Java SE 8 for the Really Impatient
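
Several of the quotes describe composability as building larger computations from smaller side-effect-free functions. In Java this shows up directly in java.util.function.Function, whose compose and andThen methods chain functions; a small sketch:

```java
import java.util.function.Function;

public class ComposableFunctions {
    public static void main(String[] args) {
        Function<Integer, Integer> doubler = x -> x * 2;
        Function<Integer, Integer> addTen  = x -> x + 10;

        // andThen: doubler first, then addTen -> (5 * 2) + 10 = 20
        Function<Integer, Integer> doubleThenAdd = doubler.andThen(addTen);
        // compose: addTen first, then doubler -> (5 + 10) * 2 = 30
        Function<Integer, Integer> addThenDouble = doubler.compose(addTen);

        System.out.println(doubleThenAdd.apply(5)); // 20
        System.out.println(addThenDouble.apply(5)); // 30
    }
}
```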



1155、单词 memtables 的匹配文本:


  1. Writes are first written to commit logs and memtables, and are only considered good when the write to commit log and memtable was successful. NoSQL Distilled


  2. It does this by isolating the disk traffic for Commit Log activity from the traffic used for reads as well as flush Memtables and compact SSTables. Cassandra High Performance Cookbook


  3. Memtables, SSTables, and Commit Logs When you perform a write operation, it’s immediately written to the commit log. Cassandra The Definitive Guide


  4. The manner in which Bigtable uses memtables and SSTables to store updates to tablets is analogous to the way that the Log-Structured Merge Tree [26] stores updates to index data. Bigtable - A Distributed Storage System for Structured Data



1156、单词 jtree 的匹配文本:


  1. On the other hand, the JList class is consistent with the JTree and JTable classes where this generality is useful. Core Java Volume II Advanced Features 9th Edition



1157、单词 syncing 的匹配文本:


  1. Writing to remote memory (a replica’s acknowledgment) is faster than writing to the local disk (syncing on commit) High Performance MySQL 3rd Edition


  2. Table 4.1 shows the appendfsync options and how they affect file-write syncing to disk. Redis in Action


  3. The candidate will continue syncing and once it has synced operation 124, it will call for an election again (if no one else has become primary in that time) MongoDB The Definitive Guide 2nd Edition


  4. HDFS-200, and implemented syncFs() to make the process of syncing changes to a file more reliable. HBase:The Definitive Guide


  5. Once it connects, the follower and the leader sync their state, and only after syncing can the follower start processing new requests. ZooKeeper Distributed Process Coordination


  6. Journaling works by syncing a write buffer to disk every 100 ms. MongoDB in Action



1158、单词 testability 的匹配文本:


  1. Even though you took it to the extreme and provided configurable properties for your tasks, code maintainability and testability fell by the wayside. Gradle in Action


  2. A secondary advantage of having a compiled source code is that you can run a number of statistic code review tools over the code base and obtain testability metrics. Jenkins Continuous Integration Cookbook


  3. The testability property of your design determines how easy it is to write tests. Scala in Action


  4. Improved testability: When you design your classes for DI, you make it possible to replace dependencies easily. Pro Spring 3


  5. Starting with the good habits of structuring your code syntactically and semantically for testability, writing the right tests at the right times, running them regularly, and monitoring their results will keep you in that lower-right sweet spot in the matrix. Testable JavaScript


  6. IoC promotes better decoupling and testability, as the object only knows about the abstractions of its dependencies, not specific concrete implementers of them. Programming Scala


  7. Ideally, you should reach out to developers and recommend that they add either the name attribute or title for better testability and accessibility. Selenium Testing Tools Cookbook


  8. Most presentations of constructions and design patterns later in this book include descriptions of how they resolve applicable forces discussed here (as well as others that are less directly tied to concurrency, such as accuracy, testability, and so on) Concurrent Programming in Java Design Principles and Pattern


  9. Integration and Testability Multitier applications hosted on an application server have become the de facto standard for application architectures. Pro JPA2


  10. This is really nice from a usability point of view, but from a testability point it is a nightmare! Selenium 2 Testing Tools Beginner's Guide
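
The Pro Spring and Programming Scala quotes tie testability to dependency injection: a class that receives its collaborators through its constructor can be handed a stub in a test. A minimal, framework-free Java sketch; every name here is invented for illustration:

```java
// Depend on an abstraction so a test can substitute a stub implementation.
interface Mailer {
    void send(String to, String body);
}

class WelcomeService {
    private final Mailer mailer;

    // Constructor injection: the dependency is supplied from outside.
    WelcomeService(Mailer mailer) {
        this.mailer = mailer;
    }

    void welcome(String user) {
        mailer.send(user, "Welcome, " + user + "!");
    }
}

public class TestabilityDemo {
    public static void main(String[] args) {
        // In production a real mailer would be injected; a unit test (or this
        // demo) can pass a stub instead and observe the interaction.
        Mailer stub = (to, body) -> System.out.println("would send to " + to + ": " + body);
        new WelcomeService(stub).welcome("alice");
    }
}
```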



1159、单词 sdo 的匹配文本:


  1. If you wanted additionally to prevent code from accessing classes in the sdo package, you would edit the java.security file so that it defined a property like this: Java Security 2nd Edition



1160、单词 usability 的匹配文本:


  1. Having the dynamic schema built on top of the static schema, rather than the other way around, allows most applications to enjoy the performance, usability, Megastore - Providing Scalable, Highly Available Storage for Interactive Services


  2. We chose a different model for reasons of usability and performance. Pregel  -  A System for Large-Scale Graph Processing


  3. This data further underscores the usability of the FlumeJava API. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  4. We could keep tinkering with it, but at this point let's try using the library to construct tests and see if we notice any deficiencies, either in what it can express or its general usability. Functional Programming in Scala


  5. By focusing on structured data , Hive can add certain optimization and usability features that MapReduce, being more general, doesn’t have. Hadoop in Action


  6. Like Chapter 5, this chapter focuses on usability, robustness, and flexibility. Effective Java 2nd Edition


  7. The JavaBeans conventions are widely used and well understood, however, and you can improve the usability and reusability of your code by following the relevant ones. Java in a Nutshell 5th Edition


  8. Lucli lets you scroll through a history of commands and reexecute a previously entered command to enhance its usability. Lucene in Action 2nd Edition


  9. I believe it’s imperative to have server-side validation, but for better usability, client-side validation can add that extra polish your application needs. Java 7 Recipes A Problem-Solution Approach


  10. In the case of session beans and message-driven beans, solutions to usability issues were reached by simply removing some of the more onerous implementation requirements and letting components look more like plain Java objects. Pro JPA2



1161、单词 evaluator 的匹配文本:


  1. You could try using this as the UserSimilarity in the evaluator code from before. Mahout in Action


  2. Somewhat akin to FlumeJava’s executor, the Cascading evaluator breaks the dataflow graph into pieces, and, if possible, runs those in parallel, using the underlying Hadoop job scheduler. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  3. This method (hasPermission) uses the permission evaluator we defined in Listing 7-22 to decide if the authentication has the required permissions on the object being passed. Pro Spring Security


  4. Hence, there’s a mutual dependency between the Model and the Evaluator. Programming in Scala 2nd Edition


  5. It is a compile-time error if the condition does not provide enough information to construct a reasonable evaluator for the quantifier. Interpreting the Data Parallel Analysis with Sawzall


  6. The algorithm is also configured with a cost evaluator (another static member), which simply identifies the property on a relationship representing that relationship’s weight or cost. Graph Databases


  7. Also implement runAsync, an asynchronous evaluator for IO that returns a Future, and run, the synchronous evaluator. Functional Programming in Scala


  8. An evaluator must implement five methods, described in turn here (the flow is illustrated in Figure 12-4): init() Hadoop:The Definitive Guide 3rd Edition


  9. It is possible to build rules that are not based on numeric attributes; for string attributes there is a string matching expression instead of a numeric evaluator. Oracle JRockit The Definitive Guide


  10. In the init method, the object inspectors for the result type are set, after determining what mode the evaluator is in. Programming Hive



1162、单词 usb 的匹配文本:


  1. For the novice Linux user who just wants to play around a bit, the Live CD and a USB stick for permanent storage can provide a simple and quick way to boot up a test environment. Pro Hadoop


  2. Volumes may be attached to a single instance at a time, but may be detached or reattached to a different instance while retaining all data, much like a USB drive. bk-compute-adminguide-grizzly


  3. In this section, we will cover steps of creating a kickstart file and a USB boot media with the operating system image. Hadoop Operations and Cluster Management Cookbook


  4. It shouts at you through mobile devices, radiates information as you walk past big screens, and fires at you with USB sponge missile launchers. Jenkins Continuous Integration Cookbook


  5. It comes in several versions and two formats (CD or USB stick image) Deploying OpenStack


  6. This includes devices such as Ambient Orbs, Lava Lamps, traffic lights, or other more exotic USB-controlled devices. Jenkins The Definitive Guide


  7. The JRockit VE kernel with a small built-in E1000 network driver can boot from a USB stick and run Java applications on any x86 machine out of the box (It is doubtful, however, why this demo, except for the fact that it is really cool, would illustrate any benefits of JRockit VE in cloud computing, but we'll get to that) Oracle JRockit The Definitive Guide


  8. These are purely abstractions in the computer’s memory and do not represent anything physical, like a USB port. Java Network Programming 4th Edition


  9. Manage volumes A volume is a detachable block storage device, similar to a USB hard drive. admin-user-guide-trunk


  10. Once you have dumped the database, you can archive and store the folder on to any online or offline media, such as CD, USB Drive, Tape, or S3 format. The Definitive Guide to MongoDB



1163、单词 manifoldcf 的匹配文本:


  1. Mahout, ManifoldCF, Lucene, Solr, Nutch, Droids—all of these will roll off the tongue by the time you’re done surveying Lucene’s rich and vibrant community. Tika in Action


  2. Maintaining document security with ManifoldCF A frequent requirement for search engines is to maintain document level security. Solr 3 Enterprise Search Server


  3. Sending data and running queries by sending JSON objects over HTTP makes it easy for someone to extend anything, from a syslog daemon like rsyslog to a connecting framework like ManifoldCF, to interact with Elasticsearch. Elasticsearch in Action MEAP V07 cHAPTER 1



1164、单词 setuptools 的匹配文本:


  1. Setuptools for Python 2.7 for Windows can be download from here: bk-compute-adminguide-grizzly


  2. But you’ll download and install a simple helper package called setuptools, which will help you download and install the Redis client libraries.1 This last step of installing the Redis client libraries for Python is shown next. Redis in Action


  3. If you got an installation error, the best next step is to attempt to upgrade setuptools and then upgrade CouchApp by running the following commands: CouchDB The Definitive Guide


  4. When the download has completed and you are returned to the shell, issue the following command to install setuptools: Beginning CouchDB


  5. As when installing the PHP driver, there are two methods you can use to install PyMongo: an automated approach that relies on setuptools or a manual approach where you download the source code for the project. The Definitive Guide to MongoDB


  6. Many Linux distributions provide packages to make setuptools easy to install. user-guide



1165、单词 functors 的匹配文本:


  1. The map method is associated with functors, and the flatMap method is associated with monads. Scala in Depth


  2. All monads are functors, but not all functors are monads. Functional Programming in Scala


  3. Assume that A and B are two categories and that B^A is the collection of all functors from A and B. Steps in Scala An Introduction to Object-Functional Programming



1166、单词 hlog 的匹配文本:


  1. When passing the --dump option to the HLog class, it internally invokes the HLogPrettyPrinter class to dump the WAL file. HBase Administration Cookbook


  2. When a write is made, by default, it goes into two places: the write-ahead log (WAL), also referred to as the HLog, and the MemStore (figure 2.1) HBase in Action


  3. The first set of files are the write-ahead log files handled by the HLog instances, created in a directory called .logs underneath the HBase root directory. HBase:The Definitive Guide


  4. The existing MVCC-like read-write consistency control (RWCC) provided sufficient isolation guarantees and the HLog (write ahead log) on HDFS provided sufficient durability. Apache Hadoop Goes Realtime at Facebook



1167、单词 versioned 的匹配文本:


  1. Documents in CouchDB are versioned, much like they would be in a regular version control system such as Subversion. CouchDB The Definitive Guide


  2. Because the Java language doesn’t provide or propose any tooling for managing versioned dependencies, teams will have to come up with their own strategies to store and retrieve them. Gradle in Action


  3. Dependency configuration Java has no real support for working with versioned libraries as dependencies. Gradle Effective Implementation Guide


  4. As a consequence, Spanner has evolved from a Bigtable-like versioned key-value store into a temporal multi-version database. Spanner - Google’s Globally-Distributed Database


  5. To make things more complicated: Freshness requirements Heavily-versioned documents (e.g. Storage Architecture and Challenges


  6. Normally Gradle caches a module once and never reloads it from its source repository again, since versioned modules are supposed to be immutable. Gradle Beyond The Basics


  7. Versioning support The data types are capable of being versioned to support updates to the client API. Cassandra The Definitive Guide


  8. VersionedWritable: This is a base implementation to allow writable classes to track versions over time. Hadoop Beginners Guide


  9. Hightide is an optimized, versioned distribution of the Jetty open source web container. ActiveMQ in Action


  10. To make the management of versioned data less onerous, we support two per-column-family settings that tell Bigtable to garbage-collect cell versions automatically. Bigtable - A Distributed Storage System for Structured Data



1168、单词 findbugs 的匹配文本:


  1. FindBugs findbugs XML or HTML Discovers potential bugs, performance issues, and bad coding practices. Gradle in Action


  2. The FindBugs Eclipse plugin The automatic install location for the Eclipse plugin is http://findbugs.cs.umd.edu/eclipse. Jenkins Continuous Integration Cookbook


  3. The plugin will add the tasks findbugsMain and findbugsTest to analyze the source code from the main and test source sets. Gradle Effective Implementation Guide


  4. To make the build more portable, we are storing the FindBugs installation in our project directory structure, in the tools/findbugs directory: Jenkins The Definitive Guide


  5. The default Sonar Way profile does include Java standards rules but for a more comprehensive inspection, Sonar Way with Findbugs is recommended. Sonar Code Quality Testing Essentials


  6. Getting ready Before starting this recipe, you should download FindBugs from the project's web page (http://findbugs.sourceforge.net/) Java 7 Concurrency Cookbook


  7. Uncompress into a folder in your computer; then copy the contents of the ./lib/ folder into your project’s /dep/findbugs folder (create the /dep/findbugs folder if necessary) Java 7 Recipes A Problem-Solution Approach



1169、单词 zkybase 的匹配文本:


  1. This chapter shows how Willie is using Spring to implement several features in an open source configuration management database (CMDB) called Zkybase1 (http:// zkybase.org/) Spring in Practice



1170、单词 downloadable 的匹配文本:


  1. A README.txt file has more details about the downloadable code. Pro Hadoop


  2. A minimal amount of additional software is required, and most of the tools for Java programming are available for free and downloadable via the WWW. Java Network Programming and Distributed Computing


  3. The essential part of that CD, the Thinking in C multimedia seminar (created for MindView by Chuck Allison), is now available as a downloadable Flash presentation. Thinking in Java 4th Edition


  4. The downloadable code example contains all the relevant code if you want to dig deeper. Gradle in Action


  5. For example, follow the link http://mirror.quintex.com/apache/hbase/ stable/ and you can see the downloadable files as shown in the following screenshot: Hadoop Operations and Cluster Management Cookbook


  6. To help, the modified configuration files mentioned in this recipe will be downloadable from the Packt website. Jenkins Continuous Integration Cookbook


  7. We can run the job again, and this time, we see the artifacts of our projects as downloadable links on the job page. Gradle Effective Implementation Guide


  8. Since a Maven repository is nothing more than a website containing downloadable POM files and modules in a predictable directory structure, declaring a Maven repository in Gradle begins with telling Gradle the URL of the repo. Gradle Beyond The Basics


  9. The downloadable files contain instructions on how to use them. Oracle JRockit The Definitive Guide


  10. There you’ll see a grid with all the latest downloadable MongoDB binaries. MongoDB in Action



1171、单词 bashrc 的匹配文本:


  1. On your Mac system, you can run the following command or add it to ~/.bashrc. Web Crawling and Data Mining with Apache Nutch


  2. Add the following two lines to your .bashrc or .bash_profile: Pro Hadoop


  3. Note These commands need to be run as root only if the credentials used to interact with nova-api have been put under /root/.bashrc. bk-compute-adminguide-grizzly


  4. For example, the .bashrc file under a user's home directory is the run configuration file for bash shell. Hadoop Operations and Cluster Management Cookbook


  5. For example, you can do this by defining the JAVA_HOME and PATH variables in the .bashrc file, as shown here: Jenkins The Definitive Guide


  6. In Linux, edit the .bashrc or .bash_profile configuration files, and append the following lines and substitute the path highlighted in the following snippet with yours: Sonar Code Quality Testing Essentials


  7. You must run these commands as root only if the credentials used to interact with nova-api are in /root/.bashrc. bk-admin-guide-cloud-latest


  8. Mac OS X systems don’t have the /etc/profile.d directory and they are typically single-user systems, so it’s best to put the environment variable definitions in your $HOME/.bashrc. Programming Hive



1172、单词 checkboxes 的匹配文本:


  1. It also saves you the trouble of error checking.) In this section, you will learn how to program checkboxes, radio buttons, lists of choices, and sliders. Core Java Volume I Fundamentals 9th Edition


  2. It includes handling various HTML form elements, (checkboxes, radio buttons, file uploads, downloads, and so on), JSON data, and XML. Performance Testing With JMeter 2.9


  3. It’s in the same class as MySQL Workbench, but both tools have some checkboxes in their feature matrices that the other doesn’t have. High Performance MySQL 3rd Edition


  4. Click on the Extensions panel, scroll to the section Language and Framework Tooling, and tick the checkboxes next to the extensions named Gradle Support and Groovy-Eclipse, as shown in figure 10.9 Gradle in Action


  5. This is defined in the For previous build result set of checkboxes. Jenkins Continuous Integration Cookbook


  6. Installing all that stuff will take a long time so if you just want the adb tool, uncheck all the other checkboxes and then click Install. Testable JavaScript


  7. They should be listed with a black font, and the checkboxes next to them should be checked, as seen in Figure 14-5 Programming Scala


  8. Let's create a test which checks that specified checkboxes are selected when page is displayed, as follows: Selenium Testing Tools Cookbook


  9. An HTML form When the user clicks the Submit button, the text in the text fields and the settings of the checkboxes and radio buttons are sent back to the web server. Core Java Volume II Advanced Features 9th Edition


  10. For instance, it doesn’t matter if it’s a set of checkboxes, a pop-up list, or a text field. Java Network Programming 4th Edition



1173、单词 clob 的匹配文本:


  1. However, make sure that the problem is caused by the design of the DOM (e.g., a single domain object was found containing a large number of long text fields like CLOB or TEXT, which impact the performance of data retrieval) Pro Spring 3


  2. For example, MySQL’s direct mode cannot handle large objects, such as CLOB or BLOB columns, because Sqoop needs to use a JDBC-specific API to load these columns into HDFS.) For databases that provide such tools, Sqoop can use these to great effect. Hadoop:The Definitive Guide 3rd Edition


  3. To read a LOB, execute a SELECT statement and call the getBlob or getClob method on the ResultSet. Core Java Volume II Advanced Features 9th Edition


  4. Because the size of the Strings that need to be stored is unlimited, it is best to use a character large object (CLOB) data type to store the data. Java 7 Recipes A Problem-Solution Approach


  5. As their names imply, a CLOB column holds a large character sequence, and a BLOB column can store a large byte sequence. Pro JPA2



1174、单词 akkaoogle 的匹配文本:


  1. To understand how the individual pieces of Akka fit together, you’re going to build a large real-time product search application using Akka called Akkaoogle. Scala in Action



1175、单词 sys 的匹配文本:


  1. The benchmark was the standard SysBench read-only workload, with all data in InnoDB, fully in-memory and CPU-bound. High Performance MySQL 3rd Edition


  2. Keep that in mind if you are cutting and pasting this code: #!/usr/bin/python import sys from datetime import datetime, timedelta. Hadoop RealWorld Solutions Cookbook


  3. Puppet (http://www.puppetlabs.com/puppet) is a reaction to these gaps in the tools available to SysAdmins, Operators and Developers. Pro Puppet


  4. However, if you are interested only in the existence of a file or directory, the exists() method on FileSystem is more convenient: Hadoop:The Definitive Guide 3rd Edition


  5. The supplied ChecksumFileSystem class is loaded by the client and uses local disk paths to store all the data. HBase:The Definitive Guide


  6. In this code, a new JsonWriter instance is created and configured to write to System.out. Java EE 7 Essentials


  7. The FileSystem object, zipFileSys, was created in the try-with-resources block, so the resource will automatically be closed, but if you do not wish to use the nested try-with-resources block you must use the FileSystem class' close method to close the resource manually. Java 7 New Features Cookbook


  8. Sun originally planned to deprecate PrintStream in favor of PrintWriter but backed off when it realized this step would invalidate too much existing code, especially code that depended on System.out. Java Network Programming 4th Edition


  9. For example, if monitoring operating system statistics suggests high sys CPU utilization, you should profile the application to determine what methods are consuming the highest sys CPU cycles. Java Performance


  10. However, there is one external file it will absolutely not run without: errmsg.sys. Understanding MySQL Internals



1176、单词 numa 的匹配文本:


  1. For example, one implementation may be suitable for a small shared-memory machine, another for a large NUMA multi-processor, and yet another for an even larger collection of networked machines. MapReduce - Simplied Data Processing on Large Clusters


  2. MongoDB will warn you about a variety of issues: that you’re running on a 32-bit machine (which MongoDB is not designed for), that you have NUMA enabled (which can slow your application to a crawl), or that your system does not allow enough open file descriptors (MongoDB uses a lot of file descriptors) MongoDB The Definitive Guide 2nd Edition


  3. Typically, in a NUMA architecture, the address space is divided between processors. Oracle JRockit The Definitive Guide


  4. If the application is deployed on a NUMA (Non-Uniform Memory Architecture) system, an additional HotSpot VM command line option that can be used with the throughput garbage collector is. Java Performance


  5. In the solution to this recipe, two values are accepted as input and they are referenced by their id attributes as numa and numb. Java 7 Recipes A Problem-Solution Approach



1177、单词 discriminator 的匹配文本:


  1. For example, Roo doesn’t have a command to let you configure the discriminator for the SINGLE_TABLE model, so you’ll just have to edit the classes and add the proper annotations yourself. Spring Roo in Action


  2. Uses one table for all classes and distinguishes between instance types by using a discriminator column. Java Performance


  3. The name element of this annotation specifies the name of the column that should be used as the discriminator column, and if not specified will be defaulted to a column named “DTYPE” Pro JPA2
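
As the Pro JPA2 quote explains, the discriminator column (DTYPE by default) records which concrete class each row of a SINGLE_TABLE inheritance hierarchy belongs to. A minimal JPA sketch using the classic javax.persistence namespace; the entities are invented:

```java
import javax.persistence.DiscriminatorColumn;
import javax.persistence.DiscriminatorValue;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Inheritance;
import javax.persistence.InheritanceType;

// All subclasses share one table; PET_TYPE records the concrete type of each row.
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "PET_TYPE")   // omit this and the column defaults to DTYPE
public class Pet {
    @Id @GeneratedValue
    private long id;
    private String name;
}

@Entity
@DiscriminatorValue("CAT")   // value written into PET_TYPE for Cat rows
class Cat extends Pet {
    private boolean indoor;
}
```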



1178、单词 denoting 的匹配文本:


  1. One of rows and columns (but not both) may be zero, denoting an arbitrary number of components per row or column. Core Java Volume I Fundamentals 9th Edition


  2. The revision number is the md5 hash of the transport representation of a document with an N- prefix denoting the number of times a document got updated. CouchDB The Definitive Guide


  3. Now we need to output each vertex''s current value denoting the minimum number of hops from the source vertex. Hadoop RealWorld Solutions Cookbook


  4. A compile-time constant expression is an expression denoting a value of primitive type or a String that does not complete abruptly and is composed using only the following: The Java Language Specification Java SE 7 Edition


  5. All nodes other than the starting node start with a distance of infinity, denoting. Hadoop in Practice


  6. As can be seen from the previous example, the output is sorted in the form of an inheritance tree, with the stars denoting the depth in the tree. Oracle JRockit The Definitive Guide


  7. The method add() adds an object to the set, and the method contains() returns a Boolean true/false value denoting whether an object is in the set or not. Hadoop in Action


  8. Without creating and denoting the proper keystore and truststore, you can expect to see the following exceptions: ActiveMQ in Action


  9. Alternative supports a select method that returns an index denoting which of them can (and then must) be chosen. Concurrent Programming in Java Design Principles and Pattern


  10. You will first add a JLabel component that represents a read-only label denoting the first name beside the input field. Java 7 Recipes A Problem-Solution Approach



1179、单词 tokenize 的匹配文本:


  1. The following listing shows a custom Lucene Analyzer that can tokenize a document using the StandardTokenizer, a somewhat error-resilient tokenizer implementation in Lucene. Mahout in Action


  2. The usual solution is to read in a line of text, tokenize it, and then use the various parse methods of Integer, Double, etc., to parse the data: Thinking in Java 4th Edition


  3. As part of listing 7.1, we include a list of stop words (fetched from http://www.textfixer.com/resources/), as well as functions to both tokenize and index a document, taking into consideration the stop words that we want to remove. Redis in Action


  4. The pipeline also has two functions — a tokenize and a count function (which is an aggregator), and the workflow has a GroupBy pipe assembly. Professional Hadoop Solutions


  5. The call to strip() method removes any leading/trailing whitespace, and then we tokenize it into an array of named variables. Hadoop RealWorld Solutions Cookbook


  6. A better approach would be to tokenize each line and produce an intermediary file containing a word per line. Hadoop in Practice


  7. You can even use this class to tokenize words that are delimited by characters other than spaces: Java in a Nutshell 5th Edition


  8. In line 3, we have to cast the value to a character array in order to tokenize it with the built-in TOKENIZE function. Cassandra The Definitive Guide


  9. SimpleAnalyzer left in what may be considered irrelevant (stop) words, but it did lowercase and tokenize at nonalphabetic character boundaries. Lucene in Action 2nd Edition


  10. Now, we will use the TRANSFORM feature and two Python scripts, one to tokenize lines of text into words, and the second to accept a stream of word occurrences and an intermediate count of the words (mostly the number “1”) and then sum up the counts for each word. Programming Hive
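
Echoing the Thinking in Java quote (read a line of text, tokenize it, then parse the pieces), here is a small dependency-free Java sketch; the input string is arbitrary:

```java
public class TokenizeDemo {
    public static void main(String[] args) {
        String line = "12 7 42 3";

        // Tokenize on whitespace, then parse each token into a number.
        int sum = 0;
        for (String token : line.trim().split("\\s+")) {
            sum += Integer.parseInt(token);
        }
        System.out.println("sum = " + sum);   // 64
    }
}
```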



1180、单词 replicator 的匹配文本:


  1. When Solr runs in a replication mode, the index replicator is responsible for distributing indexes across multiple slaves. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  2. Replicator An interface to the replication system that enables you to initiate replication between local and remote databases. Getting Started with CouchDB


  3. Another option is to use Continuent’s Tungsten Replicator, which we’ll discuss later in this chapter. High Performance MySQL 3rd Edition


  4. Replicating images across multiple data centers The image service comes with a tool called glance-replicator that can be used to populate a new glance server using the images stored in an existing glance server. bk-compute-adminguide-grizzly


  5. Now click “Replicator” in the sidebar and choose hello-world as the source and hello-replication as the target. CouchDB The Definitive Guide


  6. For each local partition, the replicator compares it against the replicated copies in the other zones to see if there are any differences. bk-admin-guide-cloud-latest


  7. This is useful when you have large databases and are using the Futon Replicator to replicate these databases between instances or if you are compacting a large database. Beginning CouchDB


  8. Finally, Lounge includes a replicator which keeps design documents synchronized and can replicate documents for redundancy. Scaling CouchDB



1181、单词 jpeg 的匹配文本:


  1. If the user should choose a JPEG image file, the extension can be either .jpg or .jpeg. Core Java Volume I Fundamentals 9th Edition


  2. We’re using the -H option to tell CouchDB that we’re uploading a JPEG file. CouchDB The Definitive Guide


  3. What follows is a discussion of the growing complexity of the internet in terms of the types of information available (PDF files, Word documents, JPEG images, QuickTime files), and in terms of the number of languages and encoding schemes that this information is provided in. Tika in Action


  4. The response is the actual file attachment (in this example, a JPEG photo) Beginning CouchDB


  5. It is recommended that you enable compression unless you have a reason not to do so—for example, when using already compressed content, such as JPEG images. HBase:The Definitive Guide


  6. It was possible to read GIF and JPEG images, but there was no official support for writing images at all. Core Java Volume II Advanced Features 9th Edition


  7. It could well be pointing to a GIF or JPEG image, an MP3 sound file, or something else entirely. Java Network Programming 4th Edition


  8. Among other things, it has its own Content-Type header indicating that it’s a JPEG image. Spring in Action 4th Edition
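
Most of the quotes treat JPEG simply as one binary content type among many. As a small, hedged sketch of reading one in plain Java (the file name photo.jpg is hypothetical), the standard ImageIO API returns a BufferedImage:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class JpegInfo {
    public static void main(String[] args) throws IOException {
        // ImageIO picks a JPEG reader based on the file contents.
        BufferedImage img = ImageIO.read(new File("photo.jpg")); // hypothetical path
        if (img != null) {
            System.out.println(img.getWidth() + "x" + img.getHeight());
        }
    }
}
```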



1182. Matching text for the word scaladoc:


  1. Because the collections library is always improving, the best source for learning these methods is the scaladoc documentation for the current release. Scala in Depth


  2. Creating documentation with the Scala plugin The Scala plugin also adds a scaladoc task to our build. Gradle Effective Implementation Guide


  3. Predef is interesting, and you can learn a lot about Scala by looking at the scaladoc or source of the scala.Predef object. Scala in Action


  4. For more information, you can also consult the library’s Scaladoc documentation, which is available in the distribution and online at http://www.scala-lang.org. Programming in Scala 2nd Edition


  5. This documentation was generated using the scaladoc tool, analogous to Java’s javadoc tool. Programming Scala


  6. You will also learn how to browse the Scaladoc documentation at a beginner’s level. Scala for the Impatient



1183. Matching text for the word memleak:


  1. The Memleak tool, which is part of the JRockit Mission Control suit is ideal for this kind of analysis. Oracle JRockit The Definitive Guide



1184. Matching text for the word booleans (a short Java sketch follows the examples):


  1. For booleans, the bitwise operators have the same effect as the logical operators except that they do not short circuit. Thinking in Java 4th Edition


  2. The Boolean class now includes the static methods logicalAnd, logicalOr, and logicalXor, to apply the and, or, and xor operations between two booleans. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. You can choose from several different parameter types, such as Strings, Booleans, and drop-down lists. Jenkins The Definitive Guide


  4. A BitSet saves space compared to an array of booleans since the. Java Threads 3rd Edition


  5. The values stored in the document can be strings, numbers, dates, booleans, lists, maps, or other data types. Beginning CouchDB


  6. The term filter is used to filter by exact values, be they numbers, dates, booleans, Elasticsearch The Definitive Guide


  7. Avro also supports primitive types such as integers, floats, and booleans as well as other structured types such as arrays and enums. Hadoop Beginners Guide


  8. Any data type manipulation other than with ints or booleans produced highly varying and misleading times. Java Performance Tuning 2nd Edition


  9. You’ll notice the call to super() where you will pass in Booleans to the super class to set the JInternalFrame object to be resizable, closable, maximizable, and iconifiable. Java 7 Recipes A Problem-Solution Approach


  10. Literals There are a number of different literal types that can be used in JP QL, including strings, numerics, booleans, enums, entity types, and temporal types. Pro JPA2
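
Quotes 1 and 2 above make two concrete points: the bitwise operators on booleans do not short-circuit, and Java 8's Boolean class gained logicalAnd, logicalOr, and logicalXor. A minimal sketch of both, assuming nothing beyond the JDK:

```java
public class BooleansDemo {
    static boolean sideEffect() {
        System.out.println("evaluated");
        return true;
    }

    public static void main(String[] args) {
        // '&' on booleans evaluates both operands (no short circuit) ...
        boolean a = false & sideEffect();   // prints "evaluated"
        // ... whereas '&&' skips the right-hand side entirely.
        boolean b = false && sideEffect();  // prints nothing

        // Java 8 static helpers on java.lang.Boolean.
        System.out.println(Boolean.logicalXor(a, true)); // true
        System.out.println(b);                           // false
    }
}
```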



1185. Matching text for the word deserializing (a short Java sketch follows the examples):


  1. The core concept is that of defining a type in a text file and having a tool generate per-language APIs for accessing the data structure and for serializing and deserializing the data structure. Pro Hadoop


  2. For this reason, a SUID is added as a field, to enable a deserializing ObjectInputStream to check for a match. Java Network Programming and Distributed Computing


  3. Additionally, leaf-level indexes are structured in a way to allow a binary search on the key without deserializing. Professional Hadoop Solutions


  4. This interface permits implementors to compare records read from a stream without deserializing them into objects, thereby avoiding any overhead of object creation. Hadoop:The Definitive Guide 3rd Edition


  5. It does deserializing to allow multiple object references to be recovered. Core Java Volume II Advanced Features 9th Edition


  6. Encoding an object as a byte stream is known as serializing the object; the reverse process is known as deserializing it. Effective Java 2nd Edition


  7. These byte-stream classes are used for serializing and deserializing the internal state of objects for storage or interprocess communication. Java in a Nutshell 5th Edition


  8. Deserializing Booleans would have required special handling to return the canonical Boolean. Java Performance Tuning 2nd Edition


  9. Serializing and deserializing data to pass it in and out of the pipe is relatively inefficient. Programming Hive


  10. This is in addition to the already robust library for handling input and output streams, and serializing (and deserializing) object information. Java 7 Recipes A Problem-Solution Approach
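
Several of the quotes describe serializing an object to a byte stream and deserializing it back, including the role of the serialVersionUID (SUID). A minimal, self-contained Java round trip, with a hypothetical Point class standing in for real application data:

```java
import java.io.*;

public class SerializationRoundTrip {
    // The class to serialize must implement Serializable.
    static class Point implements Serializable {
        private static final long serialVersionUID = 1L; // the SUID mentioned above
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) throws Exception {
        // Serializing: encode the object as a byte stream.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Point(3, 4));
        }
        // Deserializing: rebuild the object from the byte stream.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            Point p = (Point) in.readObject();
            System.out.println(p.x + "," + p.y); // 3,4
        }
    }
}
```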



1186. Matching text for the word encapsulating:


  1. With the basic Checkstyle definitions in place, you can apply the encapsulating script plugin to all subprojects of your multiproject build, as shown in the next listing. Gradle in Action


  2. It’s likely that you may want to apply this manipulation to a date in several points of your code, and for this reason we suggest encapsulating its logic in a proper class as we did here. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. Listing 9-10 (code file: class TableManager) shows the class encapsulating HTablePool (TableManager), which additionally provides all configuration information (table names, family names, and so on) Professional Hadoop Solutions


  4. At the end of this stage the physical plan looks like a DAG of tasks with each task encapsulating a part of the plan. Hive A Petabyte Scale Data Warehouse Using Hadoop


  5. After confirming that the job has completed, we call the RunningJob’s getCounters() method, which returns a Counters object, encapsulating all the counters for a job. Hadoop:The Definitive Guide 3rd Edition


  6. The Java platform supports a number of permission classes, each encapsulating the details of a particular permission. Core Java Volume II Advanced Features 9th Edition


  7. In effect, we are applying the advice of Item 13 by encapsulating the lock object within the object it synchronizes. Effective Java 2nd Edition


  8. Encapsulating the complexity makes it easier for end users to construct complex queries from reusable parts. Programming Hive


  9. Optimizing for extensibility entails removing hard-wired decisions and instead allowing them to vary, for example by encapsulating them as overridable methods, supporting callback hooks, or abstracting functionality via interfaces that can be re-implemented in completely different ways by dynamically loaded components. Concurrent Programming in Java Design Principles and Pattern


  10. Methods can be very useful for encapsulating logic that performs tasks. Java 7 Recipes A Problem-Solution Approach



1187. Matching text for the word adt:


  1. So far you’ve learned that ADT is a kind of type that represents a set of values. Scala in Action


  2. Thus any other part of the program that needs to perform an operation on the ADT can do so by calling the appropriate method. Data Structures and Algorithm Analysis in Java 3rd Edition


  3. List is just one example of what is called an algebraic data type (ADT). (Somewhat confusingly, ADT is sometimes used in OO to stand for "abstract data type".) An ADT is just a data type defined by one or more data constructors, each of which may contain zero or more arguments. Functional Programming in Scala


  4. You can set up the Android Test framework with Eclipse by installing the Android Development Tools (ADT) plugin. Selenium Testing Tools Cookbook


  5. It is easy to define your own ADT classes, for example, Fraction, Interval, ComplexFloat, and so on. Concurrent Programming in Java Design Principles and Pattern


  6. Download and install the ADT plug-in for Eclipse according to the documentation that can be found at http://developer.android.com/sdk/eclipse-adt.html#installing. Java 7 Recipes A Problem-Solution Approach



1188. Matching text for the word retrying (a short Java sketch follows the examples):


  1. Disk failover mode stores undeliverable data to the local disk (or sometimes a local database) and keeps retrying until the data can be delivered to the next recipient in your data flow. Apache Flume Distributed Log Collection for Hadoop


  2. If the runtimes of tasks are too long, a single task may hold the cluster for a long period, or retrying a failed task becomes expensive. Pro Hadoop


  3. To throw an error in such a way that the user will be asked to authenticate, before retrying the request, use JavaScript code like: CouchDB The Definitive Guide


  4. In this case, a “standard” Hadoop retrying mechanism will do you no good. Professional Hadoop Solutions


  5. The framework tries to narrow down the skipped range by retrying until this threshold is met or all attempts get exhausted for this task. Hadoop RealWorld Solutions Cookbook


  6. Even if a node is a group member, communications with it may fail, and such failures must be handled in the usual ways (retrying, trying a different member of the group, and so on) Hadoop:The Definitive Guide 3rd Edition


  7. On the contrary, throwing a timeout exception or retrying the RPC request causes tasks to fail unnecessarily or add additional load to a RPC server. Apache Hadoop Goes Realtime at Facebook


  8. After checking the datanode processes’ logs and confirming that they were still alive and retrying, Mike checked the namenode user interface again. Hadoop Operations


  9. None of these exception classes have any special methods you wouldn’t find in any other exception class, but you can take advantage of these subclasses to provide more informative error messages or to decide whether retrying the offending operation is likely to be successful. Java Network Programming 4th Edition


  10. You can contain local failure to the current method, rather than throwing exceptions back to clients, if you have reason to believe that retrying an action will succeed. Concurrent Programming in Java Design Principles and Pattern
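
The quotes describe retry behaviour inside Flume, Hadoop, and RPC layers; none of that code is reproduced here. As a generic, hedged sketch only, a bounded retry helper with a fixed back-off might look like this in Java (the helper name withRetry and its parameters are assumptions):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Invoke the task, retrying up to maxAttempts times with a fixed delay between attempts.
    static <T> T withRetry(Callable<T> task, int maxAttempts, long delayMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;                    // remember the failure
                Thread.sleep(delayMillis);   // back off before retrying
            }
        }
        throw last; // all attempts exhausted
    }

    public static void main(String[] args) throws Exception {
        String result = withRetry(() -> "ok", 3, 100);
        System.out.println(result);
    }
}
```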



1189. Matching text for the word rmiregistry (a short Java sketch follows the examples):


  1. Included as part of the Java platform is an application called rmiregistry, which runs as a separate process and allows applications to register RMI services or obtain a reference to a named service. Java Network Programming and Distributed Computing


  2. For security reasons, the rmiregistry service that is part of the JDK only allows binding calls from the same host. Core Java Volume II Advanced Features 9th Edition


  3. A remote Java process may be probed by specifying the remote process id, the remote host name, and the port number on which the remote host’s rmiregistry server is running (if other than the default of 1099) Java in a Nutshell 5th Edition
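
A hedged sketch of how a client JVM typically talks to an rmiregistry process already running on the default port 1099; the bound name "Clock" is hypothetical, and a real example would also need a Remote interface plus an exported server implementation:

```java
import java.rmi.Remote;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

public class RegistryLookup {
    public static void main(String[] args) throws Exception {
        // Connect to an rmiregistry on this host (Registry.REGISTRY_PORT is 1099).
        Registry registry = LocateRegistry.getRegistry("localhost", Registry.REGISTRY_PORT);

        // List the names that servers on this host have bound ...
        for (String name : registry.list()) {
            System.out.println(name);
        }
        // ... and look one of them up (the name "Clock" is made up for this sketch).
        Remote stub = registry.lookup("Clock");
        System.out.println(stub);
    }
}
```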



1190. Matching text for the word associativity (a short Java sketch follows the examples):


  1. Scala’s treatment of the : character in operator notation is a general concept designed to handle cases like this where left associativity is not as expressive as right associativity. Scala in Depth


  2. The associativity of the cons ( :: ) is right instead of left. Scala in Action


  3. With this more round-about algorithm, the reducer code can be used as a combiner as associativity is preserved. MapReduce Design Patterns


  4. Operator associativity will be described in more detail in Section 5.8 Programming in Scala 2nd Edition


  5. The constraints of commutativity and associativity are not too restrictive; they permit a wide range of valuable analyses, including: counting, filtering, sampling, generating histograms, finding the most frequent items, and many more. Interpreting the Data Parallel Analysis with Sawzall


  6. Associativity is a nice property to expect of operations in an algebra. Functional Programming in Scala


  7. The P and A columns of the table specify the precedence and associativity of each group of related operators, respectively. Java in a Nutshell 5th Edition


  8. Operator precedence of operators whose name starts with the corresponding characters; associativity of operators whose name ends with the corresponding characters. Steps in Scala An Introduction to Object-Functional Programming
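
The Scala-specific rule quoted above (an operator ending in ':' is right-associative) has no Java counterpart, but the underlying idea of associativity does; a small Java-only sketch:

```java
public class AssociativityDemo {
    public static void main(String[] args) {
        // Subtraction is left-associative: evaluated as (10 - 4) - 3.
        System.out.println(10 - 4 - 3);      // 3, not 9

        // Assignment is right-associative: evaluated as a = (b = 5).
        int a, b;
        a = b = 5;
        System.out.println(a + " " + b);     // 5 5

        // String concatenation is left-associative, which matters with mixed types:
        System.out.println(1 + 2 + "x");     // "3x"  ((1 + 2) + "x")
        System.out.println("x" + 1 + 2);     // "x12" (("x" + 1) + 2)
    }
}
```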



1191. Matching text for the word inmemory:


  1. When a deferred operation is evaluated, it converts its result PCollection into a materialized state, e.g., as an inmemory data structure or as a reference to a temporary intermediate file. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  2. The order of the data sets in the line of code matters because all but the first data sets listed are stored inmemory. MapReduce Design Patterns


  3. The test is very fast because it is inmemory, but it has a nonzero probability of false positives, in which case the regular get() method is called. Hadoop:The Definitive Guide 3rd Edition


  4. Like a CheckpointNode, the BackupNode is capable of creating periodic checkpoints, but in addition it maintains an inmemory, up-to-date image of the file system namespace that is always synchronized with the state of the NameNode. The Hadoop Distributed File System


  5. Memstore flush size We discussed the storage model earlier and identified how HBase uses an inmemory store to buffer values before writing them to disk as a new storage file in an operation called flush. HBase:The Definitive Guide


  6. If small enough, the secondary table is pulled into the memory of each mapper / reducer process for inmemory lookups, which typically is the fastest method for joining. Tenzing - A SQL Implementation On The MapReduce Framework


  7. Because the update mechanism is SQL, you can log the SQL statements locally to disk as you execute them on the inmemory database, a process called journaling. Java Performance Tuning 2nd Edition


  8. Its speed is the result of the combination of a fast transactional journal comprised of data log files, the highly optimized indexing of message IDs, and inmemory message caching. ActiveMQ in Action


  9. Between the two in-memory parsers, JAXB has a larger cost of building the inmemory tree because of the higher binding overhead incurred by JAXB. Java Performance


  10. A typical strategy for persistence providers dealing with bulk operations is to invalidate any inmemory cache of data related to the target entity. Pro JPA2



1192. Matching text for the word breakpoint:


  1. A debugger runs your program in full motion until it reaches a breakpoint, and then you can look at everything that interests you. Core Java Volume I Fundamentals 9th Edition


  2. For Figure 7-5, a breakpoint was set in the map task, and the job launched via the Debug As Java Application menu item. Pro Hadoop


  3. You can also use the familiar debugger statement somewhere in your code to trigger a breakpoint. Testable JavaScript


  4. You also can modify your code to catch an exception, which will allow you to set a breakpoint in your IDE and observe the input that’s causing your exception. Hadoop in Practice


  5. This command should be used when the current thread is stopped at a breakpoint. Java in a Nutshell 5th Edition


  6. Unfortunately, you cannot set a breakpoint on a native method, so this is not possible. Java Performance Tuning 2nd Edition


  7. By default, when you debug a concurrent application and the debugger finds a breakpoint, it only stops the thread that has that breakpoint while the rest of the threads continue with their execution. Java 7 Concurrency Cookbook


  8. The debugger shows you the stack trace at the current breakpoint. Understanding MySQL Internals


  9. You simply right-click on the line and select breakpoint from the list. Selenium 2 Testing Tools Beginner's Guide



1193. Matching text for the word pong:


  1. PongWebSocketFrame WebSocketFrame which is sent as response to a PingWebSocketFrame. Netty in Action MEAP V08


  2. From the WSDL file, SoapUI created a basic test for the Ping and Pong services. Jenkins Continuous Integration Cookbook


  3. Here is an example of two actors communicating with each other by sending ping-pong messages: Scala in Action


  4. If the answer is pong, then you succeeded; if you get pang as reply, that means you couldn’t connect to the other node. RabbitMQ in Action


  5. If we try to use a REQ socket in anything other than a strict send/receive fashion, we’ll get an error (technically, the REQ socket implements a small finite-state machine to enforce the send/receive ping-pong, so the error code is called “EFSM”) ZeroMQ


  6. Redis should respond with the message "PONG" if everything has been set up properly. Hadoop RealWorld Solutions Cookbook


  7. There will be Ping messages that the clock sends the simulants, and Pong messages that the simulants send back when they are ready for the clock to move forward. Programming in Scala 2nd Edition


  8. A pong message is a WebSocket control message and is generally not dealt with at the application layer. Java EE 7 Essentials


  9. It’s vulnerable to a “ping-pong” denial-of-service attack, in which spoofed Internet packets cause two hosts to spew an unlimited amount of data at each other. Java Network Programming 4th Edition


  10. This means that MarcoHandler will also handle binary and pong messages, but will do nothing with those messages. Spring in Action 4th Edition



1194. Matching text for the word referent (a short Java sketch follows the examples):


  1. Here, the referent can be inserted into any queue whose element type is a supertype of the type T of the referent; T is the lower bound for the wildcard. The Java Language Specification Java SE 7 Edition


  2. For a single instance, Spring DM will automatically look for a replacement when the current service referent disappears. Spring Dynamic Modules in Action


  3. However, reference handles don’t work well with random access, because the referent class may occur at any point in the preceding stream—that is, there is state stored in the stream. Hadoop:The Definitive Guide 3rd Edition


  4. A weak reference is one that does not prevent the referent object from being garbage-collected. Java in a Nutshell 5th Edition


  5. You can access the private object (termed the referent) using the Reference.get( ) method. Java Performance Tuning 2nd Edition
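
Quotes 4 and 5 describe a weak reference whose referent can be collected and then retrieved, or not, via get(). A minimal Java sketch; note that System.gc() is only a request, so the final line may still print a non-null value:

```java
import java.lang.ref.WeakReference;

public class ReferentDemo {
    public static void main(String[] args) {
        Object referent = new Object();
        // The weak reference alone does not keep its referent alive.
        WeakReference<Object> ref = new WeakReference<>(referent);
        System.out.println(ref.get() != null); // true: a strong reference still exists

        referent = null;  // drop the only strong reference
        System.gc();      // request a collection; not guaranteed to run immediately

        // Once the referent has been reclaimed, get() returns null.
        System.out.println(ref.get());
    }
}
```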



1195. Matching text for the word xenapi:


  1. The nova-compute process is primarily a worker daemon that creates and terminates virtual machine instances via hypervisor's APIs (XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware, etc.) bk-compute-adminguide-grizzly


  2. XenAPI (XenServer/XCP) Compute uses the XenAPI agent to inject passwords into guests when using the XenAPI hypervisor backend. bk-admin-guide-cloud-latest



1196. Matching text for the word ldif:


  1. Add the LDIF records from the command line; you will then be asked for the administrator's password you filled in, in step 1 Jenkins Continuous Integration Cookbook


  2. Fortunately, this is a relatively simple exercise, using the embedded LDAP server and a sample LDIF file. Spring Security 3.1


  3. I will be importing a couple of users using an LDIF file. Pro Spring Security


  4. You can import your host definitions using LDIF files or manipulate your directory using your choice of tools such as phpldapadmin (http://phpldapadmin.sourceforge.net/wiki/index.php/Main_Page) Pro Puppet


  5. When the LDAP server starts, it will attempt to load data from any LDIF files that it can find in the classpath. Spring in Action 4th Edition



1197. Matching text for the word docid:


  1. There are a few things going on here, so let’s list them out to be clear: • The URL contains the full path to the document (that is DATABASE/DOCID) Getting Started with CouchDB


  2. As we’ve seen, design documents are normal JSON documents, denoted by the fact that their DocID is prefixed with _design/ CouchDB The Definitive Guide


  3. A Document has a required integer DocId and optional Links, containing a list of Forward and Backward entries holding DocIds of other web pages. Dremel - Interactive Analysis of WebScale Datasets


  4. When skipping a large number of rows, CouchDB still needs to scan the entire B-tree index, starting from the startkey and startkey_docid (if specified) Scaling CouchDB


  5. Some of the Solr caches (the document cache, actually) operate on internal identifiers called docid. Solr 3.1 Cookbook


  6. Every Web page has an associated ID number called a docID which is assigned whenever a new URL is parsed out of a Web page. The anatomy of a large-scale hypertextual Web search engine


  7. In the reduce phase, terms are sorted so that all term / docID pairs are sent to the same reducer process for each unique term. Solr in action


  8. For each of the top N results, TopDocs records the int docID (which you can use to retrieve the document) as well as the float score. Lucene in Action 2nd Edition


  9. Some of the Solr caches (documentCache actually) operate on internal identifiers called docid. Apache Solr 4 Cookbook



1198. Matching text for the word tiered:


  1. Tiered data collection (multiple flows and/or agents) You can chain your Flume agents depending on your particular use case. Apache Flume Distributed Log Collection for Hadoop


  2. But you can smartly reduce the distance-measure computation and clustering time by an order of magnitude by using a two-tiered approach for batch clustering. Mahout in Action


  3. Site Architecture Tagged’s scale-out, tiered site architecture, shown in Figure 9-1, is typical of social networking websites. Monitoring with Ganglia


  4. As a result, he has been involved in the planning, development, implementation, integration, and testing of numerous applications, including multi-tiered, standalone, distributed, and cloud-based applications. Performance Testing With JMeter 2.9


  5. The tiered merge policy This is the default merge policy that ElasticSearch uses. Mastering ElasticSearch


  6. Our design further seeks to minimize metadata related disk I/O through the use of a two-tiered storage strategy, and to maximize locality and cache efficiency with Dynamic Subtree Partitioning [30] Ceph A Scalable, High-Performance Distributed File System


  7. We intend to implement a priority queue so that queries can be tiered by priority. Tenzing - A SQL Implementation On The MapReduce Framework


  8. There are tiered cost models, so the actual costs tend to have smaller marginal increases at higher levels. Hadoop Beginners Guide


  9. By far, the N-tiered tree network (see Figure 4-3) is the predominant architecture deployed in data centers today. Hadoop Operations


  10. At the time of this writing development is underway to introduce a hybrid HotSpot JIT compiler that combines the major attributes of the Client JIT compiler and the Server JIT compiler called tiered compilation. Java Performance



1199. Matching text for the word tableswitch:


  1. The compiler will then emit a tableswitch bytecode for this pattern match. Scala in Depth


  2. Compilation of switch statements uses the tableswitch and lookupswitch instructions. The Java Virtual Machine Specification Java SE 7 Edition


  3. It contains either a "tableswitch" bytecode identifier or a "lookupswitch" bytecode identifier. Java Performance Tuning 2nd Edition



1200. Matching text for the word sqlexception (a short Java sketch follows the examples):


  1. Suppose the code is inside a method public void updateRecord() throws SQLException. Core Java Volume I Fundamentals 9th Edition


  2. Exception Handling Because Spring advocates using runtime exceptions rather than checked exceptions, you need a mechanism to translate the checked SQLException into a runtime Spring JDBC exception. Pro Spring 3


  3. Each SQLException has a chain of SQLException objects that are retrieved with the getNextException method. Core Java Volume II Advanced Features 9th Edition


  4. The catch block then handles a SQLException, so if one is thrown the exception will be handled using the code within the catch block. Java 7 Recipes A Problem-Solution Approach


  5. And, to appease JDBC’s anger, you must catch SQLException, a checked exception, even though there’s not a lot you can do if it’s thrown. Spring in Action 4th Edition
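
Quote 3 mentions that each SQLException carries a chain of further SQLExceptions reachable via getNextException. A hedged JDBC sketch of walking that chain; the jdbc:h2:mem:test URL is a placeholder and assumes an H2 driver on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class SqlExceptionDemo {
    public static void main(String[] args) {
        // The JDBC URL is hypothetical; any driver/database would do.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test");
             Statement stmt = conn.createStatement()) {
            stmt.executeQuery("SELECT * FROM no_such_table");
        } catch (SQLException e) {
            // Walk the chain of SQLExceptions mentioned in the quotes above.
            for (SQLException cur = e; cur != null; cur = cur.getNextException()) {
                System.err.println(cur.getSQLState() + ": " + cur.getMessage());
            }
        }
    }
}
```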



1201. Matching text for the word confi:


  1. A Solr core is nothing but the running instance of a Solr index along with its confi guration. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  2. Along the way, you’ll explore Springbased approaches to domain-specifi c challenges like CRM, confi guration management, and site reliability. Spring in Practice


  3. And it uses Spring’s familiar programming model to simplify confi guration and implementation, so it’ll be comfortably familiar to most Java developers. Spring Batch in Action


  4. Along the way, you’ll learn to handle data access and web-based components, and explore topics like unit testing and confi guration in OSGi. Spring Dynamic Modules in Action


  5. Numerous examples will help you develop confi dence in the crucial area of data modeling. MongoDB in Action



1202. Matching text for the word rev (a short Java sketch follows the examples):


  1. This will save the document, and then display the new document with the saved revision information (the _rev field) Getting Started with CouchDB


  2. We’ll go into more detail about this in later chapters, but for now, the important thing to note is that _rev acts like a safety feature when saving a document. CouchDB The Definitive Guide


  3. The REV column is for storing each revision number, which will be autoincremented when a new history record is created. Pro Spring 3


  4. Reverse could be implemented using concatenation (:::), like in the following method, rev: Programming in Scala 2nd Edition


  5. Issue the following command to delete your johndoe document from the database (be sure to replace the revision number after ?rev= with the actual revision number of your document): Beginning CouchDB


  6. This requires us to pass the incoming argument list, of type List, as an argument to rev() The Java Language Specification Java SE 7 Edition


  7. Mrs., Ms., Dr., and Rev., you’ve accounted for virtually everyone. SQL Antipatterns


  8. Exercise 2.37 Explain why function rev computes the reverse of a list. Steps in Scala An Introduction to Object-Functional Programming
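
The quotes use rev in two senses: a document revision (CouchDB's _rev) and list reversal (Scala's rev, the JLS rev() example). For the second sense only, a minimal Java sketch using the standard library:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class RevDemo {
    public static void main(String[] args) {
        List<Integer> xs = new ArrayList<>(Arrays.asList(1, 2, 3, 4));
        Collections.reverse(xs);   // reverses the list in place
        System.out.println(xs);    // [4, 3, 2, 1]
    }
}
```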



1203. Matching text for the word sgt:


  1. In this case, you’ll start with one: the SgtPeppers class, as shown in the next listing. Spring in Action 4th Edition



1204. Matching text for the word minimized:


  1. When you are debugging a GUI program, it is a good idea to keep the console available on the screen and not minimized.) Core Java Volume I Fundamentals 9th Edition


  2. How can you cut the graph for distribution across the cluster so that the number of arcs bridging machines is minimized? Professional Hadoop Solutions


  3. Because the O(N logN) comparison-based sorting algorithms will generally look only at a small number of characters in each string comparison, once the average string length starts getting large, radix sort’s advantage is minimized or evaporates completely. Data Structures and Algorithm Analysis in Java 3rd Edition


  4. Using this filter combination will cause the region server to only load the row key of the first KeyValue (i.e., from the first column) found and return it to the client, resulting in minimized network traffic. HBase:The Definitive Guide


  5. If you have a development environment that provides Hadoop integration or are using Hadoop in standalone mode, the problems are minimized; just remember that Streaming does give you this ability to try the scripts outside of Hadoop; it may be useful some day. Hadoop Beginners Guide


  6. More realistically, several identical copies of the data traverse the Internet; but, by carefully choosing the points at which the streams are duplicated, the load on the network is minimized. Java Network Programming 4th Edition


  7. By choosing the most recently used thread, paging and cache overhead may be minimized. Java Performance Tuning 2nd Edition


  8. But when such a data structure is not available, an alternative is to identify a way to design the application such that the frequency at which multiple threads access the same data and the scope of the data that is accessed is minimized. Java Performance


  9. Useless signals, and the resulting "thundering herds" of context switches can be minimized by delegating operations with different wait conditions to different helper objects. Concurrent Programming in Java Design Principles and Pattern


  10. By breaking tasks into small, granular tasks, the effect of faults and “straggler” nodes can be minimized. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



1205. Matching text for the word splunk:


  1. Splunk Splunk (http://www.splunk.com) is a log aggregator and search engine that can help you gain operational insight into all of the machine-generated data in your environment. High Performance MySQL 3rd Edition


  2. Processing Applications Logs You can use a wide range of solutions for logging, starting from specialized software (for example, the Splunk App for HadoopOps) and up to a custom log-processing application. Professional Hadoop Solutions


  3. Splunk is one commercial product, another is Loggly, a cloud based logging tool that is based on Solr! More information is available at http://www.splunk.com/ and http://www.loggly.com. Solr 3 Enterprise Search Server


  4. On the consumer side, it outputs into Splunk, Graphite, or Esper-like real-time alerting. Apache Kafka


  5. Another product, Splunk, also requires a wide range of functionality to create real-world big data pipeline solutions. Spring Data Modern Data Access for Enterprise Java



1206. Matching text for the word tgt:


  1. A ticket is issued with a Ticket Granting Ticket (TGT), which is issued by the KDC. HBase Administration Cookbook


  2. For Hadoop users, this is done via the kinit command.) The KDC responds with something called a Ticket Granting Ticket (TGT), which is simply a ticket used for requesting tickets to communicate to other servers and services. Professional Hadoop Solutions


  3. This means that in the case of a tgt restart or even a node reboot your existing volumes on that node will be restored automatically with their original IQN. bk-admin-guide-cloud-latest


  4. The client authenticates itself to the Authentication Server and receives a timestamped Ticket-Granting Ticket (TGT) Hadoop:The Definitive Guide 3rd Edition


  5. The KDC is made up of two distinct services: the authentication server (AS), which is responsible for authenticating a client and providing a ticket granting ticket (TGT), and the ticket granting service (TGS), which, given a valid TGT, can grant. Hadoop Operations



1207. Matching text for the word omega:


  1. Indeed, the Omega service scheduler will try to place tasks to resist both independent and coordinated failures, which is an NP-hard chance-constrained optimization problem with tens of failure domains that nest and overlap. Omega - flexible, scalable schedulers for large compute clusters


  2. Although this might be the best of all possible cookbooks, there are two tiny problems with The Cookbook Omega. MongoDB in Action



1208. Matching text for the word mixins:


  1. This gives all our mixed-in traits the ability to delegate to super, which is a common theme when using traits as mixins. Scala in Depth


  2. Mixins The term mixin seems to have acquired numerous meanings over time, but the fundamental concept is that of mixing in capabilities from multiple classes in order to produce a resulting class that represents all the types of the mixins. Thinking in Java 4th Edition


  3. Another way of thinking about Scala mixins is as decorators. Scala in Action


  4. Using ProxyFactoryBean for Introductions You are not limited in using the ProxyFactoryBean class for just advising an object but also for introducing mixins to your objects. Pro Spring 3


  5. The order of mixins is significant.2 The precise rules are given in the following section, but, roughly speaking, traits further to the right take effect first. Programming in Scala 2nd Edition


  6. Traits As Mixins Scala provides a complete mixin solution, called traits. Programming Scala


  7. If you are familiar with Ruby, traits are similar to Ruby’s mixins. Beginning Scala


  8. Abstract classes can’t be used to define mixins for the same reason that they can’t be retrofitted onto existing classes: a class cannot have more than one parent, and there is no reasonable place in the class hierarchy to insert a mixin. Effective Java 2nd Edition



1209. Matching text for the word makefile:


  1. First, gmetad has one additional software dependency (RRDtool), and second, you must include the --with-gmetad option to the configure script, because only gmond is built by the default Makefile. Monitoring with Ganglia


  2. Any automation method will do; for example, a Makefile or a set of custom scripts. High Performance MySQL 3rd Edition


  3. The first file is called Makefile and will reference the umbrella make file. RabbitMQ in Action


  4. Here is a quick makefile that will run unit tests on all the code and then compress the code into a release directory to be blasted out to production web servers if all is OK: Testable JavaScript


  5. For example, many software projects contain text files such as README, LICENSE, and Makefile without any filename extensions. Tika in Action


  6. The .pro file is the Qt makefile or the project file. Storm Real-Time Processing Cookbook


  7. Compiling and Running Now we can compile and link our program using the makefile in Example 2-13 Hadoop:The Definitive Guide 3rd Edition


  8. For example, add the following options to Makefile.in, and then. SQL Antipatterns


  9. For each directory where you added files, edit Makefile.am in that directory. Understanding MySQL Internals



1210. Matching text for the word attr:


  1. For example, you can extract the POSIX permissions of a file and create another file with the same attributes as follows (this example uses the attr object from the previous examples): Pro Java 7 NIO.2


  2. Our getAttr method uses the getAttrs method to extract the first value. Spring Security 3.1


  3. For example, if Tika returned a field named creator and we don't have such a field in our index, then Solr would try to index it under a field named attr_creator, which is a dynamic field. Solr 3.1 Cookbook


  4. NamedNodeMap getAttributes() returns a node map that contains Attr nodes that describe all attributes of this node. Core Java Volume II Advanced Features 9th Edition


  5. Some people try to extend the EAV design by defining a separate attr_ SQL Antipatterns


  6. An Attr object represents an attribute of an Element node. Java in a Nutshell 5th Edition


  7. For example, if Tika returned a field named creator, and we don't have such a field in our index, then Solr would try to index it under a field named attr_creator which is a dynamic field. Apache Solr 4 Cookbook



1211. Matching text for the word plaintext (a short Java sketch follows the examples):


  1. However, some developers prefer a more minimalistic editing environment: plaintext editors or power editors. Gradle in Action


  2. Listing 13-4 provides a modification to the previous logs DDL statement for the data stored as JSON in plaintext files, where each JSON document is stored on a single line in the file. Professional Hadoop Solutions


  3. The bulk of the work of any cipher engine will be in the engineUpdate( ) method, which is responsible for actually providing the ciphertext or plaintext. Java Security 2nd Edition


  4. The history log includes job, task, and attempt events, all of which are stored in a plaintext file. Hadoop:The Definitive Guide 3rd Edition


  5. For decryption, the very last byte of the plaintext is a count of the padding characters to discard. Core Java Volume II Advanced Features 9th Edition


  6. You can protect against this kind of exposure if you don’t put the plaintext password into the SQL query. SQL Antipatterns


  7. When writing Java programs, you should use a text editor that saves files in plaintext format, not a word processor that supports fonts and formatting and saves files in a proprietary format. Java in a Nutshell 5th Edition


  8. Other features, such as storage of non-XML documents (for example, full plaintext [unstructured] documents) Cassandra The Definitive Guide


  9. The query code will be styled in bold, whereas the resulting output will be rendered in plaintext. The Definitive Guide to MongoDB


  10. And the payload here is a simple plaintext message, but could be anything in principle: MongoDB in Action
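
Quotes 3 and 5 talk about turning plaintext into ciphertext and back. A minimal JCA sketch; the message is made up, and the default "AES" transformation (ECB with PKCS5 padding) is used purely for brevity, not as a recommendation:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

public class PlaintextToCiphertext {
    public static void main(String[] args) throws Exception {
        // Generate a throwaway AES key (key management is out of scope here).
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        byte[] plaintext = "attack at dawn".getBytes(StandardCharsets.UTF_8);

        // Encrypt the plaintext into ciphertext ...
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal(plaintext);

        // ... and decrypt it back to the original plaintext.
        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] recovered = cipher.doFinal(ciphertext);
        System.out.println(new String(recovered, StandardCharsets.UTF_8));
    }
}
```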



1212. Matching text for the word bucketing:


  1. On top of the bag of key-value mappings, the Spanner implementation supports a bucketing abstraction called a directory, which is a set of contiguous keys that share a common prefix. Spanner - Google’s Globally-Distributed Database


  2. Now take a look at a special case of a reducer join called a “Bucketing” join. Professional Hadoop Solutions


  3. Alphabetic range bucketing Solr does not directly support alphabetic range bucketing (A-C, D-F, and so on) Solr 3 Enterprise Search Server


  4. We can see that during the copy process Puppet has backed up the old file, a process Puppet calls file bucketing. Pro Puppet


  5. Both of these arguments support the Flume functionality called output bucketing, which permits some macro substitutions. Hadoop in Practice


  6. Each file is copied by a single map, and distcp tries to give each map approximately the same amount of data by bucketing files into roughly equal allocations. Hadoop:The Definitive Guide 3rd Edition


  7. For example, in computing the average of a column, a random sample of data can provide a good approximation.) Bucketing divides data into a specified number of files based on the hash of the bucket column. Hadoop in Action


  8. Bucketing Table Data Storage Partitions offer a convenient way to segregate data and to optimize queries. Programming Hive



1213. Matching text for the word leftmost (a short Java sketch follows the examples):


  1. The mapper key class will be the key class of the leftmost data source, and the key classes of all data sources should be identical. Pro Hadoop


  2. I created a JSlider (the arguments are the leftmost value of the JSIider, the rightmost value, and the starting value, respectively, but there are other constructors as well) and dropped it into the JFrame. Thinking in Java 4th Edition


  3. If you index more than one column, the column order is very important, because MySQL can only search efficiently on a leftmost prefix of the index. High Performance MySQL 3rd Edition


  4. We’ve written this as a function type (the leftmost Function) to explicitly affirm the fact that you could pass this differentiating function to yet another function. Java 8 in Action Lambdas, Streams and Functional-style Programming


  5. The leftmost bit in an integer type is the sign bit. Programming in Scala 2nd Edition


  6. Note that the leftmost reducer has more load due to more data under the “ellipse” key. Hadoop in Action


  7. When using new with multidimensional arrays, you do not have to specify a size for all dimensions of the array, only the leftmost dimension or dimensions. Java in a Nutshell 5th Edition


  8. In the case of the leftmost button, all we need to do is to display the total calories in the specially designated label: Steps in Scala An Introduction to Object-Functional Programming


  9. Range matches An exact match on any set of leftmost keys (including none), followed by either a range or a sort using the next key to the right. MongoDB in Action


  10. When the crawl rate is low, Percolator clusters documents faster than MapReduce as expected; this scenario is illustrated by the leftmost pair of points which correspond to crawling 1 percent of documents per hour. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications
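
Quote 7 notes that, with new for a multidimensional array, only the leftmost dimension has to be specified; a tiny Java sketch of that rule:

```java
public class LeftmostDimension {
    public static void main(String[] args) {
        // Only the leftmost dimension must be given; the inner arrays
        // can be allocated later and may even have different lengths.
        int[][] triangle = new int[3][];
        for (int i = 0; i < triangle.length; i++) {
            triangle[i] = new int[i + 1];
        }
        System.out.println(triangle[2].length); // 3
    }
}
```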



1214. Matching text for the word california:


  1. In 1999, I packed everything I owned into my car for a cross-country trip to begin my new job as Staff Researcher at the University of California, Berkeley Computer Science Department. Monitoring with Ganglia


  2. Carla Schroer and her teams of compatibility testers in Cupertino, California, and Novosibirsk, Russia (with special thanks to Leonid Arbouzov and Alexei Kaigorodov) painstakingly wrote compatibility tests for each testable assertion in the First Edition. The Java Virtual Machine Specification Java SE 7 Edition


  3. For example, if your application is US-only and you want to divide your dataset into 20 shards, you probably shouldn’t shard by state, because California has such a huge population. High Performance MySQL 3rd Edition


  4. For example, if we want to find a random plumber in California, we can create an index on "profession", "state", and "random": MongoDB The Definitive Guide 2nd Edition


  5. To hijack a great analogy from Gregor Hohpe, we could call it the Coffee Bean model (Coffee Bean & Tea Leaf is a chain of coffee houses in California) RabbitMQ in Action


  6. Any action related to this Agreement will be governed by California law and controlling U.S. The Java Language Specification Java SE 7 Edition


  7. With the view created, we then execute a query against it, in this case, to simply count how many of the sightings took place in California. Hadoop Beginners Guide


  8. Ganglia was built by a group at the University of California, Berkeley, specifically to collect, aggregate, and plot a large number of metrics from large clusters of machines. Hadoop Operations


  9. This table contains just the name, salary, and address columns from the employee table records for employees in California. Programming Hive


  10. Consider the following example that returns the distinct departments based in California that are participating in the “Release1” project: Pro JPA2



1215. Matching text for the word snapshotting:


  1. So, the initial copy is basically an instantaneous operation, no matter how large a volume you’re snapshotting. High Performance MySQL 3rd Edition


  2. One is a method called snapshotting that takes the data as it exists at one moment in time and writes it to disk. Redis in Action


  3. Snapshotting makes queries slower, so only use snapshotted queries when necessary. MongoDB The Definitive Guide 2nd Edition


  4. It is important to realize that this snapshotting feature is built into Web Flow and is automatically at your disposal. Pro Spring MVC


  5. In addition to that, when requested the result in enriched by information about the recovery status of shards and snapshotting. Mastering ElasticSearch


  6. Alternatively (or in addition to snapshotting), Redis can run in append mode. The Little Redis Book


  7. As a result, we have written a simple database using write ahead logging and snapshotting similar to the design of Birrell et al. Chubby - The lock service for loosely-coupled distributed systems


  8. Snapshotting often will minimize the number of transactions that must be applied after the snapshot is read in. ZooKeeper Distributed Process Coordination


  9. Solution The default persistence model is snapshotting, which consists of saving the entire database to disk in the RDB format (basically a compressed database dump) Redis Cookbook


  10. To deal with the “dirty” buffer issue, we recommend using the sync command before snapshotting: OpenStack Operations Guide



1216. Matching text for the word jquery:


  1. Out of the many available libraries, pick JQuery (http://jquery.com/), an established, feature-rich, easy-to-use API for handling AJAX calls and manipulating the HTML DOM. Gradle in Action


  2. To implement drag-and-drop, use the jQuery-ui plug-in (http://jqueryui.com) for jQuery (http://jquery.com), a JavaScript framework that simplifies HTML, document traversing, event handling, and Ajax interactions for web development. Scala in Action


  3. For the samples in this chapter, jQuery (http://jquery.org) and jQuery UI (http://jqueryui.com) JavaScript libraries will be used to implement a rich user interface. Pro Spring 3


  4. Adding AJAX to the View with Spring JS and JQuery Finally, we need to physically invoke an asynchronous call from our browser. Pro Spring MVC


  5. Our header.jsp file has already included the JQuery library and the OpenID Selector library in the <head> of our page. Spring Security 3.1


  6. For more details and a list of available jQuery selectors, please visit http://api.jquery.com/category/selectors/ Selenium Testing Tools Cookbook


  7. You’ll need to download the jQuery library from http://jquery.org and place it in the previously mentioned directory for this to work. Spring Roo in Action



1217. Matching text for the word activatable:


  1. Developers creating activatable services must extend the Activatable class, implement a remote service interface, and provide their own implementing methods for that service interface. Java Network Programming and Distributed Computing


  2. Your remote objects don’t actually have to extend the Activatable class. Core Java Volume II Advanced Features 9th Edition



1218. Matching text for the word octal (a short Java sketch follows the examples):


  1. Naturally, this can be confusing, so we recommend against the use of octal constants. Core Java Volume I Fundamentals 9th Edition


  2. Two command-line tools that are helpful in this process are the command-line octal dump utility, od, and the strings utility, which displays human readable strings inside binary files. Cassandra High Performance Cookbook


  3. In a decimal or octal literal, the integer is denoted by all the digits in the literal before any type suffix. The Java Language Specification Java SE 7 Edition


  4. For example, the default in HDFS is the three digit octal 755, equivalent to -rwxr-xr-x. Hadoop in Practice


  5. Notice that the octal form of the delimiter characters can be used—001 for Control-A, for instance. Hadoop:The Definitive Guide 3rd Edition


  6. Text in single quotes is treated as a literal, whereas double-quoted text is interpolated, that is, it transforms the octal, or hexadecimal, values into bytes. HBase:The Definitive Guide


  7. The underscore can be applied to primitive data types in any supported base (binary, octal, hexadecimal, or decimal), and to both integer and floating-point literals. Java 7 New Features Cookbook


  8. Integer literals can also be expressed in hexadecimal or octal notation. Java in a Nutshell 5th Edition


  9. These permissions are granted by using an optional fourth octal number placed to the left of the standard set of three. Hadoop Operations


  10. Written using the octal code \001 when explicitly specified in CREATE TABLE statements. Programming Hive
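
Several quotes warn how a leading zero silently turns a literal into octal. A small Java sketch of the literal forms (the binary form needs Java 7 or later):

```java
public class OctalLiterals {
    public static void main(String[] args) {
        int decimal = 755;          // ordinary decimal literal
        int octal   = 0755;         // leading 0 means octal: 7*64 + 5*8 + 5 = 493
        int hex     = 0x1ED;        // 493 in hexadecimal
        int binary  = 0b111101101;  // 493 in binary (Java 7+)

        System.out.println(decimal + " " + octal + " " + hex + " " + binary); // 755 493 493 493
        System.out.println(Integer.toOctalString(493)); // "755"
    }
}
```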



1219. Matching text for the word nonrelational:


  1. For example, a web crawler is needed for getting data from the Web, and a data translation module is needed to transfer data from relational databases to a nonrelational Big Data platform. Hadoop Operations and Cluster Management Cookbook


  2. The data used in web and scientific computing is often nonrelational. Dremel - Interactive Analysis of WebScale Datasets


  3. It’s generally applied to a number of recent nonrelational databases such as Cassandra, Mongo, Neo4J, and Riak. NoSQL Distilled


  4. He has recently been exploring build automation, nonrelational data stores, and abstract ideas, like how to make software architecture look more like an ant colony. Gradle Beyond The Basics


  5. Nonrelational Database Systems, Not-Only SQL or NoSQL? Over the past four or five years, the pace of innovation to fill that exact problem space has gone from slow to insanely fast. HBase:The Definitive Guide


  6. If you have nonrelational data management needs, the best answer is. SQL Antipatterns


  7. Here we contextualize Cassandra against a variety of the more popular nonrelational databases, examining documentoriented databases, distributed hashtables, and graph databases, to better understand Cassandra’s offerings. Cassandra The Definitive Guide


  8. Apache HBase Apache HBase is a low-latency, distributed (nonrelational) database built on top of HDFS. Hadoop Operations


  9. The umbrella term NoSQL was coined in 2009 to lump together the many nonrelational databases gaining in popularity at the time. MongoDB in Action


  10. He has recently been exploring build automation, nonrelational data stores, and abstract ideas like how to make software architecture look more like an ant colony. Building and Testing with Gradle



1220. Matching text for the word hregion:


  1. As we did not configure the region server here, HBase will start a single slave daemon (HRegionServer) on the same JVM too. HBase Administration Cookbook


  2. HBase has a number of server processes, a single HBaseMaster that manages the HBase cluster and a set of HRegionServers, each of which is responsible for a set of MapFiles containing column regions. Pro Hadoop


  3. It also includes the server that hosts the region in the input split, which means that the map tasks will be scheduled to execute on the same nodes as the HRegionServer hosting the data. Hadoop in Practice


  4. It lists all the hosts on which you want to run HRegionServer daemons. HBase:The Definitive Guide


  5. At a very high level, the master server (HMaster) is responsible for keeping track of the region servers (HRegionServer) available and assigning regions to servers. ZooKeeper Distributed Process Coordination



1221. Matching text for the word decoders:


  1. Compression codecs Codecs (coder/decoders) are used to compress and decompress data using various compression algorithms. Apache Flume Distributed Log Collection for Hadoop


  2. Various types of abstract classes are provided for encoders and decoders, depending on the task at hand. Netty in Action MEAP V08


  3. The Priorities class defines built-in priorities for security, decoders/encoders, and more. Java EE 7 Essentials



1222. Matching text for the word gigabyte:


  1. If the ratio or the block size increases, a gigabyte of RAM will support more petabytes of physical storage, and vice versa. HDFS scalability the limits to growth


  2. HBase servers, particularly the version using memcached, are memory intensive and generally require at least a gigabyte of real memory per server; any paging will drastically affect performance. Pro Hadoop


  3. Nodes on the same rack can be interconnected with a 1 GBps (Gigabyte per second) Ethernet switch. Hadoop Operations and Cluster Management Cookbook


  4. Both have their sweet spots, but neither one alone is capable of solving a common business problem — fast access to large (megabyte or gigabyte size) data items. Professional Hadoop Solutions


  5. Unless the cumulative output from the map tasks is relatively small, less than a gigabyte, the default value should not be used. Hadoop RealWorld Solutions Cookbook


  6. A typical job will process anywhere from a gigabyte to many terabytes of data on hundreds or even thousands of machines in parallel, some executing the query while others aggregate the results. Interpreting the Data Parallel Analysis with Sawzall


  7. The data gathered included a photo taken every minute, which resulted in an overall data volume of one gigabyte per month. Hadoop:The Definitive Guide 3rd Edition


  8. Now assume your cells are, on average, 1 KB in size; in this case, the filter needs only Bloom filter of a few hundred kilobytes for a store file of one or more gigabyte. HBase:The Definitive Guide


  9. If the data is 1 gigabyte in size and on a single host, Hadoop will schedule the processing accordingly. Hadoop Beginners Guide


  10. In Example 5-16, we generate a one gigabyte file and copy it into the two separate filesystem namespaces. Hadoop Operations



1223. Matching text for the word cfg:


  1. Add the following settings to each node's zoo.cfg, so that every node knows about every other node in the ensemble: HBase Administration Cookbook


  2. It references a number of files with the extension .cfg, which contains specific monitoring configurations. Hadoop Operations and Cluster Management Cookbook


  3. For example, if you’re using grub, you can add the numa=off option to grub.cfg: MongoDB The Definitive Guide 2nd Edition


  4. One of the places dupfind looks for its configuration file in the current directory is in a file named dupfind.cfg. Testable JavaScript


  5. MCollective plugins should be placed in the directory specified by the libdir setting in the server.cfg configuration file. Pro Puppet


  6. The configuration file is conventionally called zoo.cfg and placed in the conf subdirectory (although you can also place it in /etc/zookeeper, or in the directory defined by the ZOOCFGDIR environment variable, if set) Hadoop:The Definitive Guide 3rd Edition


  7. There are some ZooKeeper client settings that are not read from zoo.cfg but must be set in hbase-site.xml. HBase:The Definitive Guide


  8. An assembled image is always uniquely identified by its hypervisor specific config file (in our case vm.cfg for Oracle VM or Xen) Oracle JRockit The Definitive Guide


  9. So, first we create z1/z1.cfg using the configuration file we discussed at the beginning of this section. ZooKeeper Distributed Process Coordination


  10. Please remember to preserve the directory structure you'll see in the example deployment, so for example, the /usr/share/solr directory should contain the solr.xml (and in addition zoo.cfg in case you want to use SolrCloud) file with the contents like so: Apache Solr 4 Cookbook



1224. Matching text for the word substrings (a short Java sketch follows the examples):


  1. Overall, the designers of Java decided that the efficiency of sharing outweighs the inefficiency of string editing by extracting substrings and concatenating. Core Java Volume I Fundamentals 9th Edition


  2. This required preprocessing the data before indexing to convert all the interesting substrings into standalone “words.” Here’s a sample URL before and after preprocessing: High Performance MySQL 3rd Edition


  3. Table 3.2 shows some methods that can be used to manipulate substrings and individual bits of STRINGs in Redis. Redis in Action


  4. The newTuple() method creates a new tuple with the required number of fields, which is just the number of Range classes, and the fields are populated using substrings of the line, which are determined by the Range objects. Hadoop:The Definitive Guide 3rd Edition


  5. Usually, you don’t want to match the entire input against a regular expression, but to find one or more matching substrings in the input. Core Java Volume II Advanced Features 9th Edition


  6. Given a string and a delimiter, the Splitter class will produce substrings broken out by the provided delimiter. Getting Started with Google Guava


  7. Comparing substrings is bound to be inefficient or inaccurate in SQL. SQL Antipatterns


  8. The replaceFirst( ) and replaceAll( ) methods search a string for the first substring or all substrings that match a given pattern and replace the string or strings with the specified replacement text, returning a new string that contains the replacements. Java in a Nutshell 5th Edition


  9. For example, Vector.toString( ) calls toString( ) on all its elements and combines the generated substrings with the comma character surrounded by opening and closing square brackets. Java Performance Tuning 2nd Edition


  10. In order to achieve this, you’d have to decompose your words into substrings and index those. Redis Cookbook
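
The quotes extract substrings either by position or by matching a pattern. A minimal Java sketch of both, using only java.lang.String and java.util.regex:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SubstringsDemo {
    public static void main(String[] args) {
        String s = "extracting substrings and concatenating";

        // Fixed-position substring: begin index inclusive, end index exclusive.
        System.out.println(s.substring(11, 21)); // "substrings"

        // Find all matching substrings with a regular expression.
        Matcher m = Pattern.compile("\\w*ing\\b").matcher(s);
        while (m.find()) {
            System.out.println(m.group()); // "extracting", then "concatenating"
        }
    }
}
```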



1225. Matching text for the word nondistributed:


  1. You’ll learn how a simple distributed recommender system is designed in a distributed environment, because it differs greatly from nondistributed implementations. Mahout in Action


  2. For example, consider a large data set partitioned into a collection of nondistributed databases, perhaps using a hash function. MapReduce a flexible data processing tool


  3. Since you are running each process type on one machine only, you need to consider two ports per server type—unless you run in a nondistributed setup. HBase:The Definitive Guide


  4. To lessen the burden, you should test in stages, from a nondistributed (i.e., local) mode to a single-node pseudo-distributed mode, and finally to a fully distributed mode. Hadoop in Action


  5. Cluster Configuration MapReduce and other tools can run in a nondistributed way for trying things out or troubleshooting a problem. Cassandra The Definitive Guide


  6. So when performance is bad, the application tends to slow significantly more than in nondistributed applications. Java Performance Tuning 2nd Edition



1226、单词 shakespeare 的匹配文本:


  1. For example, you could use the combined works of Shakespeare to create a profile for detecting his plays, those of his contemporaries, or modern works that mimic the Shakespearean style. Tika in Action


  2. In this recipe, we will use the Apache Mahout k-means implementation to cluster the words found in Shakespeare's tragedies. Hadoop RealWorld Solutions Cookbook


  3. That works for exploring a single user, but what if you want to calculate a statistic over all users? Given your user base, perhaps it would be interesting to know what percentage of twits are about Shakespeare. HBase in Action


  4. Here we have high-quality information about Shakespeare and some of his plays, together with details of one of the companies that has recently performed the plays, plus a theatrical venue, and some geospatial data. Graph Databases


  5. Those of you who’ve read Hamlet know that Shakespeare got some of it right: the important decision is whether you should be a slave to your invokers or take matters into your own hands. Spring Integration in Action



1227、单词 acled 的匹配文本:


  1. For the ACLED Nigeria dataset, a 512 MB memory is more than enough. Hadoop RealWorld Solutions Cookbook



1228、单词 gus 的匹配文本:


  1. She first sends Gus the key they’ll use to exchange the secret. Java Network Programming 4th Edition



1229、单词 classloading 的匹配文本:


  1. A (or one of that classloader's ancestors in the classloading hierarchy) Java Threads 3rd Edition


  2. The lifecycle layer relies on the module layer for classloading and provides a dynamic approach to bundle management, making it possible to update parts of an application without stopping it. Spring Dynamic Modules in Action


  3. Dynamic Contextual Objects Application component container contexts, such as classloading, namespace, and security, can be associated with an object instance via ContextService. Java EE 7 Essentials


  4. This means that if a large amount of classloading takes place, the permanent generation will fill up and OutOfMemoryErrors will be thrown. Oracle JRockit The Definitive Guide


  5. The simplest way to handle all this is to have a flag on which objects are conditionally passed to the monitor from the Object constructor and to have this flag in a simple class with no superclasses so that classloading does not impose extra calls to superclasses. Java Performance Tuning 2nd Edition


  6. For example, dynamic compilation allows many extensible components to be treated as if they are closed at classloading time, leading to optimizations and specializations that exploit particular run-time contexts more effectively than any programmer could. Concurrent Programming in Java Design Principles and Pattern
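
A minimal Java sketch of walking a class loader and its ancestors in the classloading hierarchy, as referenced above; the class name is arbitrary and the exact output depends on the JVM.

```java
public class ClassLoaderWalk {
    public static void main(String[] args) {
        // Start with the loader that defined this class and walk up its ancestors.
        ClassLoader loader = ClassLoaderWalk.class.getClassLoader();
        while (loader != null) {
            System.out.println(loader);
            loader = loader.getParent();   // null eventually means the bootstrap loader
        }
        System.out.println("<bootstrap>");
    }
}
```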



1230、单词 resync 的匹配文本:


  1. And if the inconsistency you discovered isn’t critical, you might be able to leave the replica online and resync only the affected data. High Performance MySQL 3rd Edition


  2. One method of addressing the slave resync issue is to reduce the total data volume that’ll be sent between the master and its slaves. Redis in Action


  3. With this, AMs can continue to progress with existing containers while the RM is down, and can resync with the RM when it comes back up. Apache Hadoop YARN Yet Another Resource Negotiator


  4. If you want to change a non-index-building member to an index-building one, you must remove it from the set, delete all of its data, re-add it to the set, and allow it to resync from scratch. MongoDB The Definitive Guide 2nd Edition


  5. To deal with this I will add two concepts to the protocol: a resync option and a cache field (a dictionary) ZeroMQ


  6. The following sections explain the three main methods you can use to resync a slave with its master. The Definitive Guide to MongoDB


  7. Because the only remedy for halted replication is a complete resync of the primary’s data, you’ll want to strive to avoid this state. MongoDB in Action



1231、单词 lda 的匹配文本:


  1. Finally, you’ll explore latent Dirichlet allocation (LDA)—an algorithm that closely resembles clustering but that achieves something far more interesting. Mahout in Action


  2. Some other example use-cases of LDA are identifying influential Twitter users for particular topics and Twahpic (http://twahpic.cloudapp. Hadoop MapReduce Cookbook



1232、单词 loc 的匹配文本:


  1. Figure 4-12 includes two aggregate graphs showing all metrics matching host regex of loc and metric regex of load. Monitoring with Ganglia


  2. Adds a "$diskLoc" field to the results that shows where on disk that particular result lives. MongoDB The Definitive Guide 2nd Edition


  3. The following screen shows what JDK 7 looks like (size is for LOC and color for Rules compliance): Sonar Code Quality Testing Essentials


  4. The columns loc and fatalities are conditionally checked for empty strings; and if found to be empty, are set to a default value. Hadoop RealWorld Solutions Cookbook


  5. The expression int(loc.lat) converts loc.lat, a float, to an integer, thereby truncating it to the degree and making it suitable as an index value. Interpreting the Data Parallel Analysis with Sawzall


  6. Buy a Feature contains 15,000 Scala lines of code (LoC) and represents about one manyear of effort. Beginning Scala


  7. After you add the data, you need to tell the MongoDB shell to create an index based on the location information that was specified in the loc key, as in this example: The Definitive Guide to MongoDB


  8. In addition to the expected city, state, and ZIP fields, you’ll see a fourth field, loc, that stores the coordinates for the geographic center of the given ZIP code. MongoDB in Action


  9. The index structure is simple, it contains the identifier (the id field), name of the company (the name field), and the geographical location of the company (the loc field) Apache Solr 4 Cookbook



1233、单词 wget 的匹配文本:


  1. Curl/wget utilities can be used for uploading data to Solr in your environment. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  2. But it would also be possible to use Google Chrome and for example a unix tool like wget to test for fallback to HTTPS. Netty in Action MEAP V08


  3. We can also use the following wget command to download the image file: Hadoop Operations and Cluster Management Cookbook


  4. It also assumes that you have a modern version of wget (http://www.gnu.org/s/wget/) installed. Jenkins Continuous Integration Cookbook


  5. For example, if my Jenkins server was running on http://myserver:8080/jenkins, I could start the gameoflife build job by invoking the following URL using a tool like wget or curl: Jenkins The Definitive Guide


  6. Then you obtained the library tarball by using wget and decompressed the file that you got from GitHub. RabbitMQ in Action


  7. Download a matching version of Thrift and compile it: $ wget http://apache.imghat.com/ Cassandra High Performance Cookbook


  8. Note In this section, you are using wget to download Django. Beginning CouchDB


  9. First we’ll use GNU Wget (http://mng.bz/DMf6), a popular tool for retrieving files using HTTP, to subscribe to the desired destination: ActiveMQ in Action


  10. Note that we wrapped the long line for the wget command: Programming Hive



1234、单词 orthogonal 的匹配文本:


  1. The division of inputs is orthogonal to the partitioning of the graph itself, and is typically based on file boundaries. Pregel  -  A System for Large-Scale Graph Processing


  2. Concurrency choice is orthogonal to language choice; we can program concurrency with STM from any of the JVM languages. Programming Concurrency on the JVM


  3. What is important in this context is to understand that maintaining redundant copies is orthogonal to the harder task of ensuring that the cluster consistently chooses the same partition for a particular document ID. CouchDB The Definitive Guide


  4. Thus, upgrading a cluster requires users to halt, validate, and restore their pipelines for orthogonal changes. Apache Hadoop YARN Yet Another Resource Negotiator


  5. Keeping interfaces minimal and orthogonal lets you achieve great reuse and composition of behavior inside your codebase. Java 8 in Action Lambdas, Streams and Functional-style Programming


  6. This ensures the existence of an A-orthogonal vector at each step of the process. Professional Hadoop Solutions


  7. The advice code was often described as being orthogonal to the original code, meaning that the purposes of the two blocks of code weren’t correlated in any direct way, even though the execution sequence was. Gradle Beyond The Basics


  8. Both of these are special cases of an (orthogonal) range query. Data Structures and Algorithm Analysis in Java 3rd Edition


  9. Lastly, it’s important to note that HA and federation are orthogonal features. Hadoop Operations


  10. However, a text file could contain JSON or XML “documents.” For Hive, the file format choice is orthogonal to the record format. Programming Hive



1235、单词 decentralized 的匹配文本:


  1. Centralized Versus Decentralized A first decision we have to make is whether to work with a central server or not. ZeroMQ


  2. Although Cassandra is a completely decentralized system we have learned that having some amount of coordination is essential to making the implementation of some distributed features tractable. Cassandra A Decentralized Structured Storage System


  3. While this decentralized scheduling model may not always lead to globally optimal scheduling, we have found that it performs surprisingly well in practice, allowing frameworks to meet goals such as data locality nearly perfectly. Mesos A Platform for Fine-Grained Resource Sharing in the Data Center


  4. It is itself part of the Open Participation Software for Java (OPS4J) community, whose open source model is more decentralized and open than that of some other communities. Spring Dynamic Modules in Action


  5. The publish-subscribe channel solution, on the other hand, is decentralized and open; the routing configuration is emerging as a sum of all the conditions defined on the individual filters, and new subscribers can be added without modifying any existing component. Spring Integration in Action


  6. Pastry: Scalable, decentralized object location, and routing for large-scale peer-to-peer systems. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  7. The second decision that we made is to not have a central “master” node, but instead let consumers coordinate among themselves in a decentralized fashion. Kafka a Distributed Messaging System for Log Processing


  8. It performs blazingly fast writes, can store hundreds of terabytes of data, and is decentralized and symmetrical so there’s no single point of failure. Cassandra The Definitive Guide


  9. These systems address concerns that do not arise for Bigtable, such as highly variable bandwidth, untrusted participants, or frequent reconfiguration; decentralized control and Byzantine fault tolerance are not Bigtable goals. Bigtable - A Distributed Storage System for Structured Data



1236、单词 parallelizing 的匹配文本:


  1. If you add in the needed synchronization, you’ll likely discover that thread contention robs you of all the performance that parallelism was supposed to give you! Parallelizing this computation requires a different approach: partition the input, sum the partitions, and combine the sums. Java 8 in Action Lambdas, Streams and Functional-style Programming


  2. But for many programs with CPU-intensive loops, parallelizing the loop. Java Threads 3rd Edition


  3. We are working on improving this by having multiple replicas take part in the bootstrap transfer thereby parallelizing the effort, similar to Bittorrent. Cassandra A Decentralized Structured Storage System


  4. Parallelizing the computation worked well last time, so you might as well throw more computers at it. HBase in Action


  5. The choice of splitting column is essential to parallelizing work efficiently. Hadoop:The Definitive Guide 3rd Edition


  6. On the upside, you could use multiple threads to read this data from distinct servers, therefore parallelizing read access. HBase:The Definitive Guide


  7. There are several variants to parallelizing mark and sweep beyond the scope of this chapter. Oracle JRockit The Definitive Guide


  8. In MapReduce, developers write jobs that consist primarily of a map function and a reduce function, and the framework handles the gory details of parallelizing the work, scheduling parts of the job on worker machines, monitoring for and recovering from failures, and so forth. Hadoop Operations


  9. So even on a single-CPU machine, parallelizing this calculation enables it to run faster. Java Performance Tuning 2nd Edition


  10. Second, we tailor the design for best aggregate request throughput, not peak server response time, since we can manage response times by parallelizing individual requests. Web search for a planet the google cluster architecture
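
The first excerpt describes the partition/sum/combine approach to parallelizing a computation; the sketch below shows one way to express it with Java parallel streams, which handle the partitioning and combining internally. The range and class name are arbitrary.

```java
import java.util.stream.LongStream;

public class ParallelSum {
    public static void main(String[] args) {
        long n = 10_000_000L;

        // Sequential sum for reference.
        long sequential = LongStream.rangeClosed(1, n).sum();

        // Parallel version: the stream framework partitions the range,
        // sums the partitions on worker threads, and combines the partial sums.
        long parallel = LongStream.rangeClosed(1, n).parallel().sum();

        System.out.println(sequential == parallel);   // true
    }
}
```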



1237、单词 smith 的匹配文本:


  1. Thanks to my buddy Al Smith for creating it and making it such a great place, and for being such an interesting and entertaining part of the Crested Butte experience. Thinking in Java 4th Edition


  2. That is, you won’t be able to find all people whose last name is Smith and who were born on a particular date. High Performance MySQL 3rd Edition


  3. After spending a number of years running his own web agency, managing the development team, and working for Smith Electric Vehicles on developing its web-based vehicle telematics platform, he currently serves as head developer for an ambitious new start-up: leading the development team and managing the software development processes. Jenkins Continuous Integration Cookbook


  4. For example, if you want to sort by last name first, and city second, use a key that looks like Smith^Baltimore. MapReduce Design Patterns


  5. Names such as Smith are very common while names such as Capriolo are very rare. Cassandra High Performance Cookbook


  6. Now let’s update this document with John Smith’s actual contact details. Beginning CouchDB


  7. And the response will be John Smith’s original JSON document: Elasticsearch The Definitive Guide


  8. There’s enough information in this message to let someone snooping packets use John Smith’s credit card number for nefarious purposes. Java Network Programming 4th Edition


  9. Michael O’Brien spent many long nights translating our chapterware into running code, and probably got more than he bargained for! Thanks to Doug Clarke and Shaun Smith for spreading the word that a JPA book was available, and to Dennis Leung for offering his full support of my writing this book. Pro JPA2


  10. To take an example, suppose that user smith decides to add her country of residence. MongoDB in Action



1238、单词 rereduce 的匹配文本:


  1. The rereduce is a boolean value that is used to indicate when the reduce function is being called by itself (i.e., in rereduce mode) Getting Started with CouchDB


  2. If you’d like to count rows without depending on the row value, you can switch on the rereduce parameter: CouchDB The Definitive Guide


  3. When this happens, the data will be processed by the reduce function before sending the result to the reduce function with the rereduce argument set to true. Beginning CouchDB



1239、单词 appender 的匹配文本:


  1. One way to avoid this problem is to use the Flume Log4J Appender in your Java application(s) Apache Flume Distributed Log Collection for Hadoop


  2. Provide Scanner and Appender interfaces – Specialized to structured data. Introduction to Tajo


  3. A scanner reads input data from HDFS or local file system, whereas an appender writes output data to either of them. Tajo A Distributed Data Warehouse System on Large Clusters


  4. The AppenderBase class defines a single abstract method as follows: Storm Blueprints Patterns for Distributed Real-time Computation


  5. Log4j also has a number of builtin appenders such as the SyslogAppender used in this recipe. Cassandra High Performance Cookbook


  6. Loggers output their log events to an appender which is responsible to handling the event in some meaningful way. Hadoop Operations


  7. The default set here says that messages below the INFO level should be discarded and messages should be output using the CONSOLE appender. ZooKeeper Distributed Process Coordination


  8. A special log appender valve may have to be written for GlassFish to seamlessly integrate with these solutions, the implementation details of which are beyond the scope of this book. Java Performance


  9. For example, here we set the default logger to the DEBUG level and send output to the console appender: Programming Hive
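
A minimal sketch of attaching an appender programmatically with the log4j 1.x API (assumed to be on the classpath); most of the excerpts configure appenders through properties files instead, and the pattern string here is just an example.

```java
import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class AppenderDemo {
    public static void main(String[] args) {
        // An appender decides where log events go; here, to the console.
        ConsoleAppender console = new ConsoleAppender(
                new PatternLayout("%d{ISO8601} %-5p %c - %m%n"));

        Logger root = Logger.getRootLogger();
        root.addAppender(console);
        root.setLevel(Level.INFO);      // events below INFO are discarded

        root.info("messages at INFO and above reach the console appender");
        root.debug("this DEBUG message is filtered out");
    }
}
```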



1240、单词 tla 的匹配文本:


  1. We refer to these thread local buffers as Thread Local Areas (TLA) Oracle JRockit The Definitive Guide



1241、单词 sakai 的匹配文本:


  1. He has also co-authored two books about Sakai (http://sakaiproject.org)—a highly successful open source learning management platform used by many millions of students around the world. Jenkins Continuous Integration Cookbook



1242、单词 lexicographically 的匹配文本:


  1. The net result is a single part file that is output from the reducer with a list of locations in lexicographically sorted order. Hadoop RealWorld Solutions Cookbook


  2. It orders numeric data numerically, Strings (chararray) lexicographically, and bytearray as binary data. Hadoop in Practice


  3. The first field, left, is a UTF-8-encoded string, for which Avro can compare the bytes lexicographically. Hadoop:The Definitive Guide 3rd Edition


  4. All rows are always sorted lexicographically by their row key. HBase:The Definitive Guide


  5. Note that the rows are sorted lexicographically because Streaming processes everything as text and doesn’t know other data types. Hadoop in Action


  6. You can also use the compareTo method to compare two paths lexicographically using an implementation of the Comparable interface. Java 7 New Features Cookbook


  7. What you can do, however, is sort as if the column were one of several different types (ASCII, Long integer, TimestampUUID, lexicographically, etc.) Cassandra The Definitive Guide


  8. Since the SSTables and the memtable are lexicographically sorted data structures, the merged view can be formed efficiently. Bigtable - A Distributed Storage System for Structured Data


  9. For our purposes we chose a very easy to implement heuristic which in practice gives good results: we sort lexicographically by the field order chosen for the partitioning. PowerDrill - Processing a Trillion Cells per Mouse Click


  10. The result will be a negative integer if the String lexicographically precedes the argument String. Java 7 Recipes A Problem-Solution Approach
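
A small Java sketch contrasting lexicographic ordering with the numeric ordering a reader might expect, which is why keys such as "row-10" sort before "row-9" unless they are padded; the sample keys are invented.

```java
import java.util.Arrays;

public class LexicographicSort {
    public static void main(String[] args) {
        String[] keys = {"row-9", "row-10", "row-2"};

        // String.compareTo orders lexicographically, character by character,
        // so "row-10" sorts before "row-2" and "row-9".
        Arrays.sort(keys);
        System.out.println(Arrays.toString(keys));   // [row-10, row-2, row-9]

        // Zero-padding the numeric part restores the intended order.
        String[] padded = {"row-09", "row-10", "row-02"};
        Arrays.sort(padded);
        System.out.println(Arrays.toString(padded)); // [row-02, row-09, row-10]
    }
}
```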



1243、单词 stdin 的匹配文本:


  1. If you specify neither of these, stdin will be used. Apache Flume Distributed Log Collection for Hadoop


  2. You can simply run it on any file or type in your input through STDIN to load it in Apache Solr. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  3. One practical example is in Unix or Linux, where many programs operate by reading data from standard input (stdin in Unix and C, System.in in Java), operating on it, and then writing their results to standard output (stdout in Unix and C, System.out in Java) Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Both mappers and reducers receive their input on stdin and emit output (key/value pairs) to stdout. Professional Hadoop Solutions


  5. Although put is slightly more general, it is able to copy multiple files into HDFS, and also can read input from stdin. Hadoop RealWorld Solutions Cookbook


  6. The program iterates over lines from standard input by executing a block for each line from STDIN (a global constant of type IO) Hadoop:The Definitive Guide 3rd Edition


  7. Inputs come in through STDIN and outputs go to STDOUT. Hadoop in Action


  8. Alternatively, you can pass images to the client through stdin. admin-user-guide-trunk


  9. Pig instantiates the executable and keeps feeding data to it via stdin. Programming Pig


  10. The same thing is done when using the STDIN redirection such as shown in the example. OpenStack Operations Guide
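
A minimal sketch of a Streaming-style mapper that reads records from stdin and emits tab-separated key/value pairs on stdout, as the Hadoop excerpts describe; the word-count logic is only an example.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class StdinMapper {
    public static void main(String[] args) throws IOException {
        // Read lines from standard input (System.in), as a Streaming mapper would.
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            for (String word : line.trim().split("\\s+")) {
                if (!word.isEmpty()) {
                    // Emit tab-separated key/value pairs on standard output.
                    System.out.println(word + "\t1");
                }
            }
        }
    }
}
```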



1244、单词 arrayref 的匹配文本:


  1. Description The arrayref must be of type reference and must refer to an array whose components are of type reference. The Java Virtual Machine Specification Java SE 7 Edition



1245、单词 mpstat 的匹配文本:


  1. In addition to these tools, your operating system might provide others, such as mpstat or sar. High Performance MySQL 3rd Edition


  2. Finally, mpstat is useful because it gives statistics broken down by CPU, and not aggregated across all the CPUs, as other commands will do. Hadoop in Practice


  3. Solaris and Linux also offer a tabular view of CPU utilization for each virtual processor using the command line tool mpstat. Java Performance



1246、单词 geolocation 的匹配文本:


  1. The weblog data is parsed, summarized, and combined with the IP address geolocation data. Hadoop RealWorld Solutions Cookbook


  2. Use th compre You’ll geolocate the IP addresses from the logs table using the free geolocation database from MaxMind. Hadoop in Practice


  3. Geolocation: adding geolocation to your documents, geolocation-aware search, sorting by distance, geo-shapes, and optimizing geo-queries. Elasticsearch The Definitive Guide


  4. Other missing types include XML, image, and geolocation (the lat-long variety) Hadoop in Action


  5. Both hotels and points of interest need to maintain geolocation data so that they can be found on maps for mashups, and to calculate distances. Cassandra The Definitive Guide


  6. Such data might include zipcode tables, IP geolocation tables, parts catalogs, and so on. The Definitive Guide to MongoDB


  7. A popular operation is determining the geolocation of web traffic based on the IP address. Programming Hive


  8. In the following example, we are going to disable Geolocation from our tests. Selenium 2 Testing Tools Beginner''s Guide



1247、单词 closeable 的匹配文本:


  1. It is a subinterface of AutoCloseable, also with a single close method. Core Java Volume I Fundamentals 9th Edition


  2. Therefore, you can use the try-with-resources statement with any Closeable. Core Java Volume II Advanced Features 9th Edition


  3. The Closer class, which gives us a very clean way of ensuring Closeable instances are properly closed. Getting Started with Google Guava


  4. Any objects created with the try-with-resources block must implement the AutoCloseable interface. Java 7 New Features Cookbook


  5. Java 5.0 adds the Closeable and Flushable interfaces to identify types that have close( ) and flush( ) methods. Java in a Nutshell 5th Edition


  6. Java automatically invokes close() on any AutoCloseable objects declared inside the argument list of the try block. Java Network Programming 4th Edition


  7. Also, third-party implementers can create resources that will work with the try-with-resources statements by implementing the AutoCloseable interface. Java 7 Recipes A Problem-Solution Approach


  8. Use the try-with-resources statement with any object that implements AutoCloseable. Java SE 8 for the Really Impatient
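
A minimal try-with-resources sketch using a Closeable/AutoCloseable resource, as the excerpts describe; data.txt is a placeholder file name.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class TryWithResources {
    public static void main(String[] args) throws IOException {
        // Any object implementing AutoCloseable (Closeable extends it) can be
        // declared here; close() is called automatically, even on exceptions.
        try (BufferedReader reader = new BufferedReader(new FileReader("data.txt"))) {
            System.out.println(reader.readLine());
        }
    }
}
```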



1248、单词 ziplist 的匹配文本:


  1. When using short LISTs, HASHes, and ZSETs, Redis can optionally store them using a more compact storage method known as a ziplist. Redis in Action



1249、单词 notifier 的匹配文本:


  1. But you should complement this with one of the more active strategies, such as instant messaging or a desktop notifier. Jenkins The Definitive Guide


  2. Defined in rpc_notifier for rpc way, can be comma separated values. bk-admin-guide-cloud-latest


  3. This listener uses the notifier when a job execution fails. Spring Batch in Action


  4. Why can the previous snippet be considered tightly coupled? The reason is that the notification technology is concretely defined using the JmsNotifier property. Spring Dynamic Modules in Action


  5. NewsNotifierBolt Tells the web application to update the user interface immediately. Getting Started with Storm


  6. In parallel, the integration team replaces the basic notifier with a bus so they can connect multiple sources of notifications. Spring Integration in Action



1250、单词 mongofiles 的匹配文本:


  1. Getting Started with GridFS: mongofiles The easiest way to try out GridFS is by using the mongofiles utility. MongoDB The Definitive Guide 2nd Edition


  2. Rather than describe all the options you can use with mongofiles, let’s jump right in and start playing with some of the tool’s features. The Definitive Guide to MongoDB


  3. The GridFS specification is implemented by all of the official drivers and by MongoDB’s mongofiles tool, ensuring consistent access across platforms. MongoDB in Action



1251、单词 lexically 的匹配文本:


  1. This can be used to lexically scope certain features of an API. Scala in Depth


  2. Creating a Custom Mapper and Reducer As you’ve seen, your first Hadoop job, in MapReduceIntro, produced sorted output, but the sorting was not suitable, as it sorted lexically rather than numerically, and the keys for the job were numbers. Pro Hadoop


  3. It is the responsibility of a Java compiler to ensure that the method identified via the method_index is indeed the closest lexically enclosing method of the class that contains this EnclosingMethod attribute. The Java Virtual Machine Specification Java SE 7 Edition


  4. In this case you’re importing the Random class defined in the scala.util package in the Scala code block, and it’s lexically scoped inside the block and won’t be available outside it. Scala in Action


  5. A class O is the zeroth lexically enclosing class of itself. The Java Language Specification Java SE 7 Edition


  6. Java is a lexically scoped language, which means that its concept of scope has to do with the way the source code is written. Java in a Nutshell 5th Edition


  7. Data is sorted based on the types of the indicated fields: numeric values are sorted numerically, chararray fields are sorted lexically, and bytearray fields are sorted lexically, using byte values rather than character values. Programming Pig


  8. A system developed at Princeton University’s Cognitive Science Laboratory, driven by Psychology Professor George Miller, illustrates the net of synonyms. WordNet represents word forms that are interchangeable, both lexically and semantically. Lucene in Action 2nd Edition



1252、单词 popup 的匹配文本:


  1. If we go back a page, we will get a nice popup asking if we want to resubmit the form. Pro Spring MVC


  2. For example, if the user clicks the Send button, you may wish to display a popup or perhaps log the action in a file. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. There is no name or title provided for Chat Page Popup // We will iterate through all the open Windows and check the contents to find out if it's Chat Window. Selenium Testing Tools Cookbook


  4. The setMillisToPopup is your estimation of the time the dialog box needs to pop up. Core Java Volume II Advanced Features 9th Edition


  5. The following screenshot shows an example of the window type Type.POPUP: Java 7 New Features Cookbook


  6. Click OK (on each of the popup boxes that were opened before) to accept the changes. Java 7 Recipes A Problem-Solution Approach


  7. Verify the text in the popup by right-clicking and selecting VerifyText id=popup text within the popup window. Selenium 2 Testing Tools Beginner''s Guide



1253、单词 apr 的匹配文本:


  1. Still, there are some undesired words, like 7-apr-1987, said, he, and so on. Mahout in Action


  2. So rather than reinvent a new type of modular interface, why not just reuse a tried and true interface? Of course, the fact that gmond is built on top of the Apache Portability Runtime (APR) libraries made the Apache way of implementing a modular interface an obvious fit. Monitoring with Ganglia


  3. INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library. Jenkins Continuous Integration Cookbook



1254、单词 itds 的匹配文本:


  1. These ITDs are the magic behind Spring Roo and store all the logic generated and maintained by Roo. Pro Spring 3


  2. Roo automatically creates the service and repository beans, and their respective ITDs: Spring Roo in Action


  3. There is, for instance, an @RooJavaBean annotation that triggers the generation of an AspectJ aspect declaration that provides ITDs that introduce getters and setters in your Java class. Spring Data Modern Data Access for Enterprise Java



1255、单词 subscribes 的匹配文本:


  1. All you need to do is write a small app that connects to RabbitMQ and subscribes to the authorization requests your auth module is already publishing. RabbitMQ in Action


  2. I’ve added a service definition that subscribes to the ntp package and its configuration file. Managing Infrastructure with Puppet


  3. A subscriber subscribes with a publisher to be notified of any published events. Programming in Scala 2nd Edition


  4. The following command is used for starting the console-based consumer that shows output at command line as soon as it subscribes to the topic created in Kafka broker: Apache Kafka


  5. Having a class that subscribes for DeadEvent instances can be very helpful when trying to ensure that all events have registered subscribers. Getting Started with Google Guava


  6. The consumer registry includes the consumer group to which a consumer belongs and the set of topics that it subscribes to. Kafka a Distributed Messaging System for Log Processing


  7. Then, similar to the Java examples, it creates a connection, adds a listener, starts a connection, and finally subscribes to the desired destinations. ActiveMQ in Action


  8. The method subscribes or unsubscribes the sender of the message depending on the start of the subject line (This is akin to a mailing list, where sending a message with the subject of “subscribe” adds the sender to the mailing list.) Java 7 Recipes A Problem-Solution Approach


  9. The optimistic locking model subscribes to the philosophy that there is a good chance that the transaction in which changes are made to an entity will be the only one that actually changes the entity during that interval. Pro JPA2


  10. In the request-reply pattern, the client subscribes to a destination expecting a one-time response at that destination. Spring in Action 4th Edition
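
A small sketch of a subscriber in the Guava EventBus style mentioned above (Guava is assumed to be on the classpath); the listener class and event text are invented.

```java
import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;

public class SubscriberDemo {
    // A subscriber declares which events it wants with @Subscribe.
    static class AuditListener {
        @Subscribe
        public void onMessage(String message) {
            System.out.println("received: " + message);
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        bus.register(new AuditListener());   // the listener subscribes to the bus
        bus.post("user smith logged in");    // delivered to every matching subscriber
    }
}
```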



1256、单词 emulated 的匹配文本:


  1. These emulated decisions are the end product of the classification system. Mahout in Action


  2. The average computer must be capable of running Java code without new chips, so a "virtual" machine is emulated. Java Network Programming and Distributed Computing


  3. We have just emulated one user and essentially repeated the process five times. Performance Testing With JMeter 2.9


  4. The first prototype implementation of the Java Virtual Machine, done at Sun Microsystems, Inc., emulated the Java Virtual Machine instruction set in software hosted by a handheld device that resembled a contemporary Personal Digital Assistant (PDA) The Java Virtual Machine Specification Java SE 7 Edition


  5. With the help of a test automation framework, you remote-controlled the browser and emulated user interaction. Gradle in Action


  6. This functionality is also emulated with condition variables using the. Java Threads 3rd Edition


  7. A general set can be emulated by using a MapWritable (or a SortedMapWritable for a sorted set) with NullWritable values. Hadoop:The Definitive Guide 3rd Edition


  8. Multiple secondary indexes can be emulated by using multiple column families—although that is not the recommended way of designing a schema. HBase:The Definitive Guide


  9. In order to reconstruct an expression, such as add, the contents of the execution stack must be emulated at any given point in the program. Oracle JRockit The Definitive Guide


  10. This behavior of the Timestamp class was a mistake and should not be emulated. Effective Java 2nd Edition



1257、单词 queens 的匹配文本:


  1. An imperative solution to this problem would place queens one by one, moving them around on the board. Programming in Scala 2nd Edition


  2. You want to store that data closer to the Midtown data than, say, data about Brooklyn or Queens. HBase in Action


  3. Two queens are said to attack each other if they are on the same row, column, or (not necessarily main) diagonal. Data Structures and Algorithm Analysis in Java 3rd Edition


  4. Imagine if you had a deck of cards and you were asked to remove the odd-numbered cards and then remove all the queens. Functional Programming in Scala


  5. For example, here is a class that solves the classic N-Queens problem, searching for the placement of N queens that do not attack each other on a chessboard of size NxN. Concurrent Programming in Java Design Principles and Pattern
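
A compact Java sketch of the N-Queens problem referenced above, counting placements in which no two queens attack each other; the backtracking approach here is just one common formulation.

```java
public class NQueens {
    // Count placements of n queens so that no two attack each other
    // (same column or diagonal; rows are distinct by construction).
    static int solve(int[] cols, int row) {
        int n = cols.length;
        if (row == n) return 1;
        int count = 0;
        for (int c = 0; c < n; c++) {
            boolean safe = true;
            for (int r = 0; r < row; r++) {
                if (cols[r] == c || Math.abs(cols[r] - c) == row - r) {
                    safe = false;
                    break;
                }
            }
            if (safe) {
                cols[row] = c;
                count += solve(cols, row + 1);
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(solve(new int[8], 0));   // 92 solutions for 8 queens
    }
}
```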



1258、单词 matchers 的匹配文本:


  1. To effectively work with Specs, you need to get comfortable with specifications and the available matchers. Scala in Action


  2. At this point, we have a union of two MethodMatchers: one that matches all methods starting with get and one that matches all methods starting with set. Pro Spring 3


  3. Jasmine matchers (toEqual in the previous example) are equivalent to YUI Test assertions. Testable JavaScript


  4. To enable support for using these matchers, you simply need to add the attribute request-matcher to the <http> element specifying the type of matcher your patterns will use within the values ant, regex, and ciRegex. Pro Spring Security


  5. For running unit tests we use junit, as well as a couple of helper libraries: hamcrest-all provides useful matchers for writing test assertions, and mrunit is used for writing MapReduce tests. Hadoop:The Definitive Guide 3rd Edition


  6. This test framework builds on top of Hamcrest to offer custom matchers that can be used with assertThat. Spring Integration in Action


  7. Specs provides a lot of different matchers that allow you to test Strings by testing them against a regular expression: Beginning Scala


  8. Boundary matchers These are special symbols that should be used when it matters where the string that will be matched is located. Steps in Scala An Introduction to Object-Functional Programming


  9. You also need to know the exact value that you want to remove—you can’t apply regular expressions or other matchers and you can’t remove by index. Redis Cookbook


  10. The first call to antMatchers() specifies that requests whose path is /spitters/me should be authenticated. Spring in Action 4th Edition
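
A minimal sketch of composing Hamcrest matchers with assertThat, assuming a Hamcrest jar (such as the hamcrest-all library mentioned above) is on the classpath; the string under test is arbitrary.

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.allOf;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.startsWith;

public class MatcherDemo {
    public static void main(String[] args) {
        String name = "getUserById";

        // Matchers compose: this assertion passes only if every matcher matches.
        assertThat(name, allOf(startsWith("get"), containsString("User")));

        System.out.println("all matchers matched");
    }
}
```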



1259、单词 gif 的匹配文本:


  1. For example, suppose the user should choose a GIF image file. Core Java Volume I Fundamentals 9th Edition


  2. For example, any sort of dynamic graphing is nearly impossible to perform with consistency because a Graphics Interchange Format (GIF) file must be created and moved from the server to the client for each version of the graph. Thinking in Java 4th Edition


  3. Other formats that are supported include extracting metadata from images such as JPG, GIF, and PNG, as well as from various audio formats such as MP3, MIDI, and Wave audio. Solr 3 Enterprise Search Server


  4. The image is in GIF format and is less than 4KB in size. Beginning CouchDB


  5. The objects that are returned by standard content types such as text/plain and image/gif require classes in the com.sun hierarchy for processing. Core Java Volume II Advanced Features 9th Edition


  6. It could well be pointing to a GIF or JPEG image, an MP3 sound file, or something else entirely. Java Network Programming 4th Edition


  7. Java can currently load .jpg, .gif, and .png image formats. Java 7 Recipes A Problem-Solution Approach


  8. The accept attribute is set to limit file types to JPEG, PNG, and GIF images. Spring in Action 4th Edition



1260、单词 simplification 的匹配文本:


  1. A simplification that results from this decision is that, in the first pass, using Text is acceptable for the key and value, as the records may be distinguished lexically. Pro Hadoop


  2. MapReduce can be considered a simplification and distillation of some of these models based on our experience with large real-world computations. MapReduce - Simplied Data Processing on Large Clusters


  3. Over the past few years, new Domain Specific Languages (DSLs) have been developed for the simplification of Hadoop MapReduce programming, and this is a growth area of Hadoop — especially in the area of graph processing. Professional Hadoop Solutions


  4. But the rewards—both in terms of the simplification of program logic and. Java Threads 3rd Edition


  5. The sorter takes the barrels, which are sorted by docID (this is a simplification, see Section 4.2.5 in the full version), and resorts them by wordID to generate the inverted index. The anatomy of a large-scale hypertextual Web search engine


  6. So, for simplification, small methods are frequently inlined, replacing the call with the code of the called function. Oracle JRockit The Definitive Guide


  7. Chubby used few of the features of Berkeley DB, and so this rewrite allowed significant simplification of the system as a whole; for example, while we needed atomic operations, we did not need general transactions. Chubby - The lock service for loosely-coupled distributed systems


  8. This is not an inherent limitation of the technique, but a simplification that makes our code easier to follow. Hadoop in Action


  9. That's all there is to it! As you've seen, each MapReduce program sits atop a significant amount of machinery provided by Hadoop and the sketch provided is in many ways a simplification. Hadoop Beginners Guide


  10. The JPA specification is a simplification over the EJB 2.1 persistent entity programming model. Java Performance



1261、单词 lcov 的匹配文本:


  1. When all the tests are finished, the global _yuitest_coverage variable will be available to be persisted, and will be converted to LCOV format and optionally HTML-ized. Testable JavaScript



1262、单词 jim 的匹配文本:


  1. Amusingly, the company that did was led by the same Jim Clark who started Netscape—a company that did much to make Java successful.) Core Java Volume I Fundamentals 9th Edition


  2. Jim Gray introduced a contest called the TeraByteSort, which was a benchmark to sort one terabyte of data and write the results to disk. Pro Hadoop


  3. Thank you to my father, Jim, who has always worked his socks off to make sure that I had everything I could ever need or want. Beginning CouchDB


  4. The startup Powerset decided to dedicate Jim Kellerman and Michael Stack to work on this Bigtable analog as a way of contributing back to the open source community on which it relied. HBase in Action


  5. Figure 7.13 shows a network of people with Jim, one of the users, highlighted. Hadoop in Practice


  6. In February 2007, Mike Cafarella made a code drop of a mostly working system that Jim Kellerman then carried forward. Hadoop:The Definitive Guide 3rd Edition


  7. This Nutch distributed filesystem would later grow up to be HDFS.) Jim Kellerman of Powerset took Mike’s dump and started filling in the gaps, adding tests and getting it into shape so that it could be committed as part of Hadoop. HBase:The Definitive Guide


  8. Marc Loy and Jim Elliott reviewed some of the most bleeding edge material in the book. Java Network Programming 4th Edition


  9. On the other hand, Jim Elliott wanted me to point out that custom tags are so much better than the alternative of mixing presentation and logic that eliminating custom tags may not be worth the performance gain.) Java Performance Tuning 2nd Edition


  10. A big thanks to Jim Weaver for recommending me for this project and being so encouraging. Java 7 Recipes A Problem-Solution Approach



1263、单词 castor 的匹配文本:


  1. Classes for abstraction of XML marshaling and unmarshaling and support for popular tools like Castor, JAXB, XMLBeans, XStream, and so on, are packed into this module. Pro Spring 3


  2. The OXM module provides an abstraction layer that offers simplified and consistent support for popular Object/XML mapping tools such as Castor, the Java Architecture for XML Binding (JAXB), JiBX, XMLBeans, and XStream. Spring in Practice


  3. This marshaller uses Castor through the Spring OXM class CastorMarshaller, which implements both the Marshaller and Unmarshaller interfaces. Spring Batch in Action



1264、单词 javap 的匹配文本:


  1. You can use javap to generate additional examples of compiled methods. The Java Virtual Machine Specification Java SE 7 Edition


  2. To see what really happens, you can decompile the above code using the javap tool that comes as part of the JDK. Thinking in Java 4th Edition


  3. You could use the Java class file disassembler javap to disassemble Scala byte code (chapter 11 looks into this in more detail) as you could for Java classes. Scala in Action


  4. You can find out what translation your current Scala compiler uses by examining the “.class” files with tools like javap. Programming in Scala 2nd Edition


  5. The scalap, javap, and jad Command-Line Tools: When you are learning Scala and you want to understand how Scala constructs are mapped to the runtime, there are several decompilers that are very useful. Programming Scala


  6. NOTE: To see these methods with your own eyes, compile the Person class and then look at the bytecode with javap: Scala for the Impatient


  7. Bytecode for a class can be dumped using the javap command with the –c command-line switch. Oracle JRockit The Definitive Guide


  8. First, run the javap program to find out how the compiler translates the fun method. Core Java Volume II Advanced Features 9th Edition


  9. You may want to use the javap disassembler to disassemble some of the class files for nested types so you can see what tricks the compiler has used to make the nested types work. Java in a Nutshell 5th Edition


  10. A compiler is not obliged to use the second type of switch bytecode at all, but generally it does if it can easily be used.) You can determine which switch a particular statement has been compiled into using javap, the disassembler available with the JDK. Java Performance Tuning 2nd Edition
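
javap itself is a command-line tool, so the sketch below is just a small class whose compiled form can be inspected; the comments show typical javac/javap invocations, and whether the switch becomes a tableswitch or lookupswitch depends on the compiler.

```java
// Compile and then disassemble with the JDK tool javap, for example:
//   javac SwitchDemo.java
//   javap -c SwitchDemo
// The -c flag prints the bytecode of each method, which shows, among other
// things, which switch bytecode the compiler chose for the method below.
public class SwitchDemo {
    static String label(int code) {
        switch (code) {
            case 0:  return "zero";
            case 1:  return "one";
            default: return "other";
        }
    }

    public static void main(String[] args) {
        System.out.println(label(1));
    }
}
```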



1265、单词 blackhole 的匹配文本:


  1. However, the server writes queries against Blackhole tables to its logs, so they can be replicated or simply kept in the log. High Performance MySQL 3rd Edition


  2. If you want to avoid a warning when restarting apache2, create a blackhole directory in the dashboard directory like so: bk-compute-adminguide-grizzly


  3. On Ubuntu: If you want to avoid a warning when you restart apache2, create a blackhole directory in the dashboard directory, as follows: bk-admin-guide-cloud-latest


  4. These real-time blackhole lists need to respond to queries extremely quickly, and process a very high load. Java Network Programming 4th Edition


  5. Those who need to integrate their storage engine into other versions are advised to search (case-insensitive) for the string “blackhole” in the source tree of the given version, and follow the patterns of the blackhole storage engine. Understanding MySQL Internals



1266、单词 mp3 的匹配文本:


  1. The DataGrid is followed by a VBox containing an Image to show the front of the album along with song information, and a MediaPlayback control that will play MP3 files. Thinking in Java 4th Edition


  2. For example, suppose that you are running an online store and someone searches for mp3 MongoDB The Definitive Guide 2nd Edition


  3. That allows you to double-click on an MP3 attachment in your. Java Threads 3rd Edition


  4. Aside from his work at Core, Joe has worked on several successful web development projects, including Learn French by Podcast, a language-learning web site that offers its members free highquality MP3 podcasts and allows them to purchase accompanying professional PDF guides for a small fee. Beginning CouchDB


  5. For you audiophiles, it’s a bit like an MP3 of your source recording. HBase in Action


  6. A table without a primary key is like organizing your MP3 collection with no song titles. SQL Antipatterns


  7. This then proceeds down the music directory and to the mp3 file. Java 7 New Features Cookbook


  8. It could well be pointing to a GIF or JPEG image, an MP3 sound file, or something else entirely. Java Network Programming 4th Edition


  9. Readers are very welcome to use all these tools to implement a simple MP3 player. Steps in Scala An Introduction to Object-Functional Programming


  10. Solution Create an MP3 player by utilizing the following classes: Java 7 Recipes A Problem-Solution Approach



1267、单词 recompiling 的匹配文本:


  1. Editing and recompiling a Java file weekly is more tedious. Core Java Volume I Fundamentals 9th Edition


  2. It is possible to increase this value by recompiling Ganglia; alternatively, you could also split the value into multiple metrics. Monitoring with Ganglia


  3. Even if tune( ) is in a separate file and new methods are added to the interface of Instrument, tune( ) will still work correctly, even without recompiling it. Thinking in Java 4th Edition


  4. The MapReduce community has made sure that applications written against 1.x can run on top of YARN in a fully binary compatible manner (mapred APIs) or just by recompiling (source compatibility for mapreduce APIs) Apache Hadoop YARN Yet Another Resource Negotiator


  5. Recompiling the application produces errors because it depends on the Resizable interface. Java 8 in Action Lambdas, Streams and Functional-style Programming


  6. As a result, in the most simplistic case, existing MapReduce applications will work as is, but will require recompiling. Professional Hadoop Solutions


  7. Then, recompiling Super and executing this new binary with the original binaries for Test and Hyper produces the output: The Java Language Specification Java SE 7 Edition


  8. Possibly this involves going over a large number of recent source code check-ins, recompiling the application at different changes and rerunning the affected application until the guilty check-in is spotted. Oracle JRockit The Definitive Guide


  9. You can add or reorder constants in an enum type without recompiling its clients because the fields that export the constants provide a layer of insulation between an enum type and its clients: the constant values are not compiled into the clients as they are in the int enum pattern. Effective Java 2nd Edition


  10. Although you can specify this in code, it’s usually advisable to set this up in a configuration file so log locations can be changed without recompiling. Java Network Programming 4th Edition



1268、单词 resultset 的匹配文本:


  1. In later sections of this chapter, you will see examples to retrieve the data via JDBC from the database and directly map the resultset into Java objects (i.e., POJOs) Pro Spring 3


  2. I rounded up this chapter and this part of the book by covering some of CouchDB’s builtin specialized views for retrieving all documents and some of the parameters that can be used to manipulate the resultset returned by these views. Beginning CouchDB


  3. This is the query that will determine which documents are included in our resultset. Elasticsearch The Definitive Guide
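
A minimal JDBC sketch of iterating a ResultSet, in the spirit of the first excerpt; the in-memory H2 database, table, and rows are assumptions made purely so the example is self-contained.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ResultSetDemo {
    public static void main(String[] args) throws SQLException {
        // An in-memory H2 database is assumed here purely for illustration.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE users (id BIGINT, name VARCHAR(50))");
                st.execute("INSERT INTO users VALUES (1, 'Smith'), (2, 'Capriolo')");
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, name FROM users WHERE name LIKE ?")) {
                ps.setString(1, "Smi%");
                try (ResultSet rs = ps.executeQuery()) {
                    // A ResultSet is a cursor over the rows; next() advances it.
                    while (rs.next()) {
                        System.out.println(rs.getLong("id") + " -> " + rs.getString("name"));
                    }
                }
            }
        }
    }
}
```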



1269、单词 scp 的匹配文本:


  1. If you are on a Unix-flavored machine, chances are that you already have the scp command-line utility handy. Performance Testing With JMeter 2.9


  2. This time we use SSH, the secure protocol upon which SCP is based. High Performance MySQL 3rd Edition


  3. You may know that transferring a file to a remote machine running a SSH daemon can be achieved with Secure Copy (SCP) Gradle in Action


  4. The SCP task (http://Ant.apache.org/manual/Tasks/scp.html) is run a large number of times across many servers. Jenkins Continuous Integration Cookbook


  5. In the following sample, we are using the scp ANT optional task. Gradle Effective Implementation Guide


  6. For a PHP application, an FTP or SCP file transfer may suffice. Jenkins The Definitive Guide


  7. Several recipes in this chapter will use password-less access to the OpenSSL tools such as ssh, scp, and rsync. Cassandra High Performance Cookbook


  8. When the SSH service is enabled in a running virtual machine image, the virtual machine will answer SCP and SFTP requests, given some preconfigured authentication policy. Oracle JRockit The Definitive Guide


  9. You’ll copy your application code to the master node in your Hadoop EC2 cluster using scp. Hadoop in Action


  10. The act of copying files to or from HDFS is more like SCP or FTP than working with an NFS mounted filesystem, for example. Hadoop Operations



1270、单词 reboots 的匹配文本:


  1. You need to set vm.swappiness in the /etc/sysctl.conf file, so that the setting is enabled whenever the system reboots. HBase Administration Cookbook


  2. Also, our cron job runs frequently enough that if the tmpfs is cleared (for instance, if the server reboots), our window of lost data is small. Monitoring with Ganglia


  3. Ephemeral storage exists only for the life of an instance, it will persist across reboots of the guest operating system but when the instance is deleted so is the associated storage. bk-compute-adminguide-grizzly


  4. If your build server is a Windows box, for example, you probably should install Tomcat as a Windows service, so that you can ensure that it starts automatically whenever the server reboots. Jenkins The Definitive Guide


  5. It persists across reboots of the guest operating system, but when the instance is deleted, the ephemeral storage is also removed. bk-admin-guide-cloud-latest


  6. Things like kernel upgrades, minor reboots, and so on become impossible to do without downtime. HBase in Action


  7. For node reboots or other short outages, however, this is not necessary—the recent PG logs are sufficient to quickly resynchronize replicas. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  8. Kernel parameters should be configured in /etc/sysctl.conf so that settings survive reboots. Hadoop Operations


  9. There would be little point in doing this if the machine used only one or two connections per hostname between successive reboots. Java Performance Tuning 2nd Edition


  10. Validating and Repairing Your Data It is possible that your data will be left in a damaged or incomplete state if your server unexpectedly reboots or your MongoDB server crashes for any reason. The Definitive Guide to MongoDB



1271、单词 tls 的匹配文本:


  1. Among these are codecs for various protocols including HTTP and SSL/TLS. Netty in Action MEAP V08


  2. You might be tempted to not implement TLS, leaving your packets open. Jenkins Continuous Integration Cookbook


  3. To change this behavior we are going to switch to SSL (Secure Socket Layer, which is now called TLS, or Transport Layer Security, its successor, but for consistency we’ll call it SSL), whenever a user logs in. Pro Spring MVC


  4. Client certificate authentication is also known as mutual authentication and is part of the Secure Sockets Layer (SSL) protocol and its successor, Transport Layer Security (TLS) Spring Security 3.1


  5. Since later versions of the protocol can support earlier versions, the string TLS provides the most generic instance of an SSL context. Java Security 2nd Edition


  6. There is some limited support for TLS or SSL, but only if your LDAP server does not require client-side certificates. Pro Puppet


  7. A comma-delimited list of SSL/ TLS cipher suites to enable. Oracle JRockit The Definitive Guide


  8. Algorithm restrictions can also be placed at the TLS level. Java 7 New Features Cookbook


  9. This package defines an API for secure network sockets using the SSL (Secure Sockets Layer) protocol, or the closely related TLS (Transport Layer Security) protocol. Java in a Nutshell 5th Edition


  10. Defaults to env[OS_CACERT]. --insecure: Explicitly allow keystoneclient to perform "insecure" TLS (https) requests. admin-user-guide-trunk
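
A minimal JSSE sketch that requests a TLS SSLContext by the generic protocol string "TLS", as one excerpt above notes; initializing with nulls falls back to the platform defaults.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;

public class TlsContextDemo {
    public static void main(String[] args) throws Exception {
        // "TLS" requests the most generic TLS/SSL protocol instance available.
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, null, null);   // default key managers, trust managers, RNG

        SSLSocketFactory factory = context.getSocketFactory();
        System.out.println("protocol: " + context.getProtocol());
        // factory.createSocket(host, 443) would open a TLS-protected connection.
    }
}
```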



1272、单词 uat 的匹配文本:


  1. So, for instance, test plans recorded against the User Acceptance Test (UAT) environment can be run in production if those two environments bear a resemblance in structure. Performance Testing With JMeter 2.9


  2. A similar process has to be followed for setting up the hardware and software infrastructure for other environments (for example, UAT and production) that are part of the deployment pipeline. Gradle in Action


  3. The application is automatically deployed into test environments for testing by the QA team either as changes are committed, or on a nightly basis; a version can be deployed (or “promoted”) to UAT and possibly production environments using a manually-triggered build when testers consider it ready. Jenkins The Definitive Guide


  4. You will need to have these databases in your UAT, QA, and Dev environments. NoSQL Distilled



1273、单词 x86 的匹配文本:


  1. The success of NOW led to a following project, the Millennium Project, that aimed to support even larger clusters built on x86 hardware and distributed across the Berkeley campus. Monitoring with Ganglia


  2. Solaris is the leader on SPARC hardware, and it runs on x86 hardware too. High Performance MySQL 3rd Edition


  3. Specifically for virtualization on certain hypervisors on the node or nodes running nova-compute, you need a x86 machine with an AMD processor with SVM extensions (also called AMD-V) or an Intel processor with VT (virtualization technology) extensions. bk-compute-adminguide-grizzly


  4. VirtualBox and the Android-x86 project There are a number of options for running Android apps. Jenkins Continuous Integration Cookbook


  5. An example of the yum repository configuration for the x86_64 architecture is: Pro Puppet


  6. Then x86 will be the only architecture on which the program can run. Oracle JRockit The Definitive Guide


  7. While not strictly necessary, machines in a Hadoop cluster are usually relatively homogeneous x86 Linux boxes. Hadoop in Action


  8. On workloads with a much larger number of runnable threads, especially threads that experience CPU stalls, the SPARC T-series will likely perform better than an x86/x64 quad core processor since the time it takes to switch between hardware threads on a SPARC T-series is faster than the time it takes for a thread context switch on a single hardware thread per core architecture because the thread context switch may require CPU caches to be primed with data, which means the switched-to-thread will waste clock cycles waiting for data to be loaded from memory. Java Performance


  9. The servers run the Linux operating system on x86 processors; each machine is connected to several commodity SATA drives. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications


  10. On an x86 or Alpha Linux, the MySQL server binary is capable of unwinding its own stack and printing the stack trace when it receives a fatal signal such as SIGSEGV. Understanding MySQL Internals



1274、单词 porting 的匹配文本:


  1. It attempts to maintain compatibility with the (sequential) BGL [43] to facilitate porting algorithms. Pregel  -  A System for Large-Scale Graph Processing


  2. Having a fixed size for number types eliminates a major porting headache. Core Java Volume I Fundamentals 9th Edition


  3. Software written for Java can then be executed on any CPU type and operating system that supports Java, without the need to modify and convert source code (a process known as porting) Java Network Programming and Distributed Computing


  4. However, simply porting these database-oriented structures into a MapReduce-based data warehouse system cannot fully satisfy all four requirements. RCFile A Fast and Space-efficient Data Placement Structure in MapReduce-based Warehouse Systems


  5. By porting Storm on YARN a great deal of flexibility in resource allocation can be unblocked. Apache Hadoop YARN Yet Another Resource Negotiator


  6. Because Lucene’s Directory APIs expose filesystem semantics, a “standard” way to implement a new Lucene back end is to impose such semantics on every new back end, which is not always the simplest (most convenient) approach to porting Lucene. Professional Hadoop Solutions


  7. As an interesting follow-up, a paper was published in 2011 that explores methods for porting traditional data structures like quad-trees and kd-trees to HBase in the form of a secondary index. HBase in Action


  8. Once this is complete, it will be added as the thrift2 package so that you can maintain your existing code using the older schema, while working on porting it over to the new schema. HBase:The Definitive Guide


  9. As you can see, the NMS API is practically identical to the JMS API, which can greatly simplify developing and porting message-based applications. ActiveMQ in Action


  10. Without Hive, developers would face a daunting challenge when porting their SQL applications to Hadoop. Programming Hive



1275. Matching text for the word "chars" (a short code sketch follows the examples):


  1. They do not support TEXT or BLOB column types, and they support only fixed-size rows, so they really store VARCHARs as CHARs, which can waste memory. High Performance MySQL 3rd Edition


  2. Listing 7.11 shows a ByteToCharDecoder, which decodes bytes to chars and will be later combined. Netty in Action MEAP V08


  3. The hash code for an Int is the value of the Int, as are the hash codes of Shorts, Bytes, and Chars when automatically widened to Int. Programming in Scala 2nd Edition


  4. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the resulting value to be a negative number, even though chars represent 16-bit unsigned integer values. The Java Language Specification Java SE 7 Edition


  5. Compiling this file and decompiling the resulting class file with javap AllOpChars yields the following Java. Programming Scala


  6. You can obtain a char array that holds the characters of a string with toCharArray( ), or use getChars( ) to copy just a selected region of the string into an existing array. Java in a Nutshell 5th Edition


  7. The writeChars() method simply iterates through the String argument, writing each character in turn as a two-byte, big-endian Unicode character (a UTF-16 code point, to be absolutely precise) Java Network Programming 4th Edition


  8. FloatingDecimal stores the digits of the floating-point number as an array of chars when the FloatingDecimal is created; no strings are created until the floating-point number is converted to a string. Java Performance Tuning 2nd Edition


  9. A String is a sequence of Chars that is enclosed in double quotes: Steps in Scala An Introduction to Object-Functional Programming


  10. The CharSequence interface has methods codePoints and chars that yield an IntStream of the Unicode codes of the characters or of the code units in the UTF-16 encoding. Java SE 8 for the Really Impatient
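
To tie the examples above together, here is a minimal, self-contained Java sketch (the class name and string values are invented for illustration) of three common ways to get at a string's chars: toCharArray(), getChars(), and the chars() stream added in Java 8.

public class CharsDemo {
    public static void main(String[] args) {
        String s = "GVKun";

        // Copy every character into a new char array.
        char[] all = s.toCharArray();

        // Copy only a region (indices 1..2, i.e. "VK") into an existing array, starting at offset 0.
        char[] region = new char[2];
        s.getChars(1, 3, region, 0);

        // Java 8: stream the UTF-16 code units as an IntStream.
        s.chars().forEach(c -> System.out.print((char) c));
        System.out.println(" / " + new String(all) + " / " + new String(region));
    }
}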



1276. Matching text for the word "foundational":


  1. Instead, it is a basic building block that can be a foundational part of a testing tool. Core Java Volume I Fundamentals 9th Edition


  2. Before we go on to study recommender engines in detail, it’s important to spend some time with another foundational concept for recommenders in Mahout: the representation of data. Mahout in Action


  3. Code organization and reuse are the foundational levers of all computer programming: Write it once, use it more than once, and keep the code in one place. Thinking in Java 4th Edition


  4. Unfortunately, the implementation of this pattern isn’t straightforward in Hadoop because one of the foundational pieces of the framework is assigning one map task to an input split and assigning one map function call to one record. MapReduce Design Patterns


  5. It turns out that files() returns a FileCollec tion, which is a foundational interface for file programming in Gradle. Gradle Beyond The Basics


  6. You need not be an expert in all these technologies to make effective use of HBase, but it helps to have an understanding of these foundational layers in order to take full advantage of HBase. HBase in Action


  7. After considerable research and experimentation, we chose Hadoop and HBase as the foundational storage technology for these next generation applications. Apache Hadoop Goes Realtime at Facebook


  8. A Quick Review of Relational Databases Though you are likely familiar with them, let’s briefly turn our attention to some of the foundational concepts in relational databases. Cassandra The Definitive Guide


  9. We’ll revisit QueryParser in section 3.5, after we cover the more foundational pieces. Lucene in Action 2nd Edition


  10. This is foundational JDBC that needs to be known if you are working with databases within Java applications. Java 7 Recipes A Problem-Solution Approach



1277. Matching text for the word "vhosts":


  1. Within every RabbitMQ server is the ability to create virtual message brokers called virtual hosts (vhosts) RabbitMQ in Action



1278. Matching text for the word "ternary" (a short code sketch follows the examples):


  1. Finally, Java supports the ternary ?: operator that is occasionally useful. Core Java Volume I Fundamentals 9th Edition


  2. Ternary if-else operator The ternary operator, also called the conditional operator, is unusual because it has three operands. Thinking in Java 4th Edition


  3. You can implement that method in a single statement and without using any conditional constructs like the ternary operator as follows: Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. As demonstrated here, Scala’s if expression can result in a value, similar to Java’s ternary operator. Programming in Scala 2nd Edition


  5. Conditional x ? y : z Bincond/ternary; y if x evaluates to true, z otherwise. Hadoop:The Definitive Guide 3rd Edition


  6. Basically, instead of using just one binary mark bit per object, a color, or ternary value is used. Oracle JRockit The Definitive Guide


  7. Because Scala’s if expression works like Java’s ternary operator, the if in the previous code returns a if it’s longer than b. Beginning Scala


  8. Java also defines one ternary operator, often called the conditional operator. Java in a Nutshell 5th Edition


  9. Bentley's article also looks briefly at optimizing partial-match searching in ternary search trees by transforming a tail recursion in the search into an iteration. Java Performance Tuning 2nd Edition


  10. In this case, the expression is using a ternary operator that returns an int value. Java 7 Recipes A Problem-Solution Approach
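
As a quick illustration of the quotes above, a minimal Java sketch of the ternary (conditional) operator; the variable names and values are made up:

public class TernaryDemo {
    public static void main(String[] args) {
        int a = 7, b = 3;

        // x ? y : z evaluates y if x is true, z otherwise.
        int max = (a > b) ? a : b;

        // The whole thing is an expression with a value, so it can feed another expression directly.
        String parity = (max % 2 == 0) ? "even" : "odd";

        System.out.println(max + " is " + parity);
    }
}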



1279. Matching text for the word "readonly":


  1. For example, as soon as a document has been set to readonly mode, you can locate the Save and Save As menu items and disable them. Core Java Volume I Fundamentals 9th Edition


  2. Attributes may be mutable or immutable to the remote client through the readonly keyword. Java Network Programming and Distributed Computing


  3. An index is much like a database, in the sense that it can have its own completely different settings, such as the way it's stored on disk or whether it's readonly. Elasticsearch in Action MEAP V07 Chapter 1


  4. Other Linux tools such as dmesg can help determine if a drive has gone into a readonly mode. Hadoop in Practice


  5. Recall from our earlier discussion of upper-bounded wildcards that if you have a collection whose element type is an upper-bounded wildcard, it is effectively readonly. Java in a Nutshell 5th Edition


  6. If it’s false, you will see a label indicating the readonly value; if it’s true, you will see a set of one or more fields to add your new value and a button to update it. Cassandra The Definitive Guide


  7. As an additional safeguard, the NameNode process will hold the HDFS filesystem in a readonly mode until it has confirmed that a given percentage of blocks meet their replication threshold. Hadoop Beginners Guide


  8. Note that monitorRole is the role name and readonly is the access level. ActiveMQ in Action


  9. Processing each query shares mostly readonly data with the rest of the system, and constitutes a work unit that requires little communication. Web search for a planet the google cluster architecture


  10. In many cases you can instead selectively permit some leakage by constructing and returning an Adapter object surrounding the part that exposes only those operations that clients may use without introducing any potential interference — generally, readonly operations. Concurrent Programming in Java Design Principles and Pattern



1280. Matching text for the word "manageability":


  1. The problems that you solve with concurrency can be roughly classified as "speed" and "design manageability." Thinking in Java 4th Edition


  2. Note You can also install Foreman from source if you wish, though we recommend for manageability that you stick with packages. Pro Puppet


  3. In this time, the software has made great leaps in performance, reliability, scalability, and manageability. Hadoop:The Definitive Guide 3rd Edition


  4. The Management Console relies on the JMX standard and provides a way to monitor any application that exposes manageability features through JMX, including the JRockit JVM. Oracle JRockit The Definitive Guide


  5. We consider the peer-to-peer design and its corresponding gossip protocol, as well as what Cassandra does on read and write requests, and examine how these choices affect architectural considerations such as scalability, durability, availability, manageability, and more. Cassandra The Definitive Guide


  6. In contrast to functional requirements, which dictate functionally what an application computes or produces for output, systemic requirements dictate a particular aspect of an application’s operation such as its throughput, response time, the amount of memory it consumes, startup time, availability, manageability, and so on. Java Performance


  7. This increases your productivity, reduces the chances of typing errors, and also enhances manageability because if you have to make a change, it can occur in one place rather than in several different locations. Java 7 Recipes A Problem-Solution Approach



1281. Matching text for the word "markdown":


  1. The resulting blog engine features access-controlled posting, open comments with the possibility of moderation, Atom feeds, Markdown formatting, and a few other little goodies. CouchDB The Definitive Guide


  2. We won’t go into the syntax of Markdown here because that’s available on the web and because you can easily discover it by using the button bar interface. Spring in Practice


  3. Here is the Markdown text we get for the worker protocol: ## The MDP/Worker Protocol. ZeroMQ


  4. For example, to convert Markdown text to HTML using MarkdownJ, see Example 1-9 Gradle Beyond The Basics



1282. Matching text for the word "hyperlink" (a short code sketch follows the examples):


  1. You supply the name of a class, method, or variable, and javadoc inserts a hyperlink to the documentation. Core Java Volume I Fundamentals 9th Edition


  2. By placing a hyperlink over a word (such as a scientific term) or a phrase, users would be able to jump instantly from one document to another, even though the documents could reside on servers located in other countries. Java Network Programming and Distributed Computing


  3. When it comes to the reduce phase, all the comment IDs that reference the same hyperlink will be grouped together. MapReduce Design Patterns


  4. They’re used to represent data such as the hyperlink structure of the internet, social networks (where they represent relationships between users), and in internet routing to determine optimal paths for forwarding packets. Hadoop in Practice


  5. Each hyperlink is an edge, and the frequency at which people click from one page to the next is edge metadata. Hadoop:The Definitive Guide 3rd Edition


  6. For example, all action components, like the command button or the hyperlink, have default action listeners that are invoked in this phase. Java EE 7 Essentials


  7. The HyperlinkListener interface has a single method, hyperlinkUpdate, that is called when the user moves over or clicks on a link. Core Java Volume II Advanced Features 9th Edition


  8. If reference is not a quoted string or a hyperlink, it is expected to have the following form: Java in a Nutshell 5th Edition


  9. For example, suppose that while browsing http://www.ibiblio.org/javafaq/javatutorial.html you click on this hyperlink: Java Network Programming 4th Edition


  10. Solution 3 is used when a user clicks a special hyperlink on a web page that launches the Swing application that will be pushed (installed) onto the local workstation. Java 7 Recipes A Problem-Solution Approach
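
Several of the quotes mention Swing's HyperlinkListener; the sketch below (a minimal example, reusing the ibiblio URL from quote 9) wires one to a non-editable JEditorPane so that activated links are followed:

import javax.swing.JEditorPane;
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.event.HyperlinkEvent;

public class HyperlinkDemo {
    public static void main(String[] args) throws Exception {
        JEditorPane pane = new JEditorPane("http://www.ibiblio.org/javafaq/javatutorial.html");
        pane.setEditable(false); // links are only clickable when the pane is read-only

        // hyperlinkUpdate is called when the user moves over or clicks a link.
        pane.addHyperlinkListener(e -> {
            if (e.getEventType() == HyperlinkEvent.EventType.ACTIVATED) {
                try {
                    pane.setPage(e.getURL()); // follow the clicked hyperlink
                } catch (java.io.IOException ex) {
                    ex.printStackTrace();
                }
            }
        });

        JFrame frame = new JFrame("Hyperlink demo");
        frame.add(new JScrollPane(pane));
        frame.setSize(600, 400);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}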



1283. Matching text for the word "cmd":


  1. If you use Windows XP, select the Run option from the Start menu and type cmd. Core Java Volume I Fundamentals 9th Edition


  2. Based on your operating system, you’ll either end up with a batch script named grunt.cmd (Windows) or a shell script named grunt (*nix) Gradle in Action


  3. Commands in MongoDB are implemented as a special type of query that gets performed on the $cmd collection. MongoDB The Definitive Guide 2nd Edition


  4. JENKINS_JAVA_CMD The version of Java you want to use to run Jenkins. Jenkins The Definitive Guide


  5. The easiest way to do this is to open Spotlight (Cmd+Spacebar) and enter Terminal into the Spotlight text box. Beginning CouchDB


  6. The command cmd is usually named after the corresponding Unix equivalent. Hadoop in Action


  7. So JMX is doing the real work, the NodeProbe class is used to connect to the JMX agent and sort out the data, and the NodeCmd class is used to present it in an interactive command-line interface. Cassandra The Definitive Guide


  8. In a Command Prompt window, or in a cmd file, the command looks like. Java Performance


  9. To open a command window in Microsoft Windows, click Start, type CMD, and press Enter. Java 7 Recipes A Problem-Solution Approach


  10. Regardless of the functionality they provide, what all database commands have in common is their implementation as queries on a special virtual collection called $cmd. MongoDB in Action



1284. Matching text for the word "restlet":


  1. In the XML world he is known as the developer of DB Generator for the Apache Cocoon project, the open source projects DBPrism and DBPrism CMS, the Lucene-Oracle integration by using Oracle JVM Directory implementation and in the Restlet.org project the Oracle XDB Restlet Adapter, an alternative to write native REST web services inside the database-resident JVM. Mastering ElasticSearch


  2. He recently joined Noelios Technologies, the company behind the Restlet framework, and lives in Brittany (France) Spring Batch in Action


  3. In this section, we’ll describe some general concepts related to integrating frameworks of this kind with Spring DM, and we’ll take a look at examples of the most popular ones: Spring Web Services (Spring WS) and Restlet. Spring Dynamic Modules in Action


  4. In the XML world, he is known as the developer of the DB Generator for the Apache Cocoon project, the open source projects DBPrism and DBPrism CMS, the Lucene-Oracle integration by using Oracle JVM Directory implementation, and the Restlet.org project – the Oracle XDB Restlet Adapter, an alternative to writing native REST web services inside the database resident JVM. Apache Solr 4 Cookbook



1285. Matching text for the word "jackson" (a short code sketch follows the examples):


  1. Although it’s true that Jackson provides JSON-specific annotations for object/ JSON mapping, you can configure Jackson to understand JAXB annotations, and this allows you to avoid parallel sets of JAXB and Jackson annotations. Spring in Practice


  2. They use the Jackson JSON library, which is included in your Pig distribution. Programming Pig


  3. For the purpose of discussion, the Jackson serializer for JSON and JAXB for XML were selected. Java Performance


  4. Thanks to my supportive, patient, and loving wife Robyn; and our five children Jackson, Nicholas, Matthew, Tressa, and Ruby. Java 7 Recipes A Problem-Solution Approach


  5. Jackson (the JSON marshalling technology used by Spring Data REST) seems to choke on serializing the EmailAddress value object. Spring Data Modern Data Access for Enterprise Java


  6. Registered if the Jackson JSON library is present on the classpath. Spring in Action 4th Edition
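
A minimal sketch of the Jackson usage the quotes describe, assuming the jackson-databind artifact is on the classpath; the Person class and its field values are invented for illustration:

import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonDemo {
    // Hypothetical POJO; Jackson maps its public fields/getters to JSON properties by default.
    public static class Person {
        public String name = "Ada";
        public int age = 36;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        String json = mapper.writeValueAsString(new Person());   // object -> JSON, e.g. {"name":"Ada","age":36}
        System.out.println(json);

        Person p = mapper.readValue(json, Person.class);          // JSON -> object
        System.out.println(p.name + " / " + p.age);
    }
}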



1286. Matching text for the word "exe":


  1. A wrapper is a Windows program with the familiar .exe extension that locates and launches the Java virtual machine (JVM) or tells the user what to do when no JVM is found. Core Java Volume I Fundamentals 9th Edition


  2. If you've already saved the lightbulb schema as LightBulb.idl (preferably in a directory of its own), the next step is to generate the Java mapping, using idlj.exe. Java Network Programming and Distributed Computing


  3. Now that you have a data directory, open the command prompt (cmd.exe) MongoDB The Definitive Guide 2nd Edition


  4. The installer comes in the form of a ZIP file containing an MSI package for Jenkins, as well as a setup.exe file that can be used to install the .NET libraries if they have not already been installed on your machine. Jenkins The Definitive Guide


  5. Typically, this works best in something like a native binary executable, such as an .exe file, where distinctions between code and data are less clear. Oracle JRockit The Definitive Guide


  6. The other two were a Word document and the executable java.exe file. Java 7 New Features Cookbook


  7. It can be used to transfer TIFF pictures, Microsoft Word documents, Windows .exe files, or anything else that can be represented in bytes. Java Network Programming 4th Edition


  8. Assuming that you have appropriate NMS DLLs, the preceding command should produce the Consumer.exe binary. ActiveMQ in Action


  9. Once the installation is complete, you will find the easy_install.exe file in Python’s Scripts subdirectory. The Definitive Guide to MongoDB


  10. The core database server runs via an executable called mongod (mongodb.exe on Windows) MongoDB in Action



1287. Matching text for the word "speedups":


  1. The overall effect of these speedups is that raw data to be classified can be read, parsed, and encoded at a rate of about 15 MB/s without much effort. Mahout in Action


  2. Although the max-parallelism policy produces the largest improvements, the relative job size policy also does quite well, and its speedups probably have a higher likelihood of being achieved because it requires fewer new MapReduce workers to be constructed: the time to set up a worker on a new machine is not fully accounted for in the simple model. Omega - flexible, scalable schedulers for large compute clusters


  3. It is possible that some algorithmic tuning of these programs could reduce this effect, and in turn lead to better speedups. A Java ForkJoin Framework


  4. There are a number of additional speedups possible in a JVM. Thinking in Java 4th Edition


  5. At the scale of the bulk data processing jobs that run in today’s data centers, these asymptotic improvements can translate to huge speedups and cluster utilization savings. Incoop MapReduce for Incremental Computations


  6. More than once, the simple removal of a couple of calls to System.gc has led to immense speedups for quite a few customer applications. Oracle JRockit The Definitive Guide


  7. Order-of-magnitude speedups are typical for the first round of performance tuning. Java Performance Tuning 2nd Edition


  8. You cranked up mergeFactor and got awesome speedups, so you want to push it even higher. Lucene in Action 2nd Edition


  9. However, there are also efficient alternative concurrent algorithms for many problems, including those that open up the possibility of further speedups via parallelism. Concurrent Programming in Java Design Principles and Pattern



1288. Matching text for the word "pearson":


  1. Audio page 50 Sean addresses questions about the implementation of the Pearson correlation. Mahout in Action


  2. Popular measures that can calculate these scores include Euclidean distance and Pearson’s correlation. Hadoop in Practice


  3. InformIT is a brand of Pearson and the online presence for the world’s leading technology publishers. Java SE 8 for the Really Impatient


  4. He currently serves as a Director of Technology for Pearson Education, driving the development of new educational service platforms. Pro JPA2



1289. Matching text for the word "hly":


  1. Create an hly_temp_normal table in your MySQL database, using the following SQL statements: HBase Administration Cookbook



1290. Matching text for the word "subinterface" (a short code sketch follows the examples):


  1. It is a subinterface of AutoCloseable, also with a single close method. Core Java Volume I Fundamentals 9th Edition


  2. NIO.2 also introduces the MulticastChannel interface as a subinterface of NetworkChannel. Pro Java 7 NIO.2


  3. This class implements the MutableTreeNode interface, a subinterface of TreeNode (see Figure 6.17) Core Java Volume II Advanced Features 9th Edition


  4. In fact, the key types implement WritableComparable, a subinterface of Writable. Hadoop in Action


  5. If you answered no to the first question, ask yourself one more: Do I want to limit the use of this marker to elements of a particular interface, forever? If so, it makes sense to define the marker as a subinterface of that interface. Effective Java 2nd Edition


  6. TreeSet actually implements the SortedSet interface, which is a subinterface of Set. Java in a Nutshell 5th Edition


  7. Unfortunately, you cannot add a method to the Comparator interface, so you have to use the following subinterface: Java Performance Tuning 2nd Edition


  8. This mechanism is called the Executor framework and is around the Executor interface, its subinterface ExecutorService, and the ThreadPoolExecutor class that implements both interfaces. Java 7 Concurrency Cookbook


  9. Optionally, define a subinterface that supports additional update methods used in the normal. Concurrent Programming in Java Design Principles and Pattern


  10. It is a subinterface of AutoCloseable, also with a single close method, but that method is declared to throw an IOException. Java SE 8 for the Really Impatient
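
To make the term concrete: as the first and last quotes note, Closeable is declared in the JDK as a subinterface of AutoCloseable, and the same pattern applies to interfaces of your own. A minimal sketch with invented names:

import java.io.Closeable;

public class SubinterfaceDemo {
    // A base interface and a subinterface that extends (narrows) it; both names are hypothetical.
    interface Resource extends AutoCloseable {
        String name();
    }

    static class FileLike implements Resource, Closeable {
        public String name() { return "file-like"; }
        public void close() { System.out.println("closed " + name()); } // narrower than close() throws Exception
    }

    public static void main(String[] args) throws Exception {
        try (FileLike f = new FileLike()) {   // try-with-resources works because AutoCloseable is in the hierarchy
            System.out.println("using " + f.name());
        }
    }
}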



1291. Matching text for the word "binlog":


  1. The sync_binlog option controls how MySQL flushes the binary log to disk. High Performance MySQL 3rd Edition


  2. If the low-level module has made a modification to the data in some way and if the binary update logging is enabled, the module will be responsible for asking the Logging Module to log the update event to the binary update log, sometimes known as the replication log, or, among MySQL developers and power users, the binlog. Understanding MySQL Internals



1292. Matching text for the word "proc" (a short code sketch follows the examples):


  1. The dendrite would send periodic heartbeats as well as publish any significant /proc metric changes on a common multicast channel. Monitoring with Ganglia


  2. You can capture /proc/diskstats to record disk I/O for later analysis with the pt-diskstats tool,5 for example. High Performance MySQL 3rd Edition


  3. Create the proc folder under the root filesystem of the image file using the following command: sudo mkdir -pv /mnt/centos/proc. Hadoop Operations and Cluster Management Cookbook


  4. The input-events startLogProc in Listing 6-7 defines a dependency between two data sets: systemLog and applicationLog. Professional Hadoop Solutions


  5. Swift Recon: The Swift Recon middleware (see http://swift.openstack.org/admin_guide.html#clustertelemetry-and-monitoring) provides general machine statistics, such as load average, socket statistics, /proc/meminfo contents, and so on, as well as Swift-specific metrics: bk-admin-guide-cloud-latest


  6. Determine how much physical memory your system has by searching through the /proc/meminfo file: Cassandra High Performance Cookbook


  7. The cumulative CPU time for a task in milliseconds, as reported by /proc/cpuinfo. Hadoop:The Definitive Guide 3rd Edition


  8. One example of this would be not having to emulate parts of /proc file system in the kernel, as this is the way JRockit on Linux collects information on memory layout. Oracle JRockit The Definitive Guide


  9. There are other ways to monitor for swap activity on Linux such as using the top command or observing the contents of the file /proc/meminfo. Java Performance


  10. This is because the DUMMY_PROC procedure simply assigns the contents if the IN parameter to the OUT parameter. Java 7 Recipes A Problem-Solution Approach
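
Several quotes read Linux /proc files to collect metrics; the following minimal sketch (Linux-only by nature, and assuming Java 8 for the no-charset readAllLines overload) prints the first few lines of /proc/meminfo:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class ProcDemo {
    public static void main(String[] args) throws IOException {
        // /proc is a virtual filesystem: the kernel generates these "files" on each read.
        List<String> lines = Files.readAllLines(Paths.get("/proc/meminfo"));
        lines.stream().limit(5).forEach(System.out::println);   // MemTotal, MemFree, ...
    }
}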



1293. Matching text for the word "superinterfaces" (a short code sketch follows the examples):


  1. The value of the interfaces_count item gives the number of direct superinterfaces of this class or interface type. The Java Virtual Machine Specification Java SE 7 Edition


  2. The type Object, if C is an interface type with no direct superinterfaces. The Java Language Specification Java SE 7 Edition


  3. A class that implements such an interface must implement the abstract methods defined directly by the interface, as well as all the abstract methods inherited from all the superinterfaces. Java in a Nutshell 5th Edition


  4. Loading a class requires loading all Java superclasses and all Java superinterfaces. Java Performance


  5. NOTE: When calling Java.extend, you can specify any number of superinterfaces as well as a superclass. Java SE 8 for the Really Impatient
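
A small reflection sketch that lists a type's direct superinterfaces, matching the terminology used in the quotes; java.util.ArrayList is just an arbitrary class to inspect:

public class SuperinterfacesDemo {
    public static void main(String[] args) {
        Class<?> c = java.util.ArrayList.class;

        // getInterfaces() returns only the direct superinterfaces,
        // not those inherited further up the hierarchy.
        for (Class<?> itf : c.getInterfaces()) {
            System.out.println(itf.getName());
        }
        // Walking getSuperclass()/getInterfaces() recursively would give the full transitive set.
    }
}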



1294. Matching text for the word "jbutton":


  1. Caution Most Swing component classes start with a “J”: JButton, JFrame, and so on. Core Java Volume I Fundamentals 9th Edition


  2. Making a button Making a button is quite simple: You just call the JButton constructor with the label you want on the button. Thinking in Java 4th Edition


  3. For example, you can’t serialize any Swing components (like JButton, JSpinner), and you can’t serialize File objects, but you can serialize the Color class (awt.color, to be more precise) Java 7 Recipes A Problem-Solution Approach



1295. Matching text for the word "comparators" (a short code sketch follows the examples):


  1. Key Contents and Comparators For simplicity in this pass, we are going to use the same object, Text, for the keys for both datasets, and Text for the values. Pro Hadoop


  2. In this chapter, the recipe Using column metadata and comparators for schema enforcement stores meta-information inside the schema so it displays properly. Cassandra High Performance Cookbook


  3. It’s not customary to modify an object from within a Comparator Normally you wouldn’t write Comparators the way we did in the code example. HBase in Action


  4. The sorting algorithms are easily rewritten to use Comparators, in the event that the default ordering is unavailable or unacceptable. Data Structures and Algorithm Analysis in Java 3rd Edition


  5. Another tool to improve the execution times of your MapReduce jobs are binary comparators, which we’ll examine next. Hadoop in Practice


  6. If you are using your own custom Writable objects or custom comparators, make sure you have implemented RawComparator. Hadoop:The Definitive Guide 3rd Edition


  7. Be advised that the KeyValue class, and its accompanying comparators, are designed for internal use. HBase:The Definitive Guide


  8. A TreeBasedTable table where the row and column keys are ordered, either by the natural order or by specified comparators for the row and column keys. Getting Started with Google Guava


  9. The Comparator interface is generic (Item 26) so that it is applicable to comparators for objects other than strings. Effective Java 2nd Edition


  10. NOTE: As you will see in Chapter 8, the Comparator class has several methods that yield or modify comparators. Java SE 8 for the Really Impatient
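
A short Java sketch that pulls the Comparator-related quotes together: an explicit comparator, plus the Java 8 factory and chaining methods mentioned in the last quote. The data is invented; expected output is shown in comments.

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ComparatorsDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("pear", "Fig", "apple");

        // Explicit comparator: order by length, shortest first.
        Comparator<String> byLength = (a, b) -> Integer.compare(a.length(), b.length());
        words.sort(byLength);
        System.out.println(words);   // [Fig, pear, apple]

        // Java 8 factories: case-insensitive order, with natural order as a tie-breaker.
        words.sort(Comparator.comparing((String s) -> s.toLowerCase())
                             .thenComparing(Comparator.naturalOrder()));
        System.out.println(words);   // [apple, Fig, pear]
    }
}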



1296. Matching text for the word "etag" (a short code sketch follows the examples):


  1. Etag The Etag HTTP header field is used to show the revision for a document. Getting Started with CouchDB


  2. An Etag in HTTP-speak identifies a specific version of a resource. CouchDB The Definitive Guide


  3. Whenever you get a resource, the server responds with an etag in the header. NoSQL Distilled


  4. An ETag is a string that identifies a specific version of a component. Solr 3 Enterprise Search Server


  5. The response to every GET request to a document or view includes an Etag HTTP header (for documents, the Etag’s value is the document’s revision in double quotes) Scaling CouchDB


  6. ETag - An entity tag which can be used for cache validation by a client. A Little Riak Book


  7. The ETag header (HTTP 1.1) is a unique identifier for the resource that changes when the resource does. Java Network Programming 4th Edition


  8. For example, if your documents are crawled from a web server, then one way to detect that the content has changed is to look for a changed ETag HTTP header. Lucene in Action 2nd Edition


  9. Next, we have the eTagSeed attribute, which is responsible for generating the ETag HTTP cache header. Apache Solr 4 Cookbook


  10. ETag MD5 hash of an object within swift, used to ensure data integrity. OpenStack Operations Guide
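
A minimal Java sketch of the ETag / If-None-Match exchange the quotes describe, using plain HttpURLConnection; the URL is arbitrary, and a real server may or may not send an ETag at all:

import java.net.HttpURLConnection;
import java.net.URL;

public class EtagDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/");

        // First request: remember the ETag the server attaches to this version of the resource.
        HttpURLConnection first = (HttpURLConnection) url.openConnection();
        String etag = first.getHeaderField("ETag");
        first.disconnect();

        if (etag != null) {
            // Conditional request: the server answers 304 Not Modified if the version still matches.
            HttpURLConnection second = (HttpURLConnection) url.openConnection();
            second.setRequestProperty("If-None-Match", etag);
            System.out.println("Status: " + second.getResponseCode());   // 304 means the cached copy is still valid
            second.disconnect();
        } else {
            System.out.println("Server sent no ETag header");
        }
    }
}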



1297. Matching text for the word "lowercased":


  1. The first letter after "get" or "set" will automatically be lowercased by any tools that look at the methods, in order to produce the property name. Thinking in Java 4th Edition


  2. In order to actually perform the autocomplete, we’ll insert lowercased contact names. Redis in Action


  3. Lowercase filter: It makes all the tokens lowercased • ASCII folding filter: It removes non ASCII parts from tokens. Mastering ElasticSearch


  4. Returns a list containing all elements of the thrill list in alphabetical order of the first character lowercased (returns List("fill", "until", "Will")) Programming in Scala 2nd Edition


  5. Remember that facet values are the final result of text analysis, and therefore are probably lowercased for fields you might want to do term completion on. Solr 3 Enterprise Search Server


  6. We tell Solr to split the data on whitespaces, split on case change, make text parts lowercased, and finally to stem text with the appropriate algorithm—in this case, it is one of the English stemming algorithms available in Solr and Lucene. Solr 3.1 Cookbook


  7. Thus, unless you want to actually bring back facet counts for each of those terms individually and lowercased, you have to consider how you may want to facet on a field when you create the field definition in Solr’s schema.xml. Solr in action


  8. The terms of the contents field, however, were lowercased when indexed. Lucene in Action 2nd Edition


  9. The next parameter lowernames is set to true; this tells Solr to lower all names that come from Tika and have them lowercased. Apache Solr 4 Cookbook


  10. The resolution algorithm starts with interpreting the entire part (AddressZipCode) as a property and checks the domain class for a property with that name (with the first letter lowercased) Spring Data Modern Data Access for Enterprise Java



1298. Matching text for the word "base64" (a short code sketch follows the examples):


  1. To output binary data, there is the option to return a Base64-encoded string, like this: CouchDB The Definitive Guide


  2. Request Parameters: full (Boolean, return full documents including metadata, default false) revs (Boolean, return a list of previous revisions, default false) rev (String, set a specific revision number) attachments (Boolean, get attachments in Base64, default false) Beginning CouchDB


  3. Because you’ve stored the passwords as simple Strings, you can decode them enough to find the value using the base64 utility: HBase in Action


  4. Discussion Similar to Protocol Buffers, Elephant Bird includes a Base64-encoded, line-based mechanism to serialize Thrift, as well as a block-based mechanism, which uses a Protocol Buffers generic container to write Thrift objects. Hadoop in Practice


  5. The column name and the actual value are encoded in Base64,† as explained in the online schema documentation. HBase:The Definitive Guide


  6. The program is straightforward, except for the computation of the Base64 encoding. Core Java Volume II Advanced Features 9th Edition


  7. Here we are taking a binary file (a PDF document) and encoding the bytes to a base64 encoded string. Getting Started with Google Guava


  8. The BASE64 encoder encoded binary data for instances where the data needs to be stored and transferred across textual mediums, such as e-mail and XML files. Java 7 New Features Cookbook


  9. The CertificateFactory class serves as a certificate parser, providing the ability to convert a stream of bytes (or the base64 encoding of those bytes) into a Certificate, a CertPath or a CRL object. Java in a Nutshell 5th Edition


  10. The other dependency is an API that encodes arbitrary strings first with the Metaphone algorithm, then the Base64 format. Building and Testing with Gradle
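
Since Java 8 the JDK ships a standard codec in java.util.Base64; a minimal sketch of encoding and decoding (the payload string is made up):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        byte[] raw = "binary-ish payload".getBytes(StandardCharsets.UTF_8);

        // Encode bytes into a Base64 text form that survives textual transports (mail, XML, JSON).
        String encoded = Base64.getEncoder().encodeToString(raw);
        System.out.println(encoded);

        // Decode back to the original bytes.
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(new String(decoded, StandardCharsets.UTF_8));
    }
}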



1299. Matching text for the word "nrpe":


  1. The NRPE and check_log Nagios plugins can be used to achieve this simple goal, with a few simple steps. HBase Administration Cookbook


  2. Collectively, these enable Nagios to query metric information from gweb instead of relying on remote execution systems such as Nagios Service Check Acceptor (NSCA) and Nagios Remote Plugin Executor (NRPE) Monitoring with Ganglia


  3. Install the Nagios Remote Plugin Executor (NRPE) package with the following command: sudo yum install nrpe. Hadoop Operations and Cluster Management Cookbook


  4. You can create automated alerts for critical processes by using Nagios and NRPE. OpenStack Operations Guide



1300. Matching text for the word "spinner":


  1. Several variations of the JSpinner component. The values in the spinner can be numbers, dates, values from a list, or, in the most general case, any sequence of values for which predecessors and successors can be determined. Core Java Volume II Advanced Features 9th Edition


  2. However, a widget called a Spinner is constructed within the activity as well. Java 7 Recipes A Problem-Solution Approach



1301. Matching text for the word "tokenizers":


  1. You can define Tokenizers specific to your field types. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  2. Analyzers, Tokenizers, Filters, oh my! The various components involved in text analysis go by various names, which are about to be defined. Solr 3 Enterprise Search Server


  3. Furthermore, type definition is composed of tokenizers and filters (both token filters and character filters) Solr 3.1 Cookbook


  4. The first property, tokenizers B, registers all LineTokenizers in a map. Spring Batch in Action


  5. Ironically, the next step may be to fall back on the StringTokenizer class, use of which is discouraged in the class Javadocs comments in favor of the newer regular expression-based tokenizers. Hadoop in Practice


  6. Elasticsearch provides many character filters, tokenizers and token filters out. Elasticsearch The Definitive Guide


  7. In Solr, you must specify the factory class instead of the underlying Tokenizer implementation class because most tokenizers do not provide a default no-arg constructor. Solr in action


  8. It’s also straightforward to build your own analyzer, or create arbitrary analyzer chains combining Lucene’s tokenizers and token filters, to customize how tokens are created. Lucene in Action 2nd Edition


  9. The indentation of the actual output was changed to fit the page.) For those not familiar with parsers and tokenizers, this can look overwhelming. Programming Hive



1302. Matching text for the word "soa":


  1. Attune Infocom provides enterprise open source solutions and services for SOA, BPM, ESB, Portal, cloud computing, and ECM. Web Crawling and Data Mining with Apache Nutch


  2. Additionally, he is an InfoQ editor on SOA and Big Data, and a frequent speaker at industry conferences. Professional Hadoop Solutions


  3. Remember, OSGi is sometimes referred to as a service-oriented architecture (SOA) in a JVM. Spring Dynamic Modules in Action


  4. Oracle SOA Suite in the delivery of real-world SOA applications. Oracle JRockit The Definitive Guide


  5. In many SOA scenarios, service consumers may not be under the control of the service implementation. Spring Integration in Action


  6. They are now employed in a variety of use cases, such as content management and supply chain systems, document management, publishing, and support of SOA initiatives. Cassandra The Definitive Guide


  7. SOA, CEP, and many other higher-level concepts built on top of enterprise messaging wouldn’t have come about. ActiveMQ in Action


  8. Service Oriented Architecture (SOA) is an architectural style that enables complex business applications to be built from smaller individual services distributed over the network. Java Performance


  9. One of the most hyped TLAs (three-letter acronyms) in recent years is SOA (serviceoriented architecture) Spring in Action 4th Edition



1303. Matching text for the word "andy":


  1. In this book by Andy Hunt, you’ll learn how our brains are wired, and how to take advantage of your brain’s architecture. Programming Concurrency on the JVM


  2. Thanks to Andy Oram, Rachel Head, and the whole O’Reilly staff who do such a classy job of publishing books and running conferences. High Performance MySQL 3rd Edition


  3. This practice is better known by the DRY acronym and was introduced by Andy Hunt and Dave Thomas in their book, The Pragmatic Programmer. Sonar Code Quality Testing Essentials


  4. Acknowledgments Thanks to Andy Oram for making this happen at O’Reilly and editing the book. ZeroMQ


  5. The support staff, especially our editor Andy Oram, has been extremely helpful in guiding us through this process. MapReduce Design Patterns


  6. To discuss supporting OSGi in Spring, Adrian proposed a conference call with Hal Hildebrand (Oracle), John Wells (BEA), and one of the authors of this book: Andy Piper (BEA) Spring Dynamic Modules in Action


  7. Now let’s consider a third roommate, Andy, who loves almonds. A Little Riak Book


  8. Acknowledgements We thank Pieter Noordhuis for thoroughly reviewing several chapters of our book, our editor Andy Oram for his work on making us look good, Salvatore Sanfilippo for his words of encouragement, and our respective companies for the extra free time to write this book. Redis Cookbook


  9. First, a big thank you to everyone at Manning for working hard, for their patience when the writing wasn’t moving as fast as it should have, and for prodding me along to get it done: Marjan Bace, Michael Stephens, Cynthia Kane, Andy Carroll, Benjamin Berg, Alyson Brener, Dottie Marisco, Mary Piergies, Janet Vail, and many others behind the scenes. Spring in Action 4th Edition


  10. Interestingly enough, my editor happened to be Andy Oram, who also participated in the publication of Understanding the Linux Kernel and Linux Device Drivers. Understanding MySQL Internals



1304. Matching text for the word "kickstart":


  1. See also f The Creating the kickstart file and boot media recipe in Chapter 2, Preparing. Hadoop Operations and Cluster Management Cookbook


  2. Instructions for installing these RPMs and configuring them with a kickstart script are available at the Mirantis blog. Deploying OpenStack


  3. In addition, Escalate Software can help your business get started and succeed with Scala through its “kickstart” program, which helps you get skills and infrastructure in place for your build, development, and deployment. Programming in Scala 2nd Edition


  4. Tip If you use a provisioning tool like Kickstart or Preseed, you can also include Puppet installation and signing as part of your build process. Pro Puppet


  5. To ease the burden of installing and maintaining the same software on each node, it is normal to use an automated installation method such as Red Hat Linux’s Kickstart or Debian’s Fully Automatic Installation. Hadoop:The Definitive Guide 3rd Edition


  6. Kickstart users can commission entire Hadoop clusters without manual intervention. HBase:The Definitive Guide


  7. Ubuntu and Red Hat Linux both include mechanisms for configuring the operating system, including preseed and kickstart, that you can use after a network boot. OpenStack Operations Guide



1305. Matching text for the word "unmodifiable" (a short code sketch follows the examples):


  1. The Collections class has methods that produce unmodifiable views of collections. Core Java Volume I Fundamentals 9th Edition


  2. In Java we could pass an unmodifiable Collection as messages to send more than one object in. Programming Concurrency on the JVM


  3. The full list of "unmodifiable" Collections methods is described later. Thinking in Java 4th Edition


  4. But an unmodifiable collection is just a wrapper over a modifiable collection. Java 8 in Action Lambdas, Streams and Functional-style Programming


  5. SetView is a static, abstract inner class of the Sets class and represents an unmodifiable view of a given Set instance. Getting Started with Google Guava


  6. For example, the Java Collections Framework has thirty-two convenience implementations of its collection interfaces, providing unmodifiable collections, synchronized collections, and the like. Effective Java 2nd Edition


  7. Immutable or unmodifiable collections throw this exception when a modification method, such as add( ) or delete( ), is called. Java in a Nutshell 5th Edition


  8. The headers should be returned in an unmodifiable map with keys that are the HTTP header field names and values that are lists of values for each named HTTP header. Java Network Programming 4th Edition
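
A minimal Java sketch of the "unmodifiable view" idea in the quotes: the wrapper rejects writes, but it still reflects changes made through the underlying collection:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnmodifiableDemo {
    public static void main(String[] args) {
        List<String> backing = new ArrayList<>();
        backing.add("a");

        List<String> view = Collections.unmodifiableList(backing);

        try {
            view.add("b");                 // writes through the view are rejected
        } catch (UnsupportedOperationException e) {
            System.out.println("view is read-only");
        }

        backing.add("b");                  // ...but the view is only a wrapper,
        System.out.println(view);          // so it still sees [a, b]
    }
}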



1306. Matching text for the word "externalizable" (a short code sketch follows the examples):


  1. You can control the process of serialization by implementing the Externalizable interface instead of the Serializable interface. Thinking in Java 4th Edition


  2. Externalizable classes supply custom read and write methods that take over the output of their instance fields. Core Java Volume II Advanced Features 9th Edition


  3. An Externalizable class defines a writeExternal() method responsible for writing the complete state of an object to the serialization stream. Java in a Nutshell 5th Edition


  4. The Externalizable interface also provides support for serializing objects using ObjectInputStream and ObjectOutputStream. Java Performance Tuning 2nd Edition


  5. As such, it requires implementing the Externalizable interface in lieu of the Serializable interface. Java 7 Recipes A Problem-Solution Approach
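
A minimal sketch of an Externalizable class as the quotes describe it: the class writes and reads its own state via writeExternal/readExternal, and it must keep a public no-arg constructor. The class name and fields are invented.

import java.io.*;

public class ExternalizableDemo implements Externalizable {
    private String name;
    private int count;

    public ExternalizableDemo() { }                                   // required for deserialization
    public ExternalizableDemo(String name, int count) { this.name = name; this.count = count; }

    @Override public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(name);                                            // we decide exactly what gets written...
        out.writeInt(count);
    }

    @Override public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        name = in.readUTF();                                           // ...and read it back in the same order
        count = in.readInt();
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(buf)) {
            oos.writeObject(new ExternalizableDemo("demo", 3));
        }
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
            ExternalizableDemo copy = (ExternalizableDemo) ois.readObject();
            System.out.println(copy.name + " / " + copy.count);
        }
    }
}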



1307. Matching text for the word "interoperable":


  1. A core part of sFlow is the specification of standard sets of metrics, allowing each switch vendor to embed the measurements in hardware and produce interoperable results. Monitoring with Ganglia


  2. In a world of heterogeneous networks, this is an important quality, as it makes networks interoperable. Java Network Programming and Distributed Computing


  3. A more interoperable solution is to convert data to XML format, which allows it to be consumed by a large variety of platforms and languages. Thinking in Java 4th Edition


  4. The second ingredient for building interoperable data management components is a shared storage format. Dremel - Interactive Analysis of WebScale Datasets


  5. Grid computing has mostly focused on the problem of making diverse virtual organizations share geographically distributed and separately administered resources in a secure and interoperable way. Mesos A Platform for Fine-Grained Resource Sharing in the Data Center


  6. Interoperable For some systems, it is desirable to be able to support clients that are written in different languages to the server, so the format needs to be designed to make this possible. Hadoop:The Definitive Guide 3rd Edition


  7. This makes REST ideal for communicating between heterogeneous systems: the protocol layer takes care of transporting the data in an interoperable format. HBase:The Definitive Guide


  8. It was deemed critical that all of this code remain legal and interoperable with new code that does use generics. Effective Java 2nd Edition


  9. Note that CharSequence implementations do not typically have interoperable equals( ) or hashCode( ) methods, and it is not usually possible to compare two CharSequence objects or use multiple sequences in a set or hashtable unless they are instances of the same implementing class. Java in a Nutshell 5th Edition


  10. If you are already experienced with the Apache Hadoop software stack and are adding Impala as another arrow in your quiver, you will find it interoperable on several levels. Cloudera Impala



1308. Matching text for the word "lowercasing":


  1. For example, if you are using stemming and lowercasing during indexing, you need to be sure that the term in the query are also lowercased and stemmed, or your queries will return no results at all. Mastering ElasticSearch


  2. One interesting thing to notice about the analysis results is that Quoting ultimately became quot after stemming and lowercasing. Solr 3 Enterprise Search Server


  3. The second filter defined is the lowercase filter; you can see how it works by referring to the Lowercasing the whole string recipe in this chapter. Solr 3.1 Cookbook


  4. Token filters Last, each term is passed through any token filters in turn, which can change terms (eg lowercasing "Quick"), remove terms (eg stopwords like "a", "and","the" etc) or add terms (eg synonyms like "jump" and "leap") Elasticsearch The Definitive Guide


  5. Transformation – Changing the token to a different form such as lowercasing all letters or stemming. Solr in action


  6. An analyzer tokenizes text by performing any number of operations on it, which could include extracting words, discarding punctuation, removing accents from characters, lowercasing (also called normalizing), removing common words, reducing words to a root form (stemming), or changing words into the basic form (lemmatization) Lucene in Action 2nd Edition


  7. Lowercasing the whole string Imagine you have a system where you only want to have perfect matches for names of the documents. Apache Solr 4 Cookbook


  8. Specifically, the bean was given an ID of sgtPeppers by lowercasing the first letter of the class name. Spring in Action 4th Edition



1309. Matching text for the word "tabbed":


  1. The JTabbedPane allows you to create a "tabbed dialog," which has filefolder tabs running across one edge. Thinking in Java 4th Edition


  2. The report is divided into a three-tabbed table summed and sorted by files, folders, and languages. Jenkins Continuous Integration Cookbook


  3. Once you connect to the process, you will see a tabbed window with various details in it. HBase:The Definitive Guide


  4. We will finish the chapter by covering component organizers, such as tabbed panes and desktop panes with internal frames. Core Java Volume II Advanced Features 9th Edition


  5. Its tabbed and well integrated user interface allows for rapid browsing and experimentation. Lucene in Action 2nd Edition


  6. Each menu option will add an instance of a TabPlacementAction that is responsible for calling the tabbed pane’s setTabPlacement() method. Java 7 Recipes A Problem-Solution Approach



1310. Matching text for the word "inode":


  1. In the case of the nodiratime attribute, mounting disks does not update the directory inode access times on the filesystem. HBase Administration Cookbook


  2. When a file has two names of equal weight and the inode table (Linux files don’t actually live in directories; they are assigned an inode number, which Linux uses to locate them) points directly to the blocks on the disk that contain the data, the link is a hard link. Pro Java 7 NIO.2


  3. You can use partitioning to avoid some bottlenecks in specific workloads, such as per-index mutexes with InnoDB or per-inode locking with the ext3 filesystem. High Performance MySQL 3rd Edition


  4. Noatime does not update inode information each time it is read. Cassandra High Performance Cookbook


  5. Continued reliance on traditional file system principles like allocation lists and inode tables and a reluctance to delegate intelligence to the OSDs have further limited scalability and performance, and increased the cost of reliability. Ceph A Scalable, High-Performance Distributed File System


  6. The metadata includes things like filename, inode number, block location, and so on; the data is the actual content of the file. HBase in Action


  7. Each inode is an internal representation of a file or directory’s metadata and contains such information as the file’s replication level, modification and access times, access permissions, block size, and the blocks a file is made up of. Hadoop:The Definitive Guide 3rd Edition


  8. The inode data and the list of blocks belonging to each file comprise the metadata of the name system called the image. The Hadoop Distributed File System


  9. In UNIX, the device id or inode is used for this purpose. Java 7 New Features Cookbook


  10. Similar to ext4, allocation is extent-based, but its extents are within allocation groups, each of which is responsible for maintaining its own inode table and space. Hadoop Operations



1311. Matching text for the word "concat" (a short code sketch follows the examples):


  1. Another good optimization is to manually generate the IN() list by executing the subquery as a separate query with GROUP_CONCAT() High Performance MySQL 3rd Edition


  2. You’ve assumed that the missing methods insertAll and concat are themselves functional and deduced that your function subsets is also, because no operation in it mutates any existing structure. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. The following are a few additional notes to help with the concat_ws() function: Hadoop RealWorld Solutions Cookbook


  4. The lists to be concatenated are supplied as direct arguments to concat: Programming in Scala 2nd Edition


  5. Calling concatUpper with an underscore ( _ ) turns the method into a function value. Programming Scala


  6. Use the concat function for the latter in both MySQL and Hive. Hadoop:The Definitive Guide 3rd Edition


  7. GROUP_CONCAT( ) that concatenates all the values in the group into one. SQL Antipatterns


  8. Other examples include string manipulation functions, like ucase(), which converts the string to upper case; reverse(), which reverses a string; and concat(), which joins multiple input strings into one output string. Programming Hive


  9. If you want to concatenate Strings onto the end of each other, use the concat() method. Java 7 Recipes A Problem-Solution Approach


  10. You can concatenate two streams with the static concat method of the Stream class: Java SE 8 for the Really Impatient
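
The quotes use "concat" in several senses; here is a minimal Java sketch of two of them, String.concat and the static Stream.concat mentioned in the last two quotes:

import java.util.stream.Stream;

public class ConcatDemo {
    public static void main(String[] args) {
        // String concatenation via the concat() method (equivalent to the + operator here).
        String s = "Hello, ".concat("world");
        System.out.println(s);

        // Stream.concat joins two streams end to end.
        Stream<String> first = Stream.of("a", "b");
        Stream<String> second = Stream.of("c");
        Stream.concat(first, second).forEach(System.out::println);
    }
}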



1312. Matching text for the word "underflow" (a short code sketch follows the examples):


  1. In particular, the Java Virtual Machine requires full support of IEEE 754 denormalized floating-point numbers and gradual underflow, which make it easier to prove desirable properties of particular numerical algorithms. The Java Virtual Machine Specification Java SE 7 Edition


  2. This example demonstrates, among other things, that gradual underflow can result in a gradual loss of precision. The Java Language Specification Java SE 7 Edition


  3. Integer arithmetic in Java is modular, which means that it never produces an overflow or an underflow when you exceed the range of a given integer type. Java in a Nutshell 5th Edition


  4. You have to be careful about data overflow or underflow when doing arithmetic. Programming Hive
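
A minimal Java sketch of the integer wrap-around behaviour described in quote 3: arithmetic is modular, so exceeding the range silently underflows or overflows rather than failing, unless the Java 8 exact-arithmetic helpers are used:

public class UnderflowDemo {
    public static void main(String[] args) {
        int min = Integer.MIN_VALUE;
        int max = Integer.MAX_VALUE;

        System.out.println(min - 1);   // underflows, wraps around to 2147483647
        System.out.println(max + 1);   // overflows, wraps around to -2147483648

        // Java 8: Math.*Exact methods throw ArithmeticException instead of wrapping.
        try {
            Math.subtractExact(min, 1);
        } catch (ArithmeticException e) {
            System.out.println("detected: " + e.getMessage());
        }
    }
}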



1313. Matching text for the word "joda" (a short code sketch follows the examples):


  1. A good example of a well-written Java library is Joda-Time (http://jodatime.sourceforge.net) Scala in Action


  2. As shown in Listing 14-1, for the birth date attribute, we use JodaTime’s DateTime class. Pro Spring 3


  3. The consequence is that all these flaws and inconsistencies have encouraged the use of third-party date and time libraries, such as Joda-Time. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Still, some popular, high-quality libraries, such as Joda-Time and Google’s collections library, provide excellent support for programming with immutable data. Functional Programming Patterns in Scala and Clojure


  5. This class uses the Joda Time project (http://joda-time.sourceforge.net) to represent dates. Spring Integration in Action


  6. During scaffolding, Roo takes this information and uses it to translate the date into a localized String using the Joda-Time library. Spring Roo in Action


  7. The @DateTimeFormat annotation no longer has a hard dependency on JodaTime. Spring in Action 4th Edition
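
A minimal sketch of basic Joda-Time usage as referenced in the quotes, assuming the joda-time library is on the classpath; the number of days and the pattern string are arbitrary:

import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

public class JodaDemo {
    public static void main(String[] args) {
        DateTime now = new DateTime();          // an immutable instant plus calendar fields
        DateTime nextWeek = now.plusDays(7);    // "mutators" return new objects instead of changing this one

        DateTimeFormatter fmt = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm");
        System.out.println(fmt.print(now));
        System.out.println(fmt.print(nextWeek));
    }
}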



1314. Matching text for the word "maintainers":


  1. He is currently one of the maintainers of the Ganglia project. Monitoring with Ganglia


  2. We’ve been working closely with the Netty maintainers to improve on parts of Netty that both Finagle and the wider community can benefit from60 Netty in Action MEAP V08


  3. Imagine all the angry maintainers of alternative collection frameworks such as Guava and Apache Commons who now need to modify all the classes implementing the List interface to provide an implementation for the sort method too! Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Equally important is the fact that this construct shows future maintainers of your code that you know what you are doing and are comfortable with idiomatic JavaScript. Testable JavaScript


  5. With this book, written by Tika’s two main creators and maintainers, Chris and Jukka, you’ll understand the problems of document analysis and document information extraction. Tika in Action


  6. Chief among these are the maintainers of javac, the reference compiler for the Java programming language. The Java Language Specification Java SE 7 Edition


  7. Imagine also that the maintainers of StringUtil decide to return a List instead of the default: Programming Scala


  8. Software maintainers must give priority to maintaining and improving their most popular product features. Chubby - The lock service for loosely-coupled distributed systems


  9. The only major downside is that package maintainers may not always keep up with the latest MongoDB releases. MongoDB in Action


  10. Redis evolves very quickly and package maintainers have a hard time keeping up with the latest developments. Redis Cookbook



1315. Matching text for the word "datasource":


  1. Several subpackages provide support for embedded databases, database initialization, and various datasource lookup mechanisms. Pro Spring 3


  2. We are going to use profiles extensively when we cover testing and deploying to the cloud because the two tasks require different configurations for the datasource. Pro Spring MVC


  3. Notice that the DAO no longer has a hardwired dependency on BasicDatasource. Spring in Practice


  4. You can also see here that in the tag you are also allowed to specify SQL script locations you want to execute when the datasource is started up. Pro Spring Security


  5. If the class name is specified as ataSource, an XA datasource must be provided. Java EE 7 Specification


  6. The same scenario applies to the dataaccess and then the datasource bundles; all the application contexts are properly destroyed without breaking service reference integrity! Spring Dynamic Modules in Action


  7. The journal can be used with any JDBC datasource, but it’s important to know when it should and shouldn’t be used. ActiveMQ in Action



1316. Matching text for the word "mozilla":


  1. By convention, browser applications begin their identification string with the keyword Mozilla, which was the identification string of the Netscape browser. Java Network Programming and Distributed Computing


  2. Songbird is a free software media player with an integrated web browser, based on the Mozilla XULRunner platform. CouchDB The Definitive Guide


  3. All tests are configured to work exclusively against Mozilla Firefox. Gradle in Action


  4. Both Mozilla and Google provide browser-based graphical interfaces for REST. api-quick-start-onepager


  5. CouchDB now featured a JavaScript view engine based on Mozilla Spidermonkey and an attractive web-based. Beginning CouchDB


  6. The Mozilla Foundation is responsible for the Firefox web browser and Thunderbird email client. HBase in Action


  7. Mozilla, for instance, uses the Neo4j graph database as part of its next-generation cloud browser, Pancake. Graph Databases


  8. These reports are subsequently read and analyzed by the Mozilla development team to make their software more reliable on the vast number of machines and configurations on which it is used. HBase:The Definitive Guide


  9. You can find a good example for the use of entities in the user interface descriptions for the Mozilla/Netscape 6 browser. Core Java Volume II Advanced Features 9th Edition


  10. The W3C has had considerably less success in convincing vendors like Mozilla and Microsoft to fully and consistently implement its standards. Java Network Programming 4th Edition



1317. Matching text for the word "bytearray":


  1. The bytes are returned in network byte order, with the highest byte as bytearray[0] Java Network Programming and Distributed Computing


  2. It orders numeric data numerically, Strings (chararray) lexicographically, and bytearray as binary data. Hadoop in Practice


  3. The types default to bytearray, the most general type, representing a binary string. Hadoop:The Definitive Guide 3rd Edition


  4. By default, they are returned as bytearray, matching the way they are stored in the HBase table. HBase:The Definitive Guide


  5. In defining a schema, if you leave out the type, Pig will default to bytearray as the most generic type. Hadoop in Action


  6. With the exception of bytearray, they are all represented in Pig interfaces by java.lang classes, making them easy to work with in UDFs: int. Programming Pig



1318、单词 tsdb 的匹配文本:


  1. OpenTSDB is an extremely scalable Time Series Database (TSDB) built on top of HBase. HBase Administration Cookbook


  2. OpenTSDB is a distributed, scalable Time Series Database (TSDB) written on top of HBase. HBase in Action



1319、单词 usages 的匹配文本:


  1. One of the example usages of this functionality is dividing the cluster topology between virtual machines or physical locations, to be sure that you don't have a single point of failure. Mastering ElasticSearch


  2. Fortunately, as you’ll see, when Java 8 refers to functions these usages very nearly coincide. Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. In one of the most common usages of Sonar, Sonar automatically runs a set of Maven code quality related plugins against your Maven project, and stores the results into a relational database. Jenkins The Definitive Guide


  4. Important future direction: Our workloads are increasingly seek heavy Best usages are still being explored. Storage Architecture and Challenges


  5. To ensure correct operation, web components should obey the following guidelines, and the web container must support at least these usages. Java EE 7 Specification


  6. These areas of Hadoop haven’t received much coverage, and that’s what attracted me to the potential of this book—that of going beyond the fundamental word-count Hadoop usages and covering some of the more tricky and dirty aspects of Hadoop. Hadoop in Practice


  7. Few Kafka usages Some of the companies that are using Apache Kafka in their respective use cases are as follows: Apache Kafka


  8. Class redefinition is beyond the scope of this chapter, but the following code uses the new instrumentation and management features of Java 5.0 to determine resource usages of a Java program. Java in a Nutshell 5th Edition


  9. They provide a variety of useful information including lock contentions, usages of the various pools, I/O activity, as well as a quick feel about the load on the system. Java Performance


  10. They present highlevel design principles and strategies, technical details surrounding constructs, utilities that encapsulate common usages, and associated design patterns that address particular concurrency problems. Concurrent Programming in Java Design Principles and Pattern



1320、单词 apk 的匹配文本:


  1. You will either have to find the .apk file for a particular app, yourself, or add other marketplaces, such as SlideME (http://m.slideme.org) Jenkins Continuous Integration Cookbook


  2. First, you need to build the driver application bundle (.apk file) and install it on your device or in the simulator. Testable JavaScript


  3. The AndroidDriver requires the Android server APK to be installed on the emulator or device. Selenium Testing Tools Cookbook


  4. This will compile and build a deployable Android .apk file for your application. Java 7 Recipes A Problem-Solution Approach


  5. The emulator will then get the Selenium WebDriver APK installed and then that will control the browser on the device. Selenium 2 Testing Tools Beginner's Guide



1321、单词 oreillymedia 的匹配文本:


  1. Find us on Facebook: http://facebook.com/oreilly Follow us on Twitter: http://twitter.com/oreillymedia Watch us on YouTube: http://www.youtube.com/oreillymedia. Getting Started with CouchDB



1322、单词 schemaless 的匹配文本:


  1. They embrace schemaless data, run on clusters, and have the ability to trade off traditional consistency for other useful properties. NoSQL Distilled


  2. Most applications that need schemaless data really need it for only a. SQL Antipatterns


  3. Instead, it is a key-value, schemaless solution that takes a column-oriented view of data; columns can be added at run-time and depend on the values inserted into HBase. Hadoop Beginners Guide


  4. It might seem odd to refer to schema design when MongoDB is considered a schemaless database. The Definitive Guide to MongoDB


  5. Second, and more significantly, a schemaless model allows you to represent data with truly variable properties. MongoDB in Action


  6. With the schemaless nature of the graph database, the order will be stored in a node but only contains the state (and relationships) that are needed in the current state (and those still needed from past states) Spring Data Modern Data Access for Enterprise Java


  7. Or you might consider one of the new breed of NoSQL databases (or schemaless databases, as I prefer to call them) that are popular these days. Spring in Action 4th Edition



1323、单词 hmaster 的匹配文本:


  1. A fully distributed HBase cluster has one or more master nodes (HMaster), which coordinate the entire cluster, and many slave nodes (RegionServer), which handle the actual data storage and request. HBase Administration Cookbook


  2. Similar to HDFS, HBase implements master/slave (HMaster/region server) architecture, as shown in Figure 2-7 Professional Hadoop Solutions


  3. This will return an RPC proxy instance of HMasterInterface, allowing you to communicate directly with the master server. HBase:The Definitive Guide


  4. In particular, RegionServer metrics are far more useful for evaluating the health of the cluster than HMaster or ZooKeeper metrics. Apache Hadoop Goes Realtime at Facebook


  5. At a very high level, the master server (HMaster) is responsible for keeping track of the region servers (HRegionServer) available and assigning regions to servers. ZooKeeper Distributed Process Coordination
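
The examples describe how clients talk to the HBase master (HMaster). A minimal sketch, assuming the HBase 1.x client API and an hbase-site.xml on the classpath pointing at a reachable cluster:

    import org.apache.hadoop.hbase.ClusterStatus;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MasterInfo {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // The Admin API talks to the active master behind the scenes.
                ClusterStatus status = admin.getClusterStatus();
                System.out.println("Active HMaster: " + status.getMaster());
                System.out.println("Region servers: " + status.getServers().size());
            }
        }
    }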



1324、单词 infoset 的匹配文本:


  1. Both DOM and JAXB also support partial processing of the documents: DOM through the use of DOM Level 3 load and save APIs, and JAXB by accepting the XML infoset in different forms (DOM, SAX, StreamSource) Java Performance



1325、单词 importtsv 的匹配文本:


  1. We will then describe how to use the importtsv and bulk load tools to load TSV data files into HBase. HBase Administration Cookbook


  2. Loading large datasets to an Apache HBase data store using importtsv and bulkload tools. Hadoop MapReduce Cookbook


  3. HBase ships with a command-line tool called importtsv which, when given files containing data in tab-separated value (TSV) format, can prepare this data for bulk import into HBase. HBase:The Definitive Guide



1326、单词 statsd 的匹配文本:


  1. One of the solutions for dealing with counter values is statsd. Monitoring with Ganglia


  2. I am not sure what the Etsy server does but our StatsD server turns timing metrics into five derivative metrics with new segments appended, so it probably works as coded. bk-admin-guide-cloud-latest



1327、单词 ebr 的匹配文本:


  1. However, if you plan to deploy your application in an OSGi container (e.g., Spring dynamic modules), then use Spring EBR. Pro Spring 3


  2. There’s not a lot to say here, but one thing to point out is that because you’re using SpringSource’s Enterprise Bundle Repository (EBR), you need to help the Failsafe plug-in deal with the fact that the EBR renames the JUnit artifact (or TestNG if you’re using that) Spring in Practice


  3. You can download it from the SpringSource Enterprise Bundle Repository (EBR) by following these steps: Spring Dynamic Modules in Action



1328、单词 rtti 的匹配文本:


  1. When the actor receives a message, we check whether the message is an instance of one of the message types we expect using runtime type identification’s (RTTI’s) instanceof. Programming Concurrency on the JVM


  2. Type Information Runtime type information (RTTI) allows you to discover and use type information while a program is running. Thinking in Java 4th Edition
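
Both quotes refer to runtime type identification via instanceof and class objects; a tiny Java example:

    public class RttiDemo {
        public static void main(String[] args) {
            Object message = "hello";
            // RTTI: inspect the actual runtime type before downcasting.
            if (message instanceof String) {
                String s = (String) message;          // safe after the instanceof check
                System.out.println("String of length " + s.length());
            }
            System.out.println(message.getClass().getName()); // java.lang.String
        }
    }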



1329、单词 monadic 的匹配文本:


  1. Option is a monad because it has both a flatten and flatMap operation that abide by the monadic laws. Scala in Depth


  2. Now that we have our primitive combinators for monads, we can look back at previous chapters and see if there were some other combinators that we implemented for each of our monadic data types. Functional Programming in Scala


  3. In fact, the sequence comprehensions we went over are both examples of a technique popularized by the highly functional language Haskell that is known as monadic transformations, which rely on a concept from category theory known as monads. Functional Programming Patterns in Scala and Clojure



1330、单词 netscape 的匹配文本:


  1. Various versions of Netscape and Internet Explorer ran different versions of Java, some of which were seriously outdated. Core Java Volume I Fundamentals 9th Edition


  2. Both Netscape and Microsoft licensed this technology for use in their respective Web browsers, and the language became a runaway success. Java Network Programming and Distributed Computing


  3. One of the first uses of cookies on the web was pioneered by Netscape way back in the mid ’90s, and ultimately resulted in the login session cookies we just talked about. Redis in Action


  4. Some browsers (e.g., Netscape 6) allow you to accept the public key certificate when you first run the signed program, but usually you must install the public key certificate before. Java Security 2nd Edition


  5. You can find a good example for the use of entities in the user interface descriptions for the Mozilla/Netscape 6 browser. Core Java Volume II Advanced Features 9th Edition


  6. A Netscape engineer who shan’t be named once passed a. SQL Antipatterns


  7. The refusal of Netscape and other key vendors to participate or even acknowledge the process was a crucial problem. Java Network Programming 4th Edition



1331、单词 olap 的匹配文本:


  1. Many applications have a mixture of online transaction processing (OLTP) and online analytical processing (OLAP) queries. High Performance MySQL 3rd Edition


  2. Online analytical processing (OLAP) systems are used to answer analytical queries about the data stored in them. HBase in Action


  3. They can be thought of as being in the same category as other technologies for analysis of data in bulk, such as data mining and online analytical processing (OLAP) Graph Databases


  4. If you want to analyze this data the traditional mechanism for doing so would be to periodically copy that data into a OLAP data warehouse. Hadoop in Practice


  5. On the integration side, JDBC and ODBC drivers are being developed in order to integrate with OLAP and BI tools. Hadoop:The Definitive Guide 3rd Edition


  6. Columnar (or “columnoriented”) databases are well-suited to online analytical processing (OLAP) work, where queries are executed over a broad dataset. Cassandra The Definitive Guide


  7. These conditions are often satisfied by OLAP queries with star joins to large dimensions, a type of query often used with Tenzing. Tenzing - A SQL Implementation On The MapReduce Framework


  8. This intentional denormalization of data is very common in OLAP systems in general, and in large data-processing systems such as Hadoop in particular. Programming Pig


  9. An OLAP or OLTP, i.e., SQL, interface is provided to then mine the data interactively. PowerDrill - Processing a Trillion Cells per Mouse Click


  10. It’s closer to being an OLAP tool, Online Analytic Processing, but as we’ll see, Hive isn’t ideal for satisfying the “online” part of OLAP, at least today, since there can be significant latency between issuing a query and receiving a reply, both due to the overhead of Hadoop and due to the size of the data sets Hadoop was designed to serve. Programming Hive



1332、单词 cyclomatic 的匹配文本:


  1. A report produced by Cobertura contains the percentage of line and branch coverage, as well as the cyclomatic complexity for each package. Gradle in Action


  2. Measures such as JSLint, cyclomatic complexity, lines of code, and fan-in and fan-out are important. Testable JavaScript


  3. Additionally, it calculates cyclomatic complexity for each class and the average cyclomatic complexity for each package. Sonar Code Quality Testing Essentials



1333、单词 iteratively 的匹配文本:


  1. By using oversharding and iteratively replacing full shards (database nodes that host only one partition) with proxy nodes that point to another set of oversharded partitions, we can grow the cluster to very large sizes while incurring a minimum of latency. CouchDB The Definitive Guide


  2. In the spirit of agile development, you’ll iteratively build the functionality in digestible pieces. Gradle in Action


  3. Then, more implementation details are added iteratively as the discussion (and solution implementation) progresses. Professional Hadoop Solutions


  4. You’ll start with a simple schema design and iteratively improve it, and we’ll introduce important concepts along the way. HBase in Action


  5. Since then, we have iteratively built up the collection infrastructure, programming interfaces, and an interactive web-based user interface to help Dapper users solve their problems independently. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  6. You need to iterate over the development of a MapReduce job, and have a large dataset that you want to work with to iteratively test with. Hadoop in Practice


  7. They can start with a small application that just filters a logfile, but then can iteratively build up more features into the application as needed. Hadoop:The Definitive Guide 3rd Edition


  8. It iteratively moves replicas from DataNodes with higher utilization to DataNodes with lower utilization. The Hadoop Distributed File System


  9. Then it starts the process of moving the regions by calling the unassign() method of the administrative API iteratively. HBase:The Definitive Guide


  10. Similar to the maximum function, you can globally compute these functions by iteratively applying them to smaller chunks of data. Hadoop in Action



1334、单词 spock 的匹配文本:


  1. By the end of this chapter, you’ll write tests with JUnit, TestNG, and Spock and execute them as part of the build lifecycle. Gradle in Action


  2. Now start up two more mongod servers with the replSet option and the same identifier (spock): these will be the other members of the set: MongoDB The Definitive Guide 2nd Edition


  3. This format is also used by many other Java testing tools, such as TestNG, Spock and Easyb. Jenkins The Definitive Guide


  4. The second is a full automation of the testing pipeline, including separating integration tests from unit test and the leveraging of more advanced testing frameworks like Spock and Geb. Building and Testing with Gradle



1335、单词 textbox 的匹配文本:


  1. Handling checkboxes Capturing checkbox submission is similar to that of capturing textbox submissions, which we encountered earlier in Chapter 2, Recording Your First Test. Performance Testing With JMeter 2.9


  2. Locate ROW-READ-STAGE and select Attributes, and change the CorePoolSize by entering in the textbox on the right. Cassandra High Performance Cookbook


  3. Enter a search term in the q textbox and click on Execute Query. Hadoop MapReduce Cookbook


  4. For example, a disabled textbox gets enabled based on the user's rights. Selenium Testing Tools Cookbook


  5. In the Run category, the Arguments textbox allows you to set the command line arguments, as shown in the following screenshot: Java 7 New Features Cookbook


  6. The Target textbox allows you to input the location of the element that you want to work against. Selenium 2 Testing Tools Beginner's Guide



1336、单词 stemmer 的匹配文本:


  1. For example, a stemmer might stem the word catfish to cat or fish. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  2. There are some "ICU" Unicode classes for multilingual support, a Chinese stemmer, and a Polish stemmer. Solr 3 Enterprise Search Server


  3. The file that describes the index for English documents uses an English stemmer for text data, and the file that describes the index for German documents uses a German stemmer for text data. Solr 3.1 Cookbook


  4. KStemFilterFactory Stemming on English text; less aggressive than the Porter stemmer. Solr in action


  5. The Porter stemming algorithm (or ‘Porter stemmer’) is a process for removing the commoner morphological and inflexional endings from words in English. Lucene in Action 2nd Edition


  6. Using Hunspell as a stemmer Solr supports numerous stemmers for various languages. Apache Solr 4 Cookbook
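
As a rough illustration of stemming during analysis, a sketch using Lucene's EnglishAnalyzer (assuming a recent Lucene release with lucene-analyzers-common on the classpath; the field name is arbitrary):

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.en.EnglishAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class StemDemo {
        public static void main(String[] args) throws Exception {
            try (Analyzer analyzer = new EnglishAnalyzer();
                 TokenStream ts = analyzer.tokenStream("text", "running runners ran")) {
                CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
                ts.reset();
                while (ts.incrementToken()) {
                    System.out.println(term.toString()); // prints each stemmed token
                }
                ts.end();
            }
        }
    }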



1337、单词 editable 的匹配文本:


  1. If the drop-down list box is set to be editable, then you can edit the current selection as if it were a text field. Core Java Volume I Fundamentals 9th Edition


  2. This field is set to non-editable because it’s used only to display data, not to collect it. Thinking in Java 4th Edition


  3. For example, we might create a checksum over the not editable fields (like id) in the controller (or an interceptor), and then review this checksum to see whether any fields have been tampered with. Pro Spring MVC


  4. Managing quality profiles Sonar comes with three predefined profiles—they are not editable but you can use them as a basis for a new custom one. Sonar Code Quality Testing Essentials


  5. The bare-bones sample contact-management app is a master list of contacts with editable details pages corresponding to individual contacts. Spring in Practice


  6. In this case you want to display an editable TextField that contains the data the user has entered so far. Programming in Scala 2nd Edition


  7. The editable configuration is not saved to disk! It is purely for live trialand-error debugging. Solr 3 Enterprise Search Server


  8. Changing the field name is very simple—simply double-click the name, and it will show up as an editable text box once again. Beginning CouchDB


  9. To enable cell editing, the table model must indicate which cells are editable by defining the isCellEditable method. Core Java Volume II Advanced Features 9th Edition


  10. Problem You need to create some settings for your program that needs to be editable either with text or programmatically. Java 7 Recipes A Problem-Solution Approach
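
Several of these quotes are about toggling whether a UI component accepts input. A minimal Swing example in Java:

    import javax.swing.JTextField;

    public class EditableDemo {
        public static void main(String[] args) {
            JTextField field = new JTextField("display only");
            field.setEditable(false);  // still renders its text, but rejects user edits
            System.out.println("editable? " + field.isEditable()); // false
        }
    }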



1338、单词 passphrase 的匹配文本:


  1. The passphrase for the ssh keyfile is a secret and we don't want to keep it in a file somewhere, so we ask the user for it. Gradle Effective Implementation Guide


  2. If a machine is compromised, a key with no passphrase would allow a user to spring to another server. Cassandra High Performance Cookbook


  3. This requires that both the sender and receiver of the data have a shared passphrase that they have kept secret, though that''s often easier than sharing a secret key. Java Security 2nd Edition


  4. Check that you can login to the localhost and all other nodes using SSH without a passphrase by running one of the following commands: Hadoop MapReduce Cookbook


  5. Even though we want password-less logins, keys without passphrases are not considered good practice (it’s OK to have an empty passphrase when running a local pseudodistributed cluster, as described in Appendix A), so we specify a passphrase when prompted for one. Hadoop:The Definitive Guide 3rd Edition


  6. To avoid the hassle of being asked for the passphrase for every single command sent to a remote server, it is recommended that you use ssh-agent, a helper that comes with SSH. HBase:The Definitive Guide


  7. Be certain to avoid entering a passphrase, or you’ll have to manually enter that phrase every time the master node attempts to access another node. Hadoop in Action


  8. We do this by creating a Secure Shell (SSH) key pair that has an empty passphrase. Hadoop Beginners Guide


  9. The private key must not require a passphrase, although it should be protected by filesystem permissions such that only the HDFS super user can access it (usually this is user hdfs) Hadoop Operations


  10. Fill the KeyStore object with keys and certificates; for instance, by loading them from the filesystem using the passphrase they’re encrypted with. Java Network Programming 4th Edition



1339、单词 gregorian 的匹配文本:


  1. This particular description follows the Gregorian calendar, which is the calendar used in most countries of the world. Core Java Volume I Fundamentals 9th Edition


  2. The Gregorian calendar might not be the local preference for expressing dates. Core Java Volume II Advanced Features 9th Edition


  3. It is simply a wrapper around a long number of milliseconds but can represent that instant in time according to the calendar of the current locale (usually a Gregorian calendar) and the time zone of the current locale. Java in a Nutshell 5th Edition


  4. The scheme Cassandra uses is a version one UUID, which means that it is generated based on conflating the computer’s MAC address and the number of 100-nanosecond intervals since the beginning of the Gregorian calendar. Cassandra The Definitive Guide


  5. ZonedDateTime is a point in time in a given time zone (similar to GregorianCalendar) Java SE 8 for the Really Impatient
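
A small Java example contrasting the legacy GregorianCalendar with its java.time counterpart (the time zone is arbitrary):

    import java.time.ZoneId;
    import java.time.ZonedDateTime;
    import java.util.GregorianCalendar;

    public class CalendarDemo {
        public static void main(String[] args) {
            GregorianCalendar legacy = new GregorianCalendar();   // pre-Java-8 calendar class
            ZonedDateTime modern = ZonedDateTime.now(ZoneId.of("Europe/London"));
            // GregorianCalendar can bridge into the java.time API.
            System.out.println(legacy.toZonedDateTime());
            System.out.println(modern);
        }
    }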



1340、单词 rowkeys 的匹配文本:


  1. Why can’t you store the user’s name directly? All data in HBase is stored as raw data in the form of a byte array, and that includes the rowkeys. HBase in Action
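
A minimal sketch of the point made above, assuming the HBase 1.x+ client API; the table layout ("info" family, "name" qualifier) is purely illustrative:

    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RowkeyDemo {
        public static void main(String[] args) {
            // Rowkeys, families, qualifiers and values are all raw byte arrays.
            byte[] rowkey = Bytes.toBytes("user#42");
            Put put = new Put(rowkey);
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Ada"));
            System.out.println(put);
        }
    }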



1341、单词 sasl 的匹配文本:


  1. What’s the difference between the sasl log and the other one? SASL (System Application Support Libraries) is a set of libraries that are part of the Erlang-OTP distribution. RabbitMQ in Action


  2. If SASL is enabled (see below), this file will NOT be used. Sonar Code Quality Testing Essentials


  3. I’m not going to give a deep explanation now, since we’ll see SASL in action a bit later. ZeroMQ


  4. You have seen how users, Hadoop services, and many projects within the Hadoop ecosystem (for example, Oozie and HBase) authenticate via Kerberos via the SASL/GSSAPI, often referred to as Kerberos RPC. Professional Hadoop Solutions


  5. If you need to implement SASL in your application, look at http://docs.oracle.com/javase/7/docs/technotes/guides/security/sasl/sasl-refguide.html. Core Java Volume II Advanced Features 9th Edition


  6. Next we see a message about SASL, which we will ignore, followed by a confirmation that the client successfully established a TCP connection with the local ZooKeeper server. ZooKeeper Distributed Process Coordination



1342、单词 hypertext 的匹配文本:


  1. The key to the World Wide Web was the browser translating the hypertext page to the screen. Core Java Volume I Fundamentals 9th Edition


  2. Hypertext documents contain hyperlinks, which are links to other hypertext documents and files. Java Network Programming and Distributed Computing


  3. To launch the program, you need a download page containing a hypertext link to the .jnlp file. Thinking in Java 4th Edition


  4. Representational State Transfer (REST) in this sense refers to the fact that data access is available via a series of simple web services that are implemented in Hypertext Transfer Protocol (HTTP) and adhere to the principles of REST. Beginning CouchDB


  5. In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. The anatomy of a large-scale hypertextual Web search engine


  6. This method is often used for testing hypertext links for validity, accessibility, and recent modification. Java EE 7 Essentials


  7. It locates all hypertext references in a web page and prints them. Core Java Volume II Advanced Features 9th Edition


  8. For example, the Hypertext Transfer Protocol (HTTP) defines how web browsers and. Java Network Programming 4th Edition


  9. Hypertext Transfer Protocol (HTTP) was originally designed to transmit hypertext (HTML) pages over the web. ActiveMQ in Action


  10. The user’s browser then sends a hypertext transport protocol (HTTP) request to one of these clusters, and thereafter, the processing of that query is entirely local to that cluster. Web search for a planet the google cluster architecture



1343、单词 tpc 的匹配文本:


  1. We wanted to show you a more complex read/write benchmark (such as TPC-C) over a broader range of concurrencies, but we found it ultimately impossible to do across such a diversity of server versions. High Performance MySQL 3rd Edition


  2. From the viewpoints of both data compression and query execution times, these results are consistent with the previously presented TPC-H results. RCFile A Fast and Space-efficient Data Placement Structure in MapReduce-based Warehouse Systems


  3. To give a deep understanding of the Tajo architecture and behavior during query processing, the demonstration will allow users to submit TPC-H queries to 32 Tajo cluster nodes. Tajo A Distributed Data Warehouse System on Large Clusters


  4. In many ways, the index server’s memory system behavior resembles the behavior reported for the Transaction Processing Performance Council’s benchmark D (TPC-D).6 Web search for a planet the google cluster architecture


  5. We’ll also evaluate Percolator with microbenchmarks and a synthetic workload based on the well-known TPC-E benchmark [1]; this test will give us a chance to evaluate the scalability and efficiency of Percolator relative to Bigtable and DBMSs. Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications



1344、单词 jasmine 的匹配文本:


  1. Testing frameworks like Jasmine (http://pivotal.github.io/jasmine/) let you write test code in JavaScript. Gradle in Action


  2. However, two popular, behavior-driven development (BDD) testing frameworks are also available for server-side testing: Vows and Jasmine. Testable JavaScript



1345、单词 successively 的匹配文本:


  1. You can see that the methods provide successively more information—each is effectively a superset of the previous one. Thinking in Java 4th Edition


  2. This query shows that increasing the prefix length results in successively smaller improvements as it approaches seven characters. High Performance MySQL 3rd Edition


  3. It’s possible to successively put a number and a string inside the same variable. Scala in Action


  4. In summary, this chapter introduces ideas that are successively elaborated in the rest of the book. Java 8 in Action Lambdas, Streams and Functional-style Programming


  5. One common example of this is if you want several build jobs to all work successively in the same directory. Jenkins The Definitive Guide


  6. Shrinking: After we have found a failing test case, we can run a separate procedure to minimize the test case, by successively decreasing its "size" until it no longer fails. Functional Programming in Scala


  7. The following code works by determining the magnitude of the number first, then successively stripping off the highest digit, as shown. Java Performance Tuning 2nd Edition


  8. TokenFilters then take over the remainder of the analysis, initially wrapping a Tokenizer and successively wrapping nested TokenFilters. Lucene in Action 2nd Edition


  9. This means that, for each collection you want to back up, you can call mongodump successively to add a given collection to your backup, as shown in the following example: The Definitive Guide to MongoDB


  10. Successively, the largest chunk is split into two (ideally evenly balanced) chunks. PowerDrill - Processing a Trillion Cells per Mouse Click



1346、单词 deliverable 的匹配文本:


  1. Although in many projects it won’t make a difference if you leave the test code in the deliverable (especially if you make all the test methods private, which you can do if you like), in some cases you will want to strip out the test code either to keep the deliverable small or so that it is not exposed to the client. Thinking in Java 4th Edition


  2. Both teams sit together at Tom’s computer to run the compilation for the whole application, build a deliverable, and deploy it to a web server in a test environment. Gradle in Action


  3. An artifact might be a binary executable (a JAR or WAR file for a Java project, for example), or some other related deliverable, such as documentation or source code. Jenkins The Definitive Guide


  4. This package can combine bundles with arbitrary resources into a single deliverable that can be installed and uninstalled. Spring Dynamic Modules in Action


  5. Roo uses Maven’s pom project type as an aggregator; it allows you to add subprojects below but doesn’t actually create a new deliverable artifact. Spring Roo in Action



1347、单词 nonempty 的匹配文本:


  1. Caution Trying to copy a nonempty directory will result in an empty directory. Pro Java 7 NIO.2


  2. For an antijoin, if at least one of the lists is empty, output the records from the nonempty list with an empty Text object. MapReduce Design Patterns


  3. The fact that subtree Y in Figure 4.34 has had an item inserted into it guarantees that it is nonempty. Data Structures and Algorithm Analysis in Java 3rd Edition


  4. The test, therefore, asserts that the query returns a nonempty result containing a distance value of 4 Graph Databases


  5. Putting this all together for this case, our function will take as arguments the value to return in the case of the empty list, and the function to add an element to the result in the case of a nonempty list:12 Functional Programming in Scala


  6. It’s because the seq of an empty collection is nil, which evaluates to false, while the seq of any other collection yields a nonempty sequence. Functional Programming Patterns in Scala and Clojure


  7. A nonempty directory is deleted, along with its contents, only if recursive is true (otherwise, an IOException is thrown) Hadoop:The Definitive Guide 3rd Edition


  8. The clone method on HashTable allocates a new buckets array of the proper size and iterates over the original buckets array, deep-copying each nonempty bucket. Effective Java 2nd Edition


  9. Again, if you have followed the factory design pattern, it is relatively simple to replace the return new Something( ) statement with an access to the collection of spare objects (presumably testing for a nonempty collection as well) Java Performance Tuning 2nd Edition


  10. NOTE: There is no convenient method for removing or copying a nonempty directory. Java SE 8 for the Really Impatient
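
Since java.nio.file has no single call for removing a nonempty directory, a common workaround is to delete children before parents. A self-contained Java sketch:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Comparator;
    import java.util.stream.Stream;

    public class DeleteTree {
        // Files.delete refuses a nonempty directory, so delete deepest paths first.
        static void deleteRecursively(Path root) throws IOException {
            try (Stream<Path> paths = Files.walk(root)) {
                paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
            }
        }

        public static void main(String[] args) throws IOException {
            Path dir = Files.createTempDirectory("demo");
            Files.createFile(dir.resolve("child.txt"));
            deleteRecursively(dir);
            System.out.println("deleted: " + Files.notExists(dir));
        }
    }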



1348、单词 jdepend 的匹配文本:


  1. The static code analysis tool JDepend (http://clarkware.com/software/JDepend.html) produces metrics that measure the design quality of your code. Gradle in Action


  2. When you get used to the implications of the relatively simple summary of JavaNCSS, consider adding JDepend to your safety net of code metrics. Jenkins Continuous Integration Cookbook


  3. There are tools already available for Java and Groovy projects to analyze and check source code, such as Checkstyle, JDepend, PMD, FindBugs, CodeNarc, and Sonar. Gradle Effective Implementation Guide



1349、单词 vpn 的匹配文本:


  1. To enable the use of a proxy server, check the box for Use a proxy server for your LAN (These settings will not apply to dial-up or VPN connections) as shown in the following screenshot: Performance Testing With JMeter 2.9


  2. Compute generates a certificate and key for the user to access the VPN and starts the VPN automatically. bk-compute-adminguide-grizzly


  3. When you are developing remotely and don’t have a VPN connection, you won’t be able to connect to the database and thus won’t be able to run the tests. Pro Spring MVC


  4. In order for a user to access the instances in their project, a special VPN instance (code named cloudpipe) needs to be created. bk-admin-guide-cloud-latest


  5. In order to access and provision into the private network, we need to set up an IPSEC-based VPN access to the private network. Storm Real-Time Processing Cookbook


  6. With certain environments that have high latency or lower bandwidths, rendering internal frames across the network can be too expensive (for example, remoting using a VPN) Java 7 Recipes A Problem-Solution Approach


  7. In doing this, consider running your logging over a management network, or using an encrypted VPN to avoid interception. OpenStack Operations Guide



1350、单词 abstracting 的匹配文本:


  1. The MapReduce model simplifies parallel processing by abstracting away the complexities involved in working with distributed systems, such as computational parallelization, work distribution, and dealing with unreliable hardware and software. Hadoop in Practice


  2. Cascading simplifies this by abstracting away keys and values and replacing them with tuples that have corresponding field names, similar in concept to tables and column names in a relational database. Hadoop:The Definitive Guide 3rd Edition


  3. This very visually oriented model makes building MapReduce jobs more like construction work, while abstracting the complexity of the actual work involved. HBase:The Definitive Guide


  4. However, it always concerns abstracting a physical resource as a virtual one. Oracle JRockit The Definitive Guide


  5. We are currently working on abstracting the specific object interface to allow key/value. RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.


  6. By abstracting away the underlying complexity, ZooKeeper makes it easy to implement consensus, leader election, presence protocols, and other primitives, and frees the developer to focus on the semantics of her application. Hadoop in Action


  7. Instead of just abstracting the database interactions from the developer’s point of view, the way JDBC does, an entirely different mechanism is used. Cassandra The Definitive Guide


  8. Optimizing for extensibility entails removing hard-wired decisions and instead allowing them to vary, for example by encapsulating them as overridable methods, supporting callback hooks, or abstracting functionality via interfaces that can be re-implemented in completely different ways by dynamically loaded components. Concurrent Programming in Java Design Principles and Pattern


  9. Make use of the factory method pattern to instantiate instances of the class while abstracting the creation process from the creator. Java 7 Recipes A Problem-Solution Approach


  10. The main thing to notice in the C++ code sample is that there are no explicit classes for abstracting databases and collections. MongoDB in Action



1351、单词 collection1 的匹配文本:


  1. The collection1 and bin directories contain the necessary configuration files. Web Crawling and Data Mining with Apache Nutch


  2. Click on the link labeled "collection1" to access more tools such as the query form. Solr in action


  3. Your cores may have other names than the default collection1, so please be aware of that. Apache Solr 4 Cookbook



1352、单词 cloudpipe 的匹配文本:


  1. If users need to access the instances in their project across a VPN, a special VPN instance (code named cloudpipe) needs to be created as described in the section titled Cloudpipe — Per Project VPNs. bk-compute-adminguide-grizzly


  2. In order for a user to access the instances in their project, a special VPN instance (code named cloudpipe) needs to be created. bk-admin-guide-cloud-latest



1353、单词 metaprogramming 的匹配文本:


  1. Groovy is a good choice among languages if we’re looking for easy metaprogramming and dynamic typing while preserving Java semantics. Programming Concurrency on the JVM


  2. They’re primarily used to automatically convert from one type to another as needed, but can also be used to limited forms of compiler time metaprogramming. Scala in Depth


  3. If you’re used to metaprogramming in Ruby, Groovy, or other programming languages, implicit conversions are Scala’s way of doing metaprogramming but in a more controlled way. Scala in Action


  4. Those great features include a lot of metaprogramming techniques, new powerful constructs like closures, faster development cycles, and a clearer and more concise and developer-friendly syntax. Pro Spring Security


  5. Scala’s philosophy in cases like this is to include the minimum, orthogonal support in the core language such that a wide variety of metaprogramming tools can be written. Programming in Scala 2nd Edition


  6. Matthew has a long history in the build tool space, and his insights into build metaprogramming were no small help in getting that chapter right. Gradle Beyond The Basics


  7. Clojure Macros Macros are a form of metaprogramming: they are pieces of code that transform other pieces of code. Functional Programming Patterns in Scala and Clojure


  8. Programmer… Dynamically typed languages like Ruby, Python, Groovy, JavaScript, and Smalltalk offer very high productivity due to their flexibility, powerful metaprogramming, and elegance. Programming Scala


  9. Why’s Dwemthy’s Array example demonstrated Ruby’s metaprogramming by creating a subclass of Ruby’s Array class and using method_missing to forward method calls intended for a particular monster to the first monster in the Array. Beginning Scala


  10. Gradle offers the flexibility of Ant, which many teams still cherish, but with the dependency management style of Ivy, the intelligent defaults of Maven, the speed and hashing of Git, and the metaprogramming power of Groovy. Building and Testing with Gradle



1354、单词 adder 的匹配文本:


  1. As a result, you can see that IntegerAdder and IntegerSubtracter can be types of Combiner. Thinking in Java 4th Edition


  2. SmartAdder isn’t a functional interface because it specifies two abstract methods called add (one is inherited from Adder) Java 8 in Action Lambdas, Streams and Functional-style Programming


  3. For instance, the method shown in Listing 18.6 constructs a half-adder. Programming in Scala 2nd Edition


  4. For example, all possible uses of the following StatelessAdder class and its add method are obviously always safe and live: Concurrent Programming in Java Design Principles and Pattern


  5. In this example you create two unit tests for the MathAdder class. Java 7 Recipes A Problem-Solution Approach


  6. If you anticipate high contention, you should simply use a LongAdder instead of an AtomicLong. Java SE 8 for the Really Impatient
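
The last quote contrasts LongAdder with AtomicLong under contention; a tiny Java demonstration of the LongAdder side:

    import java.util.concurrent.atomic.LongAdder;
    import java.util.stream.IntStream;

    public class AdderDemo {
        public static void main(String[] args) {
            LongAdder hits = new LongAdder();  // cheaper than AtomicLong when many threads increment
            IntStream.range(0, 1_000).parallel().forEach(i -> hits.increment());
            System.out.println(hits.sum());    // 1000
        }
    }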



1355、单词 upcasting 的匹配文本:


  1. We call this process of treating a derived type as though it were its base type upcasting. Thinking in Java 4th Edition


  2. This is an example of upcasting, which is legal in Java. Java 7 New Features Cookbook
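
A short Java example of upcasting as described above:

    public class UpcastDemo {
        static class Animal { void speak() { System.out.println("..."); } }
        static class Dog extends Animal { @Override void speak() { System.out.println("woof"); } }

        public static void main(String[] args) {
            Animal a = new Dog();  // upcasting: the derived type is treated as its base type
            a.speak();             // dynamic dispatch still invokes Dog.speak() -> "woof"
        }
    }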



1356、单词 brian 的匹配文本:


  1. However, for more intricate system-level programming, we suggest that you turn to a more advanced reference, such as Java Concurrency in Practice by Brian Goetz et al. Core Java Volume I Fundamentals 9th Edition


  2. For an exceptional treatise on the topic of Java Memory Model and how it affects concurrency, refer to Brian Goetz’s seminal book Java Concurrency in Practice [Goe06] Programming Concurrency on the JVM


  3. But strongly resist the urge to rely on it yourself; see Brian’s Rule of Synchronization, presented earlier. Thinking in Java 4th Edition


  4. In particular, Java Language Architect Brian Goetz clearly stated the purpose of Optional is to support the optional-return idiom only. Java 8 in Action Lambdas, Streams and Functional-style Programming


  5. A sincere thanks to the many people who read the early drafts of these chapters, and provided comments, edits, insights, and ideas — specifically Mike Daconta, Ralph Perko, Praveena Raavichara, Frank Tyler, and Brian Uri. Professional Hadoop Solutions


  6. I owe a great deal to Brian Femiano, Bruce Miller, and Jonathan Larson for allowing me to be exposed to many great ideas, points of view, and zealous inspiration. Hadoop RealWorld Solutions Cookbook


  7. In their book The C Programming Language, Brian Kernighan and Dennis Ritchie said that they felt that the C language "wears well as one's experience with it grows." If you like C, we think you will like the Java programming language. The Java Language Specification Java SE 7 Edition


  8. I'm very grateful to Brian Goetz for clarifying several points in this section. Java Performance Tuning 2nd Edition


  9. The following snippet adds the author Griffin, Brian into the authors array because it isn’t there yet: The Definitive Guide to MongoDB


  10. The members of Ralph Johnson's patterns seminar (especially Brian Foote and Ian Chai) read through early forms of some patterns and suggested many improvements. Concurrent Programming in Java Design Principles and Pattern



1357、单词 jstl 的匹配文本:


  1. Runtime dependencies like the JSTL library aren’t needed for the compilation process, but are needed at runtime. Gradle in Action


  2. For web applications you need servlet.jar and jstl.jar if you want to use Spring’s JSTL support. Pro Spring 3


  3. We will also submit the index (which we retrieve from the JSTL supplied by a varStatus object) through a hidden field. Pro Spring MVC


  4. This approach relies on standard JSTL conditional page rendering and data binding, and is slightly more complicated than Spring Security tag libraries; however, it is more in line with the standard web application MVC logical design. Spring Security 3.1


  5. The important part is that you are using the standard jstl tag library to iterate through the list of movies  you are expecting to get back from your controller in the model. Pro Spring Security


  6. All products are required to provide JSTL for use by all JSP pages. Java EE 7 Specification


  7. We’ll add JSP support to our basic container configuration by providing a JSP compiler, because JSPs are compiled to Java classes, and use the JSP Standard Tag Library (JSTL) for our tag library. Spring Dynamic Modules in Action


  8. This security configuration can be extended and customized by adding more JSTL scriptlets and conditions using the Expression Language (EL) Spring Roo in Action


  9. It uses the JavaServer Pages Standard Tag Library (JSTL) to iterate over the collection of Employee instances and display the name of each employee as well as the name of the department to which that employee is assigned. Pro JPA2


  10. Therefore, the spittles.jsp file can use JavaServer Pages Standard Tag Library’s (JSTL) <c:forEach> tag to render the list of spittles: Spring in Action 4th Edition



1358、单词 pipelined 的匹配文本:


  1. Figure 17.2 shows how pipelined requests are handled in the correct order which means the response for the first request will be returned and then the response for the second and so on... Netty in Action MEAP V08


  2. Then, reduce tasks will run on the shuffled data and generate final (or intermediate, if multiple consecutive MapReduce jobs are pipelined) results. Hadoop Operations and Cluster Management Cookbook


  3. Our implementation of Paxos is pipelined, so as to improve Spanner’s throughput in the presence of WAN latencies; but writes are applied by Paxos in order (a fact on which we will depend in Section 4) Spanner - Google’s Globally-Distributed Database


  4. When a file is closed, the remaining data in the temporary local file is pipelined to the DataNodes. Professional Hadoop Solutions


  5. If you need to do a noncombining reducer with one of these frameworks, you’ll need to separate the processing into pipelined map-reduce steps. NoSQL Distilled


  6. Let’s look at the basic concepts in Crunch, including its type system and pipelined architecture. Hadoop in Practice


  7. Unfortunately, calling this method involves a pipelined write to N servers (where N is the replication factor set for the write-ahead log files) HBase:The Definitive Guide


  8. Also, a prefetch retrieves an entire cache line, which takes time, so unless the prefetch operation can be pipelined, or hidden in parallel with other actions before its data is to be used, it has no or even negative effects. Oracle JRockit The Definitive Guide


  9. The processor consists of a series of pipelined operations, including stream processing and protocol parsing. Java Performance


  10. Exploitation of such opportunities (via pipelined superscalar CPUs, multilevel caches, load/store balancing, interprocedural register allocation, and so on) is responsible for a significant amount of the massive improvements in execution speed seen in computing over the past decade. Concurrent Programming in Java Design Principles and Pattern



1359、单词 whitespaces 的匹配文本:


  1. Coding standards define agreed-on source code conventions within a team or organization, and can range from simple code formatting aspects, such as the use of whitespaces and indentation, to programming best practices. Gradle in Action


  2. The output of the multiLine variable has leading whitespaces, and maybe you don’t want that. Scala in Action


  3. We tell Solr to split the data on whitespaces, split on case change, make text parts lowercased, and finally to stem text with the appropriate algorithm—in this case, it is one of the English stemming algorithms available in Solr and Lucene. Solr 3.1 Cookbook


  4. We see that WordCount uses Java’s StringTokenizer in its default setting, which tokenizes based only on whitespaces. Hadoop in Action


  5. Whitespaces within the text of a JSP page, even though not significant, are preserved. Java Performance


  6. Of course you could index the whole page, splitting it by whitespaces, but then you would probably hear the clients complain about the HTML tags being searchable and so on. Apache Solr 4 Cookbook
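
Quote 4 mentions that Java's StringTokenizer splits on whitespace by default; a quick illustration:

    import java.util.StringTokenizer;

    public class TokenizeDemo {
        public static void main(String[] args) {
            // The default delimiter set is whitespace only, so punctuation stays attached.
            StringTokenizer st = new StringTokenizer("Hello, Hadoop world!");
            while (st.hasMoreTokens()) {
                System.out.println(st.nextToken()); // "Hello,"  "Hadoop"  "world!"
            }
        }
    }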



1360、单词 map2 的匹配文本:


  1. For example, you can implement map2 using foldLeft and reverse the result: Scala in Action


  2. Write a generic function map2 that combines two Option values using a binary function. If either Option value is None, then the return value is too. Functional Programming in Scala


  3. In this setup, you should think of Map2 and Reduce as the core of the MapReduce job, with the standard partitioning and shuffling applied between the mapper and reducer. Hadoop in Action
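
The second quote describes a map2 combinator over Option values; a rough Java analogue using java.util.Optional, purely for illustration:

    import java.util.Optional;
    import java.util.function.BiFunction;

    public class Map2Demo {
        // If either value is empty, the combined result is empty too.
        static <A, B, C> Optional<C> map2(Optional<A> a, Optional<B> b, BiFunction<A, B, C> f) {
            return a.flatMap(x -> b.map(y -> f.apply(x, y)));
        }

        public static void main(String[] args) {
            System.out.println(map2(Optional.of(2), Optional.of(3), (x, y) -> x + y));            // Optional[5]
            System.out.println(map2(Optional.<Integer>empty(), Optional.of(3), (x, y) -> x + y)); // Optional.empty
        }
    }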



1361、单词 windowing 的匹配文本:


  1. Eclipse is written in Java, but since it uses a nonstandard windowing library, it is not quite as portable as Java itself. Core Java Volume I Fundamentals 9th Edition


  2. This type of system is widely used in object-oriented programming, and in the context of Java, the Abstract Windowing Toolkit. Java Network Programming and Distributed Computing


  3. Instead, the Java l.o Abstract Windowing Toolkit (AWT) produced a GUI that looked equally mediocre on all systems. Thinking in Java 4th Edition


  4. It requires a windowing system such as X11 to run on the system you start JConsole from, not on the server it will connect to. Cassandra High Performance Cookbook


  5. It didn’t really work out that way, except in some rare cases, like the windowing APIs of various platforms. Programming Scala


  6. Which is better? MDI reduces window clutter, but having separate top-level windows means that you can use the buttons and hotkeys of the host windowing system to flip through your windows. Core Java Volume II Advanced Features 9th Edition


  7. A class that allocates system resources other than memory (such as file descriptors or windowing system graphic contexts) should override the finalize( ) method to release these resources when the object is no longer referred to and is about to be garbage-collected. Java in a Nutshell 5th Edition


  8. On top of this we feel a library of helpful stream utilities, such as different windowing functions or join techniques will be beneficial to this kind of applications. Kafka a Distributed Messaging System for Log Processing


  9. For example, if you can create the startup window as quickly as possible and then run any initializations in the background without blocking windowing activity, the user sees this as a faster startup than if you waited for initializations to finish before creating the initial window. Java Performance Tuning 2nd Edition


  10. Long before the Swing API, the Abstract Window Toolkit (AWT) alone provided an abstraction layer over the native platform to enable standard windowing functionality. Java 7 Recipes A Problem-Solution Approach



1362、单词 jlabel 的匹配文本:


  1. The constructor for a JLabel lets you specify the initial text or icon and, optionally, the alignment of the content. Core Java Volume I Fundamentals 9th Edition


  2. We can make things a little more interesting by adding a JLabel to the JFrame: Thinking in Java 4th Edition


  3. If the rendered image just contains text, an icon, and possibly a change of color, you can get by with configuring a JLabel. Core Java Volume II Advanced Features 9th Edition


  4. In the inheritance hierarchy, all Swing components such as the JLabel, JTextField and JPanel classes extend from the JComponent class. Java 7 Recipes A Problem-Solution Approach
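
A minimal Swing example matching the first quote (initial text plus alignment passed to the JLabel constructor):

    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.SwingConstants;
    import javax.swing.SwingUtilities;

    public class LabelDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("JLabel demo");
                frame.add(new JLabel("Hello, Swing", SwingConstants.CENTER));
                frame.setSize(240, 120);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }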



1363、单词 london 的匹配文本:


  1. There are too many to name, but all of you in Dublin, Portland, Lisbon, London, Zurich, San Francisco, Mountain View, Dortmund, Stockholm, Hamburg, Frankfurt, Salt Lake City, Blacksburg, San Diego, and Amsterdam: you know who you are—thanks! To my family, friends, and coworkers: thanks you for your support and your patience with me over the last year. CouchDB The Definitive Guide


  2. Alex Blewitt is a technical architect, working at an investment bank in London. Jenkins Continuous Integration Cookbook


  3. Indeed, Tony Hoare, one of the giants of computer science, said in a presentation at QCon London 2009: Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Or you could specify that a particular build job is only to be run on a UAT environment in Sydney or London: Jenkins The Definitive Guide


  5. In the early 2000s, a young entrepreneur out of the London financial sector cofounded a company for caching Java objects: Metalogic. RabbitMQ in Action


  6. Martin and Cindy are a couple considering this room, but they are discussing this on the phone because Martin is in London and Cindy is in Boston. NoSQL Distilled


  7. Currently living and working in London, Carlo focuses mostly on Java and Ruby software development. Pro Spring Security


  8. The diversity of the drawings in the Collection of the Dresses of Different Nations speaks vividly of the richness of the costumes presented on the London stage over 200 years ago. Spring Batch in Action


  9. London, which is in southeastern England, which, in turn, is in Great Britain. Graph Databases


  10. This configuration will update any clients that are using the failover transport protocol with the locations of any brokers joining that have newyork or london in their broker names. ActiveMQ in Action



1364、单词 destructor 的匹配文本:


  1. The most common activity in a destructor is reclaiming the memory set aside for objects. Core Java Volume I Fundamentals 9th Edition


  2. Java has no destructor or similar concept, so you must create an ordinary method to perform this cleanup. Thinking in Java 4th Edition


  3. Example 4-7 defines the worker class: a structure and a set of functions that act as constructor, destructor, and methods on worker objects. ZeroMQ


  4. It includes two methods that effectively act as the constructor and destructor for the class: Hadoop in Action


  5. As we will see, they do not behave quite like C++ destructor methods either. Java in a Nutshell 5th Edition
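
Since Java has no destructors, deterministic cleanup is usually written with AutoCloseable and try-with-resources rather than finalize(); a small sketch:

    public class CleanupDemo {
        static class Resource implements AutoCloseable {
            Resource() { System.out.println("acquired"); }
            @Override public void close() { System.out.println("released"); }
        }

        public static void main(String[] args) {
            try (Resource r = new Resource()) {
                System.out.println("using resource");
            } // close() runs here, even if the block throws
        }
    }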



1365、单词 xtradb 的匹配文本:


  1. They have also contributed directly to that improvement via XtraDB and XtraBackup. High Performance MySQL 3rd Edition



1366、单词 nix 的匹配文本:


  1. Solr being a web-based application can run on many operating systems such as *nix and Windows. Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr


  2. For *nix systems, this is the shell script gradlew; for Windows operating systems, it’s gradlew.bat. Gradle in Action


  3. For a typical *NIX system, it will be similar to this: Jenkins Continuous Integration Cookbook


  4. For it to be really useful to sysadmins, it has to provide command-line tools that allow access to its features from a machine that lacks a window system, like most *nix servers. RabbitMQ in Action


  5. You can nix the setter method and declare a constructor instead: Spring in Practice


  6. For *nix and Mac users, building it from source is your best option. The Little Redis Book


  7. The following example shows how to find all loaded classes containing the string LoadAnd on a *NIX system: Oracle JRockit The Definitive Guide


  8. If you do not have it installed, you can install it (on *NIX systems) by running: Scaling MongoDB


  9. For the purposes of this chapter, we’ll stick to the *NIX version, where Redis is most at home. Spring Data Modern Data Access for Enterprise Java


  10. Linux and MacOS Installation To install Gradle on *nix operating systems like Linux and MacOS, follow this command-line script: Building and Testing with Gradle



1367、单词 nullable 的匹配文本:


  1. This is similar to accessing nullable values within other languages. Scala in Depth


  2. A lot of tables include nullable columns even when the application does not need to store NULL (the absence of a value), merely because it’s the default. High Performance MySQL 3rd Edition


  3. A common use for unions is to specify nullable values. Professional Hadoop Solutions


  4. Sparse tables with nullable columns require special checking in code, despite the presence of a schema. Graph Databases


  5. Note that each field must be nullable in the child classes, so that the SQL statement can insert the row without those columns. Spring Roo in Action


  6. In this unit test example, we are creating an Optional instance using the fromNullable static method. Getting Started with Google Guava


  7. Consider the following case, where a nullable column may behave in a more intuitive way by serendipity. SQL Antipatterns


  8. The ofNullable method is intended as a bridge from the use of null values to optional values. Java SE 8 for the Really Impatient


  9. Primary keys are assumed to be insertable, but not nullable or updatable. Pro JPA2
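
Quote 8 mentions ofNullable as the bridge from null references to optional values; a tiny Java example:

    import java.util.Optional;

    public class NullableDemo {
        public static void main(String[] args) {
            String maybeNull = System.getProperty("no.such.property"); // null here
            Optional<String> value = Optional.ofNullable(maybeNull);   // empty when the input is null
            System.out.println(value.orElse("default"));               // prints "default"
        }
    }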



1368、单词 multiplier 的匹配文本:


  1. Where RETRY_BACKOFF is a retry back off multiplier table, which has the following definition: HBase Administration Cookbook


  2. The Multiplier class (which only works with positive integers) has a multiply( ) method which calls the private add( ) method numerous times to perform multiplication. Thinking in Java 4th Edition


  3. To keep things simple, we’ll say that the score of an item is a function of the time that the article was posted, plus a constant multiplier times the number of votes that the article has received. Redis in Action


  4. Score boosting You can easily modify the degree to which a clause in the query string contributes to the ultimate relevancy score by adding a multiplier. Solr 3 Enterprise Search Server


  5. We defined a variable, factor, to use as the multiplication factor, and we pulled out the previous anonymous function into a value called multiplier that now uses factor. Programming Scala


  6. Both CPU usage attributes have a pre-multiplier setting of 100 Oracle JRockit The Definitive Guide


  7. These are the private fields set by the constructor private String abbrev; private double multiplier; Java in a Nutshell 5th Edition


  8. The weight of a pool simply acts as a multiplier; a weight of 2 means the pool receives two slots to every one slot the other pools receive. Hadoop Operations


  9. You can configure the amount of time the ActiveMQ broker should wait before trying to resend the message, whether that time should increase after every failed attempt (use an exponential back-off and back-off multiplier), and the maximum number of redelivery attempts before the message(s) are moved to a dead-letter queue. ActiveMQ in Action


  10. The 1.5 multiplier is a fudge factor to prevent underutilization of the cluster. Programming Hive



1369、单词 rdf 的匹配文本:


  1. Outer XML tags (called root elements), namespaces, and referenced schemas are some of the clues that Tika uses to determine an XML file’s real type (RDF, RSS, and so on) Tika in Action


  2. The employee network is stored in HDFS as a line-separated list of RDF triples. Hadoop RealWorld Solutions Cookbook


  3. You can use the Open Directory project RDF dump (http://rdf.dmoz. Hadoop MapReduce Cookbook


  4. There are three dominant graph data models: the property graph, Resource Description Framework (RDF) triples, and hypergraphs. Graph Databases


  5. It allows you to execute SPARQL Protocol and RDF Query Language (SPARQL) queries for interacting with Resource Description Framework (RDF) and acts as a partial Web Ontology Language (OWL) store. Cassandra The Definitive Guide


  6. For instance, it omits various XML applications such as RDF (.rdf), XSL (.xsl), and so on that should have the MIME type application/xml. Java Network Programming 4th Edition


  7. But more often these days documents are binary in nature (PDF, Microsoft Office, Open Office, Adobe Flash, streaming video and audio multimedia files, etc.) or contain substantial markups which you must remove before indexing (RDF, XML, HTML) Lucene in Action 2nd Edition



1370. Matched text for the word reordering:


  1. Compilers are required to respect locks by flushing local caches as necessary and not inappropriately reordering instructions. Core Java Volume I Fundamentals 9th Edition


  2. This may be fixed by reordering the classpath or by explicitly setting the XML parser by setting a Java property. Pro Hadoop


  3. In effect, reads and writes go directly to memory, and are not cached, volatile also restricts compiler reordering of accesses during optimization. Thinking in Java 4th Edition


  4. Several mechanisms can produce this reordering: the just-in-time compiler and the processor may rearrange code. JSR-133 Java Memory Model and Thread


  5. Join reordering – The larger tables are streamed and not materialized in memory in the reducer while the smaller tables are kept in memory. Hive A Petabyte Scale Data Warehouse Using Hadoop


  6. Reordering the fields, methods, or constructors in an existing type declaration. The Java Language Specification Java SE 7 Edition


  7. All of the stores are always sorted by key, so no reordering is required to fit new keys in between existing ones. HBase:The Definitive Guide


  8. The representation tries to capture the minimal set of constraints on the execution of each operation, which allows for aggressive reordering of operations and global value numbering, which reduces redundant computations. Java Performance


  9. Reordering Rows The next step is to “help” Zippy in compressing the elements (chunk-ids) representing the data columns. PowerDrill - Processing a Trillion Cells per Mouse Click


  10. Even though JLS addresses some particular legal and illegal reorderings that can occur, interactions with these other issues reduce practical guarantees to saying that the results may reflect just about any possible interleaving of just about any possible reordering. Concurrent Programming in Java Design Principles and Pattern
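
The following illustrative Java sketch shows the kind of reordering and visibility hazard the quotes describe: marking the flag volatile restricts caching and reordering around it, so the reader thread is guaranteed to observe the write (without volatile, the loop may never terminate on some JVMs). This is a generic example, not code from the cited books.

public class ReorderingDemo {

    // volatile forbids caching the field in a register and restricts
    // reordering around it, so the reader thread must see the write.
    private static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // busy-wait until the main thread publishes stop = true
            }
            System.out.println("observed stop = true");
        });
        reader.start();
        Thread.sleep(100);
        stop = true;   // safely published because the field is volatile
        reader.join();
    }
}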



1371. Matched text for the word rethrow:


  1. Sometimes, you just want to log an exception and rethrow it without any change: Core Java Volume I Fundamentals 9th Edition


  2. Sometimes you’ll want to rethrow the exception that you just caught, particularly when you use Exception to catch any exception. Thinking in Java 4th Edition


  3. Any execution failure (e.g., authentication errors from the client) bubbles up as an exception, which is caught and rethrown as a specific Gradle exception. Gradle in Action


  4. The big question is, should you rethrow the exception, or swallow it? Hadoop in Practice


  5. We simply rethrow the exception10 and let the caller create a new ZooKeeper instance, so that the whole write() method can be retried. Hadoop:The Definitive Guide 3rd Edition


  6. Rethrowing exceptions in Java 7 When an exception is caught in a catch block, it is sometimes desirable to rethrow the exception. Java 7 New Features Cookbook


  7. You can catch ThreadDeath errors to do any necessary cleanup for a thread, but if you do, you must rethrow the error so that the thread actually terminates. Java in a Nutshell 5th Edition


  8. You need to rethrow the same exception type as you caught. Java 7 Recipes A Problem-Solution Approach


  9. However, the doInOrderAsync returns right away and does the work in a separate thread, so it is not possible to have the method rethrow the exception. Java SE 8 for the Really Impatient


  10. We factor out the flushing from the rest of the processing code because every method must flush and catch the exception and then rethrow a domain-specific application exception. Pro JPA2
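
As a concrete illustration of the Java 7 "precise rethrow" behaviour mentioned above, here is a small sketch (the readConfig method and its logging are invented for the example): the catch block logs and rethrows the caught Exception, yet the method may still declare the narrower throws IOException, because that is the only checked exception the try block can produce.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.logging.Logger;

public class RethrowDemo {

    private static final Logger LOG = Logger.getLogger("demo");

    // Java 7 precise rethrow: 'e' is effectively final, so the compiler knows
    // only IOException can escape, and the narrower throws clause is allowed.
    static byte[] readConfig(String path) throws IOException {
        try {
            return Files.readAllBytes(Paths.get(path));
        } catch (Exception e) {
            LOG.severe("failed to read " + path);
            throw e;   // log and rethrow without wrapping
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readConfig(args[0]).length + " bytes");
    }
}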



1372. Matched text for the word vip:


  1. We will set up a Virtual IP (VIP) address using Heartbeat and Pacemaker, and then associate it with the active master node. HBase Administration Cookbook


  2. The NVP LBaaS and FWaaS plugins require the routed-insertion extension, which adds the router_id attribute to the VIP (Virtual IP address) and firewall resources and binds these services to a logical router. bk-admin-guide-cloud-latest


  3. Users with the role ROLE_VIP can see the movie budget. Pro Spring Security


  4. This recipe will take a list of Nigerian VIPs and join any Nigerian ACLED events that occurred on any VIP's birthday. Hadoop RealWorld Solutions Cookbook


  5. Configure the VIP First of all, we need to select and assign a virtual IP address (VIP) that can freely float between cluster nodes. high-availability-guide


  6. The logical name can be anything you want, and is analogous to a virtual IP (VIP) Hadoop Operations



1373. Matched text for the word mariadb:


  1. Two of the three (Percona Server and MariaDB) are drop-in replacements, so the effort involved in trying them out is not large. High Performance MySQL 3rd Edition


  2. In fact, for some systems, such as MariaDB or Percona, this may be your only option.) For example, to check the status of the cluster, open the MySQL client and check the status of the various parameters: high-availability-guide



1374. Matched text for the word wordcount:


  1. Test the queue configuration by submitting an example wordcount job to the queue hdqueue using the following command: Hadoop Operations and Cluster Management Cookbook


  2. The wordcount field counts instances of rows that have the same values along the dimensions provided. Storm Blueprints Patterns for Distributed Real-time Computation


  3. We construct the head of our pipe assembly and name it “wordcount.” This name is used to bind the source and sink Taps to the assembly. Hadoop:The Definitive Guide 3rd Edition


  4. Without specifying any arguments, executing wordcount will show its usage information: Hadoop in Action


  5. You will see it has a single entry called wordcount, which is a directory. Hadoop Beginners Guide


  6. The wordcount application is written using the Hadoop MapReduce API. Spring Data Modern Data Access for Enterprise Java
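
Several of the quotes refer to the canonical Hadoop wordcount job. For orientation, here is a compressed Java sketch of its usual mapper/reducer shape (driver setup omitted); it follows the common textbook layout rather than any one of the books cited above.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Mapper: emit (word, 1) for every token in the input line.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reducer: sum the counts emitted for each word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}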



1375. Matched text for the word interleaved:


  1. In addition to allowing interleaved reads and writes, the concurrent collections provide better throughput than the synchronized versions. Programming Concurrency on the JVM


  2. If more than one consolidation function is used (for example, MIN, MAX, and AVERAGE values are retained for a single data source), these values are not interleaved. Monitoring with Ganglia


  3. The DataNode storage will be spread over multiple disks or arrays to allow interleaved I/O, and the TaskTracker intermediate output will also go to a separate disk or array. Pro Hadoop


  4. The figure also illustrates the interleaved layout for the example database: for. Spanner - Google’s Globally-Distributed Database


  5. One of the biggest practical advantages of using languages with pattern matching is that you can avoid using big chains of switch or if-then-else statements interleaved with field-selection operations. Java 8 in Action Lambdas, Streams and Functional-style Programming


  6. Should we reach the length of lat before lon, or vice versa, print 0 for an interleaved spot in the iteration. Hadoop RealWorld Solutions Cookbook


  7. A "preferred local" policy can also be selected, that is JRockit should try to use the local node, but interleaved allocation is also fine. Oracle JRockit The Definitive Guide


  8. This ensures that multiple invocations won’t be interleaved, and that each invocation will see the effects of all previous invocations. Effective Java 2nd Edition


  9. The problem is that even though these 10 tasks are attempting to perform sequential reads, because all I/O requests are issued to the same underlying device, the end result of interleaved reads will look like random reads, drastically reducing throughput. Hadoop Operations


  10. For example, an interleaved add operation would not interfere with the required semantics of a typical traversal, yet would cause an exception to be thrown here. Concurrent Programming in Java Design Principles and Pattern



1376. Matched text for the word gpars:


  1. We could use Akka actors from Groovy; instead, we’ll use GPars, which is a concurrency library with close ties to Groovy. Programming Concurrency on the JVM



1377. Matched text for the word paul:


  1. Speaking of MySQL AB, thanks to all the other great folks there who have encouraged me in writing this: Kerry, Larry, Joe, Marten, Brian, Paul, Jeremy, Mark, Harrison, Matt, and the rest of the team there. High Performance MySQL 3rd Edition


  2. Accountants Don’t Use Erasers Say you want to give $100 to your cousin Paul for the New York cheesecake he sent to you. CouchDB The Definitive Guide


  3. Paul Sandoz in the email he sent to the lambda-dev mailing list: http://mail.openjdk.java.net/pipermail/lambda-dev/2013-November/011516.html. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. Beyond CouchDB, Paul contributes to a number of other open source projects. Beginning CouchDB


  5. These bundles are currently being managed by Paul Phillips on the GitHub website. Programming Scala


  6. Also thanks are due to Andrew Ryan, Matthew Welty and Paul Tuckfield for doing a lot of work on operations, monitoring and the statistics setup that makes these tasks easy. Apache Hadoop Goes Realtime at Facebook


  7. Paul is a former contributing editor for MacTutor Magazine, served as a technical reviewer for Paradigms of. Beginning Scala


  8. In particular, Paul O’Rorke was the technical reviewer that went beyond his call of duty and made some wonderful suggestions on how to make the manuscript better. Hadoop in Action


  9. Special thanks also go to Brian Aker, Martin “MC” Brown, and Paul Kinzelman for their reviews and suggestions. Understanding MySQL Internals


  10. Patrick Lightbody and Paul Hammant thought that there must be a better way to drive their tests and in a way that they could use their favorite development language. Selenium 2 Testing Tools Beginner's Guide



1378. Matched text for the word inferencer:


  1. First, we changed the result variable to a val and let the type inferencer determine the type. Scala in Depth


  2. This is due to a limitation in Scala’s type inferencer, which fails to infer the correct type of the list automatically. Programming in Scala 2nd Edition


  3. If a block of code returns a value of type Any unexpectedly, chances are good that the type inferencer couldn’t figure out what type to return, so it chose the most generic type possible. Programming Scala


  4. For the most part, the type inferencer will calculate the type parameters so you don’t have to explicitly pass them. Beginning Scala



1379. Matched text for the word whois:


  1. The WHOIS protocol allows users to look up information about a domain name (such as awl.com, or microsoft.com) Java Network Programming and Distributed Computing


  2. It shows you how to write network clients that interact with TCP servers of all kinds including whois, dict, and HTTP. Java Network Programming 4th Edition
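
WHOIS is simple enough that a client fits in a few lines: open a TCP connection to port 43, send the query terminated by CRLF, and read the reply until the server closes the connection. A hedged Java sketch follows; whois.iana.org is just one public server, and the appropriate server depends on the registry being queried.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class WhoisClient {
    public static void main(String[] args) throws IOException {
        String domain = args.length > 0 ? args[0] : "example.com";
        // WHOIS is a plain-text protocol on TCP port 43.
        try (Socket socket = new Socket("whois.iana.org", 43)) {
            Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.US_ASCII);
            out.write(domain + "\r\n");
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}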



1380. Matched text for the word untyped:


  1. The subsequent call to actorOf() turns the regular object that extends from UntypedActor into an Akka actor. Programming Concurrency on the JVM


  2. Another downside to sending actors directly in the Scala standard library is that actors are untyped. Scala in Depth


  3. The Java Virtual Machine has a large complement of instructions that manipulate the contents of the operand stack as untyped values. The Java Virtual Machine Specification Java SE 7 Edition


  4. In practice, many applications end up using a mix of typed and untyped validations. CouchDB The Definitive Guide


  5. The following listing shows how to declare multiple, untyped parameters for a closure. Gradle in Action


  6. The class is called UntypedActor because Akka includes the concept of a TypedActor. Scala in Action


  7. In addition, the values flowing through a Cascading pipeline are special untyped “tuple” values, and Cascading operations focus on transforms over tuples; in contrast, a FlumeJava pipeline computes over arbitrary Java objects using arbitrary Java computations. FlumeJava - Easy, Efficient Data-Parallel Pipelines


  8. Here's a very simple query executor, which uses Map[String,Any] as the (untyped) row representation: Functional Programming in Scala


  9. You can see your fields, and the default Pig types for untyped fields, bytearray. Hadoop in Practice


  10. As TypedQuery extends Query, a strongly typed query can always be treated as an untyped version, though not vice versa. Pro JPA2
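
A Java analogue of the "untyped" Map[String,Any] row representation mentioned in the Functional Programming in Scala quote might look like the following sketch, where every read needs an unchecked cast; the column names and values are invented for illustration.

import java.util.HashMap;
import java.util.Map;

public class UntypedRows {
    public static void main(String[] args) {
        // An "untyped" row: column names map to values of unknown type,
        // so nothing is checked at compile time.
        Map<String, Object> row = new HashMap<>();
        row.put("name", "widget");
        row.put("price", 9.99);

        String name = (String) row.get("name");   // unchecked cast
        double price = (Double) row.get("price"); // fails at runtime if the type is wrong
        System.out.println(name + " costs " + price);
    }
}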



1381. Matched text for the word subelements:


  1. To get the element’s children (which may be subelements, text, comments, or other nodes), use the getChildNodes method. Core Java Volume II Advanced Features 9th Edition


  2. If one wants to print subelements of a particular element, one can use the following idiom: Steps in Scala An Introduction to Object-Functional Programming


  3. These annotations mark the root element of the schema file and all other subelements. Java 7 Recipes A Problem-Solution Approach


  4. The subelements of entity-mappings can be categorized into four main scoping and functional groups: persistence unit defaults, mapping files defaults, queries and generators, and managed classes and mappings. Pro JPA2
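
As a small illustration of reading subelements with the DOM API described in the first quote, here is a hedged Java sketch (the XML snippet is invented); getChildNodes returns element children as well as text and comment nodes, so the loop filters on the node type.

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class SubelementDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<book><title>Guide</title><year>2013</year></book>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        // getChildNodes returns subelements, text nodes, comments, etc.
        NodeList children = doc.getDocumentElement().getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child.getNodeType() == Node.ELEMENT_NODE) {
                System.out.println(child.getNodeName() + " = " + child.getTextContent());
            }
        }
    }
}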



1382. Matched text for the word cogroup:


  1. The COGROUP method in Pig does a great job of bringing data together while preserving the original structure. MapReduce Design Patterns


  2. Problem How do you group tuples together in Pig and apply aggregate functions on them? Solution Use the GROUP and COGROUP operators, and learn how they relate to JOIN and FLATTEN. Hadoop in Practice


  3. The COGROUP statement is similar to JOIN, but instead creates a nested set of output tuples. Hadoop:The Definitive Guide 3rd Edition


  4. Afterward we’ll go into more details on the more complex operators such as COGROUP and FOREACH. Hadoop in Action


  5. Another way to think of cogroup is as the first half of a join. Programming Pig



1383. Matched text for the word unbuffered:


  1. In this chapter, you will see at work methods for buffered and unbuffered streams, leaving coverage of the methods for channels for the next chapters, in which you will see the real power of NIO. Pro Java 7 NIO.2


  2. The remaining FilterInputStream classes modify the way an InputStream behaves internally: whether it’s buffered or unbuffered, whether it keeps track of the lines it’s reading (allowing you to ask for line numbers or set the line number), and whether you can push back a single character. Thinking in Java 4th Edition


  3. You can also specify this when connecting, which will make every statement unbuffered: High Performance MySQL 3rd Edition


  4. The first argument is the underlying stream from which unbuffered data will be read or to which buffered data will be written. Java Network Programming 4th Edition


  5. One application I know of had a severe disk I/O problem caused by using unbuffered streams (all disk I/O was done byte by byte, which led to awful performance) Java Performance Tuning 2nd Edition
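
The buffered-versus-unbuffered distinction in these quotes boils down to wrapping a raw stream. A minimal Java sketch: the BufferedInputStream wrapper turns byte-by-byte read() calls into reads from an in-memory buffer instead of individual trips to the underlying file.

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedCopy {
    public static void main(String[] args) throws IOException {
        long count = 0;
        // Unbuffered: new FileInputStream(args[0]) alone would hit the file on
        // every read(). The BufferedInputStream wrapper batches those reads.
        try (InputStream in = new BufferedInputStream(new FileInputStream(args[0]))) {
            while (in.read() != -1) {
                count++;
            }
        }
        System.out.println(count + " bytes read");
    }
}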



1384. Matched text for the word durations:


  1. Cloud service vendors provide the ability to spin up clusters of almost arbitrary size and capacities for short to long durations. Pro Hadoop


  2. In addition to these explicit locks, queries acquire implicit locks on tables for their durations. High Performance MySQL 3rd Edition


  3. For the purpose of this analysis, we consider two distributions of the task durations: Mesos A Platform for Fine-Grained Resource Sharing in the Data Center


  4. Finally, these numbers are typically reported for different time durations by the ad networks. Hadoop:The Definitive Guide 3rd Edition


  5. The design should include reference to scalability both for users and for data/object volumes, the amount of distribution possible for the application depending on the required level of messaging between distributed components, and the transaction mechanisms and modes (pessimistic, optimistic, required locks, durations of transactions and locks held) Java Performance Tuning 2nd Edition


  6. Applications that can benefit from the Parallel GC are those that require high throughput and have pause time requirements that can be met by the worst-case stop-the-world induced full garbage collection durations along with being run on machines with more than one processor. Java Performance


  7. When all threads perform approximately the same actions for approximately the same durations, creating only as many tasks as CPUs, or perhaps a small multiple of the number of CPUs, can work well. Concurrent Programming in Java Design Principles and Pattern


  8. By obtaining the marked durations (in milliseconds) from the previous recipe you will create media marker events at points into the video. Java 7 Recipes A Problem-Solution Approach


  9. Durations require more than a long value for their internal storage. Java SE 8 for the Really Impatient


  10. In practice, stateful session beans do not add a significant amount of overhead to an operation, even when used for short durations. Pro JPA2
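
Regarding the last quote: java.time.Duration stores seconds plus a nanosecond adjustment rather than a single long, which is why it needs more than one field internally. A small sketch:

import java.time.Duration;
import java.time.Instant;

public class DurationDemo {
    public static void main(String[] args) throws InterruptedException {
        Instant start = Instant.now();
        Thread.sleep(250);
        // Duration captures the elapsed time as seconds + nanoseconds.
        Duration elapsed = Duration.between(start, Instant.now());
        System.out.println("elapsed ms: " + elapsed.toMillis());
        System.out.println("plus 2 s:   " + elapsed.plusSeconds(2));
    }
}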



1385. Matched text for the word infrastructures:


  1. It’s a key component of today’s Web—a powerful, dynamic language with a rich ecosystem of professional-grade development tools, infrastructures, frameworks, and toolkits. Programming Concurrency on the JVM


  2. Conclusion Even in this world of disposable and fully automated infrastructures, node- and clusterlevel metrics matter a great deal. Monitoring with Ganglia


  3. Since Internet packets are being transmitted across a public network, composed of a diverse range of network infrastructures, it is likely that packets will become lost at some point in a connection. Java Network Programming and Distributed Computing


  4. Not only is it highly configurable, it also strives to be as compatible as possible with existing dependency management infrastructures (like Maven and Ivy) Gradle in Action


  5. Introduction In this chapter, we will discuss the security of Jenkins, taking into account that it can live in a rich variety of infrastructures. Jenkins Continuous Integration Cookbook


  6. Another class of related technologies comes from the world of cloud infrastructures such as EC2, Azure, Eucalyptus and VMWare offerings. Apache Hadoop YARN Yet Another Resource Negotiator


  7. Hadoop and Hive at Facebook Hadoop can be used to form core backend batch and near real-time computing infrastructures. Hadoop:The Definitive Guide 3rd Edition


  8. Since it is not trivial to program against each of the APIs providing dynamic cluster infrastructures, it would be useful to abstract the provisioning part and, once the cluster is operational, simply launch the MapReduce jobs the same way you would on a local, static cluster. HBase:The Definitive Guide


  9. At the same time, other trends have changed how software and systems are deployed, using cloud resources alongside or even in preference to more traditional infrastructures. Hadoop Beginners Guide


  10. I occasionally use proprietary communications for testing purposes and for comparison against other communications infrastructures, as I have done in this chapter. Java Performance Tuning 2nd Edition



1386. Matched text for the word dept:


  1. ManyToOne is mapped to the DEPT_ID foreign key column and can be customized. Java EE 7 Essentials


  2. Consider the table Employee with fields emp id, ldap user, name, dept id, and salary. Tenzing - A SQL Implementation On The MapReduce Framework


  3. The EMPLOYEE table has a foreign key column named DEPT_ID that references the DEPARTMENT table. Pro JPA2
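
A schematic JPA sketch of the DEPT_ID mapping described in the first and third quotes; the entity and column names follow the quotes, while the rest is illustrative boilerplate rather than code from the cited books.

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;

@Entity
public class Employee {
    @Id
    private long id;

    private String name;

    // Many employees belong to one department; the EMPLOYEE table carries
    // a DEPT_ID foreign key column referencing DEPARTMENT.
    @ManyToOne
    @JoinColumn(name = "DEPT_ID")
    private Department department;
}

@Entity
class Department {
    @Id
    private long id;

    private String name;
}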



1387. Matched text for the word james:


  1. Naughton and James Gosling (a Sun Fellow and all-around computer wizard), wanted to design a small computer language that could be used for consumer devices like cable TV switchboxes. Core Java Volume I Fundamentals 9th Edition


  2. Thanks, as well, to James Phillips and Bob Wiederhold at Couchbase for supporting me while I developed this book. Getting Started with CouchDB


  3. The virtual machine that evolved into the Java Virtual Machine was originally designed by James Gosling in 1992 to support the Oak programming language. The Java Virtual Machine Specification Java SE 7 Edition


  4. Thanks to the readers that have helped me clarify concepts by submitting comments, including James Watson, Howard Lovatt, Michael Barker, and others, in particular those who helped with generics. Thinking in Java 4th Edition


  5. The first person I showed the draft to was David James, the organizer of the Washington DC–area Groovy user group. Gradle in Action


  6. I would like to thank my parents James and Patricia Owens, for their support and introducing me to technology at a young age. Hadoop RealWorld Solutions Cookbook


  7. Finally, I must offer my thanks to James for all his patience and support and for putting up with my continual state of distraction during phases of this process. Java Security 2nd Edition


  8. Finally, I'd like to thank my coauthors, James Gosling, Bill Joy, and Guy Steele for inviting me to participate in this work. The Java Language Specification Java SE 7 Edition


  9. James Gosling, fellow and vice president, Sun Microsystems, Inc., and inventor of the Java programming language. Effective Java 2nd Edition


  10. Next, Example 7-8 shows the same configuration, but now user james is limited to only three concurrent jobs, no matter what pool he uses. Hadoop Operations



1388. Matched text for the word col:


  1. A HBase Bloom Filter is a space-efficient mechanism to test whether a StoreFile contains a specific row or row-col cell. HBase Administration Cookbook


  2. The Matrix class takes an array of double values and provides two similar methods: row and col. Scala in Depth


  3. If MySQL knows col can never be NULL, it can also optimize a COUNT(col) expression by converting it to COUNT(*) internally. High Performance MySQL 3rd Edition


  4. However, you can create collections with "_id" indexes by setting the autoIndexId option to false when calling createCollection. MongoDB The Definitive Guide 2nd Edition


  5. All of the needed parameters except for the colVal are specified by the users at the start of the program. Cassandra High Performance Cookbook


  6. The map() function splits each incoming line's value and extracts the string located at col_pos. Hadoop RealWorld Solutions Cookbook


  7. Recall that a FileCollection is a lazy specification of files that when queried (i.e., when the actual files are asked for), turns that specification into a concrete list of files. Gradle Beyond The Basics


  8. The usual way of writing output from map and reduce tasks is by using the OutputCollector to collect key-value pairs. Hadoop:The Definitive Guide 3rd Edition


  9. NEXT_COL Skip the remainder of the current column, proceeding to the next. HBase:The Definitive Guide


  10. Wait, zero?? There is a bug when trying to use count(DISTINCT col) when col is a partition column. Programming Hive



1389. Matched text for the word dba:


  1. Why? Because there’s so much data in your clusters now that altering the schema will take a long time and involve a lot of precious DBA time. MongoDB The Definitive Guide 2nd Edition


  2. Other similar commands let you generate an SQL script (if you need to submit it to your local DBA for approval, for example), or rollback to a previous version of the schema. Jenkins The Definitive Guide


  3. You could implement the implies( ) method such that if the name is "DBA," it implies every other DBPrincipal: Java Security 2nd Edition


  4. First, you need to make sure that you don’t run with too many map tasks, because bombarding your database with thousands of concurrent reads will probably bring the DBA knocking on your door. Hadoop in Practice


  5. The scaling of an RDBMS usually involves breaking Codd’s rules, loosening ACID restrictions, forgetting conventional DBA wisdom, and on the way losing most of the desirable properties that made relational databases so convenient in the first place. Hadoop:The Definitive Guide 3rd Edition


  6. This use case is supported by allowing the schema generation to occur into DDL scripts, which can then be further tuned by a DBA. Java EE 7 Essentials


  7. This is why Hive is so important to Hadoop, whether you are a DBA or a Java developer. Programming Hive


  8. This is normally a privileged operation that is often restricted to the DBA. Pro JPA2


  9. MongoDB’s design philosophy dictates that memory management is better handled by the operating system than by a DBA or application developer. MongoDB in Action


  10. Parallel databases can achieve especially high performance when administered by a highly skilled DBA who can carefully design, deploy, tune, and maintain the system, but recent advances in automating these tasks and bundling the software into appliance (pre-tuned and pre-configured) offerings have given many parallel databases high performance out of the box. HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads



1390. Matched text for the word navigates:


  1. For example, let’s make the title of a book a link that navigates to a book’s details page that shows an image of and some information about the book. Pro Spring MVC


  2. Garbage collection is a background process that navigates the objects in memory to determine which are no longer reachable. Cassandra High Performance Cookbook


  3. The withPrefix method navigates through the tree, creating sub-maps as necessary if some prefix of characters is not yet contained as a path in the tree. Programming in Scala 2nd Edition


  4. Thereafter, the client navigates the ZooKeeper hierarchy to learn cluster attributes such as server locations.4 Regionserver slave nodes are listed in the HBase conf/regionservers file as you would list datanodes and tasktrackers in the Hadoop conf/slaves file. Hadoop:The Definitive Guide 3rd Edition


  5. Then, the test code navigates through the application, executing the steps needed for the test case using the variables loaded with data values. Selenium Testing Tools Cookbook


  6. Most of them have the capability to capture the key clicks as a user navigates the Web site, and this can later be replayed to simulate the interaction. Java Performance


  7. Requery each time the user navigates to a new page. Lucene in Action 2nd Edition


  8. The only problem is that all your code is spread across a couple of pages, and clicking certain actions (e.g., submitting a post) currently navigates you away from the page, which is undesirable behavior. The Definitive Guide to MongoDB


  9. In order to display the department name for the employee, the JSP expression navigates to the Department entity from the Employee entity. Pro JPA2


  10. For leveraging the connectedness of the products, a custom (annotated) query navigates from a start (or root) category, via the next zero through five relationships, to the products connected to this subtree. Spring Data Modern Data Access for Enterprise Java



1391. Matched text for the word celsius:


  1. The first line in the body of this class defines a var, celsius, which will contain the temperature in degrees Celsius. Programming in Scala 2nd Edition


  2. For instance, we can convert temperatures from Fahrenheit to Celsius by applying the formula. Data Structures and Algorithm Analysis in Java 3rd Edition


  3. Suppose we would like to write a program that prompts the user for a temperature in degrees fahrenheit, then converts this value to celsius and echoes it to the user. Functional Programming in Scala


  4. For example, each scale constant could have a method that took a double value and normalized it to Celsius. Effective Java 2nd Edition


  5. The following example is a bash shell script that converts degrees in Celsius to degrees in Fahrenheit: Programming Hive
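
Since two of the quotes mention the temperature-conversion formula, here is the arithmetic spelled out as a tiny Java sketch covering both directions.

public class TemperatureConversion {

    // Fahrenheit to Celsius: C = (F - 32) * 5 / 9
    static double toCelsius(double fahrenheit) {
        return (fahrenheit - 32.0) * 5.0 / 9.0;
    }

    // Celsius to Fahrenheit: F = C * 9 / 5 + 32
    static double toFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }

    public static void main(String[] args) {
        System.out.println("212 F = " + toCelsius(212) + " C");   // 100.0
        System.out.println("100 C = " + toFahrenheit(100) + " F"); // 212.0
    }
}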



1392. Matched text for the word flashcache:


  1. Using Flashcache Although there are many opportunities to make tradeoffs between flash storage, hard disks, and RAM, these don’t have to be treated as single-component tiers in the storage hierarchy. High Performance MySQL 3rd Edition



1393. Matched text for the word datum:


  1. Each line contained one datum: user ID, item ID, and preference value. Mahout in Action


  2. For a given dataset, specifically, for each datum within a dataset, there exist certain classes of fields: Storm Real-Time Processing Cookbook


  3. There are two important objects here: the DatumWriter and the Encoder. Hadoop:The Definitive Guide 3rd Edition


  4. When Avro reads from a datafile, each individual element is called a datum and each datum will follow the structure defined in the schema. Hadoop Beginners Guide


  5. Any sequence of bytes of the right length can be interpreted as a primitive datum. Java Network Programming 4th Edition


  6. Otherwise, read on to see how to insert your first datum. MongoDB in Action
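
A hedged sketch of the DatumWriter/Encoder pairing mentioned in the Hadoop quote, writing a single Avro datum to an in-memory buffer; the record schema is invented for illustration, and the example assumes the Avro library (org.apache.avro) is on the classpath.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.EncoderFactory;

public class AvroDatumDemo {
    public static void main(String[] args) throws IOException {
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
            + "{\"name\":\"id\",\"type\":\"long\"},"
            + "{\"name\":\"name\",\"type\":\"string\"}]}");

        // One datum following the schema.
        GenericRecord datum = new GenericData.Record(schema);
        datum.put("id", 1L);
        datum.put("name", "alice");

        // DatumWriter turns the datum into Avro's binary encoding via an Encoder.
        DatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        writer.write(datum, encoder);
        encoder.flush();
        System.out.println("encoded " + out.size() + " bytes");
    }
}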



1394. Matched text for the word selinux:


  1. SELinux and Firewall If you are new to Ganglia and feel that you have followed all the installation instructions to the letter, but for some reason you are not getting any graphs on the web frontend, you might want to check whether your OS has SELinux enabled by default. Monitoring with Ganglia


  2. There are some network parameters that you might need to change, and you might wish to tweak a number of other things (such as disabling SELinux), but those changes are a matter of preference. High Performance MySQL 3rd Edition


  3. For example, we need to configure users, groups, and system security, such as firewalls and SELinux. Hadoop Operations and Cluster Management Cookbook


  4. SELinux troubles often when you use Fedora/RHEL-based packages and you choose configuration options that do not match the standard policy. bk-admin-guide-cloud-latest


  5. It might also mean configuring any NAT firewall between nodes to allow direct connections, or disabling SELinux or configuring it to allow mysqld to listen to sockets at unprivileged ports. high-availability-guide



1395. Matched text for the word gpl:


  1. If you see logs such as "Could not load native gpl library", there is an issue with the LZO installation. HBase Administration Cookbook


  2. Everyone is given a patent grant to use and modify Java, subject to the GPL, but only on desktop and server platforms. Core Java Volume I Fundamentals 9th Edition


  3. MySQL remains GPL-licensed and open source, with the full source code (except for commercially licensed plugins, of course) available to the community. High Performance MySQL 3rd Edition


  4. This is free software, and you are welcome to modify and redistribute it under the GPL v2 license. Deploying OpenStack


  5. Soon after IBM got involved, CouchDB’s license was changed to use the Apache License rather than the restrictive and cumbersome GNU General Public License (GPL) Beginning CouchDB


  6. At one time LZO/LZOP came bundled with Hadoop, but they have since been removed due to GPL licensing restrictions. Hadoop in Practice


  7. As of this writing, CloudBase has a less active developer community than Pig or Hive, and its GPL license is more restrictive than the Apache license. Hadoop in Action


  8. To download LZO, go to http://code.google.com/a/apache-extras.org/p/hadoop-gpl-com pression and click on the Downloads tab. Programming Pig


  9. Some users might have licensing issues (due to Cygwin’s use of the GPL) Redis Cookbook


  10. This is free software, and you are welcome to modify and redistribute it under the GPL license. Understanding MySQL Internals



1396. Matched text for the word resizable:


  1. What kind of list can you pass? According to the documentation, the list must be modifiable but need not be resizable. Core Java Volume I Fundamentals 9th Edition


  2. They only have resizable containers with significantly more functionality than C/C++/Javastyle arrays. Thinking in Java 4th Edition


  3. Your library contains a Resizable interface that defines many methods a simple resizable shape must support: setHeight, setWidth, getHeight, getWidth, and setAbsoluteSize. Java 8 in Action Lambdas, Streams and Functional-style Programming


  4. You could make the spreadsheet resizable, so that the number of rows and columns can be changed interactively. Programming in Scala 2nd Edition


  5. However, if a frame is in the maximum state, you first need to set it to be resizable by calling setMaximum(false) Core Java Volume II Advanced Features 9th Edition


  6. You’ll notice the call to super() where you will pass in Booleans to the super class to set the JInternalFrame object to be resizable, closable, maximizable, and iconifiable. Java 7 Recipes A Problem-Solution Approach



1397. Matched text for the word tabular:


  1. There is a tabular printout of the numeric metadata, including statistics such as the standard deviation.9 This is followed by a histogram of the response times. High Performance MySQL 3rd Edition


  2. Its name conjured up a tabular structure which it realized with sparse columns and no schema. NoSQL Distilled


  3. As you can see in Figure 7-1, CouchDB does not store data in a tabular, structured way but rather as individual, self-contained documents. Beginning CouchDB


  4. We were blown away by the idea that it might be possible to replace the tabular SQL semantic with a graph-centric model that would be much easier for developers to work with when navigating connected data. Graph Databases


  5. Unstructured text works well when you’re working with scalar or tabular data. Hadoop in Practice


  6. Primitive values as well as collections, arrays, composite data, and tabular data can be viewed. Oracle JRockit The Definitive Guide


  7. A table is a GUI component that can be used to display data in a tabular form (for example, think of a spreadsheet, which is the archetypal application that displays data in a tabular form) Steps in Scala An Introduction to Object-Functional Programming


  8. Solaris and Linux also offer a tabular view of CPU utilization for each virtual processor using the command line tool mpstat. Java Performance


  9. Even with SQL text relegated to property files or other application metadata, there comes a point when working with JDBC not only feels wrong but also simply becomes a cumbersome exercise in taking tabular row and column data and continuously having to convert it back and forth into objects. Pro JPA2


  10. If the distinction between a tabular and object representation of data is new to you, then you probably have a lot of questions. MongoDB in Action



1398. Matched text for the word lightbulb:


  1. Tip Often, an Eclipse error report is accompanied by a lightbulb icon. Core Java Volume I Fundamentals 9th Edition


  2. For example, to define an interface for a remote lightbulb system (a high-tech version of the traditional on-off switch for a networked world), we could define an interface such as the following: Java Network Programming and Distributed Computing


  3. Click the yellow lightbulb to resolve the namespace prefix-URI mapping for h: Java EE 7 Essentials



1399. Matched text for the word npm:


  1. If you exclusively work with JavaScript, this problem can be tackled by using the node package manager (NPM) Gradle in Action


  2. The following code will install the jscheckstyle package: % sudo npm install jscheckstyle -g. Testable JavaScript


  3. Discussion For this recipe, we’ll assume that you have a recent installation of Node.js, as well as npm in order to install the necessary node libraries to support the chat system (Socket.IO and Redis) Redis Cookbook



1400. Matched text for the word subtrees:


  1. When possible, it examines all potential combinations of subtrees, beginning with all one-table plans. High Performance MySQL 3rd Edition


  2. Working with Subtrees As we grow the number of clients, the size of our shared store will also grow. ZeroMQ


  3. Figure 2: Ceph dynamically maps subtrees of the directory hierarchy to metadata servers based on the current workload. Ceph A Scalable, High-Performance Distributed File System


  4. This means that a functional update creates between one and five nodes that each contain up to 32 elements or subtrees. Programming in Scala 2nd Edition


  5. Figure 4.11 shows that a binary tree consists of a root and two subtrees, TL and TR, both of which could possibly be empty. Data Structures and Algorithm Analysis in Java 3rd Edition


  6. We do not have enough space to do the trace view justice, but it is characterized by a global time line (seen at the top) and the ability to expand and collapse subtrees interactively. Dapper a Large-Scale Distributed Systems Tracing Infrastructure


  7. For example, HIR lends itself well to value numbering in expression trees, substituting two equivalent subtrees of an expression with one subtree and a temporary variable assignment. Oracle JRockit The Definitive Guide


  8. Click on the circle icons (the handles) to open up the subtrees. Core Java Volume II Advanced Features 9th Edition


  9. This package contains Source and Result implementations that work with DOM document trees and subtrees. Java in a Nutshell 5th Edition


  10. As you might expect, it is also possible to check only subtrees of the filesystem by providing a path other than the root. Hadoop Operations



Number of source documents referenced: 166
1、A Java ForkJoin Framework
2、A Little Riak Book
3、ActiveMQ in Action
4、Apache Flume Distributed Log Collection for Hadoop
5、Apache Hadoop Goes Realtime at Facebook
6、Apache Hadoop YARN Yet Another Resource Negotiator
7、Apache Kafka
8、Apache Solr 4 Cookbook
9、Beginning CouchDB
10、Beginning Scala
11、Bigtable - A Distributed Storage System for Structured Data
12、Building and Testing with Gradle
13、Cassandra A Decentralized Structured Storage System
14、Cassandra High Performance Cookbook
15、Cassandra The Definitive Guide
16、Ceph A Scalable, High-Performance Distributed File System
17、Chubby - The lock service for loosely-coupled distributed systems
18、Cloudera Impala
19、Concurrent Programming in Java Design Principles and Pattern
20、Core Java Volume I Fundamentals 9th Edition
21、Core Java Volume II Advanced Features 9th Edition
22、CouchDB The Definitive Guide
23、Dapper a Large-Scale Distributed Systems Tracing Infrastructure
24、Data Structures and Algorithm Analysis in Java 3rd Edition
25、Deploying OpenStack
26、Dremel - Interactive Analysis of WebScale Datasets
27、Effective Java 2nd Edition
28、Elasticsearch The Definitive Guide
29、Elasticsearch in Action MEAP V07 Chapter 1
30、FlumeJava - Easy, Efficient Data-Parallel Pipelines
31、Functional Programming Patterns in Scala and Clojure
32、Functional Programming in Scala
33、Fundamental Networking in Java
34、Getting Started with CouchDB
35、Getting Started with Google Guava
36、Getting Started with Storm
37、Gradle Beyond The Basics
38、Gradle Effective Implementation Guide
39、Gradle in Action
40、Graph Databases
41、HBase Administration Cookbook
42、HBase in Action
43、HBase:The Definitive Guide
44、HDFS scalability the limits to growth
45、Hadoop Beginners Guide
46、Hadoop MapReduce Cookbook
47、Hadoop Operations
48、Hadoop Operations and Cluster Management Cookbook
49、Hadoop RealWorld Solutions Cookbook
50、Hadoop in Action
51、Hadoop in Practice
52、HadoopDB An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
53、Hadoop:The Definitive Guide 3rd Edition
54、High Performance MySQL 3rd Edition
55、Hive A Petabyte Scale Data Warehouse Using Hadoop
56、Incoop MapReduce for Incremental Computations
57、Interpreting the Data Parallel Analysis with Sawzall
58、Introduction to Tajo
59、JSR-133 Java Memory Model and Thread
60、Java 7 Concurrency Cookbook
61、Java 7 New Features Cookbook
62、Java 7 Recipes A Problem-Solution Approach
63、Java 8 in Action Lambdas, Streams and Functional-style Programming
64、Java EE 7 Essentials
65、Java EE 7 Specification
66、Java Network Programming 4th Edition
67、Java Network Programming and Distributed Computing
68、Java Performance
69、Java Performance Tuning 2nd Edition
70、Java SE 8 for the Really Impatient
71、Java Security 2nd Edition
72、Java Threads 3rd Edition
73、Java in a Nutshell 5th Edition
74、Jenkins Continuous Integration Cookbook
75、Jenkins The Definitive Guide
76、Kafka a Distributed Messaging System for Log Processing
77、Lucene in Action 2nd Edition
78、Mahout in Action
79、Managing Infrastructure with Puppet
80、MapReduce - Simplied Data Processing on Large Clusters
81、MapReduce Design Patterns
82、MapReduce a flexible data processing tool
83、MapReduce and Parallel DBMSs Friends or Foes
84、Mastering ElasticSearch
85、Megastore - Providing Scalable, Highly Available Storage for Interactive Services
86、Mesos A Platform for Fine-Grained Resource Sharing in the Data Center
87、MongoDB The Definitive Guide 2nd Edition
88、MongoDB in Action
89、Monitoring with Ganglia
90、Namenode High Availability
91、Netty in Action MEAP V08
92、NoSQL Distilled
93、Omega - flexible, scalable schedulers for large compute clusters
94、OpenStack Operations Guide
95、Oracle JRockit The Definitive Guide
96、Percolator - Large-scale Incremental Processing Using Distributed Transactions and Notifications
97、Performance Testing With JMeter 2.9
98、PowerDrill - Processing a Trillion Cells per Mouse Click
99、Pregel  -  A System for Large-Scale Graph Processing
100、Pro Hadoop
101、Pro JPA2
102、Pro Java 7 NIO.2
103、Pro Puppet
104、Pro Spring 3
105、Pro Spring MVC
106、Pro Spring Security
107、Professional Hadoop Solutions
108、Programming Concurrency on the JVM
109、Programming Hive
110、Programming Pig
111、Programming Scala
112、Programming Scala Tackle Multi-Core Complexity on the Java Virtual Machine
113、Programming in Scala 2nd Edition
114、RADOS A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.
115、RCFile A Fast and Space-efficient Data Placement Structure in MapReduce-based Warehouse Systems
116、RabbitMQ in Action
117、Redis Cookbook
118、Redis in Action
119、SQL Antipatterns
120、Scala for the Impatient
121、Scala in Action
122、Scala in Depth
123、Scaling Big Data with Hadoop and Solr Chapter-02 Understanding Solr
124、Scaling CouchDB
125、Scaling MongoDB
126、Selenium 2 Testing Tools Beginner's Guide
127、Selenium Testing Tools Cookbook
128、Solr 3 Enterprise Search Server
129、Solr 3.1 Cookbook
130、Solr in action
131、Sonar Code Quality Testing Essentials
132、Spanner - Google’s Globally-Distributed Database
133、Spring Batch in Action
134、Spring Data Modern Data Access for Enterprise Java
135、Spring Dynamic Modules in Action
136、Spring Integration in Action
137、Spring Roo in Action
138、Spring Security 3.1
139、Spring in Action 4th Edition
140、Spring in Practice
141、Steps in Scala An Introduction to Object-Functional Programming
142、Storage Architecture and Challenges
143、Storm Blueprints Patterns for Distributed Real-time Computation
144、Storm Real-Time Processing Cookbook
145、Tajo A Distributed Data Warehouse System on Large Clusters
146、Tenzing - A SQL Implementation On The MapReduce Framework
147、Testable JavaScript
148、The Definitive Guide to MongoDB
149、The Hadoop Distributed File System
150、The Java Language Specification Java SE 7 Edition
151、The Java Virtual Machine Specification Java SE 7 Edition
152、The Little Redis Book
153、The anatomy of a large-scale hypertextual Web search engine
154、Thinking in Java 4th Edition
155、Tika in Action
156、Understanding MySQL Internals
157、Web Crawling and Data Mining with Apache Nutch
158、Web search for a planet the google cluster architecture
159、ZeroMQ
160、ZooKeeper Distributed Process Coordination
161、admin-user-guide-trunk
162、api-quick-start-onepager
163、bk-admin-guide-cloud-latest
164、bk-compute-adminguide-grizzly
165、high-availability-guide
166、user-guide
