Contents:
- Zero-Knowledge Proof of Balance: A Friendly ZKP Demo
- A flaw of java knowledge found by taking leetcode Contest
- ACK Acknowledgement 确认 AES Advanced Encryption Standard 高级加密标准 ATM Asynchronous Transfer Mode 异步传输模式
- AcknowledgementsBundle
- AI-Knowledge-based agents: propositional logic, propositional theorem proving, propositional mode...
Zero-Knowledge Proof of Balance: A Friendly ZKP Demo
Here at Stratumn Research we are currently applying various cryptographic primitives to give participants finer control over their confidential information, even when it is shared in some way. Our general approach is to utilize zero-knowledge proofs (ZKPs), as they offer the ability to share proofs without sharing data.
In this post, we explore a use case in which users can prove a minimum balance without exposing their actual balance.
Here are a couple of hypothetical examples that point to a category of use cases that interests us, sometimes known as Proof of Balance or Proof of Solvency:
- Satya wants to join the famous Bering Sea Billionaire’s Club, and needs to convince Chloe, the club’s president, that he’s a billionaire. However, he’s not comfortable sharing his actual net worth.
- Antoshka wants to go out dancing, and has to prove he’s over 21 to the bouncer. However, his identification is a foreign passport that contains his nationality, address, travel history, birthdate, and ID number, none of which he wants the bouncer to see.
By now the gist is becoming clear: we want to share some property of our private information (such as its magnitude, or how it interacts with other publicly available data) without revealing the information itself. We refer to this as the selective disclosure of confidential information. Put more emotionally: “I seek your approval, but I don’t trust you.”
In each of these cases, we have a prover and a verifier. The prover needs to demonstrate evidence of the truth of a certain conjecture (I am 21! I deserve to go dance!) to the verifier. This evidence is referred to as a “proof”.
In this context, proofs should be interpreted in the detective sense (“Do you have any proof of that, Sherlock?”) rather than in the mathematical sense (“Here is an inductive proof of your theorem, Watkins.”). That is, we are interested in empirical proofs rather than formal proofs.
Proof of Balance
Note: If you are already familiar with Part I of this series, you may skip this section straight to the code.
Let’s continue with the example in which Satya wants to join the Billionaire’s Club. At the moment, what are Satya’s options?
- Manual hiding: He could print out his bank balance, meet with Chloe, and place his hand over some of the numbers. Or he could ink over some of the numbers he didn’t want to share.
- Trusted third party: He could get a signed letter from the bank attesting that his balance is large enough.
- Hijinx: He could move all but 1 billion dollars out of the account in question, print the statement, and then return the rest.
The second solution, using the Bank as a trusted third party, seems to be the most reasonable. But what if Satya is joining new clubs all the time and the time and process required to get a signed attestation is cramping his style? Is there a way Satya could have a little more control over the situation, that requires less work and interaction from the bank?
If only Satya could prove his minimum balance without asking his bank to sign custom documents every time! What Satya needs is an on-demand proof generator on his side. So let’s cook one up.
A Recipe for On-Demand Proof Generation
Ingredients
First we need three players:
- Prover: Satya
- Verifier: Chloe
- Trusted Third Party: Bank
And then we need some information from our (trusted) bank, namely:
- Bank Statement (BS): A digitally signed document consisting of:
- The name of the prover (Satya)
- The time the balance was taken
- The balance, but encrypted with the prover’s public key
- Prover Kit (PK): A proof-generating executable used by a client wishing to demonstrate their proof of balance. The PK takes two public inputs (an encrypted balance and a balance to prove) and one private input (a private key to decrypt the encrypted balance). The code for the PK is posted publicly and verifiably, so the prover feels safe running it. The PK outputs a confirmation of whether the encrypted balance is greater than or equal to the balance to prove, and a proof (π) that the confirmation was correctly calculated.
- Verifier Kit (VK): A proof-confirming executable used by someone wishing to verify the authenticity of a balance claim. The VK outputs “true” if and only if it is run with the following inputs:
- The same inputs (encrypted balance and balance to prove) used to generate the proof π
- The confirmation generated by the PK
- The proof (π) generated by the PK, ensuring the confirmation is correct
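Conceptually, the relation the PK proves is tiny. Below is a minimal, hypothetical Java sketch of that relation, with a toy additive cipher standing in for the real public-key encryption; in the actual system this comparison is compiled into a zk-SNARK circuit, so neither the balance nor the key is ever revealed:

```java
// Hypothetical sketch of the relation inside the Prover Kit. The toy "cipher"
// (add/subtract a key) only stands in for real encryption of the balance.
public class ProverKitSketch {
    static long encrypt(long balance, long key) { return balance + key; }
    static long decrypt(long encrypted, long key) { return encrypted - key; }

    // The relation being proved: decrypt the balance and compare it
    // with the amount to prove.
    static boolean confirm(long encryptedBalance, long amountToProve, long privateKey) {
        return decrypt(encryptedBalance, privateKey) >= amountToProve;
    }

    public static void main(String[] args) {
        long key = 424242L;                      // Satya's private key (toy value)
        long enc = encrypt(9_999_999_999L, key); // encrypted balance from the BS
        System.out.println(confirm(enc, 1_000_000_000L, key)); // prints "true"
    }
}
```

The point of the ZKP machinery is that Chloe can check this confirmation was computed honestly without ever seeing the decrypted balance or the key.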
Directions
- Setup: Bank sets up the pair, PK and VK. They correspond to each other through being generated with a shared secret key, which we trust the bank to destroy thereafter.
- Commitment: Bank sends Satya his BS
- Proposal: Satya meets Chloe to express his desire to join the club.
- Challenge: Chloe challenges Satya to prove that he has at least 1 billion in his bank balance.
- Construction: Satya runs PK with his encrypted balance (from BS), the balance to prove (1 billion), and his private key, to prove he indeed has the money.
- Response: Satya sends his BS and the outputs of PK (the confirmation and the proof π) over to Chloe.
- Verification: Chloe first checks that the confirmation is positive. Then she verifies the signature on BS to make sure it has not been modified or corrupted. Finally, she runs VK on Satya’s encrypted balance, the balance she challenged Satya to prove, the confirmation, and the proof π.
With this kind of system, anytime Satya runs into a new club he wants (proposes) to join, he can use PK to generate a proof π. As long as the club officials have access to the corresponding VK, he can convince them beyond a reasonable doubt.
Tip
Note that the verifier cannot verify the proof using someone else’s encrypted balance nor change the balance amount to prove. This is because VK must be run on exactly the same inputs as PK in order to verify properly.
So there can be no funny-business at the Club trying out various numbers with the aim to determine Satya’s actual balance.
Satya’s actual bank balance stays private between the Bank and Satya. The proof of that balance, in turn, stays private between Satya and Chloe, since neither needs to contact the Bank to verify it. Thus the same data, Satya’s bank balance, discloses itself in one way between the Bank and Satya, and in another way between Satya and the Club.
The selective disclosure of confidential information is a key feature of this system.
Is this for real?
This kind of proof system actually does exist, and is one example of a general zero-knowledge proof. The approach we used here (zk-SNARKs) has evolved through years of research, and there are some nice technical explanations of it (like these ones). Some more general intuitive explanations use color blindness and caves to illustrate the concepts involved.
Here’s a stab at a brief explanation: In a ZKP, the verifier (Chloe) with the help of a trusted third party (bank) asks a bunch of questions to the prover (Satya). In our implementation, one such key question to Satya is: “Can you decrypt your bank balance?”
If Satya didn’t know the right answer, there’s a pretty low chance that he would answer any of the questions correctly, and therefore, a really low chance he could answer all the questions correctly. However, the questions are so obfuscated that the verifier can’t really learn anything about the prover’s secret through the answers to his own questions.
Thus, by correctly answering questions whose right answers demand knowledge of the secret, the prover can convince the verifier that he indeed knows the secret. In logic, this approach is referred to as proof by elimination. We will come back to this towards the end of this article.
In our demo, the questions are set up by the trusted third party. We’ll go into a little more detail, just enough for our demo to make sense, but those of you who want to understand the underlying math are encouraged to check out the references above.
The Code
The source code for our implementation is located at https://github.com/stratumn/pequin, and is essentially some wrappers around the ZKP implementation done by the fine folks at pepper-project, using the libsnark library. It is based on the theory of verifiable computation, which is a technology to prove the correctness of computations without the need to re-run the computation.
We have prepared a Docker container based on the above repository, which can be run as follows:
# docker run -it stratumn/zk-proof-of-balance bash
Anatomy of the Zero-Knowledge Proof System
A ZKP System consists of:
- The same three players as above:
- Prover (P)
- Verifier (V)
- Trusted third party (T)
- Core
- Function L, which decrypts P’s balance to check whether it is greater than or equal to the amount to be proven
- Public Input x: "1 billion" that is the amount to be proven
- Private Input w: P’s private key to decrypt the amount
- Three algorithms
- Generate: Key generator for
- σ: Prover’s Key
- τ: Verifier’s Key
- Prove: Algorithm which generates a proof given certain public inputs (x), as well as the mandatory private inputs (w)
- Verify: Algorithm which verifies a proof given by P, given the same public inputs (x) and outputs, although the private input w is not needed for this to work
Note: The Prove algorithm along with the prover’s key, and the Verify algorithm along with the verifier’s key, roughly translate to the Prover Kit and the Verifier Kit, respectively, as described earlier.
We will assume that the bank’s commitment, the prover’s proposal, the verifier’s challenge, and the response are already set in place, since these steps are external to the core ZKP approach.
Thus, use of this system proceeds in three phases of decreasing computational complexity and resource usage: setup, construction, and verification.
Phase I: Setup
First, T compiles the function L into a binary circuit C whose output is true if and only if its arguments (the public inputs x and the private inputs w) satisfy the logic in the function L. The logic is to decrypt the balance and check the difference with 1 billion. T may even post this function in plaintext and its compiled equivalent publicly for the sake of transparency.
T then runs Generate to generate keys for the prover and verifier. Generate takes some secret information (known only to T) as an argument that would ideally be destroyed later, so no one else could regenerate the prover’s key in the future. The compiled circuit C is also an input to Generate.
T then makes signed copies of σ and τ available to those who might want to prove or verify with this system.
We encapsulate these steps in the following script:
# ./pob-setup.sh
which generates everything necessary for the prover and verifier to continue
Input to Generate:
- The body of the function (which receives x and w as inputs) to be converted into a binary circuit:
/proof_of_balance/proof_of_balance.c
This is equivalent to L.
- Bank’s secret for generating the keys
- The value of the security parameter k (presented in unary as 1^k, the key length)
Output of Generate:
- Compiled R1CS circuit, which gets embedded in the prover’s key σ (compiled from /proof_of_balance/proof_of_balance.c)
- Keys for
- Verifier τ:
/verification_material/proof_of_balance.vkey
- Prover σ:
/proving_material/proof_of_balance.pkey
- Executables for
- Verifier v:
/bin/pepper_verifier_proof_of_balance
- Prover p:
/bin/pepper_prover_proof_of_balance
Phase II: Construction
When P receives a request for verification from V, P can run Prove with:
- Proving key σ
- Public inputs (x) that P and V agreed upon
- The secret inputs (w) that only P knows
In our case, we run the following script:
# ./pob-prove.sh 9999999999 1000000000
The first argument (9999999999) is the prover’s actual balance, and the second argument is the balance to prove. The proving script encrypts the actual balance with a public key (generated by the script) and then writes the public and private inputs for Prove to prover_verifier_shared/proof_of_balance.inputs and bin/exo0, respectively.
Input to Prove:
- Proving key σ:
/proving_material/proof_of_balance.pkey
- The public input word x (encrypted_balance, amount_to_check):
/prover_verifier_shared/proof_of_balance.inputs
- Prover’s secret input w (NP-witness w for x):
bin/exo0
Output of Prove:
- The result r of the computation:
/prover_verifier_shared/proof_of_balance.outputs
- The proof π of the computation:
/prover_verifier_shared/proof_of_balance.proof
P can then send π to V, along with the public inputs used in the proof generation. In our case, of course, V is using the same machine.
The secret w is in no way related to the secret in the setup phase. The result r of the computation can be 0 or 1, where 1 indicates success.
Phase III: Verification
Once V receives the relevant information from P, as well as the verification key (either from P or from T), V checks that the inputs P used were correct. As there is no point in verifying a conjecture that holds no interest to the verifier, V also checks that the result r indicates success. If V is satisfied with the inputs and outputs, V runs Verify with:
- Verification key τ
- Proof π
- Public inputs (x) that P and V agreed upon
# ./pob-verify.sh
Verify tells the verifier whether the proof is accepted or rejected. It will always fail if the inputs (x) that V provides differ from the inputs that P provided. In this way, V can be sure P ran Prove with the correct inputs, while at the same time P can be sure V cannot learn more about P’s actual balance by running Verify with a variety of inputs.
And why should we trust this?
As the references on ZKP point out, this system has some nice mathematical properties that allow us to trust it, including:
- Completeness: A prover should be able to convince a verifier of anything that is correct. In other words, you can prove anything that’s right. If x is in L and w is a witness for x, then the proof π produced by the prover on input (x, w) will be accepted by the verifier, except possibly with some small probability of doubt.
- Computational soundness: A prover should not be able to convince a verifier of anything that is incorrect. In other words, you cannot prove anything that’s wrong. For any polynomial-time adversary running on input (1^k, pub) and producing a pair (x, π), the probability that x is not in L and yet (x, π) is accepted by V is negligible in k; this residual probability is known as the soundness error.
- Succinctness: The proof can be verified easily. The length of π is polynomial in k.
- Zero knowledge: Nothing more than what is proved is learned; that is, no malicious verifier can figure out the exact balance of Satya. When x ∈ L and the prover is honest, even a malicious verifier V′ “learns nothing” beyond the fact that x ∈ L.
ZKP: Proving a secret without revealing it
For the sake of Satya’s confidentiality, instead of directly revealing his balance, he can prove three things:
- Knowledge: That he is in possession of a legit bank statement, based on which, he knows his bank balance
- Use: That he is using the knowledge of his bank balance as stated in point 1 to claim that he has at least the amount being asked for
- Calculation: That he is accurately calculating if he has enough funds. That is, he has to prove his subtraction of the amount asked from his balance has been carried out correctly.
What the prover is essentially saying:
"I, Satya, have the knowledge of my statement of balance (as of a certain date and time) signed by the bank that you, the club, have trust in. Using that and only that knowledge, I subtract the amount asked from my balance in order to prove to you that I have at least the amount you are looking for."
In order for the verifier to be convinced that the prover has the right secret, she must be able to check
- Knowledge: the prover’s (Satya’s) secret came from a trustworthy source (the bank)
- Use: that he used this trustworthy secret to claim that he has sufficient funds
- Calculation: that he calculated if he has enough funds correctly to prove his claim
What the verifier is essentially saying:
"I, Chloe, will let you into the club if you can prove you have enough funds, even though I have no access to your account balance, as long as the bank has informed you of your account balance and you are using it to calculate the difference correctly."
The centerpiece of every ZKP is figuring out an action that evaluates to true only if the secret to be proved is absolutely necessary for it. In our case, the secret, Satya’s private key, is absolutely necessary to decrypt his balance, and thus we made the decryption the centerpiece of the bank’s circuit. With this, what every zero-knowledge proof is trying to say is: “you must have had knowledge of the secret to be able to run this circuit and get this particular result.” Every ZKP utilizes this proof-by-elimination technique as its basis.
A flaw of java knowledge found by taking leetcode Contest
This morning, I took my first leetcode contest online.
However, it went far beyond what I expected. I didn’t manage to finish in 1 hour and 30 minutes. But by solving the 1st problem, which is marked easy, I realized that I’m not that perfectly familiar with Java.
The problem looks like follows:
> You're now a baseball game point recorder.
> Given a list of strings, each string can be one of the 4 following types:
> Integer (one round's score): Directly represents the number of points you get in this round.
> “+” (one round's score): Represents that the points you get in this round are the sum of the last two valid rounds' points.
> “D” (one round's score): Represents that the points you get in this round are the doubled data of the last valid round's points.
> “C” (an operation, which isn't a round's score): Represents that the last valid round's points were invalid and should be removed.
> Each round's operation is permanent and could have an impact on the round before and the round after. You need to return the sum of the points you get in all the rounds.
And this is my java code:
class Solution {
public int calPoints(String[] ops) {
int sum=0;
int len=ops.length;
boolean valid[] = new boolean[len];
int point[] = new int[len];
for (int i = 0; i < len; i++) {
valid[i]=true;
point[i]=0;
}
for (int i = 0; i < len; i++) {
if (isdigit(ops[i])) {
point[i] = Integer.parseInt(ops[i]);
sum += point[i];
}
else if (ops[i].equals("+")) {
int j=i-1;
while (j>=0) {
if (valid[j]) {
point[i] += point[j];
break;
}
j--;
}
j=j-1;
while (j>=0) {
if (valid[j]) {
point[i] += point[j];
break;
}
j--;
}
sum+=point[i];
}
else if (ops[i].equals("D")) {
int j=i-1;
while (j>=0) {
if (valid[j]) {
point[i] += 2*point[j];
sum+=point[i];
break;
}
j--;
}
}
else if (ops[i].equals("C")) {
valid[i]=false;
int j=i-1;
while (j>=0) {
if (valid[j]) {
valid[j] = false;
sum-=point[j];
break;
}
j--;
}
}
}
return sum;
}
public boolean isdigit(String str) {
int len = str.length();
char [] a;
a = str.toCharArray();
for (int i = 0; i < a.length; i++) {
if (!(a[i] >= '0' && a[i] <= '9')) {
return false;
}
}
return true;
}
}
I spent about 30 minutes finishing it, but 2 hours figuring out what exactly was wrong with my code, because I ran it in my local IDE and it turned out to be right.
However, when I pasted the code into the online editor and ran it, it just could not get the right answer, which made me very confused.
I debugged on the playground, and first I found that the length of the array was not correct. Thus I began to doubt whether array.length returns the correct number.
But it does; I searched online, and that part was right.
Then I found that my code cannot enter the “D”,“C” and “+” section.
Why?
Then the fatal one appeared in my mind: if the code does not enter those sections, the condition must be wrong, so I came to look at the “==”.
And I googled it .
OMG!….
equals() will only compare what it is written to compare, no more, no less. That being said, the equals() method compares the “value” inside String instances (on the heap), while the “==” operator compares the two object references to see whether they refer to the same String instance.
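The difference is easy to demonstrate. In the sketch below, `new String(...)` forces a second instance on the heap, so `==` and `equals()` disagree:

```java
// == compares references; equals() compares character content.
public class StringCompare {
    public static void main(String[] args) {
        String a = "D";
        String b = new String("D");      // a distinct instance with the same content
        System.out.println(a == b);      // false: two different objects
        System.out.println(a.equals(b)); // true: same characters
    }
}
```

Note that string literals are interned, so a comparison like `"D" == "D"` can even appear to work; once the strings come from parsed input, as on the judge, `==` fails.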
This tiny difference cost me more than 2 hours!
A SOLID FOUNDATION OF LANGUAGE IS FAR MORE IMPORTANT THAN YOU EXPECTED!
ACK Acknowledgement 确认 AES Advanced Encryption Standard 高级加密标准 ATM Asynchronous Transfer Mode异步传输模式
ACK Acknowledgement 确认
AES Advanced Encryption Standard 高级加密标准
ATM Asynchronous Transfer Mode异步传输模式
ADSL Asymmetric Digital Subscriber Line 非对称数字用户线路
ALOHA Aloha协议
ARP Address Resolution Protocol 地址解析协议
BGP Border Gateway Protocol 边界网关协议
CIDR Classless Inter Domain Routing 无类别域间路由
CRC Cyclic Redundancy Check 循环冗余校验码
CDMA Code Division Multiple Access 码分多路访问
CSMA/CD Carrier Sense Multiple Access with Collision Detection 载波监听多路访问/冲突检测
CSMA/CA Carrier Sense Multiple Access with Collision Avoidance载波监听多路访问/冲突避免
DHCP Dynamic Host Configuration Protocol 动态主机配置协议
DES Data Encryption Standard 数据加密标准
DNS Domain Name System 域名系统
EGP Exterior Gateway Protocol外部网关协议
FCS Frame Check Sequence帧检验序列.
FDDI Fiber Distributed Data Interface 光纤分布式数据接口
FR Frame Relay 帧中继
IFS Inter Frame Spacing 帧间隔
FTP File Transfer Protocol文件传输协议
FTTH Fiber To The Home 光纤到户
FDM Frequency Division Multiplexing 频分多路复用
HDLC High-level Data Link Control高级数据链路控制
HTML Hyper Text Markup Language 超文本标记语言
HTT
Source: CSDN - TrueDei.
AcknowledgementsBundle
About AcknowledgementsBundle
AcknowledgementsBundle makes it easy to use your CocoaPods acknowledgements to add a Settings.bundle to your iOS app.
AcknowledgementsBundle website:
https://github.com/rivera-ernesto/AcknowledgementsBundle
AI-Knowledge-based agents: propositional logic, propositional theorem proving, propositional mode...
Knowledge-based agents
Intelligent agents need knowledge about the world in order to reach good decisions.
Knowledge is contained in agents in the form of sentences in a knowledge representation language that are stored in a knowledge base.
Knowledge base (KB): a set of sentences, is the central component of a knowledge-based agent. Each sentence is expressed in a language called a knowledge representation language and represents some assertion about the world.
Axiom: Sometimes we dignify a sentence with the name axiom, when the sentence is taken as given without being derived from other sentences.
TELL: The operation to add new sentences to the knowledge base.
ASK: The operation to query what is known.
Inference: Deriving new sentences from old ones. Both TELL and ASK may involve inference.
The outline of a knowledge-based program:
A knowledge-base agent is composed of a knowledge base and an inference mechanism. It operates by storing sentences about the world in its knowledge base, using the inference mechanism to infer new sentences, and using these sentences to decide what action to take.
The knowledge-based agent is not an arbitrary program for calculating actions; it is amenable to a description at the knowledge level, where we specify only what the agent knows and what its goals are in order to fix its behavior. The analysis is independent of the implementation level.
Declarative approach: A knowledge-based agent can be built simply by TELLing it what it needs to know. Starting with an empty knowledge base, the agent designer can TELL sentences one by one until the agent knows how to operate in its environment.
Procedural approach: encodes desired behaviors directly as program code.
A successful agent often combines both declarative and procedural elements in its design.
A fundamental property of logical reasoning: The conclusion is guaranteed to be correct if the available information is correct.
Logic
A representation language is defined by its syntax, which specifies the structure of sentences, and its semantics, which defines the truth of each sentence in each possible world or model.
Syntax: The sentences in KB are expressed according to the syntax of the representation language, which specifies all the sentences that are well formed.
Semantics: The semantics defines the truth of each sentence with respect to each possible world.
Models: We use the term model in place of “possible world” when we need to be precise. Possible worlds might be thought of as (potentially) real environments that the agent might or might not be in; models are mathematical abstractions, each of which simply fixes the truth or falsehood of every relevant sentence.
If a sentence α is true in model m, we say that m satisfies α, or m is a model of α. Notation M(α) means the set of all models of α.
The relationship of entailment between sentence is crucial to our understanding of reasoning. A sentence α entails another sentence β if β is true in all world where α is true. Equivalent definitions include the validity of the sentence α⇒β and the unsatisfiability of sentence α∧¬β.
Logical entailment: The relation between a sentence and another sentence that follows from it.
Mathematical notation: α ⊨ β: α entails the sentence β.
Formal definition of entailment:
α ⊨ β if and only if M(α) ⊆ M(β)
i.e. α ⊨ β if and only if, in every model in which α is true, β is also true.
(Notice: if α ⊨ β, then α is a stronger assertion than β: it rules out more possible worlds. )
Logical inference: The definition of entailment can be applied to derive conclusions.
E.g. apply this analysis to the wumpus world.
The KB is false in models that contradict what the agent knows. (e.g. The KB is false in any model in which [1,2] contains a pit because there is no breeze in [1, 1]).
Consider two possible conclusions, α1 and α2.
We see: in every model in which KB is true, α1 is also true. Hence KB ⊨ α1, so the agent can conclude that there is no pit in [1, 2].
We see: in some models in which KB is true, α2 is false. Hence KB ⊭ α2, so the agent cannot conclude that there is no pit in [2, 2].
The inference algorithm used is called model checking: Enumerate all possible models to check that α is true in all models in which KB is true, i.e. M(KB) ⊆ M(α).
If an inference algorithm i can derive α from KB, we write KB ⊢i α, pronounced “α is derived from KB by i” or “i derives α from KB.”
Sound/truth preserving: An inference algorithm that derives only entailed sentences. Soundness is a highly desirable property. (e.g. model checking is a sound procedure when it is applicable.)
Completeness: An inference algorithm is complete if it can derive any sentence that is entailed. Completeness is also a desirable property.
Inference is the process of deriving new sentences from old ones. Sound inference algorithms derive only sentences that are entailed; complete algorithms derive all sentences that are entailed.
If KB is true in the real world, then any sentence α derived from KB by a sound inference procedure is also true in the real world.
Grounding: The connection between logical reasoning process and the real environment in which the agent exists.
In particular, how do we know that KB is true in the real world?
Propositional logic
Propositional logic is a simple language consisting of proposition symbols and logical connectives. It can handle propositions that are known true, known false, or completely unknown.
1. Syntax
The syntax defines the allowable sentences.
Atomic sentences: consist of a single proposition symbol; each such symbol stands for a proposition that can be true or false. (e.g. W1,3 stands for the proposition that the wumpus is in [1, 3].)
Complex sentences: constructed from simpler sentences, using parentheses and logical connectives.
2. Semantics
The semantics defines the rules for determining the truth of a sentence with respect to a particular model.
The semantics for propositional logic must specify how to compute the truth value of any sentence, given a model.
For atomic sentences: The truth value of every other proposition symbol must be specified directly in the model.
For complex sentences:
A simple inference procedure
To decide whether KB ⊨ α for some sentence α:
Algorithm 1: Model-checking approach
Enumerate the models (assignments of true or false to every relevant proposition symbol), check that α is true in every model in which KB is true.
e.g.
TT-ENTAILS?: A general algorithm for deciding entailment in propositional logic, performs a recursive enumeration of a finite space of assignments to symbols.
Sound and complete.
Time complexity: O(2^n)
Space complexity: O(n), if KB and α contain n symbols in all.
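The enumeration idea behind TT-ENTAILS? fits in a few lines of Java. In this hypothetical sketch, a sentence is a predicate over an assignment array, and we check that α holds in every model where KB holds:

```java
// Minimal model-checking sketch: enumerate all 2^n assignments to the
// proposition symbols and verify that alpha is true wherever kb is true.
import java.util.function.Predicate;

public class ModelCheck {
    // KB ⊨ α iff every model of KB is also a model of α.
    static boolean entails(int n, Predicate<boolean[]> kb, Predicate<boolean[]> alpha) {
        for (int bits = 0; bits < (1 << n); bits++) {
            boolean[] m = new boolean[n];
            for (int i = 0; i < n; i++) m[i] = ((bits >> i) & 1) == 1;
            if (kb.test(m) && !alpha.test(m)) return false; // a model of KB where α fails
        }
        return true;
    }

    public static void main(String[] args) {
        // Symbols: m[0] = P, m[1] = Q.  KB = (P ⇒ Q) ∧ P.
        Predicate<boolean[]> kb = m -> (!m[0] || m[1]) && m[0];
        System.out.println(entails(2, kb, m -> m[1]));  // KB ⊨ Q : true
        System.out.println(entails(2, kb, m -> !m[1])); // KB ⊨ ¬Q : false
    }
}
```

The loop makes the complexity bounds above concrete: the outer loop runs 2^n times, while only one n-element assignment is stored at a time.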
Propositional theorem proving
We can determine entailment by model checking (enumerating models, introduced above) or theorem proving.
Theorem proving: Applying rules of inference directly to the sentences in our knowledge base to construct a proof of the desired sentence without consulting models.
Inference rules are patterns of sound inference that can be used to find proofs. The resolution rule yields a complete inference algorithm for knowledge bases that are expressed in conjunctive normal form. Forward chaining and backward chaining are very natural reasoning algorithms for knowledge bases in Horn form.
Logical equivalence:
Two sentences α and β are logically equivalent if they are true in the same set of models. (write as α ≡ β).
Also: α ≡ β if and only if α ⊨ β and β ⊨ α.
Validity: A sentence is valid if it is true in all models.
Valid sentences are also known as tautologies—they are necessarily true. Every valid sentence is logically equivalent to True.
The deduction theorem: For any sentence αand β, α ⊨ β if and only if the sentence (α ⇒ β) is valid.
Satisfiability: A sentence is satisfiable if it is true in, or satisfied by, some model. Satisfiability can be checked by enumerating the possible models until one is found that satisfies the sentence.
The SAT problem: The problem of determining the satisfiability of sentences in propositional logic.
Validity and satisfiability are connected:
α is valid iff ¬α is unsatisfiable;
α is satisfiable iff ¬α is not valid;
α ⊨ β if and only if the sentence (α∧¬β) is unsatisfiable.
Proving β from α by checking the unsatisfiability of (α∧¬β) corresponds to proof by refutation / proof by contradiction.
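This last connection can be spot-checked by brute force. The hypothetical sketch below enumerates assignments over two symbols (P = m[0], Q = m[1]) and confirms that when α ⊨ β, the sentence α ∧ ¬β has no satisfying model:

```java
// Brute-force satisfiability check: a sentence is satisfiable iff some
// assignment makes it true.
import java.util.function.Predicate;

public class RefutationCheck {
    static boolean satisfiable(int n, Predicate<boolean[]> s) {
        for (int bits = 0; bits < (1 << n); bits++) {
            boolean[] m = new boolean[n];
            for (int i = 0; i < n; i++) m[i] = ((bits >> i) & 1) == 1;
            if (s.test(m)) return true; // found a satisfying model
        }
        return false;
    }

    public static void main(String[] args) {
        Predicate<boolean[]> alpha = m -> m[0] && m[1]; // α = P ∧ Q
        Predicate<boolean[]> beta  = m -> m[0];         // β = P
        // α ⊨ β here, so α ∧ ¬β should be unsatisfiable:
        System.out.println(satisfiable(2, m -> alpha.test(m) && !beta.test(m))); // false
    }
}
```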
Inference and proofs
Inference rules (such as Modus Ponens and And-Elimination) can be applied to derive a proof.
·Modus Ponens:
Whenever any sentences of the form α⇒β and α are given, then the sentence β can be inferred.
·And-Elimination:
From a conjunction, any of the conjuncts can be inferred.
·All of logical equivalence (in Figure 7.11) can be used as inference rules.
e.g. The equivalence for biconditional elimination yields 2 inference rules:
·De Morgan’s rule
We can apply any of the search algorithms in Chapter 3 to find a sequence of steps that constitutes a proof. We just need to define a proof problem as follows:
·INITIAL STATE: the initial knowledge base;
·ACTION: the set of actions consists of all the inference rules applied to all the sentences that match the top half of the inference rule.
·RESULT: the result of an action is to add the sentence in the bottom half of the inference rule.
·GOAL: the goal is a state that contains the sentence we are trying to prove.
In many practical cases, finding a proof can be more efficient than enumerating models, because the proof can ignore irrelevant propositions, no matter how many of them there are.
Monotonicity: A property of a logical system saying that the set of entailed sentences can only increase as information is added to the knowledge base.
For any sentences α and β,
If KB ⊨ α, then KB ∧ β ⊨ α.
Monotonicity means that inference rules can be applied whenever suitable premises are found in the knowledge base; whatever else the knowledge base contains cannot invalidate any conclusion already inferred.
Proof by resolution
Resolution: An inference rule that yields a complete inference algorithm when coupled with any complete search algorithm.
Clause: A disjunction of literals. (e.g. A∨B). A single literal can be viewed as a unit clause (a disjunction of one literal).
Unit resolution inference rule: Takes a clause and a literal and produces a new clause:
from l1 ∨ … ∨ lk and m, infer l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk,
where each l is a literal, and li and m are complementary literals (one is the negation of the other).
Full resolution rule: Takes 2 clauses and produces a new clause:
from l1 ∨ … ∨ lk and m1 ∨ … ∨ mn, infer l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj−1 ∨ mj+1 ∨ … ∨ mn,
where li and mj are complementary literals.
Notice: The resulting clause should contain only one copy of each literal. The removal of multiple copies of a literal is called factoring.
e.g. resolving (A∨B) with (A∨¬B) yields (A∨A), which reduces to just A.
The resolution rule is sound and complete.
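The resolution step itself is a few lines of code if clauses are represented as sets of literals; sets also give factoring for free. A sketch (the "~"-prefix convention and names are mine):

```python
# The full resolution rule on clauses represented as frozensets of
# string literals; negation is written with a "~" prefix.
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses; frozensets factor duplicates away."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append(frozenset(c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# e.g. resolving (A ∨ B) with (A ∨ ¬B) yields (A ∨ A), factored to just A.
print(resolve(frozenset({"A", "B"}), frozenset({"A", "~B"})))
# [frozenset({'A'})]
```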
Conjunctive normal form
Conjunctive normal form (CNF): A sentence expressed as a conjunction of clauses is said to be in CNF.
Every sentence of propositional logic is logically equivalent to a conjunction of clauses; after converting a sentence to CNF, it can be used as input to a resolution procedure.
A resolution algorithm
e.g.
KB = (B1,1⟺(P1,2∨P2,1))∧¬B1,1
α = ¬P1,2
Notice: Any clause in which two complementary literals appear can be discarded, because it is always equivalent to True.
e.g. B1,1∨¬B1,1∨P1,2 = True∨P1,2 = True.
PL-RESOLUTION is complete.
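A sketch of PL-RESOLUTION on the example above, with the KB converted to CNF by hand (clause representation and names are my own; this is an illustration, not the textbook's pseudocode verbatim):

```python
# PL-RESOLUTION: prove α by refutation, saturating the clause set
# KB ∧ ¬α under resolution until the empty clause appears (unsatisfiable)
# or a fixed point is reached (satisfiable).
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def pl_resolution(clauses):
    """True iff the clause set is unsatisfiable (empty clause derivable)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True  # derived the empty clause
                if not any(negate(l) in r for l in r):  # discard tautologies
                    new.add(r)
        if new <= clauses:
            return False         # fixed point: satisfiable
        clauses |= new

# CNF of KB = (B11 ⟺ (P12 ∨ P21)) ∧ ¬B11:
kb = [frozenset({"~B11", "P12", "P21"}), frozenset({"~P12", "B11"}),
      frozenset({"~P21", "B11"}), frozenset({"~B11"})]
# Add ¬α = P12; unsatisfiability means KB ⊨ ¬P12.
print(pl_resolution(kb + [frozenset({"P12"})]))  # True
```

Note the tautology check implements the "discard clauses with complementary literals" observation above.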
Horn clauses and definite clauses
Definite clause: A disjunction of literals of which exactly one is positive. (e.g. ¬ L1,1∨¬Breeze∨B1,1)
Every definite clause can be written as an implication, whose premise is a conjunction of positive literals and whose conclusion is a single positive literal.
Horn clause: A disjunction of literals of which at most one is positive. (All definite clauses are Horn clauses.)
In Horn form, the premise is called the body and the conclusion is called the head.
A sentence consisting of a single positive literal is called a fact, it too can be written in implication form.
Horn clauses are closed under resolution: if you resolve two Horn clauses, you get back a Horn clause.
Inference with horn clauses can be done through the forward-chaining and backward-chaining algorithms.
Deciding entailment with Horn clauses can be done in time that is linear in the size of the knowledge base.
Goal clause: A clause with no positive literals.
Forward and backward chaining
forward-chaining algorithm: PL-FC-ENTAILS?(KB, q) (runs in linear time)
Forward chaining is sound and complete.
e.g. A knowledge base of horn clauses with A and B as known facts.
fixed point: The algorithm reaches a fixed point where no new inferences are possible.
Data-driven reasoning: Reasoning in which the focus of attention starts with the known data. It can be used within an agent to derive conclusions from incoming percepts, often without a specific query in mind. (Forward chaining is an example.)
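The linear-time forward-chaining idea can be sketched as follows, keeping a count of unproved premises per rule (rule format and names are my own; the example KB with facts A and B follows the standard Horn-clause illustration):

```python
# PL-FC-ENTAILS? for definite clauses: each rule is (premises, conclusion).
from collections import deque

def fc_entails(rules, facts, q):
    """Forward chaining: does the definite-clause KB entail symbol q?"""
    count = {i: len(prem) for i, (prem, _) in enumerate(rules)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(rules):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:       # all premises proved: fire the rule
                    agenda.append(concl)
    return False                        # fixed point: no new inferences

# KB: A, B, (A ∧ B ⇒ L), (L ∧ B ⇒ M), (L ∧ M ⇒ P), (P ⇒ Q)
rules = [({"A", "B"}, "L"), ({"L", "B"}, "M"), ({"L", "M"}, "P"), ({"P"}, "Q")]
print(fc_entails(rules, ["A", "B"], "Q"))  # True
```

Each symbol enters the agenda at most once and each rule's counter is decremented at most once per premise, which is where the linear running time comes from.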
Backward-chaining algorithm: works backward from the query.
If the query q is known to be true, no work is needed;
Otherwise the algorithm finds those implications in the KB whose conclusion is q. If all the premises of one of those implications can be proved true (by backward chaining), then q is true. (runs in linear time)
in the corresponding AND-OR graph: it works back down the graph until it reaches a set of known facts.
(Backward-chaining algorithm is essentially identical to the AND-OR-GRAPH-SEARCH algorithm.)
Backward-chaining is a form of goal-directed reasoning.
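Over the same rule format, the goal-directed direction can be sketched recursively: prove q by proving all premises of some rule that concludes q (names are my own; the visited set guards against cycles, mirroring the AND-OR graph search):

```python
# Backward chaining over definite clauses (premises, conclusion).
def bc_entails(rules, facts, q, visited=frozenset()):
    if q in facts:
        return True
    if q in visited:                   # avoid infinite regress through cycles
        return False
    for prem, concl in rules:
        if concl == q and all(bc_entails(rules, facts, p, visited | {q})
                              for p in prem):
            return True
    return False

rules = [({"A", "B"}, "L"), ({"L", "B"}, "M"), ({"L", "M"}, "P"), ({"P"}, "Q")]
print(bc_entails(rules, {"A", "B"}, "Q"))  # True
```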
Effective propositional model checking
The set of possible models, given a fixed propositional vocabulary, is finite, so entailment can be checked by enumerating models. Efficient model-checking inference algorithms for propositional logic include backtracking and local search methods and can often solve large problems quickly.
2 families of algorithms for the SAT problem based on model checking:
a. based on backtracking
b. based on local hill-climbing search
1. A complete backtracking algorithm
Davis–Putnam algorithm (DPLL, after Davis, Putnam, Logemann and Loveland):
DPLL embodies 3 improvements over the scheme of TT-ENTAILS?: Early termination, pure symbol heuristic, unit clause heuristic.
Tricks that enable SAT solvers to scale up to large problems: Component analysis, variable and value ordering, intelligent backtracking, random restarts, clever indexing.
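The three DPLL improvements over plain enumeration can be sketched compactly (clause representation and names are my own; real solvers add the scaling tricks listed above):

```python
# DPLL with early termination, the pure-symbol heuristic, and the
# unit-clause heuristic. Clauses are frozensets of "~"-prefixed literals.
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def simplify(clauses, lit):
    """Assign lit=True: drop satisfied clauses, remove ¬lit from the rest."""
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause satisfied: early termination
        out.append(c - {negate(lit)})
    return out

def dpll(clauses):
    if not clauses:
        return True                       # all clauses satisfied
    if frozenset() in clauses:
        return False                      # empty clause: contradiction
    lits = {l for c in clauses for l in c}
    # Pure symbol heuristic: a literal whose negation never appears.
    for l in lits:
        if negate(l) not in lits:
            return dpll(simplify(clauses, l))
    # Unit clause heuristic: a one-literal clause forces its value.
    for c in clauses:
        if len(c) == 1:
            return dpll(simplify(clauses, next(iter(c))))
    # Otherwise branch on an arbitrary literal.
    l = next(iter(lits))
    return dpll(simplify(clauses, l)) or dpll(simplify(clauses, negate(l)))

# (A ∨ B) ∧ (¬A ∨ B) ∧ ¬B is unsatisfiable; without ¬B it is satisfiable.
print(dpll([frozenset({"A", "B"}), frozenset({"~A", "B"}), frozenset({"~B"})]))  # False
print(dpll([frozenset({"A", "B"}), frozenset({"~A", "B"})]))  # True
```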
Local search algorithms
Local search algorithms can be applied directly to the SAT problem, provided that we choose the right evaluation function. (We can choose an evaluation function that counts the number of unsatisfied clauses.)
These algorithms take steps in the space of complete assignments, flipping the truth value of one symbol at a time.
The space usually contains many local minima, to escape from which various forms of randomness are required.
Local search methods such as WALKSAT can be used to find solutions. Such algorithms are sound but not complete.
WALKSAT: one of the simplest and most effective algorithms.
On every iteration, the algorithm picks an unsatisfied clause, and chooses randomly between 2 ways to pick a symbol to flip:
Either a. a “min-conflicts” step that minimizes the number of unsatisfied clauses in the new state;
Or b. a “random walk” step that picks the symbol randomly.
When the algorithm returns a model, the input sentence is indeed satisfiable;
When the algorithm returns failure, there are 2 possible causes:
Either a. The sentence is unsatisfiable;
Or b. We need to give the algorithm more time.
If we set max_flips=∞, p>0, the algorithm will:
Either a. eventually return a model if one exists
Or b. never terminate if the sentence is unsatisfiable.
Thus WALKSAT is useful when we expect a solution to exist, but cannot always detect unsatisfiability.
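The two flip-selection rules above can be sketched directly (clause format as before; names and the fixed seed are my own, added for reproducibility):

```python
# WALKSAT: on each iteration pick an unsatisfied clause, then flip one of
# its symbols, chosen at random with probability p or by min-conflicts.
import random

def satisfied(clause, model):
    # "A" is true iff model["A"]; "~A" is true iff not model["A"].
    return any(model[l.lstrip("~")] != l.startswith("~") for l in clause)

def walksat(clauses, p=0.5, max_flips=10000, seed=0):
    rng = random.Random(seed)
    symbols = {l.lstrip("~") for c in clauses for l in c}
    model = {s: rng.choice([True, False]) for s in symbols}
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, model)]
        if not unsat:
            return model                  # a satisfying assignment
        clause = rng.choice(unsat)
        if rng.random() < p:              # a. random-walk step
            sym = rng.choice(sorted(clause)).lstrip("~")
        else:                             # b. min-conflicts step
            def unsat_after_flip(s):
                model[s] = not model[s]
                n = sum(not satisfied(c, model) for c in clauses)
                model[s] = not model[s]
                return n
            sym = min((l.lstrip("~") for l in clause), key=unsat_after_flip)
        model[sym] = not model[sym]
    return None                           # failure: may still be satisfiable

clauses = [frozenset({"A", "B"}), frozenset({"~A", "B"}), frozenset({"~B", "C"})]
m = walksat(clauses)
print(m is not None and all(satisfied(c, m) for c in clauses))  # True
```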
The landscape of random SAT problems
Underconstrained problem: When we look at satisfiability problems in CNF, an underconstrained problem is one with relatively few clauses constraining the variables.
An overconstrained problem has many clauses relative to the number of variables and is likely to have no solutions.
The notation CNFk(m, n) denotes a k-CNF sentence with m clauses and n symbols (k literals per clause).
Random sentences are generated by choosing clauses uniformly, independently, and without replacement from among all clauses with k different literals, each literal positive or negative at random.
Hardness: problems right at the threshold > overconstrained problems > underconstrained problems
Satisfiability threshold conjecture: For every k ≥ 3, there is a threshold ratio rk such that, as n goes to infinity, the probability that CNFk(rn, n) is satisfiable becomes 1 for all values of r below the threshold and 0 for all values above. (The conjecture remains unproven.)
Agents based on propositional logic
1. The current state of the world
We can associate propositions with timestamps to avoid contradiction.
e.g. ¬Stench3, Stench4
Fluent: refers to an aspect of the world that changes. (e.g. Ltx,y)
atemporal variables: Symbols associated with permanent aspects of the world do not need a time superscript.
Effect axioms: specify the outcome of an action at the next time step.
Frame problem: information is lost because the effect axioms fail to state what remains unchanged as the result of an action.
Solution: add frame axioms explicitly asserting all the propositions that remain the same.
Representational frame problem: The proliferation of frame axioms is inefficient; the set of frame axioms will be O(mn) in a world with m different actions and n fluents.
Solution: because the world exhibits locality (for humans, each action typically changes no more than some number k of fluents), define the transition model with a set of axioms of size O(mk) rather than O(mn).
Inferential frame problem: The problem of projecting forward the results of a t-step plan of actions in time O(kt) rather than O(nt).
Solution: change one’s focus from writing axioms about actions to writing axioms about fluents.
For each fluent F, we will have an axiom that defines the truth value of Ft+1 in terms of fluents at time t and the action that may have occurred at time t.
The truth value of Ft+1 can be set in one of 2 ways:
Either a. The action at time t cause F to be true at t+1
Or b. F was already true at time t and the action at time t does not cause it to be false.
An axiom of this form is called a successor-state axiom and has this schema:
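The schema (reconstructed here from the two cases just listed, with a concrete wumpus-world instance for the agent's arrow) can be written:

```latex
% Successor-state axiom schema: F holds next iff an action caused it,
% or it already held and no action caused it to stop holding.
F^{t+1} \iff \mathit{ActionCausesF}^{t} \lor
             \left(F^{t} \land \lnot \mathit{ActionCausesNotF}^{t}\right)

% e.g. the agent keeps its arrow unless it shoots:
\mathit{HaveArrow}^{t+1} \iff
    \left(\mathit{HaveArrow}^{t} \land \lnot \mathit{Shoot}^{t}\right)
```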
Qualification problem: specifying all unusual exceptions that could cause the action to fail.
2. A hybrid agent
Hybrid agent: combines the ability to deduce various aspects of the state of the world with condition-action rules and with problem-solving algorithms.
The agent maintains and updates a knowledge base as well as a current plan.
The initial KB contains the atemporal axioms. (don’t depend on t)
At each time step, the new percept sentence is added along with all the axioms that depend on t (such as the successor-state axioms).
Then the agent uses logical inference by ASKing questions of the KB (to work out which squares are safe and which have yet to be visited).
The main body of the agent program constructs a plan based on a decreasing priority of goals:
1. If there is a glitter, construct a plan to grab the gold, follow a route back to the initial location and climb out of the cave;
2. Otherwise if there is no current plan, plan a route (with A* search) to the closest safe square unvisited yet, making sure the route goes through only safe squares;
3. If there are no safe squares to explore and the agent still has an arrow, try to make a safe square by shooting at one of the possible wumpus locations.
4. If this fails, look for a square to explore that is not provably unsafe.
5. If there is no such square, the mission is impossible; retreat to the initial location and climb out of the cave.
Weakness: The computational expense goes up as time goes by.
3. Logical state estimation
To get a constant update time, we need to cache the result of inference.
Belief state: Some representation of the set of all possible current states of the world. (Used to replace the past history of percepts and all their ramifications.)
e.g.
We use a logical sentence involving the proposition symbols associated with the current time step and the temporal symbols.
Logical state estimation involves maintaining a logical sentence that describes the set of possible states consistent with the observation history. Each update step requires inference using the transition model of the environment, which is built from successor-state axioms that specify how each fluent changes.
State estimation: The process of updating the belief state as new percepts arrive.
Exact state estimation may require logical formulas whose size is exponential in the number of symbols.
One common scheme for approximate state estimation is to represent the belief state as a conjunction of literals (a 1-CNF formula).
The agent simply tries to prove Xt and ¬Xt for each symbol Xt, given the belief state at t-1.
The conjunction of provable literals becomes the new belief state, and the previous belief state is discarded.
(This scheme may lose some information as time goes along.)
The set of possible states represented by the 1-CNF belief state includes all states that are in fact possible given the full percept history. The 1-CNF belief state acts as a simple outer envelope, or conservative approximation.
4. Making plans by propositional inference
We can make plans by logical inference instead of A* search in Figure 7.20.
Basic idea:
1. Construct a sentence that includes:
a) Init0: a collection of assertions about the initial state;
b) Transition1, …, Transitiont: The successor-state axioms for all possible actions at each time up to some maximum time t;
c) HaveGoldt∧ClimbedOutt: The assertion that the goal is achieved at time t.
2. Present the whole sentence to a SAT solver. If the solver finds a satisfying model, the goal is achievable; otherwise, planning is impossible.
3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true.
Together they represent a plan to achieve the goals.
Decisions within a logical agent can be made by SAT solving: finding possible models specifying future action sequences that reach the goal. This approach works only for fully observable or sensorless environments.
SATPLAN: A propositional planning algorithm. (Cannot be used in a partially observable environment.)
SATPLAN finds models for a sentence containing the initial state, the goal, the successor-state axioms, and the action exclusion axioms.
(Because the agent does not know how many steps it will take to reach the goal, the algorithm tries each possible number of steps t up to some maximum conceivable plan length Tmax.)
Precondition axioms: stating that an action occurrence requires the preconditions to be satisfied, added to avoid generating plans with illegal actions.
Action exclusion axioms: added to avoid the creation of plans with multiple simultaneous actions that interfere with each other.
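The whole SATPLAN recipe fits a toy example: a 1-D world with squares 0 and 1, one Forward action, and the goal "at square 1 at time T". All names and the brute-force "SAT solver" below are my own illustration, not the textbook's encoding:

```python
# Toy SATPLAN: build variables At0_t, At1_t, Fwd_t; encode the initial
# state, successor-state axioms, precondition axioms, and the goal; then
# look for a satisfying model by enumeration and read the plan off it.
from itertools import product

T = 2  # plan horizon (a real SATPLAN tries t = 0, 1, ..., Tmax)

def variables(T):
    vs = []
    for t in range(T + 1):
        vs += [f"At0_{t}", f"At1_{t}"]
    for t in range(T):
        vs.append(f"Fwd_{t}")
    return vs

def sentence(m):
    # Init: at square 0, not at square 1.
    if not (m["At0_0"] and not m["At1_0"]):
        return False
    for t in range(T):
        # Successor-state axioms: Forward moves 0 -> 1; otherwise stay put.
        if m[f"At1_{t+1}"] != (m[f"Fwd_{t}"] or m[f"At1_{t}"]):
            return False
        if m[f"At0_{t+1}"] != (m[f"At0_{t}"] and not m[f"Fwd_{t}"]):
            return False
        # Precondition axiom: Forward requires being at square 0.
        if m[f"Fwd_{t}"] and not m[f"At0_{t}"]:
            return False
    return m[f"At1_{T}"]  # goal

vs = variables(T)
for values in product([False, True], repeat=len(vs)):
    model = dict(zip(vs, values))
    if sentence(model):
        # Extract the action variables assigned true: that is the plan.
        plan = [t for t in range(T) if model[f"Fwd_{t}"]]
        print("Forward at times:", plan)
        break
```

With only one action per step, action exclusion axioms are vacuous here; with several actions they would rule out simultaneous interfering actions in the same way the precondition check rules out illegal ones.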
Propositional logic does not scale to environments of unbounded size because it lacks the expressive power to deal concisely with time, space, and universal patterns of relationships among objects.