From 74f71028a5957df10b37eccbbc4d484adaff47d9 Mon Sep 17 00:00:00 2001 From: wujian2023 Date: Wed, 1 Nov 2023 17:21:44 +0800 Subject: [PATCH] =?UTF-8?q?=E4=B8=BAGit=E6=9C=8D=E5=8A=A1=E5=A2=9E?= =?UTF-8?q?=E5=8A=A0Nostr=E5=8D=8F=E8=AE=AE=E6=94=AF=E6=8C=81=E5=88=86?= =?UTF-8?q?=E5=B8=83=E5=BC=8F=E5=8D=8F=E4=BD=9C=20-=200.1.0?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- gitnip-rs/Nostr_ActivityPub.md | 123 ++++ gitnip-rs/README.md | 5 + gitnip-rs/p2p/BUILD | 46 ++ gitnip-rs/p2p/Cargo.toml | 28 + gitnip-rs/p2p/README.md | 61 ++ gitnip-rs/p2p/src/lib.rs | 35 ++ gitnip-rs/p2p/src/network/behaviour.rs | 103 +++ gitnip-rs/p2p/src/network/event_handler.rs | 697 +++++++++++++++++++++ gitnip-rs/p2p/src/network/mod.rs | 12 + gitnip-rs/p2p/src/node/client.rs | 260 ++++++++ gitnip-rs/p2p/src/node/input_command.rs | 318 ++++++++++ gitnip-rs/p2p/src/node/mod.rs | 64 ++ gitnip-rs/p2p/src/node/relay_server.rs | 148 +++++ gitnip-rs/p2p/src/peer.rs | 69 ++ gitnip-rs/solana.md | 128 ++++ 15 files changed, 2097 insertions(+) create mode 100644 gitnip-rs/Nostr_ActivityPub.md create mode 100644 gitnip-rs/README.md create mode 100644 gitnip-rs/p2p/BUILD create mode 100644 gitnip-rs/p2p/Cargo.toml create mode 100644 gitnip-rs/p2p/README.md create mode 100644 gitnip-rs/p2p/src/lib.rs create mode 100644 gitnip-rs/p2p/src/network/behaviour.rs create mode 100644 gitnip-rs/p2p/src/network/event_handler.rs create mode 100644 gitnip-rs/p2p/src/network/mod.rs create mode 100644 gitnip-rs/p2p/src/node/client.rs create mode 100644 gitnip-rs/p2p/src/node/input_command.rs create mode 100644 gitnip-rs/p2p/src/node/mod.rs create mode 100644 gitnip-rs/p2p/src/node/relay_server.rs create mode 100644 gitnip-rs/p2p/src/peer.rs create mode 100644 gitnip-rs/solana.md diff --git a/gitnip-rs/Nostr_ActivityPub.md b/gitnip-rs/Nostr_ActivityPub.md new file mode 100644 index 00000000..21ba47bf --- /dev/null +++ b/gitnip-rs/Nostr_ActivityPub.md @@ -0,0 +1,123 @@ +| | 
Nostr | ActivityPub |
+| ------------ | ------------------------------------------------------ | ------------------------------------ |
+| Design philosophy | Decentralized | Decentralized |
+| Message propagation | Propagated in parallel across multiple servers (relays) | Propagated via the single server (instance) the user connects to |
+| User data storage | All servers the user connects to | The single server the user chooses to connect to |
+| Content moderation | Each server decides for itself | Each server decides for itself |
+| Incentive mechanism | Servers can charge users for premium services (e.g. permanent storage) | Unclear |
+| Current adoption | ~800k users | ~4M users |
+| Strengths | A home computer can act as a server; simple protocol; users control their own data | Mature products; web2-grade user experience |
+
+# Nostr
+
+[Nostr协议:开启去中心化社交网络 - Nostr协议 (nostrtips.com)](https://nostrtips.com/)
+
+Nostr itself is just a protocol for sending messages over the internet.
+
+## Basic concepts
+
+### Users
+
+A user is a public/private key pair: the `public key` is the address, the `private key` is the password.
+
+### Relay servers
+
+A relay's only basic responsibility is to forward users' messages.
+
+Any **additional services** have to be **designed by the relay operator**.
+
+**Relays do not communicate with each other; they only talk to clients.**
+
+### Data storage
+
+Data is stored in all the relays a user connects to.
+
+Users can export it and store it themselves. Strictly speaking, a relay's only basic duty is to forward users' messages; it has no obligation to store user data.
+
+Each relay, however, can set its own storage policy (which kinds of data to keep, and for how long).
+
+### Clients
+
+A client reads messages from relays and sends newly created data to relays so that other clients can read it.
+
+Messages carry signatures, which guarantee that the data was really sent by the claimed sender.
+
+Clients create signatures with the private key. The first time you use a desktop or mobile client, you need to store your private key in it.
+
+## How it works
+
+### Events
+
+Events are the only `object` type on `Nostr`. Every event has a kind, which marks what kind of event it is (which
+action the user performed or which kind of message was received).
+Below is an event of kind 1 (kind 1 denotes a short text note, similar to a tweet).
+
+```
+{
+  "id": "4376c65d2f232afbe9b882a35baa4f6fe8667c4e684749af565f981833ed6a65",
+  "pubkey": "6e468422dfb74a5738702a8823b9b28168abab8655faacb6853cd0ee15deee93",
+  "created_at": 1673347337,
+  "kind": 1,
+  "tags": [
+    ["e", "3da979448d9ba263864c4d6f14984c423a3838364ec255f03c7904b1ae77f206"],
+    ["p", "bf2376e17ba4ec269d10fcc996a4746b451152be9031fa48e74553dde5526bce"]
+  ],
+  "content": "围墙的花园变成了监狱,nostr是拆掉监狱围墙的第一步",
+  "sig": "908a15e46fb4d8675bab026fc230a0e3542bfade63da02d542fb78b2a8513fcd0092619a2c8c1221e581946e0191f2af505dfdf8657a414dbca329186f009262"
+}
+```
+
+- `id`: the ID of the event.
+- `pubkey`: the public key of the user who sent the event.
+- `created_at`: the time at which the event was created.
+- `kind`: the kind of the event.
+- `tags`: the tags of the event, typically used to create links, attach media, or mention other users or events.
+- `content`: the content of the event; in the example above, the text of the short note.
+- `sig`: the signature, which clients use to verify that the event was really issued by the holder of the public key.
+
+### NIPs
+
+[nostr-protocol/nips: Nostr Implementation Possibilities (github.com)](https://github.com/nostr-protocol/nips)
+
+A Nostr Implementation Possibility (NIP) standardizes what Nostr-compatible relays and clients must, should, or may implement. The NIPs are the reference documents that describe how the Nostr protocol works.
+
+#### From client to relay: sending events and creating subscriptions
+
+Clients can send 3 types of messages, which must be JSON arrays, according to the following patterns:
+
+- `["EVENT", <event JSON>]`, used to publish events.
+- `["REQ", <subscription_id>, <filters JSON>...]`, used to request events and subscribe to new updates.
+- `["CLOSE", <subscription_id>]`, used to stop previous subscriptions.
+
+#### From relay to client: sending events and notices
+
+Relays can send 4 types of messages, which must also be JSON arrays, according to the following patterns:
+
+- `["EVENT", <subscription_id>, <event JSON>]`, used to send events requested by clients.
+- `["OK", <event_id>, <true|false>, <message>]`, used to indicate acceptance or denial of an `EVENT` message.
+- `["EOSE", <subscription_id>]`, used to indicate the *end of stored events* and the beginning of events newly received in real-time.
+- `["NOTICE", <message>]`, used to send human-readable error messages or other things to clients.
+
+# ActivityPub
+
+ActivityPub is a decentralized social networking protocol based on the ActivityStreams 2.0 data format.
+
+[ActivityStreams 2.0 Terms (w3.org)](https://www.w3.org/ns/activitystreams)
+
+In ActivityPub, a user on a server is represented by an "actor". The same user has a different actor on each server. Every actor has:
+
+- an inbox, used to receive messages
+- an outbox, used to send messages
+
+![75fd0a90f5ea474d90194252430adef1](https://gitee.com/wujian2023/typora_images/raw/master/auto_upload/75fd0a90f5ea474d90194252430adef1.png)
+
+In short:
+
+- A user can POST a message to someone else's inbox (server to server, federated only)
+- A user can GET messages from their own inbox (client to server, like reading your social feed)
+- A user can POST messages to their own outbox (client to server)
+- A user can GET messages from a friend's outbox, if permitted (client to server, or server to server)
+
+ActivityPub has a role called an Instance, which can be thought of simply as a "server". A user registers an account on one Instance of their choosing; when user A wants to send a message, it first goes to the Instance where A is registered, and that Instance is then responsible for communicating with the other Instances.
+
+Instances share a protocol for exchanging and reading messages, which is how users B, C, and D receive what A sends. This also shows that an Instance plays much the same role as the centralized server of a single project, except that Instances form a federated network. If the network contained only one Instance, the architecture would hardly differ from a web2 social product.
\ No newline at end of file
diff --git a/gitnip-rs/README.md b/gitnip-rs/README.md
new file mode 100644
index 
00000000..2952065e
--- /dev/null
+++ b/gitnip-rs/README.md
@@ -0,0 +1,5 @@
+The Nostr protocol runs as part of the p2p module of the mega project; it cannot be run on its own.
+
+For details, see
+
+https://github.com/web3infra-foundation/mega.git
\ No newline at end of file
diff --git a/gitnip-rs/p2p/BUILD b/gitnip-rs/p2p/BUILD
new file mode 100644
index 00000000..a0c1979d
--- /dev/null
+++ b/gitnip-rs/p2p/BUILD
@@ -0,0 +1,46 @@
+
+load("@crate_index//:defs.bzl", "aliases", "all_crate_deps")
+load("@rules_rust//rust:defs.bzl", "rust_library", "rust_test", "rust_doc_test")
+
+
+rust_library(
+    name = "p2p",
+    srcs = glob([
+        "src/**/*.rs",
+    ]),
+    aliases = aliases(),
+    deps = all_crate_deps() + [
+        "//git",
+        "//common",
+        "//database",
+        "//database/entity"
+    ],
+    proc_macro_deps = all_crate_deps(
+        proc_macro = True,
+    ),
+    visibility = ["//visibility:public"],
+)
+
+rust_test(
+    name = "test",
+    crate = ":p2p",
+    aliases = aliases(
+        normal_dev = True,
+        proc_macro_dev = True,
+    ),
+    deps = all_crate_deps(
+        normal_dev = True,
+    ) + [
+        "//git",
+        "//common",
+        "//database",
+    ],
+    proc_macro_deps = all_crate_deps(
+        proc_macro_dev = True,
+    ),
+)
+
+rust_doc_test(
+    name = "doctests",
+    crate = ":p2p",
+)
diff --git a/gitnip-rs/p2p/Cargo.toml b/gitnip-rs/p2p/Cargo.toml
new file mode 100644
index 00000000..537ad288
--- /dev/null
+++ b/gitnip-rs/p2p/Cargo.toml
@@ -0,0 +1,28 @@
+[package]
+name = "p2p"
+version = "0.1.0"
+edition = "2021"
+
+# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
+
+[lib]
+name = "p2p"
+path = "src/lib.rs"
+
+
+[dependencies]
+git = { path = "../git" }
+database = { path = "../database" }
+entity = { path = "../database/entity" }
+common = { path = "../common" }
+bytes = "1.4.0"
+tokio = "1.32.0"
+tracing = "0.1.37"
+futures = "0.3.28"
+futures-timer = "3.0.2"
+async-std = { version = "1.10", features = ["attributes"] }
+libp2p = { version = "0.52.4", features = ["dcutr", "kad", "yamux", "noise", "identify", "macros", "relay", "tcp", "async-std", "rendezvous", "request-response", 
"request-response", "cbor"] } +serde = { version = "1.0.188", features = ["derive"] } +clap = { version = "4.4.0", features = ["derive"] } +#sea-orm = "0.12.2" +serde_json = "1.0.105" diff --git a/gitnip-rs/p2p/README.md b/gitnip-rs/p2p/README.md new file mode 100644 index 00000000..b5be574e --- /dev/null +++ b/gitnip-rs/p2p/README.md @@ -0,0 +1,61 @@ +## How to use the p2p function + +### start a relay-server + +``` +cargo run p2p --host 0.0.0.0 --port 8001 --relay-server +``` + +### start a client + +``` +cargo run p2p --host 0.0.0.0 --port 8002 --bootstrap-node /ip4/{relay-server-ip}/tcp/8001 +``` + +### start another client + +``` +cargo run p2p --host 0.0.0.0 --port 8003 --bootstrap-node /ip4/{relay-server-ip}/tcp/8001 +``` + +### try to use DHT + +#### put a key-value to p2p network in one terminal + +``` +kad put 123 abc +``` + +#### get a key-value from p2p network in another terminal + +``` +kad get 123 +``` + +### try to clone a repository + +``` +mega clone p2p://12D3KooWPjceQrSwdWXPyLLeABRXmuqt69Rg3sBYbU1Nft9HyQ6X/mega_test.git +``` + +``` +mega pull p2p://12D3KooWPjceQrSwdWXPyLLeABRXmuqt69Rg3sBYbU1Nft9HyQ6X/mega_test.git +``` + +### share a repository to DHT + +``` +mega provide mega_test.git +``` + +### clone git-object from p2p network + +``` +mega clone-object mega_test.git +``` + +### pull git-object from p2p network + +``` +mega pull-object mega_test.git +``` \ No newline at end of file diff --git a/gitnip-rs/p2p/src/lib.rs b/gitnip-rs/p2p/src/lib.rs new file mode 100644 index 00000000..e890390b --- /dev/null +++ b/gitnip-rs/p2p/src/lib.rs @@ -0,0 +1,35 @@ +//! +//! +//! +//! +//! +//! 
+
+use database::driver::ObjectStorage;
+use git::protocol::{PackProtocol, Protocol};
+use std::path::PathBuf;
+use std::sync::Arc;
+
+pub mod network;
+pub mod node;
+pub mod peer;
+
+async fn get_pack_protocol(path: &str, storage: Arc<dyn ObjectStorage>) -> PackProtocol {
+    let path = del_ends_str(path, ".git");
+    PackProtocol::new(PathBuf::from(path), storage, Protocol::P2p)
+}
+
+pub fn get_repo_full_path(repo_name: &str) -> String {
+    let repo_name = del_ends_str(repo_name, ".git");
+    "/projects/".to_string() + repo_name
+}
+
+pub fn del_ends_str<'a>(mut s: &'a str, end: &str) -> &'a str {
+    if s.ends_with(end) {
+        s = s.split_at(s.len() - end.len()).0;
+    }
+    s
+}
+
+#[cfg(test)]
+mod tests {}
diff --git a/gitnip-rs/p2p/src/network/behaviour.rs b/gitnip-rs/p2p/src/network/behaviour.rs
new file mode 100644
index 00000000..0b05eeea
--- /dev/null
+++ b/gitnip-rs/p2p/src/network/behaviour.rs
@@ -0,0 +1,103 @@
+use entity::git_obj;
+use libp2p::kad;
+use libp2p::kad::store::MemoryStore;
+use libp2p::swarm::NetworkBehaviour;
+use libp2p::{dcutr, identify, relay, rendezvous, request_response};
+use serde::{Deserialize, Serialize};
+use std::collections::HashSet;
+
+#[derive(NetworkBehaviour)]
+#[behaviour(to_swarm = "Event")]
+pub struct Behaviour {
+    pub relay_client: relay::client::Behaviour,
+    pub identify: identify::Behaviour,
+    pub dcutr: dcutr::Behaviour,
+    pub kademlia: kad::Behaviour<MemoryStore>,
+    pub rendezvous: rendezvous::client::Behaviour,
+    pub git_upload_pack: request_response::cbor::Behaviour<GitUploadPackReq, GitUploadPackRes>,
+    pub git_info_refs: request_response::cbor::Behaviour<GitInfoRefsReq, GitInfoRefsRes>,
+    pub git_object: request_response::cbor::Behaviour<GitObjectReq, GitObjectRes>,
+}
+
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
+pub struct GitUploadPackReq(
+    //want
+    pub HashSet<String>,
+    //have
+    pub HashSet<String>,
+    //path
+    pub String,
+);
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
+pub struct GitUploadPackRes(pub Vec<u8>, pub String);
+
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
+pub struct GitInfoRefsReq(pub String, pub Vec<String>);
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
+pub struct GitInfoRefsRes(pub String, pub Vec<String>);
+
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
+pub struct GitObjectReq(pub String, pub Vec<String>);
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
+pub struct GitObjectRes(pub Vec<git_obj::Model>);
+
+#[derive(Debug)]
+#[allow(clippy::large_enum_variant)]
+pub enum Event {
+    Identify(identify::Event),
+    RelayClient(relay::client::Event),
+    Dcutr(dcutr::Event),
+    Kademlia(kad::Event),
+    Rendezvous(rendezvous::client::Event),
+    GitUploadPack(request_response::Event<GitUploadPackReq, GitUploadPackRes>),
+    GitInfoRefs(request_response::Event<GitInfoRefsReq, GitInfoRefsRes>),
+    GitObject(request_response::Event<GitObjectReq, GitObjectRes>),
+}
+
+impl From<identify::Event> for Event {
+    fn from(e: identify::Event) -> Self {
+        Event::Identify(e)
+    }
+}
+
+impl From<relay::client::Event> for Event {
+    fn from(e: relay::client::Event) -> Self {
+        Event::RelayClient(e)
+    }
+}
+
+impl From<dcutr::Event> for Event {
+    fn from(e: dcutr::Event) -> Self {
+        Event::Dcutr(e)
+    }
+}
+
+impl From<kad::Event> for Event {
+    fn from(e: kad::Event) -> Self {
+        Event::Kademlia(e)
+    }
+}
+
+impl From<rendezvous::client::Event> for Event {
+    fn from(e: rendezvous::client::Event) -> Self {
+        Event::Rendezvous(e)
+    }
+}
+
+impl From<request_response::Event<GitUploadPackReq, GitUploadPackRes>> for Event {
+    fn from(event: request_response::Event<GitUploadPackReq, GitUploadPackRes>) -> Self {
+        Event::GitUploadPack(event)
+    }
+}
+
+impl From<request_response::Event<GitInfoRefsReq, GitInfoRefsRes>> for Event {
+    fn from(event: request_response::Event<GitInfoRefsReq, GitInfoRefsRes>) -> Self {
+        Event::GitInfoRefs(event)
+    }
+}
+
+impl From<request_response::Event<GitObjectReq, GitObjectRes>> for Event {
+    fn from(event: request_response::Event<GitObjectReq, GitObjectRes>) -> Self {
+        Event::GitObject(event)
+    }
+}
diff --git a/gitnip-rs/p2p/src/network/event_handler.rs b/gitnip-rs/p2p/src/network/event_handler.rs
new file mode 100644
index 00000000..ff9d161d
--- /dev/null
+++ b/gitnip-rs/p2p/src/network/event_handler.rs
@@ -0,0 +1,697 @@
+use super::behaviour;
+use crate::network::behaviour::{
+    GitInfoRefsReq, GitInfoRefsRes, GitObjectReq, GitObjectRes, GitUploadPackReq, GitUploadPackRes,
+};
+use crate::node::{get_utc_timestamp, ClientParas, Fork, MegaRepoInfo};
+use crate::{get_pack_protocol, 
get_repo_full_path};
+use bytes::Bytes;
+use common::utils;
+use entity::git_obj::Model;
+use git::protocol::RefCommand;
+use git::structure::conversion;
+use libp2p::kad::record::Key;
+use libp2p::kad::store::MemoryStore;
+use libp2p::kad::{
+    AddProviderOk, GetClosestPeersOk, GetProvidersOk, GetRecordOk, PeerRecord, PutRecordOk,
+    QueryResult, Quorum, Record,
+};
+use libp2p::{identify, kad, multiaddr, rendezvous, request_response, PeerId, Swarm};
+use std::collections::HashSet;
+use std::path::Path;
+use std::str::FromStr;
+
+pub const NAMESPACE: &str = "rendezvous_mega";
+
+pub async fn kad_event_handler(
+    swarm: &mut Swarm<behaviour::Behaviour>,
+    client_paras: &mut ClientParas,
+    event: kad::Event,
+) {
+    if let kad::Event::OutboundQueryProgressed { id, result, .. } = event {
+        match result {
+            QueryResult::GetRecord(Ok(GetRecordOk::FoundRecord(PeerRecord { record, peer }))) => {
+                let peer_id = match peer {
+                    Some(id) => id.to_string(),
+                    None => "local".to_string(),
+                };
+                tracing::info!(
+                    "Got record key[{}]={},from {}",
+                    String::from_utf8(record.key.to_vec()).unwrap(),
+                    String::from_utf8(record.value.clone()).unwrap(),
+                    peer_id
+                );
+                if let Some(object_id) = client_paras.pending_repo_info_update_fork.get(&id) {
+                    tracing::info!("update repo info forks");
+                    // update repo info forks
+                    if let Ok(p) = serde_json::from_slice(&record.value) {
+                        let mut repo_info: MegaRepoInfo = p;
+                        let local_peer_id = swarm.local_peer_id().to_string();
+                        let fork = Fork {
+                            peer: local_peer_id.clone(),
+                            latest: object_id.clone(),
+                            timestamp: get_utc_timestamp(),
+                        };
+                        repo_info.forks.retain(|r| r.peer != local_peer_id);
+                        repo_info.forks.push(fork);
+                        let record = Record {
+                            key: Key::new(&repo_info.name),
+                            value: serde_json::to_vec(&repo_info).unwrap(),
+                            publisher: None,
+                            expires: None,
+                        };
+                        if let Err(e) = swarm
+                            .behaviour_mut()
+                            .kademlia
+                            .put_record(record, Quorum::One)
+                        {
+                            eprintln!("Failed to store record:{}", e);
+                        }
+                    }
+                    client_paras.pending_repo_info_update_fork.remove(&id);
+                } else if let Some(repo_name) = client_paras
+                    .pending_repo_info_search_to_download_obj
+                    .clone()
+                    .get(&id)
+                {
+                    //try to search origin node
+                    tracing::info!("try to get origin node to search git_obj_id_list");
+                    if let Ok(p) = serde_json::from_slice(&record.value) {
+                        let repo_info: MegaRepoInfo = p;
+                        //save all node that have this repo,the first one is origin
+                        let mut node_id_list: Vec<String> = Vec::new();
+                        node_id_list.push(repo_info.origin.clone());
+                        for fork in &repo_info.forks {
+                            node_id_list.push(fork.peer.clone());
+                        }
+                        client_paras
+                            .repo_node_list
+                            .insert(repo_name.clone(), node_id_list);
+                        let remote_peer_id = PeerId::from_str(&repo_info.origin).unwrap();
+                        let path = get_repo_full_path(repo_name);
+                        //to get local git_obj id
+                        let local_git_ids = get_all_git_obj_ids(&path, client_paras).await;
+                        let request_file_id = swarm
+                            .behaviour_mut()
+                            .git_info_refs
+                            .send_request(&remote_peer_id, GitInfoRefsReq(path, local_git_ids));
+                        client_paras
+                            .pending_git_obj_id_download
+                            .insert(request_file_id, repo_name.to_string());
+                    }
+                }
+                client_paras
+                    .pending_repo_info_search_to_download_obj
+                    .remove(&id);
+            }
+            QueryResult::GetRecord(Err(err)) => {
+                tracing::error!("Failed to get record: {err:?}");
+            }
+            QueryResult::PutRecord(Ok(PutRecordOk { key })) => {
+                tracing::info!(
+                    "Successfully put record {:?}",
+                    std::str::from_utf8(key.as_ref()).unwrap()
+                );
+            }
+            QueryResult::PutRecord(Err(err)) => {
+                tracing::error!("Failed to put record: {err:?}");
+            }
+            QueryResult::GetClosestPeers(Ok(GetClosestPeersOk { peers, .. })) => {
+                for x in peers {
+                    tracing::info!("{}", x);
+                }
+            }
+            QueryResult::GetClosestPeers(Err(err)) => {
+                tracing::error!("Failed to get closest peers: {err:?}");
+            }
+            QueryResult::GetProviders(Ok(GetProvidersOk::FoundProviders { providers, .. }), ..)
=> {
+                tracing::info!("FoundProviders: {providers:?}");
+            }
+            QueryResult::GetProviders(Err(e)) => {
+                tracing::error!("GetProviders error: {e:?}");
+            }
+            QueryResult::StartProviding(Ok(AddProviderOk { key, .. }), ..) => {
+                tracing::info!("StartProviding: {key:?}");
+            }
+            _ => {}
+        }
+    }
+}
+
+pub fn rendezvous_client_event_handler(
+    swarm: &mut Swarm<behaviour::Behaviour>,
+    client_paras: &mut ClientParas,
+    event: rendezvous::client::Event,
+) {
+    match event {
+        rendezvous::client::Event::Registered {
+            namespace,
+            ttl,
+            rendezvous_node,
+        } => {
+            tracing::info!(
+                "Registered for namespace '{}' at rendezvous point {} for the next {} seconds",
+                namespace,
+                rendezvous_node,
+                ttl
+            );
+            if let Some(rendezvous_point) = client_paras.rendezvous_point {
+                swarm.behaviour_mut().rendezvous.discover(
+                    Some(rendezvous::Namespace::from_static(NAMESPACE)),
+                    client_paras.cookie.clone(),
+                    None,
+                    rendezvous_point,
+                )
+            }
+        }
+        rendezvous::client::Event::Discovered {
+            registrations,
+            cookie: new_cookie,
+            ..
+        } => {
+            client_paras.cookie.replace(new_cookie);
+            for registration in registrations {
+                for address in registration.record.addresses() {
+                    let peer = registration.record.peer_id();
+                    tracing::info!("Rendezvous Discovered peer {} at {}", peer, address);
+                    if peer == *swarm.local_peer_id() {
+                        continue;
+                    }
+                    //dial via relay
+                    if let Some(bootstrap_address) = client_paras.bootstrap_node_addr.clone() {
+                        swarm
+                            .dial(
+                                bootstrap_address
+                                    .with(multiaddr::Protocol::P2pCircuit)
+                                    .with(multiaddr::Protocol::P2p(peer)),
+                            )
+                            .unwrap();
+                    }
+                }
+            }
+        }
+        rendezvous::client::Event::RegisterFailed { error, .. } => {
+            tracing::error!("Failed to register: error_code={:?}", error);
+        }
+        rendezvous::client::Event::DiscoverFailed {
+            rendezvous_node,
+            namespace,
+            error,
+        } => {
+            tracing::error!(
+                "Failed to discover: rendezvous_node={}, namespace={:?}, error_code={:?}",
+                rendezvous_node,
+                namespace,
+                error
+            );
+        }
+        event => {
+            tracing::debug!("Rendezvous client event:{:?}", event);
+        }
+    }
+}
+
+pub fn rendezvous_server_event_handler(event: rendezvous::server::Event) {
+    match event {
+        rendezvous::server::Event::PeerRegistered { peer, registration } => {
+            tracing::info!(
+                "Peer {} registered for namespace '{}'",
+                peer,
+                registration.namespace
+            );
+        }
+        rendezvous::server::Event::DiscoverServed {
+            enquirer,
+            registrations,
+            ..
+        } => {
+            tracing::info!(
+                "Served peer {} with {} registrations",
+                enquirer,
+                registrations.len()
+            );
+        }
+
+        event => {
+            tracing::info!("Rendezvous server event:{:?}", event);
+        }
+    }
+}
+
+pub fn identify_event_handler(kademlia: &mut kad::Behaviour<MemoryStore>, event: identify::Event) {
+    match event {
+        identify::Event::Received { peer_id, info } => {
+            tracing::info!("IdentifyEvent Received peer_id:{:?}", peer_id);
+            tracing::info!("IdentifyEvent Received info:{:?}", info);
+            for addr in info.listen_addrs {
+                kademlia.add_address(&peer_id, addr);
+            }
+        }
+
+        identify::Event::Error { error, .. } => {
+            tracing::debug!("IdentifyEvent Error :{:?}", error);
+        }
+        _ => {}
+    }
+}
+
+pub async fn git_upload_pack_event_handler(
+    swarm: &mut Swarm<behaviour::Behaviour>,
+    client_paras: &mut ClientParas,
+    event: request_response::Event<GitUploadPackReq, GitUploadPackRes>,
+) {
+    match event {
+        request_response::Event::Message { message, .. } => match message {
+            request_response::Message::Request {
+                request, channel, ..
+ } => { + //receive git upload pack request + tracing::debug!( + "Git upload pack event handler, {:?}, {:?}", + request, + channel + ); + let want = request.0; + let have = request.1; + let path = request.2; + tracing::info!("path: {}", path); + match git_upload_pack_handler(&path, client_paras, want, have).await { + Ok((send_pack_data, object_id)) => { + let _ = swarm + .behaviour_mut() + .git_upload_pack + .send_response(channel, GitUploadPackRes(send_pack_data, object_id)); + } + Err(e) => { + tracing::error!("{}", e); + let response = format!("ERR: {}", e); + let _ = swarm.behaviour_mut().git_upload_pack.send_response( + channel, + GitUploadPackRes(response.into_bytes(), utils::ZERO_ID.to_string()), + ); + } + } + } + request_response::Message::Response { + request_id, + response, + } => { + // receive a git_upload_pack response + tracing::info!( + "Git upload pack event response, request_id: {:?}", + request_id, + ); + if let Some(repo_name) = client_paras.pending_git_upload_package.get(&request_id) { + let package_data = response.0; + let object_id = response.1; + if package_data.starts_with("ERR:".as_bytes()) { + tracing::error!("{}", String::from_utf8(package_data).unwrap()); + return; + } + let path = get_repo_full_path(repo_name); + let mut pack_protocol = + get_pack_protocol(&path, client_paras.storage.clone()).await; + let old_object_id = pack_protocol.get_head_object_id(Path::new(&path)).await; + tracing::info!( + "new_object_id:{}; old_object_id:{}", + object_id.clone(), + old_object_id + ); + let command = RefCommand::new( + old_object_id, + object_id.clone(), + String::from("refs/heads/master"), + ); + pack_protocol.command_list.push(command); + let result = pack_protocol + .git_receive_pack(Bytes::from(package_data)) + .await; + match result { + Ok(_) => { + tracing::info!("Save git package successfully :{}", repo_name); + //update repoInfo + let kad_query_id = swarm + .behaviour_mut() + .kademlia + .get_record(Key::new(&repo_name)); + 
+                                client_paras
+                                    .pending_repo_info_update_fork
+                                    .insert(kad_query_id, object_id);
+                            }
+                            Err(e) => {
+                                tracing::error!("{}", e);
+                            }
+                        }
+                        client_paras.pending_git_upload_package.remove(&request_id);
+                    }
+                }
+            },
+        request_response::Event::OutboundFailure { peer, error, .. } => {
+            tracing::error!("Git upload pack outbound failure: {:?},{:?}", peer, error);
+        }
+        event => {
+            tracing::debug!("Request_response event:{:?}", event);
+        }
+    }
+}
+
+pub async fn git_info_refs_event_handler(
+    swarm: &mut Swarm<behaviour::Behaviour>,
+    client_paras: &mut ClientParas,
+    event: request_response::Event<GitInfoRefsReq, GitInfoRefsRes>,
+) {
+    match event {
+        request_response::Event::Message { message, peer, .. } => match message {
+            request_response::Message::Request {
+                request, channel, ..
+            } => {
+                //receive git info refs request
+                tracing::debug!("Receive git info refs event, {:?}, {:?}", request, channel);
+                let path = request.0;
+                tracing::info!("path: {}", path);
+                let git_ids_they_have = request.1;
+                tracing::info!("git_ids_they_have: {:?}", git_ids_they_have);
+                let pack_protocol = get_pack_protocol(&path, client_paras.storage.clone()).await;
+                let ref_git_id = pack_protocol.get_head_object_id(Path::new(&path)).await;
+                let mut git_obj_ids = get_all_git_obj_ids(&path, client_paras).await;
+                if !git_ids_they_have.is_empty() {
+                    git_obj_ids.retain(|id| !git_ids_they_have.contains(id));
+                }
+                tracing::info!("git_ids_they_need: {:?}", git_obj_ids);
+                let _ = swarm
+                    .behaviour_mut()
+                    .git_info_refs
+                    .send_response(channel, GitInfoRefsRes(ref_git_id, git_obj_ids));
+            }
+            request_response::Message::Response {
+                request_id,
+                response,
+            } => {
+                //receive git info refs response
+                tracing::info!("Response git info refs event, request_id: {:?}", request_id);
+                if let Some(repo_name) = client_paras.pending_git_pull.get(&request_id) {
+                    //have git_ids and try to send pull request
+                    // mega pull and mega clone
+                    let ref_git_id = response.0;
+                    let _git_ids = response.1;
+                    tracing::info!("repo_name: {}", repo_name);
+                    tracing::info!("ref_git_id: {:?}", ref_git_id);
+                    if ref_git_id == *utils::ZERO_ID {
+                        eprintln!("Repo not found");
+                        return;
+                    }
+                    let path = get_repo_full_path(repo_name);
+                    let pack_protocol =
+                        get_pack_protocol(&path, client_paras.storage.clone()).await;
+                    //generate want and have collection
+                    let mut want: HashSet<String> = HashSet::new();
+                    let mut have: HashSet<String> = HashSet::new();
+                    want.insert(ref_git_id);
+                    let commit_models = pack_protocol
+                        .storage
+                        .get_all_commits_by_path(&path)
+                        .await
+                        .unwrap();
+                    commit_models.iter().for_each(|model| {
+                        have.insert(model.git_id.clone());
+                    });
+                    //send new request to git_upload_pack
+                    let new_request_id = swarm
+                        .behaviour_mut()
+                        .git_upload_pack
+                        .send_request(&peer, GitUploadPackReq(want, have, path));
+                    client_paras
+                        .pending_git_upload_package
+                        .insert(new_request_id, repo_name.to_string());
+                    client_paras.pending_git_pull.remove(&request_id);
+                    return;
+                }
+                if let Some(repo_name) = client_paras
+                    .pending_git_obj_id_download
+                    .clone()
+                    .get(&request_id)
+                {
+                    // have git_ids and try to download git obj
+                    // git clone-obj and git pull-obj
+                    let _ref_git_id = response.0;
+                    let git_ids_need = response.1;
+                    let path = get_repo_full_path(repo_name);
+                    tracing::info!("path: {}", path);
+                    tracing::info!("git_ids_need: {:?}", git_ids_need);
+                    //trying to download git_obj from peers
+                    if let Some(r) = client_paras.repo_node_list.clone().get(repo_name) {
+                        let mut repo_list = r.clone();
+                        if !repo_list.is_empty() {
+                            repo_list.retain(|r| *r != swarm.local_peer_id().to_string());
+                            tracing::info!("try to download git object from: {:?}", repo_list);
+                            tracing::info!("the origin is: {}", repo_list[0]);
+                            // Try to download separately
+                            let split_git_ids = split_array(git_ids_need.clone(), repo_list.len());
+                            let repo_id_need_list_arc = client_paras.repo_id_need_list.clone();
+                            {
+                                let mut repo_id_need_list = repo_id_need_list_arc.lock().unwrap();
+                                repo_id_need_list.insert(repo_name.to_string(), git_ids_need);
+                            }
+
+                            for i in 0..repo_list.len() {
+                                // send get git object request
+                                let ids = split_git_ids[i].clone();
+                                let repo_peer_id = PeerId::from_str(&repo_list[i].clone()).unwrap();
+                                let new_request_id = swarm
+                                    .behaviour_mut()
+                                    .git_object
+                                    .send_request(&repo_peer_id, GitObjectReq(path.clone(), ids));
+                                client_paras
+                                    .pending_git_obj_download
+                                    .insert(new_request_id, repo_name.to_string());
+                            }
+                        }
+                    }
+                    client_paras.pending_git_obj_id_download.remove(&request_id);
+                }
+            }
+        },
+        request_response::Event::OutboundFailure { peer, error, .. } => {
+            tracing::error!("Git info refs outbound failure: {:?},{:?}", peer, error);
+        }
+        event => {
+            tracing::debug!("Request_response event:{:?}", event);
+        }
+    }
+}
+
+pub async fn git_object_event_handler(
+    swarm: &mut Swarm<behaviour::Behaviour>,
+    client_paras: &mut ClientParas,
+    event: request_response::Event<GitObjectReq, GitObjectRes>,
+) {
+    match event {
+        request_response::Event::Message { peer, message, .. } => match message {
+            request_response::Message::Request {
+                request, channel, ..
+            } => {
+                //receive git object request
+                tracing::debug!("Receive git object event, {:?}, {:?}", request, channel);
+                let path = request.0;
+                let git_ids = request.1;
+                tracing::info!("path: {}", path);
+                tracing::info!("git_ids: {:?}", git_ids);
+                let pack_protocol = get_pack_protocol(&path, client_paras.storage.clone()).await;
+                let git_obj_models = match pack_protocol.storage.get_obj_data_by_ids(git_ids).await
+                {
+                    Ok(models) => models,
+                    Err(e) => {
+                        tracing::error!("{:?}", e);
+                        return;
+                    }
+                };
+                let _ = swarm
+                    .behaviour_mut()
+                    .git_object
+                    .send_response(channel, GitObjectRes(git_obj_models));
+            }
+            request_response::Message::Response {
+                request_id,
+                response,
+            } => {
+                //receive git object response
+                tracing::debug!("Response git object event, request_id: {:?}", request_id);
+                let git_obj_models = response.0;
+                tracing::info!(
+                    "Receive {:?} git_obj, from {:?}",
+                    git_obj_models.len(),
+                    peer
+                );
+                let receive_id_list: Vec<String> = git_obj_models
+                    .clone()
+                    .iter()
+                    .map(|m| m.git_id.clone())
+                    .collect();
+                tracing::info!("git_obj_id_list:{:?}", receive_id_list);
+
+                if let Some(repo_name) = client_paras.pending_git_obj_download.get(&request_id) {
+                    let repo_receive_git_obj_model_list_arc =
+                        client_paras.repo_receive_git_obj_model_list.clone();
+                    {
+                        let mut receive_git_obj_model_map =
+                            repo_receive_git_obj_model_list_arc.lock().unwrap();
+                        receive_git_obj_model_map
+                            .entry(repo_name.clone())
+                            .or_default();
+                        let receive_obj_model_list =
+                            receive_git_obj_model_map.get(repo_name).unwrap();
+                        let mut clone = receive_obj_model_list.clone();
+                        clone.append(&mut git_obj_models.clone());
+                        tracing::info!("receive_obj_model_list:{:?}", clone.len());
+                        receive_git_obj_model_map.insert(repo_name.to_string(), clone);
+                    }
+
+                    let repo_id_need_list_arc = client_paras.repo_id_need_list.clone();
+                    let mut finish = false;
+                    {
+                        let mut repo_id_need_list_map = repo_id_need_list_arc.lock().unwrap();
+                        if let Some(id_need_list) = repo_id_need_list_map.get(repo_name) {
+                            let mut clone = id_need_list.clone();
+                            clone.retain(|x| !receive_id_list.contains(x));
+                            if clone.is_empty() {
+                                finish = true;
+                            }
+                            repo_id_need_list_map.insert(repo_name.to_string(), clone);
+                        }
+                    }
+                    println!("finish:{}", finish);
+                    if finish {
+                        let repo_receive_git_obj_model_list_arc2 =
+                            client_paras.repo_receive_git_obj_model_list.clone();
+                        let mut obj_model_list: Vec<Model> = Vec::new();
+                        {
+                            let mut receive_git_obj_model_map =
+                                repo_receive_git_obj_model_list_arc2.lock().unwrap();
+                            if !receive_git_obj_model_map.contains_key(repo_name) {
+                                tracing::error!("git_object cache error");
+                                return;
+                            }
+                            let receive_git_obj_model =
+                                receive_git_obj_model_map.get(repo_name).unwrap();
+                            obj_model_list.append(&mut receive_git_obj_model.clone());
+                            receive_git_obj_model_map.remove(repo_name);
+                        }
+                        tracing::info!("receive all git_object :{:?}", obj_model_list.len());
+                        let path = get_repo_full_path(repo_name);
+                        match conversion::save_node_from_git_obj(
client_paras.storage.clone(), + Path::new(&path), + obj_model_list.clone(), + ) + .await + { + Ok(_) => { + tracing::info!( + "Save {:?} git_obj to database successfully", + obj_model_list.len() + ); + let path = get_repo_full_path(repo_name); + let pack_protocol = + get_pack_protocol(&path, client_paras.storage.clone()).await; + let object_id = + pack_protocol.get_head_object_id(Path::new(&path)).await; + //update repoInfo + let kad_query_id = swarm + .behaviour_mut() + .kademlia + .get_record(Key::new(&repo_name)); + client_paras + .pending_repo_info_update_fork + .insert(kad_query_id, object_id); + } + Err(e) => { + tracing::error!("{:?}", e); + } + } + } + client_paras.pending_git_obj_download.remove(&request_id); + } + } + }, + request_response::Event::OutboundFailure { peer, error, .. } => { + tracing::error!("Git object outbound failure: {:?},{:?}", peer, error); + } + event => { + tracing::debug!("Request_response event:{:?}", event); + } + } +} + +async fn git_upload_pack_handler( + path: &str, + client_paras: &mut ClientParas, + want: HashSet, + have: HashSet, +) -> Result<(Vec, String), String> { + let pack_protocol = get_pack_protocol(path, client_paras.storage.clone()).await; + let object_id = pack_protocol.get_head_object_id(Path::new(path)).await; + if object_id == *utils::ZERO_ID { + return Err("Repository not found".to_string()); + } + tracing::info!("object_id:{}", object_id); + tracing::info!("want: {:?}, have: {:?}", want, have); + if have.is_empty() { + //clone + let send_pack_data = match pack_protocol.get_full_pack_data(Path::new(path)).await { + Ok(send_pack_data) => send_pack_data, + Err(e) => { + tracing::error!("{}", e); + return Err(e.to_string()); + } + }; + Ok((send_pack_data, object_id)) + } else { + //pull + let send_pack_data = match pack_protocol + .get_incremental_pack_data(Path::new(&path), &want, &have) + .await + { + Ok(send_pack_data) => send_pack_data, + Err(e) => { + tracing::error!("{}", e); + return Err(e.to_string()); + } 
+        };
+        Ok((send_pack_data, object_id))
+    }
+}
+
+async fn get_all_git_obj_ids(path: &str, client_paras: &mut ClientParas) -> Vec<String> {
+    let pack_protocol = get_pack_protocol(path, client_paras.storage.clone()).await;
+    let mut git_ids: Vec<String> = Vec::new();
+    if let Ok(commit_models) = pack_protocol.storage.get_all_commits_by_path(path).await {
+        commit_models.iter().for_each(|model| {
+            git_ids.push(model.git_id.clone());
+        });
+    }
+    if let Ok(blob_and_tree) = pack_protocol
+        .storage
+        .get_node_by_path(Path::new(&path))
+        .await
+    {
+        blob_and_tree.iter().for_each(|model| {
+            git_ids.push(model.git_id.clone());
+        });
+    }
+    git_ids
+}
+
+fn split_array(a: Vec<String>, count: usize) -> Vec<Vec<String>> {
+    let mut result = vec![];
+    let split_num = a.len() / count;
+    for i in 0..count {
+        let v: Vec<_> = if i != count - 1 {
+            a.clone()
+                .drain(i * split_num..(i + 1) * split_num)
+                .collect()
+        } else {
+            a.clone().drain(i * split_num..).collect()
+        };
+        result.push(v);
+    }
+    result
+}
diff --git a/gitnip-rs/p2p/src/network/mod.rs b/gitnip-rs/p2p/src/network/mod.rs
new file mode 100644
index 00000000..2b2fecb9
--- /dev/null
+++ b/gitnip-rs/p2p/src/network/mod.rs
@@ -0,0 +1,12 @@
+//!
+//!
+//!
+//!
+//!
+//!
+
+pub mod behaviour;
+pub mod event_handler;
+
+#[cfg(test)]
+mod tests {}
diff --git a/gitnip-rs/p2p/src/node/client.rs b/gitnip-rs/p2p/src/node/client.rs
new file mode 100644
index 00000000..19f61062
--- /dev/null
+++ b/gitnip-rs/p2p/src/node/client.rs
@@ -0,0 +1,260 @@
+use super::input_command;
+use super::ClientParas;
+use crate::network::behaviour::{self, Behaviour, Event};
+use crate::network::event_handler;
+use async_std::io;
+use async_std::io::prelude::BufReadExt;
+use database::DataSource;
+use entity::git_obj::Model;
+use futures::executor::block_on;
+use futures::{future::FutureExt, stream::StreamExt};
+use libp2p::kad::store::MemoryStore;
+use libp2p::request_response::ProtocolSupport;
+use libp2p::swarm::SwarmEvent;
+use libp2p::{
+    dcutr, identify, identity, kad, multiaddr, noise, rendezvous, request_response, tcp, yamux,
+    Multiaddr, PeerId, StreamProtocol,
+};
+use std::collections::HashMap;
+use std::error::Error;
+use std::sync::{Arc, Mutex};
+
+pub async fn run(
+    local_key: identity::Keypair,
+    p2p_address: String,
+    bootstrap_node: String,
+    data_source: DataSource,
+) -> Result<(), Box<dyn Error>> {
+    let local_peer_id = PeerId::from(local_key.public());
+    tracing::info!("Local peer id: {local_peer_id:?}");
+
+    let store = MemoryStore::new(local_peer_id);
+
+    let mut swarm = libp2p::SwarmBuilder::with_existing_identity(local_key)
+        .with_async_std()
+        .with_tcp(
+            tcp::Config::default().port_reuse(true),
+            noise::Config::new,
+            yamux::Config::default,
+        )?
+        .with_relay_client(noise::Config::new, yamux::Config::default)?
+        .with_behaviour(|keypair, relay_behaviour| Behaviour {
+            relay_client: relay_behaviour,
+            identify: identify::Behaviour::new(identify::Config::new(
+                "/mega/0.0.1".to_string(),
+                keypair.public(),
+            )),
+            dcutr: dcutr::Behaviour::new(keypair.public().to_peer_id()),
+            //DHT
+            kademlia: kad::Behaviour::new(keypair.public().to_peer_id(), store),
+            //discover
+            rendezvous: rendezvous::client::Behaviour::new(keypair.clone()),
+            // git pull, git clone
+            git_upload_pack: request_response::cbor::Behaviour::new(
+                [(
+                    StreamProtocol::new("/mega/git_upload_pack"),
+                    ProtocolSupport::Full,
+                )],
+                request_response::Config::default(),
+            ),
+            // git info refs
+            git_info_refs: request_response::cbor::Behaviour::new(
+                [(
+                    StreamProtocol::new("/mega/git_info_refs"),
+                    ProtocolSupport::Full,
+                )],
+                request_response::Config::default(),
+            ),
+            // git download git_obj
+            git_object: request_response::cbor::Behaviour::new(
+                [(StreamProtocol::new("/mega/git_obj"), ProtocolSupport::Full)],
+                request_response::Config::default(),
+            ),
+        })?
+        .build();
+
+    // Listen on all interfaces
+    swarm.listen_on(p2p_address.parse()?)?;
+
+    tracing::info!("Connect to database");
+    let storage = database::init(&data_source).await;
+    let mut client_paras = ClientParas {
+        cookie: None,
+        rendezvous_point: None,
+        bootstrap_node_addr: None,
+        storage,
+        pending_git_upload_package: HashMap::new(),
+        pending_git_pull: HashMap::new(),
+        pending_git_obj_download: HashMap::new(),
+        pending_repo_info_update_fork: HashMap::new(),
+        pending_repo_info_search_to_download_obj: HashMap::new(),
+        pending_git_obj_id_download: HashMap::new(),
+        repo_node_list: HashMap::new(),
+        repo_id_need_list: Arc::new(Mutex::new(HashMap::<String, Vec<String>>::new())),
+        repo_receive_git_obj_model_list: Arc::new(Mutex::new(HashMap::<String, Vec<Model>>::new())),
+    };
+    // Wait to listen on all interfaces.
+    block_on(async {
+        let mut delay = futures_timer::Delay::new(std::time::Duration::from_secs(1)).fuse();
+        loop {
+            futures::select!
 {
+                event = swarm.next() => {
+                    match event.unwrap() {
+                        SwarmEvent::NewListenAddr { address, .. } => {
+                            tracing::info!("Listening on {:?}", address);
+                        }
+                        event => panic!("{event:?}"),
+                    }
+                }
+                _ = delay => {
+                    // Likely listening on all interfaces now, thus continuing by breaking the loop.
+                    break;
+                }
+            }
+        }
+    });
+
+    //dial to bootstrap_node
+    if !bootstrap_node.is_empty() {
+        let bootstrap_node_addr: Multiaddr = bootstrap_node.parse()?;
+        tracing::info!("Trying to dial bootstrap node {:?}", bootstrap_node_addr);
+        swarm.dial(bootstrap_node_addr.clone())?;
+        block_on(async {
+            let mut learned_observed_addr = false;
+            let mut told_relay_observed_addr = false;
+            let mut relay_peer_id: Option<PeerId> = None;
+            let mut delay = futures_timer::Delay::new(std::time::Duration::from_secs(10)).fuse();
+            loop {
+                futures::select! {
+                    event = swarm.next() => {
+                        match event.unwrap() {
+                            SwarmEvent::NewListenAddr { .. } => {}
+                            SwarmEvent::Dialing{peer_id, ..} => {
+                                tracing::info!("Dialing: {:?}", peer_id)
+                            },
+                            SwarmEvent::ConnectionEstablished { peer_id, .. } => {
+                                client_paras.rendezvous_point.replace(peer_id);
+                                let p2p_suffix = multiaddr::Protocol::P2p(peer_id);
+                                let bootstrap_node_addr =
+                                    if !bootstrap_node_addr.ends_with(&Multiaddr::empty().with(p2p_suffix.clone())) {
+                                        bootstrap_node_addr.clone().with(p2p_suffix)
+                                    } else {
+                                        bootstrap_node_addr.clone()
+                                    };
+                                client_paras.bootstrap_node_addr.replace(bootstrap_node_addr.clone());
+                                swarm.behaviour_mut().kademlia.add_address(&peer_id.clone(), bootstrap_node_addr.clone());
+                                tracing::info!("ConnectionEstablished:{} at {}", peer_id, bootstrap_node_addr);
+                            },
+                            SwarmEvent::Behaviour(behaviour::Event::Identify(identify::Event::Sent {
+                                ..
+ })) => { + tracing::info!("Told Bootstrap Node our public address."); + told_relay_observed_addr = true; + }, + SwarmEvent::Behaviour(Event::Identify( + identify::Event::Received { + info ,peer_id + }, + )) => { + tracing::info!("Bootstrap Node told us our public address: {:?}", info.observed_addr); + learned_observed_addr = true; + relay_peer_id.replace(peer_id); + }, + event => tracing::info!("{:?}", event), + } + if learned_observed_addr && told_relay_observed_addr { + //success connect to bootstrap node + tracing::info!("Dial bootstrap node successfully"); + if let Some(bootstrap_node_addr) = client_paras.bootstrap_node_addr.clone(){ + let public_addr = bootstrap_node_addr.with(multiaddr::Protocol::P2pCircuit); + swarm.listen_on(public_addr.clone()).unwrap(); + swarm.add_external_address(public_addr); + //register rendezvous + if let Err(error) = swarm.behaviour_mut().rendezvous.register( + rendezvous::Namespace::from_static(event_handler::NAMESPACE), + relay_peer_id.unwrap(), + None, + ){ + tracing::error!("Failed to register: {error}"); + } + } + break; + } + } + _ = delay => { + tracing::error!("Dial bootstrap node failed: Timeout"); + break; + } + } + } + }); + } + + let mut stdin = io::BufReader::new(io::stdin()).lines().fuse(); + block_on(async { + loop { + futures::select! { + line = stdin.select_next_some() => { + let line :String = line.expect("Stdin not to close"); + if line.is_empty() { + continue; + } + //kad input + input_command::handle_input_command(&mut swarm,&mut client_paras,line.to_string()).await; + }, + event = swarm.select_next_some() => { + match event{ + SwarmEvent::NewListenAddr { address, .. } => { + tracing::info!("Listening on {:?}", address); + } + SwarmEvent::ConnectionEstablished { + peer_id, endpoint, .. 
+ } => { + tracing::info!("Established connection to {:?} via {:?}", peer_id, endpoint); + swarm.behaviour_mut().kademlia.add_address(&peer_id,endpoint.get_remote_address().clone()); + let peers = swarm.connected_peers(); + for p in peers { + tracing::info!("Connected peer: {}",p); + }; + }, + SwarmEvent::ConnectionClosed { peer_id, .. } => { + tracing::info!("Disconnect {:?}", peer_id); + }, + SwarmEvent::OutgoingConnectionError{error,..} => { + tracing::debug!("OutgoingConnectionError {:?}", error); + }, + //Identify events + SwarmEvent::Behaviour(Event::Identify(event)) => { + event_handler::identify_event_handler(&mut swarm.behaviour_mut().kademlia, event); + }, + //RendezvousClient events + SwarmEvent::Behaviour(Event::Rendezvous(event)) => { + event_handler::rendezvous_client_event_handler(&mut swarm, &mut client_paras, event); + }, + //kad events + SwarmEvent::Behaviour(Event::Kademlia(event)) => { + event_handler::kad_event_handler(&mut swarm, &mut client_paras, event).await; + }, + //GitUploadPack events + SwarmEvent::Behaviour(Event::GitUploadPack(event)) => { + event_handler::git_upload_pack_event_handler(&mut swarm, &mut client_paras, event).await; + }, + //GitInfoRefs events + SwarmEvent::Behaviour(Event::GitInfoRefs(event)) => { + event_handler::git_info_refs_event_handler(&mut swarm, &mut client_paras, event).await; + }, + //GitObject events + SwarmEvent::Behaviour(Event::GitObject(event)) => { + event_handler::git_object_event_handler(&mut swarm, &mut client_paras, event).await; + }, + _ => { + tracing::debug!("Event: {:?}", event); + } + }; + } + } + } + }); + + Ok(()) +} diff --git a/gitnip-rs/p2p/src/node/input_command.rs b/gitnip-rs/p2p/src/node/input_command.rs new file mode 100644 index 00000000..260850eb --- /dev/null +++ b/gitnip-rs/p2p/src/node/input_command.rs @@ -0,0 +1,318 @@ +use super::ClientParas; +use crate::network::behaviour; +use crate::network::behaviour::{GitInfoRefsReq, GitUploadPackReq}; +use crate::node::{get_utc_timestamp, 
MegaRepoInfo};
+use crate::{get_pack_protocol, get_repo_full_path};
+use common::utils;
+use libp2p::kad::record::Key;
+use libp2p::kad::store::MemoryStore;
+use libp2p::kad::{Quorum, Record};
+use libp2p::{kad, PeerId, Swarm};
+use std::collections::HashSet;
+use std::path::Path;
+use std::str::FromStr;
+
+pub async fn handle_input_command(
+    swarm: &mut Swarm<behaviour::Behaviour>,
+    client_paras: &mut ClientParas,
+    line: String,
+) {
+    let line = line.trim();
+    if line.is_empty() {
+        return;
+    }
+    let mut args = line.split_whitespace();
+    match args.next() {
+        Some("kad") => {
+            handle_kad_command(&mut swarm.behaviour_mut().kademlia, args.collect());
+        }
+        Some("mega") => {
+            handle_mega_command(swarm, client_paras, args.collect()).await;
+        }
+        _ => {
+            eprintln!("expected command: kad, mega");
+        }
+    }
+}
+
+pub fn handle_kad_command(kademlia: &mut kad::Behaviour<MemoryStore>, args: Vec<&str>) {
+    let mut args_iter = args.iter().copied();
+    match args_iter.next() {
+        Some("get") => {
+            let key = {
+                match args_iter.next() {
+                    Some(key) => Key::new(&key),
+                    None => {
+                        eprintln!("Expected key");
+                        return;
+                    }
+                }
+            };
+            kademlia.get_record(key);
+        }
+        Some("put") => {
+            let key = {
+                match args_iter.next() {
+                    Some(key) => Key::new(&key),
+                    None => {
+                        eprintln!("Expected key");
+                        return;
+                    }
+                }
+            };
+            let value = {
+                match args_iter.next() {
+                    Some(value) => value.as_bytes().to_vec(),
+                    None => {
+                        eprintln!("Expected value");
+                        return;
+                    }
+                }
+            };
+            let record = Record {
+                key,
+                value,
+                publisher: None,
+                expires: None,
+            };
+            if let Err(e) = kademlia.put_record(record, Quorum::One) {
+                eprintln!("Put record failed: {}", e);
+            }
+        }
+        Some("k_buckets") => {
+            for (_, k_bucket_ref) in kademlia.kbuckets().enumerate() {
+                println!("k_bucket_ref.num_entries:{}", k_bucket_ref.num_entries());
+                for (_, x) in k_bucket_ref.iter().enumerate() {
+                    println!(
+                        "PEERS[{:?}]={:?}",
+                        x.node.key.preimage().to_string(),
+                        x.node.value
+                    );
+                }
+            }
+        }
+        Some("get_peer") => {
+            let peer_id = match
parse_peer_id(args_iter.next()) {
+                Some(peer_id) => peer_id,
+                None => {
+                    return;
+                }
+            };
+            kademlia.get_closest_peers(peer_id);
+        }
+        _ => {
+            eprintln!("expected command: get, put, k_buckets, get_peer");
+        }
+    }
+}
+
+pub async fn handle_mega_command(
+    swarm: &mut Swarm<behaviour::Behaviour>,
+    client_paras: &mut ClientParas,
+    args: Vec<&str>,
+) {
+    let mut args_iter = args.iter().copied();
+    match args_iter.next() {
+        //mega provide ${your_repo}.git
+        Some("provide") => {
+            let repo_name = {
+                match args_iter.next() {
+                    Some(path) => path.to_string(),
+                    None => {
+                        eprintln!("Expected repo_name");
+                        return;
+                    }
+                }
+            };
+            if !repo_name.ends_with(".git") {
+                eprintln!("repo_name should end with .git");
+                return;
+            }
+            let path = get_repo_full_path(&repo_name);
+            let pack_protocol = get_pack_protocol(&path, client_paras.storage.clone()).await;
+            let object_id = pack_protocol.get_head_object_id(Path::new(&path)).await;
+            if object_id == *utils::ZERO_ID {
+                eprintln!("Repository not found");
+                return;
+            }
+            //Construct repoInfo
+            let mega_repo_info = MegaRepoInfo {
+                origin: swarm.local_peer_id().to_string(),
+                name: repo_name.clone(),
+                latest: object_id,
+                forks: vec![],
+                timestamp: get_utc_timestamp(),
+            };
+
+            let record = Record {
+                key: Key::new(&repo_name),
+                value: serde_json::to_vec(&mega_repo_info).unwrap(),
+                publisher: None,
+                expires: None,
+            };
+            if let Err(e) = swarm
+                .behaviour_mut()
+                .kademlia
+                .put_record(record, Quorum::One)
+            {
+                eprintln!("Failed to store record:{}", e);
+            }
+        }
+        Some("search") => {
+            let repo_name = {
+                match args_iter.next() {
+                    Some(path) => path.to_string(),
+                    None => {
+                        eprintln!("Expected repo_name");
+                        return;
+                    }
+                }
+            };
+            swarm
+                .behaviour_mut()
+                .kademlia
+                .get_record(Key::new(&repo_name));
+        }
+        Some("clone") => {
+            // mega clone p2p://12D3KooWFgpUQa9WnTztcvs5LLMJmwsMoGZcrTHdt9LKYKpM4MiK/abc.git
+            let mega_address = {
+                match args_iter.next() {
+                    Some(key) => key,
+                    None => {
+                        eprintln!("Expected mega_address");
+                        return;
+ } + } + }; + let (peer_id, repo_name) = match parse_mega_address(mega_address) { + Ok((peer_id, repo_name)) => (peer_id, repo_name), + Err(e) => { + eprintln!("{}", e); + return; + } + }; + let path = get_repo_full_path(repo_name); + let request_file_id = swarm.behaviour_mut().git_upload_pack.send_request( + &peer_id, + GitUploadPackReq(HashSet::new(), HashSet::new(), path), + ); + client_paras + .pending_git_upload_package + .insert(request_file_id, repo_name.to_string()); + } + Some("pull") => { + // mega pull p2p://12D3KooWFgpUQa9WnTztcvs5LLMJmwsMoGZcrTHdt9LKYKpM4MiK/abc.git + let mega_address = { + match args_iter.next() { + Some(key) => key, + None => { + eprintln!("Expected mega_address"); + return; + } + } + }; + let (peer_id, repo_name) = match parse_mega_address(mega_address) { + Ok((peer_id, repo_name)) => (peer_id, repo_name), + Err(e) => { + eprintln!("{}", e); + return; + } + }; + let path = get_repo_full_path(repo_name); + let pack_protocol = get_pack_protocol(&path, client_paras.storage.clone()).await; + let object_id = pack_protocol.get_head_object_id(Path::new(&path)).await; + if object_id == *utils::ZERO_ID { + eprintln!("local repo not found"); + return; + } + // Request to get git_info_refs + let request_id = swarm + .behaviour_mut() + .git_info_refs + .send_request(&peer_id, GitInfoRefsReq(path, Vec::new())); + client_paras + .pending_git_pull + .insert(request_id, repo_name.to_string()); + } + Some("clone-object") => { + // mega clone-object mega_test.git + let repo_name = { + match args_iter.next() { + Some(path) => path.to_string(), + None => { + eprintln!("Expected repo_name"); + return; + } + } + }; + if !repo_name.ends_with(".git") { + eprintln!("repo_name should end with .git"); + return; + } + + let kad_query_id = swarm + .behaviour_mut() + .kademlia + .get_record(Key::new(&repo_name)); + client_paras + .pending_repo_info_search_to_download_obj + .insert(kad_query_id, repo_name); + } + Some("pull-object") => { + // mega pull-object 
mega_test.git
+            let repo_name = {
+                match args_iter.next() {
+                    Some(path) => path.to_string(),
+                    None => {
+                        eprintln!("Expected repo_name");
+                        return;
+                    }
+                }
+            };
+            if !repo_name.ends_with(".git") {
+                eprintln!("repo_name should end with .git");
+                return;
+            }
+
+            let kad_query_id = swarm
+                .behaviour_mut()
+                .kademlia
+                .get_record(Key::new(&repo_name));
+            client_paras
+                .pending_repo_info_search_to_download_obj
+                .insert(kad_query_id, repo_name);
+        }
+        _ => {
+            eprintln!("expected command: clone, pull, provide, clone-object, pull-object");
+        }
+    }
+}
+
+fn parse_peer_id(peer_id_str: Option<&str>) -> Option<PeerId> {
+    match peer_id_str {
+        Some(peer_id) => match PeerId::from_str(peer_id) {
+            Ok(id) => Some(id),
+            Err(err) => {
+                eprintln!("peer_id parse error:{}", err);
+                None
+            }
+        },
+        None => {
+            eprintln!("Expected peer_id");
+            None
+        }
+    }
+}
+
+fn parse_mega_address(mega_address: &str) -> Result<(PeerId, &str), String> {
+    // p2p://12D3KooWFgpUQa9WnTztcvs5LLMJmwsMoGZcrTHdt9LKYKpM4MiK/abc.git
+    let v: Vec<&str> = mega_address.split('/').collect();
+    if v.len() < 4 {
+        return Err("mega_address invalid".to_string());
+    };
+    let peer_id = match PeerId::from_str(v[2]) {
+        Ok(peer_id) => peer_id,
+        Err(e) => return Err(e.to_string()),
+    };
+    Ok((peer_id, v[3]))
+}
diff --git a/gitnip-rs/p2p/src/node/mod.rs b/gitnip-rs/p2p/src/node/mod.rs
new file mode 100644
index 00000000..dc03a0ec
--- /dev/null
+++ b/gitnip-rs/p2p/src/node/mod.rs
@@ -0,0 +1,64 @@
+//!
+//!
+//!
+//!
+//!
+//!
+
+use database::driver::ObjectStorage;
+use entity::git_obj::Model;
+use libp2p::kad::QueryId;
+use libp2p::rendezvous::Cookie;
+use libp2p::request_response::RequestId;
+use libp2p::{Multiaddr, PeerId};
+use serde::{Deserialize, Serialize};
+use std::collections::HashMap;
+use std::sync::{Arc, Mutex};
+use std::time::{SystemTime, UNIX_EPOCH};
+
+pub mod client;
+mod input_command;
+pub mod relay_server;
+
+#[cfg(test)]
+mod tests {}
+
+#[derive(Serialize, Deserialize, PartialEq, Debug)]
+pub struct MegaRepoInfo {
+    pub origin: String,
+    pub name: String,
+    pub latest: String,
+    pub forks: Vec<Fork>,
+    pub timestamp: i64,
+}
+
+#[derive(Serialize, Deserialize, PartialEq, Debug)]
+pub struct Fork {
+    pub peer: String,
+    pub latest: String,
+    pub timestamp: i64,
+}
+
+pub struct ClientParas {
+    pub cookie: Option<Cookie>,
+    pub rendezvous_point: Option<PeerId>,
+    pub bootstrap_node_addr: Option<Multiaddr>,
+    pub storage: Arc<dyn ObjectStorage>,
+    pub pending_git_upload_package: HashMap<RequestId, String>,
+    pub pending_git_pull: HashMap<RequestId, String>,
+    pub pending_git_obj_download: HashMap<RequestId, String>,
+    pub pending_repo_info_update_fork: HashMap<QueryId, String>,
+    pub pending_repo_info_search_to_download_obj: HashMap<QueryId, String>,
+    pub pending_git_obj_id_download: HashMap<RequestId, String>,
+    pub repo_node_list: HashMap<String, Vec<String>>,
+    pub repo_id_need_list: Arc<Mutex<HashMap<String, Vec<String>>>>,
+    // pub repo_receive_git_obj_model_list: HashMap<String, Vec<Model>>,
+    pub repo_receive_git_obj_model_list: Arc<Mutex<HashMap<String, Vec<Model>>>>,
+}
+
+pub fn get_utc_timestamp() -> i64 {
+    SystemTime::now()
+        .duration_since(UNIX_EPOCH)
+        .unwrap()
+        .as_millis() as i64
+}
diff --git a/gitnip-rs/p2p/src/node/relay_server.rs b/gitnip-rs/p2p/src/node/relay_server.rs
new file mode 100644
index 00000000..8e8c2576
--- /dev/null
+++ b/gitnip-rs/p2p/src/node/relay_server.rs
@@ -0,0 +1,148 @@
+use super::input_command;
+use crate::network::event_handler;
+use async_std::io;
+use async_std::io::prelude::BufReadExt;
+use futures::executor::block_on;
+use futures::stream::StreamExt;
+use libp2p::kad::store::MemoryStore;
+use libp2p::kad::{
+    AddProviderOk, GetClosestPeersOk, GetProvidersOk, GetRecordOk,
PeerRecord, PutRecordOk,
+    QueryResult,
+};
+use libp2p::{
+    identify, identity,
+    identity::PeerId,
+    kad, noise, relay, rendezvous,
+    swarm::{NetworkBehaviour, SwarmEvent},
+    tcp, yamux,
+};
+use std::error::Error;
+
+pub fn run(local_key: identity::Keypair, p2p_address: String) -> Result<(), Box<dyn Error>> {
+    let local_peer_id = PeerId::from(local_key.public());
+    tracing::info!("Local peer id: {local_peer_id:?}");
+
+    let store = MemoryStore::new(local_peer_id);
+
+    let mut swarm = libp2p::SwarmBuilder::with_existing_identity(local_key)
+        .with_async_std()
+        .with_tcp(
+            tcp::Config::default().port_reuse(true),
+            noise::Config::new,
+            yamux::Config::default,
+        )?
+        .with_behaviour(|key| ServerBehaviour {
+            relay: relay::Behaviour::new(key.public().to_peer_id(), Default::default()),
+            identify: identify::Behaviour::new(identify::Config::new(
+                "/mega/0.0.1".to_string(),
+                key.public(),
+            )),
+            kademlia: kad::Behaviour::new(key.public().to_peer_id(), store),
+            rendezvous: rendezvous::server::Behaviour::new(rendezvous::server::Config::default()),
+        })?
+        .build();
+
+    // Listen on all interfaces
+    swarm.listen_on(p2p_address.parse()?)?;
+
+    let mut stdin = io::BufReader::new(io::stdin()).lines().fuse();
+
+    block_on(async {
+        loop {
+            futures::select!
{ + line = stdin.select_next_some() => { + let line :String = line.expect("Stdin not to close"); + if line.is_empty() { + continue; + } + //kad input + input_command::handle_kad_command(&mut swarm.behaviour_mut().kademlia,line.to_string().split_whitespace().collect()); + }, + event = swarm.select_next_some() => match event{ + SwarmEvent::Behaviour(ServerBehaviourEvent::Identify(identify::Event::Received { + info,peer_id + })) => { + swarm.add_external_address(info.observed_addr.clone()); + for listen_addr in info.listen_addrs.clone(){ + swarm.behaviour_mut().kademlia.add_address(&peer_id.clone(),listen_addr); + } + tracing::info!("Identify Event Received, peer_id :{}, info:{:?}", peer_id, info); + } + SwarmEvent::NewListenAddr { address, .. } => { + tracing::info!("Listening on {address:?}"); + } + //kad events + SwarmEvent::Behaviour(ServerBehaviourEvent::Kademlia(event)) => { + kad_event_handler(event); + } + //RendezvousServer events + SwarmEvent::Behaviour(ServerBehaviourEvent::Rendezvous(event)) => { + event_handler::rendezvous_server_event_handler(event); + }, + _ => { + tracing::debug!("Event: {:?}", event); + } + } + } + } + }); + + Ok(()) +} + +pub fn kad_event_handler(event: kad::Event) { + if let kad::Event::OutboundQueryProgressed { result, .. 
} = event {
+        match result {
+            QueryResult::GetRecord(Ok(GetRecordOk::FoundRecord(PeerRecord { record, peer }))) => {
+                let peer_id = match peer {
+                    Some(id) => id.to_string(),
+                    None => "local".to_string(),
+                };
+                tracing::info!(
+                    "Got record key[{}]={}, from {}",
+                    String::from_utf8(record.key.to_vec()).unwrap(),
+                    String::from_utf8(record.value).unwrap(),
+                    peer_id
+                );
+            }
+            QueryResult::GetRecord(Err(err)) => {
+                tracing::error!("Failed to get record: {err:?}");
+            }
+            QueryResult::PutRecord(Ok(PutRecordOk { key })) => {
+                tracing::info!(
+                    "Successfully put record {:?}",
+                    std::str::from_utf8(key.as_ref()).unwrap()
+                );
+            }
+            QueryResult::PutRecord(Err(err)) => {
+                tracing::error!("Failed to put record: {err:?}");
+            }
+            QueryResult::GetClosestPeers(Ok(GetClosestPeersOk { peers, .. })) => {
+                for x in peers {
+                    tracing::info!("{}", x);
+                }
+            }
+            QueryResult::GetClosestPeers(Err(err)) => {
+                tracing::error!("Failed to get closest peers: {err:?}");
+            }
+            QueryResult::GetProviders(Ok(GetProvidersOk::FoundProviders { providers, .. }), ..) => {
+                tracing::info!("FoundProviders: {providers:?}");
+            }
+            QueryResult::GetProviders(Err(e)) => {
+                tracing::error!("GetProviders error: {e:?}");
+            }
+            QueryResult::StartProviding(Ok(AddProviderOk { key, .. }), ..) => {
+                tracing::info!("StartProviding: {key:?}");
+            }
+            _ => {}
+        }
+    }
+}
+
+#[derive(NetworkBehaviour)]
+pub struct ServerBehaviour {
+    pub relay: relay::Behaviour,
+    pub identify: identify::Behaviour,
+    pub kademlia: kad::Behaviour<MemoryStore>,
+    pub rendezvous: rendezvous::server::Behaviour,
+}
diff --git a/gitnip-rs/p2p/src/peer.rs b/gitnip-rs/p2p/src/peer.rs
new file mode 100644
index 00000000..14ad6c9e
--- /dev/null
+++ b/gitnip-rs/p2p/src/peer.rs
@@ -0,0 +1,69 @@
+//!
+//!
+//!
+//!
+//!
+
+use super::node::client;
+use super::node::relay_server;
+use clap::Args;
+use database::DataSource;
+use libp2p::identity;
+
+/// Parameters for starting the p2p service
+#[derive(Args, Clone, Debug)]
+pub struct P2pOptions {
+    #[arg(long, default_value_t = String::from("127.0.0.1"))]
+    pub host: String,
+
+    #[arg(short, long, default_value_t = 8001)]
+    pub port: u16,
+
+    #[arg(short, long, default_value_t = String::from(""))]
+    pub bootstrap_node: String,
+
+    #[arg(short, long)]
+    pub secret_key_seed: Option<u8>,
+
+    #[arg(short, long, default_value_t = false)]
+    pub relay_server: bool,
+
+    #[arg(short, long, value_enum, default_value = "postgres")]
+    pub data_source: DataSource,
+}
+
+/// run as a p2p node
+pub async fn run(options: &P2pOptions) -> Result<(), Box<dyn std::error::Error>> {
+    let P2pOptions {
+        host,
+        port,
+        bootstrap_node,
+        secret_key_seed,
+        relay_server,
+        data_source,
+    } = options;
+    let p2p_address = format!("/ip4/{}/tcp/{}", host, port);
+
+    // Create a PeerId.
+    let local_key = if let Some(secret_key_seed) = secret_key_seed {
+        tracing::info!("Generate keys with fixed seed={}", secret_key_seed);
+        generate_ed25519_fix(*secret_key_seed)
+    } else {
+        tracing::info!("Generate keys randomly");
+        identity::Keypair::generate_ed25519()
+    };
+
+    if *relay_server {
+        relay_server::run(local_key, p2p_address)?;
+    } else {
+        client::run(local_key, p2p_address, bootstrap_node.clone(), *data_source).await?;
+    }
+    Ok(())
+}
+
+fn generate_ed25519_fix(secret_key_seed: u8) -> identity::Keypair {
+    let mut bytes = [0u8; 32];
+    bytes[0] = secret_key_seed;
+
+    identity::Keypair::ed25519_from_bytes(bytes).expect("only errors on wrong length")
+}
diff --git a/gitnip-rs/solana.md b/gitnip-rs/solana.md
new file mode 100644
index 00000000..9990698f
--- /dev/null
+++ b/gitnip-rs/solana.md
@@ -0,0 +1,128 @@
+### Solana
+
+[solana-labs/solana: Web-Scale Blockchain for fast, secure, scalable, decentralized apps and marketplaces.
(github.com)](https://github.com/solana-labs/solana)
+
+Web-Scale **Blockchain** for fast, secure, scalable, decentralized apps and marketplaces.
+
+1. Executes smart contracts: simple programs, as with Solidity
+2. Stores lightweight data such as metadata, but cannot hold large data
+
+How this combines with our design:
+
+1. We can store repository metadata on the blockchain, and others fetch the repository metadata from the chain as well (DHT -> blockchain)
+
+2. We can store the repository's bulk data on another platform such as IPFS, with the blockchain recording the IPFS address
+
+   (mega provide -> 1. store metadata on the blockchain 2. store bulk data on IPFS 3. record it on the blockchain)
+
+### PoH Consensus Mechanism
+
+### 1. Proof of History
+
+Proof of History (PoH) is a computed sequence that provides a cryptographic way to verify the passage of time between two events.
+Because of its cryptographic properties (it is simply a one-way hash function), the output cannot be predicted from the input. The function runs sequentially on a single core: each output becomes the next input, and the current output and invocation count are recorded periodically. The recorded outputs can then be checked by *validator nodes* in parallel, by recomputing and verifying the sequence.
+
+#### 1.1 Description
+
+The system is designed to work as follows:
+Start from a random value and run the hash function, feeding each output back in as the next input. Record the number of invocations and each output. The count provides both ordering and timing; chaining the outputs head to tail forms a complete chain of evidence.
+
+The initial random value can be chosen from, say, that day's New York Times headline, or some other public fact.
+For example:
+
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/poh-1.png)
+
+hashN denotes the actual hash output.
+
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/poh-2.png)
+
+As long as the chosen hash function is collision-resistant, this set of hashes can only be computed sequentially on a single thread. This guarantees that the result at index 300 cannot be obtained without actually running the algorithm 300 times.
+
+We can therefore infer the real time that passed between index 0 and index 300. In this way, each hash is like one grain of sand falling through an hourglass.
+
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/time.png)
+
+As shown below, at count 510144806912 the hash is 62f51643c1, and at count 510146904064 the hash is c43d5862d88. By the PoH argument above, we can trust the time that elapsed between count 510144806912 and count 510146904064.
+
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/poh-4.png)
+
+#### 1.2 Timestamps for events
+
+The hash sequence can also be used to record data. A "combine" function mixes a piece of data into the hash. The data can simply be the hash of an arbitrary event payload; the resulting hash then serves as a timestamp for that data, because it can only have been generated after the data was inserted.
+
+For example:
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/poh-5.png)
+
+Then some external event occurs, such as a photograph being taken or some other data being created:
+
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/poh-6.png)
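The generation loop and the "combine" step can be sketched in a few lines of Rust. This is an illustrative sketch only: it uses the standard library's `DefaultHasher` as a stand-in for a cryptographic hash such as SHA-256, and the names `PohEntry`, `hash_step`, and `generate` are ours, not Solana's.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// One entry in the PoH sequence: the invocation count and the hash output.
#[derive(Debug, Clone, PartialEq)]
struct PohEntry {
    index: u64,
    hash: u64,
}

/// One tick: hash the previous output, optionally "combining" event data
/// into the chain (which timestamps that data).
fn hash_step(prev: u64, data: Option<&[u8]>) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    if let Some(d) = data {
        d.hash(&mut h);
    }
    h.finish()
}

/// Run `n` sequential hash invocations from `seed`, recording every entry.
fn generate(seed: u64, n: u64) -> Vec<PohEntry> {
    let mut entries = Vec::with_capacity(n as usize);
    let mut current = seed;
    for i in 1..=n {
        current = hash_step(current, None);
        entries.push(PohEntry { index: i, hash: current });
    }
    entries
}

fn main() {
    // The value at index 300 can only be reached by actually performing
    // 300 sequential hash invocations.
    let chain = generate(42, 300);
    println!("hash at index 300: {:x}", chain[299].hash);

    // Combining event data produces a different hash than the plain tick,
    // so every subsequent hash in the sequence changes too.
    let mixed = hash_step(chain[99].hash, Some(b"photograph1".as_slice()));
    assert_ne!(mixed, chain[100].hash);
}
```

Note that generation is inherently single-threaded (each step depends on the previous output), which is exactly what makes the counter usable as a clock.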
+
+Hash336 is computed by combining hash335 with the photograph's sha256 and hashing the result with sha256 again. The index and the photograph's sha256 are recorded as part of the output sequence, so anyone can verify the integrity of the data, and this verification can even run in parallel.
+
+Since the whole process is strictly sequential, we can assert that an event entering the queue happened before some hash that is computed later.
+
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/poh-table-1.png)
+
+As Table 1 shows, photograph2 was created before hash600, and photograph1 was created before hash336. Inserting data into the hash sequence changes the entire subsequence that follows.
+
+As Figure 3 below shows, the input cfd40df8… is inserted into the PoH sequence at counter 510145855488, and the resulting inserted hash is 3d039eef3.
+All subsequent hash values change because of this insertion; the change is highlighted by color below.
+
+Every node that observes this sequence can determine the order of all events and estimate the real time between any two insertions.
+
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/poh-7.png)
+
+#### 1.3 Verification
+
+The sequence can be verified on multiple cores in far less time than it took to generate.
+For example:
+
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/poh-13-1.png)
+
+If a GPU has 4000 cores, the verifier can split the hash sequence into 4000 slices and check in parallel that each slice is correct from its first hash to its last.
+
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/poh-13-2.png)
+
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/poh-13-3.png)
+
+#### 1.4 Consistency
+
+Users expect the generated sequence to stay consistent in all circumstances, and to resist the insertion of events before the last inserted event.
+To prevent a malicious leader node from forging sequence data, each leader starts its sequence from the last hash of the previous sequence it trusts.
+
+To prevent the leader from rewriting events, every event is signed by the user who submitted it.
+
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/poh-14-1.png)
+
+### 2. Network Design
+
+![img](https://raw.githubusercontent.com/yuegs/yuegs.github.io/master/images/blockChain/solana/poh/poh-2-1.png)
+
+As shown above, the leader that generates the PoH sequence provides network-wide read consistency and verifiable time.
+The leader arranges user messages in order so that they can be processed efficiently by the other nodes in the system, maximizing throughput.
+It executes the transactions against the current state held in RAM, and then publishes the transactions together with a signature of the final state to replica nodes called verifiers.
+
+A verifier executes the same transactions against its own copy of the state and publishes its computed signature of state as confirmation. These published confirmations serve as votes in the consensus algorithm.
+
+In the non-partitioned case, there is always exactly one leader in the network.
+Each verifier node has the same hardware capability as the leader, and a verifier can be elected leader; this election is based on PoS.
+In terms of CAP (Consistency / Availability / Partition tolerance), Consistency is always prioritized over Availability when a Partition occurs.
+
+From this network design we can see that block production will be much faster than Ethereum's, but the degree of decentralization is lower.
+
+Reference links:
+https://cointelegraphcn.com/news/what-is-solana-and-how-does-it-work
+https://solana.com/solana-whitepaper.pdf
+https://www.naukri.com/learning/articles/delegated-proof-of-stake-dpos/
+https://medium.com/solana-labs/solanas-network-architecture-8e913e1d5a40
\ No newline at end of file
-- 
Gitee
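The slice-based parallel verification described in §1.3 of solana.md can be sketched as follows. This is an illustrative sketch under assumed names (`hash_step`, `generate`, and `verify_parallel` are not Solana APIs), using OS threads in place of GPU cores and the standard library's `DefaultHasher` in place of a cryptographic hash:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::thread;

/// One sequential tick of the PoH chain.
fn hash_step(prev: u64) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    h.finish()
}

/// Generate the full sequence: inherently single-threaded (slow path).
fn generate(seed: u64, n: usize) -> Vec<u64> {
    let mut out = Vec::with_capacity(n);
    let mut cur = seed;
    for _ in 0..n {
        cur = hash_step(cur);
        out.push(cur);
    }
    out
}

/// Verify by splitting the recorded sequence into `slices` chunks checked in
/// parallel: each chunk must recompute correctly starting from the hash that
/// immediately precedes it. `slices` must be >= 1 and `seq` non-empty.
fn verify_parallel(seed: u64, seq: &[u64], slices: usize) -> bool {
    let chunk = (seq.len() + slices - 1) / slices; // ceiling division
    thread::scope(|s| {
        let mut handles = Vec::new();
        for (i, part) in seq.chunks(chunk).enumerate() {
            // The hash just before this chunk anchors its recomputation.
            let prev = if i == 0 { seed } else { seq[i * chunk - 1] };
            handles.push(s.spawn(move || {
                let mut cur = prev;
                part.iter().all(|&h| {
                    cur = hash_step(cur);
                    cur == h
                })
            }));
        }
        handles.into_iter().all(|h| h.join().unwrap())
    })
}

fn main() {
    let seq = generate(1, 4000);
    assert!(verify_parallel(1, &seq, 8));

    // Tampering with any single entry is detected by the chunk containing it.
    let mut bad = seq.clone();
    bad[1234] ^= 1;
    assert!(!verify_parallel(1, &bad, 8));
    println!("parallel slice verification works");
}
```

The asymmetry is the point: generation takes N sequential hashes, while verification takes roughly N / cores wall-clock time, which is why a GPU with thousands of cores can check the sequence far faster than it was produced.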