From c3c167307f5338dc7e749e524dc75034627fc67b Mon Sep 17 00:00:00 2001 From: MO NAN <651932351@qq.com> Date: Wed, 3 Jul 2024 10:39:15 +0800 Subject: [PATCH] Add rst translate (#1819) * Update 3.x/.readthedocs.yaml * Update translate, add rst translate --- 3.x/.readthedocs.yaml | 2 +- 3.x/en/MVP.docx | Bin 10300 -> 0 bytes .../advanced_function/distributed_event.md | 10 +- .../advanced_function/distributed_identity.md | 6 +- .../docs/advanced_function/privacy/index.md | 10 +- .../docs/advanced_function/privacy/privacy.md | 52 +- .../docs/advanced_function/privacy/wedpr.md | 12 +- .../docs/advanced_function/privacy_protect.md | 14 +- 3.x/en/docs/advanced_function/safety.md | 4 +- .../docs/advanced_function/trusted_oracle.md | 6 +- 3.x/en/docs/advanced_function/wecross.md | 6 +- .../1_conception/distributed_system.md | 120 ++--- .../1_conception/on_and_off_the_blockchain.md | 124 ++--- .../articles/1_conception/point_to_point.md | 26 +- .../1_conception/safe_and_controllable.md | 76 +-- .../1_conception/simplify_blockchain.md | 22 +- .../1_conception/the_truth_of_tampering.md | 60 +-- .../1_conception/understandable_blockchain.md | 44 +- .../1_conception/what_should_not_trust.md | 58 +-- .../articles/1_conception/what_to_trust.md | 92 ++-- .../1_conception/why_blockchain_slow.md | 66 +-- .../articles/2_required/entry_to_master.md | 64 +-- .../2_required/go_through_sourcecode.md | 64 +-- .../2_required/practical_skill_tree.md | 82 +-- ...d_parallel_transaction_execution_engine.md | 72 +-- .../distributed_storage_design.md | 62 +-- .../distributed_storage_experience.md | 44 +- .../group_architecture_design.md | 58 +-- .../group_architecture_practice.md | 38 +- ...ct_development_framework_with_tutorials.md | 86 ++-- .../parallel_transformation.md | 54 +- .../30_architecture/transaction_lifetime.md | 26 +- .../transaction_pool_optimization_strategy.md | 56 +- .../cachedstorage_deadlock_debug.md | 28 +- ...consensus_and_sync_process_optimization.md | 42 +- 
...d_parallel_transaction_execution_engine.md | 72 +-- .../3_features/31_performance/flow_control.md | 66 +-- ...ct_development_framework_with_tutorials.md | 86 ++-- .../31_performance/parallel_transformation.md | 54 +- .../performance_optimization.md | 48 +- .../performance_optimization_tools.md | 82 +-- .../sync_and_its_performance_optimization.md | 38 +- .../31_performance/sync_optimization.md | 46 +- .../32_consensus/consensus_optimization.md | 56 +- .../pbft_empty_block_processing.md | 26 +- .../32_consensus/rpbft_design_analysis.md | 44 +- .../3_features/33_storage/crud_guidance.md | 68 +-- .../33_storage/data_chain_or_database.md | 38 +- .../33_storage/storage_by_table_structure.md | 44 +- .../33_storage/why_switch_to_rocksdb.md | 26 +- .../34_protocol/amop_introduction.md | 30 +- .../34_protocol/network_compression.md | 42 +- .../34_protocol/network_interface.md | 34 +- .../16skills_to_high-level_smart_contracts.md | 134 ++--- .../3_features/35_contract/abi_of_contract.md | 36 +- ...ct_design_practice_deposit&points_scene.md | 172 +++---- .../35_contract/contract_name_service.md | 40 +- .../35_contract/entry_quick_guide.md | 100 ++-- .../35_contract/outside_account_generation.md | 30 +- ...e-compiled_contract_architecture_design.md | 52 +- ...pre-compiled_contract_rapid_development.md | 26 +- .../smart_contract_concept_and_evolution.md | 82 +-- .../smart_contract_test_practice.md | 134 ++--- .../smart_contract_write_elegantly.md | 94 ++-- .../35_contract/solidity_advanced_features.md | 102 ++-- .../35_contract/solidity_basic_features.md | 74 +-- .../35_contract/solidity_design_patterns.md | 82 +-- .../solidity_design_programming_strategy.md | 108 ++-- .../solidity_operation_principle.md | 74 +-- .../35_contract/solidity_presensation.md | 30 +- .../36_cryptographic/ecdsa_analysis.md | 54 +- .../36_cryptographic/elliptic_curve.md | 40 +- ...ational_cryptography_deployment_example.md | 88 ++-- .../national_cryptography_features.md | 62 +-- 
.../37_safety/access_control_glance.md | 46 +- .../37_safety/certificate_description.md | 36 +- .../3_features/37_safety/disk_encryption.md | 42 +- .../role_authority_model_realization.md | 42 +- .../third-party-CA_node_deployment.md | 82 +-- .../37_safety/tsl1.2_establish_process.md | 32 +- .../articles/3_features/38_privacy/index.md | 4 +- ...acy_protection_group_and_ring_signature.md | 66 +-- ...ivacy_protection_homomorphic_encryption.md | 36 +- .../4_tools/41_webase/walk_in_webase_zoo.md | 90 ++-- .../4_tools/41_webase/webase-transaction.md | 44 +- .../4_tools/41_webase/webase_data_output.md | 106 ++-- .../41_webase/webase_node_preposition.md | 36 +- .../4_tools/41_webase/webase_release.md | 54 +- .../4_tools/42_buildchain/fast_build_chain.md | 60 +-- .../4_tools/43_console/console_details.md | 46 +- ...asdk_performance_improvement_8000-30000.md | 50 +- .../4_tools/44_sdk/multilingual_sdk.md | 100 ++-- .../4_tools/44_sdk/node.js_sdk_quick_start.md | 40 +- ...hon-sdk_origin_function_and_realization.md | 62 +-- .../4_tools/44_sdk/python_blockchain_box.md | 46 +- .../talking_about_java-contract-code.md | 14 +- .../4_tools/44_sdk/use_javasdk_in_eclipse.md | 22 +- .../contract_analysis_tool_guide.md | 32 +- .../caliper_stress_test_practice.md | 58 +-- .../47_maintenance/access_control_glance.md | 44 +- .../five_step_to_develop_application.md | 36 +- .../5_corporation/how_to_submit_pr.md | 48 +- .../application_bsn_officially_designated.md | 18 +- ...on_industry_digitalization_jianxinzhuhe.md | 34 +- .../application_manufacturing_changhong.md | 60 +-- ...ation_multiple_enterprises_jianxinzhuhe.md | 20 +- .../application_online_lending_platforms.md | 28 +- .../application_people_copyright.md | 26 +- .../application_westlake_longjingtea_yifei.md | 32 +- .../industry_application_case.md | 12 +- .../articles/7_community/group_deploy_case.md | 44 +- .../suibe_blockchain_center_toolbox.md | 70 +-- ...64\345\214\272\345\235\227\351\223\276.md" | 34 +- 
.../ansible_FISCO-BCOS_Webase-deploy.md | 50 +- .../build_chain_with_wsl_on_windows.md | 44 +- .../deploy_webase_management_platform.md | 188 +++++++ ...form_compiles_and_runs_fisco-bcos-2.6.0.md | 28 +- ...72\351\223\276\350\277\207\347\250\213.md" | 20 +- 3.x/en/docs/articles/index.md | 4 +- 3.x/en/docs/community.md | 2 +- 3.x/en/docs/community/MVP_list_new.md | 6 +- 3.x/en/docs/community/contributor_list_new.md | 36 +- 3.x/en/docs/community/partner_list_new.md | 214 ++++---- 3.x/en/docs/community/pr.md | 28 +- 3.x/en/docs/components/data_index.md | 58 +-- 3.x/en/docs/components/governance_index.md | 110 ++-- 3.x/en/docs/components/index.md | 8 +- 3.x/en/docs/components/smartdev_index.md | 42 +- 3.x/en/docs/components/webase.md | 12 +- .../docs/contract_develop/Liquid_develop.md | 10 +- .../c++_contract/add_precompiled_impl.md | 34 +- .../c++_contract/precompiled_contract_api.md | 80 +-- .../c++_contract/precompiled_error_code.md | 22 +- .../c++_contract/use_crud_precompiled.md | 44 +- .../c++_contract/use_group_ring_sig.md | 42 +- .../c++_contract/use_kv_precompiled.md | 36 +- .../c++_contract/use_precompiled.md | 20 +- 3.x/en/docs/contract_develop/opcode_diff.md | 22 +- .../docs/contract_develop/solidity_develop.md | 12 +- 3.x/en/docs/design/amop_protocol.md | 56 +- 3.x/en/docs/design/architecture.md | 18 +- 3.x/en/docs/design/boostssl.md | 18 +- .../docs/design/cns_contract_name_service.md | 72 +-- 3.x/en/docs/design/committee_design.md | 60 +-- 3.x/en/docs/design/compatibility.md | 42 +- 3.x/en/docs/design/consensus/consensus.md | 44 +- 3.x/en/docs/design/consensus/index.rst | 35 ++ 3.x/en/docs/design/consensus/pbft.md | 54 +- 3.x/en/docs/design/consensus/raft.md | 58 +-- 3.x/en/docs/design/consensus/rpbft.md | 72 +-- 3.x/en/docs/design/contract.md | 8 +- 3.x/en/docs/design/contract_directory.md | 120 ++--- 3.x/en/docs/design/guomi.md | 10 +- 3.x/en/docs/design/hsm.md | 18 +- 3.x/en/docs/design/index.md | 16 +- 3.x/en/docs/design/network_compress.md | 26 +- 
3.x/en/docs/design/p2p.md | 176 +++---- 3.x/en/docs/design/parallel/DMC.md | 46 +- 3.x/en/docs/design/parallel/dag.md | 44 +- 3.x/en/docs/design/parallel/group.md | 22 +- 3.x/en/docs/design/parallel/index.md | 2 +- 3.x/en/docs/design/parallel/pipeline.md | 24 +- 3.x/en/docs/design/parallel/sharding.md | 12 +- 3.x/en/docs/design/protocol_description.md | 34 +- 3.x/en/docs/design/rip.md | 2 +- .../security_control/certificate_list.md | 22 +- .../security_control/committee_design.md | 60 +-- 3.x/en/docs/design/security_control/index.rst | 36 ++ .../security_control/node_management.md | 96 ++-- .../security_control/permission_control.md | 44 +- 3.x/en/docs/design/storage/archive.md | 10 +- 3.x/en/docs/design/storage/storage.md | 40 +- .../docs/design/storage/storage_security.md | 24 +- 3.x/en/docs/design/sync.md | 34 +- 3.x/en/docs/design/tx_procedure.md | 28 +- 3.x/en/docs/design/virtual_machine/evm.md | 16 +- 3.x/en/docs/design/virtual_machine/gas.md | 20 +- 3.x/en/docs/design/virtual_machine/index.rst | 35 ++ .../design/virtual_machine/precompiled.md | 10 +- 3.x/en/docs/design/virtual_machine/wasm.md | 154 +++--- 3.x/en/docs/develop/account.md | 48 +- 3.x/en/docs/develop/amop.md | 30 +- 3.x/en/docs/develop/api.md | 292 +++++------ 3.x/en/docs/develop/committee_usage.md | 70 +-- .../docs/develop/console_deploy_contract.md | 18 +- 3.x/en/docs/develop/contract_life_cycle.md | 60 +-- .../docs/develop/contract_safty_practice.md | 164 +++--- 3.x/en/docs/develop/index.md | 16 +- 3.x/en/docs/develop/privacy.md | 44 +- 3.x/en/docs/develop/smartdev_index.md | 42 +- 3.x/en/docs/introduction/change_log/3_0_0.md | 40 +- .../docs/introduction/change_log/3_0_0_rc1.md | 28 +- .../docs/introduction/change_log/3_0_0_rc2.md | 30 +- .../docs/introduction/change_log/3_0_0_rc3.md | 36 +- .../docs/introduction/change_log/3_0_0_rc4.md | 30 +- 3.x/en/docs/introduction/change_log/3_0_1.md | 4 +- 3.x/en/docs/introduction/change_log/3_1_0.md | 10 +- 3.x/en/docs/introduction/change_log/3_1_1.md 
| 8 +- 3.x/en/docs/introduction/change_log/3_1_2.md | 10 +- 3.x/en/docs/introduction/change_log/3_2_0.md | 14 +- 3.x/en/docs/introduction/change_log/3_2_1.md | 10 +- 3.x/en/docs/introduction/change_log/3_2_2.md | 8 +- 3.x/en/docs/introduction/change_log/3_2_3.md | 8 +- 3.x/en/docs/introduction/change_log/3_2_4.md | 8 +- 3.x/en/docs/introduction/change_log/3_2_5.md | 8 +- 3.x/en/docs/introduction/change_log/3_2_6.md | 8 +- 3.x/en/docs/introduction/change_log/3_2_7.md | 8 +- 3.x/en/docs/introduction/change_log/3_3_0.md | 16 +- 3.x/en/docs/introduction/change_log/3_4_0.md | 10 +- 3.x/en/docs/introduction/change_log/3_5_0.md | 18 +- 3.x/en/docs/introduction/change_log/3_6_0.md | 14 +- 3.x/en/docs/introduction/change_log/3_6_1.md | 10 +- 3.x/en/docs/introduction/change_log/3_7_0.md | 12 +- 3.x/en/docs/introduction/change_log/3_7_1.md | 14 +- 3.x/en/docs/introduction/change_log/3_8_0.md | 12 +- .../change_log/feature_bugfix_list.md | 14 +- 3.x/en/docs/introduction/change_log/index.rst | 330 ++++++++++++ .../docs/introduction/change_log/upgrade.md | 4 +- 3.x/en/docs/introduction/function_overview.md | 6 +- 3.x/en/docs/introduction/introduction.md | 10 +- 3.x/en/docs/introduction/key_feature.md | 34 +- 3.x/en/docs/key_concepts.md | 142 ++--- 3.x/en/docs/manual/certificate_list.md | 12 +- 3.x/en/docs/manual/log_description.md | 44 +- .../docs/manual/operation_and_maintenance.md | 48 +- .../operation_and_maintenance/add_new_node.md | 28 +- .../docs/operation_and_maintenance/browser.md | 18 +- .../operation_and_maintenance/build_chain.md | 2 +- .../committee_usage.md | 82 +-- .../console/console_commands.md | 254 ++++----- .../console/console_config.md | 64 +-- .../console/console_error.md | 16 +- .../console/index.md | 14 +- .../data_archive_tool.md | 12 +- .../operation_and_maintenance/data_index.md | 60 +-- .../governance_index.md | 112 ++-- .../light_monitor.md | 28 +- .../operation_and_maintenance/log/index.md | 4 +- .../log/log_description.md | 44 +- 
.../log/system_log_audit.md | 34 +- .../node_management.md | 70 +-- .../operation_and_maintenance.md | 48 +- .../operation_and_maintenance/storage_tool.md | 18 +- .../stress_testing.md | 32 +- .../docs/operation_and_maintenance/upgrade.md | 110 ++-- .../docs/operation_and_maintenance/webase.md | 10 +- 3.x/en/docs/quick_start/air_installation.md | 36 +- .../docs/quick_start/hardware_requirements.md | 8 +- .../docs/quick_start/solidity_application.md | 128 ++--- .../quick_start/wbc_liquid_application.md | 144 +++--- 3.x/en/docs/sdk/c_sdk/api.md | 484 +++++++++--------- 3.x/en/docs/sdk/c_sdk/appendix.md | 16 +- 3.x/en/docs/sdk/c_sdk/assemble_transaction.md | 70 +-- 3.x/en/docs/sdk/c_sdk/compile.md | 8 +- 3.x/en/docs/sdk/c_sdk/config.md | 34 +- 3.x/en/docs/sdk/c_sdk/dev.md | 4 +- 3.x/en/docs/sdk/c_sdk/dylibs.md | 4 +- 3.x/en/docs/sdk/c_sdk/env.md | 2 +- 3.x/en/docs/sdk/c_sdk/faq.md | 12 +- 3.x/en/docs/sdk/c_sdk/index.md | 4 +- .../docs/sdk/c_sdk/transaction_data_struct.md | 32 +- 3.x/en/docs/sdk/cert_config.md | 22 +- 3.x/en/docs/sdk/cpp_sdk/index.md | 6 +- 3.x/en/docs/sdk/csharp_sdk/index.md | 2 +- 3.x/en/docs/sdk/csharp_sdk/quick_start.md | 42 +- 3.x/en/docs/sdk/go_sdk/amopExamples.md | 16 +- 3.x/en/docs/sdk/go_sdk/api.md | 14 +- 3.x/en/docs/sdk/go_sdk/console.md | 20 +- 3.x/en/docs/sdk/go_sdk/contractExamples.md | 54 +- 3.x/en/docs/sdk/go_sdk/env_conf.md | 24 +- 3.x/en/docs/sdk/go_sdk/event_sub.md | 34 +- 3.x/en/docs/sdk/go_sdk/index.rst | 31 ++ 3.x/en/docs/sdk/index.md | 10 +- 3.x/en/docs/sdk/java_sdk/amop.md | 36 +- 3.x/en/docs/sdk/java_sdk/assemble_service.md | 44 +- .../docs/sdk/java_sdk/assemble_transaction.md | 68 +-- 3.x/en/docs/sdk/java_sdk/config.md | 44 +- 3.x/en/docs/sdk/java_sdk/contract_parser.md | 52 +- 3.x/en/docs/sdk/java_sdk/contracts_to_java.md | 74 +-- 3.x/en/docs/sdk/java_sdk/crypto.md | 6 +- 3.x/en/docs/sdk/java_sdk/event_sub.md | 60 +-- 3.x/en/docs/sdk/java_sdk/index.md | 6 +- 3.x/en/docs/sdk/java_sdk/keytool.md | 14 +- 
.../sdk/java_sdk/precompiled_service_api.md | 166 +++--- 3.x/en/docs/sdk/java_sdk/quick_start.md | 22 +- .../remote_sign_assemble_transaction.md | 54 +- 3.x/en/docs/sdk/java_sdk/retcode_retmsg.md | 30 +- 3.x/en/docs/sdk/java_sdk/rpc_api.md | 116 ++--- 3.x/en/docs/sdk/java_sdk/spring_boot_crud.md | 26 +- .../docs/sdk/java_sdk/spring_boot_starter.md | 24 +- .../sdk/java_sdk/transaction_data_struct.md | 32 +- .../docs/sdk/java_sdk/transaction_decode.md | 30 +- 3.x/en/docs/sdk/nodejs_sdk/api.md | 90 ++-- 3.x/en/docs/sdk/nodejs_sdk/configuration.md | 32 +- 3.x/en/docs/sdk/nodejs_sdk/index.rst | 36 ++ 3.x/en/docs/sdk/nodejs_sdk/install.md | 44 +- 3.x/en/docs/sdk/python_sdk/api.md | 56 +- 3.x/en/docs/sdk/python_sdk/configuration.md | 42 +- 3.x/en/docs/sdk/python_sdk/console.md | 102 ++-- 3.x/en/docs/sdk/python_sdk/demo.md | 8 +- 3.x/en/docs/sdk/python_sdk/index.md | 4 +- 3.x/en/docs/sdk/python_sdk/index.rst | 36 ++ 3.x/en/docs/sdk/python_sdk/install.md | 42 +- 3.x/en/docs/sdk/rust_sdk/index.md | 8 +- 3.x/en/docs/tutorial/air/build_chain.md | 52 +- 3.x/en/docs/tutorial/air/config.md | 84 +-- 3.x/en/docs/tutorial/air/expand_node.md | 24 +- 3.x/en/docs/tutorial/air/index.md | 4 +- 3.x/en/docs/tutorial/air/multihost.md | 30 +- 3.x/en/docs/tutorial/air/storage_security.md | 20 +- 3.x/en/docs/tutorial/air/use_hsm.md | 30 +- 3.x/en/docs/tutorial/compile_binary.md | 56 +- 3.x/en/docs/tutorial/docker.md | 20 +- 3.x/en/docs/tutorial/lightnode.md | 26 +- .../tutorial/max/deploy_max_by_buildchain.md | 28 +- .../tutorial/max/expand_max_withoutTars.md | 16 +- 3.x/en/docs/tutorial/max/index.md | 6 +- 3.x/en/docs/tutorial/max/installation.md | 96 ++-- 3.x/en/docs/tutorial/max/max_builder.md | 90 ++-- 3.x/en/docs/tutorial/pro/config.md | 38 +- .../tutorial/pro/deploy_pro_by_buildchain.md | 30 +- 3.x/en/docs/tutorial/pro/expand_group.md | 16 +- 3.x/en/docs/tutorial/pro/expand_node.md | 26 +- .../tutorial/pro/expand_pro_withoutTars.md | 10 +- 3.x/en/docs/tutorial/pro/expand_service.md | 24 
+- 3.x/en/docs/tutorial/pro/index.md | 4 +- 3.x/en/docs/tutorial/pro/installation.md | 108 ++-- .../tutorial/pro/installation_without_tars.md | 132 ++--- 3.x/en/docs/tutorial/pro/pro_builder.md | 120 ++--- 3.x/en/docs/tutorial/promax_expand_air.md | 16 +- 3.x/en/docs/tutorial/support_os.md | 14 +- 3.x/en/index.rst | 156 +++--- 332 files changed, 8136 insertions(+), 7409 deletions(-) delete mode 100644 3.x/en/MVP.docx create mode 100644 3.x/en/docs/articles/7_practice/deploy_webase_management_platform.md create mode 100644 3.x/en/docs/design/consensus/index.rst create mode 100644 3.x/en/docs/design/security_control/index.rst create mode 100644 3.x/en/docs/design/virtual_machine/index.rst create mode 100644 3.x/en/docs/introduction/change_log/index.rst create mode 100644 3.x/en/docs/sdk/go_sdk/index.rst create mode 100644 3.x/en/docs/sdk/nodejs_sdk/index.rst create mode 100644 3.x/en/docs/sdk/python_sdk/index.rst
diff --git a/3.x/.readthedocs.yaml b/3.x/.readthedocs.yaml
index 26f5be9b8..a395ee789 100644
--- a/3.x/.readthedocs.yaml
+++ b/3.x/.readthedocs.yaml
@@ -14,7 +14,7 @@ build:
 #   golang: "1.20"
 
 sphinx:
-  configuration: 2.x/conf.py
+  configuration: 3.x/conf.py
 
 # Optionally build your docs in additional formats such as PDF and ePub
 formats:
diff --git a/3.x/en/MVP.docx b/3.x/en/MVP.docx
deleted file mode 100644
index cea3f6319f30423cfa94493178b56331a1278f1d..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 10300
[10300-byte zlib/base85 binary literal omitted]
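For context, the single functional change in this patch is the Read the Docs fix in the `.readthedocs.yaml` hunk above: the Sphinx configuration path is repointed from `2.x/conf.py` to `3.x/conf.py`, so the 3.x docs project builds against its own tree. A sketch of how the touched section of the file reads after the patch — only the `sphinx.configuration` line is confirmed by the hunk; the `version` and `build` values below are illustrative assumptions:

```yaml
# Sketch of .readthedocs.yaml after this patch (Read the Docs v2 config schema).
# Only sphinx.configuration is confirmed by the hunk above; the version/build
# values here are assumptions for illustration.
version: 2

build:
  os: ubuntu-22.04        # assumed
  tools:
    python: "3.11"        # assumed
    # golang: "1.20"

# Build the English 3.x docs tree with Sphinx
sphinx:
  configuration: 3.x/conf.py   # was 2.x/conf.py, which pointed at the old docs tree
```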
-[…]2 effect, but more people, the account is complex, there may be similar to "information asymmetry" and other problems, then we need to organize into alliances, joint accounting, sharing data, so that everything happens in the sun, which eliminates the "information asymmetry"。Such a collaborative environment is trustworthy and efficient, everyone's interests are protected, and the business environment is well developed.。This is the more important "blockchain thinking," so that more people involved in the operation of the rules: honest work will be due to the benefits, if cheating will naturally be known to all, thousands of people.。Technology is the foundation that helps implement this model。
+If we talk only about technology, we have not yet captured the real appeal of the blockchain. As noted above, blockchain record-keeping requires multiple participants, so why would anyone spend the effort, and bear the cost, of taking part in the bookkeeping? This is where "multi-party collaboration" comes in. In modern society many things are hard for one party to accomplish alone, and we must cooperate to achieve a 1+1>2 effect; but with more participants the accounts grow complex and problems such as "information asymmetry" may arise, so we need to organize into alliances that keep accounts jointly and share data. Everything then happens in the open, which eliminates the "information asymmetry". Such a collaborative environment is trustworthy and efficient, everyone's interests are protected, and the business ecosystem flourishes. This is the more important idea of "blockchain thinking": with more people involved in running the rules, honest work earns its due reward, while cheating naturally becomes known to all. Technology is the foundation that makes this model work.

## What can blockchain do?
-Integrating blockchain technology and blockchain thinking, you can consider using blockchain as long as it involves multiple parties and has complex accounting and data sharing requirements.。Blockchain can not only keep accounts, but also record information about people and things, and become credible by jointly verifying and solidifying the information.。
+Combining blockchain technology with blockchain thinking, you can consider using a blockchain wherever multiple parties are involved and there are complex accounting and data-sharing requirements. Blockchain can not only keep accounts but also record information about people and things, which becomes credible once the participants jointly verify and fix it on the chain.

-在在**traditional financial scene**If the transaction takes place on the blockchain, it can play the role of "transaction is reconciliation," greatly improving operational efficiency.。Furthermore, in typical financial services such as supply chain, cross-border payment and bills, blockchain can be used to build public ledgers between partners, and massive amounts of funds and assets can be recorded, verified and traded on the chain, with credible, accurate and efficient ledgers, which can greatly expand the scale of financial services, improve operational efficiency, reduce costs and risks, and better solve a series of problems such as financing difficulties for small and micro enterprises, bank wind control difficulties, and regulatory。In addition, compliant and standardized digital assets can also be defined, circulated and accepted on the blockchain, which can build innovative business models.。
+In **traditional financial scenarios**, if a transaction takes place on the blockchain, "the transaction is the reconciliation", which greatly improves operational efficiency. Furthermore, in typical financial services such as supply chain finance, cross-border payments and bills, blockchain can be used to build a shared ledger between partners: large volumes of funds and assets can be recorded, verified and traded on the chain. With a credible, accurate and efficient ledger, financial services can be greatly scaled up, operating efficiency improved, and costs and risks reduced, helping to solve problems such as financing difficulties for small and micro enterprises, banks' risk-control challenges, and regulatory burdens. In addition, compliant and standardized digital assets can be defined, circulated and accepted on the blockchain, enabling innovative business models.

-在在**Judicial depository areas**If the signing process of the contract is fully recorded on the blockchain and witnessed by the participants, including the judiciary, then in the event of a dispute, the judiciary can extract evidence from the chain with one click for verification, proving that the contract has not been modified from birth to the time of the evidence.。Due to the involvement of the judiciary in the chain, such evidence already has some judicial effect, greatly reducing the cost of justice。
+In the **judicial evidence-storage field**, if the signing process of a contract is fully recorded on the blockchain and witnessed by the participants, including the judiciary, then in the event of a dispute the judiciary can extract the evidence from the chain with one click for verification, proving that the contract has not been modified from its creation to the time of evidence collection. Because the judiciary itself participates in the chain, such evidence already carries a degree of judicial effect, greatly reducing the cost of justice.

-在在**government service**On the other hand, blockchain is used for identity authentication, allowing people's identification to be verified in one place and available everywhere.。Using blockchain to connect multiple departments, you can do "more errands when doing things, users less errands," and "prove that I am me, my mother is my mother" things will no longer exist, but also to protect user data privacy.。
+In **government services**, blockchain can be used for identity authentication, allowing a person's identity to be verified in one place and recognized everywhere. Using blockchain to connect multiple departments lets the systems "run more of the errands so that users run fewer"; absurdities such as having to "prove that I am me, or that my mother is my mother" will no longer exist, while user data privacy is also protected.

-In addition, blockchain technology can also be applied to copyright, property, Internet of Things, smart city, new energy, entertainment, talent exchange and other massive fields。Correctly integrating blockchain technology and blockchain thinking with matters related to the national economy and people's livelihood can greatly enhance the level of intelligence and precision, enable interconnection between industries, and ensure the orderly and efficient flow of production factors in the region.。
+In addition, blockchain technology can be applied to copyright, property, the Internet of Things, smart cities, new energy, entertainment, talent exchange and many other fields. Properly integrating blockchain technology and blockchain thinking with matters of the national economy and people's livelihood can greatly enhance intelligence and precision, enable interconnection between industries, and ensure the orderly and efficient flow of production factors within a region.

[Click to view the blockchain application case compilation (with HD PDF full download)](https://mp.weixin.qq.com/s/cUjuWf1eGMbG3AFq60CBUA)

## Why countries attach so much importance to blockchain?
-Blockchain has great potential for development, itself contains a very dense technical content, the technology has enough research and control, in order to make its own development is not affected by external influences and constraints.。Blockchain technology will be used in a wide range of scenarios related to the national economy and people's livelihood, involving the financial and personal information of many people, and even important information in key areas such as the financial industry, government affairs and people's livelihood.。Therefore, the current state has put forward a clear request to us: "pay attention to the current situation and trend of blockchain technology development, improve the ability to use and manage blockchain technology," so as to better build a network power, develop the digital economy, and help economic and social development.。
+Blockchain has great development potential and a high density of technical content; only with sufficient research into and mastery of the technology can its development remain free of external influence and constraint。Blockchain technology will be used in a wide range of scenarios related to the national economy and people's livelihood, involving the financial and personal information of many people, and even important information in key areas such as finance, government affairs and people's livelihood。Therefore, the state has put forward a clear request: "pay attention to the current situation and trend of blockchain technology development, and improve the ability to use and manage blockchain technology," so as to better build a network power, develop the digital economy, and support economic and social development。

## What is the current situation of blockchain development in China?
-Many large financial companies, Internet companies, and technology companies are conducting blockchain research, and in the past few years, they have gradually solved or nearly solved a series of core issues in the blockchain field, such as performance, security, ease of use, and compliance.。
+Many large financial companies, Internet companies, and technology companies are conducting blockchain research, and over the past few years they have gradually solved, or nearly solved, a series of core issues in the blockchain field, such as performance, security, ease of use, and compliance。
-We focus on the development of the blockchain field is the "alliance chain," alliance chain and anonymous, virtual token "public chain" (such as Bitcoin, Ethereum, etc.) is very different, alliance chain does not issue currency does not mine, abandoned the disadvantages of barbaric operation, can be more standardized, legal operation, can effectively serve the real economy.。
+The focus of domestic blockchain development is the "alliance chain" (consortium chain), which differs greatly from the anonymous, virtual-token "public chains" (such as Bitcoin and Ethereum): an alliance chain issues no currency and does no mining, abandons the drawbacks of unregulated operation, can operate in a more standardized and lawful manner, and can effectively serve the real economy。
-The blockchain alliance, Golden Chain Alliance, jointly initiated by domestic institutions such as WeBank and Shenzhen Financial Technology Association, mastered a number of core technologies and released the underlying blockchain platform and series of solutions represented by FISCO BCOS in 2017.。This series of open source projects are safe and controllable, excellent performance, free and easy to use, committed to serving the real economy, in a large number of industries related to the national economy and people's livelihood has been widely landed, technology and model has been verified by a large number of practical cases.。
+The Golden Chain Alliance, a blockchain consortium jointly initiated by domestic institutions such as WeBank and the Shenzhen Financial Technology Association, has mastered a number of core technologies and in 2017 released the underlying blockchain platform and series of solutions represented by FISCO BCOS。This series of open source projects is safe and controllable, high-performance, free and easy to use, and committed to serving the real economy; it has been widely deployed in many industries related to the national economy and people's livelihood, and its technology and model have been verified by a large number of practical cases。
-In addition, it is worth 
mentioning that FISCO BCOS adopts the path of open source technology to accelerate the development of the industry, incubates and expands the largest and most active industry ecological community in China, attaches importance to personnel training in the promotion, trains a large number of blockchain professionals in universities, society and industrial institutions, and helps blockchain technology and industrial development accelerate breakthroughs in talent。

## How do I get into the blockchain space?

-If you are a**Non-engineering people interested in blockchain**Then you can pay more attention to the formal and authoritative public numbers and mainstream media in this field, learn more about blockchain-related news trends and industry trends, establish a correct blockchain concept, eliminate noise from virtual coins and capital plates, and gradually compare "blockchain thinking" such as "multi-party peer-to-peer collaboration" and "openness and transparency" to your current work life.。
+If you are a **non-engineering person interested in blockchain**, you can follow the formal, authoritative official accounts and mainstream media in this field, learn about blockchain-related news and industry trends, establish a correct concept of blockchain, filter out the noise of virtual coins and Ponzi schemes, and gradually apply "blockchain thinking" such as "multi-party peer-to-peer collaboration" and "openness and transparency" to your current work and life。
-If you are a**Students in Engineering, Informatics**It is recommended that you lay a good academic foundation in school, such as mathematics, algorithms and data structures, probability theory, game theory, cryptography, etc., master one or two major computer languages, consult experienced professors and teachers, actively participate in the school's blockchain community activities, or participate in the FISCO BCOS blockchain open source technology community and school joint courses to get started with blockchain applications in three days.。
+If you are a **student of engineering or informatics**, it is recommended that you lay a good academic foundation at school in areas such as mathematics, algorithms and data structures, probability theory, game theory and cryptography, master one or two mainstream programming languages, consult experienced professors and teachers, actively participate in your school's blockchain community activities, or join the joint courses run by the FISCO BCOS blockchain open source technology community and universities to get started with blockchain applications in three days。
-If you are**Business and liberal arts background**We can focus on the trend of distributed business, make good use of the experience and knowledge of humanities, management, economic theory and game theory, meet the challenges of multi-party collaboration, and use innovative thinking to open up new scenarios that can serve the real economy and people's lives.。
+If you come from a **business or liberal arts background**, you can focus on the trend of distributed business, make good use of knowledge and experience from the humanities, management, economics and game theory, meet the challenges of multi-party collaboration, and use innovative thinking to open up new scenarios that serve the real economy and people's lives。
-If you are**Professionals working in IT**You are welcome to join the FISCO BCOS blockchain open source technology community.。There are active WeChat groups discussing technology and industry issues, and public numbers regularly push technical analysis articles, event notifications, etc.。More importantly, you can get a full range of open source and free blockchain solutions, from the underlying blockchain platform to identity, IoT, graphical tools, cloud services, a wealth of documentation and professional online and offline courses to help you learn quickly, from entry to proficiency.。Blockchain technology can connect a number of technologies, including 
artificial intelligence, Internet of Things, big data, financial technology, etc.。
+If you are an **IT professional**, you are welcome to join the FISCO BCOS blockchain open source technology community。There are active WeChat groups discussing technology and industry issues, and official accounts regularly push technical analysis articles, event notifications and more。More importantly, you can get a full range of open-source, free blockchain solutions, from the underlying blockchain platform to identity, IoT, graphical tools and cloud services, plus a wealth of documentation and professional online and offline courses to help you learn quickly, from entry to proficiency。Blockchain technology also connects a number of other technologies, including artificial intelligence, the Internet of Things, big data and financial technology。

If you are already a **hard-core blockchain professional** and willing to participate in the open source community, you are welcome to follow the FISCO BCOS blockchain open source technology community, exchange ideas, study core technologies in depth together, and contribute to the open source community through code optimization, document editing and more, implementing more functions together to jointly create the best blockchain technology platform。

diff --git a/3.x/en/docs/articles/1_conception/what_should_not_trust.md b/3.x/en/docs/articles/1_conception/what_should_not_trust.md
index da4a28dac..17ce05bae 100644
--- a/3.x/en/docs/articles/1_conception/what_should_not_trust.md
+++ b/3.x/en/docs/articles/1_conception/what_should_not_trust.md
@@ -4,21 +4,21 @@ Author: Zhang Kaixiang | Chief Architect, FISCO BCOS

The previous post shared "[What Exactly Are You Trusting When You Trust a Blockchain?](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485345&idx=1&sn=eab5bbcf45ec46bd7f69cb48de1db4b3&chksm=9f2ef5bda8597cab2f0c938251cb876d3920915f8faef1f0c60857ed44f4c8865fc355f00709&token=1692289815&lang=zh_CN#rd)" (if you have not read it yet, click the title to go directly); this time we change the angle, stroll through the dark side of the moon, and talk about what not to trust in **blockchain systems and business design**。

-Let's start with the conclusion: you can't believe almost anything.!Creating Don'The concept of t Trust, Just Verify, is the right attitude to the blockchain world。- By what I said casually
+Let's start with the conclusion: you can hardly trust anything!Embracing the concept of "Don't Trust, Just Verify" is the right attitude toward the blockchain world。- As I casually put it

## Do not trust other nodes

-Blockchain nodes and other nodes will establish P2P communication, together form a network, transfer blocks, transactions, consensus signaling and other information.。Other nodes may be held by different institutions, different people, and the person holding the node may be bona fide or malicious。Even in good faith assumptions, the health of the node's operation and survival will be affected by the 
level of operation and maintenance and resources, such as being in an unstable network, will occasionally hang up, will send random messages, or hard disk full and other reasons leading to data storage failure, and other possible failures。When making malicious assumptions, it is necessary to assume that other nodes may deceive or harm themselves, such as passing the wrong protocol packet, or using weird instructions to find vulnerabilities to attack, or launching high-frequency spam requests, frequent connections and then disconnections, or massive connections taking up resources。
-Therefore, the node should regard itself as a hunter who survives alone in the dark jungle, and must have an attitude of "independence" and "self-sufficiency," and take the posture of "not believing in any other node" to protect itself.。Certificate technology is required to authenticate node identity during node admission;On connection control, reject connections with exceptions;Use frequency control to limit the number of connections, requests, etc.;Verify protocol packet format and instruction correctness。The information sent by oneself should not expose its own private information, nor should it be expected that other nodes will necessarily give an immediate and correct response, and must be designed for asynchronous processing and verification fault tolerance.。
+Therefore, a node should regard itself as a hunter surviving alone in a dark jungle, maintain an attitude of "independence" and "self-sufficiency," and protect itself with a posture of "trusting no other node"。Certificate technology is required to authenticate node identity at admission; on connection control, reject abnormal connections; use frequency control to limit the number of connections, requests and the like; verify protocol packet formats and instruction correctness。Messages a node sends should not expose its own private information, nor should it expect that other nodes will necessarily give an immediate and correct response; it must be designed for asynchronous processing and fault-tolerant verification。

## Node and client do not trust each other

-The client, which refers to the module that initiates a request to the blockchain outside the blockchain network, such as the java sdk and wallet client used by the business.。Clients and nodes communicate through network ports。If the client is in the hands of an uncontrolled person, it is possible to make a large number of requests to the node, or send a bunch of spam messages, making the node tired to deal with, or even cleverly construct vulnerability attack information, trying to overreach, steal information or make the node wrong。
+The client refers to a module outside the blockchain network that initiates requests to the blockchain, such as the Java SDK or a wallet client used by the business。Clients and nodes communicate through network ports。If a client falls into uncontrolled hands, it may flood the node with requests or spam messages until the node is exhausted, or even cleverly construct exploit payloads in an attempt to overstep its authority, steal information, or corrupt the node。

At the same time, from the client's point of view, the node may fail to respond, respond slowly, or return wrong data, including format errors, status errors, or claiming receipt while not actually processing; an actor with ulterior motives may even set up a "fake" node to communicate with the client and deceive it。When a node reacts in unexpected ways, the client may run incorrectly and become functionally impaired。

-In order to enhance the mutual trust between nodes and clients, digital certificates can be assigned to both parties. A two-way handshake must be carried out through the certificate. The client can initiate transaction requests to the node only after the private key is signed. 
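As a concrete aside, the "never trust a single node" posture can be sketched as a cross-check: under the usual 3f+1 BFT assumption, a reading reported identically by at least f+1 nodes must include at least one honest node. A minimal Python sketch (illustrative only; the response list stands in for real SDK queries, and `confirm_reading` is a hypothetical helper, not a FISCO BCOS API):

```python
# Illustrative sketch: accept a value read from the chain only when at least
# f+1 nodes report it, where the network has n = 3f + 1 nodes in total.
from collections import Counter

def confirm_reading(responses, n_total):
    """Return the value reported by at least f+1 nodes, else None."""
    f = (n_total - 1) // 3
    value, votes = Counter(responses).most_common(1)[0]
    return value if votes >= f + 1 else None

# 4 nodes (f = 1): three agree on block height 102, one lags behind.
print(confirm_reading([102, 102, 101, 102], n_total=4))  # -> 102
# No f+1 agreement: the reading cannot be confirmed.
print(confirm_reading([102, 101, 100, 99], n_total=4))   # -> None
```

The same pattern applies to any state read from the chain, not only block heights; writes are still confirmed by consensus on the chain itself.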
The node should control the client's permissions, reject high-risk interface calls, and do not easily open the node management interface and system configuration interface.。Both parties strictly check the data format and data validity of each communication.。The two sides should also carry out frequency control and asynchronous processing when interacting, and verify the results of each interaction, which cannot be preset for the other party to handle correctly, and must obtain transaction receipts and processing results for confirmation.。
+To enhance mutual trust between nodes and clients, digital certificates can be issued to both parties, with a mandatory two-way certificate handshake; the client can initiate transaction requests to the node only after signing with its private key. The node should control the client's permissions, reject high-risk interface calls, and not casually expose the node management or system configuration interfaces。Both parties should strictly check the data format and validity of every message。Both sides should also apply frequency control and asynchronous processing when interacting, and verify the result of each interaction instead of presuming the other party handled it correctly; transaction receipts and processing results must be obtained for confirmation。

When communicating with only one node is considered insufficient for security, the client can apply the "f+1 query" idea and communicate with as many nodes as practical。If the consensus security model of the current chain is "3f+1," then when the information read from f+1 nodes is consistent, the result can be confirmed。

@@ -26,65 +26,65 @@ When it is considered that communicating with only one node does not guarantee s

## Untrusted block height

-Block height is a very critical piece of information that represents the current state of the entire chain。Operations such as sending transactions to the blockchain, 
consensus between nodes, and verification of blocks and states all depend on block height.。
+Block height is a very critical piece of information that represents the current state of the entire chain。Operations such as sending transactions to the blockchain, consensus between nodes, and verification of blocks and states all depend on the block height。
-When a node is disconnected or the processing speed is slow, its block height may lag behind the entire chain, or when a node maliciously falsifies data, its height may exceed the entire chain.。In the event of a fork in the chain, if the block height on one fork is surpassed by another fork, the backward fork becomes meaningless。Even under normal circumstances, it is still possible for a node to be intermittently one to a few blocks behind the entire chain, and then it is possible to catch up with the latest height within a certain period of time.。
+When a node is disconnected or processes slowly, its block height may lag behind the rest of the chain; when a node maliciously falsifies data, its height may exceed the chain's。In the event of a fork, if the block height on one branch is surpassed by another, the lagging branch becomes meaningless。Even under normal circumstances, a node may intermittently fall one to a few blocks behind the chain and then catch up to the latest height within a certain period of time。
-For example, in the PBFT consensus model, when more than 2 / 3 of the total number of nodes are at the same height, the full chain has the opportunity to reach a consensus and continue to issue blocks.。The remaining 1 / 3 of the nodes may be at a different height from the nodes participating in the consensus, which means that the data read from this node is not the latest data on the network and can only represent a snapshot of the chain at that height.。
+For example, in the PBFT consensus model, when more than 2/3 of the total number of nodes are at the same height, the whole chain can reach consensus and continue producing blocks。The remaining 1/3 of nodes may be at a different height from those participating in consensus, which means the data read from such a node is not the latest on the network and only represents a snapshot of the chain at that height。
-The business logic can use the block height as a reference value, do some decision logic based on the height, and use f on the chain of deterministic consensus (e.g., PBFT).+1 Query and other methods to confirm the latest height of the chain, in the chain of possible forks, you need to refer to the "six block confirmation" logic, carefully select the trusted block height.。
+Business logic can use the block height as a reference value and base decisions on it; on chains with deterministic consensus (e.g., PBFT), methods such as the f+1 query can confirm the chain's latest height, while on chains that may fork, the "six block confirmation" logic should be followed to carefully select a trusted block height。

## Do not trust transaction data

-A transaction (Transaction) represents a transaction request initiated by one party to another, which may result in the transfer of assets, change the account status or system configuration, and the blockchain system confirms the transaction through consensus to make the transaction take effect.。Transactions must be accompanied by the sender's digital signature, all data fields in the transaction must be included in the signature, unsigned fields have the potential to be forged and will not be accepted.。When the transaction data is broadcast on the network, it can be read by others, and if the transaction data contains private data, the sender must desensitize or encrypt the data.。Transactions may be reissued for network reasons, or saved by others and deliberately sent again, resulting in "replay" of transactions, so 
the blockchain system must guard against heavy transactions and avoid "double flowers."。
+A transaction (Transaction) represents a request initiated by one party to another that may transfer assets or change account status or system configuration; the blockchain system confirms the transaction through consensus to make it take effect。Transactions must carry the sender's digital signature, and all data fields in the transaction must be covered by the signature; unsigned fields could be forged and will not be accepted。Once transaction data is broadcast on the network it can be read by others, so if it contains private data the sender must desensitize or encrypt it。Transactions may be resent for network reasons, or captured by others and deliberately sent again, resulting in transaction "replay," so the blockchain system must guard against duplicate transactions and prevent "double spending"。

## Do not trust state data

-The state (State) data of the blockchain is generated after the smart contract is run, and ideally, the contract engine of each node is consistent, the inputs are consistent, and the rules are consistent, so the output state should be consistent.。However, different nodes may have different software versions installed, or the sandbox mechanism of the contract engine is not tight enough to introduce uncertainties, or even be hacked, tampered with, or there are other inexplicable bugs, which may lead to inconsistent output from the contract, so consistency and transactionality cannot be guaranteed.。
+The state (State) data of the blockchain is generated by running smart contracts; ideally, each node's contract engine, inputs and rules are consistent, so the output state should be consistent。However, different nodes may run different software versions, the contract engine's sandbox may not be tight enough and introduce uncertainty, or the engine may be hacked, tampered with, or hit by other inexplicable bugs, leading to inconsistent contract output, so consistency and transactionality cannot be guaranteed。
-State verification is a very expensive thing, the typical verification method is to use MPT (Merkle Patricia Tree) tree, all states are crammed into the tree management.。The MPT tree can attribute all states to a Merkleroot Hash, a state tree Merkleroot generated after the transaction is confirmed between nodes in the consensus process to ensure consistent states.。
+State verification is very expensive; the typical method uses an MPT (Merkle Patricia Tree), cramming all states into the tree for management。The MPT reduces all states to a single Merkle root hash; the state-tree Merkle root generated after transactions are executed is confirmed between nodes during consensus to ensure consistent states。
-This tree has a complex structure, a large amount of data, and consumes a lot of computing and storage resources, which can easily become a performance bottleneck.。Therefore, the verification of the state needs to have a faster, simpler, and more secure solution, such as the combination of version verification, incremental hash verification and other algorithms, supplemented by data caching, can reduce the number of repeated calculations and optimize IO, can ensure consistency, correctness at the same time, effectively improve the efficiency of verification.。
+This tree has a complex structure and a large volume of data, and consumes considerable computing and storage resources, easily becoming a performance bottleneck。Therefore, state verification needs faster, simpler and still-secure solutions, such as combining version verification, incremental hash verification and other algorithms, supplemented by data caching, to reduce repeated calculations and optimize 
IO, ensuring consistency and correctness while effectively improving verification efficiency。

## Do not trust private key holder

-Using private keys to sign transactions and other key operations, and then using public keys to verify them, is the most basic verification logic on the blockchain.。This logic is secure as long as the private key is used correctly。
+Using private keys to sign transactions and other key operations, and then using public keys to verify them, is the most basic verification logic on the blockchain。This logic is secure as long as the private key is used correctly。
-But the private key is only a piece of data, only rely on the private key, the user is anonymous.。In the scenario faced by the alliance chain, you need to use a permissioned identity, first confirm the identity through real-world authentication methods such as KYC, due diligence, and authoritative authentication, and then bind the identity and the public key and publish it, or issue a public-private key in combination with the digital certificate of the PKI system, so that the identity corresponding to the private key is known, trusted, and controllable.。
+But a private key is just a piece of data; relying on the private key alone, the user is anonymous。In the scenarios faced by alliance chains, a permissioned identity is needed: first confirm the identity through real-world methods such as KYC, due diligence and authoritative certification, then bind the identity to the public key and publish it, or issue the public-private key pair together with a digital certificate from a PKI system, so that the identity behind each private key is known, trusted and controllable。
-Private keys may be stolen by others due to loss, leakage, or loss of assets due to forgetting。Therefore, in the preservation of the private key, we need to consider the use of comprehensive protection schemes, such as encrypted storage, TEE environment, password card, USBkey, soft and hard encryption machine and so on.。In the management of the private key, you need to consider how to safely reset and retrieve the key after it is lost.。
+A private key may be leaked and stolen, or forgotten and lost, leading to loss of assets。Therefore, for private key storage, comprehensive protection schemes should be considered, such as encrypted storage, TEE environments, cipher cards, USB keys, and software/hardware encryption machines。For private key management, consider how to safely reset and retrieve a key after it is lost。
-There are several ways to use the enhanced version of the private key, such as the use of multi-signature, threshold signature, etc., each transaction must be signed with multiple private keys, private keys can be kept in different places, high security, but the technical solution and use experience is complex.。
+There are enhanced ways to use private keys, such as multi-signature and threshold signatures: each transaction must be signed with multiple private keys, which can be kept in different places; security is high, but the technical solution and user experience are complex。
-Another is the separation of the transaction private key and the management private key.。The transaction private key is used to manage assets, the management private key is used to manage personal data, the transaction private key can be reset by the management private key, and the management private key itself is stored separately for reset or retrieval through algorithms such as thresholds and sharding.。
+Another approach separates the transaction private key from the management private key。The transaction key manages assets and the management key manages personal data; the transaction key can be reset by the management key, while the management key itself is stored in pieces for reset or retrieval through algorithms such as threshold schemes and sharding。

## Do not trust other chains

-In the cross-chain scenario, each chain has its own assets, consensus, the security model between the chains becomes very complex, such as a chain of bookkeepers colluding to fake, or the chain has a fork, block height rollback, then if the other modules outside the chain and the chain have not been rigorous enough interaction, will cause data inconsistency or asset loss.。If different chains still use different platform architectures, it will be more complex in engineering.。
+In cross-chain scenarios, each chain has its own assets and consensus, and the security model between chains becomes very complex: a chain's bookkeepers may collude to cheat, or a chain may fork or roll back its block height; if the interaction between a chain and outside modules is not rigorous enough, data inconsistency or asset loss will result。If the chains also use different platform architectures, the engineering becomes even more complex。
-Cross-chain, side-chain is still the industry in the research and gradual realization of the subject, the main purpose is to solve the communication between the chain and the chain, asset lock-in and asset exchange, to ensure the overall consistency of the whole process, transaction transactions, and anti-fraud.。To transfer an asset from chain A to chain B, it is necessary to ensure that the assets on chain A are locked or destroyed, and that a corresponding asset is added to chain B, and that there is a mechanism to ensure the safety of assets in both directions in the time window when there may be forks and rollbacks on both sides, respectively.。
+Cross-chain and side-chain technology is still being researched and gradually realized by the industry; its main purpose is to solve chain-to-chain communication, asset locking and asset exchange, ensuring the overall consistency of the whole process, 
transactional atomicity, and anti-fraud. To transfer an asset from chain A to chain B, the asset on chain A must be locked or destroyed while a corresponding asset is added on chain B, and a mechanism must guarantee the safety of assets in both directions during the time window in which either side may fork or roll back.
-In the existing cross-chain scheme, there are trunking, inter-chain HUB and other ways, the design of these systems itself to achieve a high degree of credibility and reliability of the standard, the security level should not be lower than or even higher than the docking chain, the same should also adopt a multi-center, group consensus system design, the overall complexity can be regarded as the N-th power of the chain.。
+Existing cross-chain schemes include relays, inter-chain hubs, and other approaches. These systems must themselves be designed to a high standard of credibility and reliability, with a security level no lower than, and ideally higher than, the chains they connect; they should likewise adopt a multi-center, group-consensus design. The overall complexity can be regarded as a chain's complexity raised to the Nth power.
## Do not trust the network layer
-Blockchain nodes need to communicate with other nodes, so they must expose their communication ports on the network.。The node must protect itself at the network layer, including setting up IP black and white lists on the gateway, setting port policies, DDOS traffic protection, and monitoring network traffic and network status.。Non-essential ports should not be open to the public network.
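The port policy described above can be made concrete with a small sketch. This is an illustrative toy in plain Python, not code from any blockchain node; the port numbers and subnet are invented for the example:

```python
# Illustrative sketch (not part of any node implementation): a node's
# network-layer policy in miniature. The P2P port is public, while the
# management/RPC port accepts connections only from the organization's
# intranet allowlist. Port numbers and subnets are made-up examples.
import ipaddress

P2P_PORT = 30300          # must be reachable by peer nodes
RPC_PORT = 8545           # management/monitoring: intranet only
INTRANET = ipaddress.ip_network("10.0.0.0/8")

def allow(src_ip: str, dst_port: int) -> bool:
    """Return True if a connection should be accepted."""
    if dst_port == P2P_PORT:
        return True                                   # open to other nodes
    if dst_port == RPC_PORT:
        return ipaddress.ip_address(src_ip) in INTRANET
    return False                                      # non-essential: closed

print(allow("203.0.113.7", 30300))  # peer from the public network -> True
print(allow("203.0.113.7", 8545))   # public access to RPC -> False
print(allow("10.1.2.3", 8545))      # intranet operator -> True
```

In a real deployment the same policy would live in the gateway or firewall rules (IP black/white lists, port policies) rather than application code.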
For example, RPC ports used for management and monitoring can only be open to the organization.。
+Blockchain nodes must communicate with other nodes, so they have to expose communication ports on the network. A node must protect itself at the network layer: set up IP blacklists and whitelists on the gateway, configure port policies, deploy DDoS traffic protection, and monitor network traffic and status. Non-essential ports should not be open to the public network; for example, RPC ports used for management and monitoring should be open only within the organization.
## Untrusted code
-"Code is law" is indeed a loud slogan, but before the programmer's hair falls out, the code he writes may have bugs, just to see if it's fast to write bugs or fix bugs.。
+"Code is law" is indeed a resounding slogan, but long before a programmer's hair falls out, the code he writes will have bugs; the only question is whether bugs are written faster than they are fixed.
-Whether it is the underlying code or smart contract code, there may be technical or logical pits, but all the data and instruction behavior generated by the code, it needs to be strictly verified by another piece of code, the code itself also needs to be static and dynamic scanning, including the use of formal proof and other technologies to conduct a comprehensive audit verification to detect possible logical errors, security vulnerabilities or whether there is information disclosure.。Some time ago, there was a code of a hotel system published on github, which actually included the username and password of mysql connection, and the database port was actually open to the public network, this kind of pit is simply unimaginable。
+Whether it is underlying platform code or smart contract code, technical or logical pitfalls are always possible. All data and instruction behavior a piece of code generates needs to be strictly verified by another piece of code, and the code itself needs static and dynamic scanning, including the use of formal proof
and other techniques, to conduct a comprehensive audit that detects possible logical errors, security vulnerabilities, and information disclosure. A while ago the code of a hotel system was published on GitHub; it actually included the username and password of its MySQL connection, and the database port was open to the public network. Pitfalls like this are simply unimaginable.
-Open out the open source code, of course, can be reviewed, feedback to improve security, may also be rummaging for loopholes, random modification, and even malicious mine。But in general, open source still has more advantages than disadvantages。In the open source community, developers will submit PR (Pull Request) to the project。PR audit is very critical and very heavy work, it is worth to arrange experts and allocate a lot of time to do audit。An old driver of an open source project revealed that the PR of the core module of the project took years to review, otherwise "adding a feature to introduce two bugs" would have been a real loss, not to mention if it had been planted with a loophole.。
+Once code is open-sourced it can of course be reviewed, and feedback improves its security; but people may also comb it for loopholes, modify it at random, or even plant malicious mines. On balance, open source still has more advantages than disadvantages. In an open-source community, developers submit PRs (Pull Requests) to the project. PR review is critical and very heavy work, well worth assigning experts and a great deal of time. A veteran of one open-source project revealed that PRs to the project's core module took years to review; otherwise "adding one feature introduces two bugs" would have been a real loss, to say nothing of a deliberately planted vulnerability.
## Do not trust the bookkeeper
-The process of consensus can be roughly abstracted as, select the bookkeeper, the bookkeeper publishes the block, and the other nodes check and
confirm.。Bookkeeping in the public chain can be done in a "mining" manner (e.g. Bitcoin), where the miner endorses his own integrity at the expense of a large amount of computing power, or obtains the right to book with a large amount of asset equity collateral (consensus such as Pos and DPos).。In algorithms such as PBFT / Raft, which are commonly used in affiliate chains, the list of bookkeepers can be generated randomly or in rotation, with the bookkeeper giving a proposal and other voters submitting it in multiple steps to collect votes.。According to the principle that the minority is subordinate to the majority, consensus can only be reached if more than 2 / 3 of the consensus nodes agree.。
+The consensus process can be roughly abstracted as: select a bookkeeper, the bookkeeper publishes a block, and the other nodes check and confirm it. In public chains, bookkeeping can be done by "mining" (e.g. Bitcoin), where the miner endorses his own integrity at the expense of a large amount of computing power, or the right to keep the books is obtained by pledging a large amount of asset equity as collateral (consensus such as PoS and DPoS). In algorithms such as PBFT and Raft, which are commonly used in consortium chains, the list of bookkeepers can be generated randomly or in rotation; the bookkeeper makes a proposal, and the other voters confirm it in multiple rounds of vote collection. Following the principle that the minority defers to the majority, consensus is reached only if more than 2/3 of the consensus nodes agree.
-From a system availability perspective, the bookkeeper has the potential to make errors, crash, or run slowly, affecting the entire chain out of the block.。Or the bookkeeper can only include transactions with high fees and discard some transactions, resulting in some transactions always failing to be concluded。Some bookkeepers can also rely on computing power or black-box operation, "pre-digging" or "block attack," destroying the game
relationship......
+From a system availability perspective, the bookkeeper may make errors, crash, or run slowly, affecting block production for the entire chain. Or the bookkeeper may include only transactions with high fees and discard others, so that some transactions never get confirmed. Some bookkeepers can even rely on computing power or black-box operation, "pre-mining" or block-withholding attacks, destroying the fairness of the game...
-If the bookkeeper fails or commits an evil act that exceeds the consensus safety threshold, it will directly harm the value base of the entire chain.。According to different bookkeeping models, bookkeepers need to design different fault tolerance, verification, anti-fraud algorithms, implement incentive and punishment mechanisms, regularly check the health of bookkeepers during operation, and for bookkeeping nodes that are unable to keep accounts or do evil, the whole network will not accept their bookkeeping results and punish them, or even kick them out of the network.。
+If bookkeepers fail, or misbehave beyond the consensus safety threshold, the value base of the entire chain is directly harmed. Different bookkeeping models call for different fault-tolerance, verification, and anti-fraud algorithms, plus incentive and punishment mechanisms; the health of bookkeepers should be checked regularly during operation, and bookkeeping nodes that cannot keep accounts or that act maliciously have their results rejected by the whole network and are punished, or even kicked out of the network.
------
@@ -92,7 +92,7 @@ There are many more to list, including contracts, certificates, synchronization,
Compared with the software design in a single environment, the design ideas in the blockchain field are indeed subversive, and developers have to jump out of the thinking mode of "doing functions, only fault-tolerant, not cheat-proof," and design with the
attitude of "doubting everything."。
-When facing the blockchain field, developers should not only think about how to implement a function, but also think about whether there will be errors in the whole process, whether data will be tampered with, vulnerabilities will be discovered, systems will be attacked, and other participants will be defrauded.。To empathize with the functions you achieve, how they will be used by others, how they will behave in different environments, and what consequences they may have.。Any received information, any process input, output, must be strictly verified to be accepted, developers can do this, is to open the door to the new world of blockchain, in order to live to at least the second episode of the series.。
+When facing the blockchain field, developers should think not only about how to implement a function, but also about whether errors can occur anywhere in the process: whether data will be tampered with, vulnerabilities discovered, systems attacked, or other participants defrauded. Put yourself in the user's place: how will the functions you build be used by others, how will they behave in different environments, and what consequences might they have? Any received message, any process input or output, must be strictly verified before it is accepted. Developers who can do this have opened the door to the new world of blockchain, and may live to see at least the second episode of the series.
Distributed algorithms, symmetric asymmetric encryption, HASH, certificates, security and privacy technologies are popular in the blockchain field, all in order to protect the information at the same time, to add layers of proof and verifiable factors to the information, which makes the whole system complex and cumbersome, but it is worth it, because it can be verified together to build "security" and "trust."。
diff --git a/3.x/en/docs/articles/1_conception/what_to_trust.md b/3.x/en/docs/articles/1_conception/what_to_trust.md
index
aaced9478..0ca618029 100644
--- a/3.x/en/docs/articles/1_conception/what_to_trust.md
+++ b/3.x/en/docs/articles/1_conception/what_to_trust.md
@@ -4,14 +4,14 @@ Author: Zhang Kaixiang | Chief Architect, FISCO BCOS
Currently, "Blockchain: The trust machine" has become a slogan, followed by a series of powerful-sounding terms such as "decentralization, group consensus, immutability, high consistency, security and privacy protection."。Exactly how much magic blockchain has to make people so trusting, or rather, we're saying "**Letters**What is the time to believe?。
-Information, which refers to natural attributes and behavioral information such as identity, assets, prices, geographic location, etc., is not inherently trustworthy because the information is scattered, incomplete, may be false, and some may even take advantage of the asymmetry of the information for profit.。
+Information here refers to natural attributes and behavioral facts such as identity, assets, prices, and geographic location. It is not inherently trustworthy, because information is scattered and incomplete, may be false, and some parties may even profit from its asymmetry.
-Organize the information into structured data, through data verification, to ensure that it can maintain integrity, network-wide consistency, traceability, and will not be maliciously tampered with.;Through redundant storage, ensure that it is open, shared, accessible, and ensure that the data is always valid.。Then, this information itself can be "trusted," thus becoming everyone's "public knowledge," become the whole network participants are recognized.
"**greatest common divisor**”。
+Organize the information into structured data and, through data verification, ensure that it maintains integrity, network-wide consistency, and traceability and cannot be maliciously tampered with; through redundant storage, ensure that it is open, shared, and accessible, and that the data remains valid forever. This information can then itself be "trusted," becoming everyone's "public knowledge," the "**greatest common divisor**" recognized by all participants in the network.
-Information is commercially "creditworthy" if it embodies value and if it is recognized, recognized, quantifiable, has a tradable equivalent attribute, or may increase in value over time, or even recognized by judicial endorsement.。
+Information is commercially "creditworthy" if it embodies value: if it is recognized and quantifiable, has tradable-equivalent attributes, may appreciate over time, or is even backed by judicial endorsement.
-Just because we know someone doesn't mean we trust them.。However, this person has always performed well, in the community words and deeds, gradually gained everyone's trust。Trust at this point is still not equal to credit, unless the person has considerable assets, or his or her personal history has the ability to make profits and repay, and the probability is that he or she will continue to hold assets and accept debt in the future, then this person has "**CREDIT**”。
+Just because we know someone does not mean we trust them. But if this person consistently behaves well in word and deed in the community, he gradually earns everyone's trust. Even then, trust is not yet credit: only if the person also holds considerable assets, or his personal history shows the ability to earn and repay, so that he will in all probability continue to hold assets and honor debts in the future, does this person have "**CREDIT**."
The blockchain system is based on algorithms rather than human
governance, and is expected to solidify information into everyone's**Trust**anchor point;It is expected to convert various real-world resources into redeemable digital assets through technical means, and launch a series of multi-party business collaboration activities, which is the so-called "information to trust to credit," and even because of the blockchain, the black technology, effective and incomprehensible mystery, the word "letter" seems to have been sublimated into "**Faith**”。
@@ -20,98 +20,98 @@ So what do we believe in when we say we believe in blockchain??
## letter cryptography algorithm
-Blockchain uses algorithms to achieve trust, and one of the most important algorithms is cryptography.。The most basic cryptographic applications in blockchains are HASH digests, symmetric and asymmetric encryption algorithms, and related signature verification algorithms.。
+Blockchain uses algorithms to achieve trust, and the most important of these algorithms come from cryptography. The most basic cryptographic applications in blockchain are HASH digests, symmetric and asymmetric encryption, and the related signature and verification algorithms.
-**HASH algorithm**The old version of has been proven to be hackable and discarded, and the algorithms currently in use, such as SHA256, are still unbreakable.。The characteristic of HASH algorithm is to generate a fixed length of data from a pile of data in one direction, which basically does not collide, and can play the role of "fingerprint" of the original data.。
+**HASH algorithms**: older versions have been proven breakable and discarded, while algorithms currently in use, such as SHA256, remain unbroken. A HASH algorithm maps a pile of data, one way, to a fixed-length digest that essentially never collides, so it can serve as a "fingerprint" of the original data.
-**Digital Signature**Generally based on the public-private key system, signed with the private key, public
key verification or vice versa.。Digital signatures are derived from the reliability of cryptography, making it impossible for someone to forge someone else's private key signature, so a person with a private key can sign his assets through a digital signature to confirm the right, or in the transaction between the two parties, the use of the counterparty's public key to initiate the transaction, the transfer of assets to the other party, the other party with their own private key to verify the signature to unlock, in order to obtain ownership.。
+**Digital signatures**: generally based on the public/private key system, signing with the private key and verifying with the public key, or vice versa. A digital signature inherits the reliability of its cryptography, so no one can forge a signature made with another person's private key. The holder of a private key can therefore sign his assets to confirm ownership; or, in a transaction between two parties, one party initiates the transfer toward the counterparty's public key, and the counterparty verifies and unlocks it with his own private key to obtain ownership.
-**AES, RSA, ECC Elliptic curves**Several symmetric and asymmetric algorithms are widely used in data encryption and decryption, secure communication and other scenarios, the security level depends on the algorithm itself and the key length, when AES uses 128 ~ 512 bit keys, RSA / ECC uses 1024 or even 2048 bit keys, the data it protects theoretically requires hundreds of millions of years of computing time for ordinary computers to brute force。These algorithms have been tested in business, science, and military.。
+**AES, RSA, and ECC elliptic curves**: these symmetric and asymmetric algorithms are widely used in data encryption and decryption, secure communication, and similar scenarios. The security level depends on the algorithm itself and the key length: when AES uses 128 ~ 256 bit
keys and RSA uses 1024- or even 2048-bit keys (ECC reaches comparable strength with much shorter keys), the data they protect would in theory take an ordinary computer hundreds of millions of years to brute-force. These algorithms have been tested in business, science, and the military.
-There are also new directions in the field of cryptography, such as homomorphic encryption, zero-knowledge proof, ring signature group signature, lattice cryptography, etc., which are currently in the stage of development from theory to engineering, and are in the process of rapid optimization in terms of function, security strength and efficiency, and the possibility of landing can already be seen.。At the same time, we also realize that cryptography usually needs to go through a long period of development, verification, stability before it can be widely recognized, either through a lot of tests in practice, or through the audit and certification of authoritative institutions, in order to shine in the field of production.。From theory to engineering in cryptography, there are often long periods of time.。
+There are also new directions in cryptography, such as homomorphic encryption, zero-knowledge proofs, ring and group signatures, and lattice cryptography. These are currently moving from theory to engineering, being rapidly optimized in function, security strength, and efficiency, and production use is already in sight. At the same time, cryptography usually needs a long period of development, verification, and stabilization before it is widely accepted, whether through extensive testing in practice or through audit and certification by authoritative institutions, before it can shine in production. In cryptography, the road from theory to engineering is often long.
-A basic philosophy of encryption algorithms is**Calculate cost**, is safe when the value of the asset protected by an
algorithm is much lower than the cost of breaching the algorithm。But if you use an algorithm to protect a priceless treasure, someone will naturally attack and profit at no cost, so the security of cryptography is also dialectical and needs to be quantified.。
+A basic philosophy of encryption algorithms is **computational cost**: an algorithm is safe when the value of the assets it protects is much lower than the cost of breaking it. But if you use an algorithm to protect a priceless treasure, attackers will find the break worth any cost, so the security of cryptography is dialectical and needs to be quantified.
-With the rise of quantum computers and other theories, classical cryptography may undergo some challenges, but the theoretical improvement of quantum computers and engineering implementation will take time, at present, basically we can almost "unconditional" believe that the blockchain has been used in the cryptography algorithm, at the same time, the blockchain field practitioners are also introducing a variety of quantum-resistant cryptography algorithm, which is a continuous game.。
+With the rise of quantum computing, classical cryptography may face challenges, but both the theory and the engineering of quantum computers will take time to mature. For now we can almost "unconditionally" trust the cryptographic algorithms blockchain already uses; meanwhile, practitioners in the blockchain field are introducing quantum-resistant algorithms. It is a continuous game.
## letter data
-The data structure of the blockchain is nothing more than a block+Chain。The new block will be its own block height, transaction list, and the previous block's HASH, together to generate a HASH as the new block's identity, so the cycle, forming a interlocking data chain.。Any byte or even a Bit in this chain is modified and will be checked and found because of the
characteristics of the HASH algorithm.。
+The data structure of a blockchain is nothing more than blocks plus a chain. A new block takes its own block height, its transaction list, and the previous block's HASH, and from them generates a HASH that serves as the new block's identity; the cycle repeats, forming an interlocking data chain. If any byte, or even a single bit, in this chain is modified, the properties of the HASH algorithm ensure the change will be detected.
-At the same time, the block data is broadcast to all participants across the network, and the more participants there are, the stronger the scale effect.。Even if a few people forcibly modify or delete their own block data, it is easy for others to check out the anomalies and reject them, and only the data approved by the majority are retained and circulated.。In other words, the data is the form of people staring at people, and there are multiple copies, once landed, as long as the chain is still there, the data can be retained forever.。
+At the same time, block data is broadcast to all participants across the network, and the more participants there are, the stronger the scale effect. Even if a few people forcibly modify or delete their own copies of the block data, others easily detect the anomaly and reject it, and only data approved by the majority is retained and circulated. In other words, everyone watches everyone else's data, and multiple copies exist; once data lands, it can be retained forever, as long as the chain survives.
-Based on easy-to-verify chain data structure, group redundancy preservation, common authentication, blockchain data is "difficult to tamper with," all people get the same data, information is open and transparent, public knowledge can be highlighted and solidified.。
+Based on an easily verified chained data structure, redundant group storage, and common authentication, blockchain data is "difficult to tamper with": everyone gets the same data, information is open and
transparent, and public knowledge can be highlighted and solidified.
-From another point of view, the data to achieve trust, but whether to achieve "credit" depends on the value of the data, that is, the information carried by the data itself, whether it can represent valuable assets, useful information, such as identity, transaction relationships, transaction behavior, big data, etc., can represent a certain commercial value.。This data, if shared, is enough to build a complete business foundation.。
+From another angle, the data achieves trust, but whether it achieves "credit" depends on the value of the data, that is, on whether the information the data carries represents valuable assets or useful facts. Identity, transaction relationships, transaction behavior, big data, and the like can all carry commercial value; such data, if shared, is enough to build a complete business foundation.
-But if it's in a scenario where privacy is overemphasized, there's very little information that people are willing to share, and it's hard to reach the "maximum common divisor" of credit.。However, in the current business environment, information isolation and privacy protection are hard demands, and information sharing and privacy protection have become severe spears and shields unless the entire business relationship and business logic are revolutionized.。
+But in a scenario where privacy is overemphasized, people are willing to share very little information, and the "greatest common divisor" of credit is hard to reach. In the current business environment, however, information isolation and privacy protection are hard requirements, so information sharing and privacy protection have become a formidable spear and shield, unless the entire business relationship and business logic are revolutionized.
-Therefore, research related to privacy protection has received a lot of attention, such as "multi-party secure computing," "zero-knowledge
proof" theory is popular.。Theoretically, it is possible to publish very little information and be verifiable, but the complexity and computational overhead is something to be addressed at the engineering level.。
+Therefore, research related to privacy protection has received much attention; theories such as "secure multi-party computation" and "zero-knowledge proof" are popular. In theory it is possible to publish very little information and still be verifiable, but the complexity and computational overhead remain to be addressed at the engineering level.
## letter game theory
-The most mysterious part of the blockchain is the "consensus algorithm."。A consensus algorithm is defined as a mechanism within a group to coordinate bookkeeping, either together or in turn, to arrive at an uncontroversial, unique result and to ensure that the mechanism is sustainable.。
+The most mysterious part of blockchain is the "consensus algorithm." A consensus algorithm is a mechanism by which a group coordinates bookkeeping, together or in turn, to arrive at an uncontroversial, unique result, while ensuring that the mechanism is sustainable.
-In other words, we all maintain a ledger together and choose who is the bookkeeper.?What makes you believe that the bookkeeper's actions are correct??How to prevent the bookkeeper from doing evil.?How to get an incentive if the bookkeeper keeps the book correctly.?The consensus mechanism fully answers these questions。
+In other words, we all maintain a ledger together: who is chosen as the bookkeeper? What makes you believe the bookkeeper's actions are correct? How is the bookkeeper prevented from doing evil? How does the bookkeeper get rewarded for keeping the books correctly? The consensus mechanism answers all of these questions.
The logic of consensus is happening online, but in reality, behind it is a real-world competitive game。
-POW (Proof of Work) uses computing power to compete for the bookkeeper's seat and get the
bookkeeper's reward.。In real life, in order to build a competitive computing power plant, miners usually need to develop or purchase a large number of new models of mining machines, transport them to areas with stable and cheap electricity supply, consume a lot of electricity, network fees and other operating expenses, and move their families when they are regulated, travel around the world, and actually invest a lot of (real-world) money, effort and carry huge risks.。If you want to get a stable and substantial income in the POW competition, the capital invested is easily hundreds of millions of dollars, no less than running a business.。
+PoW (Proof of Work) uses computing power to compete for the bookkeeper's seat and the bookkeeper's reward. In real life, building a competitive computing power plant means developing or purchasing large numbers of the newest mining machines, shipping them to regions with stable and cheap electricity, paying heavily for electricity, network fees, and other operating expenses, and relocating, even roaming the world, whenever regulation tightens. Miners actually invest a great deal of real-world money and effort and carry huge risks; earning a stable, substantial income from the PoW competition easily takes capital in the hundreds of millions of dollars, no less than running a business.
-POS and DPOS replace computing power consumption with proof of equity, which looks much more environmentally friendly。The token representing the rights and interests, in addition to the founding team's own issuance, the "miners" generally need to be obtained through currency exchange, or direct fiat currency purchase of digital currency, even if the currency exchange, out of the currency is often purchased in fiat currency, or at least these rights and interests can be priced in fiat currency, which is actually the real world of wealth injection and endorsement.。
+PoS and DPoS replace computing power
consumption with proof of stake, which looks much more environmentally friendly. Apart from the founding team's own issuance, the tokens representing those stakes generally have to be obtained by "miners" through coin-for-coin exchange or by buying digital currency directly with fiat; even in a coin-for-coin exchange, the coins given up were usually bought with fiat, or at least the stakes can be priced in fiat. In effect, this is the real world injecting and endorsing wealth.
-However, in contrast to real business relationships, consensus such as POW and POS does not have a legal and regulatory mechanism to cover the bottom, and is vulnerable to changing gaming situations, such as the size of the community, changes in miners, and changes in core technology operations teams.。Slowly, people who were originally rich and capable may become richer and more powerful, decentralized networks may gradually become cartels, and the ties between miners and the technical community will continue to make waves, causing bifurcations, rollbacks, price rips, and cutting of leeks.。
+However, unlike real business relationships, consensus such as PoW and PoS has no legal or regulatory backstop and is vulnerable to shifts in the game, such as the size of the community, turnover among miners, and changes in the core technology and operations teams. Slowly, those who started out rich and capable may become richer and more powerful, decentralized networks may gradually become cartels, and entanglements between miners and the technical community will keep making waves, causing forks, rollbacks, price collapses, and the "cutting of leeks" (fleecing of small investors).
-In general, people still trust "autonomy" on the blockchain, in which a single event (such as a transaction) is "probabilistic," while the whole network pursues "ultimate consistency" (consistency of the public ledger).。This short-term probabilistic and long-term
certainty can, to some extent, achieve a dynamic "**Nash equilibrium**"The ecology that supports the chain evolves a mysterious sense of" faith. "。
+In general, people still trust "autonomy" on the blockchain, in which a single event (such as a transaction) is "probabilistic," while the whole network pursues "ultimate consistency" (consistency of the public ledger)。This short-term probability and long-term certainty can, to some extent, achieve a dynamic "**Nash equilibrium**", and the ecology supporting the chain evolves a mysterious sense of "faith"。
-On the other hand, the bookkeeper of the alliance chain is generally an institutional role.。The alliance chain requires the identity of the bookkeeper to know that the participants are licensed to access the network and that they are a**cooperative game**。Alliance chains introduce real-world identity information as**Credit Endorsement**, such as industrial and commercial registration information, business reputation, acceptance credit, working capital, or industry status, practice license, legal identity, etc., all the actions of participants in the chain can be audited, traced, but also so that the relevant regulatory authorities can be targeted when necessary, precise punishment, enforcement, with a high deterrent.。
+On the other hand, the bookkeeper of an alliance chain is generally an institutional role。The alliance chain requires the bookkeeper's identity to be known: participants are licensed to access the network, and they play a **cooperative game**。Alliance chains introduce real-world identity information as **credit endorsement**, such as business registration information, commercial reputation, acceptance credit, working capital, industry status, practice licenses, legal identity and so on; all actions of participants on the chain can be audited and traced, and when necessary the relevant regulatory authorities can impose targeted, precise punishment and enforcement, which carries a strong deterrent。
-In this environment, the participants of the alliance chain work together to maintain the network, share the necessary information, conduct transactions in an equal, transparent, secure and trusted network, and only need to prevent the risk of malicious operations by a small number of bookkeepers and avoid the availability risk on the system.。By introducing the necessary trust endorsements in the real world, even though the alliance chain business logic is very complex, the trust model is more intuitive.。
+In this environment, the participants of the alliance chain work together to maintain the network, share the necessary information, and conduct transactions in an equal, transparent, secure and trusted network; they only need to guard against the risk of malicious operations by a small number of bookkeepers and against availability risks in the system。By introducing the necessary real-world trust endorsements, even though alliance chain business logic can be very complex, the trust model is more intuitive。
-So, behind the so-called consensus mechanism is still the real-world competition for financial and material resources and credit endorsement, as well as the corresponding effective incentive and disciplinary mechanisms.。
+So, behind the so-called consensus mechanism there is still real-world competition for financial and material resources and for credit endorsement, together with the corresponding effective incentive and disciplinary mechanisms。
-There is no such thing as a free lunch, and no such thing as plain love or hate.。To "trust" a bookkeeper is to believe in the costs he has invested in the real world, the price he has paid, and the penalties that deter him, given that the whole mechanism has, and to believe that the bookkeeper will not destroy the network for no reason in order to continue to gain and add value.。
+There is no such thing as a free lunch, and no such thing as love or hate without reason。To "trust" a bookkeeper is to believe in the costs he
has invested in the real world, the price he has paid, and the penalties that deter him under the mechanism as a whole, and to believe that the bookkeeper, in order to keep gaining and adding value, will not destroy the network for no reason。

## Trusting Smart Contracts

-Smart contracts were proposed by the prolific cross-cutting legal scholar Nick Szabo.。In several articles published on his website, he mentions the idea of smart contracts, defined as follows:
+Smart contracts were proposed by Nick Szabo, a prolific cross-disciplinary legal scholar。In several articles published on his website, he described the idea of smart contracts, defined as follows:

"A smart contract is a set of digitally defined commitments, including agreements on which contract participants can enforce those commitments."。

-Simply put, it can be understood as an electronic version of a paper contract, implemented in code, running indiscriminately at every node of the blockchain network, executing the established contract rules with consensus.。
+Simply put, it can be understood as an electronic version of a paper contract: implemented in code, run identically at every node of the blockchain network, and executing the agreed contract rules under consensus。
-Smart contracts are typically based on a specially crafted virtual machine that runs in sandbox mode, shielding out features that could lead to inconsistencies.。For example, the operation of obtaining system time may have different clocks on different machines, which may lead to problems with time-dependent business logic.。Another example is random numbers, as well as external file systems, external website input, etc., which can lead to different virtual machine execution results and are isolated by the virtual machine sandbox environment.。
+Smart contracts are typically based on a specially crafted virtual machine that runs in sandbox mode, shielding out features that could lead to inconsistencies。For example, the operation of
obtaining system time may return different clocks on different machines, which can break time-dependent business logic。Other examples are random numbers, external file systems, external website input and the like, which could make virtual machines produce different execution results and are therefore isolated by the sandbox environment。
-If you want to write a contract in the Java language, either cut out the relevant functions in the JDK (system time, random numbers, network, files, etc.), or put it in a docker with strict permission control and isolation settings.。Or simply design a new language, such as Ethereum's Solidity, that implements only specific instructions。Or give up some "smart" features and use a simple stack instruction sequence to complete the key verification judgment logic.。
+If you want to write contracts in Java, you either strip the relevant functions out of the JDK (system time, random numbers, network, files, etc.), or run the contract in a docker container with strict permission control and isolation settings。Or you design a new language, such as Ethereum's Solidity, that implements only specific instructions。Or you give up some "smart" features and use a simple sequence of stack instructions to complete the key verification logic。

Therefore, smart contracts implemented on the blockchain, governed by the sandbox mechanism and combined with the blockchain's consensus algorithm, achieve network-wide consistency, tamper resistance, non-repudiation and other properties; the output of their execution is a contract recognized by the whole network, known as "Code is Law"。

-However, as long as it is code, there must be a probability of bugs or vulnerabilities, which may come from the underlying virtual machine and network vulnerabilities, and more from the logic implementation.。Just search for "smart contract security vulnerabilities," there are a bunch of search results, including overflow, re-entry,
permission errors, and even low-level errors.。In recent years, these vulnerabilities have caused a variety of asset losses, most notably the DAO project code vulnerability, Parity's multi-sign wallet vulnerability, an Internet company's token trading process overflow to zero......
+However, wherever there is code there is some probability of bugs or vulnerabilities, which may come from the underlying virtual machine and the network, and more often from the logic implementation。Just search for "smart contract security vulnerabilities" and you will get plenty of results, covering overflows, re-entrancy, permission errors, and even low-level mistakes。In recent years these vulnerabilities have caused all kinds of asset losses, most notably The DAO code vulnerability, Parity's multi-signature wallet vulnerability, and an Internet company's token overflowing to zero during trading......

Technical articles can refer to: [https://paper.seebug.org/601/](https://paper.seebug.org/601/)

-At present, the security of smart contracts in the industry is also unique, including security companies and white hat review, formal proof, public testing, etc., the security issues will be improved to a certain extent.。If there is another problem, either the hacker is too powerful, or the programmer can only be caught offering sacrifices to heaven:)
+At present, the industry has its own ways of securing smart contracts, including reviews by security companies and white hats, formal verification, public testing and so on, which improve security to a certain extent。If problems still occur, either the hacker was too powerful, or the programmer can only be dragged out as a sacrifice to heaven :)
-Therefore, the letter smart contract, is conditional, is to believe that after strict testing, long-term stable operation, in case of error there are ways to remedy (rather than desperate can only wait for the fork big move) contract.。The smart contracts in the alliance chain are generally rigorously tested, and the grayscale verification process will be implemented when they are launched, the operation process will be monitored during operation, and measures such as accountability and remediation (correction, reconciliation, freezing...) will be designed according to the governance rules, which is more credible.。
+Therefore, trusting a smart contract is conditional: it means trusting a contract that has been strictly tested, has run stably for a long time, and for which remedies exist in case of error (rather than despairing and waiting for the drastic move of a fork)。Smart contracts on an alliance chain are generally rigorously tested, go through a grayscale verification process when launched, are monitored during operation, and have measures such as accountability and remediation (correction, reconciliation, freezing...) designed into the governance rules, which makes them more credible。

## Trusting the Middleman (?)

-Note that there is a question mark in the title of this section, blockchain advocates the "to the center or multi-center, to the intermediary or weak intermediary" mode of operation, but due to the current development is not perfect, many scenarios actually introduce intermediaries, such as currency exchange usually need to go through the exchange, especially the centralized exchange.。The trading principle is to require users to deposit assets into the exchange's account, the transaction is actually in the exchange's database for bookkeeping, only when depositing or withdrawing money, will interact with the blockchain network.。
+Note the question mark in this section's title: blockchain advocates a "decentralized or multi-centered, disintermediated or weakly intermediated" mode of operation, but because the field is still immature, many scenarios actually introduce intermediaries; for example, currency exchange usually has to go through an exchange, especially a centralized
exchange。Its trading principle is to require users to deposit assets into the exchange's account; trades are actually booked in the exchange's own database, and only deposits and withdrawals interact with the blockchain network。
-The trust model of the exchange is somewhat decoupled from the blockchain, when the qualifications of the exchange itself, the technical capabilities, security capabilities, asset credit and acceptance capabilities of the operator, are what users need to be most concerned about。Once there is a problem with the exchange, such as running away, bankruptcy, dark operations, self-theft, basically retail investors can only do leeks.。
+The trust model of the exchange is thus somewhat decoupled from the blockchain; at that point, the qualifications of the exchange itself and the operator's technical capabilities, security capabilities, asset credit and acceptance capacity are what users most need to care about。Once something goes wrong with the exchange, such as absconding, bankruptcy, shady operations or inside theft, retail investors can basically only end up as leeks。

Don't say much, see the famous "Mentougou incident":[https://baike.baidu.com/item/Mt.Gox/3611884](https://baike.baidu.com/item/Mt.Gox/3611884)

-So, believing in a custodian is a matter of opinion, except that in the current model, roles like exchanges still operate in certain areas.。In 2018, there were more than 10,000 virtual digital asset exchanges worldwide, and how many of them can achieve high-standard security, standardized operations, and cleanliness... that depends on the situation。
+So, believing in a custodian is a matter of opinion; it is just that under the current model, roles like exchanges still operate in certain areas。In 2018 there were more than 10,000 virtual digital asset exchanges worldwide, and how many of them achieve high-standard security, standardized operations and clean conduct...
that depends on the situation。
-One last point: the alliance chain does not have a virtual digital asset exchange like the public chain by default.。
+One last point: unlike the public chain, the alliance chain does not have a virtual digital asset exchange by default。

------

@@ -119,12 +119,12 @@ There are many details in the blockchain field, the above list of the main point

To summarize, in the blockchain world, one can build the following basic confidence:

-- I hold assets and information that only I can use or disclose
-- I can participate in the transaction according to fair rules, share information, transfer the transferred assets
-- Assets that have been transferred to me must be valid and will not be invalidated by repeated spending.
-- Once the deal is done, it's a sure thing.
-- Everything that has happened is verifiable, traceable
-- Those who break the rules lose more
-- The people who maintain the network will be properly rewarded for their labor, and the whole model is sustainable.
+- I hold assets and information that only I can access or disclose
+- I can participate in transactions according to fair rules, share information, and transfer assets in and out
+- Assets transferred to me by others must be valid and will not be invalidated by repeated spending
+- Once a deal is done, it is certain
+- Everything that has happened is verifiable and traceable
+- People who break the rules lose more
+- The people who maintain the network will be properly rewarded for their work, and the whole model is sustainable

-Based on these confidence and trust, under the premise of legal compliance, it would be an ideal state for people to inject various assets into the network and carry out complementary and mutually beneficial, transparent rules, open, fair and just business practices.。
+Based on this confidence and trust, and on the premise of legal compliance, it would be an ideal state for people to inject various assets into the network and conduct business that is complementary and mutually beneficial, with transparent rules, in an open, fair and just way。

diff --git a/3.x/en/docs/articles/1_conception/why_blockchain_slow.md b/3.x/en/docs/articles/1_conception/why_blockchain_slow.md
index 34a3aea9c..95e5276ad 100644
--- a/3.x/en/docs/articles/1_conception/why_blockchain_slow.md
+++ b/3.x/en/docs/articles/1_conception/why_blockchain_slow.md
@@ -6,86 +6,86 @@ Author: Zhang Kaixiang | Chief Architect, FISCO BCOS

Counting money, such as hundreds of millions (isn't it exciting ~)

-1, if a number of people, slow, but good in focus, go all out, in the visible time can be counted。This is called single-threaded intensive computing.。
+1. If one person counts alone, it is slow, but focus is its strength: going all out, the counting can be finished in foreseeable time。This is called single-threaded intensive computing。

2.
If N people count together, each taking an equal share and counting at the same time, with the totals summed up at the end, the time taken is basically 1/N of the first case; the more people involved, the less time required and the higher the TPS。This is called parallel computing, or MapReduce。

-3, if N people count together, but because these N people do not trust each other, have to stare at each other, first draw lots to choose a person, this person picked up a stack of money (such as 10,000 yuan a stack) to count it again, seal, sign and stamp, and then give several other people together at the same time to count again, count the good people are signed and stamped, this stack of money is considered good.。Then draw lots for individuals to check out the next stack of numbers, and so on.。Because when a person counts money, others just stare at it, and when a person counts out and seals and signs a pile of money, others have to repeat the count and sign to confirm, then it is conceivable that this method must be the slowest.。**This is called blockchain.。**
+3. If N people count together but do not trust each other and have to watch one another: lots are drawn to pick a person, who takes a stack of money (say 10,000 yuan per stack), counts it, seals it, signs and stamps it, then hands it to several others to count again at the same time; once those counters have also signed and stamped, the stack is considered settled。Then lots are drawn again for the next stack, and so on。Since while one person counts the others just watch, and after a stack is counted, sealed and signed the others must repeat the count and sign to confirm, this is conceivably the slowest method。**This is called blockchain。**
-But to put it another way, way 1, a number of people may be counted wrong, the person may be sick or on vacation, resulting in no
one working, and the worse result is that the person may exchange counterfeit money or hide some of the money and report a wrong total.。
+But look at it another way: in way 1, the lone counter may miscount, may fall ill or go on vacation so that no one is working, and, worse, may swap in counterfeit money or hide some of the money and report a wrong total。
-Way 2, N individuals will have a certain percentage of the wrong number, or one of them may be on vacation or sabotage, resulting in the final result can not come out, more likely because of the number of people, some people steal money, change fake money, report fake numbers......
+In way 2, some percentage of the N people will miscount, or one of them may be on vacation or sabotage things so that the final result never comes out; with more people involved it is even more likely that some steal money, swap in fakes, or report false numbers......
-Way 3, very slow, but very safe, because everyone will stare at the whole process of checking, so certainly not wrong。If one of them drops the line, you can pick up a new wad of money and continue counting without interruption.。All the counted money has seals and signatures on it, so it won't be tampered with, and if something goes wrong, you can find the person responsible for it.。In this case, the security of funds is fully guaranteed unless all participants are complicit.。Under this model, the more people involved, the higher the security of funds.。
+Way 3 is very slow but very safe: everyone watches the whole checking process, so nothing goes wrong。If one person drops out, someone else can pick up a new stack of money and continue counting without interruption。All the counted money carries seals and signatures, so it cannot be tampered with, and if something goes wrong the person responsible can be found。In this case the security of funds is fully guaranteed unless all participants are
complicit。Under this model, the more people involved, the higher the security of funds。
-**Therefore, the blockchain solution is committed to the pursuit of, in the lack of mutual trust in the distributed network environment, to achieve transaction security, fairness, to achieve a high degree of data consistency, tamper-proof, anti-evil, traceability, one of the costs is performance.。**
+**Therefore, blockchain solutions pursue, in a distributed network environment lacking mutual trust, transaction security and fairness, a high degree of data consistency, tamper resistance, resistance to malicious behavior, and traceability; one of the costs is performance。**
-The most famous Bitcoin network, on average, can only process 5 to 7 transactions per second, 10 minutes out of 1 block, to reach the final certainty of the transaction takes 6 blocks, that is, 1 hour, and the block process is quite a loss of computing power (POW mining)。Ethereum, known as the "global computer," can process only two-digit transactions per second, with one block in ten seconds.。Ethereum is also currently using the consensus mechanism of loss of computing power POW mining, will gradually migrate to the POS consensus mechanism.。The two networks can get stuck in a jam when fans explode for deals, with a day or two or more after a large number of deals are sent out before they are packed for confirmation。
+The most famous network, Bitcoin, can on average process only 5 to 7 transactions per second, produces 1 block every 10 minutes, and reaching transaction finality takes 6 blocks, i.e. 1 hour, while block production burns considerable computing power (POW mining)。Ethereum, known as the "world computer," processes only double-digit transactions per second, with one block roughly every ten seconds。Ethereum also still uses the computing-power-burning POW mining consensus and will gradually migrate to a POS consensus mechanism。The two networks can get stuck in
a jam when a transaction craze erupts: after a flood of transactions is sent out, it can take a day or two or longer before they are packed and confirmed。
-**But in the scenario where the security of funds is life, some things are "necessary," so even if it is slow, you will still consider choosing blockchain.。**
+**But in scenarios where the security of funds is a matter of life and death, some things are "necessary," so even if it is slow, you will still consider choosing blockchain。**

### Why is the blockchain slow?

-There is a well-known theory in distributed systems called CAP theory: in 2000, Professor Eric Brewer proposed a conjecture: consistency, availability, and partition fault tolerance cannot be satisfied simultaneously in a distributed system, and can only satisfy at most two of them.。
+There is a well-known theory in distributed systems called the CAP theorem: in 2000, Professor Eric Brewer proposed the conjecture that consistency, availability, and partition tolerance cannot all be satisfied simultaneously in a distributed system; at most two of them can be satisfied at once。

**General explanation of CAP**

-Consistency(Consistency) : The data is updated consistently, and all data changes are synchronized.
+Consistency: data is updated consistently, and all data changes are synchronized

Availability: good response performance

Partition tolerance: tolerance of network partitions (reliability)

-Although this theory is controversial, in engineering practice, like the speed of light theory, it can approach the extreme infinitely but is difficult to break through.。Blockchain systems can achieve the ultimate in consistency and reliability, but the "good response performance" aspect has been a bit criticized.。
+Although this theory is controversial, in engineering practice it is like the speed of light: one can approach the limit ever more closely but can hardly break through it。Blockchain systems can push consistency and reliability to the extreme, but the "good response performance" aspect has long been criticized。
-We are oriented to the field of "alliance chain," because in the access standards, system architecture, the number of participating nodes, consensus mechanism and other aspects are different from the public chain, its performance is much higher than the public chain, but at present several mainstream blockchain platforms, measured on conventional PC-class server hardware, TPS is generally in the thousand-level, transaction delay is generally in the level of 1 second to 10 seconds.。(I heard that TPS hundreds of thousands and millions of millions of blockchains have been made.?Okay, look forward to)
+We focus on the "alliance chain" field, whose access standards, system architecture, number of participating nodes, consensus mechanism and other aspects differ from the public chain, so its performance is much higher。Even so, for the current mainstream blockchain platforms measured on conventional PC-class server hardware, TPS is generally in the thousands, and transaction latency is generally on the order of 1 to 10 seconds。(I have heard claims that blockchains with TPS in the hundreds of thousands or even millions have been built? Fine, we look forward to them)
-I have worked in large Internet companies for many years, in the field of mass services, in the face of the C10K problem (concurrent 10000 connection, million-level concurrency) has a familiar solution, for the general e-commerce business or content browsing services, ordinary PC-level server stand-alone up to tens of thousands of TPS, and the average delay of less than 500 milliseconds, flying general experience is normal, after all, the Internet product card may lead to user loss。For fast-growing Internet projects, through parallel expansion, flexible expansion, three-dimensional expansion of the way, almost no bottom line, no limit to the surface of the mountain tsunami of massive traffic.。
+I worked at large Internet companies for many years in the field of mass-scale services, where the C10K problem (10,000 concurrent connections, million-level concurrency) has familiar solutions: for typical e-commerce or content-browsing services, an ordinary PC-class server can reach tens of thousands of TPS on a single machine with average latency under 500 milliseconds, and a snappy experience is the norm, since a laggy Internet product loses users。Fast-growing Internet projects can, through horizontal scaling, elastic scaling and multi-dimensional scaling, absorb mountainous, tsunami-like waves of massive traffic with almost no floor and no ceiling。
-**In contrast, blockchain performance is slower than Internet services, and it is difficult to expand, because it is still in its "computing for trust" design ideas.。**
+**In contrast, blockchain performance is slower than Internet services and hard to scale, because it is constrained by its "computing for trust" design philosophy。**

### Where exactly is it slow? A look inside the "classical" blockchain system

-**1.
For security, tamper-proof and leak-proof traceability**, the introduction of encryption algorithms to process transaction data, increasing CPU computing overhead, including HASH, symmetric encryption, elliptic curve or RSA and other algorithms of asymmetric encryption, data signature and verification, CA certificate verification, and even the current slow to outrageous homomorphic encryption, zero-knowledge proof, etc.。In terms of data format, the data structure of the blockchain itself contains a variety of signatures, HASH and other transactions outside the verification data, data packaging and unpacking, transmission, verification and other processing is more cumbersome.。
+**1. For security, tamper-proofing and leak-proof traceability**, encryption algorithms are introduced to process transaction data, increasing CPU overhead: HASH, symmetric encryption, asymmetric algorithms such as elliptic curves or RSA, data signing and verification, CA certificate verification, and even homomorphic encryption and zero-knowledge proofs, which are currently outrageously slow。In terms of data format, the blockchain's own data structures carry all kinds of verification data beyond the transaction itself, such as signatures and HASHes, so packing, unpacking, transmission and verification are all more cumbersome。
-Compared with Internet services, there will also be steps for data encryption and protocol packaging and unpacking, but the more streamlined the better, optimized to the extreme, if not necessary, never increase the burden of cumbersome computing.。
+Internet services also have steps for data encryption and protocol packing and unpacking, but there the leaner the better: everything is optimized to the extreme, and no cumbersome computation is added unless strictly necessary。
-**2, in order to ensure the transaction transaction.**The transactions are performed serially and completely serially, first sorting the transactions and then executing the smart contract with a single thread to avoid transaction confusion, data conflicts, etc. caused by out-of-order execution.。Even if a server has a multi-core CPU, the operating system supports multi-threaded multi-process, and there are multiple nodes and multiple servers in the network, all transactions are methodically and strictly single-threaded on each computer.。
+**2. To guarantee transactional consistency**, transactions are executed completely serially: they are first sorted, then the smart contracts are executed in a single thread, to avoid the disorder and data conflicts that out-of-order execution would cause。Even if a server has a multi-core CPU, the operating system supports multiple threads and processes, and the network has multiple nodes and servers, all transactions still run methodically and strictly single-threaded on every computer。
-Internet services, on the other hand, are how many cores of how many servers can be used, using full asynchronous processing, multi-process, multi-threading, coroutine, caching, optimized IOWAIT, etc.。
+Internet services, by contrast, use every core of every server they can get, with fully asynchronous processing, multi-process, multi-threading, coroutines, caching, optimized IOWAIT, and so on。
-**3, in order to ensure the overall availability of the network**The blockchain uses a P2P network architecture and a Gossip-like transmission model, where all blocks and transaction data are broadcast indiscriminately to the network, and the receiving nodes continue to relay, a model that allows the data to be communicated as much as possible to everyone in the network, even if they are in different regions or subnets.。The cost is high transmission redundancy, which takes up more bandwidth, and the arrival time of propagation is uncertain, which may be fast or slow (many transfers)。
+**3, in order to ensure the overall
availability of the network**, the blockchain uses a P2P network architecture and a Gossip-like transmission model: all blocks and transaction data are broadcast indiscriminately to the network, and receiving nodes keep relaying them, so that the data reaches as many peers as possible, even across different regions or subnets。The cost is high transmission redundancy, which consumes more bandwidth, and uncertain propagation arrival times, sometimes fast and sometimes slow (after many hops)。
-Compared to Internet services, unless there is an error retransmission, the network transmission must be the most streamlined, with limited bandwidth to carry massive amounts of data, and the transmission path will strive for the best, point-to-point transmission.。
+Internet services, by contrast, keep network transmission as lean as possible apart from error retransmission, carry massive data over limited bandwidth, and strive for optimal, point-to-point transmission paths。
-**4. To support smart contract features**, similar to blockchain solutions such as Ethereum, in order to implement sandbox features, ensure the security of the operating environment and shield inconsistencies, its smart contract engine is either an interpretive EVM or a computing unit encapsulated by docker, and the startup speed and instruction execution speed of the smart contract core engine have not reached the highest level, and the memory resources consumed have not reached the optimal level.。
+**4.
To support smart contract features**, blockchain platforms such as Ethereum, in order to provide sandboxing, secure the execution environment, and shield inconsistencies, run smart contracts either in an interpreted EVM or in a compute unit encapsulated in Docker; as a result, neither the startup speed nor the instruction execution speed of the contract engine reaches the highest level, and the memory it consumes is far from optimal. -In conventional computer languages such as C.++, JAVA, go, and rust languages directly implement massive Internet services, often without restrictions in this regard。 +Massive Internet services, in contrast, are implemented directly in conventional languages such as C++, Java, Go, and Rust, usually without restrictions of this kind. -**5. In order to achieve the effect of easy verification and anti-tampering**In addition to the first mentioned, the block data structure carries a lot of data, for the transaction input and output, will use similar merkle tree, Patricia (Patricia) tree and other complex tree structure, through layer-by-layer calculation to obtain data proof, for the follow-up process quick verification.。The details of the tree are not expanded here, you can learn its mechanism through the information on the network.。 +**5.
In order to achieve easy verification and tamper resistance**, beyond what was mentioned in the first point, the block data structure carries a great deal of extra data: for transaction inputs and outputs, complex tree structures such as Merkle trees and Patricia trees are used, and data proofs are obtained through layer-by-layer calculation so that subsequent steps can verify quickly. The details of these trees are not expanded here; you can learn how they work from material available online. -Basically, the process of generating and maintaining this kind of tree is very, very, very cumbersome, which takes up both CPU computation and storage. After using the tree, the overall effective data carrying capacity (that is, the comparison between the transaction data initiated by the client and the final data actually stored) drops sharply to a few percent. In extreme cases, after receiving 10m of transaction data, it may actually require hundreds of megabytes of data maintenance overhead on the blockchain disk.。 +Basically, generating and maintaining this kind of tree is extremely cumbersome, consuming both CPU computation and storage. Once the tree is used, the overall effective data payload (that is, the ratio of the transaction data submitted by clients to the data finally stored) drops sharply to a few percent; in extreme cases, receiving 10 MB of transaction data may incur hundreds of megabytes of maintenance overhead on the blockchain's disk. -Internet services rarely use this tree proof structure because they basically do not consider the issue of distributed mutual trust.。 +Internet services rarely use such tree-based proof structures, because they rarely need to consider distributed mutual trust. -**6.
In order to achieve the consistency and credibility of the whole network**All blocks and transaction data in the blockchain will be driven by the consensus mechanism framework and broadcast on the network, and all nodes will run multi-step complex calculations and voting, and most nodes will recognize the data before landing.。 +**6. To achieve network-wide consistency and credibility**, all blocks and transaction data are driven by the consensus framework and broadcast across the network; every node runs multi-step, complex calculation and voting, and data is only committed once most nodes have acknowledged it. -Adding new nodes to the network will not increase the system capacity and improve the processing speed, which completely subverts the conventional Internet system thinking of "insufficient performance hardware compensation," the root of which is that all nodes in the blockchain are doing repeated checking and generating their own data storage, and do not reuse the data of other nodes, and the node computing power is uneven, and even slow down the final confirmation speed.。 +Adding new nodes to the network neither increases system capacity nor improves processing speed, which completely overturns the conventional Internet mindset of "when performance falls short, add hardware." The root cause is that every blockchain node repeats the same verification and maintains its own data storage rather than reusing other nodes' data; and since node computing power is uneven, extra nodes can even slow down final confirmation. -Adding nodes to a blockchain system will only increase fault tolerance and the credibility of the network, without enhancing performance, making the possibility of parallel scaling largely missing in the same chain.。 +Adding nodes to a blockchain system only increases fault tolerance and the credibility of the network without enhancing performance, leaving the possibility
of parallel scaling largely absent within a single chain. -Internet services are mostly stateless, data can be cached and reused, the steps between request and return are relatively simple, easy to expand in parallel, can quickly schedule more resources to participate in the service, with unlimited flexibility.。 +Internet services, by contrast, are mostly stateless: data can be cached and reused, the steps between request and response are relatively simple, parallel expansion is easy, and more resources can quickly be scheduled into service with almost unlimited elasticity. -**7, because the block data structure and consensus mechanism characteristics.**, resulting in transactions to the blockchain, will be sorted first, and then added to the block, with the block as a unit, a small batch of a small batch of data for consensus confirmation, rather than receiving a transaction immediately consensus confirmation, for example: each block contains 1000 transactions, every 3 seconds consensus confirmation, this time the transaction may take 1 to 3 seconds to be confirmed.。 +**7. Because of the block data structure and the consensus mechanism**, transactions sent to the blockchain are first sorted and then packed into blocks, and consensus confirms them block by block, one small batch at a time, rather than confirming each transaction the moment it arrives. For example, if each block holds 1,000 transactions and consensus runs every 3 seconds, a transaction may take 1 to 3 seconds to be confirmed. -Worse, transactions are queued all the time without being packed into blocks (due to queue congestion), resulting in longer acknowledgement delays。This transaction delay is generally much larger than the Internet service 500ms response standard。So blockchain is actually not suitable for direct use in the pursuit of fast response to real-time trading scenarios, the industry usually says "improve transaction efficiency" is
the final settlement time is included, such as the T.+1 Up to one or two days of reconciliation or clearing time, reduced to tens of seconds or minutes, making it a "quasi-real-time" experience。 +Worse, under queue congestion transactions can wait a long time without being packed into blocks, lengthening confirmation delays further. This latency is generally far above the 500 ms response standard of Internet services, so blockchain is actually not suited for direct use in real-time trading scenarios that demand fast responses. When the industry speaks of "improving transaction efficiency," it usually includes the final settlement time: for example, shortening a T+1 reconciliation or clearing cycle of one or two days down to tens of seconds or minutes, delivering a "quasi-real-time" experience. -To sum up, the blockchain system is born with several mountains, including the large internal computing overhead and storage of a single machine, the original sin of serial computing, the complex and redundant network structure, the long delay caused by the rhythm of block packaging consensus, and the difficulty of directly adding hardware to parallel expansion in scalability, resulting in obvious bottlenecks in both scale up and scale out.。 +To sum up, a blockchain system is born with several mountains to climb: heavy per-machine computation and storage overhead, the original sin of serial execution, a complex and redundant network structure, long delays imposed by the rhythm of block packaging and consensus, and the difficulty of scaling simply by adding hardware, resulting in obvious bottlenecks in both scale-up and scale-out. **Scale Out (equivalent to scale horizontally)**Scale out, such as adding a new set of independent machines to the original system and increasing the service capacity with more machines **Scale Up (equivalent to Scale vertically)**: Vertical expansion, upward expansion, such as adding CPU
and memory to the original machine, increasing processing power inside the machine -Facing the speed dilemma of blockchain, the developers of FISCO BCOS play the spirit of "Foolish Old Man," and strive to optimize。After a period of hard work, we have moved mountains and rivers, built one high-speed channel after another, so that the blockchain has found a way to the era of extreme speed (see the next article for details), which is what we will analyze in depth in our series of articles.。 \ No newline at end of file +Facing blockchain's speed dilemma, the developers of FISCO BCOS took up the spirit of the "Foolish Old Man Who Moved the Mountains" and optimized relentlessly. After a period of hard work we have moved mountains, opened one high-speed channel after another, and found blockchain a road into the era of extreme speed (see the next article for details), which is what this series of articles will analyze in depth. \ No newline at end of file diff --git a/3.x/en/docs/articles/2_required/entry_to_master.md b/3.x/en/docs/articles/2_required/entry_to_master.md index 8c88d61e7..73e60d093 100644 --- a/3.x/en/docs/articles/2_required/entry_to_master.md +++ b/3.x/en/docs/articles/2_required/entry_to_master.md @@ -2,37 +2,37 @@ Author: Zhang Kaixiang | Chief Architect, FISCO BCOS -At present, more and more people have entered or are ready to enter the field of blockchain, the process can not help but hold all kinds of doubts and problems。Remembering that I spent a few years ago, from "a little understanding" of the blockchain to all in, I also experienced a similar mental journey, this field does have some thresholds, but everything is difficult at the beginning, and the road to explore is far more than eighty-one difficult, here to sort out a few summary difficulties and insights, I would like to share.。 +At present, more and more people have entered or are preparing to enter the blockchain field, and along the way they inevitably carry all kinds of doubts and
problems. Recalling how, a few years ago, I went from "understanding a little" about blockchain to going all in, I experienced a similar mental journey. This field does have its thresholds, but everything is hard at the start, and the road of exploration holds far more than eighty-one tribulations. Here I have sorted out a few of the difficulties and insights I summarized along the way, and would like to share them. ## The difficulty of direction "Who am I?", "Where am I?", "Where am I going?": all echoes of philosophy's three classic questions. What is blockchain? What can blockchain do? Why is blockchain so hot? Can we do without blockchain? These questions are full of ultimate torture. -It's hard to answer these questions thoroughly because there are no standard answers.。All things at the forefront of innovation are probably so, developing in doubt and turmoil, groping in darkness and desolation, essence and dross flying together, oasis and leek, until the tipping point bursts。If you swing from side to side because you're full of doubts, or if you're stuck, the experience will be bad and the results will not be good in the process of doing related work research.。 +It's hard to answer these questions thoroughly, because there are no standard answers. Everything at the frontier of innovation is probably like this, developing amid doubt and turmoil, groping through darkness and desolation, essence and dross flying together, oases and leeks side by side, until the tipping point bursts. If you sway back and forth because you are full of doubts, or simply get stuck, then both the experience and the results of doing related work and research will suffer. -Share a little personal experience: the blockchain field has attracted countless smart people from all over the world from the very beginning, including geeks, scholars and masters, who have carried out a lot of technical and social practices.。This field contains the essence of mathematics, computer science, cryptography, game theory, economics, sociology and
other disciplines, which is a world of intellectual flying and ideological agitation。At present, the entire industry is getting unprecedented attention, including the government, the industry giants are paying attention, a lot of attention and resources continue to pour in, blockchain ushered in the "best season."。 +Let me share a little personal experience: from the very beginning, the blockchain field has attracted countless smart people from all over the world, including geeks, scholars and masters, who have carried out a great deal of technical and social practice. The field draws on the essence of mathematics, computer science, cryptography, game theory, economics, sociology and other disciplines; it is a world of soaring intellect and stirring ideas. The entire industry is now receiving unprecedented attention, with governments and industry giants watching closely and attention and resources pouring in continuously; blockchain has entered its "best season." -In a modern society that continues to evolve and has a diverse structure, the idea of distributed commerce has become a reality.。There will be more connections and collaboration between people, institutions and institutions, information and value will flow rapidly in the new network model, and blockchain, as one of the representatives of distributed technology, has a great opportunity to become a stronghold for a new generation of infrastructure and innovation.。 +In a modern society that keeps evolving and grows ever more diverse in structure, the idea of distributed commerce is becoming reality. There will be more connection and collaboration between people and between institutions, information and value will flow rapidly through new network models, and blockchain, as one representative of distributed technologies, has a great opportunity to become a stronghold of the next generation of infrastructure and innovation. -So, direction is not a problem。Even if you don't regard blockchain
as a "belief," just looking at the fascinating technology itself and the opportunities for the deep integration of blockchain and the real economy can still give us confidence.。 +So, direction is not a problem。Even if you don't regard blockchain as a "belief," just looking at the fascinating technology itself and the opportunities for the deep integration of blockchain and the real economy can still give us confidence。 ## Difficulty of concept -Among the three philosophical questions, "what is the blockchain" is the most obscure question, blocks, transactions, accounts, consensus, smart contracts, double flowers...?!When I first came into contact with the blockchain, I also had a feeling of being overturned.。There are some articles that introduce blockchain, often focusing on the social and economic efficiency of blockchain, from the value concept, these are certainly necessary, but as the saying goes, "science should be qualitative and more quantitative," as engineers and technicians, we should pay more attention to the knowledge points and basic principles of blockchain, and then clarify the terminology, grasp the architecture, processing logic and program flow.。 +Among the three philosophical questions, "what is the blockchain" is the most obscure question, blocks, transactions, accounts, consensus, smart contracts, double flowers..?!When I first came into contact with the blockchain, I also had a feeling of being overturned。There are some articles that introduce blockchain, often focusing on the social and economic efficiency of blockchain, from the value concept, these are certainly necessary, but as the saying goes, "science should be qualitative and more quantitative," as engineers and technicians, we should pay more attention to the knowledge points and basic principles of blockchain, and then clarify the terminology, grasp the architecture, processing logic and program flow。 -As mentioned earlier, blockchain contains the essence of a large number of 
disciplines, and there is also a saying in the industry: "**Blockchain didn't invent any new technology, it was a combination of mature technologies.**”。 +As mentioned earlier, blockchain contains the essence of a large number of disciplines, and there is also a saying in the industry: "**Blockchain didn't invent any new technology; it is a combination of mature technologies**". -- These mature technologies include data structures, such as linked lists, trees, graphs, filters, etc., which are the basics of data structures in universities; - Basic cryptography, including HASH and symmetric asymmetric encryption, digital signatures, etc., has been a classic technology for decades, and the new generation of cryptography in areas such as privacy protection has opened up a broad space for cryptography professionals to play.; - The discipline of distributed networks and systems itself is all-encompassing, covering the scope of knowledge of massive services, and many students who have been engaged in massive Internet service technology will have a good understanding of blockchain P2P networks and consensus algorithms, parallel computing models, and transactional consistency principles.; - Game theory and incentive compatibility are knowledge points in collaboration, is an important part of "blockchain thinking," engineering students may need to turn over the book, social science management background of the students will probably be familiar with it; +- These mature technologies include data structures such as linked lists, trees, graphs, and filters, the basics of any university data-structures course; +- Basic cryptography, including hashing, symmetric and asymmetric encryption, digital signatures, etc., has been classic technology for decades, and a new generation of cryptography in areas such as privacy protection has opened broad space for cryptography professionals; +- The discipline of distributed networks and systems itself is
all-encompassing, covering the knowledge needed for massive-scale services, and many students who have worked on massive Internet services will readily understand blockchain P2P networks and consensus algorithms, parallel computing models, and transactional consistency principles; +- Game theory and incentive compatibility are knowledge points about collaboration and an important part of "blockchain thinking"; engineering students may need to crack the books again, while students with backgrounds in social science, economics and management will likely find them familiar; - As for smart contracts, such as solidity language, WebAssembly, etc., rarely heard of it?In fact, these languages and programming mode learning curve may not be as high as javascript, there are a few years of program foundation students can basically get started in a week, write a smooth smart contract to。 -Blockchain makes people feel cognitive difficulties, because it is like a "basket," everything can be loaded into it, involving a complex technology, the combination of different ways and conventional technology routines.。So to a certain extent, learners should first empty themselves, to avoid letting their thinking in the original field interfere with learning, in the enrichment of their knowledge at the same time, accept the blockchain "group consensus," "prevent tampering," "undeniable," "high consistency" and other magical logic, and then dive into each independent concept, and will not feel unattainable。 +Blockchain feels cognitively difficult because it is like a "basket" into which anything can be loaded, involving many technologies combined in ways different from conventional routines. So, to some extent, learners should first empty themselves to keep the habits of their original field from interfering with learning; while enriching their knowledge, accept blockchain notions such as "group consensus," "prevent
tampering," "undeniable," "high consistency" and other magical logic, and then dive into each independent concept, and will not feel unattainable。 -The main point of breaking through the difficulty of the concept is to eliminate the noise from various channels, some information is specious, or each saying their own words, the same thing with N kinds of speech, confusing the definition, blurring the essence, no help at the same time also bring more questions。The reliable method is to focus on reading the formal content of authoritative media, pay attention to the document library of some mainstream blockchain projects, carefully and comprehensively read through the technical documents, and then find a field of interest (such as consensus algorithm), combined with their own experience and knowledge for comparative research.。 +The main point of breaking through the difficulty of the concept is to eliminate the noise from various channels, some information is specious, or each saying their own words, the same thing with N kinds of speech, confusing the definition, blurring the essence, no help at the same time also bring more questions。The reliable method is to focus on reading the formal content of authoritative media, pay attention to the document library of some mainstream blockchain projects, carefully and comprehensively read through the technical documents, and then find a field of interest (such as consensus algorithm), combined with their own experience and knowledge for comparative research。 -At the same time, you can also join the active open source community and technical circles, and experienced people to discuss more, the courage to throw out the problem, each term, each process to discuss thoroughly.。In the early days of our research on blockchain, the team often argued bitterly about the definition of a concept for a long time, and finally reached a happy consensus when everyone felt refreshed.。 +At the same time, you can also join the active open source 
community and technical circles, and experienced people to discuss more, the courage to throw out the problem, each term, each process to discuss thoroughly.。In the early days of our research on blockchain, the team often argued bitterly about the definition of a concept for a long time, and finally reached a happy consensus when everyone felt refreshed.。 +At the same time, you can join active open source communities and technical circles, discuss with experienced people, dare to throw out questions, and talk every term and every process through thoroughly. In the early days of our blockchain research, the team often argued bitterly over the definition of a concept for a long time, finally reaching a happy consensus that left everyone refreshed. In the conceptual stage, do not seek full blame, do not become a data collection machine, one bite can not be fat, based on reliable learning materials, clarify basic concepts, in practice to verify and explore new concepts, establish a methodology to discover and solve problems, and slowly be able to draw inferences from one another, maybe one day will be able to fill the top。 @@ -40,53 +40,53 @@ In the conceptual stage, do not seek full blame, do not become a data collection Well, philosophical and conceptual issues are finally not going to get in the way of our learning, so how do we continue the "21-day entry to mastery" path?。As a technician, encounter new technology platforms, software systems, programming languages...
That, of course, is: "Don't be unintelligent, just do it!" -A few years ago, when we first started working on blockchain, we read through the code of several popular open source blockchain projects abroad and built a test network to analyze how these platforms could be used in complex financial businesses.。There was a confusion at the time, if you develop applications directly based on the underlying platform, do you have to modify the underlying platform code directly when you need to implement more features?。 +A few years ago, when we first started working on blockchain, we read through the code of several popular open source blockchain projects from abroad and built a test network to analyze how those platforms could be used in complex financial business. One question puzzled us at the time: if you develop applications directly on the underlying platform, do you have to modify the platform code itself whenever you need more features? And when you see the "smart contract" this thing, the idea is opened: the use of smart contracts as a middle layer, in the contract to write business logic, and define a clear functional interface for the caller, so that the business can be well decoupled from the underlying, while the underlying platform is positioned as a powerful engine, through the architecture of decoupling, so that the whole development process becomes clear and reasonable, relaxed and happy, it feels like from the "C / S"。 -In addition, we felt that the open source projects at the time were mainly in the form of public chains, which were not so well considered in terms of security, control and compliance, and were not suitable for financial scenarios.。 +In addition, we felt that the open source projects of the time were mainly public chains, with limited consideration of security, control and compliance, and not well suited to financial scenarios. -So, without taking advantage of the platform, find a
way to build one.。Since then, the long road of deep-rooted technology and iterative application verification has been opened.。This process has also established close cooperation with a number of partners in the open source community, which is "from open source and give back to open source." After several years of polishing by the open source working group, FISCO BCOS has become a comprehensive open source, safe and controllable, high-speed and stable, easy-to-use and friendly financial-grade underlying technology platform, providing a wealth of functions and various operating tools for finance and the broader industry.。Rich and comprehensive documentation and easy-to-use experience can help developers from quick start to proficient, the overall technical threshold and development costs have become unprecedented low。 +So, with no ready-made platform to lean on, we set out to build one. That opened a long road of deepening the technology and iteratively verifying it in applications. Along the way we established close cooperation with many partners in the open source community, truly "coming from open source and giving back to open source." After several years of polishing by the open source working group, FISCO BCOS has become a comprehensively open, safe and controllable, fast and stable, friendly and easy-to-use financial-grade underlying technology platform, offering rich functionality and a variety of operational tools for finance and beyond. Comprehensive documentation and an approachable experience take developers from quick start to proficiency, and the overall technical threshold and development cost have dropped to unprecedented lows. -With the underlying underlying platform, downloading, installing, configuring, running, reading user manuals, writing hello world and business applications, Debug and analyzing logs...
are all step by step jobs.。 +With the underlying platform in place, downloading, installing, configuring, running, reading the user manual, writing hello world and business applications, debugging and analyzing logs... all become step-by-step jobs. -Our goal is that users can build their own blockchain network in a few minutes with one-click installation, docker, cloud services, etc., and can write a complete smart contract within a week through learning, and implement business logic based on SDKs supporting multiple languages (Java, Node.js, Python, Go... still being added), and release the business online to maintain stable operation.。 +Our goal is that users can build their own blockchain network in minutes via one-click installation, Docker, cloud services and so on; learn to write a complete smart contract within a week; implement business logic on SDKs for multiple languages (Java, Node.js, Python, Go... with more being added); and release the business online to run stably. -To this end, we have been continuously optimizing the use of documentation, development manuals, and deployment and operations tools.。As we all know, "code farmers" like to write code, and writing notes and documents is more painful, in order to hand over a beautiful homework to the community, we have devoted their lifetime language level, revised again and again, just wrote hundreds of thousands of words of the document library。 +To this end, we have kept improving the documentation, development manuals, and deployment and operations tools. As we all know, "code farmers" love writing code and find notes and documents painful; to hand the community a fine piece of homework, we have summoned our best writing, revised again and again, and produced a documentation library running to hundreds of thousands of words. -At the same time, the open source community has launched a series of offline and online salons,
training, and extensive communication and technical support in a community way.。In many live learning and hackathon competitions, we are pleased to see that developers can implement their ingenious project design based on FISCO BCOS in two or three days, and some developers have contributed their optimizations related to open source projects to Github.。 +At the same time, the open source community has launched a series of offline and online salons and trainings, with extensive communication and technical support delivered community-style. In many live-learning sessions and hackathons, we are pleased to see developers implement ingenious project designs on FISCO BCOS in two or three days, and some have contributed their optimizations back to the open source projects on GitHub. -To this extent, even for developers who have no experience in blockchain research and development, there is no problem getting started quickly, even if the bottom of the blockchain is still like a black box to be explored, but just like installing App on the computer, using mysql, tomcat and other software, it can be used to feel the charm of the blockchain.。 +By this point, even developers with no blockchain R&D experience can get started quickly: even if the blockchain's internals remain a black box to be explored, just as when installing an app on a computer or using MySQL, Tomcat and other software, you can simply use it and feel the charm of blockchain. ## The difficulty of going deep -For technicians, there is no end to exploring the connotation of technology: participating in the development of the
underlying blockchain, realizing large-scale blockchain applications, adding more useful features and tools to the blockchain ecology, and optimizing the performance of existing functions。 -As mentioned earlier, blockchain system knowledge points and frameworks are widely covered, both in terms of knowledge and depth.。If you quantify it, you can apply the 10,000-hour theory: if you study and work every day 8-10 hours, more than a month, a year to get a small success, two years to be familiar with the road, three years to become an old driver... But the old driver to the road is still long。We hope to shorten this process through continuous science, communication and practice, but after all, learning is a basic "proof of work" and there are no other shortcuts.。 +As mentioned earlier, blockchain systems cover a wide range of knowledge points and frameworks, in both breadth and depth。If you quantify it, you can apply the 10,000-hour theory: studying and working 8-10 hours a day, you can achieve modest success in a year, become familiar with the road in two, and be an old driver in three... but even the old driver still has a long road ahead。We hope to shorten this process through continuous education, communication and practice, but after all, learning is a basic "proof of work" and there are no other shortcuts。 Learning methods, first of all, a lot of extensive reading, every morning a eye to the evening, you can see the continuous update of industry news, public number articles, technology big coffee blog, mail group discussion group, open source projects...... 
The process of reading may encounter the collision of different views, need to remove the false and keep the true, in an open mind at the same time, but also to maintain their own position and direction。 -Then there is in-depth intensive reading, first select one or two directions of interest, study some classic papers such as cryptography, distributed theory, etc.。The core consensus algorithm of FISCO BCOS uses the PBFT and RAFT algorithms, which are based on the research and interpretation of the original paper, and the implementation and optimization are done with a deep understanding.。Cryptographic principles are widely used in blockchains, with changing scenarios and logic, and the principles may come from a "top meeting" paper.。Intensive reading depth principle analysis of articles and academic papers, based on solid theory, in order to play according to their own needs, creatively solve engineering problems。 +Then comes in-depth intensive reading: first select one or two directions of interest and study some classic papers on, say, cryptography or distributed theory。The core consensus of FISCO BCOS uses the PBFT and RAFT algorithms, whose implementation and optimization were done on top of research and interpretation of the original papers, with deep understanding。Cryptographic principles are widely used in blockchains, with ever-changing scenarios and logic, and a principle may come from a top-conference paper。Intensively reading in-depth principle-analysis articles and academic papers gives you a solid theoretical base, on which you can improvise according to your own needs and creatively solve engineering problems。 In fact, our online documents have hundreds of thousands of words of scale, all kinds of information, the technical community will regularly interpret the hot knowledge, as long as the reader carefully read the online technical documents, accept the public number of considerate push, and hands-on more practice, over time, will be able to deeply 
understand the technical principles of the blockchain, through the context of architecture design, the establishment of a solid knowledge system。 -Finally, the object of intensive reading, of course, also includes the source code, after all, "Talk is cheap, show me the code," blockchain open source project code is mostly tens of thousands to hundreds of thousands of lines of level, reading the code is the most direct way to achieve the level of the cow.。In the course of studying blockchain, we have many long nights to review the code, when we read a squint, the code is flying, all kinds of interfaces and objects dance, both elegant and regular, clear vein, that kind of pleasure is simply indescribable.。This experience, before, will continue to have。 +Finally, the object of intensive reading of course also includes the source code; after all, "Talk is cheap, show me the code"。Blockchain open source projects mostly run to tens of thousands or hundreds of thousands of lines, and reading the code is the most direct way to reach expert level。In the course of studying blockchain, we have spent many long nights reviewing code; when fully absorbed, the code seems to fly, all kinds of interfaces and objects dancing, elegant and orderly, with a clear thread; that kind of pleasure is simply indescribable。We have had this experience before, and will keep having it。 -If you have reached this level, the field threshold has been basically crossed, the test is the developer's mental and physical strength.。 +If you have reached this level, the threshold of the field has basically been crossed; what remains is a test of the developer's mental and physical stamina。 ## Sustained Difficulty -In the past few years, due to a series of situations such as confusion, policies and regulations, technical obstacles, etc., blockchain will encounter challenges such as market volatility and delayed application landing.。What is the future, although there is no prophet to tell us, but now we have seen the trend。This goes 
back to the first question: "direction," a clear and clear direction that answers not only "do you want to enter this field," but also "do you want to stick to it?"。We have been developing scenarios in the distributed business model, providing quality services to the public, and providing the industry with complete and easy-to-use open source technology, which has never changed from the beginning to the present and even in the future.。 +In the past few years, due to factors such as market chaos, policies and regulations, and technical obstacles, blockchain has encountered challenges such as market volatility and delayed application landing。No prophet can tell us the future, but the trend is already visible。This goes back to the first question, "direction": a clear direction answers not only "do you want to enter this field" but also "do you want to stick to it?"。We have been developing scenarios in the distributed business model, providing quality services to the public, and providing the industry with complete and easy-to-use open source technology; this has never changed from the beginning to the present, and will not change in the future。 -To be more specific, if we have deployed a business system on the blockchain, there are other issues that affect the lifecycle and sustainability of the system: operability, upgradeability, compatibility, data capacity, business performance capacity, and so on.。 +To be more specific, once we have deployed a business system on the blockchain, other issues affect the lifecycle and sustainability of the system: operability, upgradeability, compatibility, data capacity, business performance capacity, and so on。 -The process of communicating with friends in the community many times, who will ask questions, such as whether the new version is compatible with the old one?As your business grows, more and more data can be migrated and reused?This is the true voice of the user.。The 
platform we have built must take the route of sustainable development, pay attention to the compatibility of the software system, have a reasonable release rhythm, and a comprehensive data migration and maintenance strategy, which can better protect the interests of community users and make users willing to develop together with the community for a long time.。 +In many conversations with friends in the community, they ask questions such as: is the new version compatible with the old one? As the business grows and data accumulates, can the data be migrated and reused? This is the true voice of the user。The platform we build must take the route of sustainable development, pay attention to the compatibility of the software system, and have a reasonable release rhythm and a comprehensive data migration and maintenance strategy; this better protects the interests of community users and makes them willing to develop together with the community for the long term。 In addition, the blockchain field is still in rapid development, a variety of new technologies, new ideas, new models, new policies are still emerging, this field has gathered a large number of smart people in the world, they are not only smart but also hard, never idle。So working in this field, every day there will be new knowledge, new stimulation, this is a kind of luck, on the other hand, it will make people extremely anxious。 -How to digest such a huge amount of information, how to explore and master cutting-edge knowledge, how to better meet the needs of users and new challenges brought about by rapid development, and how to make effective breakthrough innovations, this is really a 
world where innovation and anxiety coexist。 -As a practitioner, you must continue to read a lot, filter and absorb in various information streams, and constantly summarize / summarize / think / develop;Every requirement and user ISSUE feedback is a small goal, and every new release is a new starting point for the next version.。The world of blockchain is no different from other technology fields. You must be sharp and running, curious and humble, constantly learning and practicing, revising short boards to seek breakthroughs, and sharing the results with the community.。Just as systems need great resilience, people need great resilience.。Mutual encouragement。 +As a practitioner, you must keep reading widely, filtering and absorbing from the various information streams, and constantly summarize, reflect and iterate; every requirement and every piece of user ISSUE feedback is a small goal, and every new release is a new starting point for the next version。The world of blockchain is no different from other technology fields. You must stay sharp and keep running, curious and humble, constantly learning and practicing, shoring up your weaknesses to seek breakthroughs, and sharing the results with the community。Just as systems need great resilience, so do people。Let us encourage each other。 #### Recommended Links diff --git a/3.x/en/docs/articles/2_required/go_through_sourcecode.md b/3.x/en/docs/articles/2_required/go_through_sourcecode.md index 88ee52dfe..9c77cdd5c 100644 --- a/3.x/en/docs/articles/2_required/go_through_sourcecode.md +++ b/3.x/en/docs/articles/2_required/go_through_sourcecode.md @@ -4,40 +4,40 @@ Author : LI Hui-zhong | Senior Architect, FISCO BCOS ## Introduction -As an important part of "new infrastructure," blockchain has attracted more and more attention from technology enthusiasts。Blockchain geeks believe in "code is law" and believe that a trusted world can be built through code.。 +As an important part of "new infrastructure," blockchain has attracted more and more attention from technology enthusiasts。Blockchain geeks believe in "code is law" and believe that a trusted world can be built through code。 -As a comprehensive subject technology, blockchain is based on mathematics, cryptography, computer principles, distributed networks and game theory and many other basic disciplines, the underlying code easily hundreds of thousands of lines, if not clear the way, to fully grasp these codes is very challenging.。 +As a technology spanning multiple disciplines, blockchain is based on mathematics, cryptography, computer principles, distributed networks, game theory and many other foundations; the underlying code easily runs to hundreds of thousands of lines, and without a clear path, fully grasping it is very challenging。 -This article hopes to give readers a way to read the blockchain source 
code, so that readers can calmly say "show me the code" when facing the underlying projects of the blockchain。 ## Basic knowledge reserve -Blockchain is a comprehensive discipline, involving multiple professional fields, including a wide range of basic knowledge, before in-depth study of blockchain needs to do a certain breadth of knowledge reserves.。Note that this is about breadth, not depth, which means you only need to know the basics and how they work.。 +Blockchain is a comprehensive discipline involving multiple professional fields and a wide range of basic knowledge; before studying blockchain in depth, you need to build up a certain breadth of knowledge。Note that this is about breadth, not depth, which means you only need to know the basics and how they work。 -- Cryptography Related: Understanding the fundamentals and roles of hashing, symmetric encryption, asymmetric encryption, and digital signatures; -- Computer operating system related: understanding multi-process, multi-threaded, mutually exclusive, parallel and other related concepts and functions.; -- Data structure-related: Understand basic data structures and usage scenarios such as queues, stacks, and trees; +- Cryptography related: understanding the basic principles and functions of hash, symmetric encryption, asymmetric encryption and digital signature; +- Computer operating system related: understanding of multi-process, multi-threaded, mutually exclusive, parallel and other related concepts and functions; +- Data structure related: understand the queue, stack, tree and other basic data structures and usage scenarios; - Computer network related: understand TCP / IP, heartbeat packets, message flow and other basic concepts; 
systems related: understanding peer-to-peer networks, distributed consistency, CAP and other related concepts and fundamentals; +- Database related: understand the basic concepts of the database, understand the basic principles of KV database; +- Related to computer principles: understanding the concepts of program compilation, parsing, execution and bytecode, virtual machines, etc; +- Distributed systems related: understand the concepts and fundamentals of peer-to-peer networks, distributed consistency, CAP, etc; - Program development related: master the relevant programming language, build tools, etc., understand the basic process of project construction。 ## Multi-dimensional walking -After you have stored the relevant basic knowledge, you can open a real blockchain underlying code, which can be quickly downloaded to the project code through git clone.。 +After building up the relevant basic knowledge, you can open the code of a real blockchain underlying project, which can be quickly downloaded with git clone。 But with hundreds of thousands of lines of code, where to start??
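The "git clone, then survey" step above can be made concrete with a few shell commands. This is only a sketch: the clone URL is the public FISCO BCOS repository address and is shown commented out to avoid a network dependency, and the `demo/` tree with its `libconsensus` / `libstorage` names is an invented stand-in for a real source tree:

```shell
# A real survey would begin with a clone (commented out; URL assumed):
# git clone https://github.com/FISCO-BCOS/FISCO-BCOS.git

# For illustration only: fabricate a tiny mock source tree to survey.
mkdir -p demo/libconsensus demo/libstorage demo/test
printf 'int main() { return 0; }\n' > demo/main.cpp
printf '# build config placeholder\n' > demo/CMakeLists.txt

# 1. Find the build manifest -- it names the modules and dependencies.
find demo -maxdepth 1 -name 'CMakeLists.txt'

# 2. List top-level directories -- module names hint at functionality.
ls -d demo/*/

# 3. Gauge the scale of the project by counting source lines.
wc -l $(find demo -name '*.cpp')
```

On a real project the same three commands give a first map of the code, before any deep reading begins.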
-An excellent blockchain underlying project must have an excellent engineering code that has its reasonable organizational structure and texture logic。The day-to-day code should follow the example of Ding Ding, first find out the basic structure and logic of the blockchain, and then start the day-to-day code, you can achieve twice the result with half the effort.。 +An excellent blockchain underlying project must have excellent engineering code, with a reasonable organizational structure and internal logic。Reading the code should be done the way Pao Ding carved the ox: first work out the basic structure and logic of the blockchain, then dive into the code itself, and you can achieve twice the result with half the effort。 -This article recommends going through four different perspectives and looking at the code from your own needs, rather than being swayed by huge amounts of code.。These four perspectives are the functional perspective, the system perspective, the user perspective and the development perspective, which clarify the code architecture and key algorithms from the logical level, the operational level, the use level and the development level respectively.。 +This article recommends going through four different perspectives and looking at the code from your own needs, rather than being swamped by the huge amount of code。These four perspectives are the functional perspective, the system perspective, the user perspective and the development perspective, which clarify the code architecture and key algorithms from the logical, operational, usage and development levels respectively。 ## functional perspective -Before going deep into a blockchain underlying code, you should first obtain project design documents 
through its official website, technical documents, github wiki and other channels to understand its basic functional design。 -Generally, each project will provide a list of core functions, overall architecture diagrams, functional module diagrams and other introduction documents, through which you can grasp the basic functions of the project。Even if you really can't find it, it doesn't matter. Most of the underlying blockchain projects have less difference at the functional design level, and the core functional modules are roughly the same.。 +Generally, each project will provide a list of core functions, overall architecture diagrams, functional module diagrams and other introductory documents, through which you can grasp the basic functions of the project。Even if you really can't find them, it doesn't matter: most blockchain underlying projects differ little at the functional design level, and the core functional modules are roughly the same。 ![](../../../images/articles/go_through_sourcecode/IMG_5076.PNG) @@ -53,15 +53,15 @@ The core code of the interface layer is as follows: ![](../../../images/articles/go_through_sourcecode/IMG_5079.PNG) -From the functional perspective, first locate the code position of the core functional module, and then carefully in-depth each functional code, from a single functional module, you can also continue to recursively use the functional perspective split method, the breadth of traversal until you understand the whole picture.。 +From the functional perspective, first locate the code of each core functional module, then dig into each module's code carefully; starting from a single module, you can recursively apply the same functional decomposition, traversing breadth-first until you understand the whole picture。 ## system perspective -From the perspective of the entire blockchain network operation, focus on the system behavior in which blockchain nodes participate throughout their 
life cycle.。 +From the perspective of the entire blockchain network operation, focus on the system behavior in which blockchain nodes participate throughout their life cycle。 -Concerns include what initialization steps the node has gone through since tapping the command to start the node, and then how to establish a peer-to-peer network with other nodes, as well as complete distributed collaboration.。 +Concerns include what initialization steps the node goes through from the moment the start command is typed, how it then establishes a peer-to-peer network with other nodes, and how it completes distributed collaboration。 -Due to the slight differences in the deployment architecture of different blockchains, the system operation mode is also different, but the change is inseparable, the system perspective, each blockchain system has to go through the process of node initialization, the establishment of peer-to-peer network, the completion of distributed interaction.。 +Deployment architectures differ slightly between blockchains, so their modes of operation also differ, but the essence never changes: from the system perspective, every blockchain system goes through node initialization, establishment of the peer-to-peer network, and completion of distributed interaction。 Looking at the blockchain from a systems perspective, we must first focus on initialization。Take FISCO BCOS as an example, the blockchain node starts from the main function entry, initializes and starts each module through the libinializer module, and the startup sequence is as follows: @@ -69,52 +69,52 @@ Looking at the blockchain from a systems perspective, we must first focus on ini An important feature of FISCO BCOS can be known through the startup sequence-support for multi-group ledgers, each group is an independent Ledger module, and each Ledger has independent storage, synchronization, and consensus processing functions。 -At the same time, the system will start a number of threads (or processes, coroutines, similar principles), these threads include network monitoring, consensus, message synchronization, etc., can be combined with code analysis and system commands to view the running nodes to determine which key threads, to understand the working mechanism of key threads can be basically mastered blockchain system operation mechanism.。 +At the same time, the system will start a number of threads (or processes or coroutines; the principle is similar), including network monitoring, consensus, message synchronization, etc。By combining code analysis with system commands that inspect the running node, you can determine which threads are key; once you understand how the key threads work, you have basically mastered how the blockchain system operates。 Taking FISCO BCOS as an example, the key threads after node startup and the relationship between them are as follows: ![](../../../images/articles/go_through_sourcecode/IMG_5081.PNG) -After the initialization is completed, the host thread of the network module will actively establish connections with other nodes according to the configuration list, and continue to listen for connections from other nodes;Sync threads start sending block heights to each other. If the block height is lower than that of other nodes, the download logic is enabled.;RPC and Channel threads wait for the client to send a request and stuff the received transaction into txpool;The Sealer thread gets the transaction from txpool, and the Consensus thread starts processing the consensus packet.。 +After the initialization is completed, the host thread of the network module will actively establish connections with other nodes according to the configuration list, and continue to listen for connections from other nodes;Sync threads start sending block heights to each other. 
If the block height is lower than that of other nodes, the download logic is enabled;RPC and Channel threads wait for the client to send a request and stuff the received transaction into txpool;The Sealer thread gets the transaction from txpool, and the Consensus thread starts processing the consensus packet。 In this way, the entire blockchain system operates in an orderly manner, completing client requests and distributed collaboration。 ## User perspective -The user perspective focuses on the operation interface and transaction life cycle, the interface and protocol design for accessing the blockchain, the codec method, the core data structure, the error code specification, etc. It also focuses on how to send a transaction to the chain and what processing processes the transaction goes through on the chain until a network-wide consensus is reached.。 +The user perspective focuses on the operation interface and transaction life cycle, the interface and protocol design for accessing the blockchain, the codec method, the core data structure, the error code specification, etc. 
It also focuses on how to send a transaction to the chain and what processing processes the transaction goes through on the chain until a network-wide consensus is reached。 -Generally, the underlying projects of the blockchain will provide documentation on the interaction protocols, usually implementing different types of interaction protocols, including JsonRPC, gRPC, Restful, and so on.。 +Generally, blockchain underlying projects will provide documentation on their interaction protocols, usually implementing several types, including JsonRPC, gRPC, Restful, and so on。 -The interaction interface varies from project to project, but Metropolis contains interfaces such as sending transactions, deploying contracts, invoking contracts, viewing blocks, viewing transactions and receipts, and viewing blockchain status.。The data encoding for different projects will also be different, some using Json, some using protobuf, etc.。 +The interaction interface varies from project to project, but mostly contains interfaces such as sending transactions, deploying contracts, invoking contracts, viewing blocks, viewing transactions and receipts, and viewing blockchain status。The data encoding also differs between projects, some using Json, some using protobuf, etc。 -After you understand the design details of the interaction protocol, interface, codec, and error code from the technical documents, the next most important thing is to send transactions, deploy contracts, call contracts, these key interfaces, the code is stripped, throughout the transaction life cycle, so as to understand the core logic of the underlying blockchain.。 +After you understand the design details of the interaction protocol, interfaces, codec and error codes from the technical documents, the most important next step is to take the key interfaces, sending transactions, deploying contracts and calling contracts, and trace them through the code along the entire transaction life cycle, so as to understand the core logic of the underlying blockchain。 -In the case of FISCO BCOS, multiple modules work together to complete the entire life cycle of a transaction. +In the case of FISCO BCOS, multiple modules work together to complete the entire life cycle of a transaction ![](../../../images/articles/go_through_sourcecode/IMG_5082.PNG) ## Development perspective -The development perspective focuses on the entire code project, including third-party dependencies, interrelationships between source code modules, unit test frameworks and test cases, compilation and build methods, continuous integration and benchmark, and how to participate in community source code contributions, among others.。 +The development perspective focuses on the code project as a whole, including third-party dependencies, interrelationships between source code modules, unit test frameworks and test cases, compilation and build methods, continuous integration and benchmarks, and how to participate in community source code contributions, among others。 -Different languages have corresponding recommended compilation and build methods and single-test frameworks, usually in the blockchain project source directory can quickly locate the third-party dependency library, such as cmake built C.++Project has CmakeLists.txt file, go project has go.mod file, rust project has cargo.toml file, etc。 +Different languages have corresponding recommended build methods and unit-test frameworks, and the third-party dependencies can usually be located quickly in the project source directory: a C++ project built with cmake has a CMakeLists.txt file, a go project has a go.mod file, a rust project has a cargo.toml file, etc。 Take FISCO BCOS as an example. 
From CMakeLists.txt, you can see that the dependent libraries include: ![](../../../images/articles/go_through_sourcecode/IMG_5083.PNG) -Project core source code including FICO-Bcos program entry code, as well as libxxx module code, according to the name of the module can quickly identify its corresponding function, here also reflects the quality of a project source code, high-quality code should be "code is comment"。 +The core source code of the project includes the entry code of the fisco-bcos program and the code of the libxxx modules; the name of each module quickly identifies its function。 Unit test code in the test directory, using the boost unit test framework, subdirectory unittests single test code and source directory one-to-one correspondence, it is easy to find the source code corresponding to the unit test code。 -The code of the build and continuous integration tool maintains a number of continuous integration use cases in different scenarios in the tools directory and sub-directory ci, and each pr (pull request) submitted on github triggers these continuous integration use cases, which can be merged into pr if and only if each use case passes successfully.。 +The build and continuous integration tooling maintains continuous integration cases for different scenarios in the tools directory and its ci sub-directory; each pr (pull request) submitted on github triggers these cases, and the pr can be merged if and only if every case passes。 The code specification and contribution method of FISCO BCOS are described in detail in the CODING _ STYLE.md and CONTRIBUTING.md files, and community users are encouraged to actively participate in the contribution。 ## SUMMARY -Blockchain involves a lot of fields and knowledge, you need to go deep into the details of the source code in order to truly fully grasp the core technology of blockchain.。The so-called "heavy sword without a front, great skill without work," master the basic methodology of source code read, in order to be in front of a huge amount of code, the face does not change the color of the heart does not jump.。 +Blockchain involves many fields and much knowledge; you need to go deep into the details of the source code to truly grasp its core technology。As the saying goes, "a heavy sword needs no edge; great skill needs no ornament": master the basic methodology of reading source code, and you can stay calm and unfazed in front of a huge amount of code。 -This paper proposes a way to read the underlying code of the blockchain from four different perspectives: function, system, user and development.。 +This paper proposes a way to read the underlying code of the blockchain from four different perspectives: function, system, user and development。 Finally, the examples given in this article are FISCO BCOS, but this walk-through method can be applied to any other blockchain underlying project, I hope this article is helpful to you。 \ No newline at end of file diff --git a/3.x/en/docs/articles/2_required/practical_skill_tree.md b/3.x/en/docs/articles/2_required/practical_skill_tree.md index f16a700b4..72e5a0a58 100644 --- a/3.x/en/docs/articles/2_required/practical_skill_tree.md +++ b/3.x/en/docs/articles/2_required/practical_skill_tree.md @@ -2,27 +2,27 @@ Author: Zhang Kaixiang | Chief Architect, FISCO BCOS -With the new wave of blockchain craze, many students entered the field with great enthusiasm, but also encountered a lot of doubts, what knowledge blockchain development needs?How to learn?Where to learn from?What to do in case of problems?This article will try to give a quick and practical guide to newcomers in the blockchain field.。 +With the new wave of blockchain craze, many students entered the field with great enthusiasm, but also encountered a lot of doubts, what knowledge blockchain 
development needs? How to learn? Where to learn from? What to do when problems arise? This article tries to give newcomers to the blockchain field a quick and practical guide。
## I. Basic IT Skills
-Blockchain can be called "black technology," itself has a large number of technical elements, people who are interested in cutting into the blockchain from a technical point of view, should have or master the basic IT skills, to at least the conventional level of "programmer" or "system administrator" skill level.。
+Blockchain can be called "black technology": it carries a large number of technical elements。Anyone who wants to approach blockchain from a technical angle should have or acquire basic IT skills, at least at the level of a conventional "programmer" or "system administrator"。
![](../../../images/articles/practical_skill_tree/IMG_4890.PNG)
**Knowledge of Linux operating system is required first。**
-Most blockchain systems can run on Linux, including CentOS and Ubuntu. 
You must know at least some basic Linux commands, such as ls to list directories, ps or top to view processes, find to locate files, netstat to view network connections, ulimit to check system parameter limits, df / du to check disk space, apt / yum to install software, and so on。
-There are many books and materials in this area, and I believe I can get started in a week.。In addition, good at Linux man instructions, you can get detailed help for each command。If you learn to write shell scripts, it's even more powerful, and you can automate a lot of tedious operations.。
+There are many books and materials on this topic, and you can get started within a week。In addition, get good at using the Linux man pages to obtain detailed help for each command。If you also learn to write shell scripts, even better: you can automate a lot of tedious operations。
**Have a clear network concept。**
-The blockchain is originally a distributed system, and the nodes must be connected through the network, but if you run, you don't need much network knowledge, you only need to understand what TCP / IP is.;Difference between public network, intranet and local address;How to Configure Ports;Is the interconnection between nodes and nodes, SDKs, and nodes blocked by firewalls and network policies;Use ifconfig, telnet, ping, netstat and other commands to check network information and detect and locate network problems。Generally speaking, Linux books will also cover this part of the content。
+The blockchain is by nature a distributed system whose nodes must connect through the network, but to run one you do not need deep network knowledge. You only need to understand what TCP / IP is;the difference between public, intranet and local addresses;how to configure ports;whether node-to-node and SDK-to-node connections are blocked by firewalls or network policies;and how to use ifconfig, telnet, ping, netstat and other commands to check network information and detect and locate network 
problems。Generally speaking, Linux books will also cover this part of the content。 Blockchain peripheral support, such as browsers, middleware, business applications, will rely on some third-party basic software, such as MySQL / MariaDB database, Nginx service, Tomcat service, etc., at least know how to install a specified version of the software, master the basic operation of modifying the configuration file of these software and making it effective, understand the password, permission configuration and network security policy of each software, in order to protect their own security。 -If it is built based on a container environment such as cloud, docker, or k8s, you need to understand the functions, performance, and configuration methods of the service provider or container you are using, including resource allocation: CPU, memory, bandwidth, storage, etc., as well as security and permission configuration, network policy configuration, and operation and maintenance methods, so that you can easily distribute the build while maintaining its stability and availability.。 +If it is built based on a container environment such as cloud, docker, or k8s, you need to understand the functions, performance, and configuration methods of the service provider or container you are using, including resource allocation: CPU, memory, bandwidth, storage, etc., as well as security and permission configuration, network policy configuration, and operation and maintenance methods, so that you can easily distribute the build while maintaining its stability and availability。 Various cloud service providers and container solutions have comprehensive documentation and customer service channels to help users use。 @@ -30,19 +30,19 @@ Various cloud service providers and container solutions have comprehensive docum If you are using the Java language, you should be familiar with Eclipse, IntelliJ IDEA and other integrated IDEs, familiar with Gradle-based project management software, familiar with 
Spring, Springboot and other java-based development components, familiar with the IDE or command line on the resource path such as ApplicationContext and other path definitions, and perhaps myBatis and other popular components, which can be found in java-related communities and websites information and books。
-If you are proficient in the Java language, using the Java SDK to connect to the blockchain and run a Demo Sample will be very easy to write.。
+If you are proficient in the Java language, connecting to the blockchain with the Java SDK and running a demo sample will be very easy。
-If other languages are used, we also provide blockchain SDKs in Python, Node.js, Golang, etc.。
+If you use other languages, blockchain SDKs are also provided in Python, Node.js, Golang, etc。
-Different languages, their installation packages have different stable versions, will use different environments and dependent installation configuration methods, there will be different IDE and debugging methods, will not be listed in this article, I believe that learning and using the language itself, the programmer is already the most basic skills.。
+Each language has its own stable package versions, its own environment and dependency setup, and its own IDEs and debugging methods; these will not be listed in this article, since learning and using a language is, after all, one of a programmer's most basic skills。
-**Finally, as a player surfing in the open source world, github, the "world's largest same-sex dating site," must be on it.。**
+**Finally, as a player surfing the open source world, you must be on github, jokingly known as the "world's largest developer community"。**
Register a github account, master the basic operation of the git version management tool, clone and pull open source software code, submit issue, commit your own modifications, submit a pull request to an open source project, and then click a 
star, passionate and stylish, leave your name in the open source world。
-## Second, the basic knowledge stack in the field of blockchain.
+## Second, the basic knowledge stack in the field of blockchain
-The following sections of knowledge are more relevant to blockchain or a blockchain platform, from bottom to top as follows.
+The following areas of knowledge are more specific to blockchain, or to a particular blockchain platform, ordered from bottom to top as follows
![](../../../images/articles/practical_skill_tree/IMG_4891.JPG)
@@ -52,31 +52,31 @@ Strictly speaking, this is not a proprietary knowledge in the blockchain field, 
### Basic Applied Cryptography
-Basic application cryptography is actually very wide, as a beginner, at least to understand the common algorithms of symmetric and asymmetric encryption, such as AES symmetric encryption, RSA, ECDSA elliptic curve and other asymmetric encryption algorithms, and the role of these algorithms in signature verification, data encryption, communication negotiation and protection.。If you want to use the national secret, you need to understand the concept and use of a series of algorithms from SM2 to SM9.。
+Basic applied cryptography is actually a very broad field. As a beginner, you should at least understand the common symmetric and asymmetric algorithms, such as AES for symmetric encryption and asymmetric algorithms like RSA and elliptic-curve ECDSA, and the roles these algorithms play in signature verification, data encryption, and communication negotiation and protection。If you want to use the Chinese national cryptography (SM) standards, you need to understand the concepts and usage of the SM2 through SM9 series of algorithms。
### distributed network structure
-Blockchain is an innate "distributed network system." Nodes and nodes are interconnected through P2P ports of the network, and clients and SDKs are interconnected through RPC / Channel ports. 
First of all, it is necessary to ensure that the networks are interoperable, the listening addresses and ports are correct, the ports are open, the firewall and network policies are correct, and the certificates for secure connection are in place to ensure that the "general rules" of the blockchain are not painful.。
+Blockchain is an inherently distributed network system: nodes interconnect through the network's P2P ports, while clients and SDKs connect through RPC / Channel ports. First of all, make sure the networks are interoperable, the listening addresses and ports are correct, the ports are open, the firewall and network policies are right, and the certificates for secure connections are in place, so that connectivity never becomes the blockchain's pain point。
This also requires users to have basic network knowledge, network tools, and understand the unique node types (consensus nodes, observation nodes, light nodes, etc.) and interconnection methods (point-to-point two-way connections, JSON RPC HTTP short connections, Channel long connections, etc.)。For details, please refer to [FISCO BCOS Network Port Explanation](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485319&idx=1&sn=b1fb98d27a0f34f5824a876ba8fa5fe6&chksm=9f2ef59ba8597c8d09eac31ccf0be07910e53f2d88afc5f58347247ba045c63736cc74133524&token=942411972&lang=zh_CN#rd)。
### Smart Contracts
-Smart contracts can be said to be a door for application developers to face the blockchain, and getting into this door is exciting.。The popular smart contract language is Solidity, which originated from Ethereum and has been for blockchain since its inception.。
+Smart contracts are the doorway through which application developers face the blockchain, and stepping through it is exciting。The most popular smart contract language is Solidity, which originated with Ethereum and has been designed for blockchain since its inception。
-The Solidity language is actively 
updated, well-documented, consistent and transactional, and functional enough to enable medium-sized commercial applications.。
+The Solidity language is actively updated and well documented; it supports consistency and transactional semantics, and is functional enough to build medium-sized commercial applications。
-Of course, it is not as good as a mature language in terms of real-time debugging, third library support, and running speed, if the developer wants to use C++When writing smart contracts in languages such as Solidity, it is necessary to have an in-depth understanding of the computing paradigm on the blockchain to avoid writing smart contracts that cannot be agreed upon, and it is generally recommended to write contracts in languages other than Solidity after having an in-depth understanding.。
+Of course, Solidity is not as good as mature languages in terms of real-time debugging, third-party library support, and running speed。If a developer wants to write smart contracts in a language such as C++ instead, it is necessary to first understand the computing paradigm on the blockchain in depth, to avoid writing smart contracts on which nodes cannot reach consensus; writing contracts in languages other than Solidity is generally recommended only after gaining such an understanding。
-To master the Solidity contract, of course, read through the documentation and try it out.。Refer to the following: [https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/smart_contract.html](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/smart_contract.html)
+To master Solidity contracts, of course, read through the documentation and try things out。Refer to the following: [https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/smart_contract.html](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/smart_contract.html)
### ABI Interface Principle
-On a blockchain that uses an EVM as a virtual machine, the EVM executes a 
contract in the Solidity language.。Contract compilation will generate a file with the suffix ABI, which is actually the JSON text defined by the contract interface. You can use the text viewer to find out how the contract you wrote is translated into the interface in the ABI, the interface return type, parameter list, parameter type, etc. As long as there is an ABI file for the contract, you can call the interface of the blockchain SDK to parse the transaction, return value, receipt, etc. related to the contract。
+On a blockchain that uses the EVM as its virtual machine, the EVM executes contracts written in the Solidity language。Compiling a contract generates a file with the ABI suffix, which is in fact a JSON description of the contract interface. Open it in a text viewer to see how the contract you wrote is translated into ABI interfaces: the return types, parameter lists, parameter types, and so on. As long as you have a contract's ABI file, you can call the blockchain SDK's interfaces to parse the transactions, return values, receipts, etc. related to that contract。
### block data structure
-A block has a block header and a block block。The block has a transaction list, and each transaction (Transaction or Tx) in the transaction list has an initiator, a target address, a calling method and parameters, and a sender signature.。The result of the transaction will generate a "Receipt (Receipt)," which contains the return value of the called method, the EventLog generated by the running process, etc 
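To make the ABI description above concrete, here is a minimal Python sketch. The two-entry ABI fragment is invented for illustration (it is not from a real contract), but its shape matches the JSON a Solidity compiler emits:

```python
import json

# A hypothetical ABI fragment for illustration: two function entries,
# shaped like the JSON produced when compiling a Solidity contract.
abi_text = '''
[
  {"constant": false, "inputs": [{"name": "n", "type": "string"}],
   "name": "set", "outputs": [], "type": "function"},
  {"constant": true, "inputs": [],
   "name": "get", "outputs": [{"name": "", "type": "string"}], "type": "function"}
]
'''

abi = json.loads(abi_text)

# List each function with its parameter and return types: exactly the
# information an SDK needs to encode calls and decode results.
for entry in abi:
    if entry["type"] == "function":
        params = ", ".join(i["type"] for i in entry["inputs"])
        returns = ", ".join(o["type"] for o in entry["outputs"]) or "void"
        print(f"{entry['name']}({params}) -> {returns}")
```

A real ABI file additionally describes events and the constructor, which SDKs parse in the same way。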
+A block consists of a block header and a block body。The block body holds the transaction list, and each transaction (Transaction, or Tx) in that list has an initiator, a target address, a called method with its parameters, and the sender's signature。Executing a transaction generates a Receipt, which contains the return value of the called method, the EventLogs produced during execution, and so on。
Knowing this, you can basically grasp the context of the blockchain data, and you can continue to delve into the data structure of the merkle root and the corresponding merkle tree is generated, what is the role (e.g. for SPV: Simplified PaymentVerification)。Refer to the following documentation: [https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/protocol_description.html](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/protocol_description.html)
@@ -84,67 +84,67 @@ Knowing this, you can basically grasp the context of the blockchain data, and yo
Here, the functional interfaces exposed by blockchain nodes are collectively referred to as "RPC interfaces"。View on-chain data, including blocks, transactions, receipts, system information, configuration information, initiate transactions to the chain to invoke smart contracts, modify system configurations, etc., or send messages and listen for events through the AMOP protocol, all through the RPC interface。
-Dozens of RPC interfaces are suggested to read one by one, or make good use of searching to find the interface you want.。
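As a sketch of what calling one of these RPC interfaces involves, the snippet below builds a JSON-RPC 2.0 request payload offline. The method name and parameter layout (group ID, block number, include-transactions flag) follow typical FISCO BCOS 2.x usage, but treat them as assumptions and check the API documentation for the exact signatures:

```python
import json

def make_rpc_request(method, params, req_id=1):
    # Standard JSON-RPC 2.0 envelope, as accepted on a node's RPC port.
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": req_id,
    })

# Hypothetical call: fetch block 0 of group 1, including full transactions.
payload = make_rpc_request("getBlockByNumber", [1, "0x0", True])
print(payload)
```

The payload would then be POSTed to a node's RPC endpoint (for example http://127.0.0.1:8545; the address and port depend on your deployment) and the response parsed with json.loads。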
The SDK has basically packaged the interface and protocol well, and you can also develop your own interface client based on an in-depth understanding of coding modes such as ABI and RLP.。Refer to the following documentation: [https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/api.html](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/api.html) +The protocol used for interface communication may be JSON RPC or the original Channel protocol created by FISCO BCOS. The SDK has basically packaged the interface and protocol well, and you can also develop your own interface client based on an in-depth understanding of coding modes such as ABI and RLP。Refer to the following documentation: [https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/api.html](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/api.html) ### Admission and permission models -The alliance chain emphasizes security and control, and node access is the first step. After the chain is initialized, other nodes or SDKs are configured with corresponding certificates to access the existing alliance chain.。 +The alliance chain emphasizes security and control, and node access is the first step. 
After the chain is initialized, other nodes or SDKs are configured with corresponding certificates to access the existing alliance chain。
-The roles on the chain are controlled by a permission model, including administrator permissions, permissions to publish contracts, permissions to create tables, parameter configuration permissions, etc., to avoid confusion between roles, and some roles are both athletes and referees.。
+The roles on the chain are governed by a permission model covering administrator permissions, permissions to deploy contracts, permissions to create tables, parameter configuration permissions, and so on. This avoids role confusion, in particular a role acting as both player and referee。
-Beginners need to carefully read the technical documents provided by the blockchain platform to understand the principles and follow the steps in the operation manual.。Refer to the following documentation: [https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/permission_control.html](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/permission_control.html)
+Beginners need to carefully read the technical documents provided by the blockchain platform to understand the principles, and follow the steps in the operation manual。Refer to the following documentation: [https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/permission_control.html](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/permission_control.html)
### data storage model
-Blockchain nodes will use file databases (LevelDB or RocksDB), or relational databases such as MySQL to store data, so there is really a "database" on the chain.。
+Blockchain nodes use file-based databases (LevelDB or RocksDB) or relational databases such as MySQL to store data, so there really is a "database" on the chain。
-The data written to the database includes blocks, transactions, receipts, status data generated by contracts, etc 
Whether to write "historical data generated by calling contracts" depends on different platforms, FISCO BCOS only saves the latest status value by default, and can optionally write modification records to the "receipt" or "history table" for tracking.。
+The data written to the database includes blocks, transactions, receipts, state data generated by contracts, and so on. Whether "historical data generated by contract calls" is written depends on the platform: FISCO BCOS saves only the latest state values by default, and can optionally write modification records to the receipt or to a "history table" for tracking。
-FISCO BCOS also provides solutions to export historical data to an off-chain database for correlation analysis.。Refer to the following documentation: [https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/storage/index.html](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/storage/index.html)
+FISCO BCOS also provides solutions to export historical data to an off-chain database for correlation analysis。Refer to the following documentation: [https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/storage/index.html](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/storage/index.html)
### Principle of consensus mechanism
Alliance chains are usually implemented using plug-in consensus mechanisms, and FISCO BCOS provides two efficient consensus algorithms, PBFT and RAFT, rather than "mining" these energy-intensive and inefficient consensus。
-The consensus mechanism is the soul of the blockchain, and only through in-depth study of the consensus mechanism can we gain a deeper understanding of the effectiveness of the blockchain through multi-party collaboration, high consistency, support for transaction transactions, and tamper-proof and evil-proof.。Refer to the following documentation: 
[https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/consensus/index.html](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/consensus/index.html)
+The consensus mechanism is the soul of the blockchain。Only by studying it in depth can we truly appreciate how the blockchain achieves effectiveness through multi-party collaboration, high consistency, transactional semantics, tamper resistance, and resistance to misbehavior。Refer to the following documentation: [https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/consensus/index.html](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/consensus/index.html)
-The knowledge of blockchain is all-encompassing, and the deeper knowledge includes distributed system theory, game theory, cutting-edge cryptography, economics, sociology, etc. Only by mastering the above basic knowledge, and then in-depth study, by analogy, using scenarios to verify and explore innovative applications, can we give full play to the potential of technology and feel the charm of distributed commerce.。
+Blockchain knowledge is all-encompassing; the deeper layers include distributed systems theory, game theory, cutting-edge cryptography, economics, sociology, and more. Only by mastering the basic knowledge above, then studying further, reasoning by analogy, and using real scenarios to verify and explore innovative applications, can we give full play to the technology's potential and feel the charm of distributed commerce。
-## Third, what a learner. 
+## Third, what a learner needs
In this process, learners are expected to:
### Patience in reading documents
-Our open source project documents are more than 200,000 words long, and there are a large number of technical analysis and popular science articles in the public number, which are programmers in addition to coding, exhausted their only language reserves, code out of the massive text, is a huge technical wealth, covering all aspects of related open source projects.。If you can read through, or remember the document structure and title, open it quickly when needed, enough to solve puzzles and go deep.。
+Our open source project documentation runs to more than 200,000 words, and our WeChat official account carries a large number of technical analyses and popular-science articles: massive text that programmers, beyond coding, squeezed out of their limited reserves of language. It is a huge store of technical wealth covering every aspect of the related open source projects。If you read it through, or at least remember the document structure and titles so you can open the right page quickly when needed, that is enough to resolve puzzles and dig deep。
### Ability to search information
-Documents, public numbers have a search function, when you think of questions related to the open source community, you can easily use keyword search, generally can find the answer。If there is a language unknown, you can raise it with the open source project team or supplement it according to your own understanding.。Common knowledge points, such as operating systems, networks, etc., can generally be found through public network search engines。
+The documentation and the official account both have search functions; when a question about the open source community comes to mind, a quick keyword search will usually find the answer。If some wording is unclear, you can raise it with the open source project team or fill the gap with your own understanding。Common knowledge points, such as operating systems and networking, can generally be 
found through public network search engines。
### Ability to troubleshoot environmental and dependency issues
-Open source software involves the system environment, third-party software, software versions, etc. often have complex dependencies, too high or too low version may have some problems, please pay attention to read the project documentation on the hardware and software environment and dependency description, to ensure that your environment meets the requirements, and make good use of configuration management tools, software installation tools to obtain and set the appropriate version.。
+Open source software involves the system environment, third-party software, and software versions, which often have complex interdependencies; versions that are too new or too old can both cause problems。Read the project documentation's description of the hardware and software environment and its dependencies carefully to ensure your environment meets the requirements, and make good use of configuration management and package installation tools to obtain and set the appropriate versions。
### Commissioning capability
-As mentioned above, the debugging tools of the Solidity language are not yet perfect, but you can make good use of the return value of the contract, EventLog, etc., through WeBASE, console and other tools to debug, and consult the Solidity documentation to understand where the problem may be.。
+As mentioned above, the debugging tools of the Solidity language are not yet mature, but you can make good use of contract return values, EventLogs and similar signals, debug through tools such as WeBASE and the console, and consult the Solidity documentation to narrow down where the problem may lie。
-After the debug level is enabled for the logs of the blockchain nodes, detailed information will also be printed. 
You can check the running logs to obtain running information and possible error information, and analyze these information in combination with your own operations, such as the process of issuing transactions, to improve debugging efficiency.。
+Once the debug log level is enabled on the blockchain nodes, detailed information is also printed. Check the running logs for runtime information and possible error messages, and analyze them in combination with your own operations, such as the flow of issuing a transaction, to improve debugging efficiency。
-At the same time, the current open source software usually prints the cause of the error and the prompt to solve the problem on the screen, carefully check the operation feedback, the probability can understand the cause of the error and the solution.。
+At the same time, today's open source software usually prints the cause of an error, along with hints for solving it, directly on the screen; read this feedback carefully and you can most likely work out both the cause and the solution。
### Code reading ability
-The maximum performance of open source software is to spread the code to the developers and learners without omission, understand the code structure, check the key processes in the code, use keywords to search for the corresponding implementation in the code, you can go deep into the details of the system, dig design ideas, locate problems, find optimization methods.。A studious and hardcore programmer who can talk to the world through code.。
+The greatest openness of open source software is that it hands all the code, without omission, to developers and learners. Understand the code structure, examine its key flows, and search by keyword for the corresponding implementations, and you can dive into the system's details, uncover design ideas, locate problems, and find optimizations。A studious and hardcore programmer can talk to 
the world through code。
### Ways and means of asking questions
-"A good question is more important than an answer."。Our community is very active, everyone is very enthusiastic to answer and solve problems。We encourage questions to be asked openly in the community, so that on the one hand, everyone can share the problem and find solutions, and on the other hand, more people can help the questioner.。At the same time, I hope that when the questioner asks a question, a one-time detailed description of the relevant operating steps, system environment, software version, error tips and the desired solution are put forward.。
+"A good question is more important than an answer。" Our community is very active, and everyone is enthusiastic about answering and solving problems。We encourage questions to be asked openly in the community: on the one hand everyone can share the problem and its solutions, and on the other hand more people can help the questioner。At the same time, we hope that a questioner provides, in one go, a detailed description of the relevant operating steps, system environment, software versions, error messages, and the desired outcome。
-If it is a general problem, you can search and then ask questions, which is conducive to cultivating the ability to solve problems independently.。Hope that the questioner can feedback deeper questions to the community to help the community quickly optimize。For many typical problems, the community has also accumulated some proven solutions that we will collate and publish for easy access.。
+For common problems, search first and then ask; this cultivates the ability to solve problems independently。We hope questioners will also feed deeper questions back to the community, helping it improve quickly。For many typical problems, the community has accumulated proven solutions, which we will collate and publish for easy reference。
The road from newcomers to old 
birds may be long, if you can refer to some of the methods of this essay, you can step on many fewer pits and write more applications。Enjoy blockchain, the community advances with you。
@@ -164,7 +164,7 @@ The road from newcomers to old birds may be long, if you can refer to some of th
[FISCO BCOS Public Development Tutorial Warehouse](http://mp.weixin.qq.com/mp/homepage?__biz=MzU5NTg0MjA4MA==&hid=9&sn=7edf9a62a2f45494671c91f0608db903&scene=18#wechat_redirect)
-[The sparrow is small and has all five internal organs| From Python-SDK Talk about FISCO BCOS Multilingual SDK](https://mp.weixin.qq.com/s/YZdqf3Wxsnj8hY2770CuQA)This article explains in detail how to understand the block link port from the client's application.。
+[The sparrow is small and has all five internal organs| Talk about FISCO BCOS Multilingual SDK from Python-SDK](https://mp.weixin.qq.com/s/YZdqf3Wxsnj8hY2770CuQA): this article explains in detail how to understand the blockchain interfaces from the perspective of a client application。
[Solidity Smart Contract (Chinese)](https://solidity-cn.readthedocs.io/)(Note that select the corresponding version)
diff --git a/3.x/en/docs/articles/3_features/30_architecture/dag-based_parallel_transaction_execution_engine.md b/3.x/en/docs/articles/3_features/30_architecture/dag-based_parallel_transaction_execution_engine.md
index 54c7ce317..d13ddcc0d 100644
--- a/3.x/en/docs/articles/3_features/30_architecture/dag-based_parallel_transaction_execution_engine.md
+++ b/3.x/en/docs/articles/3_features/30_architecture/dag-based_parallel_transaction_execution_engine.md
@@ -2,29 +2,29 @@
Author: Li Chen Xi | FISCO BCOS Core Developer
-In the blockchain world, transactions are the basic units that make up transactions。To a large extent, transaction throughput can limit or broaden the applicable scenarios of blockchain business. 
The higher the throughput, the wider the scope of application and the larger the user scale that blockchain can support.。Currently, TPS (Transaction per Second), which reflects transaction throughput, is a hot indicator for evaluating performance.。In order to improve TPS, the industry has put forward an endless stream of optimization solutions, all kinds of optimization means of the final focus, are to maximize the parallel processing capacity of transactions, reduce the processing time of the whole process of transactions.。

+In the blockchain world, transactions are the basic units that make up blocks. To a large extent, transaction throughput limits or broadens the applicable scenarios of blockchain business: the higher the throughput, the wider the scope of application and the larger the user base a blockchain can support. Currently TPS (Transactions per Second), which reflects transaction throughput, is a popular indicator for evaluating performance. To improve TPS, the industry has put forward a steady stream of optimization schemes, and all of them ultimately focus on maximizing the parallel processing capacity of transactions and reducing end-to-end transaction processing time.

-In the multi-core processor architecture has become the mainstream of today, the use of parallel technology to fully tap the potential of the CPU is an effective solution.。A parallel transaction executor (PTE, Parallel Transaction Executor) based on the DAG model is designed in FISCO BCOS 2.0.。

+Now that multi-core processor architectures have become mainstream, using parallel techniques to fully tap the potential of the CPU is an effective solution. FISCO BCOS 2.0 introduces a Parallel Transaction Executor (PTE) based on the DAG model.

-PTE can take full advantage of multi-core processors, so that transactions in the block can be executed in parallel as much as possible;At the same
time to provide users with a simple and friendly programming interface, so that users do not have to care about the cumbersome parallel implementation details。The experimental results of the benchmark program show that compared with the traditional serial transaction execution scheme, the PTE running on the 4-core processor can achieve about 200% ~ 300% performance improvement under ideal conditions, and the calculation improvement is proportional to the number of cores.。

+PTE makes full use of multi-core processors so that the transactions in a block can be executed in parallel as far as possible; at the same time it provides users with a simple and friendly programming interface, so that users need not care about cumbersome parallel implementation details. Benchmark results show that, compared with the traditional serial transaction execution scheme, PTE running on a 4-core processor achieves roughly a 200%~300% performance improvement under ideal conditions, and the speedup is proportional to the number of cores.

PTE has laid a solid foundation for the performance of FISCO BCOS. This article comprehensively introduces the design ideas and implementation of PTE, covering:

- **Background**: performance bottlenecks of traditional schemes and an introduction to the DAG parallel model
- **Design ideas**: problems encountered when applying PTE to FISCO BCOS, and their solutions
- **Architecture design**: the architecture and core process of FISCO BCOS after PTE is applied
-- **core algorithm**: Introduces the main data structures and algorithms used.
+- **Core algorithm**: introduces the main data structures and algorithms used

- **Performance evaluation**: presents the performance and scalability test results of PTE

## Background

-The FISCO BCOS transaction processing module can be abstracted as a transaction-based state machine。In FISCO BCOS, "state" refers to the state of all accounts in the blockchain, while "transaction-based" means that FISCO BCOS uses transactions as a state migration function and updates from the old state to the new state based on the content of the transaction.。FISCO BCOS starts from the genesis block state, continuously collects transactions occurring on the network and packages them into blocks, and executes transactions in the blocks among all nodes participating in the consensus.。When transactions within a block are executed on multiple consensus nodes and the state is consistent, we say that consensus is reached on the block and the block is permanently recorded in the blockchain。

+The FISCO BCOS transaction processing module can be abstracted as a transaction-based state machine. In FISCO BCOS, "state" refers to the state of all accounts on the blockchain, while "transaction-based" means that FISCO BCOS uses transactions as the state-transition function, updating from the old state to the new state according to the content of each transaction. FISCO BCOS starts from the genesis block state, continuously collects transactions occurring on the network, packages them into blocks, and executes the transactions in those blocks on all nodes participating in consensus. When the transactions within a block have been executed on multiple consensus nodes and the resulting states are consistent, we say that consensus is reached on the block, and the block is permanently recorded on the blockchain.

-As can be seen from the above-mentioned blockchain packaging → consensus → storage process, executing all transactions in the block is the only way to blockchain。The traditional transaction execution scheme is that the
execution unit reads the transactions one by one from the block to be agreed upon, and after each transaction is executed, the state machine migrates to the next state until all transactions are executed serially, as shown in the following figure.

+As can be seen from the packaging → consensus → storage process above, executing all the transactions in a block is a step every blockchain must go through. In the traditional transaction execution scheme, the execution unit reads transactions one by one from the block under consensus; after each transaction is executed, the state machine migrates to the next state, until all transactions have been executed serially, as shown in the following figure.

![](../../../../images/articles/dag-based_parallel_transaction_execution_engine/IMG_5175.PNG)

-Obviously, this way of executing transactions is not performance-friendly。Even if two transactions do not intersect, they can only be executed in order of priority.。As far as the relationship between transactions is concerned, since the one-dimensional "line" structure has such pain points, why not look at the two-dimensional "graph" structure??

+Obviously, this way of executing transactions is not performance-friendly. Even if two transactions do not intersect at all, they can still only be executed one after the other. Since the one-dimensional "line" structure of transaction relationships has such pain points, why not look at the two-dimensional "graph" structure?
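The "do two transactions intersect?" test can be made concrete with a toy sketch (hypothetical Python, not FISCO BCOS code; `resources` and `can_run_in_parallel` are illustrative names). In the transfer scenario, two transfers may run in parallel only if the sets of accounts they touch are disjoint:

```python
# Toy illustration: a simple transfer touches exactly two account balances,
# and two transfers can run in parallel only if those account sets are disjoint.

def resources(transfer):
    """The mutually exclusive resources (account balances) a transfer touches."""
    sender, recipient, _amount = transfer
    return {sender, recipient}

def can_run_in_parallel(tx1, tx2):
    return resources(tx1).isdisjoint(resources(tx2))

a_to_b = ("A", "B", 10)
c_to_d = ("C", "D", 5)
d_to_e = ("D", "E", 3)

print(can_run_in_parallel(a_to_b, c_to_d))  # True: no shared account
print(can_run_in_parallel(c_to_d, d_to_e))  # False: both touch account D
```

A serial executor ignores this information entirely; the graph structure discussed next exists precisely to exploit it.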
In practical applications, we can organize transactions into a dependency graph according to the mutually exclusive resources each transaction needs when it executes (mutually exclusive means the resource is used exclusively; in the transfer problem above, the mutually exclusive resources are the balance states of the accounts). To prevent the dependency relationships in the graph from forming a cycle, we can stipulate that, among transactions in the list that involve the same mutually exclusive resource, a later transaction unidirectionally depends on the earlier one.

@@ -33,39 +33,39 @@ As shown in the figure below, the 6 transfer transactions on the left can be org

![](../../../../images/articles/dag-based_parallel_transaction_execution_engine/IMG_5176.PNG)

-In a trade DAG, a trade with an entry of zero is a ready trade that has no dependencies and can be put into operation immediately.。When the number of ready transactions is greater than 1, ready transactions can be spread across multiple CPU cores for parallel execution.。When a transaction is executed, the entry of all transactions dependent on the transaction is reduced by 1, and as the transactions continue to be executed, ready transactions continue to be generated.。In the extreme case, if the number of layers of the constructed transaction DAG is 1 (i.e., all transactions are independent transactions without dependencies), the increase in the overall execution speed of the transaction will directly depend on the number of cores n of the processor, and if n is greater than the number of transactions in the block, the execution time of all transactions in the block is the same as the execution time of a single transaction。

+In a transaction DAG, a transaction with an in-degree of zero is a ready transaction: it has no outstanding dependencies and can be executed immediately. When there is more than one ready transaction, they can be spread across multiple CPU cores for parallel execution. When a
transaction is executed, the in-degree of every transaction that depends on it is reduced by 1, and as execution proceeds, new ready transactions keep being generated. In the extreme case where the constructed transaction DAG has only one layer (i.e., all transactions are independent, with no dependencies), the overall speedup of transaction execution depends directly on the number of processor cores n; if n is greater than the number of transactions in the block, executing all transactions in the block takes the same time as executing a single transaction.

How to apply the transaction DAG model, which in theory has such an irresistibly attractive property, to FISCO BCOS?

## Design Ideas

-**To apply the transaction DAG model, the primary problem we face is: for the same block, how to ensure that all nodes can reach the same state after execution, which is a key issue related to whether the blockchain can be out of the block normally.。**

+**To apply the transaction DAG model, the primary problem we face is: for the same block, how do we ensure that all nodes reach the same state after execution? This is the key issue that determines whether the blockchain can produce blocks normally.**

-FISCO BCOS Adoption Verification(state root, transaction root, receipt root)The way in which the triples are equal to determine whether the states agree。The transaction root is a hash value calculated based on all transactions in the block. As long as all consensus nodes process the same block data, the transaction root must be the same.。

+FISCO BCOS checks state consistency by verifying that the (state root, transaction root, receipt root) triples are equal. The transaction root is a hash value calculated from all transactions in the block.
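As a toy illustration of why identical block data yields an identical transaction root (plain Python; this is a simplified hash fold, not the actual Merkle structure used by FISCO BCOS, and `transaction_root` is an illustrative name):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def transaction_root(txs):
    """Fold all transaction hashes into one root (toy stand-in for a
    Merkle tree; a real implementation pairs and rehashes level by level)."""
    root = b""
    for tx in txs:
        root = h(root + h(tx))
    return root.hex()

block = [b"A->B:10", b"C->D:5", b"D->E:3"]
# Two nodes holding identical block data derive identical roots...
assert transaction_root(block) == transaction_root(list(block))
# ...while any difference in the transaction list changes the root.
assert transaction_root(block) != transaction_root(block[:2])
```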
As long as all consensus nodes process the same block data, the transaction root is guaranteed to be the same.

-As we all know, for instructions executed in parallel on different CPU cores, the order of execution between instructions cannot be predicted in advance, and the same applies to transactions executed in parallel.。In the traditional transaction execution scheme, every time a transaction is executed, the state root changes once, and the changed state root is written into the transaction receipt. After all transactions are executed, the final state root represents the current state of the blockchain, and a receipt root is calculated based on all transaction receipts.。

+As we all know, the execution order of instructions running in parallel on different CPU cores cannot be predicted in advance, and the same applies to transactions executed in parallel. In the traditional transaction execution scheme, the state root changes once after each transaction is executed, and the changed state root is written into that transaction's receipt.
After all transactions are executed, the final state root represents the current state of the blockchain, and a receipt root is calculated from all transaction receipts.

-As you can see, in the traditional execution scenario, state root plays a role similar to a global shared variable.。When transactions are executed in parallel and out of order, the traditional method of calculating state root is obviously no longer applicable, because on different machines, the order of execution of transactions is generally different, at this time there is no guarantee that the final state root can be consistent, similarly, receive root can not guarantee consistency.。

+As you can see, in the traditional execution scheme the state root plays a role similar to a global shared variable. When transactions are executed in parallel and out of order, this traditional way of calculating the state root no longer applies: the execution order of transactions generally differs from machine to machine, so there is no guarantee that the final state roots will be consistent, and likewise the receipt roots cannot be guaranteed to be consistent.

-In FISCO BCOS, the solution we use is to execute the transaction first, record the history of each transaction's change of state, and then calculate a state root based on these history after all transactions are executed, and at the same time, the state root in the transaction receipt is all changed to the final state root after all transactions are executed, thus ensuring that even if the transactions are executed in parallel, the final consensus node can still reach an agreement.。

+In FISCO BCOS, our solution is to execute the transactions first and record how each transaction changes the state; after all transactions have been executed, a state root is calculated from these change records, and the state root in every transaction receipt is set to this final state root. This ensures that even when transactions are executed in parallel, the consensus nodes can still reach agreement.

-**Once the status problem is solved, the next question is how to determine if there is a dependency between two transactions.?**

+**Once the state problem is solved, the next question is: how do we determine whether a dependency exists between two transactions?**

-Unnecessary performance loss if two transactions are judged to have no dependencies;Conversely, if the two transactions rewrite the state of the same account but are executed in parallel, the final state of the account may be uncertain。Therefore, the determination of dependencies is an important issue that affects performance and can even determine whether the blockchain can work properly.。

+If two independent transactions are wrongly judged to be dependent, performance is lost unnecessarily; conversely, if two transactions that rewrite the state of the same account are executed in parallel, the final state of that account may become non-deterministic. Determining dependencies is therefore an important issue that affects performance and can even determine whether the blockchain works correctly at all.

In a simple transfer scenario, we can judge whether two transactions are dependent from the sender and recipient addresses of the transfers. Consider the following three transfer transactions: A → B, C → D, D → E.
It is easy to see that transaction D → E depends on the result of transaction C → D, while transaction A → B has nothing to do with the other two and can therefore be executed in parallel with them.

-This analysis is correct in a blockchain that only supports simple transfers, but once it is put into a Turing-complete blockchain that runs smart contracts, it may not be as accurate because we don't know exactly what is going on in the transfer contract written by the user, and what might happen is: A.-> B's transaction seems to have nothing to do with the account status of C and D, but in the user's underlying implementation, A is a special account, and every money transferred out of account A must be deducted from account C for a fee.。In this scenario, if all three transactions are related, they cannot be executed in parallel, and if the transactions are also divided according to the previous dependency analysis method, they are bound to fall.。

+This analysis is correct on a blockchain that only supports simple transfers, but once it is applied to a Turing-complete blockchain that runs smart contracts it may no longer be accurate, because we cannot know exactly what happens inside a user-written transfer contract. For example, the A → B transaction may appear to have nothing to do with the account states of C and D, yet in the user's underlying implementation A is a special account, and every transfer out of account A must first deduct a fee from account C. In that scenario all three transactions are related and cannot be executed in parallel; if they were nevertheless partitioned according to the previous dependency-analysis method, errors would be bound to occur.

-Can we automatically deduce which dependencies actually exist in the transaction based on the content of the user's contract??The answer is not very reliable。It's hard to keep track of what data is actually manipulated in a user contract, and even doing so costs a lot of money, which is a
far cry from our goal of optimizing performance.。

+Can we automatically deduce the dependencies that actually exist in a transaction from the content of the user's contract? Not reliably. It is hard to track exactly which data a user contract manipulates, and even doing so incurs a high cost, which runs counter to our goal of optimizing performance.

-In summary, we have decided to delegate the assignment of transaction dependencies in FISCO BCOS to developers who are more familiar with the content of the contract.。Specifically, the mutually exclusive resources on which the transaction depends can be represented by a set of strings, FISCO BCOS exposes the interface to the developer, the developer defines the resources on which the transaction depends in the form of a string, informs the executor on the chain, and the executor automatically arranges all transactions in the block as a transaction DAG based on the transaction dependencies specified by the developer.。For example, in a simple transfer contract, the developer only needs to specify that the dependency of each transfer transaction is the sender address.+Recipient's Address。Further, if the developer introduces another third-party address in the transfer logic, the dependency needs to be defined as the sender address.+Recipient Address+The third party address.。

+In summary, in FISCO BCOS we decided to delegate the specification of transaction dependencies to developers, who are more familiar with the contents of their contracts. Specifically, the mutually exclusive resources a transaction depends on are represented by a set of strings: FISCO BCOS exposes an interface to the developer, the developer declares the resources a transaction depends on as strings and passes them to the on-chain executor, and the executor automatically arranges all transactions in the block into a transaction DAG according to the dependencies specified by the
developer. For example, in a simple transfer contract, the developer only needs to specify that the dependency of each transfer transaction is "sender address + recipient address". Further, if the developer introduces a third-party address into the transfer logic, the dependency needs to be defined as "sender address + recipient address + third-party address".

-This method is more intuitive and simple to implement, but also more general, applicable to all smart contracts, but also increases the responsibility of developers, developers must be very careful when specifying transaction dependencies, if the dependencies are not written correctly, the consequences are unpredictable.。The relevant interface for specifying dependencies will be given in a subsequent article using the tutorial, this article assumes for the time being that all the trade dependencies discussed are clear and unambiguous.。

+This method is intuitive, simple to implement, and general enough to apply to all smart contracts, but it also increases the developer's responsibility: developers must be very careful when specifying transaction dependencies, because incorrectly written dependencies have unpredictable consequences. The interface for specifying dependencies will be presented in a subsequent tutorial article; for now this article assumes that all the transaction dependencies discussed are clear and unambiguous.

-**After solving the two more important issues above, there are still some more detailed engineering issues left: such as whether parallel transactions can be mixed with non-parallel transactions for execution.?How to ensure the global uniqueness of resource strings?**

+**With the two major issues above solved, some engineering details remain: can parallel transactions be mixed with non-parallel transactions for execution? And how do we ensure the global uniqueness of resource strings?**

-The
answer is also not complicated, the former can be achieved by inserting non-parallel transactions as a barrier (barrier) into the transaction DAG - i.e., we believe that it is dependent on all of its pre-order transactions and at the same time is dependent on all of its post-order transactions -;The latter can be solved by adding a special flag to identify the contract in the transaction dependency specified by the developer.。As these problems do not affect the fundamental design of PTE, this paper will not expand。

+The answers are not complicated either. The former can be handled by inserting each non-parallel transaction into the transaction DAG as a barrier, that is, treating it as depending on all of its predecessor transactions and being depended on by all of its successor transactions; the latter can be solved by adding a special contract-identifying flag to the transaction dependencies specified by the developer. As these issues do not affect the fundamental design of PTE, this article will not expand on them.

Everything is ready, and FISCO BCOS, equipped with the new transaction execution engine PTE, is on the horizon.

@@ -77,21 +77,21 @@ Everything is ready, and FISCO BCOS with the new trade execution engine PTE is o

**The core processes of the whole architecture are as follows:**

-Users send transactions to nodes through clients such as SDKs, where transactions can be executed in parallel or not。The transactions are then synchronized between the nodes, and the node with the packaging rights invokes the packer (Sealer) to take a certain amount of transactions from the transaction pool (Tx Pool) and package them into a block.。Thereafter, the block is sent to the consensus unit (Consensus) to prepare for inter-node consensus。

+Users send transactions to nodes through clients such as SDKs; a transaction may or may not be parallelizable. The transactions are then synchronized between the nodes, and the node holding the packaging rights invokes the packer
(Sealer) to take a batch of transactions from the transaction pool (Tx Pool) and package them into a block. The block is then sent to the consensus unit (Consensus) to prepare for inter-node consensus.

-The transaction in the block needs to be executed before consensus, and this is where the PTE exerts its power.。As can be seen from the architecture diagram, the PTE first reads the transactions in the block in order and inputs them to the DAG Constructor (DAG Constructor), which constructs a transaction DAG containing all transactions based on the dependencies of each transaction, and the PTE then wakes up the worker thread pool and uses multiple threads to execute the transaction DAG in parallel.。The Joiner suspends the main thread until all threads in the worker thread pool finish executing the DAG. At this time, the Joiner calculates the state root and receipt root based on the modification records of each transaction to the state, and returns the execution results to the upper caller.。

+The transactions in the block must be executed before consensus is reached, and this is where PTE exerts its power. As the architecture diagram shows, PTE first reads the transactions in the block in order and feeds them to the DAG Constructor, which builds a transaction DAG covering all transactions according to each transaction's dependencies; PTE then wakes up the worker thread pool and executes the transaction DAG in parallel on multiple threads. The Joiner suspends the main thread until all threads in the worker thread pool finish executing the DAG.
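The parallel phase just described, where workers repeatedly take a ready (zero in-degree) transaction, execute it, and release its successors, can be sketched as a toy scheduler (hypothetical Python, not the C++ PTE implementation; `execute_dag`, `deps`, and `run` are illustrative names):

```python
import queue
import threading

def execute_dag(deps, run, workers=4):
    """deps maps tx -> list of txs it depends on; run executes one tx."""
    if not deps:
        return
    in_degree = {tx: len(d) for tx, d in deps.items()}
    successors = {tx: [] for tx in deps}
    for tx, d in deps.items():
        for parent in d:
            successors[parent].append(tx)
    ready = queue.Queue()              # the queue of zero-in-degree transactions
    for tx, n in in_degree.items():
        if n == 0:
            ready.put(tx)
    lock = threading.Lock()
    remaining = [len(deps)]

    def worker():
        while True:
            tx = ready.get()
            if tx is None:             # sentinel: the DAG is fully executed
                return
            run(tx)                    # execute outside the lock, in parallel
            with lock:
                remaining[0] -= 1
                finished = remaining[0] == 0
                for s in successors[tx]:
                    in_degree[s] -= 1
                    if in_degree[s] == 0:
                        ready.put(s)   # successor became ready
            if finished:
                for _ in range(workers):
                    ready.put(None)    # wake every worker so it can exit

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Usage: A->B is independent; D->E must wait for C->D.
order, order_lock = [], threading.Lock()
def record(tx):
    with order_lock:
        order.append(tx)

execute_dag({"A->B": [], "C->D": [], "D->E": ["C->D"]}, record, workers=2)
assert order.index("C->D") < order.index("D->E")
```

A barrier transaction would simply be given edges to every earlier and every later transaction, serializing execution at that point.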
At this time, the Joiner calculates the state root and receipt root from each transaction's state-modification records and returns the execution results to the upper-layer caller.

-After the transaction is completed, if the status of each node is consistent, a consensus is reached, and the block is then written to the underlying storage (Storage) and permanently recorded on the blockchain.。

+After execution completes, if the states of all nodes are consistent, consensus is reached; the block is then written to the underlying storage (Storage) and permanently recorded on the blockchain.

## Core algorithm

-### 1. The data structure of the transaction DAG.

+### 1. The data structure of the transaction DAG

The data structure of the transaction DAG is shown in the following figure:

![](../../../../images/articles/dag-based_parallel_transaction_execution_engine/IMG_5178.PNG)

-**Vertex Class**For the most basic type, in the trading DAG, each Vertex instance represents a trade.。The Vertex class contains:

+**Vertex class**: the most basic type; in the transaction DAG, each Vertex instance represents a transaction. The Vertex class contains:

- **inDegree**: the in-degree of this vertex
- **outEdges**: stores the vertex's outgoing-edge information, i.e., the ID list of all vertices its outgoing edges connect to

@@ -105,20 +105,20 @@ The data structure of the transaction DAG is shown in the following figure:

- **void generate() interface**: once all edge relationships have been entered, call this method to initialize the topLevel member
- **ID waitPop() interface**: takes a vertex ID with in-degree 0 from topLevel

-**TxDAG class**is the encapsulation of the DAG class to a higher level and is the bridge between the DAG and the transaction, which contains.
+**TxDAG class**: a higher-level encapsulation of the DAG class and the bridge between the DAG and the transactions. It contains:

- **dag**: the DAG class instance it holds
- **exeCnt**: the number of transactions executed so far
- **totalTxs**: the total number of transactions
- **txs**: the list of transactions in the block

-### 2. The construction process of the transaction DAG.

+### 2. The construction process of the transaction DAG

-When constructing a transaction DAG, the DAG constructor first sets the value of the totalTxs member to the total number of transactions in the block and initializes the dag object based on the total number of transactions.。Subsequently, initialize an empty resource mapping table criticalFields and scan each transaction one by one in order。

+When constructing a transaction DAG, the DAG constructor first sets the totalTxs member to the total number of transactions in the block and initializes the dag object accordingly. It then initializes an empty resource mapping table, criticalFields, and scans the transactions one by one in order.

For a transaction tx, the DAG constructor resolves all of the transaction's dependencies; for each dependency d it queries criticalFields, and if a previous transaction also depends on d, it builds an edge between the two transactions and updates the mapping of d in criticalFields to the ID of tx.

-The pseudo-code for the transaction DAG construction process is as follows.

+The pseudo-code for the transaction DAG construction process is as follows:

``` criticalFields ← map(); @@ -140,9 +140,9 @@ dag.generate(); ### 3.
Execution process of the transaction DAG

-When a PTE is created, a worker thread pool is generated for executing the transaction DAG according to the configuration, the size of the thread pool is equal to the number of logical cores of the CPU by default, and the life cycle of this thread pool is the same as the life cycle of the PTE.。The worker thread will continuously call the waitPop method of the dag object to take out the ready transaction with an entry of 0 and execute it, and after execution, the entry of all subsequent dependent tasks of the transaction is reduced by 1, and if the entry of the transaction is reduced to 0, the transaction is added to the topLevel.。Loop the above process until the trade DAG is executed。

+When a PTE is created, it generates a worker thread pool for executing the transaction DAG according to the configuration; by default the pool size equals the number of logical CPU cores, and the pool's life cycle matches that of the PTE. Each worker thread repeatedly calls the dag object's waitPop method to take out a ready transaction with in-degree 0 and executes it; after execution, the in-degree of every transaction that depends on it is reduced by 1, and any transaction whose in-degree drops to 0 is added to topLevel. This loop repeats until the whole transaction DAG has been executed.

-The pseudocode for the transaction DAG execution process is as follows.

+The pseudocode for the transaction DAG execution process is as follows:

``` while exeCnt < totalTxs do @@ -161,13 +161,13 @@ end

## Performance evaluation

-We chose two benchmark programs to test how PTE has changed the performance of FISCO BCOS, namely, a transfer contract based on a pre-compiled framework implementation and a transfer contract written in the Solidity language, with the following code paths for the two contracts.
+We chose two benchmark programs to test how PTE changes the performance of FISCO BCOS: a transfer contract implemented on the precompiled-contract framework, and a transfer contract written in Solidity. The code paths of the two contracts are:

FISCO-BCOS/libprecompiled/extension/DagTransferPrecompiled.cpp

web3sdk/src/test/resources/contract/ParallelOk.sol

-We use a single node chain for testing, because we mainly focus on the transaction processing performance of PTE, so we do not consider the impact of network and storage latency.。

+We used a single-node chain for testing: since we mainly focus on the transaction processing performance of PTE itself, the impact of network and storage latency is not considered.

**The basic hardware information of the test environment is shown in the following table**:

@@ -177,13 +177,13 @@ We use a single node chain for testing, because we mainly focus on the transacti

![](../../../../images/articles/dag-based_parallel_transaction_execution_engine/IMG_5180.JPG)

-In the performance test section, we mainly test the transaction processing capabilities of PTE and Serial Transaction Execution (Serial) under each test program.。It can be seen that compared with the serial execution mode, PTE has achieved a speedup of 2.91 and 2.69 times from left to right, respectively。PTE has excellent performance for both pre-compiled and Solidity contracts。

+In the performance test, we measured the transaction processing capability of PTE versus serial transaction execution (Serial) under each benchmark. Compared with the serial execution mode, PTE achieves speedups of 2.91x and 2.69x (from left to right, respectively); PTE performs well for both precompiled and Solidity contracts.

### 2.
Scalability testing ![](../../../../images/articles/dag-based_parallel_transaction_execution_engine/IMG_5181.JPG) -In the scalability test section, we mainly test the transaction processing power of PTE at different CPU core numbers, using a benchmark program based on a pre-compiled framework to implement a transfer contract。As can be seen, the transaction throughput of PTE increases approximately linearly as the number of cores increases。However, it can also be seen that as the number of cores increases, the rate of performance growth slows down because the overhead of inter-thread scheduling and synchronization increases as the number of cores increases.。 +In the scalability test section, we mainly test the transaction processing power of PTE at different CPU core numbers, using a benchmark program based on a pre-compiled framework to implement a transfer contract。As can be seen, the transaction throughput of PTE increases approximately linearly as the number of cores increases。However, it can also be seen that as the number of cores increases, the rate of performance growth slows down because the overhead of inter-thread scheduling and synchronization increases as the number of cores increases。 #### Write at the end diff --git a/3.x/en/docs/articles/3_features/30_architecture/distributed_storage_design.md b/3.x/en/docs/articles/3_features/30_architecture/distributed_storage_design.md index ddc433386..65ca10f36 100644 --- a/3.x/en/docs/articles/3_features/30_architecture/distributed_storage_design.md +++ b/3.x/en/docs/articles/3_features/30_architecture/distributed_storage_design.md @@ -4,48 +4,48 @@ Author: Mo Nan | Senior Architect, FISCO BCOS FISCO BCOS 2.0 adds support for distributed data storage, overcoming many of the limitations of localized data storage。 -In FISCO BCOS 1.0, nodes use MPT data structure to store data locally through LevelDB, which is limited by the size of the local disk, and when the volume of business increases, the data will expand 
dramatically, and data migration is also very complex, bringing greater cost and maintenance difficulty to data storage.。
+In FISCO BCOS 1.0, nodes use MPT data structure to store data locally through LevelDB, which is limited by the size of the local disk, and when the volume of business increases, the data will expand dramatically, and data migration is also very complex, bringing greater cost and maintenance difficulty to data storage。

-In order to break through the bottleneck of performance, we redesigned the underlying storage in FISCO BCOS 2.0, implemented distributed storage, and used a different way from MPT to achieve traceability, bringing performance improvements.。
+In order to break through the bottleneck of performance, we redesigned the underlying storage in FISCO BCOS 2.0, implemented distributed storage, and used a different way from MPT to achieve traceability, bringing performance improvements。

-Let's start with the advantages of distributed storage solutions.
+Let's start with the advantages of distributed storage solutions:

-- Supports multiple storage engines and highly available distributed storage systems to support data expansion easily and quickly;
-- The calculation and data are isolated, and node failures will not cause data anomalies;
-- Data is stored remotely, and data can be stored in a more secure quarantine area, which makes sense in many scenarios;
-- Distributed storage not only supports Key-The value form also supports SQL, making business development easier.;
-- The storage of world state is changed from the original MPT storage structure to distributed storage, which avoids the problem of performance degradation caused by the rapid expansion of world state.;
-- Optimize the structure of data storage and save more storage space。
+- Supports multiple storage engines and highly available distributed storage systems, enabling simple and rapid data expansion;
+- Computation and data are isolated, so node failures will not cause data anomalies;
+- Data is stored remotely and can be kept in a more secure quarantine area, which is very meaningful in many scenarios;
+- Distributed storage supports not only the Key-Value form but also SQL, making business development easier;
+- The storage of the world state is changed from the original MPT structure to distributed storage, avoiding the performance degradation caused by the rapid expansion of the world state;
+- Optimized the structure of data storage, saving more storage space。

## From MPT storage to distributed storage

### MPT Storage

-**MPT(Merkle Paricia Trie)**From Ethereum, the external interface is Key-Value, which uses a prefix tree to store data, is the storage mode of FISCO BCOS 1.0。
+**MPT (Merkle Patricia Trie)**, originating from Ethereum, exposes a Key-Value external interface and uses a prefix tree to store data; it is the storage mode of FISCO BCOS 1.0。

-MPT is a prefix tree structure, each leaf node in the tree is allowed to have up to 16 child leaf nodes, and the leaf node has a HASH field, which is derived from the HASH operation of all child leaf nodes of the leaf.。The root of the tree has a unique hash value, which identifies the hash of the entire tree.。
+MPT is a prefix tree structure in which each node is allowed to have up to 16 child nodes, and each node has a HASH field derived from the HASH operation over all of its child nodes。The root of the tree has a unique hash value, which identifies the hash of the entire tree。

![](../../../../images/articles/distributed_storage_design/IMG_5088.JPG)

Image from Ethereum White Paper

-The global state data of Ethereum, which is stored in the MPT tree. The state data consists of accounts。An account is a leaf node in MPT. Account data includes Nonce, Balance, CodeHash, and StorageRoot.。When any account field changes, the hash of the leaf where the account is located changes.
The hash of all leaves from the leaf to the top changes, and finally the top StateRoot changes.。 +The global state data of Ethereum, which is stored in the MPT tree. The state data consists of accounts。An account is a leaf node in MPT. Account data includes Nonce, Balance, CodeHash, and StorageRoot。When any account field changes, the hash of the leaf where the account is located changes. The hash of all leaves from the leaf to the top changes, and finally the top StateRoot changes。 -Thus, any change in any field of any account will result in a change in StateRoot, which can uniquely identify the global state of Ethereum.。 +Thus, any change in any field of any account will result in a change in StateRoot, which can uniquely identify the global state of Ethereum。 ![](../../../../images/articles/distributed_storage_design/IMG_5089.JPG) Image from Ethereum White Paper -MPT can implement light client and data traceability, and can query the status of blocks through StateRoot.。MPT brings a lot of HASH computing, breaking the continuity of the underlying data storage.。In terms of performance, MPT State has a natural disadvantage。It can be said that MPT State pursues the ultimate provability and traceability, and compromises performance and scalability.。 +MPT can implement light client and data traceability, and can query the status of blocks through StateRoot。MPT brings a lot of HASH computing, breaking the continuity of the underlying data storage。In terms of performance, MPT State has a natural disadvantage。It can be said that MPT State pursues the ultimate provability and traceability, and compromises performance and scalability。 ### distributed storage -FISCO BCOS 2.0 introduces high-scalability, high-throughput, high-availability, high-performance distributed storage while maintaining storage interface compatibility。**distributed storage(Advanced Mass Database,AMDB)**: Re-abstracts the underlying storage model of the blockchain, implements an SQL-like abstract 
storage interface, and supports a variety of back-end databases, including KV databases and relational databases。 After the introduction of distributed storage, data read and write requests directly access the storage without MPT, combined with the cache mechanism, the storage performance is greatly improved compared to MPT-based storage.。MPT data structure remains, only as an option。 +FISCO BCOS 2.0 introduces high-scalability, high-throughput, high-availability, high-performance distributed storage while maintaining storage interface compatibility。**distributed storage(Advanced Mass Database,AMDB)**: Re-abstracts the underlying storage model of the blockchain, implements an SQL-like abstract storage interface, and supports a variety of back-end databases, including KV databases and relational databases。 After the introduction of distributed storage, data read and write requests directly access the storage without MPT, combined with the cache mechanism, the storage performance is greatly improved compared to MPT-based storage。MPT data structure remains, only as an option。 ![](../../../../images/articles/distributed_storage_design/IMG_5090.JPG) -Distributed storage supports relational databases such as MySQL and parallel expansion methods such as MySQL clusters, sub-databases and sub-tables. Theoretically, the storage capacity is unlimited.。 +Distributed storage supports relational databases such as MySQL and parallel expansion methods such as MySQL clusters, sub-databases and sub-tables. Theoretically, the storage capacity is unlimited。 ## distributed storage architecture @@ -53,11 +53,11 @@ Distributed storage supports relational databases such as MySQL and parallel exp #### State layer (State) -Abstracts the storage access interface of the smart contract, which is called by the EVM and divided into StorageState and MPTState.。StorageState is the adaptation layer of distributed storage. MPTState is the old MPT adaptation layer. 
FISCO BCOS uses StorageState by default.。 +Abstracts the storage access interface of the smart contract, which is called by the EVM and divided into StorageState and MPTState。StorageState is the adaptation layer of distributed storage. MPTState is the old MPT adaptation layer. FISCO BCOS uses StorageState by default。 #### Distributed Storage Layer (Table) -Abstracts the SQL-like interface for distributed storage, which is called by the State layer and Precompiled.。The distributed storage layer abstracts the storage addition, deletion, modification, and lookup interfaces to classify and store the core data of the blockchain in different tables.。 +Abstracts the SQL-like interface for distributed storage, which is called by the State layer and Precompiled。The distributed storage layer abstracts the storage addition, deletion, modification, and lookup interfaces to classify and store the core data of the blockchain in different tables。 #### Drive Layer (Storage) @@ -67,7 +67,7 @@ Implement specific database access logic, including LevelDB and MySQL。 #### Table -Store all data in a table。The mapping between the master key of distributed storage and the corresponding Entries in Table. You can add, delete, modify, and query the master key of distributed storage. Conditional filtering is supported.。 +Store all data in a table。The mapping between the master key of distributed storage and the corresponding Entries in Table. You can add, delete, modify, and query the master key of distributed storage. Conditional filtering is supported。 #### Entries @@ -75,22 +75,22 @@ Entries in which the same master key is stored, array。The master key of distri #### Entry -Corresponding to a row in the table, each row takes the column name as the key and the corresponding value as the value, forming the KV structure。Each entry has its own distributed storage master key. 
Different entries can have the same distributed storage master key.。 +Corresponding to a row in the table, each row takes the column name as the key and the corresponding value as the value, forming the KV structure。Each entry has its own distributed storage master key. Different entries can have the same distributed storage master key。 #### Condition -The "delete, modify and check" interface in Table can pass in conditions, and supports filtering logic such as "equal to," "greater than" and "less than." The interface filters the data according to the conditions and performs corresponding operations to return the result data.。If the condition is empty, no filtering is done。 +The "delete, modify and check" interface in Table can pass in conditions, and supports filtering logic such as "equal to," "greater than" and "less than." The interface filters the data according to the conditions and performs corresponding operations to return the result data。If the condition is empty, no filtering is done。 ##### Example -Explain the above terms using the example of a company's employee registration form for receiving materials. 
+Explain the above terms using the example of a company's employee registration form for receiving materials。

![](../../../../images/articles/distributed_storage_design/IMG_5092.PNG)

- Name in the table is the primary key of distributed storage。
-- One Entry for each line in the table。There are four Entries, each with three fields。
-- Name is the primary key in Table, and there are 3 Entries objects。There are two records of Alice in the first Entries, one record of Bob in the second Entries, and one record of Chris in the third Entries。
-- When calling the query interface of the Table class, the query interface needs to specify the distributed storage master key and conditions, set the distributed storage master key of the query to Alice, and set the condition to price > 40, Entry1 will be queried。
+- One Entry for each line in the table。There are four Entries, each with three fields。
+- Name is the primary key in Table, and there are 3 Entries objects。There are two records of Alice in the first Entries, one record of Bob in the second Entries, and one record of Chris in the third Entries。
+- When calling the query interface of the Table class, you need to specify the distributed storage master key and a condition; setting the master key to Alice and the condition to price > 40 will query Entry1。

### Distributed Storage Table Classification

@@ -98,14 +98,14 @@ All entries in the table will have _ status _, _ num _, _ hash _ built-in fields

#### System tables

-System tables exist by default. The storage driver ensures the creation of system tables.。
+System tables exist by default. The storage driver ensures the creation of system tables。

![](../../../../images/articles/distributed_storage_design/IMG_5093.JPG)

#### User Table

-A table created by a user CRUD contract.
The name of the table is _ user _ < TableName >.。
+A table created by a user CRUD contract, with _ user _ < TableName > as the table name; the underlying layer automatically adds the _ user _ prefix。

#### StorageState Account Table

@@ -115,9 +115,9 @@ _contract_data_+Address+_ as table name。Table stores information about externa

## SUMMARY

-FISCO BCOS has experienced a lot of real business practices since its release.。In the process of continuous improvement, distributed storage has summed up a storage model suitable for financial business, high performance, high availability and high scalability, the architecture is becoming more stable and mature, and distributed storage will continue to be the cornerstone of blockchain systems in the future to support the development of blockchain systems.。
+FISCO BCOS has experienced a lot of real business practices since its release。In the process of continuous improvement, distributed storage has summed up a storage model suitable for financial business, high performance, high availability and high scalability, the architecture is becoming more stable and mature, and distributed storage will continue to be the cornerstone of blockchain systems in the future to support the development of blockchain systems。

-In the next article, I will provide the experience process of distributed storage.
Please continue to lock in the FISCO BCOS open source community。 ### Series selection @@ -125,11 +125,11 @@ In the next article, I will provide the experience process of distributed storag #### principle analysis -[Design of Group Architecture](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485338&idx=1&sn=9ce03340c699a8527960a0d0b26d4923&chksm=9f2ef586a8597c9003192718c1f60ed486570f6a334c9713cc7e99ede91c6f3ddcd7f438821f&token=705851025&lang=zh_CN#rd): Make it as easy as group chat to establish a multi-party collaborative business relationship between enterprises.。 +[Design of Group Architecture](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485338&idx=1&sn=9ce03340c699a8527960a0d0b26d4923&chksm=9f2ef586a8597c9003192718c1f60ed486570f6a334c9713cc7e99ede91c6f3ddcd7f438821f&token=705851025&lang=zh_CN#rd): Make it as easy as group chat to establish a multi-party collaborative business relationship between enterprises。 #### Using tutorials -[Group Structure Practice Exercise](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485337&idx=1&sn=622e88b631ae1bfe5789b2fe21576779&chksm=9f2ef585a8597c9311c972eb67174b3638f7b69d87d6eea243fc327bf515159fb53f216a5fec&token=705851025&lang=zh_CN#rd): Take building an arbitration chain as an example and demonstrate how to send transactions to that chain.。 +[Group Structure Practice Exercise](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485337&idx=1&sn=622e88b631ae1bfe5789b2fe21576779&chksm=9f2ef585a8597c9311c972eb67174b3638f7b69d87d6eea243fc327bf515159fb53f216a5fec&token=705851025&lang=zh_CN#rd): Take building an arbitration chain as an example and demonstrate how to send transactions to that chain。 diff --git a/3.x/en/docs/articles/3_features/30_architecture/distributed_storage_experience.md b/3.x/en/docs/articles/3_features/30_architecture/distributed_storage_experience.md index cb3373606..42705c5dc 100644 --- 
a/3.x/en/docs/articles/3_features/30_architecture/distributed_storage_experience.md
+++ b/3.x/en/docs/articles/3_features/30_architecture/distributed_storage_experience.md
@@ -2,27 +2,27 @@

Author: Mo Nan | Senior Architect, FISCO BCOS

-After the release of the Distributed Storage Architecture Design article, community members are very concerned about the technology kernel and its use.。Team and community enthusiastic small partners, industry experts for distributed storage, a lot of discussion。Here, share your insights or help you better understand and use distributed storage:
+After the release of the Distributed Storage Architecture Design article, community members have shown great interest in its technical internals and usage。Enthusiastic members of the team and community, as well as industry experts, have discussed distributed storage at length。Here we share these insights to help you better understand and use distributed storage:

-- FISCO BCOS 2.0 distributed storage using library table style, CRUD operation in line with business habits。
-- No contract storage variable pattern, deconstructs the embedded coupling of contract and data, and makes contract upgrades easier.。
+- The distributed storage of FISCO BCOS 2.0 adopts the library table style, and CRUD operations conform to business habits。
+- No contract storage variable mode: it deconstructs the embedded coupling of contract and data, making contract upgrades easier。
- Storage access engine logic and data structure more intuitive, easy to adapt to a variety of storage engines, large expansion space。
-- The data itself is stored deterministically, without the MPT tree-like intertwined relationship, making it easier to take snapshots and cut migrations.。
-- Table plus primary key structure index data, access efficiency is high, concurrent access is easier。
-- With less storage overhead, the capacity model is linearly related to the number of transactions and states, making it easier to predict business capacity, which is
very meaningful for massive services.。
-- In terms of details, the state MPT is weakened, but the transaction and receipt MPT are retained, and the light client can still be supported, using process proof and existence proof, without relying on the volatile state, without affecting the implementation of cross-chain.。
-- The state is tested by incremental HASH, and the state generated by each transaction is rigorously tested across the network to ensure consistency.。
-- Initially built for SQL types, it can support engines such as MySQL and Oracle, and then adapt to NoSQL types such as LevelDB.。More high-speed and mass storage engines will be adapted in the future, and the optimal solution will be explored in the triangular relationship of [single io delay / concurrency efficiency / capacity expansion].。
+- The data itself is stored deterministically, without the intertwined relationships of MPT trees, making it easier to take snapshots and to cut over and migrate。
+- Table-plus-primary-key indexing of data gives high access efficiency and easier concurrent access。
+- Less storage overhead: the capacity model is linearly related to the number of transactions and states, making it easier to predict business capacity, which is very meaningful for massive services。
+- In terms of details, the state MPT is weakened, but the transaction and receipt MPT are retained, and the light client can still be supported, using process proof and existence proof, without relying on the volatile state, and without affecting the implementation of cross-chain。
+- The state is checked by incremental HASH, and the state generated by each block of transactions is rigorously checked across the network to ensure consistency。
+- Initially built for SQL types, it can support engines such as MySQL and Oracle, and then adapt to NoSQL types such as LevelDB。More high-speed and mass storage engines will be adapted in the future, and the optimal solution will be explored in the triangular relationship of [single io delay / concurrency efficiency / capacity expansion]。

-Although distributed storage is a big project (it took several fast shooters a year before they dared to take it out to meet people), it is very simple to use, and this article will talk about the experience process of distributed storage.。Initial contact with users, it is recommended to start from the previous article (click the title to jump directly) → [Distributed storage architecture design](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485336&idx=1&sn=ea3a7119634c1c27daa4ec2b9a9f278b&chksm=9f2ef584a8597c9288f8c5000c7def47c3c5b9dc64f25221985cd9e3743b9364a93933e51833&token=705851025&lang=zh_CN#rd)
+Although distributed storage is a big project (it took several fast-working developers a year before they dared to show it to the public), it is very simple to use, and this article walks through the experience process of distributed storage。New users are advised to start from the previous article (click the title to jump directly) → [Distributed storage architecture design](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485336&idx=1&sn=ea3a7119634c1c27daa4ec2b9a9f278b&chksm=9f2ef584a8597c9288f8c5000c7def47c3c5b9dc64f25221985cd9e3743b9364a93933e51833&token=705851025&lang=zh_CN#rd)

## Configure Distributed Storage

-Distributed storage supports multiple storage engines and can be configured with different storage engines based on business requirements and deployment environment.。
+Distributed storage supports multiple storage engines and can be configured with different storage engines based on business requirements and deployment environment。

-The basic data such as blocks and transactions of the blockchain are stored in a library table
structure, and the state data can be stored in a library table structure or MPT to meet the needs of different scenarios。 -The configuration items of distributed storage are located in the configuration file of the group. Each group can use a separate storage policy. The group configuration file is located in the path named conf / group. [group number] .genesis in the blockchain node, such as group.1.genesis. Once the group is started, the related configuration of the distributed storage of the group cannot be changed.。 +The configuration items of distributed storage are located in the configuration file of the group. Each group can use a separate storage policy. The group configuration file is located in the path named conf / group. [group number] .genesis in the blockchain node, such as group.1.genesis. Once the group is started, the related configuration of the distributed storage of the group cannot be changed。 **An example of a distributed storage configuration item is as follows:** @@ -32,15 +32,15 @@ type = LevelDB: DB engine type for distributed storage [state] -type = storage: the state type. Currently, storage state and MPT state are supported. The default value is storage state. +type = storage: the state type. Currently, storage state and MPT state are supported. The default value is storage state **Recommended storage state**, **Unless MPT must be used to trace the global historical state**,**MPT State not recommended**。 ## Using CRUD Smart Contract Development -Distributed storage provides a dedicated CRUD interface that allows contracts to directly access the underlying storage tables.。 +Distributed storage provides a dedicated CRUD interface that allows contracts to directly access the underlying storage tables。 -To access CRUD, you need to reference the Table.sol interface, a smart contract dedicated to distributed storage. This interface is a database contract. 
You can create tables and add, delete, and query tables.。
+To access CRUD, you need to reference the Table.sol interface, a smart contract dedicated to distributed storage. This interface is a database contract. You can create tables and add, delete, and query tables。

**Quoting Table.sol**

@@ -50,7 +50,7 @@ import "./Table.sol";

**The Table.sol interface includes**:

-- createTable / / Create a table
+- createTable / / Create a table
- select(string, Condition) / / Query data
- insert(string, Entry) / / Insert data
- update(string, Entry, Condition) / / Update data
@@ -64,7 +64,7 @@ import "./Table.sol";

/ / The address of TableFactory is fixed to 0x1001
TableFactory tf = TableFactory(0x1001);

-/ / Create the t _ test table. The key _ field of the table is name, and the value _ field is item _ id and item _ name.
+/ / Create the t _ test table. The key _ field of the table is name, and the value _ field is item _ id and item _ name
/ / key _ field indicates a column in the distributed storage master key value _ field indicates a column in the table, which can have multiple columns, separated by commas
int count = tf.createTable("t_test", "name", "item_id,item_name");
```
@@ -75,7 +75,7 @@ int count = tf.createTable("t_test", "name", "item_id,item_name");

TableFactory tf = TableFactory(0x1001);
Table table = tf.openTable("t_test");

-/ / If the condition is empty, you can filter without filtering or use the condition as needed.
+/ / If the condition is empty, you can filter without filtering or use the condition as needed Condition condition = table.newCondition(); Entries entries = table.select(name, condition); @@ -126,7 +126,7 @@ int count = table.remove(name, condition); #### PS -The optimization of storage architecture is a basic project, but also a big project.。The shift in implementation is actually an evolution of the architectural worldview, and the impact will be more profound than the functional points seen.。This second article is only the tip of the iceberg of distributed storage.。For more principles and use cases, please refer to: https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/smart_contract.html +The optimization of storage architecture is a basic project, but also a big project。The shift in implementation is actually an evolution of the architectural worldview, and the impact will be more profound than the functional points seen。This second article is only the tip of the iceberg of distributed storage。For more principles and use cases, please refer to: https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/smart_contract.html ### Series selection @@ -134,8 +134,8 @@ The optimization of storage architecture is a basic project, but also a big proj #### principle analysis -[Design of Group Architecture](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485338&idx=1&sn=9ce03340c699a8527960a0d0b26d4923&chksm=9f2ef586a8597c9003192718c1f60ed486570f6a334c9713cc7e99ede91c6f3ddcd7f438821f&token=705851025&lang=zh_CN#rd): Make it as easy as group chat to establish a multi-party collaborative business relationship between enterprises.。 +[Design of Group Architecture](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485338&idx=1&sn=9ce03340c699a8527960a0d0b26d4923&chksm=9f2ef586a8597c9003192718c1f60ed486570f6a334c9713cc7e99ede91c6f3ddcd7f438821f&token=705851025&lang=zh_CN#rd): Make it as easy as group chat to establish a 
multi-party collaborative business relationship between enterprises。 #### Using tutorials -[Group Structure Practice Exercise](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485337&idx=1&sn=622e88b631ae1bfe5789b2fe21576779&chksm=9f2ef585a8597c9311c972eb67174b3638f7b69d87d6eea243fc327bf515159fb53f216a5fec&token=705851025&lang=zh_CN#rd): Take building an arbitration chain as an example and demonstrate how to send transactions to that chain.。 \ No newline at end of file +[Group Structure Practice Exercise](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485337&idx=1&sn=622e88b631ae1bfe5789b2fe21576779&chksm=9f2ef585a8597c9311c972eb67174b3638f7b69d87d6eea243fc327bf515159fb53f216a5fec&token=705851025&lang=zh_CN#rd): Take building an arbitration chain as an example and demonstrate how to send transactions to that chain。 \ No newline at end of file diff --git a/3.x/en/docs/articles/3_features/30_architecture/group_architecture_design.md b/3.x/en/docs/articles/3_features/30_architecture/group_architecture_design.md index 58744edf8..a5b78aba6 100644 --- a/3.x/en/docs/articles/3_features/30_architecture/group_architecture_design.md +++ b/3.x/en/docs/articles/3_features/30_architecture/group_architecture_design.md @@ -2,25 +2,25 @@ Author : Chen Yujie | FISCO BCOS Core Developer -In order to facilitate enterprises and developers to have a deeper understanding of the many new features of FISCO BCOS 2.0 and to use FISCO BCOS to build alliance chain applications more quickly, we have launched the FISCO BCOS 2.0 series analysis program.。In the follow-up push, we will launch a series of articles such as "FISCO BCOS 2.0 Principle Analysis," "FISCO BCOS 2.0 Usage Tutorial," "FISCO BCOS 2.0 Source Code Analysis," etc., to disassemble FISCO BCOS 2.0 in a comprehensive manner.。 +In order to facilitate enterprises and developers to have a deeper understanding of the many new features of FISCO BCOS 2.0 and to use FISCO BCOS to build alliance chain applications more 
quickly, we have launched the FISCO BCOS 2.0 series analysis program。In the follow-up push, we will launch a series of articles such as "FISCO BCOS 2.0 Principle Analysis," "FISCO BCOS 2.0 Usage Tutorial," "FISCO BCOS 2.0 Source Code Analysis," etc., to disassemble FISCO BCOS 2.0 in a comprehensive manner。 -This article is the first in a series of principle analysis, introducing the main line of the many new features of FISCO BCOS 2.0 - the group architecture.。It mainly includes the overall architecture design of the group architecture, which components the group architecture includes, the main functions of each component, and the interaction between the components.。 +This article is the first in a series of principle analysis, introducing the main line of the many new features of FISCO BCOS 2.0 - the group architecture。It mainly includes the overall architecture design of the group architecture, which components the group architecture includes, the main functions of each component, and the interaction between the components。 ## Design Objectives -To understand the design goals of group architecture, you can start with the group chat model that everyone is familiar with.。 +To understand the design goals of group architecture, you can start with the group chat model that everyone is familiar with。 #### Flexible expansion: ensure business access and expansion as convenient as group chat -The establishment of the group is very flexible, a few people can quickly pull a theme group to communicate.。The same person can participate in multiple groups of interest to them, sending and receiving messages in parallel。Existing groups can also continue to add members。 +The establishment of the group is very flexible, a few people can quickly pull a theme group to communicate。The same person can participate in multiple groups of interest to them, sending and receiving messages in parallel。Existing groups can also continue to add members。 -Looking back at the group architecture, 
in a network with a group architecture, there can be multiple different ledgers depending on the business scenario, and blockchain nodes can select groups to join based on business relationships and participate in the data sharing and consensus process of the corresponding ledgers.。The group architecture has good scalability, and once an organization participates in such an alliance chain, it has the opportunity to flexibly and quickly enrich business scenarios and expand business scale, while the operational complexity and management costs of the system also decrease linearly.。
+Looking back at the group architecture, in a network with a group architecture, there can be multiple different ledgers depending on the business scenario, and blockchain nodes can select groups to join based on business relationships and participate in the data sharing and consensus process of the corresponding ledgers。The group architecture has good scalability, and once an organization participates in such an alliance chain, it has the opportunity to flexibly and quickly enrich business scenarios and expand business scale, while the operational complexity and management costs of the system also decrease linearly。

#### Privacy protection: decoupling between groups to operate independently

-Recall the group chat scenario: group chat users are in your address book, are verified to add, and not in the group of users can not see the group chat information。This coincides with the alliance chain access mechanism, where the institutional identity of all participants is known.。
+Recall the group chat scenario: group chat members are in your address book and are added only after verification, and users outside the group cannot see the group chat messages。This coincides with the alliance chain access mechanism, where the institutional identity of all participants is known。

-On the other hand, in the group structure, each group independently
maintains its own transaction transactions and data, independent of other groups.。The advantage of this is that it allows groups to decouple and operate independently, resulting in better privacy isolation.。In the inter-group message exchange, the authentication information will be carried, which is credible and traceable.。
+On the other hand, in the group structure, each group independently maintains its own transactions and data, independent of other groups。The advantage of this is that it allows groups to decouple and operate independently, resulting in better privacy isolation。Inter-group message exchanges carry authentication information, making them credible and traceable。

## Architecture Design

@@ -30,17 +30,17 @@ On the other hand, in the group structure, each group independently implements t

▲ The picture shows the panorama of group architecture design

-As shown in the above figure, the group architecture is mainly divided into network layer and group layer from the bottom down, the network layer is mainly responsible for communication between blockchain nodes, the group layer is mainly responsible for processing intra-group transactions, and each group runs an independent ledger.。
+As shown in the above figure, the group architecture is mainly divided into the network layer and the group layer from the bottom up: the network layer is mainly responsible for communication between blockchain nodes, the group layer is mainly responsible for processing intra-group transactions, and each group runs an independent ledger。

### network layer

-In the group architecture, all groups share the P2P network, and the packets passed from the group layer to the network layer contain group ID information, and the receiving node passes the received packets to the corresponding group of the target node according to the group ID in the packet.。In order to isolate communication data between
groups, the group architecture introduces the**Ledger White List**mechanism, the following figure shows the process of sending and receiving messages between groups under the group architecture:
+In the group architecture, all groups share the P2P network, and the packets passed from the group layer to the network layer contain group ID information, and the receiving node passes the received packets to the corresponding group of the target node according to the group ID in the packet。In order to isolate communication data between groups, the group architecture introduces the**Ledger White List**mechanism. The following figure shows the process of sending and receiving messages between groups under the group architecture:

![](../../../../images/articles/group_architecture_design/IMG_4893.PNG)

#### Ledger White List

-Each group holds a ledger whitelist that maintains a list of nodes for that group。In order to ensure consistency within the ledger whitelist group, the ledger whitelist can only be modified by issuing a transaction consensus.。
+Each group holds a ledger whitelist that maintains the list of nodes of that group。In order to ensure the consistency of the ledger whitelist within the group, the ledger whitelist can only be modified by issuing a transaction that goes through consensus。

#### Packet sending process

@@ -50,25 +50,25 @@

For example, node0 of group1 sends the message packetA to node1 of group1:

(2) The network layer module encodes packetA and adds the group ID to the packetA header, which is recorded as{groupID(1) + packetA};

-(3) The network layer accesses the ledger whitelist to determine whether node0 is a node of group1. If node0 is not a node of group1, the packet is rejected.;If it is a group1 node, the encoded packet is sent to the destination node node1。
+(3) The network layer accesses the ledger whitelist to determine whether node0 is a node of group1. If node0 is not a node of group1, the packet is rejected;If it is a group1 node, the encoded packet is sent to the destination node node1。

#### Packet receiving process

After node1 receives the packet{groupID(1) + packetA}from node0 of group1:

(1) The network layer accesses the ledger whitelist to determine whether the source node node0 is a group1 node. If it is not a group1 node, the packet is rejected. Otherwise, the packet is passed to the decoding module;

(2) The decoding module decodes the group ID = 1 and the packet packetA from the packet and sends the packet packetA to group1。

-Through the ledger whitelist, you can effectively prevent group nodes from obtaining other group communication messages, ensuring the privacy of group network communication.。
+Through the ledger whitelist, nodes can be effectively prevented from obtaining the communication messages of other groups, ensuring the privacy of group network communication。

### Group Layer

-The group layer is the core of the group architecture。To isolate ledger data between groups, each group holds a separate ledger module.。The group layer is divided into core layer, interface layer and scheduling layer from bottom to top: the core layer provides the underlying storage and transaction
execution interface;The interface layer is the interface to access the core layer;The scheduling layer includes synchronization and consensus modules for processing transactions, synchronization transactions, and blocks。

#### Core Layer

-Mainly includes storage(AMDB/storage/state)and implementation(EVM)Two modules。Stores block data, block execution results, block information, and system tables that are responsible for storing or reading group ledgers from the underlying database.。Execute(EVM)The module is mainly responsible for executing transactions.。
+The core layer mainly includes two modules: storage(AMDB/storage/state)and execution(EVM)。The storage module stores block data, block execution results, block information and system tables, and is responsible for writing group ledgers to, or reading them from, the underlying database。The execution(EVM)module is mainly responsible for executing transactions。

#### interface layer

@@ -76,15 +76,15 @@ Interface layer includes transaction pool(TxPool), Blockchain(BlockChain)and blo

##### Module 1: Transaction Pool(TxPool)

-The transaction pool is the interface between the client and the scheduling layer, responsible for new transactions received from the client or other nodes, the consensus module will take out the transaction packaging processing, the synchronization module will take out the new transaction from the broadcast.。
+The transaction pool is the interface between the client and the scheduling layer. It caches new transactions received from the client or from other nodes;the consensus module takes transactions from it for packaging, and the synchronization module takes new transactions from it for broadcasting。

##### Module 2: Blockchain(BlockChain)

-The blockchain module is the interface between the core layer and the scheduling layer, and is the only entry for the scheduling layer to access the underlying storage and execution modules, through which the scheduling layer can submit new blocks and block execution results, query historical blocks and other information.。The blockchain module is also the interface between the RPC module and the core layer. The RPC module can obtain information such as blocks, block heights, and transaction execution results through the blockchain module.。
+The blockchain module is the interface between the core layer and the scheduling layer, and is the only entry for the scheduling layer to access the underlying storage and execution modules, through which the scheduling layer can submit new blocks and block execution results, query historical blocks and other information。The blockchain module is also the interface between the RPC module and the core layer. The RPC module can obtain information such as blocks, block heights, and transaction execution results through the blockchain module。

##### Module 3: Block Executor(BlockVerifier)

-Interacts with the scheduling layer to execute the blocks passed in from the scheduling layer and returns the block execution results to the scheduling layer.。
+The block executor interacts with the scheduling layer: it executes the blocks passed in from the scheduling layer and returns the block execution results to the scheduling layer。

#### scheduling layer

@@ -94,22 +94,22 @@ Scheduling layer includes consensus module(Consensus)and synchronization module(

The consensus module is primarily responsible for executing the transactions submitted by the client and reaching consensus on the results of the transaction execution。

-As shown below, the consensus module includes packaging(Sealer)Threads and consensus(Engine)thread, the Sealer thread is responsible for getting unexecuted transactions from the
transaction pool and packaging them into blocks;The Engine thread is responsible for consensus on block execution results, and currently supports PBFT and Raft consensus algorithms。 ![](../../../../images/articles/group_architecture_design/IMG_4894.PNG) The main processes of the consensus module include: -(1) After the transaction submitted by the client is cached in the TxPool, the Sealer thread of the consensus node is awakened, and the Sealer thread obtains the latest transaction from the transaction pool and packages and generates a new block blockI based on the current highest block.; +(1) After the transaction submitted by the client is cached in the TxPool, the Sealer thread of the consensus node is awakened, and the Sealer thread obtains the latest transaction from the transaction pool and packages and generates a new block blockI based on the current highest block; (2) The Sealer thread passes the new block blockI generated by the package to the Engine thread for consensus; -(3) After receiving the new block blockI, the Engine thread starts the consensus process. During the consensus process, the block executor BlockVerifier is called to execute each transaction in the block blockI and reach a consensus on the execution results.; +(3) After receiving the new block blockI, the Engine thread starts the consensus process. 
During the consensus process, the block executor BlockVerifier is called to execute each transaction in the block blockI and reach a consensus on the execution results;

-(4) If the consensus is successful, call BlockChain to submit the new block blockI and block execution results to the underlying database.;
+(4) If the consensus is successful, call BlockChain to submit the new block blockI and block execution results to the underlying database;

-(5) After the new block blockI is successfully linked, the transaction pool is triggered to delete all transactions in the above chain and the transaction execution results are pushed to the client in the form of callbacks.。
+(5) After the new block blockI is successfully committed to the chain, the transaction pool is triggered to delete all on-chain transactions, and the transaction execution results are pushed to the client in the form of callbacks。

##### Module 2: Synchronization(Sync)Module

@@ -119,7 +119,7 @@ The synchronization module mainly includes transaction synchronization and block

##### Transaction synchronization

-When the client submits a new transaction to the specified group transaction pool through RPC, it wakes up the transaction synchronization thread of the corresponding group synchronization module, which broadcasts all newly received transactions to other group nodes, and other group nodes insert the latest transactions into the transaction pool to ensure that each group node has the full amount of transactions.。
+When the client submits a new transaction to the specified group's transaction pool through RPC, it wakes up the transaction synchronization thread of the corresponding group's synchronization module, which broadcasts all newly received transactions to the other nodes of the group, and those nodes insert the latest transactions into their transaction pools to ensure that every node of the group has the full set of transactions。

As shown in the following figure, after the client sends transaction tx_j to group1 and tx_i to group2, the transaction synchronization thread broadcasts tx_j to all nodes in group1 and tx_i to all nodes in group2。

@@ -127,7 +127,7 @@

#### Block synchronization

-Considering that inconsistent machine performance in the blockchain network or the addition of new nodes will cause the block height of some nodes to lag behind that of other nodes, the synchronization module provides the block synchronization function, which sends the latest block height of its own node to other nodes, and other nodes will actively download the latest block when they find that the block height lags behind that of other nodes.。
+Considering that inconsistent machine performance in the blockchain network or the addition of new nodes will cause the block height of some nodes to lag behind that of other nodes, the synchronization module provides the block synchronization function: each node sends its own latest block height to other nodes, and a node will actively download the latest blocks when it finds that its block height lags behind that of other nodes。

Taking the three-node blockchain system as an example, the block synchronization process is as follows:

@@ -137,11 +137,11 @@

(1) The block synchronization threads of Node0, Node1 and Node2 regularly broadcast the latest block height information;

-(2) After receiving the latest block heights of Node0 and Node2, Node1 finds that its block height 3 is lower than the latest block height 6 of Node0 and Node2.;
+(2) After receiving the latest block heights of Node0 and Node2, Node1 finds that its block height 3 is lower than the latest block height 6 of Node0 and Node2;

-(3)Based on the principle of load
balancing, Node1 requests the fourth block from Node2, and requests block 5 and block 6 from Node0;

-(4) After receiving the block request from Node1, Node0 and Node2 send the{5,6}and No.{4}blocks returned to Node1;
+(4) After receiving the block requests from Node1, Node0 and Node2 return blocks{5,6}and block{4}to Node1 respectively;

(5) Node1 executes the 4th, 5th, and 6th blocks according to the block height, and commits the latest blocks to the underlying storage in order。

@@ -149,4 +149,4 @@

**The next notice**: Tutorial for Using the Group Schema

-In the next article, I will take building a group blockchain as an example to provide you with practical courses on group architecture. Please continue to lock in the FISCO BCOS open source community.。
+In the next article, I will take building a group blockchain as an example to provide you with a practical course on the group architecture. Please stay tuned to the FISCO BCOS open source community。
diff --git a/3.x/en/docs/articles/3_features/30_architecture/group_architecture_practice.md b/3.x/en/docs/articles/3_features/30_architecture/group_architecture_practice.md
index 30fd8c0e8..6b48b6346 100644
--- a/3.x/en/docs/articles/3_features/30_architecture/group_architecture_practice.md
+++ b/3.x/en/docs/articles/3_features/30_architecture/group_architecture_practice.md
@@ -11,7 +11,7 @@ This article is a high-energy practical operation strategy, the whole hard core

**course knowledge points**:

- Use build _ chain to create a multi-group blockchain installation package
-- How to start a blockchain node and view the consensus status and block status of the node
+- How to start a blockchain node, view the node consensus status and block status
- Build a console to deploy contracts to multiple groups

## Organizational structure of arbitration chain

@@ -24,7 +24,7 @@

Enterprise A, Enterprise B and Enterprise C respectively cooperate with arbitrat
## Arbitration Chain Networking Details

-The previous section introduced the arbitration chain organization structure, where the arbitration chain networking environment is simulated in a machine environment.。The simulated networking environment is as follows:
+The previous section introduced the arbitration chain organization structure;here the arbitration chain networking environment is simulated on a single machine。The simulated networking environment is as follows:

- **arbitration institution**Includes two nodes with IP addresses of 127.0.0.1, belonging to group 1, group 2, and group 3
- **Enterprise A**Includes two nodes, both of which have IP addresses of 127.0.0.1 and belong to group 1 only

@@ -33,7 +33,7 @@ The previous section introduced the arbitration chain organization structure, wh

**Warm tips** :

-In actual application scenarios, we do not recommend that you deploy multiple nodes on the same machine. We recommend that you select the number of nodes to deploy based on the machine load.。In this example, the quorum node belongs to all groups and has a high load. We recommend that you deploy the quorum node separately to machines with better performance.。
+In actual application scenarios, we do not recommend that you deploy multiple nodes on the same machine. We recommend that you select the number of nodes to deploy based on the machine load。In this example, the arbitration institution's nodes belong to all groups and carry a high load. We recommend deploying the arbitration institution's nodes separately on machines with better performance。

## The key process of arbitration chain construction

@@ -41,24 +41,24 @@ As shown in the following figure, use the FISCO BCOS 2.0 quick chain building sc

![](../../../../images/articles/group_architecture_practice/IMG_5085.PNG)

-- step1: Install dependent software, mainly openssl and build _ chain.sh scripts
+- step1: Install dependent software, mainly openssl and the build_chain.sh script
- step2: Use build _ chain.sh to generate a blockchain node configuration
- step3: Launch all institutional blockchain nodes
- step4: Start Console
- step5: Send a transaction using the console

-Below I will describe in detail the key process of building an arbitration chain in these five steps.。
+Below I will describe in detail the key process of building an arbitration chain in these five steps。

### Installing dependent software

To build a FISCO BCOS 2.0 blockchain node, you need to prepare the following dependent software:

-- openssl: The network protocol for FISCO BCOS 2.0 depends on openssl
-- build _ chain.sh script: mainly used to build blockchain node configuration, available from https://raw.githubusercontent.com/FISCO-BCOS/FISCO-BCOS/master-2.0 / manual / build _ chain.sh Download
+- openssl: The network protocol of FISCO BCOS 2.0 depends on openssl
+- build_chain.sh script: mainly used to build blockchain node configuration, which can be downloaded from https://raw.githubusercontent.com/FISCO-BCOS/FISCO-BCOS/master-2.0/manual/build_chain.sh

```eval_rst
.. note::
    - If the build _ chain.sh script cannot be downloaded for a long time due to network problems, try 'curl-#LO https://gitee.com/FISCO-BCOS/FISCO-BCOS/raw/master-2.0/tools/build_chain.sh && chmod u+x build_chain.sh`
    + - If the build_chain.sh script cannot be downloaded for a long time due to network problems, please try `curl -#LO https://gitee.com/FISCO-BCOS/FISCO-BCOS/raw/master-2.0/tools/build_chain.sh && chmod u+x build_chain.sh`
```

### Generate blockchain node configuration

@@ -81,11 +81,11 @@ Call the build _ chain.sh script to build the native quorum chain of the simulat

$ bash build_chain.sh -f ipconf -p 30300,20200,8545
```

-After the blockchain node is configured successfully, you will see the output of [INFO] All completed.。
+After the blockchain node configuration is generated successfully, you will see the output [INFO] All completed。

### Start Node

-After the blockchain node is generated, all nodes need to be started. The start _ all.sh and stop _ all.sh scripts are provided to start and stop the node.。
+After the blockchain nodes are generated, all nodes need to be started. The start_all.sh and stop_all.sh scripts are provided to start and stop the nodes。

```
# Start Node
It can query the blockchain status, deploy and invoke contracts, and quickly obtain the information needed by users。 Obtain and configure the console before starting the console: -- **Get Console**From https:://github.com/FISCO-BCOS / console / releases / download / v1.0.0 / console.tar.gz Download Console +- **Get Console**From https::/ / github.com / FISCO-BCOS / console / releases / download / v1.0.0 / console.tar.gz Download Console ```eval_rst .. note:: - - If the console script cannot be downloaded for a long time due to network problems, please try downloading from gitee: https://gitee.com/FISCO-BCOS/console/attach_files/420303/download/console.tar.gz + -If you cannot download the console script for a long time due to network problems, please try to download it from gitee: https://gitee.com/FISCO-BCOS/console/attach_files/420303/download/console.tar.gz ``` - **To configure the console:**Copy the certificate and configure the IP address and port information of the node to which conf / applicationContext.xml is connected. The key console configurations are as follows: ![](../../../../images/articles/group_architecture_practice/IMG_5086.PNG) -Of course, the console also supports connecting multiple groups and provides the switch command to switch groups. When connecting multiple groups, you need to configure multiple connections in the groupChannelConnectionsConfig bean id to connect to the blockchain nodes of the corresponding groups.。 +Of course, the console also supports connecting multiple groups and provides the switch command to switch groups. When connecting multiple groups, you need to configure multiple connections in the groupChannelConnectionsConfig bean id to connect to the blockchain nodes of the corresponding groups。 -**Note:**The console depends on Java 8 or above. You can install openjdk 8 on the Ubuntu 16.04 system.。CentOS Please install Oracle Java 8 or later。 +**Note:**The console depends on Java 8 or above. 
On Ubuntu 16.04 you can install OpenJDK 8;on CentOS, please install Oracle Java 8 or later。

Use the start.sh script to start the console. If the console is successfully started, the following interface is output:

@@ -127,13 +127,13 @@ Use the start.sh script to start the console. If the console is successfully sta

The console provides the deploy HelloWorld command to issue transactions to the node. After the transaction is issued, the block height of the blockchain node increases

```
-# ... send a deal to group1...
+# ... send a transaction to group1 ...
$ [group:1]> deploy HelloWorld
0x8c17cf316c1063ab6c89df875e96c9f0f5b2f744

-# Check the current block height of group1. If the block height is increased to 1, the block height is normal. Otherwise, check whether the consensus of group1 is normal.
+# Check the current block height of group1. If the block height is increased to 1, the block height is normal. Otherwise, check whether the consensus of group1 is normal
$ [group:1]> getBlockNumber
1
-# ... make a deal to group2...
+# ... send a transaction to group2 ...
# Switch to group2
$ [group:1]> switch 2
Switched to group 2
diff --git a/3.x/en/docs/articles/3_features/30_architecture/parallel_contract_development_framework_with_tutorials.md b/3.x/en/docs/articles/3_features/30_architecture/parallel_contract_development_framework_with_tutorials.md
index 67dc5330f..8b5747cc1 100644
--- a/3.x/en/docs/articles/3_features/30_architecture/parallel_contract_development_framework_with_tutorials.md
+++ b/3.x/en/docs/articles/3_features/30_architecture/parallel_contract_development_framework_with_tutorials.md
@@ -2,7 +2,7 @@

Author : SHI Xiang | FISCO BCOS Core Developer

-This special series of articles to catch up with now, you may want to ask, FISCO BCOS parallel how to use?As the end of the topic, this article will reveal the true face of Lushan and teach you how to use the parallel features of FISCO BCOS.!FISCO BCOS provides a parallelizable contract development framework, where developers write contracts in accordance with the framework specifications that can be executed in parallel by FISCO BCOS nodes.。The advantages of parallel contracts are:
+Having followed this series to the present, you may be asking: how exactly is FISCO BCOS's parallelism used?As the conclusion of this topic, this article will unveil the full picture and teach you how to use the parallel features of FISCO BCOS!FISCO BCOS provides a parallelizable contract development framework: contracts that developers write in accordance with the framework specifications can be executed in parallel by FISCO BCOS nodes。The advantages of parallel contracts are:

- **high throughput**: Multiple independent transactions are executed at the same time, which maximizes the CPU resources of the machine and thus has a high TPS
- **Can be expanded**: The performance of transaction execution can be improved by improving the configuration of the machine to support the continuous expansion of business scale

@@ -13,31 +13,31 @@ Next, I'll show you how to write FISCO BCOS parallel
contracts and how to deploy

### parallel mutex

-Whether two transactions can be executed in parallel depends on whether the two transactions exist.**Mutex**。Mutual exclusion means that two transactions are each**There is an intersection of the collection of operating contract storage variables.**。
+Whether two transactions can be executed in parallel depends on whether there is **mutual exclusion** between them。Mutual exclusion means that **the sets of contract storage variables operated on by the two transactions intersect**。

-For example, in a transfer scenario, a transaction is a transfer operation between users。with transfer(X, Y) Represents the transfer interface from user X to user Y.。The mutual exclusion is as follows:
+For example, in a transfer scenario, a transaction is a transfer operation between users。Let transfer(X, Y) represent the interface for a transfer from user X to user Y。The mutual exclusion is as follows:

![](../../../../images/articles/parallel_contract_development_framework_with_tutorials/IMG_5187.PNG)

A more specific definition is given here:

-- **Mutex parameters:**合同**Interface**parameters related to read / write operations for contract storage variables in。For example, the transfer interface(X, Y)X and Y are mutually exclusive parameters.。
+- **Mutex parameters**: the parameters of a contract **interface** that are involved in read/write operations on contract storage variables。For example, for the transfer interface transfer(X, Y), X and Y are mutex parameters。

-- **Mutex Object**: a sum of money**Transaction**The specific mutually exclusive content extracted from the mutually exclusive parameters.。For example, the transfer interface(X, Y), in a transaction that calls this interface, the specific parameter is transfer(A, B)then the mutex for this operation is [A, B];Another transaction, the argument to the call is transfer(A, C)then the mutex for this operation is [A, C]。
+- **Mutex object**: the concrete mutex content that a **transaction** extracts from its mutex parameters。For example, for the transfer interface transfer(X, Y), a transaction calling this interface with the concrete parameters transfer(A, B) has the mutex object [A, B];another transaction called with transfer(A, C) has the mutex object [A, C]。

**To determine whether two transactions can be executed in parallel at the same time is to determine whether the mutually exclusive objects of the two transactions intersect。Transactions with empty intersections can be executed in parallel。**

## Writing Parallel Contracts

-FISCO BCOS provides**parallelizable contract development framework**The developer only needs to develop the contract according to the specification of the framework and define the mutually exclusive parameters of each contract interface to implement the contract that can be executed in parallel.。When the contract is deployed, FISCO BCOS automatically parses the mutually exclusive objects before executing the transaction, allowing the non-dependent transactions to be executed in parallel as much as possible at the same time.。
+FISCO BCOS provides a **parallelizable contract development framework**。Developers only need to develop contracts according to the framework specification and define the mutex parameters of each contract interface to implement contracts that can be executed in parallel。When a contract is deployed, FISCO BCOS automatically parses the mutex objects before executing each transaction, allowing non-conflicting transactions to be executed in parallel at the same time as much as possible。

-Currently, FISCO BCOS provides two parallel contract development frameworks, solidity and precompiled contracts.。
+Currently, FISCO BCOS provides two parallel contract development frameworks: Solidity contracts and precompiled contracts。

### Parallel Framework for Solidity Contracts

-Write parallel solidity contracts, the
development process is the same as the process of developing ordinary solidity contracts.。On this basis, simply use ParallelContract as the contract base class that requires parallelism and call registerParallelFunction(), register interfaces that can be parallel。 +To write a parallel Solidity contract, the development process is the same as for an ordinary Solidity contract. On this basis, simply use ParallelContract as the base class of the contract that needs parallelism, and call registerParallelFunction() to register the interfaces that can run in parallel. -Give a complete example first.。The ParallelOk contract in the example implements the function of parallel transfer: +A complete example is given first. The ParallelOk contract in the example implements parallel transfers: ``` pragma solidity ^0.4.25; @@ -49,7 +49,7 @@ contract ParallelOk is ParallelContract / / Using ParallelContract as the base c function transfer(string from, string to, uint256 num) public { - / / Here is a simple example, please use SafeMath instead of direct addition and subtraction in actual production. + / / Here is a simple example, please use SafeMath instead of direct addition and subtraction in actual production _balance[from] -= num; _balance[to] += num; } @@ -102,32 +102,32 @@ contract ParallelOk is ParallelContract / / Using ParallelContract as the base c } ``` -#### step2 Write a parallel contract interface. 
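The mutex rule described in the "parallel mutex" section (two transactions conflict exactly when their mutex-object sets intersect) can be sketched as follows. This is an illustrative Python sketch, not part of the FISCO BCOS SDK; the function names are hypothetical:

```python
def mutex_objects(args, mutex_count):
    """The first `mutex_count` call arguments form the transaction's mutex objects."""
    return set(args[:mutex_count])

def can_run_in_parallel(tx_a, tx_b, mutex_count=2):
    """Two calls can run in parallel iff their mutex-object sets do not intersect."""
    return mutex_objects(tx_a, mutex_count).isdisjoint(mutex_objects(tx_b, mutex_count))

# transfer(A, B) vs transfer(A, C): both touch account A, so they must run serially
print(can_run_in_parallel(("A", "B", 100), ("A", "C", 50)))   # False
# transfer(A, B) vs transfer(C, D): no shared account, so they may run in parallel
print(can_run_in_parallel(("A", "B", 100), ("C", "D", 50)))   # True
```

This is the same test a node performs on the mutex objects parsed from each transaction's arguments before scheduling it into the parallel execution DAG.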
+#### step2 Write a parallel contract interface -The public function in the contract, which is the interface to the contract。To write a parallelizable contract interface is to implement the public function in a contract according to certain rules.。 +The public functions of a contract are its interfaces. Writing a parallelizable contract interface means implementing the contract's public functions according to certain rules. ##### Determine whether an interface is parallelizable A parallelizable contract interface must satisfy: -- No call to external contract -- No call to other function interface +- No calls to external contracts +- No calls to other function interfaces ##### Determine Mutex Parameters -Before writing an interface, determine the mutex parameters of the interface, which is the mutex of global variables, and the rules for determining mutex parameters are. +Before writing an interface, determine its mutex parameters (the mutual exclusion is over global variables); the rules for determining mutex parameters are: -- The interface accesses global mapping. The mapping key is a mutually exclusive parameter. -The interface accesses the global array, and the subscript of the array is a mutually exclusive parameter +- The interface accesses a global mapping: the key of the mapping is a mutex parameter +- The interface accesses a global array: the subscript of the array is a mutex parameter - The interface accesses global variables of simple types, all global variables of simple types share a mutually exclusive parameter, using different variable names as mutually exclusive objects ##### Determine parameter type and order After determining the mutually exclusive parameters, determine the parameter type and order according to the rules. 
The rules are as follows: -- Interface parameters are limited to string, address, uint256, int256 (more types will be supported in the future) -Mutex parameters must all appear in interface parameters -- All mutually exclusive parameters are arranged at the top of the interface parameters. +- Interface parameters are limited to string, address, uint256, and int256 (more types will be supported in the future) +- Mutex parameters must all appear in the interface parameters +- All mutex parameters are arranged at the front of the interface parameters ``` mapping (string => uint256) _balance; / / Global mapping @@ -154,7 +154,7 @@ Implementing enableParallel in a contract() function, calling registerParallelFu / / Register contract interfaces that can be parallel function enableParallel() public { - / / Function definition string (note","There can be no spaces after), the first few parameters are mutually exclusive parameters. + / / Function definition string (note: no spaces are allowed after ","); the leading parameters are the mutex parameters registerParallelFunction("transfer(string,string,uint256)", 2); / / transfer interface, the first two are mutually exclusive parameters registerParallelFunction("set(string,uint256)", 1); / / set interface, the first one is a mutually exclusive parameter } @@ -205,7 +205,7 @@ An example of sending a large number of transactions with the SDK is given in th ### Parallel Framework for Precompiled Contracts -Write parallel precompiled contracts, the development process is the same as the development of ordinary precompiled contracts.。Ordinary precompiled contracts use Precompile as the base class, on top of which the contract logic is implemented.。Based on this, Precompile's base class also provides two virtual functions for parallelism, which continue to be implemented to implement parallel precompiled contracts.。 +Write parallel precompiled contracts, the development process is the same as the development of ordinary 
precompiled contracts. Ordinary precompiled contracts use Precompile as the base class and implement the contract logic on top of it. In addition, the Precompile base class provides two virtual functions for parallelism; implementing them as well yields a parallel precompiled contract. #### step1 Defines the contract to support parallelism @@ -215,7 +215,7 @@ bool isParallelPrecompiled() override { return true; } #### step2 Defines parallel interfaces and mutually exclusive parameters -Note that once defined to support parallelism, all interfaces need to be defined。If null is returned, this interface does not have any mutex。The mutually exclusive parameters are related to the implementation of the precompiled contract, which involves an understanding of FISCO BCOS storage, and the specific implementation can be read directly from the code or ask the relevant experienced programmer.。 +Note that once a contract is declared to support parallelism, this must be defined for all of its interfaces. If null is returned, the interface has no mutex. The mutex parameters are related to the implementation of the precompiled contract and require an understanding of FISCO BCOS storage; for the specific implementation, read the code directly or ask an experienced programmer. ``` / / According to the parallel interface, take out the mutex from the parameters and return the mutex @@ -241,7 +241,7 @@ std::vector getParallelTag(bytesConstRef param) override results.push_back(toUser); } } - else if... / / All interfaces need to give a mutex, and the return is empty to indicate that there is no mutex. + else if... 
/ / All interfaces need to give a mutex, and the return is empty to indicate that there is no mutex return results; / / return mutex } @@ -253,50 +253,50 @@ Method of compiling nodes manually, [refer to FISCO BCOS technical documentation ## Example: Parallel transfer -Parallel examples of solidity contracts and precompiled contracts are given here.。 +Parallel examples of Solidity contracts and precompiled contracts are given here. #### Configure Environment The example requires the following execution environment: - Web3SDK Client -- A FISCO BCOS chain +- One FISCO BCOS chain If the maximum performance of pressure measurement is required, at least: - 3 Web3SDKs to generate enough transactions -- 4 nodes, and all Web3SDKs are configured with all the node information on the chain, so that transactions are evenly sent to each node, so that the link can receive enough transactions. +- 4 nodes, with every Web3SDK configured with the information of all nodes on the chain, so that transactions are sent evenly to each node and the chain can receive enough transactions ### Parallel Solidity Contract: ParallelOk -Transfers based on account models are a typical business operation。The ParallelOk contract is an example of an account model that enables parallel transfers.。The ParallelOk contract has been given above。 +Transfers based on the account model are a typical business operation. The ParallelOk contract is an account-model example that enables parallel transfers. The ParallelOk contract was given above. -FISCO BCOS has the ParallelOk contract built into the Web3SDK. Here is how to use the Web3SDK to send a large number of parallel transactions.。 +FISCO BCOS has the ParallelOk contract built into the Web3SDK. 
Here is how to use the Web3SDK to send a large number of parallel transactions. #### step1 Deploy contracts with SDK, create new users, and enable contract parallelism ``` -# Parameter: < groupID > add < number of users created > < TPS requested by this create operation > < generated user information file name > +# Parameters: <groupID> add <number of users created> <TPS of this create operation> <generated user information file name> java -cp conf/:lib/*:apps/* org.fisco.bcos.channel.test.parallel.parallelok.PerformanceDT 1 add 10000 2500 user # 10,000 users are created on group1, the creation operation is sent with 2500TPS, and the generated user information is saved in user ``` -After the execution is successful, ParallelOk is deployed on the blockchain, and the created user information is saved in the user file, and the parallel capability of ParallelOk is enabled.。 +After successful execution, ParallelOk is deployed on the blockchain, the created user information is saved in the user file, and the parallel capability of ParallelOk is enabled. #### step2 Send parallel transfer transactions in batches Note: Before sending in batches, please adjust the log level of the SDK to ERROR to have sufficient sending capacity。 ``` -# Parameters: < groupID > transfer < total number of transactions > < TPS limit of this transfer operation request > < required user information file > < transaction mutual exclusion percentage: 0 ~ 10 > +# Parameters: <groupID> transfer <total number of transactions> <TPS limit of this transfer operation> <required user information file> <transaction mutual exclusion percentage: 0 ~ 10> java -cp conf/:lib/*:apps/* org.fisco.bcos.channel.test.parallel.parallelok.PerformanceDT 1 transfer 100000 4000 user 2 -# Sent 100,000 transactions to group1, the maximum TPS sent is 4000, using the user in the previously created user file.。 +# Sends 100,000 transactions to group1 with a maximum send rate of 4000 TPS, using the users in the previously created user file. ``` #### step3 Verifying parallel correctness -After the parallel transaction is executed, the Web3SDK prints the execution result。TPS is the TPS that the transaction sent by this SDK executes on the node。Validation is a check of the results of the 
execution of the transfer transaction.。 +After the parallel transactions are executed, the Web3SDK prints the execution result. TPS is the throughput achieved on the node by the transactions sent from this SDK. Validation is a check of the results of the executed transfer transactions. ``` Total transactions: 100000 @@ -329,7 +329,7 @@ Calculate TPS from log file with script ``` cd tools -sh get_tps.sh log/log_2019031821.00.log 21:26:24 21:26:59 # Parameters: < log file > < calculation start time > < calculation end time > +sh get_tps.sh log/log_2019031821.00.log 21:26:24 21:26:59 # Parameters: <log file> <calculation start time> <calculation end time> ``` Get TPS (3 SDK, 4 nodes, 8 cores, 16G memory) @@ -342,13 +342,13 @@ total transactions = 193332, execute_time = 34580ms, tps = 5590 (tx/s) ### Parallel precompiled contract: DagTransferPrecompiled -Like the ParallelOk contract, FISCO BCOS has a built-in example of a parallel precompiled contract (DagTransferPrecompiled) that implements a simple account model-based transfer function.。The contract can manage the deposits of multiple users and provides a parallel transfer interface for parallel processing of transfer operations between users.。 +Like the ParallelOk contract, FISCO BCOS has a built-in example of a parallel precompiled contract (DagTransferPrecompiled) that implements a simple account-model transfer function. The contract can manage the deposits of multiple users and provides a parallel transfer interface so that transfers between users are processed in parallel. -**Note: DagTransferPrecompiled is used as an example only and should not be used directly in the production environment.。** +**Note: DagTransferPrecompiled is provided as an example only and should not be used directly in a production environment.** #### step1 Generate User -Use the Web3SDK to send the operation of creating a user, and save the created user information in the user file。The command parameters are the same as parallelOk, except that the object called by the command is precompile.。 
+Use the Web3SDK to send user-creation operations and save the created user information in the user file. The command parameters are the same as for ParallelOk, except that the object called by the command is the precompiled contract. ``` java -cp conf/:lib/*:apps/* org.fisco.bcos.channel.test.parallel.precompile.PerformanceDT 1 add 10000 2500 user @@ -358,7 +358,7 @@ java -cp conf/:lib/*:apps/* org.fisco.bcos.channel.test.parallel.precompile.Perf Send parallel transfer transactions with Web3SDK。 -**Note: Before sending in batches, adjust the log level of the SDK to ERROR to ensure sufficient sending capability.。** +**Note: Before sending in batches, adjust the log level of the SDK to ERROR to ensure sufficient sending capability.** ``` java -cp conf/:lib/*:apps/* org.fisco.bcos.channel.test.parallel.precompile.PerformanceDT 1 transfer 100000 4000 user 2 @@ -366,7 +366,7 @@ java -cp conf/:lib/*:apps/* org.fisco.bcos.channel.test.parallel.precompile.Perf #### step3 Verifying parallel correctness -After the parallel transaction is executed, the Web3SDK prints the execution result。TPS is the TPS that the transaction sent by this SDK executes on the node。Validation is a check of the results of the execution of the transfer transaction.。 +After the parallel transactions are executed, the Web3SDK prints the execution result. TPS is the throughput achieved on the node by the transactions sent from this SDK. Validation is a check of the results of the executed transfer transactions. ``` Total transactions: 80000 @@ -399,7 +399,7 @@ Calculate TPS from log file with script ``` cd tools -sh get_tps.sh log/log_2019031311.17.log 11:25 11:30 # Parameters: < log file > < calculation start time > < calculation end time > +sh get_tps.sh log/log_2019031311.17.log 11:25 11:30 # Parameters: <log file> <calculation start time> <calculation end time> ``` Get TPS (3 SDK, 4 nodes, 8 cores, 16G memory) @@ -412,6 +412,6 @@ total transactions = 3340000, execute_time = 298945ms, tps = 11172 (tx/s) ## Result description -The performance results in this example are 
measured under 3SDK, 4 nodes, 8 cores, 16G memory, and 1G network.。Each SDK and node are deployed in a different VPS.。Actual TPS will vary based on your hardware configuration, operating system, and network bandwidth。 +The performance results in this example were measured with 3 SDKs, 4 nodes, 8 cores, 16 GB of memory, and a 1 Gb network. Each SDK and node is deployed on a separate VPS. Actual TPS will vary with your hardware configuration, operating system, and network bandwidth. -**If you encounter obstacles or need to consult during the deployment process, you can enter the FISCO BCOS official technical exchange group for answers.。**(into the group, please long press the two-dimensional code below to identify the small assistant) \ No newline at end of file +**If you run into obstacles or have questions during deployment, you can join the official FISCO BCOS technical exchange group for answers.** (To join the group, long-press the QR code below to contact the assistant) \ No newline at end of file diff --git a/3.x/en/docs/articles/3_features/30_architecture/parallel_transformation.md b/3.x/en/docs/articles/3_features/30_architecture/parallel_transformation.md index 45b9769d4..07b3a4998 100644 --- a/3.x/en/docs/articles/3_features/30_architecture/parallel_transformation.md +++ b/3.x/en/docs/articles/3_features/30_architecture/parallel_transformation.md @@ -4,55 +4,55 @@ Author: Li Chen Xi | FISCO BCOS Core Developer ## Background -The introduction of PTE (Parallel Transaction Executor, a parallel transaction executor based on the DAG model) gives FISCO BCOS the ability to execute transactions in parallel, significantly improving the efficiency of node transaction processing.。We are not satisfied with this stage result, and continue to dig deeper and find that the overall TPS of FISCO BCOS still has a lot of room for improvement.。 To use a barrel as an analogy: if all the modules of the transaction processing of the participating nodes 
constitute a barrel, the transaction execution is just a piece of wood that makes up the entire barrel, and according to the short board theory, how much water a barrel can hold depends on the shortest piece on the barrel wall, by the same token.**FISCO BCOS performance is also determined by the slowest components**。 Despite the theoretically high performance capacity achieved by PTE, the overall performance of FISCO BCOS is still constrained by the slower transaction processing speeds of other modules。**In order to maximize the use of computing resources to further improve transaction processing capabilities, it is imperative to fully advance the parallelization transformation in FISCO BCOS。** +The introduction of PTE (Parallel Transaction Executor, a parallel transaction executor based on the DAG model) gives FISCO BCOS the ability to execute transactions in parallel, significantly improving the efficiency of node transaction processing. We were not satisfied with this milestone and kept digging, finding that the overall TPS of FISCO BCOS still had plenty of room for improvement. To use a barrel as an analogy: if all the modules involved in a node's transaction processing form a barrel, transaction execution is only one stave of it, and by the shortest-stave theory, how much water the barrel holds depends on its shortest stave; by the same token, **FISCO BCOS performance is determined by its slowest component**. Despite the theoretically high performance achieved by PTE, the overall performance of FISCO BCOS is still constrained by the slower processing speed of other modules. **To make full use of computing resources and further improve transaction processing capability, it is imperative to advance the parallelization transformation throughout FISCO BCOS.** ## Data analysis -According to the four-step principle of "analysis → 
decomposition → design → verification" of parallel programming, it is first necessary to locate the precise location of the performance bottlenecks that still exist in the system in order to decompose the tasks more deeply and design the corresponding parallelization strategy.。**Using top-down analysis, we divide the transaction processing process into four modules for performance analysis**The four modules are: +According to the four-step principle of parallel programming, "analyze → decompose → design → verify", the first step is to locate precisely where the remaining performance bottlenecks in the system are, so that the tasks can be decomposed further and corresponding parallelization strategies designed. **Using top-down analysis, we divided the transaction processing flow into four modules for performance profiling.** The four modules are: -**Block decoding (decode)**: Blocks need to be sent from one node to another during consensus or synchronization between nodes. In this process, blocks are transmitted between networks in the form of RLP encoding。After the node receives the block encoding, it needs to decode it and restore the block to a binary object in memory before further processing.。 +**Block decoding (decode)**: Blocks need to be sent from one node to another during consensus or synchronization between nodes. 
In this process, blocks are transmitted over the network in RLP encoding. After receiving the encoded block, a node must decode it back into an in-memory binary object before further processing. **Transaction verification (verify)**: The transaction is signed by the sender before it is sent, and the signature data can be divided into three parts (v, r, s); the main task of verification is, when receiving or executing a transaction, to restore the sender's public key from the (v, r, s) data in order to verify the sender's identity. **Transaction execution (execute)**: Execute all transactions in the block and update the blockchain state. -**Data drop (commit)**: After the block is executed, the block and related data need to be written to the disk for persistent storage.。 +**Data drop (commit)**: After the block is executed, the block and related data need to be written to disk for persistent storage. -Using a block containing 2,500 pre-compiled transfer contract transactions as the test object, the average time-consuming distribution of each phase in our test environment is shown in the following figure. 
+Using a block containing 2,500 precompiled transfer contract transactions as the test object, the average time distribution of each phase in our test environment is shown in the following figure: ![](../../../../images/articles/parallel_transformation/IMG_5182.JPG) -As can be seen from the figure, the execution time of 2500 trades has been reduced to less than 50 milliseconds, which proves that PTE's optimization of the FISCO BCOS trade execution phase is effective.。However, the chart also reveals a very obvious problem: the time taken at other stages is much higher than the time taken for trade execution, resulting in the performance advantage of trade execution being severely offset and the PTE not being able to deliver its due value.。 +As can be seen from the figure, the execution time of 2,500 transactions has been reduced to under 50 milliseconds, which proves that PTE's optimization of the FISCO BCOS transaction execution phase is effective. However, the figure also reveals an obvious problem: the time spent in the other stages is far higher than that of transaction execution, so the performance advantage of transaction execution is severely offset and PTE cannot deliver its due value. -As early as 1967, the law named after him by Amdahl, a veteran of computer architecture, has explained to us the rule of thumb for measuring the efficiency gains of processors after parallel computing. 
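The law in question is Amdahl's law: with Ws the serial fraction of the program, Wp = 1 - Ws the parallel fraction, and N the number of CPUs, SpeedUp = 1 / (Ws + Wp / N). A quick numeric check (an illustrative Python sketch, not FISCO BCOS code) shows why shrinking the serial stages matters more than adding cores:

```python
def amdahl_speedup(ws, n):
    """Amdahl's law: SpeedUp = 1 / (Ws + Wp / N), with Wp = 1 - Ws."""
    wp = 1.0 - ws
    return 1.0 / (ws + wp / n)

# With 20% of the work serial, even 16 cores yield only a 4x speedup;
# halving the serial part helps more than doubling the core count.
print(round(amdahl_speedup(0.20, 16), 2))  # 4.0
print(round(amdahl_speedup(0.10, 16), 2))  # 6.4
print(round(amdahl_speedup(0.20, 32), 2))  # 4.44
```

This is exactly the argument made above: with transaction execution already parallelized by PTE, the remaining serial stages (decode, verify, commit) dominate the Ws term and cap the overall TPS.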
+As early as 1967, Amdahl, a pioneer of computer architecture, gave us, in the law named after him, the rule of thumb for measuring the efficiency gain of processors under parallel computing: ![](../../../../images/articles/parallel_transformation/IMG_5183.PNG) -where SpeedUp is the speedup, Ws is the serial component of the program, Wp is the parallel component in the program, and N is the number of CPUs。It can be seen that in the case of a constant total amount of work, the more parallel parts of the code, the higher the overall performance of the system.。We need to take our thinking out of the linear model, continue to subdivide the entire processing flow, identify the program hotspots with the longest execution time, and parallelize these code segments to break all the bottlenecks one by one, which is the best way to maximize performance gains through parallelization.。 +where SpeedUp is the speedup, Ws is the serial portion of the program, Wp is the parallel portion, and N is the number of CPUs. It can be seen that, with a fixed total amount of work, the larger the parallel portion of the code, the higher the overall performance of the system. We need to move beyond the linear model, keep subdividing the processing flow, identify the program hotspots with the longest execution times, and parallelize those code segments to break the bottlenecks one by one; this is how parallelization yields the greatest performance gain. ## Root cause dismantling ### 1. 
Serial block decoding -The main performance problem of block decoding lies in the RLP coding method itself.。The full name of RLP is recursive length prefix coding, which is a coding method that uses length as a prefix to indicate the number of elements in the encoded object.。As shown in the following figure, the beginning of the RLP code is the number of objects in the code (Object num).。After the number, is the corresponding number of objects (Object)。Recursively, each object is also RLP encoded, and its format is also the same as the figure below。 +The main performance problem of block decoding lies in the RLP encoding itself. RLP stands for recursive length prefix encoding, an encoding that uses a length prefix to indicate the number of elements in the encoded object. As shown in the following figure, an RLP encoding begins with the number of objects it contains (Object num), followed by that number of objects (Object). Recursively, each object is itself RLP encoded in the same format as in the figure below. -It is important to note that in RLP coding。The byte size of each object is not fixed. Object num only indicates the number of objects and does not indicate the byte length of an object.。 +It is important to note that in RLP encoding the byte size of each object is not fixed: Object num indicates only the number of objects, not the byte length of any object. ![](../../../../images/articles/parallel_transformation/IMG_5184.JPG) -RLP can theoretically encode any number of objects by combining a length prefix with recursion.。The following figure shows the RLP encoding of a block. When encoding a block, it is recursive to the bottom layer to encode multiple sealers. 
After the multiple sealers are encoded and the length prefix is added, the encoding becomes a string of RLP encodings (sealerList).。This is followed by layer-by-layer recursion and the final encoding becomes the RLP encoding of the block.。Because RLP encoding is recursive, the length after encoding cannot be known before encoding。 +RLP can theoretically encode any number of objects by combining length prefixes with recursion. The following figure shows the RLP encoding of a block: encoding recurses down to the bottom layer, where the individual sealers are encoded; once the sealers are encoded and a length prefix is added, they become a single RLP encoding (sealerList). Recursion then proceeds layer by layer until the final result is the RLP encoding of the whole block. Because RLP encoding is recursive, the encoded length cannot be known before encoding. ![](../../../../images/articles/parallel_transformation/IMG_5185.JPG) -When decoding, because the length of each object in RLP encoding is uncertain, and RLP encoding only records the number of objects, not the byte length of the object, to obtain one of the encoded objects, you must recursively decode all the objects in its preamble, after decoding the preamble of the object, you can access the byte position of the encoded object that needs to be accessed.。For example, in the above figure, if you need to access the 0th transaction in the block, that is, tx0, you must first decode the blockHeader, and the decoding of the blockHeader needs to be recursive again, decoding the parentHash, stateRoot, and even the sealerList.。 +When decoding, because the length of each object in an RLP encoding is not fixed, and the encoding records only the number of objects, not their byte lengths, obtaining one encoded object requires recursively decoding all the objects before it; only after decoding that preamble can you reach the byte position of the encoded object 
that needs to be accessed. For example, in the figure above, to access the 0th transaction in the block, tx0, you must first decode the blockHeader, whose decoding in turn recurses through the parentHash, stateRoot, and even the sealerList. -The most important purpose of decoding a block is to decode the transactions contained in the block, and the codes of the transactions are independent of each other, but under the special coding method of RLP, the necessary condition for decoding a transaction is to decode the previous transaction, and the decoding tasks of the transaction are interlinked, forming a chain of dependencies.。It should be pointed out that this decoding method is not a defect of RLP, one of the design goals of RLP is to minimize the space occupation, make full use of each byte, although the codec has become less efficient, but the compactness of the encoding is obvious to all, so this encoding is essentially a time-for-space trade-off.。 +The most important purpose of decoding a block is to decode the transactions it contains. The transaction encodings are independent of one another, but under RLP's encoding scheme a transaction can only be decoded once the previous transaction has been decoded, so the decoding tasks are linked into a dependency chain. It should be pointed out that this is not a defect of RLP: one of RLP's design goals is to minimize space usage and make full use of every byte. Although encoding and decoding become less efficient, the compactness of the encoding is plain to see, so RLP is essentially a time-for-space trade-off. -Due to historical reasons, RLP coding is used in FISCO BCOS as a multi-site information exchange protocol, and the rush to switch to other parallelization-friendly serialization schemes may result in a greater development burden.。Based on this 
consideration, we decided to slightly modify the original RLP codec scheme, by adding additional position offset information for each encoded element, we can decode the RLP in parallel without changing a lot of the original code.。 +For historical reasons, RLP is used in FISCO BCOS as the information exchange protocol among multiple parties, and rushing to switch to another, parallelization-friendly serialization scheme could impose a heavy development burden. With this in mind, we decided to modify the original RLP codec slightly: by adding extra position-offset information for each encoded element, we can decode RLP in parallel without changing much of the original code. ### 2. Transaction verification & high cost of data placement -By breaking down the code for the trade check and data drop sections, we found that the main functions of both are concentrated in a time-consuming for loop。Transaction validation is responsible for taking out transactions in sequence and then from the signature data of the transaction.(v, r, s)data and restore the public key of the transaction sender from it, where the step of restoring the public key is time-consuming due to the cryptographic algorithm involved;The data drop disk is responsible for taking out the transaction-related data from the cache one by one, encoding it into a JSON string and writing it to disk, which is also a disaster area for performance loss due to the low efficiency of the JSON encoding process itself.。 +Breaking down the code of the transaction verification and data drop sections, we found that the main work of both is concentrated in a time-consuming for loop. Transaction verification takes out transactions in sequence and restores each sender's public key from the (v, r, s) signature data, where the public-key recovery step is expensive because of the cryptographic algorithm involved; the data drop stage 
is responsible for taking the transaction-related data out of the cache one by one, encoding it into JSON strings and writing them to disk, and is likewise a hotspot of performance loss because the JSON encoding process itself is inefficient. The two codes are as follows: @@ -78,21 +78,21 @@ for(int i = 0; i < datas.size(); ++i) } ``` -The common feature of both processes is that they both apply the same operations to different parts of the data structure, and for this type of problem, you can directly use data-level parallelism for transformation.。The so-called data-level parallelism, that is, the data as a partition object, by dividing the data into fragments of approximately equal size, by operating on different data fragments on multiple threads, to achieve the purpose of parallel processing of data sets.。 +The common feature of the two processes is that both apply the same operation to different parts of a data structure; this type of problem can be transformed directly with data-level parallelism. Data-level parallelism treats the data as the object of partitioning: the data is divided into fragments of roughly equal size, and different fragments are operated on by different threads, so that the data set is processed in parallel. -The only additional requirement for data-level parallelism is that the tasks are independent of each other, and there is no doubt that in the FISCO BCOS implementation, both transaction validation and data drop meet this requirement.。 +The only additional requirement of data-level parallelism is that the tasks be independent of each other, and there is no doubt that in the FISCO BCOS implementation both transaction verification and data drop meet this requirement. ## Optimization practice ### 1. 
Block decoding parallelization

-During the transformation, we added an offset field to the common RLP encoding used in the system to index the location of each Object.。As shown in the following figure, the beginning of the modified encoding format is still the number of objects (Object num), but after the number field, it is an array (Offsets) that records the offset of the object.。
+During the transformation, we added an offset field to the common RLP encoding used in the system to index the location of each Object. As shown in the figure below, the modified encoding format still begins with the number of objects (Object num), but the count field is now followed by an array (Offsets) that records the offset of each object.

![](../../../../images/articles/parallel_transformation/IMG_5186.JPG)

-Each element in the array has a fixed length。Therefore, to read the value of an Offset, you only need to access the array, according to the serial number of the Offset direct index can be randomly accessed.。After Offsets, is a list of objects that are the same as the RLP encoding。Offset of the corresponding ordinal, pointing to the RLP-encoded byte position of the object of the corresponding ordinal。Therefore, to decode an object arbitrarily, you only need to find its offset based on the object's serial number, and then locate the RLP encoded byte position of the corresponding object based on the offset.。
+Each element in the array has a fixed length, so reading the value of an Offset only requires indexing into the array by serial number, which is a random access. After Offsets comes the list of objects, encoded exactly as in ordinary RLP; the Offset with a given serial number points to the byte position of the RLP encoding of the object with the same serial number. Therefore, to decode an arbitrary object, we only need to look up its offset by the object's serial number and then locate the byte position of that object's RLP encoding from the offset.

-The coding process has also been redesigned。The process itself is still based on the idea of recursion. For the input object array, first encode the size of the object array at the beginning of the output encoding. If the array size exceeds 1, take out the objects to be encoded one by one and cache their recursive encoding, and record the offset position of the object in the Offsets array. After the array is traversed, take out the cached object encoding for the first time and append it to the output encoding.;If the array size is 1, it is recursively encoded and written to the end of the output encoding, ending the recursion。
+The encoding process has also been redesigned. It is still recursive: for the input object array, first encode the size of the array at the beginning of the output encoding. If the array size exceeds 1, take out the objects to be encoded one by one, cache their recursive encodings, and record each object's offset position in the Offsets array.
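The offset-indexed layout described here can be illustrated with a small, self-contained sketch. This is not the actual FISCO BCOS RLP codec; it is a simplified model under stated assumptions (length-prefixed strings instead of RLP objects, fixed 4-byte little-endian offsets) that only demonstrates how an Offsets array enables random-access decoding of any element without scanning its predecessors:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Simplified model of the offset-indexed encoding:
// [object num (4 bytes)][offset_0 .. offset_{n-1} (4 bytes each)][payloads]
// Each offset records the absolute byte position of one payload.
static void put_u32(std::vector<uint8_t>& out, uint32_t v) {
    for (int i = 0; i < 4; ++i) out.push_back(uint8_t(v >> (8 * i)));
}
static uint32_t get_u32(const std::vector<uint8_t>& in, size_t pos) {
    uint32_t v = 0;
    for (int i = 0; i < 4; ++i) v |= uint32_t(in[pos + i]) << (8 * i);
    return v;
}

std::vector<uint8_t> encode(const std::vector<std::string>& objs) {
    std::vector<uint8_t> out;
    put_u32(out, uint32_t(objs.size()));        // Object num
    size_t offsets_at = out.size();             // reserve the Offsets array
    for (size_t i = 0; i < objs.size(); ++i) put_u32(out, 0);
    for (size_t i = 0; i < objs.size(); ++i) {  // append payloads, patch offsets
        uint32_t off = uint32_t(out.size());
        for (int b = 0; b < 4; ++b)
            out[offsets_at + 4 * i + b] = uint8_t(off >> (8 * b));
        put_u32(out, uint32_t(objs[i].size())); // length-prefixed payload
        out.insert(out.end(), objs[i].begin(), objs[i].end());
    }
    return out;
}

// Random-access decode of object i: no need to scan objects 0..i-1,
// which is what makes decoding trivially parallel across threads.
std::string decode_one(const std::vector<uint8_t>& enc, size_t i) {
    uint32_t off = get_u32(enc, 4 + 4 * i);
    uint32_t len = get_u32(enc, off);
    return std::string(enc.begin() + off + 4, enc.begin() + off + 4 + len);
}
```

Because `decode_one` is read-only and touches disjoint byte ranges, a range of indices can be split evenly across threads, which mirrors the parallel decoding strategy described in this section.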
After the array is traversed, take out the cached object encodings in turn and append them to the output encoding; if the array size is 1, the object is encoded recursively, written to the end of the output encoding, and the recursion ends.

**The pseudocode for the coding process is as follows:**

@@ -127,7 +127,7 @@ void encode(objs) //Input: objs = array of objects to be encoded
}
```

-The introduction of offsets enables the decoding module to have random access to the element encoding。The array range of Offsets can be spread evenly across multiple threads, so that each thread can access different parts of the object array in parallel and decode them separately。Because it is read-only access, this parallel approach is thread-safe and only needs to summarize the output at the end.。
+The introduction of offsets gives the decoding module random access to each element's encoding. The index range of the Offsets array can be spread evenly across multiple threads, so that each thread accesses a different part of the object array in parallel and decodes it separately. Because the access is read-only, this parallel approach is thread-safe and only needs to merge the outputs at the end.

**The pseudo-code for the decoding process is as follows:**

@@ -156,11 +156,11 @@ Objs decode(RLP Rlps)

## 2.
Transaction Verification & Parallelization of Data Drop

-For data-level parallelism, there are a variety of mature multithreaded programming models in the industry.。While explicit multithreaded programming models such as Pthread can provide more granular control over threads, they require us to have skillful mastery of thread communication and synchronization.。The higher the complexity of the implementation, the greater the chance of making mistakes, and the more difficult it will be to maintain the code in the future.。Our main goal is to parallelize only intensive loops, so Keep It Simple & Stupid is our coding principle, so we use an implicit programming model to achieve our goal。
+For data-level parallelism there is a variety of mature multithreaded programming models in the industry. Explicit models such as Pthreads offer fine-grained control over threads, but they demand skillful mastery of thread communication and synchronization; the more complex the implementation, the greater the chance of mistakes and the harder the code will be to maintain. Our main goal is only to parallelize intensive loops, so "Keep It Simple & Stupid" is our coding principle, and we therefore use an implicit programming model to achieve it.

-After repeated trade-offs, we have chosen the Thread Building Blocks (TBB) open source library from Intel among the many implicit multithreaded programming models on the market.。In terms of data-level parallelism, TBB is a veteran, and the TBB runtime system not only masks the implementation details of the underlying worker threads, but also automatically balances workloads between processors based on the amount of tasks, thus making full use of the underlying CPU resources.。
+After repeated trade-offs, we chose Intel's open-source Threading Building Blocks (TBB) library from among the many implicit multithreaded programming models on the market. TBB is a veteran of data-level parallelism: its runtime system not only hides the implementation details of the underlying worker threads, but also automatically balances the workload across processors according to the task volume, making full use of the underlying CPU resources.

-**With TBB, the code for transaction validation and data drop is as follows.**
+**With TBB, the code for transaction verification and data persistence is as follows:**

```
// Parallel transaction verification
@@ -192,13 +192,13 @@ tbb::parallel_for(tbb::blocked_range(0, transactions.size()),
});
```

-As you can see, in addition to using the tbb provided by the TBB::parallel _ for parallel loop and tbb::The code inside the loop body is almost unchanged outside the blocked _ range reference data shard, close to C.++Native syntax is exactly what makes TBB。TBB provides parallel interfaces with a high level of abstraction, such as generic parallel algorithms such as parallel _ for and parallel _ for _ each, which makes the transformation easier.。At the same time, TBB does not depend on any language or compiler, as long as it can support ISO C.++Standard compiler, there is TBB use。
+As you can see, apart from using the TBB-provided `tbb::parallel_for` parallel loop and `tbb::blocked_range` to reference the data shard, the code inside the loop body is almost unchanged and stays close to native C++ syntax; this is exactly the charm of TBB. TBB provides highly abstract parallel interfaces, such as the generic parallel algorithms `parallel_for` and `parallel_for_each`, which make the transformation easier. Moreover, TBB does not depend on any particular language or compiler: wherever a compiler supports the ISO C++ standard, TBB can be used.

Of course, using TBB is not entirely free of extra burden; for example, inter-thread safety still needs to be carefully analyzed and ensured by developers. But TBB thoughtfully provides a set of convenient tools, such as atomic variables, thread-local storage and parallel containers, to help us solve mutual exclusion between threads. These parallel tools are also widely used in FISCO BCOS and safeguard its stable operation.

#### Write at the end

-After a set of parallel optimization of the combination of fist, FISCO BCOS performance to a higher level。The results of the stress test show that the transaction processing capacity of FISCO BCOS has been successfully improved by 1.74 times compared to before the parallel transformation, basically achieving the expected effect of this link.。
+After this combination of parallel optimizations, FISCO BCOS performance reached a new level. Stress-test results show that the transaction throughput of FISCO BCOS improved by a factor of 1.74 compared with before the parallel transformation, basically achieving the expected effect of this stage.

-But we also deeply understand that the road to performance optimization is long, the shortest board of the barrel always alternates, the parallel way is, through repeated analysis, disassembly, quantification and optimization, so that the modules work together, the whole system to achieve an elegant balance, and the optimal solution is always in the "jump" to get the place.。
+But we also understand deeply that the road of performance optimization is long and the shortest stave of the barrel keeps changing. The way of parallelism is to repeatedly analyze, decompose, quantify and optimize, so that the modules cooperate and the whole system reaches an elegant balance; the optimal solution always lies where one more "leap" can reach it.

diff --git a/3.x/en/docs/articles/3_features/30_architecture/transaction_lifetime.md b/3.x/en/docs/articles/3_features/30_architecture/transaction_lifetime.md
index 947c52499..bf3357b02 100644
--- a/3.x/en/docs/articles/3_features/30_architecture/transaction_lifetime.md
+++
b/3.x/en/docs/articles/3_features/30_architecture/transaction_lifetime.md
@@ -2,28 +2,28 @@
Author: Li Chen Xi | FISCO BCOS Core Developer

-Transactions - the core of the blockchain system, responsible for recording everything that happens on the blockchain。With the introduction of smart contracts in the blockchain, transactions go beyond the original definition of "value transfer," and a more precise definition should be a digital record of a transaction in the blockchain.。transactions, large or small, require the involvement of transactions。
+Transactions are the core of a blockchain system, responsible for recording everything that happens on the blockchain. With the introduction of smart contracts, transactions go beyond the original definition of "value transfer"; a more precise definition is a digital record of an operation on the blockchain. Every matter on the chain, large or small, requires the involvement of transactions.

-The life of the transaction, through the stages shown in the chart below。This article will review the entire flow of the transaction and get a glimpse of the complete life cycle of the FISCO BCOS transaction.。
+The life of a transaction passes through the stages shown in the chart below. This article reviews the entire flow of a transaction for a glimpse of the complete life cycle of a FISCO BCOS transaction.

![](../../../../images/articles/transaction_lifetime/IMG_5188.PNG)

## Transaction Generation

-After the user's request is sent to the client, the client builds a valid transaction that includes the following key information.
+After the user's request is sent to the client, the client builds a valid transaction that includes the following key information:

1. Sending address: the user's own account, which is used to indicate where the transaction came from。
-2. Receiving address: The transactions in FISCO BCOS are divided into two categories, one is the transaction of the deployment contract and the other is the transaction of the call contract.。The former, since the transaction does not have a specific recipient, specifies that the receiving address for such transactions is fixed at 0x0;The latter requires that the receiving address of the transaction be set to the address of the contract on the chain.。
-3. Transaction-related data: A transaction often requires some user-provided input to perform the user's desired action, which is encoded into the transaction in binary form.。
-4. Transaction signature: In order to show that the transaction was indeed sent by itself, the user will provide the SDK with the private key to allow the client to sign the transaction, where the private key and the user account are one-to-one correspondence.。
+2. Receiving address: transactions in FISCO BCOS fall into two categories: transactions that deploy a contract and transactions that call a contract. Since the former has no specific recipient, the receiving address of such transactions is fixed at 0x0; the latter requires the receiving address to be set to the on-chain address of the contract.
+3. Transaction-related data: a transaction often requires some user-provided input to perform the user's desired action, and this input is encoded into the transaction in binary form.
+4. Transaction signature: to show that the transaction was indeed sent by the user, the user provides the SDK with a private key so that the client can sign the transaction; the private key corresponds one-to-one with the user account.

-The blockchain client then populates the transaction with the necessary fields, such as the transaction ID and blockLimit for transaction replay prevention.。For the specific structure and field meaning of the transaction, please refer to [Coding Protocol Document](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/protocol_description.html)After the transaction is constructed, the client then sends the transaction to the node via the Channel or RPC channel。
+The blockchain client then populates the transaction with the necessary fields, such as the transaction ID and the blockLimit used to prevent transaction replay. For the specific structure and field meanings of a transaction, please refer to the [Coding Protocol Document](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/protocol_description.html). After the transaction is constructed, the client sends it to a node via the Channel or RPC channel.

![](../../../../images/articles/transaction_lifetime/IMG_5189.PNG)

## Trading pool

-After a blockchain transaction is sent to a node, the node verifies whether a transaction is legitimate by verifying the transaction signature。If a transaction is legal, the node further checks whether the transaction has been repeated, and if it has never occurred, the transaction is added to the transaction pool and cached.。If the transaction is illegal or the transaction is repeated, the transaction will be discarded directly。
+After a blockchain transaction is sent to a node, the node verifies whether the transaction is legitimate by verifying its signature. If the transaction is legal, the node further checks whether it is a duplicate; if it has never occurred before, the transaction is added to the transaction pool and cached. If the transaction is illegal or duplicated, it is discarded directly.

![](../../../../images/articles/transaction_lifetime/IMG_5190.PNG)

@@ -35,22 +35,22 @@ In order to make the transaction reach all nodes as much as possible, other tran

## Transaction Packaging

-In order to improve the efficiency of transaction processing, and also to determine the order of execution after the transaction to ensure transactionality, when there are transactions in the transaction pool, the Sealer thread is responsible for taking out a certain number of transactions from the transaction pool in a first-in, first-out order, assembling them into blocks to be agreed upon, and then the blocks to be agreed upon are sent to each node for processing.。
+To improve transaction-processing efficiency, and to fix the execution order in advance so as to ensure transactionality, when there are transactions in the transaction pool the Sealer thread takes a certain number of transactions out of the pool in first-in, first-out order, assembles them into a block to be agreed upon, and sends that block to each node for processing.

![](../../../../images/articles/transaction_lifetime/IMG_5191.JPG)

## Transaction Execution

-After the node receives the block, it calls the block validator to take the transactions out of the block one by one and execute them.。In the case of precompiled contract code, the execution engine in the validator calls the corresponding C++function, otherwise the execution engine will hand over the transaction to the EVM (Ethereum Virtual Machine) for execution.。
+After the node receives the block, it calls the block validator to take the transactions out of the block one by one and execute them. For precompiled contract code, the execution engine in the validator calls the corresponding C++ function; otherwise, the execution engine hands the transaction over to the EVM (Ethereum Virtual Machine) for execution.

-The transaction may execute successfully, or it may fail due to logical errors or insufficient Gas。The result and status of the transaction execution are returned encapsulated in the transaction receipt.。
+The transaction may execute successfully, or it may fail due to a logical error or insufficient Gas. The result and status of the transaction execution are returned encapsulated in the transaction receipt.

![](../../../../images/articles/transaction_lifetime/IMG_5192.JPG)

## Trading consensus

-The blockchain requires an agreement between nodes on the execution result of the block before the block can be released.。The PBFT algorithm is generally used in FISCO BCOS to ensure the consistency of the entire system, and the general process is as follows: each node executes the same block independently, and then the nodes exchange their execution results.。
+The blockchain requires the nodes to agree on the execution result of a block before the block can be released. FISCO BCOS generally uses the PBFT algorithm to ensure the consistency of the entire system; the general process is that each node executes the same block independently, and the nodes then exchange their execution results.

## Trading Drop

-After the consensus block is released, the node needs to write the transactions and execution results in the block to the hard disk for permanent storage, and update the mapping table of block height and block hash, etc., and then the node will remove the transactions that have been dropped from the transaction pool to start a new round of the block process.。Users can query the transaction data and receipt information they are interested in in the historical data on the chain through information such as transaction hashes.。 \ No newline at end of file
+After the consensus block is released, the node writes the transactions and execution results in the block to disk for permanent storage and updates tables such as the mapping from block height to block hash; the node then removes the on-chain transactions from the transaction pool and starts a new round of block processing. Users can query the transaction data and receipt information they are interested in from the historical data on the chain, using information such as the transaction hash.
diff --git a/3.x/en/docs/articles/3_features/30_architecture/transaction_pool_optimization_strategy.md b/3.x/en/docs/articles/3_features/30_architecture/transaction_pool_optimization_strategy.md
index 48202d556..fffd2c7d1 100644
--- a/3.x/en/docs/articles/3_features/30_architecture/transaction_pool_optimization_strategy.md
+++ b/3.x/en/docs/articles/3_features/30_architecture/transaction_pool_optimization_strategy.md
@@ -4,7 +4,7 @@ Author : Chen Yujie | FISCO BCOS Core Developer

**Author language**

-In the FISCO BCOS blockchain system, transactions are stored in the transaction pool before they are put on the chain.。The trading pool is a small blockchain expert, acting as a quality inspector on the one hand, shutting out all illegal transactions;On the one hand, it is the responsibility of the supplier to deliver legal transactions to the consensus module.;Also responsible for pushing up-chain notifications to the client。It can be said that the transaction pool of the FISCO BCOS blockchain system is extremely busy, and its performance will directly affect the performance of the blockchain system。This article will lead you to unveil the transaction pool, understand the multiple identities of the transaction pool, and together understand how the transaction pool in the FISCO BCOS blockchain system can navigate between multiple roles.。
+In the FISCO BCOS blockchain system, transactions are stored in the transaction pool before they are put on the chain. The transaction pool is a little all-rounder of the blockchain: on one hand it acts as a quality inspector, shutting out all illegal transactions; on another it acts as a supplier, delivering legal transactions to the consensus module; it is also responsible for pushing on-chain notifications to the client. It is fair to say that the transaction pool of the FISCO BCOS blockchain system is extremely busy, and its performance directly affects the performance of the blockchain system. This article unveils the transaction pool, introduces its multiple identities, and explains how the transaction pool in FISCO BCOS navigates between these roles.

------

@@ -12,66 +12,66 @@ In the FISCO BCOS blockchain system, transactions are stored in the transaction

![](../../../../images/articles/transaction_pool_optimization_strategy/IMG_5193.PNG)

-As shown in the figure above, in the FISCO BCOS blockchain system, it is the basic responsibility of the transaction pool to receive transactions sent by the storage client, which are the "raw materials" for the consensus module to package transactions and the synchronization module to broadcast transactions.。The trading pool needs to ensure the quality of the "raw materials" of these transactions and verify the legality of the transactions.。Of course, in order to prevent DOS attacks, FISCO BCOS limits the capacity of the trading pool and rejects new transactions sent by the client when the number of transactions in the trading pool exceeds the capacity limit.。
+As shown in the figure above, the basic responsibility of the transaction pool in the FISCO BCOS blockchain system is to receive and store transactions sent by the client; these transactions are the "raw materials" from which the consensus module packages blocks and which the synchronization module broadcasts. The transaction pool must ensure the quality of these "raw materials" by verifying the legality of the transactions. Of course, to prevent DoS attacks, FISCO BCOS limits the capacity of the transaction pool and rejects new transactions from the client when the number of transactions in the pool exceeds the capacity limit.

## Importance of trading pools

-In the FISCO BCOS blockchain system, the transaction pool, as a key system module, is responsible for interacting with the SDK and multiple back-end modules at the same time, and this section takes the multiple responsibilities of the transaction pool as a starting point to see how busy the transaction pool is.。
+In the FISCO BCOS blockchain system, the transaction pool is a key system module that interacts with the SDK and multiple back-end modules at the same time. Taking the multiple responsibilities of the transaction pool as a starting point, this section shows just how busy it is.

### Four Duties of the Trading Pool

![](../../../../images/articles/transaction_pool_optimization_strategy/IMG_5194.PNG)

-The above figure shows the multiple roles played by the transaction pool throughout the life cycle of the transaction from the client to the chain.
+The above figure shows the multiple roles played by the transaction pool throughout the life cycle of the transaction from the client to the chain -- **Transaction Quality Inspector**: Before the transaction is placed in the transaction pool, the validity of the transaction is detected, and the valid transaction must meet: ① The signature is valid.;② Non-duplicate transactions;③ Non-on-chain transactions。 +- **Transaction Quality Inspector**: Before the transaction is placed in the transaction pool, the validity of the transaction is detected, and the valid transaction must meet: ① The signature is valid;② Non-duplicate transactions;③ Non-on-chain transactions。 - **Transaction Supplier**: Store legitimate transactions and provide transaction "raw materials" for back-end modules。 -- **Check and sign anti-weight assistant**: Provides a block verification interface for the consensus module to verify only missed transactions in the transaction pool, improving the efficiency of consensus block verification.。 -- **Transaction Chain Notifier**: After the transaction is successfully linked, notify the client of the transaction execution result.。 +- **Check and sign anti-weight assistant**: Provides a block verification interface for the consensus module to verify only missed transactions in the transaction pool, improving the efficiency of consensus block verification。 +- **Transaction Chain Notifier**: After the transaction is successfully linked, notify the client of the transaction execution result。 -As the core module of the blockchain, the transaction pool has four functions, and each transaction processing process requires up to eight interactions with the three back-end modules, the four internal modules, and the client, which is indeed extremely busy.。 +As the core module of the blockchain, the transaction pool has four functions, and each transaction processing process requires up to eight interactions with the three back-end modules, the four internal modules, 
and the client, which is indeed extremely busy。 ### Role of trading pools -The following is an example of the transaction processing life cycle of a blockchain node to learn more about the role of the various roles in the transaction pool.。 +The following is an example of the transaction processing life cycle of a blockchain node to learn more about the role of the various roles in the transaction pool。 ![](../../../../images/articles/transaction_pool_optimization_strategy/IMG_5195.PNG) As shown in the figure above, the transactions sent by the client to the node will be pipelined, and each flow needs to have five processing processes: -- **Transaction detection**: After the transaction sent by the client is sent to the node, it must first be tested by the quality inspector of the transaction pool, and the transaction pool will only put transactions with valid signatures, non-duplicate, and not on the chain into the transaction pool.。 -- **Transaction Storage**: After the transaction has passed the "quality test," it is stored in the transaction pool, where the transaction pool assumes the role of "supplier," and the consensus module obtains new transactions from the transaction pool for packaging into blocks.;The synchronization module retrieves new transactions from the transaction pool and broadcasts them to all other consensus nodes.。 -- **Transaction Packaging & & Transaction Consensus**: The consensus module obtains legal transactions from the transaction pool, packages them into new blocks, and broadcasts them to all other consensus nodes. 
After receiving the packaged new blocks, other consensus nodes verify the signature of each transaction in the blocks in order to ensure the legality of the blocks.。Considering that transaction validation is a very time-consuming operation, and there is a high probability that transactions in the new block will hit the transaction pool of other nodes, in order to improve the efficiency of consensus validation, the transaction validation anti-heavy assistant is used at this time, it will only verify the transaction signatures in the new block that do not hit the local transaction pool.。 -- **Transaction Submission**After the transaction consensus is reached, the storage module is called to submit the transaction and its execution results to the blockchain database.。 -- **TRANSACTION NOTICE**: After the transaction is successfully chained, the chaining notifier of the transaction pool notifies the client of the transaction execution result.。 +- **Transaction detection**: After the transaction sent by the client is sent to the node, it must first be tested by the quality inspector of the transaction pool, and the transaction pool will only put transactions with valid signatures, non-duplicate, and not on the chain into the transaction pool。 +- **Transaction Storage**: After the transaction has passed the "quality test," it is stored in the transaction pool, where the transaction pool assumes the role of "supplier," and the consensus module obtains new transactions from the transaction pool for packaging into blocks;The synchronization module retrieves new transactions from the transaction pool and broadcasts them to all other consensus nodes。 +- **Transaction Packaging & & Transaction Consensus**: The consensus module obtains legal transactions from the transaction pool, packages them into new blocks, and broadcasts them to all other consensus nodes. 
After receiving the packaged new blocks, other consensus nodes verify the signature of each transaction in the blocks in order to ensure the legality of the blocks。Considering that transaction validation is a very time-consuming operation, and there is a high probability that transactions in the new block will hit the transaction pool of other nodes, in order to improve the efficiency of consensus validation, the transaction validation anti-heavy assistant is used at this time, it will only verify the transaction signatures in the new block that do not hit the local transaction pool。 +- **Transaction Submission**After the transaction consensus is reached, the storage module is called to submit the transaction and its execution results to the blockchain database。 +- **TRANSACTION NOTICE**: After the transaction is successfully chained, the chaining notifier of the transaction pool notifies the client of the transaction execution result。 -The transaction pool is involved in each process throughout the life cycle of the transaction from issuance to the blockchain, so the transaction pool is very important to the entire blockchain system, and each process of the transaction pool directly affects the performance of the blockchain system.。 +The transaction pool is involved in each process throughout the life cycle of the transaction from issuance to the blockchain, so the transaction pool is very important to the entire blockchain system, and each process of the transaction pool directly affects the performance of the blockchain system。 ## Trading Pool Optimization -From the previous introduction, we learned that the FISCO BCOS blockchain system's transaction pool is extremely busy and directly affects the performance of the blockchain system, and this section details the optimization process and optimization methods of the transaction pool.。 +From the previous introduction, we learned that the FISCO BCOS blockchain system's transaction pool is extremely busy and 
directly affects the performance of the blockchain system. This section details the optimization process and the optimization methods applied to the transaction pool.

### Optimize transaction processing pipeline efficiency

-As can be seen from the above transaction processing pipeline diagram, the transaction pool participates in every process of transaction processing, so each process of the transaction pool has a significant impact on system performance。FISCO BCOS blockchain system uses split and parallel execution of transaction verification tasks, transaction asynchronous notification strategy to optimize transaction pipeline processing efficiency.。
+As the transaction-processing pipeline diagram above shows, the transaction pool participates in every step of transaction processing, so each of its steps has a significant impact on system performance. To optimize pipeline efficiency, the FISCO BCOS blockchain system splits the transaction-verification task and executes it in parallel, and notifies transaction results asynchronously.

### Optimize transaction validation efficiency

-After the introduction of parallel trading in FISCO BCOS rc2, FISCO BCOS developers found that each block of the consensus module was often unable to fill up the transaction during the pressure test, and occasionally the consensus module was empty and waiting for the trading pool to provide new transactions.。The investigation found that the transaction pool as a transaction detector task is too heavy, not only to verify the transaction signature, but also to check whether the transaction is repeated, whether it has been on the chain, resulting in very low efficiency in providing transactions to transaction suppliers, often in short supply of transactions, seriously affecting the blockchain system TPS.。
+After parallel transaction execution was introduced in FISCO BCOS rc2, FISCO BCOS developers found that each block packed by the consensus module was often unable
to fill up with transactions during stress tests, and that the consensus module occasionally sat idle waiting for the transaction pool to supply new transactions. The investigation found that the pool's "transaction inspector" was overloaded: besides verifying transaction signatures, it also had to check whether each transaction was a duplicate or already on the chain. As a result, it supplied transactions to the "supplier" very inefficiently, transactions were frequently in short supply, and the TPS of the blockchain system suffered badly.

The following diagram depicts this shortage:

![](../../../../images/articles/transaction_pool_optimization_strategy/IMG_5196.PNG)

-In order to break the dilemma that the transaction pool is in short supply and optimize the processing efficiency of the transaction pipeline, the FISCO BCOS blockchain system introduces a special transaction verification module and assigns the "verification signature" responsibility of the "transaction detector" to this new module, and in order to further improve the efficiency of transaction storage, the transaction verification module verifies transactions in parallel.。After optimizing the transaction processing pipeline, the workload of the "transaction tester" is much lighter, the transaction supplier can fully meet the transaction needs of the consensus module, and still have some inventory.。
+To break this supply shortage and optimize pipeline efficiency, the FISCO BCOS blockchain system introduces a dedicated transaction-verification module and hands the "signature verification" duty of the "transaction inspector" over to it; to further improve transaction-storage efficiency, the verification module verifies transactions in parallel. After this pipeline optimization, the workload of the "transaction inspector" is much
lighter: the transaction "supplier" can fully meet the consensus module's demand for transactions, with some inventory to spare.

![](../../../../images/articles/transaction_pool_optimization_strategy/IMG_5197.PNG)

-After optimizing the processing pipeline,"Transaction Detector"The heavy work was"sufficient manpower""Validation module"Share, system performance significantly improved: using parallel transaction stress testing, FISCO BCOS blockchain system performance exceeded 1W.。
+After the pipeline optimization, the heavy work of the "transaction inspector" is shared by the well-staffed "verification module," and system performance improved markedly: in parallel-transaction stress tests, the FISCO BCOS blockchain system exceeded 10,000 TPS.

### Asynchronous Transaction Notification

-Through the previous introduction, we understand that the trading pool also bears the responsibility of transaction notification,"Transaction Notifier"It is also a busy role, it needs to receive the block drop signal, all the chain transactions to the client, if and only when the consensus module confirms that the previous round of consensus on the chain transactions will be notified, the consensus module will start the next round of consensus, transaction synchronization push will undoubtedly slow down the consensus process.。In order to further optimize the processing efficiency of the pipeline, the FISCO BCOS blockchain system adopts the transaction asynchronous notification strategy: the storage module places the transaction notification results in the transaction notification queue and returns them directly, and the consensus module directly starts the next round of consensus process.。
+From the introduction above, we know that the transaction pool also bears the duty of transaction notification. The "transaction notifier" is another busy role: it must receive the block-committed signal and push all newly chained transactions to the client, and if and only when the consensus module
confirms that the on-chain transactions of the previous round have been notified will it start the next round of consensus, so pushing notifications synchronously would undoubtedly slow down the consensus process. To further optimize pipeline efficiency, the FISCO BCOS blockchain system adopts an asynchronous transaction-notification strategy: the storage module places the notification results in a transaction-notification queue and returns immediately, and the consensus module starts the next round of consensus directly.

As shown in the following figure:

@@ -81,20 +81,20 @@ After adopting the transaction asynchronous notification strategy, the transacti

### Dual Cache Queue

-After FISCO BCOS 2.1, the FISCO BCOS team carefully counted the processing time of each block and felt that there was room for further improvement in system performance, so they decided to continue to optimize performance and further improve the processing power of the FISCO BCOS blockchain system.。
+After FISCO BCOS 2.1, the FISCO BCOS team carefully measured the processing time of each block and felt there was still room for improvement, so they decided to keep optimizing and further raise the processing power of the FISCO BCOS blockchain system.

-When the storage module and execution module are optimized to the extreme, the final pressure test results are always not as expected.。After checking, it is found that the transaction pool is in short supply again, but this short supply is caused by the client, after the client sends the transaction, a large number of threads are blocked waiting for the transaction verification to pass, return the transaction hash, can not empty more threads to send new transactions.。
+Yet with the storage and execution modules optimized to the extreme, the stress-test results still fell short of expectations. After investigation, it was found
that the transaction pool was starved again, but this time the shortage was caused by the client: after sending transactions, many client threads were blocked waiting for verification to pass and for the transaction hash to be returned, leaving no free threads to send new transactions.
-In order to improve the response speed of the node to the client, thereby improving the transaction sending rate of the client, the FISCO BCOS blockchain system introduces a transaction pre-buffer on the basis of the transaction storage queue held by the "transaction provider" to store the transactions sent by the client to the node and respond directly to the client.。
+To speed up the node's response to the client, and thereby raise the client's transaction-sending rate, the FISCO BCOS blockchain system adds a transaction pre-buffer in front of the transaction queue held by the "supplier": it stores the transactions the client sends to the node and replies to the client immediately.
-The transaction pre-buffer continuously sends cached transactions to the"Validation module"和"Transaction Detector"The verified transactions will eventually be placed in the real transaction queue for the transaction provider to schedule, as shown in the following figure.
+The pre-buffer continuously feeds its cached transactions to the "verification module" and the "transaction inspector"; transactions that pass verification are eventually placed in the real transaction queue for the "supplier" to schedule, as shown in the following figure.

![](../../../../images/articles/transaction_pool_optimization_strategy/IMG_5199.PNG)

-This dual-cache queue mechanism greatly improves the response speed of the transaction pool to the client, and the client can continue to free up threads to continue to send new transactions.。
+This dual-cache-queue mechanism greatly improves the transaction pool's response speed to the client, which can keep freeing up threads to send new transactions.

## Summary

-The transaction pool is very busy. In the FISCO BCOS blockchain system, the transaction pool is even busier.
It is used to verify transactions, store transactions, screen out duplicate or already-chained transactions, and push transaction execution results to the client. The transaction pool matters, and in the FISCO BCOS blockchain system it matters even more: an overloaded "transaction inspector" would sharply reduce the transaction insertion rate and starve the pool of transactions; without the "transaction pre-buffer," the pool would block the client's sending threads and lower the client's sending rate; and pushing transaction notifications synchronously would cost about 10% of system performance.
-On the road to performance optimization, trading pool performance optimization has always been ranked high.。
\ No newline at end of file
+On the road of performance optimization, transaction-pool optimization has always ranked near the top.
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/31_performance/cachedstorage_deadlock_debug.md b/3.x/en/docs/articles/3_features/31_performance/cachedstorage_deadlock_debug.md
index 882b06af8..c62a03af2 100644
--- a/3.x/en/docs/articles/3_features/31_performance/cachedstorage_deadlock_debug.md
+++ b/3.x/en/docs/articles/3_features/31_performance/cachedstorage_deadlock_debug.md
@@ -2,19 +2,19 @@

Author: Li Chen Xi | FISCO BCOS Core Developer

-In the work of integrating FISCO BCOS non-national secret single test and national secret single test, we found that the single test of CachedStorage accidentally falls into a stuck state and can continue to reappear locally.。The recurrence method is to execute the CachedStorage single test about 200 times in a loop, and there will be a situation where all threads are stuck in a waiting state and the single test cannot continue to execute, we suspect that a deadlock has occurred in CachedStroage, so we debug this。
+While integrating the FISCO BCOS non-SM and SM ("guomi", Chinese national cryptography) unit tests, we found that the CachedStorage unit test
accidentally falls into a stuck state, and the hang can be reproduced locally: run the CachedStorage unit test about 200 times in a loop, and eventually all threads are stuck waiting and the test cannot proceed. We suspected a deadlock inside CachedStorage, so we set out to debug it.

## Debug Ideas

-Traditional Chinese medicine pays attention to the treatment of diseases, debugging bugs also need to follow the idea of finding clues, reasonable inference, verification and solution.。
+Much as traditional Chinese medicine insists on tracing an illness to its root, debugging a bug should follow the same path: find clues, make reasonable inferences, then verify and fix.

### Observe the Thread Stacks

-When a deadlock occurs, use the / usr / bin / sample tool (in the mac platform environment) to print out the stacks of all threads and observe the working status of each thread.。From the thread stack of all threads, it is observed that there is a thread (here called T1) stuck in the touchCache function on line 698 of CachedStorage.cpp, click on the reference [specific code implementation](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/release-2.3.0-bsn/libstorage/CachedStorage.cpp)。
+When the deadlock occurs, we use the /usr/bin/sample tool (on macOS) to print the stacks of all threads and observe what each thread is doing. Among them, one thread (called T1 here) is stuck in the touchCache function at line 698 of CachedStorage.cpp; see the [specific code implementation](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/release-2.3.0-bsn/libstorage/CachedStorage.cpp).

![](../../../../images/articles/cachedstorage_deadlock_debug/IMG_5257.PNG)

-As you can see from the code snippet, T1 has acquired the read lock of m _ cachesMutex on line 691: code RWMutexScoped(some_rw_mutex, false)It means to obtain the
read lock of a read-write lock.;Accordingly, the code RWMutexScoped(some_rw_mutex, true)The RWMutex is a Spin Lock.。
+As the code snippet shows, at line 691 T1 has acquired the read lock of m_cachesMutex: the code RWMutexScoped(some_rw_mutex, false) means acquiring the read lock of a read-write lock, while RWMutexScoped(some_rw_mutex, true) acquires the write lock; this RWMutex is a spin lock.

T1 then tries to acquire the write lock of a cache item at line 698. Besides T1, another thread (called T2 here) is stuck in the touchCache function at line 691 of CachedStorage.cpp:

@@ -28,15 +28,15 @@ Finally, there is a Cache cleanup thread (here called T3) stuck in the removeCac

![](../../../../images/articles/cachedstorage_deadlock_debug/IMG_5260.PNG)

-As you can see from the code snippet, none of these threads hold any lock resources, but are simply trying to obtain the write lock of m _ cachesMutex.。
+As the code snippet shows, none of these threads holds any lock resource; they are all simply trying to acquire the write lock of m_cachesMutex.

### The Starvation Problem

-In the initial analysis of the problem, the most bizarre thing is that when T1 has already acquired the m _ cachesMutex read lock, other threads that are also trying to acquire the m _ cachesMutex read lock are unable to do so.。However, seeing that the T3 thread is trying to obtain the m _ cachesMutex write lock at this time, we think of the read-write lock hunger problem, and we think that the root cause of the
problem that other threads cannot obtain the read lock is probably in T3。
+In the initial analysis, the most puzzling point was that even though T1 had already acquired the m_cachesMutex read lock, other threads trying to acquire the same read lock could not. But seeing that T3 was trying to acquire the m_cachesMutex write lock at that moment, we thought of read-write-lock starvation and suspected that the root cause of the readers' failure lay with T3.

-The so-called read-write lock starvation problem means that in an environment where multiple threads share a read-write lock, if you set that as long as there is a read thread to obtain the read lock, the subsequent read threads that want to obtain the read lock can share the read lock, it may cause the write thread that wants to obtain the write lock to never get the execution opportunity (because the read-write lock has been preempted by other read threads)。In order to solve the hunger problem, some read and write locks will increase the priority of the write thread in some cases, that is, the write thread occupies the write lock first, while other read threads can only queue up after the write thread until the write thread releases the read and write lock.。
+Read-write-lock starvation arises when multiple threads share a read-write lock and any reader holding the read lock lets subsequent readers share it: a thread waiting for the write lock may then never get a chance to run, because the lock keeps being preempted by new readers. To prevent starvation, some read-write locks raise the writer's priority in certain cases: the writer claims the lock first, and later readers must queue behind it until the writer releases the lock.

-In the above problem, T1 has acquired the read lock of m _ cachesMutex.
If T3 acquires the time slice and executes to line 734 of CachedStorage.cpp, it will be stuck because it cannot acquire the write lock of m _ cachesMutex, and then other threads also start to execute and arrive at the line of code to acquire the read lock of m _ cachesMutex.。If the read-write anti-starvation policy really exists, then these threads (including T2) will indeed get stuck during the read lock acquisition phase, which will cause T2 to fail to release the cache lock, thus T1 cannot obtain the cache lock, and all threads will be stuck in waiting at this time.。
+In our case, T1 held the m_cachesMutex read lock. If T3 was then scheduled and reached line 734 of CachedStorage.cpp, it would block, unable to acquire the m_cachesMutex write lock; other threads would then run up to the line that acquires the m_cachesMutex read lock. If such an anti-starvation policy really exists, these threads (including T2) would indeed get stuck while acquiring the read lock, so T2 could never release its cache lock, T1 could never acquire that cache lock, and all threads would end up waiting.

Under this hypothesis everything is explained. The sequence diagram of the process is as follows:

@@ -52,24 +52,24 @@ To acquire a read lock:

![](../../../../images/articles/cachedstorage_deadlock_debug/IMG_5263.PNG)

-In the code that obtains the write lock, you can see that if the write thread does not obtain the write lock, a WRITER _ PENDING flag will be set, indicating that a write thread is waiting for the release of the read-write lock at this time.。
+In the write-lock acquisition code, you can see that a writer that fails to get the write lock sets a WRITER_PENDING flag, indicating that a writer is waiting for the lock to be released.

-In the obtained read lock code, you can also see
that if the read thread finds that the WRITER _ PENDING flag bit is set on the lock, it will wait in an honest loop, giving the write thread priority to obtain the read-write lock.。The behavior of the read-write lock here is perfectly in line with the previous speculation about the read-write lock's anti-hunger strategy, and the truth is now clear.。Now that you've found the cause of the problem, it's much easier to solve it.。In the design of CachedStorage, the priority of the cache cleanup thread is very low, and the call frequency is not high (about 1 time per second), so it is unreasonable to give it a high read-write lock priority.
+In the read-lock acquisition code, you can likewise see that when a reader finds the WRITER_PENDING flag set, it dutifully spins and waits, yielding priority to the writer. This behavior exactly matches our earlier speculation about the lock's anti-starvation strategy, so the cause is now clear, and the fix follows easily. In the design of CachedStorage, the cache-cleanup thread has very low priority and runs infrequently (about once per second), so giving it high read-write-lock priority is unreasonable.

![](../../../../images/articles/cachedstorage_deadlock_debug/IMG_5263.PNG)

-After the modification, the method of acquiring a write lock is similar to acquiring a read lock: every time a write lock is acquired, try _ acquire first, and if it is not acquired, give up the current time slice and try again until the write lock is acquired.
At this time, the write thread will not set the WRITER _ PENDING flag, which will not affect the normal execution of other read threads.。
+After the modification, write-lock acquisition works like read-lock acquisition: each attempt calls try_acquire first, and if that fails, the thread gives up its current time slice and retries until the write lock is obtained. The writer no longer sets the WRITER_PENDING flag, so it no longer disturbs the normal execution of the reader threads.

The relevant code has been submitted for version 2.5, which will be released soon, so stay tuned.

### Results

-Before modification, a deadlock will occur when CachedStorage is cycled for about 200 times.;Modified Loop Execution 2000+No deadlock has occurred, and each thread can work methodically.。
+Before the fix, looping the CachedStorage test about 200 times produced a deadlock; after the fix, 2000+ loop runs produced no deadlock, and every thread works in an orderly way.

## Experience summary

From the debugging process, we distilled some experience to share.

-First of all, the most effective way to analyze the deadlock problem is still the "two-step" method, that is, through pstack, sample, gdb and other tools to look at the thread stack, speculate on the thread
execution timing that caused the deadlock. The second step takes some imagination: deadlocks are usually caused by the interaction of two threads, and textbooks use two threads to explain the four necessary conditions for deadlock, but here, because of the special nature of read-write locks, three threads had to interleave in a particular order to deadlock, which is relatively rare.

-Secondly, the mindset that "as long as a thread acquires a read lock, then other threads that want to acquire a read lock must also be able to acquire a read lock" is problematic.。At least in the above problem, the existence of the anti-starvation policy causes the read thread after the write thread to fail to acquire the read lock。However, the conclusion of this article is not universally applicable, whether or not to prevent hunger, how to prevent hunger in the implementation of various multi-threaded libraries have different trade-offs.。Some articles have mentioned that the implementation of some libraries is to follow the "read thread absolute priority" rule, then these libraries will not encounter such problems, so still need specific analysis of specific problems。
\ No newline at end of file
+Secondly, the assumption that "as long as one thread holds the read lock, any other thread that wants the read lock can also get it" is flawed. At least in this problem, the anti-starvation policy makes readers queued behind a writer fail to acquire the read lock. This conclusion is not universal, though: whether and how to prevent starvation involves different trade-offs across multi-threading libraries. Some articles note that libraries following an absolute reader-priority rule would not hit this problem, so still
need specific analysis of the specific problem.
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/31_performance/consensus_and_sync_process_optimization.md b/3.x/en/docs/articles/3_features/31_performance/consensus_and_sync_process_optimization.md
index 736f37cc3..e92b77081 100644
--- a/3.x/en/docs/articles/3_features/31_performance/consensus_and_sync_process_optimization.md
+++ b/3.x/en/docs/articles/3_features/31_performance/consensus_and_sync_process_optimization.md
@@ -10,9 +10,9 @@ In Chaplin's film Modern Times, Chaplin plays a worker who repeats the action of

## What are consensus and synchronization?

-Consensus and synchronization, two core processes in FISCO BCOS nodes。They work together to achieve the core function of the blockchain: to produce a blockchain that is consistent on every node.。In the implementation of the FISCO BCOS node, the consensus and synchronization entities, which we call the consensus module and the synchronization module。
+Consensus and synchronization are two core processes in FISCO BCOS nodes. Together they realize the blockchain's core function: producing a chain that is identical on every node. In the FISCO BCOS node implementation, the entities that carry out consensus and synchronization are called the consensus module and the synchronization module.

-- **Consensus Module**: Responsible for the production block, so that the blocks generated by the node are identical.
+- **Consensus Module**: responsible for producing blocks and ensuring that the blocks generated by every node are identical
- **Synchronization Module**: responsible for broadcasting transactions so that transactions sent by users reach every node as far as possible

@@ -24,12 +24,12 @@ Let's take a look at the working environment of the consensus module and the synchronization

### Consensus Module

-Constantly process and send consensus messages to make the blocks on all nodes consistent, taking the PBFT consensus as an example.。
+It constantly processes and sends consensus messages to keep the blocks on all nodes consistent. Taking PBFT consensus as an example:

-1. **Packing Block**: Take out transactions from the transaction pool, package them into blocks and broadcast them, or process blocks from other nodes taken from the network module.
+1. **Pack block**: take transactions out of the transaction pool, package them into a block and broadcast it, or process blocks from other nodes received via the network module
2. **Execute block**: decode, verify, and execute the block, then sign and broadcast the block's execution result
-3. **Collect signatures**Collect the signatures of the execution results of other nodes. If a certain number of signatures are collected, a "commit message" is broadcast to other nodes.
-4. **Collect commit**Collect the commit messages of other nodes. When the number of collected commit messages reaches a certain number, the blocks are consistent and can be dropped.
+3. **Collect signatures**: collect other nodes' signatures over the execution result; once enough signatures are collected, broadcast a "commit message" to the other nodes
+4. **Collect commits**: collect other nodes' commit messages; once enough have been collected, the block is agreed upon and can be flushed to disk
5.
**Flush to disk**: append the block to the end of the existing chain to form the blockchain, and store it in the DB

![](../../../../images/articles/consensus_and_sync_process_optimization/IMG_5231.JPG)

@@ -42,70 +42,70 @@ Constantly send and receive transactions so that each transaction reaches as man

2. **Send transactions**: broadcast transactions that have not yet been sent to other nodes
3. **Receive transactions**: receive transactions from other nodes via the network module
4. **Check transactions**: decode and verify the received transactions
-5. **deposit transaction**: Deposit transactions that have passed the check in the transaction pool.
+5. **Store transactions**: put transactions that pass the checks into the transaction pool

![](../../../../images/articles/consensus_and_sync_process_optimization/IMG_5232.JPG)

## Problems and Optimization

-Chaplin and his partner each perform their duties in an orderly and seemingly very harmonious manner.。But when the backward productivity of the factory can not keep up with the strong market demand, even Chaplin such skilled workers, overtime can not finish。At this time, Chaplin had to start thinking about himself and his partners in the production relations。
+Chaplin and his partner each performed their duties in an orderly and seemingly harmonious way. But when the factory's backward productivity could not keep up with strong market demand, even a worker as skilled as Chaplin could not finish the work, however much overtime he did. At that point, Chaplin had to start rethinking his own place, and his partner's, in the relations of production.

-In previous designs, the consensus module and the synchronization module were not prioritized, resulting in them wasting a lot of time competing for resources。At the same time, there are many repetitive operations in the consensus module and the synchronization module, which also wastes time。Therefore, the implementation process of the consensus module and the synchronization module should be considered together to
optimize the process and improve efficiency.。After detailed analysis and careful validation, FISCO BCOS optimizes the consensus and synchronization module processes。Optimization is based on the following ideas:
+In earlier designs, the consensus module and the synchronization module had no priority ordering, so they wasted a lot of time competing for resources; they also performed many duplicate operations, which wasted still more time. The execution flows of the two modules should therefore be considered together to optimize the process and improve efficiency. After detailed analysis and careful validation, FISCO BCOS optimized the consensus and synchronization flows, guided by the following idea:

-**The consensus module is responsible for dominating the rhythm of the entire blockchain, and the consensus module should be allowed to go first.。The synchronization module, on the other hand, should play a good role in coordination, assisting the consensus module to come out faster.。**
+**The consensus module sets the pace of the entire blockchain and should be allowed to go first; the synchronization module should play a supporting role, helping the consensus module produce blocks faster.**

-Based on the above ideas, let's take a look at the optimization methods for several of these problems.。
+Based on this idea, let's look at how several of these problems were optimized.

### Problem 1: Job blocking

-Both the consensus module and the synchronization module obtain message packets from the network module, and then proceed to the next step according to the corresponding message packets.。However, due to the limitation of the number of network callback threads, the synchronization module occupies the callback threads of the network when
processing message packets, resulting in the consensus module being unable to process consensus messages from other nodes in a timely manner, and the consensus process is blocked.。 +Both the consensus module and the synchronization module obtain message packets from the network module, and then proceed to the next step according to the corresponding message packets。However, due to the limitation of the number of network callback threads, the synchronization module occupies the callback threads of the network when processing message packets, resulting in the consensus module being unable to process consensus messages from other nodes in a timely manner, and the consensus process is blocked。 ![](../../../../images/articles/consensus_and_sync_process_optimization/IMG_5233.JPG) ### How to solve?- Stripping the processing of synchronous messages from the network callback thread -Based on the idea of consensus module first, the consensus module should receive consensus messages in a more timely manner, and the synchronization module should not occupy the network callback thread for too long.。Therefore, when the synchronization module gets the message, instead of decoding and checking the transaction directly in the callback thread, it caches the synchronization message package and processes it slowly "privately" with another thread.。In this way, the processing of synchronous messages does not occupy the network callback thread for a long time, allowing consensus messages to respond faster.。 +Based on the idea of consensus module first, the consensus module should receive consensus messages in a more timely manner, and the synchronization module should not occupy the network callback thread for too long。Therefore, when the synchronization module gets the message, instead of decoding and checking the transaction directly in the callback thread, it caches the synchronization message package and processes it slowly "privately" with another thread。In this way, the processing 
of synchronous messages does not occupy the network callback thread for a long time, allowing consensus messages to respond faster。 ![](../../../../images/articles/consensus_and_sync_process_optimization/IMG_5234.JPG) ### Issue 2: Codec redundancy -The synchronization module receives the transaction in the synchronization message, which is encoded, and the synchronization module needs to decode it into the data structure in the node code and store it in the transaction pool.。When the consensus module packages a block, it takes the transaction out of the transaction pool, encodes the transaction, packages it into a block, and sends the block out.。In this process, the transaction is decoded and encoded, and there is redundancy in the operation.。 +The synchronization module receives the transaction in the synchronization message, which is encoded, and the synchronization module needs to decode it into the data structure in the node code and store it in the transaction pool。When the consensus module packages a block, it takes the transaction out of the transaction pool, encodes the transaction, packages it into a block, and sends the block out。In this process, the transaction is decoded and encoded, and there is redundancy in the operation。 ![](../../../../images/articles/consensus_and_sync_process_optimization/IMG_5235.JPG) ### How to solve?- Transaction encoding cache -Consensus takes precedence over synchronization, and unnecessary operations in the consensus module should be minimized。Therefore, when the synchronization module stores the transaction, the transaction code is also stored in the transaction pool.。When the consensus module takes transactions, it takes out the coded transactions directly from the transaction pool, eliminating the need for coding operations.。 +Consensus takes precedence over synchronization, and unnecessary operations in the consensus module should be minimized。Therefore, when the synchronization module stores the transaction, the 
transaction code is also stored in the transaction pool。When the consensus module takes transactions, it takes out the coded transactions directly from the transaction pool, eliminating the need for coding operations。 ![](../../../../images/articles/consensus_and_sync_process_optimization/IMG_5236.JPG) ### Question 3: Repeat -After receiving the transaction, the synchronization module needs to verify the signature of the transaction (referred to as "verification"), and the consensus module also needs to verify the transaction in the block after receiving the block.。There is a high probability that the transactions checked by the synchronization module and the consensus module are duplicated.。Checking is a very time-consuming operation, and each additional check consumes a lot of time.。 +After receiving the transaction, the synchronization module needs to verify the signature of the transaction (referred to as "verification"), and the consensus module also needs to verify the transaction in the block after receiving the block。There is a high probability that the transactions checked by the synchronization module and the consensus module are duplicated。Checking is a very time-consuming operation, and each additional check consumes a lot of time。 ![](../../../../images/articles/consensus_and_sync_process_optimization/IMG_5237.JPG) ### How to solve?--Check and remove the weight -Both the synchronization module and the consensus module go to the trading pool to check whether the transaction exists before checking the signature.。If it exists, omit the check sign operation。As a result, a transaction is checked and signed only once, reducing unnecessary check-and-sign overhead.。 +Both the synchronization module and the consensus module go to the trading pool to check whether the transaction exists before checking the signature。If it exists, omit the check sign operation。As a result, a transaction is checked and signed only once, reducing unnecessary check-and-sign 
overhead。 ![](../../../../images/articles/consensus_and_sync_process_optimization/IMG_5238.JPG) ### Can the solution be better??- - Try to make synchronous check sign, reduce the number of consensus module check sign -Still the idea of prioritizing the consensus module to minimize the operation of consensus module validation。Therefore, the synchronization module must run faster than the consensus module, and before the consensus module processes a transaction, the synchronization module gets the transaction first and gives priority to the transaction verification.。 +Still the idea of prioritizing the consensus module to minimize the operation of consensus module validation。Therefore, the synchronization module must run faster than the consensus module, and before the consensus module processes a transaction, the synchronization module gets the transaction first and gives priority to the transaction verification。 ![](../../../../images/articles/consensus_and_sync_process_optimization/IMG_5239.JPG) The strategy FISCO BCOS adopts here for the synchronization module is:**Full broadcast of transactions**。 -When one packaging node gets the transaction, the synchronization modules of the other nodes also receive the corresponding transaction.。When other nodes receive the block sent by the packaging node, the transactions contained in the block have already been verified by the synchronization module and written to the transaction pool.。At the same time, in order to make the synchronization module not lower than the consensus module in the processing speed of the same operation, the synchronization module's transaction codec also uses the same "parallel codec" and "transaction code cache" as the consensus module.。 +When one packaging node gets the transaction, the synchronization modules of the other nodes also receive the corresponding transaction。When other nodes receive the block sent by the packaging node, the transactions contained in the block have already been 
verified by the synchronization module and written to the transaction pool。At the same time, in order to make the synchronization module not lower than the consensus module in the processing speed of the same operation, the synchronization module's transaction codec also uses the same "parallel codec" and "transaction code cache" as the consensus module。 ## How about the result? -The process optimization of consensus and synchronization also improves the TPS of transaction processing to some extent.。After testing, the TPS of transaction processing increased to 1.75 times the original!More importantly, through process optimization, the dominance of consensus is determined, eliminating the performance impact of synchronization on consensus, allowing subsequent performance analysis to better focus on the consensus process.! +The process optimization of consensus and synchronization also improves the TPS of transaction processing to some extent。After testing, the TPS of transaction processing increased to 1.75 times the original!More importantly, through process optimization, the dominance of consensus is determined, eliminating the performance impact of synchronization on consensus, allowing subsequent performance analysis to better focus on the consensus process! Eliminating blocking, eliminating coding redundancy, eliminating duplicate checks, Chaplin and his partners work easier and smoother! 
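The encoding cache and verification dedup described above can be sketched in miniature. The following is an illustrative Python sketch, not FISCO BCOS's actual C++ API — the names `TxPool`, `add` and `get_encoded` are hypothetical. The pool keeps each transaction's original encoding and skips signature verification for transactions it has already seen:

```python
import hashlib
from typing import Callable, Dict, Optional

class TxPool:
    """Toy transaction pool (hypothetical names): caches each transaction's
    original encoding and remembers which transactions were already
    signature-verified."""

    def __init__(self) -> None:
        self._encoded: Dict[str, bytes] = {}  # tx hash -> cached raw encoding
        self._verified: set = set()           # hashes whose signatures were checked

    def add(self, encoded_tx: bytes, verify_sig: Callable[[bytes], None]) -> str:
        """Called by the sync module: verify the signature at most once,
        then keep the raw encoding so consensus never has to re-encode."""
        tx_hash = hashlib.sha256(encoded_tx).hexdigest()
        if tx_hash in self._verified:
            return tx_hash            # already checked: skip the costly verification
        verify_sig(encoded_tx)        # expensive signature check, done only once
        self._verified.add(tx_hash)
        self._encoded[tx_hash] = encoded_tx
        return tx_hash

    def get_encoded(self, tx_hash: str) -> Optional[bytes]:
        """Called by the consensus module when packing a block: reuse the
        cached encoding instead of re-encoding the transaction."""
        return self._encoded.get(tx_hash)
```

With this shape, a transaction that arrives twice — once via full broadcast and once inside a block — triggers only one `verify_sig` call, and the consensus module reads the cached bytes back instead of re-encoding.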
-In the next article, we will focus on parallel optimization, so that parallelizable operations are parallelized.!Please look forward to the**Omni-directional parallel processing**》。
\ No newline at end of file
+In the next article we will turn to parallel optimization, so that everything that can be parallelized is parallelized!Stay tuned for 《**Omni-directional parallel processing**》。
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/31_performance/dag-based_parallel_transaction_execution_engine.md b/3.x/en/docs/articles/3_features/31_performance/dag-based_parallel_transaction_execution_engine.md
index 54c7ce317..d13ddcc0d 100644
--- a/3.x/en/docs/articles/3_features/31_performance/dag-based_parallel_transaction_execution_engine.md
+++ b/3.x/en/docs/articles/3_features/31_performance/dag-based_parallel_transaction_execution_engine.md
@@ -2,29 +2,29 @@ Author: Li Chen Xi | FISCO BCOS Core Developer

-In the blockchain world, transactions are the basic units that make up transactions。To a large extent, transaction throughput can limit or broaden the applicable scenarios of blockchain business. The higher the throughput, the wider the scope of application and the larger the user scale that blockchain can support。Currently, TPS (Transaction per Second), which reflects transaction throughput, is a hot indicator for evaluating performance。In order to improve TPS, the industry has put forward an endless stream of optimization solutions, all kinds of optimization means of the final focus, are to maximize the parallel processing capacity of transactions, reduce the processing time of the whole process of transactions。
+In the blockchain world, transactions are the basic units of business activity。To a large extent, transaction throughput limits or broadens the business scenarios a blockchain can serve: the higher the throughput, the wider the scope of application and the larger the user base the blockchain can support。TPS (Transactions per Second), which reflects transaction throughput, is currently a popular indicator for evaluating performance。To improve TPS, the industry has produced an endless stream of optimization schemes, and all of them ultimately aim to maximize the parallel processing capacity of transactions and reduce end-to-end transaction processing time。

-In the multi-core processor architecture has become the mainstream of today, the use of parallel technology to fully tap the potential of the CPU is an effective solution.。A parallel transaction executor (PTE, Parallel Transaction Executor) based on the DAG model is designed in FISCO BCOS 2.0.。
+Now that multi-core processors are mainstream, using parallel techniques to fully tap the potential of the CPU is an effective solution。FISCO BCOS 2.0 therefore includes a parallel transaction executor (PTE, Parallel Transaction Executor) based on the DAG model。

-PTE can take full advantage of multi-core processors, so that transactions in the block can be executed in parallel as much as possible;At the same time to provide users with a simple and friendly programming interface, so that users do not have to care about the cumbersome parallel implementation details。The experimental results of the benchmark program show that compared with the traditional serial transaction execution scheme, the PTE running on the 4-core processor can achieve about 200% ~ 300% performance improvement under ideal conditions, and the calculation improvement is proportional to the number of cores。
+PTE takes full advantage of multi-core processors so that the transactions in a block are executed in parallel as much as possible, while offering users a simple and friendly programming interface that hides the cumbersome details of parallelization。Benchmark results show that, compared with the traditional serial execution scheme, PTE running on a 4-core processor achieves roughly a 200%–300% performance improvement under ideal conditions, and the gain scales with the number of cores。

PTE has laid a solid foundation for the performance of FISCO BCOS. This article gives a comprehensive introduction to the design and implementation of PTE, covering:

- **Background**: performance bottlenecks of the traditional scheme and an introduction to the DAG parallel model
- **Design ideas**: problems encountered when applying PTE to FISCO BCOS, and their solutions
- **Architecture design**: the architecture and core workflow of FISCO BCOS with PTE
-- **core algorithm**: Introduces the main data structures and algorithms used.
+- **Core algorithm**: the main data structures and algorithms used
- **Performance evaluation**: performance and scalability test results of PTE

## Background

-The FISCO BCOS transaction processing module can be abstracted as a transaction-based state machine。In FISCO BCOS, "state" refers to the state of all accounts in the blockchain, while "transaction-based" means that FISCO BCOS uses transactions as a state migration function and updates from the old state to the new state based on the content of the transaction.。FISCO BCOS starts from the genesis block state, continuously collects transactions occurring on the network and packages them into blocks, and executes transactions in the blocks among all nodes participating in the consensus.。When transactions within a block are executed on multiple consensus nodes and the state is consistent, we say that consensus is reached on the block and the block is permanently recorded in the blockchain。
+The FISCO BCOS transaction processing module can be abstracted as a transaction-driven state machine。Here "state" refers to the state of all accounts on the blockchain, and "transaction-driven" means that transactions act as the state-transition function: the old state is updated to the new state according to the content of each transaction。Starting from the genesis state, FISCO BCOS continuously collects transactions from the network, packs them into blocks, and executes the transactions of each block on every node participating in consensus。When the transactions of a block have been executed on multiple consensus nodes and the resulting states agree, we say consensus has been reached on the block, and the block is permanently recorded on the blockchain。

-As can be seen from the above-mentioned blockchain packaging → consensus → storage process, executing all transactions in the block is the only way to blockchain。The traditional transaction execution scheme is that the execution unit reads the transactions one by one from the block to be agreed upon, and after each transaction is executed, the state machine migrates to the next state until all transactions are executed serially, as shown in the following figure.
+As the packaging → consensus → storage process above shows, executing all the transactions in a block is an unavoidable step。In the traditional execution scheme, the execution unit reads transactions one by one from the block under consensus; after each transaction executes, the state machine moves to the next state, until all transactions have been executed serially, as shown in the following figure。

![](../../../../images/articles/dag-based_parallel_transaction_execution_engine/IMG_5175.PNG)

-Obviously, this way of executing transactions is not performance-friendly。Even if two transactions do not intersect, they can only be executed in order of priority.。As far as the relationship between transactions is concerned, since the one-dimensional "line" structure has such pain points, why not look at the two-dimensional "graph" structure??
+Obviously, this way of executing transactions is not performance-friendly。Even if two transactions do not overlap at all, they can only be executed one after the other。Since the one-dimensional "line" structure of transaction relationships has such pain points, why not look at a two-dimensional "graph" structure?
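One way to picture such a "graph" structure: give each transaction the set of mutually exclusive resources it touches, and add an edge from an earlier transaction to a later one whenever they share a resource. A minimal sketch (illustrative Python only, not the actual FISCO BCOS implementation):

```python
from collections import defaultdict
from typing import Dict, List, Set

def build_dependency_dag(tx_resources: List[Set[str]]) -> Dict[int, Set[int]]:
    """tx_resources[i] is the set of mutually exclusive resources touched by
    transaction i (in block order). Returns edges: tx index -> indices of
    later transactions that depend on it. Ordering same-resource transactions
    by their block position keeps the graph acyclic."""
    last_toucher: Dict[str, int] = {}        # resource -> most recent tx using it
    edges: Dict[int, Set[int]] = defaultdict(set)
    for i, resources in enumerate(tx_resources):
        for res in resources:
            if res in last_toucher and last_toucher[res] != i:
                edges[last_toucher[res]].add(i)  # later tx waits for earlier tx
            last_toucher[res] = i
    return dict(edges)

# Three transfers; each touches the two account balances involved.
transfers = [{"A", "B"}, {"C", "D"}, {"D", "E"}]
print(build_dependency_dag(transfers))  # {1: {2}}: D→E must wait for C→D
```

A transaction whose index never appears as a dependent in `edges` has in-degree zero and can start immediately, which is exactly the property the DAG execution model exploits.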
In practical applications, a transaction dependency graph can be organized according to the mutually exclusive resources each transaction uses during execution ("mutually exclusive" means exclusive use of a resource; in the transfer problem above, the mutually exclusive resources are the balances of the accounts involved)。To prevent the dependency relationships in the graph from forming a cycle, we can specify that, among transactions involving the same mutually exclusive resources, a later transaction in the block always depends on the earlier ones。

@@ -33,39 +33,39 @@ As shown in the figure below, the 6 transfer transactions on the left can be organized into the transaction DAG on the right

![](../../../../images/articles/dag-based_parallel_transaction_execution_engine/IMG_5176.PNG)

-In a trade DAG, a trade with an entry of zero is a ready trade that has no dependencies and can be put into operation immediately.。When the number of ready transactions is greater than 1, ready transactions can be spread across multiple CPU cores for parallel execution.。When a transaction is executed, the entry of all transactions dependent on the transaction is reduced by 1, and as the transactions continue to be executed, ready transactions continue to be generated.。In the extreme case, if the number of layers of the constructed transaction DAG is 1 (i.e., all transactions are independent transactions without dependencies), the increase in the overall execution speed of the transaction will directly depend on the number of cores n of the processor, and if n is greater than the number of transactions in the block, the execution time of all transactions in the block is the same as the execution time of a single transaction。
+In a transaction DAG, a transaction whose in-degree is zero is a ready transaction: it has no outstanding dependencies and can be executed immediately。When there is more than one ready transaction, they can be spread across multiple CPU cores and executed in parallel。When a transaction finishes executing, the in-degree of every transaction that depends on it is reduced by 1, and as execution proceeds, new ready transactions keep appearing。In the extreme case where the constructed transaction DAG has only one layer (i.e., all transactions are independent, with no dependencies), the overall speed-up depends directly on the number of processor cores n; if n is at least the number of transactions in the block, executing all the transactions in the block takes no longer than executing a single transaction。

How can the transaction DAG model, which in theory has such irresistibly attractive properties, be applied to FISCO BCOS?

## Design Ideas

-**To apply the transaction DAG model, the primary problem we face is: for the same block, how to ensure that all nodes can reach the same state after execution, which is a key issue related to whether the blockchain can be out of the block normally.。**
+**To apply the transaction DAG model, the first problem we face is: for the same block, how do we ensure that all nodes reach the same state after execution?This is key to whether the blockchain can produce blocks normally。**

-FISCO BCOS Adoption Verification(state root, transaction root, receipt root)The way in which the triples are equal to determine whether the states agree。The transaction root is a hash value calculated based on all transactions in the block. As long as all consensus nodes process the same block data, the transaction root must be the same。
+FISCO BCOS determines whether states agree by verifying that the (state root, transaction root, receipt root) triples are equal。The transaction root is a hash computed over all transactions in the block; as long as all consensus nodes process the same block data, the transaction roots are necessarily identical。

-As we all know, for instructions executed in parallel on different CPU cores, the order of execution between instructions cannot be predicted in advance, and the same applies to transactions executed in parallel.。In the traditional transaction execution scheme, every time a transaction is executed, the state root changes once, and the changed state root is written into the transaction receipt. After all transactions are executed, the final state root represents the current state of the blockchain, and a receipt root is calculated based on all transaction receipts.。
+As we all know, the order in which instructions run in parallel on different CPU cores cannot be predicted in advance, and the same applies to transactions executed in parallel。In the traditional execution scheme, the state root changes after every transaction, and the changed state root is written into that transaction's receipt;after all transactions have executed, the final state root represents the current state of the blockchain, and a receipt root is computed from all the transaction receipts。

-As you can see, in the traditional execution scenario, state root plays a role similar to a global shared variable.。When transactions are executed in parallel and out of order, the traditional method of calculating state root is obviously no longer applicable, because on different machines, the order of execution of transactions is generally different, at this time there is no guarantee that the final state root can be consistent, similarly, receive root can not guarantee consistency.。
+As you can see, in the traditional scheme the state root acts like a global shared variable。When transactions execute in parallel and out of order, this way of computing the state root no longer works: the execution order generally differs from machine to machine, so neither the final state root nor the receipt root can be guaranteed to be consistent。

-In FISCO BCOS, the solution we use is to execute the transaction first, record the history of each transaction's change of state, and then calculate a state root based on these history after all transactions are executed, and at the same time, the state root in the transaction receipt is all changed to the final state root after all transactions are executed, thus ensuring that even if the transactions are executed in parallel, the final consensus node can still reach an agreement.。
+The solution used in FISCO BCOS is to execute the transactions first while recording each transaction's state changes, and only after all transactions have executed compute a single state root from these records;the state root written into every transaction receipt is likewise this final state root。This ensures that the consensus nodes can still reach agreement even when transactions are executed in parallel。

-**Once the status problem is solved, the next question is how to determine if there is a dependency between two transactions.?**
+**With the state problem solved, the next question is: how do we decide whether two transactions depend on each other?**

-Unnecessary performance loss if two transactions are judged to have no dependencies;Conversely, if the two transactions rewrite the state of the same account but are executed in parallel, the final state of the account may be uncertain。Therefore, the determination of dependencies is an important issue that affects performance and can even determine whether the blockchain can work properly.。
+If two transactions that are actually independent are judged to be dependent, performance is lost unnecessarily;conversely, if two transactions that rewrite the state of the same account are executed in parallel, the final state of that account may become indeterminate。Dependency determination is therefore an important issue that affects performance and can even decide whether the blockchain works correctly at all。

In a simple transfer scenario, we can decide whether two transactions depend on each other from the sender and recipient addresses of the transfers. Take the following three transfer transactions: A → B, C → D, D → E. 
It is easy to see that transaction D → E depends on the result of transaction C → D, while transaction A → B has nothing to do with the other two and can therefore be executed in parallel。

-This analysis is correct in a blockchain that only supports simple transfers, but once it is put into a Turing-complete blockchain that runs smart contracts, it may not be as accurate because we don't know exactly what is going on in the transfer contract written by the user, and what might happen is: A.-> B's transaction seems to have nothing to do with the account status of C and D, but in the user's underlying implementation, A is a special account, and every money transferred out of account A must be deducted from account C for a fee.。In this scenario, if all three transactions are related, they cannot be executed in parallel, and if the transactions are also divided according to the previous dependency analysis method, they are bound to fall.。
+This analysis is correct in a blockchain that only supports simple transfers, but it may break down in a Turing-complete blockchain that runs smart contracts, because we do not know exactly what happens inside a user-written transfer contract。For example, the A->B transaction may appear unrelated to the states of accounts C and D, yet in the user's implementation A is a special account, and every amount transferred out of A incurs a fee deducted from account C。In that case all three transactions are related and cannot be executed in parallel;partitioning transactions by the earlier dependency-analysis method is then bound to produce wrong results。

-Can we automatically deduce which dependencies actually exist in the transaction based on the content of the user's contract??The answer is not very reliable。It's hard to keep track of what data is actually manipulated in a user contract, and even doing so costs a lot of money, which is a 
far cry from our goal of optimizing performance.。
+Can we automatically deduce the dependencies that actually exist in a transaction from the content of the user's contract?The answer: not reliably。It is hard to track exactly which data a user contract manipulates, and even when it can be done the cost is high, which runs counter to our goal of optimizing performance。

-In summary, we have decided to delegate the assignment of transaction dependencies in FISCO BCOS to developers who are more familiar with the content of the contract.。Specifically, the mutually exclusive resources on which the transaction depends can be represented by a set of strings, FISCO BCOS exposes the interface to the developer, the developer defines the resources on which the transaction depends in the form of a string, informs the executor on the chain, and the executor automatically arranges all transactions in the block as a transaction DAG based on the transaction dependencies specified by the developer.。For example, in a simple transfer contract, the developer only needs to specify that the dependency of each transfer transaction is the sender address.+Recipient's Address。Further, if the developer introduces another third-party address in the transfer logic, the dependency needs to be defined as the sender address.+Recipient Address+The third party address.。
+In summary, we decided that in FISCO BCOS the specification of transaction dependencies should be delegated to developers, who know the contract's content best。Specifically, the mutually exclusive resources a transaction depends on are represented as a set of strings:FISCO BCOS exposes an interface through which the developer declares, as strings, the resources each transaction depends on and passes them to the on-chain executor, and the executor automatically arranges all transactions in the block into a transaction DAG according to these declared dependencies。For example, in a simple transfer contract, the developer only needs to specify that each transfer transaction depends on "sender address + recipient address"。Further, if the transfer logic involves a third-party address, the dependency must be declared as "sender address + recipient address + third-party address"。

-This method is more intuitive and simple to implement, but also more general, applicable to all smart contracts, but also increases the responsibility of developers, developers must be very careful when specifying transaction dependencies, if the dependencies are not written correctly, the consequences are unpredictable.。The relevant interface for specifying dependencies will be given in a subsequent article using the tutorial, this article assumes for the time being that all the trade dependencies discussed are clear and unambiguous.。
+This approach is intuitive, simple to implement, and general enough to apply to all smart contracts, but it also puts more responsibility on developers:they must be very careful when specifying transaction dependencies, because incorrectly written dependencies have unpredictable consequences。The interface for specifying dependencies will be presented in a subsequent tutorial article;for now, this article assumes that all the transaction dependencies discussed are clear and unambiguous。

-**After solving the two more important issues above, there are still some more detailed engineering issues left: such as whether parallel transactions can be mixed with non-parallel transactions for execution.?How to ensure the global uniqueness of resource strings?**
+**With the two major issues above solved, some engineering details remain:for example, can parallel transactions be mixed with non-parallel transactions during execution?And how do we guarantee the global uniqueness of resource strings?**

-The 
answer is also not complicated, the former can be achieved by inserting non-parallel transactions as a barrier (barrier) into the transaction DAG - i.e., we believe that it is dependent on all of its pre-order transactions and at the same time is dependent on all of its post-order transactions -;The latter can be solved by adding a special flag to identify the contract in the transaction dependency specified by the developer.。As these problems do not affect the fundamental design of PTE, this paper will not expand。 +The answers are not complicated either:the former can be handled by inserting each non-parallel transaction into the transaction DAG as a barrier - that is, it is treated as depending on all of its predecessor transactions and being depended on by all of its successor transactions;the latter can be solved by adding a special contract-identifying flag to the transaction dependencies specified by the developer。As these problems do not affect the fundamental design of PTE, this article will not expand on them。 Everything is ready, and FISCO BCOS with the new trade execution engine PTE is on the horizon。 @@ -77,21 +77,21 @@ Everything is ready, and FISCO BCOS with the new trade execution engine PTE is o **The core processes of the whole architecture are as follows:** -Users send transactions to nodes through clients such as SDKs, where transactions can be executed in parallel or not。The transactions are then synchronized between the nodes, and the node with the packaging rights invokes the packer (Sealer) to take a certain amount of transactions from the transaction pool (Tx Pool) and package them into a block.。Thereafter, the block is sent to the consensus unit (Consensus) to prepare for inter-node consensus。 +Users send transactions to nodes through clients such as SDKs;a transaction may or may not be parallelizable。Transactions are then synchronized between the nodes, and the node holding the packaging rights invokes the packer 
(Sealer) to take a certain amount of transactions from the transaction pool (Tx Pool) and package them into a block。Thereafter, the block is sent to the consensus unit (Consensus) to prepare for inter-node consensus。 -The transaction in the block needs to be executed before consensus, and this is where the PTE exerts its power.。As can be seen from the architecture diagram, the PTE first reads the transactions in the block in order and inputs them to the DAG Constructor (DAG Constructor), which constructs a transaction DAG containing all transactions based on the dependencies of each transaction, and the PTE then wakes up the worker thread pool and uses multiple threads to execute the transaction DAG in parallel.。The Joiner suspends the main thread until all threads in the worker thread pool finish executing the DAG. At this time, the Joiner calculates the state root and receipt root based on the modification records of each transaction to the state, and returns the execution results to the upper caller.。 +The transaction in the block needs to be executed before consensus, and this is where the PTE exerts its power。As can be seen from the architecture diagram, the PTE first reads the transactions in the block in order and inputs them to the DAG Constructor (DAG Constructor), which constructs a transaction DAG containing all transactions based on the dependencies of each transaction, and the PTE then wakes up the worker thread pool and uses multiple threads to execute the transaction DAG in parallel。The Joiner suspends the main thread until all threads in the worker thread pool finish executing the DAG. 
At this time, the Joiner calculates the state root and receipt root based on the modification records of each transaction to the state, and returns the execution results to the upper caller。 -After the transaction is completed, if the status of each node is consistent, a consensus is reached, and the block is then written to the underlying storage (Storage) and permanently recorded on the blockchain.。 +After the transactions are executed, if the states of all nodes are consistent, consensus is reached, and the block is then written to the underlying storage (Storage) and permanently recorded on the blockchain。 ## core algorithm -### 1. The data structure of the transaction DAG. +### 1. The data structure of the transaction DAG The data structure of the transaction DAG is shown in the following figure: ![](../../../../images/articles/dag-based_parallel_transaction_execution_engine/IMG_5178.PNG) -**Vertex Class**For the most basic type, in the trading DAG, each Vertex instance represents a trade.。The Vertex class contains: +**Vertex Class**is the most basic type;in the transaction DAG, each Vertex instance represents a transaction。The Vertex class contains: - **inDegree**: Indicates the degree of entry for this vertex - **outEdges**: Used to store the outgoing edge information of the node, that is, the ID list of all vertices connected to the outgoing edge @@ -105,20 +105,20 @@ The data structure of the transaction DAG is shown in the following figure: - **void generate()Interface**When all edge relationships have been entered, call this method to initialize the topLevel member - **ID waitPop()Interface**Get a vertex ID with 0 in from topLevel -**TxDAG class**is the encapsulation of the DAG class to a higher level and is the bridge between the DAG and the transaction, which contains. 
+**TxDAG class**is a higher-level encapsulation of the DAG class and the bridge between the DAG and the transactions;it contains: - **dag**The DAG class instance held by - **exeCnt**: Total number of transactions executed - **totalTxs**: Total number of transactions - **txs**: List of transactions in the block -### 2. The construction process of the transaction DAG. +### 2. The construction process of the transaction DAG -When constructing a transaction DAG, the DAG constructor first sets the value of the totalTxs member to the total number of transactions in the block and initializes the dag object based on the total number of transactions.。Subsequently, initialize an empty resource mapping table criticalFields and scan each transaction one by one in order。 +When constructing a transaction DAG, the DAG constructor first sets the totalTxs member to the total number of transactions in the block and initializes the dag object accordingly。It then initializes an empty resource mapping table criticalFields and scans each transaction one by one in order。 For a transaction tx, the DAG constructor will resolve all the dependencies of the transaction, and for each dependency, it will go to criticalFields to query, if for a dependency d, a previous transaction also depends on the dependency, then build an edge between the two transactions, and update the mapping of d in criticalFields as the ID of tx。 -The pseudo-code for the transaction DAG construction process is as follows. +The pseudo-code for the transaction DAG construction process is as follows: ``` criticalFields ← map(); @@ -140,9 +140,9 @@ dag.generate(); ### 3. 
Execution process of transaction DAG -When a PTE is created, a worker thread pool is generated for executing the transaction DAG according to the configuration, the size of the thread pool is equal to the number of logical cores of the CPU by default, and the life cycle of this thread pool is the same as the life cycle of the PTE.。The worker thread will continuously call the waitPop method of the dag object to take out the ready transaction with an entry of 0 and execute it, and after execution, the entry of all subsequent dependent tasks of the transaction is reduced by 1, and if the entry of the transaction is reduced to 0, the transaction is added to the topLevel.。Loop the above process until the trade DAG is executed。 +When a PTE is created, it generates a worker thread pool for executing the transaction DAG according to the configuration;the pool size defaults to the number of logical CPU cores, and the pool's life cycle is the same as that of the PTE。Each worker thread continuously calls the waitPop method of the dag object to take out a ready transaction with an in-degree of 0 and execute it;after execution, the in-degree of every transaction depending on it is reduced by 1, and any transaction whose in-degree drops to 0 is added to topLevel。This process loops until the entire transaction DAG has been executed。 -The pseudocode for the transaction DAG execution process is as follows. +The pseudocode for the transaction DAG execution process is as follows: ``` while exeCnt < totalTxs do @@ -161,13 +161,13 @@ end ## Performance evaluation -We chose two benchmark programs to test how PTE has changed the performance of FISCO BCOS, namely, a transfer contract based on a pre-compiled framework implementation and a transfer contract written in the Solidity language, with the following code paths for the two contracts. 
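The construction and execution pseudocode above can be condensed into a short runnable sketch. This is an illustrative re-implementation of the criticalFields-based DAG construction and the worker-pool execution loop, not the actual FISCO BCOS C++ code; the names build_dag and execute_dag are invented for the example.

```python
import threading
from collections import deque

def build_dag(tx_deps):
    # tx_deps[i]: list of resource strings transaction i depends on.
    # Mirrors the criticalFields pseudocode: draw an edge from the last
    # earlier transaction that touched each resource to the current one.
    n = len(tx_deps)
    out_edges = [[] for _ in range(n)]
    in_degree = [0] * n
    critical_fields = {}                      # resource string -> last tx ID
    for tx_id, deps in enumerate(tx_deps):
        for d in deps:
            prev = critical_fields.get(d)
            if prev is not None and prev != tx_id:
                out_edges[prev].append(tx_id)
                in_degree[tx_id] += 1
            critical_fields[d] = tx_id
    return out_edges, in_degree

def execute_dag(tx_deps, run, workers=4):
    # Worker-pool execution: threads pop ready (in-degree 0) transactions,
    # run them, then release their dependants, until all txs are done.
    out_edges, in_degree = build_dag(tx_deps)
    n = len(tx_deps)
    lock = threading.Lock()
    top_level = deque(i for i in range(n) if in_degree[i] == 0)
    done = [0]

    def worker():
        while True:
            with lock:
                if done[0] == n:
                    return
                tx_id = top_level.popleft() if top_level else None
            if tx_id is None:
                continue                      # spin until a tx becomes ready
            run(tx_id)                        # "execute" the transaction
            with lock:
                done[0] += 1
                for nxt in out_edges[tx_id]:
                    in_degree[nxt] -= 1
                    if in_degree[nxt] == 0:
                        top_level.append(nxt)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

For three transfers where tx0 and tx2 both touch account "A" while tx1 touches only "C", build_dag yields a single edge tx0→tx2, so the executor is free to run tx0 and tx1 concurrently while tx2 waits for tx0.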
+We chose two benchmark programs to test how PTE changes the performance of FISCO BCOS: a transfer contract implemented on the precompiled framework and a transfer contract written in Solidity。The code paths of the two contracts are: FISCO-BCOS/libprecompiled/extension/DagTransferPrecompiled.cpp web3sdk/src/test/resources/contract/ParallelOk.sol -We use a single node chain for testing, because we mainly focus on the transaction processing performance of PTE, so we do not consider the impact of network and storage latency.。 +We test with a single-node chain: since we focus on the transaction processing performance of PTE itself, the impact of network and storage latency is not considered。 **The basic hardware information of the test environment is shown in the following table**: @@ -177,13 +177,13 @@ We use a single node chain for testing, because we mainly focus on the transacti ![](../../../../images/articles/dag-based_parallel_transaction_execution_engine/IMG_5180.JPG) -In the performance test section, we mainly test the transaction processing capabilities of PTE and Serial Transaction Execution (Serial) under each test program.。It can be seen that compared with the serial execution mode, PTE has achieved a speedup of 2.91 and 2.69 times from left to right, respectively。PTE has excellent performance for both pre-compiled and Solidity contracts。 +In the performance test, we compare the transaction processing capability of PTE with serial transaction execution (Serial) under each benchmark。Compared with the serial execution mode, PTE achieves speedups of 2.91x and 2.69x, from left to right。PTE performs excellently for both precompiled and Solidity contracts。 ### 2. 
Scalability testing ![](../../../../images/articles/dag-based_parallel_transaction_execution_engine/IMG_5181.JPG) -In the scalability test section, we mainly test the transaction processing power of PTE at different CPU core numbers, using a benchmark program based on a pre-compiled framework to implement a transfer contract。As can be seen, the transaction throughput of PTE increases approximately linearly as the number of cores increases。However, it can also be seen that as the number of cores increases, the rate of performance growth slows down because the overhead of inter-thread scheduling and synchronization increases as the number of cores increases.。 +In the scalability test, we measure PTE's transaction processing capability with different numbers of CPU cores, using the precompiled-framework transfer contract as the benchmark。As shown, PTE's transaction throughput grows approximately linearly with the number of cores。However, the growth rate slows as cores are added, because the overhead of inter-thread scheduling and synchronization also increases with the number of cores。 #### Write at the end diff --git a/3.x/en/docs/articles/3_features/31_performance/flow_control.md b/3.x/en/docs/articles/3_features/31_performance/flow_control.md index 9da1afcff..c7fc88e18 100644 --- a/3.x/en/docs/articles/3_features/31_performance/flow_control.md +++ b/3.x/en/docs/articles/3_features/31_performance/flow_control.md @@ -4,9 +4,9 @@ Author : Chen Yujie | FISCO BCOS Core Developer ## Introduction -As a distributed system, in the face of large data burst request scenarios, skyrocketing requests can easily cause blockchain services or interfaces to be unavailable, and in severe cases, the entire blockchain system may fall into an avalanche state.。 +As a distributed system, when facing scenarios with large bursts of requests, skyrocketing request volume can easily 
cause blockchain services or interfaces to be unavailable, and in severe cases, the entire blockchain system may fall into an avalanche state。 -In order to provide more stable, reliable and flexible services, FISCO BCOS v2.5 introduces the flow control function, which limits the flow from the two dimensions of nodes and groups. On the one hand, it protects the blockchain system in the face of large data burst requests to ensure the normal operation of the system and improve system availability.;On the other hand, it reduces resource interference between blockchain nodes and groups and improves the quality of service of the blockchain system.。 +In order to provide more stable, reliable and flexible services, FISCO BCOS v2.5 introduces the flow control function, which limits traffic along the two dimensions of nodes and groups. On the one hand, it protects the blockchain system in the face of large bursts of requests, ensuring normal operation and improving availability;on the other hand, it reduces resource interference between blockchain nodes and groups and improves the quality of service of the blockchain system。 ## Why introduce flow control @@ -14,83 +14,83 @@ FISCO BCOS introduces flow control to: - Response to large data burst requests - Reduce resource interference between blockchain nodes and groups -- Reduce the interaction between modules +- Reduce interaction between modules ### Response to large data burst requests ![](../../../../images/articles/flow_control/IMG_5265.PNG) -The above figure compares**No flow control function**和**With flow control function**The processing of the blockchain system in the face of large data burst requests.。 +The above figure compares how the blockchain system handles large bursts of requests**without flow control**and**with flow control**。 Assuming that the processing capacity of the blockchain system is 2W, when the business accesses the blockchain node at a request 
rate of 20W: -- **No flow control**scenario, the system requests the business**Accept all orders**If the business continues to initiate requests at a rate higher than the system's processing capacity, the entire system may eventually fall into an avalanche and be unable to respond to any business requests.。 -Joined**Flow control function**After that, the flow control module will**根据****System Processing Capacity Filtering Business Requests**。When the business request rate exceeds the system's processing capacity, the flow control module rejects the remaining processing requests to maintain the system."balance of payments"state of health;The service can adaptively adjust the request rate based on the information to protect the blockchain system.。 +- **No flow control**: the system**accepts every**business request。If the business keeps initiating requests at a rate higher than the system's processing capacity, the entire system may eventually fall into an avalanche and be unable to respond to any business request。 +With the**flow control function**enabled, the flow control module**filters business requests according to the system's processing capacity**。When the business request rate exceeds the system's processing capacity, the flow control module rejects the excess requests to keep the system in a healthy"balance of payments"state;the business can adaptively adjust its request rate based on this feedback, protecting the blockchain system。 -In short, the introduction of the flow control module is to add a layer of security protection to the blockchain system, so that the system can work robustly and respond to business requests normally in the case of receiving large data bursts.。 +In short, the flow control module adds a layer of protection to the blockchain system, so that the system keeps working robustly and responding to business requests normally even when hit by large bursts of requests。 ### 
Reduce resource interference between blockchain nodes / groups ![](../../../../images/articles/flow_control/IMG_5266.PNG) -Note: The two nodes in the figure belong to two different chains and are connected to two different services. +Note: The two nodes in the figure belong to two different chains and are connected to two different services -As shown in the figure above, when multiple blockchain nodes are deployed on the same machine, there will be resource competition, some nodes occupy too many system resources will affect the normal service of other nodes.。 +As shown in the figure above, when multiple blockchain nodes are deployed on the same machine, they compete for resources;a node that occupies too many system resources will affect the normal service of the other nodes。 -- At time t1, service 1 continues to request the left node at a request rate of 1W, and the traffic of this node surges. After the system receives and processes the request, it uses 90% of the CPU -After t time interval, service 2 requests the right node at a request rate of 5000. The node is short of resources and can only preempt 10% of the CPU, which is very slow +- At time t1, service 1 keeps requesting the left node at a request rate of 1W;the node's traffic surges, and after receiving and processing the requests the system uses 90% of the CPU +- After an interval of t, service 2 requests the right node at a request rate of 5000;the node is starved of resources, can only grab 10% of the CPU, and responds very slowly -In the above scenario, the left node occupies too many system resources and affects the quality of service of the right node。After the introduction of traffic control, you can limit the rate at which each node receives requests, control the resource occupation of each blockchain node, and avoid service quality degradation or service unavailability problems caused by blockchain node resource competition.。 +In the above scenario, the left node occupies too many system resources and degrades the quality of service of the right node。With traffic control, you can limit the rate at which each node accepts requests, control each blockchain node's resource usage, and avoid the service degradation or unavailability caused by resource competition between blockchain nodes。 Still above figure for example: -- At time t1, service 1 continues to request node 1 at a request rate of 1W, and the node 1 flow control module rejects redundant requests based on the configured request threshold(Here set the threshold to 5000)The CPU utilization rate of the machine is maintained at 50%. +- At time t1, service 1 keeps requesting node 1 at a request rate of 1W;the node 1 flow control module rejects the excess requests according to the configured request threshold (set to 5000 here), and the machine's CPU utilization stays at 50% - Business 1 Received"flow overload"can adjust its request rate to 5000 -- After t interval, business 2 requests node 2 at a request rate of 5000, at which point the machine still has 50% of the CPU left, enough to process 5000 requests, and business 2 requests get a normal response. 
+- After an interval of t, business 2 requests node 2 at a request rate of 5000;the machine still has 50% of its CPU left, enough to handle 5000 requests, so business 2's requests are answered normally -Similar to resource competition when multiple blockchain nodes are running on a machine, there is also resource competition between groups under the multi-group architecture, and excessive resource occupation by one group will also affect the quality of service of other groups, using group-level flow control is a good way to solve the resource competition between groups.。 +Similar to the competition between blockchain nodes on one machine, groups also compete for resources under the multi-group architecture;excessive resource occupation by one group degrades the quality of service of the other groups, and group-level flow control is a good way to resolve this competition。 ### Reduce interaction between modules Different modules in the same node or group also have resource competition problems, mainly network resource competition. 
The modules with network resource competition include: -- Consensus Module -- Transaction Synchronization Module -- Block Synchronization Module -- AMOP Module +- Consensus module +- Transaction synchronization module +- Block synchronization module +- AMOP module -Among them, the consensus module and the transaction synchronization module are the key modules that determine the quality of service of the blockchain system, and other modules take up too much network resources, which will affect these key modules and thus the availability of the system.。FISCO BCOS implements module-level flow control, which prioritizes the quality of service of key modules and improves system robustness by controlling non-critical network traffic.。 +Among them, the consensus module and the transaction synchronization module are the key modules that determine the quality of service of the blockchain system, and other modules take up too much network resources, which will affect these key modules and thus the availability of the system。FISCO BCOS implements module-level flow control, which prioritizes the quality of service of key modules and improves system robustness by controlling non-critical network traffic。 ## Flow control functions -FISCO BCOS implements service-to-node request rate limiting and module granularity network traffic limiting from both node and group dimensions.。The former limits the request rate from services to nodes to cope with large data bursts and ensure flexible services for blockchain nodes.;The latter limits the network traffic of non-critical modules such as block synchronization and AMOP, giving priority to ensuring the performance and stability of key modules such as consensus and transaction synchronization.。 +FISCO BCOS implements service-to-node request rate limiting and module granularity network traffic limiting from both node and group dimensions。The former limits the request rate from services to nodes to cope with large data bursts and ensure 
flexible services for blockchain nodes;The latter limits the network traffic of non-critical modules such as block synchronization and AMOP, giving priority to ensuring the performance and stability of key modules such as consensus and transaction synchronization。 ![](../../../../images/articles/flow_control/IMG_5267.PNG) - **Node-level request rate limiting**Limit the total request rate from the service to the node. When the request rate exceeds the specified threshold, the node will reject the service request to avoid node overload and prevent excessive requests from causing node abnormalities;Control node resource usage and reduce resource competition between blockchain nodes -- **Node-level flow control**Limits the average outbound bandwidth of the node. When the average outbound bandwidth of the node exceeds the set threshold, the node suspends sending blocks and rejects received AMOP requests after receiving block synchronization requests to avoid the impact of block synchronization and AMOP message packet sending on node consensus. +- **Node-level flow control**Limits the average outbound bandwidth of the node. When the average outbound bandwidth of the node exceeds the set threshold, the node suspends sending blocks and rejects received AMOP requests after receiving block synchronization requests to avoid the impact of block synchronization and AMOP message packet sending on node consensus In the group dimension, the main functions include: -- **Group-level request rate limiting**Limits the service-to-group request rate. When the request rate exceeds the threshold, the group rejects the service request. This function protects the blockchain nodes in the scenario of large data burst requests, controls group resource usage, and reduces resource competition between groups. -- **Group-level flow control**Limits the average outbound bandwidth of each group. 
When the average outbound bandwidth traffic of the group exceeds the set threshold, the group suspends the block sending and AMOP request packet forwarding logic, giving priority to providing network traffic to the consensus module. +- **Group-level request rate limiting**Limits the service-to-group request rate. When the request rate exceeds the threshold, the group rejects the service request. This function protects the blockchain nodes in the scenario of large data burst requests, controls group resource usage, and reduces resource competition between groups +- **Group-level flow control**Limits the average outbound bandwidth of each group. When the average outbound bandwidth traffic of the group exceeds the set threshold, the group suspends the block sending and AMOP request packet forwarding logic, giving priority to providing network traffic to the consensus module **When request rate limiting is turned on for both nodes and groups:** -When a node receives a request packet sent by a service, it first invokes the node-level request rate limiting module to determine whether to receive the request, and if the request is received, it enters the group-level request rate limiting module, and the request checked by the module is forwarded to the corresponding group for processing.。 +When a node receives a request packet sent by a service, it first invokes the node-level request rate limiting module to determine whether to receive the request, and if the request is received, it enters the group-level request rate limiting module, and the request checked by the module is forwarded to the corresponding group for processing。 **When both nodes and groups have network traffic control enabled:** -1, the node receives the client AMOP request, first call the node-level flow control module to determine whether to receive the AMOP request. 
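The layered check described in this section (a request must pass the node-level limiter before it reaches the group-level limiter) can be illustrated with a toy token-bucket limiter. This is only a sketch of the general mechanism, not FISCO BCOS's actual implementation; the class and function names are invented for the example.

```python
import time

class TokenBucket:
    # Illustrative token-bucket rate limiter: `rate` permits are refilled
    # per second, with at most one second's worth of burst capacity.
    def __init__(self, rate):
        self.rate = float(rate)
        self.capacity = float(rate)
        self.tokens = self.capacity
        self.last = time.monotonic()

    def try_acquire(self):
        now = time.monotonic()
        # Refill tokens according to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True          # request accepted
        return False             # request rejected; caller should back off

def accept_request(node_limiter, group_limiter):
    # Mirrors the two-level check: the node-level limiter is consulted
    # first, and only if it accepts does the group-level limiter run.
    return node_limiter.try_acquire() and group_limiter.try_acquire()
```

Flooding such a limiter with far more requests than its configured rate rejects the excess, which is the "reject the remaining requests to stay in balance" behaviour the article describes; the short-circuit `and` also means a request refused at node level never consumes a group-level token.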
+1. When the node receives a client AMOP request, it first calls the node-level flow control module to decide whether to accept the request 2. After a group receives a block request from a group corresponding to another node, the group needs to: -- Call the node-level flow control module to determine whether the average outgoing bandwidth of the node exceeds the set threshold. - Invoke the group-level traffic control module to determine whether the outbound bandwidth of the group exceeds the set threshold. If and only if the average outbound bandwidth of the node level and the group level does not exceed the set threshold, the group will reply to the block request. +- Call the node-level flow control module to determine whether the node's average outbound bandwidth exceeds the set threshold +- Invoke the group-level traffic control module to determine whether the outbound bandwidth of the group exceeds the set threshold. If and only if neither the node-level nor the group-level average outbound bandwidth exceeds its threshold will the group reply to the block request ## How to use the flow control function -The flow control configurations are located in the [flow _ control] configuration items of the config.ini and group.i.ini configuration files, respectively, corresponding to the node-level flow control configuration and the group-level flow control configuration, respectively.。Here to show you how to enable, turn off, configure flow control。 +The flow control configuration lives in the [flow_control] section of the config.ini and group.i.ini files, corresponding to node-level and group-level flow control respectively。The following shows how to enable, disable, and configure flow control。 ### Node Level Flow Control @@ -110,7 +110,7 @@ An example of turning on the request rate limit and designing a node to accept 2 ### Network traffic restrictions -- [flow _ control] 
.outgoing _ bandwidth _ limit: the bandwidth limit of the node, in Mbit / s. When the bandwidth exceeds this value, block sending will be suspended and AMOP requests sent by clients will be rejected, but the traffic of block consensus and transaction broadcast will not be limited.。**This configuration item is turned off by default**To turn on, set the;Remove。 +- [flow_control].outgoing_bandwidth_limit: the node's outbound bandwidth limit, in Mbit/s. When the bandwidth exceeds this value, block sending is suspended and AMOP requests sent by clients are rejected, but block consensus and transaction broadcast traffic is not limited。**This configuration item is commented out (disabled) by default**;to enable it, remove the leading comment symbol。 The following is an example of how to turn on the node outbound bandwidth traffic limit and set it to 5MBit / s: @@ -139,7 +139,7 @@ An example of turning on request rate limiting and configuring the group to acce ### Intra-Group Network Traffic Restrictions -[flow _ control] .outgoing _ bandwidth _ limit: the outbound bandwidth limit, in Mbit / s. When the outbound bandwidth exceeds this value, blocks are suspended, but the traffic of block consensus and transaction broadcast is not limited.。**This configuration item is turned off by default**To turn on, set the;Remove。 +[flow_control].outgoing_bandwidth_limit: the group's outbound bandwidth limit, in Mbit/s. When the group's outbound bandwidth exceeds this value, block sending is suspended, but block consensus and transaction broadcast traffic is not limited。**This configuration item is commented out (disabled) by default**;to enable it, remove the leading comment symbol。 The following is an example of how to turn on the group outbound bandwidth traffic limit and set it to 2MBit / s: @@ -152,8 +152,8 @@ The following is an example of how to turn on the group outbound bandwidth traff ## SUMMARY -With the development of blockchain technology, more and more applications are deployed in blockchain systems, and the requirements for the quality of service of blockchain systems are increasing, making it more important for blockchain systems to be flexible, available, stable and robust.。 +With the development of blockchain technology, more and more applications are being deployed on blockchain systems, and the requirements for their quality of service keep rising, making it ever more important for blockchain systems to be flexible, available, stable and robust。 FISCO BCOS v2.5 introduces the flow control function, which is an important step in FISCO BCOS's exploration of blockchain flexible services。 -The community will continue to polish and optimize the service quality of the blockchain system, hoping to provide better and highly available flexible services for massive business scenarios in the future.。How to do a good job of flow control without affecting the performance of the original system?Please pay attention to the follow-up articles of the community to explain the specific implementation principle of the flow control strategy for you.。Welcome everyone to discuss the exchange, positive feedback using experience and suggestions for improvement。 \ No newline at end of file +The community will continue to polish and optimize the blockchain system's quality of service, hoping to provide better, highly available and flexible services for massive business scenarios in the future。How to do a good job of 
flow control without affecting the performance of the original system?Please pay attention to the follow-up articles of the community to explain the specific implementation principle of the flow control strategy for you。Welcome everyone to discuss the exchange, positive feedback using experience and suggestions for improvement。 \ No newline at end of file diff --git a/3.x/en/docs/articles/3_features/31_performance/parallel_contract_development_framework_with_tutorials.md b/3.x/en/docs/articles/3_features/31_performance/parallel_contract_development_framework_with_tutorials.md index 67dc5330f..8b5747cc1 100644 --- a/3.x/en/docs/articles/3_features/31_performance/parallel_contract_development_framework_with_tutorials.md +++ b/3.x/en/docs/articles/3_features/31_performance/parallel_contract_development_framework_with_tutorials.md @@ -2,7 +2,7 @@ Author : SHI Xiang | FISCO BCOS Core Developer -This special series of articles to catch up with now, you may want to ask, FISCO BCOS parallel how to use?As the end of the topic, this article will reveal the true face of Lushan and teach you how to use the parallel features of FISCO BCOS.!FISCO BCOS provides a parallelizable contract development framework, where developers write contracts in accordance with the framework specifications that can be executed in parallel by FISCO BCOS nodes.。The advantages of parallel contracts are: +This special series of articles to catch up with now, you may want to ask, FISCO BCOS parallel how to use?As the end of the topic, this article will reveal the true face of Lushan and teach you how to use the parallel features of FISCO BCOS!FISCO BCOS provides a parallelizable contract development framework, where developers write contracts in accordance with the framework specifications that can be executed in parallel by FISCO BCOS nodes。The advantages of parallel contracts are: - **high throughput**: Multiple independent transactions are executed at the same time, which maximizes the CPU resources 
of the machine and thus has a high TPS
- **Can be expanded**: The performance of transaction execution can be improved by improving the configuration of the machine to support the continuous expansion of business scale

@@ -13,31 +13,31 @@ Next, I'll show you how to write FISCO BCOS parallel contracts and how to deploy

### parallel mutex

-Whether two transactions can be executed in parallel depends on whether the two transactions exist.**Mutex**。Mutual exclusion means that two transactions are each**There is an intersection of the collection of operating contract storage variables.**。
+Whether two transactions can be executed in parallel depends on whether there is **mutual exclusion** between them。Mutual exclusion means that the sets of contract storage variables operated on by the two transactions **intersect**。

-For example, in a transfer scenario, a transaction is a transfer operation between users。with transfer(X, Y) Represents the transfer interface from user X to user Y.。The mutual exclusion is as follows:
+For example, in a transfer scenario, a transaction is a transfer operation between users。Let transfer(X, Y) denote the interface that transfers from user X to user Y。The mutual exclusion is as follows:

![](../../../../images/articles/parallel_contract_development_framework_with_tutorials/IMG_5187.PNG)

A more specific definition is given here:

-- **Mutex parameters:**合同**Interface**parameters related to read / write operations for contract storage variables in。For example, the transfer interface(X, Y)X and Y are mutually exclusive parameters.。
+- **Mutex parameters**: the parameters of a contract **interface** that are involved in read / write operations on contract storage variables。For example, in the transfer interface transfer(X, Y), X and Y are mutex parameters。

-- **Mutex Object**: a sum of money**Transaction**The specific mutually exclusive content extracted from the mutually exclusive parameters.。For example, the transfer interface(X, Y), in a transaction that calls this interface, the specific parameter is transfer(A, B)then the mutex for this operation is [A, B];Another transaction, the argument to the call is transfer(A, C)then the mutex for this operation is [A, C]。
+- **Mutex object**: the concrete mutex content extracted from a **transaction** according to the mutex parameters。For example, for the transfer interface transfer(X, Y), a transaction calling this interface with the concrete parameters transfer(A, B) has the mutex object [A, B];another transaction called with transfer(A, C) has the mutex object [A, C]。

**To determine whether two transactions can be executed in parallel at the same time is to determine whether the mutually exclusive objects of the two transactions intersect。Transactions with empty intersections can be executed in parallel。**

## Writing Parallel Contracts

-FISCO BCOS provides**parallelizable contract development framework**The developer only needs to develop the contract according to the specification of the framework and define the mutually exclusive parameters of each contract interface to implement the contract that can be executed in parallel.。When the contract is deployed, FISCO BCOS automatically parses the mutually exclusive objects before executing the transaction, allowing the non-dependent transactions to be executed in parallel as much as possible at the same time.。
+FISCO BCOS provides a **parallelizable contract development framework**:developers only need to develop contracts according to the framework specification and define the mutex parameters of each contract interface to implement contracts that can be executed in parallel。When the contract is deployed, FISCO BCOS automatically parses the mutex objects before executing each transaction, allowing non-conflicting transactions to execute in parallel as much as possible。

-Currently, FISCO BCOS
provides two parallel contract development frameworks, solidity and precompiled contracts.。
+Currently, FISCO BCOS provides two parallel contract development frameworks: Solidity contracts and precompiled contracts。

### Parallel Framework for Solidity Contracts

-Write parallel solidity contracts, the development process is the same as the process of developing ordinary solidity contracts.。On this basis, simply use ParallelContract as the contract base class that requires parallelism and call registerParallelFunction(), register interfaces that can be parallel。
+Writing a parallel Solidity contract follows the same process as developing an ordinary Solidity contract。On top of that, simply use ParallelContract as the base class of the contract that requires parallelism, and call registerParallelFunction() to register the interfaces that can run in parallel。

-Give a complete example first.。The ParallelOk contract in the example implements the function of parallel transfer:
+Here is a complete example first。The ParallelOk contract in the example implements parallel transfers:

```
pragma solidity ^0.4.25;
@@ -49,7 +49,7 @@ contract ParallelOk is ParallelContract / / Using ParallelContract as the base c
 function transfer(string from, string to, uint256 num) public
 {
- / / Here is a simple example, please use SafeMath instead of direct addition and subtraction in actual production.
+ / / Here is a simple example, please use SafeMath instead of direct addition and subtraction in actual production
 _balance[from] -= num;
 _balance[to] += num;
 }
@@ -102,32 +102,32 @@ contract ParallelOk is ParallelContract / / Using ParallelContract as the base c
 }
```

-#### step2 Write a parallel contract interface.
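For intuition, the scheduling rule above (transactions whose mutex objects have an empty intersection may run in the same parallel round) can be sketched outside of FISCO BCOS; the snippet below is an illustrative Python model, not code from the node or the Web3SDK, and its greedy batching strategy is an assumption for demonstration only:

```python
# Illustrative model of mutex-object batching; NOT FISCO BCOS code.
# Each transaction carries the mutex object extracted from its parameters,
# e.g. transfer(A, B) -> {"A", "B"}. Transactions whose mutex objects are
# disjoint can be placed in the same parallel batch.
def batch_parallel(txs):
    """txs: list of (tx_name, mutex_objects); returns batches of tx names."""
    batches = []  # each entry: (mutex objects used by the batch, tx names)
    for name, objs in txs:
        objs = set(objs)
        for used, members in batches:
            if used.isdisjoint(objs):  # empty intersection -> parallelizable
                used |= objs           # in-place update of the batch's set
                members.append(name)
                break
        else:
            batches.append((set(objs), [name]))
    return [members for _, members in batches]

txs = [
    ("transfer(A,B)", ["A", "B"]),
    ("transfer(C,D)", ["C", "D"]),  # disjoint with transfer(A,B): same batch
    ("transfer(A,C)", ["A", "C"]),  # conflicts with both: next batch
]
print(batch_parallel(txs))
# [['transfer(A,B)', 'transfer(C,D)'], ['transfer(A,C)']]
```

A real node builds a dependency graph (DAG) of conflicting transactions rather than greedy batches, but the disjointness test is the same idea.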
+#### step2 Write a parallel contract interface

-The public function in the contract, which is the interface to the contract。To write a parallelizable contract interface is to implement the public function in a contract according to certain rules.。
+A contract's public functions are its interfaces。Writing a parallelizable contract interface means implementing the contract's public functions according to certain rules。

##### Determine whether an interface is parallelizable

A parallelizable contract interface must satisfy:

-- No call to external contract
-- No call to other function interface
+- No calls to external contracts
+- No calls to other function interfaces

##### Determine Mutex Parameters

-Before writing an interface, determine the mutex parameters of the interface, which is the mutex of global variables, and the rules for determining mutex parameters are.
+Before writing an interface, first determine its mutex parameters;interface mutual exclusion is essentially mutual exclusion over global variables。The rules for determining mutex parameters are:

-- The interface accesses global mapping. The mapping key is a mutually exclusive parameter.
-- The interface accesses the global array, and the subscript of the array is a mutually exclusive parameter
+- The interface accesses a global mapping:the key of the mapping is a mutex parameter
+- The interface accesses a global array:the subscript of the array is a mutex parameter
- The interface accesses global variables of simple types, all global variables of simple types share a mutually exclusive parameter, using different variable names as mutually exclusive objects

##### Determine parameter type and order

After determining the mutually exclusive parameters, determine the parameter type and order according to the rules.
The rules are as follows:

-- Interface parameters are limited to string, address, uint256, int256 (more types will be supported in the future)
-- Mutex parameters must all appear in interface parameters
-- All mutually exclusive parameters are arranged at the top of the interface parameters.
+- Interface parameters are limited to string, address, uint256 and int256 (more types will be supported in the future)
+- Mutex parameters must all appear in the interface parameters
+- All mutex parameters are arranged at the front of the interface parameters

```
mapping (string => uint256) _balance; / / Global mapping
@@ -154,7 +154,7 @@ Implementing enableParallel in a contract() function, calling registerParallelFu
 / / Register contract interfaces that can be parallel
 function enableParallel() public
 {
- / / Function definition string (note","There can be no spaces after), the first few parameters are mutually exclusive parameters.
+ / / Function definition string (note: there can be no space after ","), the leading parameters are the mutex parameters
 registerParallelFunction("transfer(string,string,uint256)", 2); / / transfer interface, the first two are mutually exclusive parameters
 registerParallelFunction("set(string,uint256)", 1); / / set interface, the first one is a mutually exclusive parameter
 }
@@ -205,7 +205,7 @@ An example of sending a large number of transactions with the SDK is given in th

### Parallel Framework for Precompiled Contracts

-Write parallel precompiled contracts, the development process is the same as the development of ordinary precompiled contracts.。Ordinary precompiled contracts use Precompile as the base class, on top of which the contract logic is implemented.。Based on this, Precompile's base class also provides two virtual functions for parallelism, which continue to be implemented to implement parallel precompiled contracts.。
+Writing a parallel precompiled contract follows the same process as developing an ordinary precompiled contract。An ordinary precompiled contract uses Precompile as its base class and implements the contract logic on top of it。In addition, the Precompile base class provides two virtual functions for parallelism;implementing them yields a parallel precompiled contract。

#### step1 Defines the contract to support parallelism

@@ -215,7 +215,7 @@ bool isParallelPrecompiled() override { return true; }

#### step2 Defines parallel interfaces and mutually exclusive parameters

-Note that once defined to support parallelism, all interfaces need to be defined。If null is returned, this interface does not have any mutex。The mutually exclusive parameters are related to the implementation of the precompiled contract, which involves an understanding of FISCO BCOS storage, and the specific implementation can be read directly from the code or ask the relevant experienced programmer.。
+Note that once a contract is defined to support parallelism, mutexes need to be defined for all of its interfaces。If null is returned, the interface has no mutex。The mutex parameters are tied to the implementation of the precompiled contract and involve an understanding of FISCO BCOS storage;for the specific implementation, read the code directly or ask an experienced programmer。

```
/ / According to the parallel interface, take out the mutex from the parameters and return the mutex
@@ -241,7 +241,7 @@ std::vector<std::string> getParallelTag(bytesConstRef param) override
 results.push_back(toUser);
 }
 }
- else if... / / All interfaces need to give a mutex, and the return is empty to indicate that there is no mutex.
+ else if...
/ / All interfaces need to give a mutex, and the return is empty to indicate that there is no mutex
 return results; / / return mutex
}
@@ -253,50 +253,50 @@ Method of compiling nodes manually, [refer to FISCO BCOS technical documentation

## Example: Parallel transfer

-Parallel examples of solidity contracts and precompiled contracts are given here.。
+Parallel examples of Solidity contracts and precompiled contracts are given here。

#### Configure Environment

The example requires the following execution environment:

- Web3SDK Client
-- A FISCO BCOS chain
+- One FISCO BCOS chain

If the maximum performance of pressure measurement is required, at least:

- 3 Web3SDKs to generate enough transactions
-- 4 nodes, and all Web3SDKs are configured with all the node information on the chain, so that transactions are evenly sent to each node, so that the link can receive enough transactions.
+- 4 nodes, with every Web3SDK configured with the information of all nodes on the chain, so that transactions are sent evenly to each node and the chain can receive enough transactions

### Parallel Solidity Contract: ParallelOk

-Transfers based on account models are a typical business operation。The ParallelOk contract is an example of an account model that enables parallel transfers.。The ParallelOk contract has been given above。
+Transfers based on account models are a typical business operation。The ParallelOk contract is an example of an account model that enables parallel transfers。The ParallelOk contract has been given above。

-FISCO BCOS has the ParallelOk contract built into the Web3SDK. Here is how to use the Web3SDK to send a large number of parallel transactions.。
+FISCO BCOS has the ParallelOk contract built into the Web3SDK.
Here is how to use the Web3SDK to send a large number of parallel transactions。

#### step1 Deploy contracts with SDK, create new users, and enable contract parallelism

```
-# Parameter: < groupID > add < number of users created > < TPS requested by this create operation > < generated user information file name >
+# Parameters: <groupID> add <number of users created> <TPS of this create operation> <generated user information file name>
java -cp conf/:lib/*:apps/* org.fisco.bcos.channel.test.parallel.parallelok.PerformanceDT 1 add 10000 2500 user
# 10,000 users are created on group1, the creation operation is sent with 2500TPS, and the generated user information is saved in user
```

-After the execution is successful, ParallelOk is deployed on the blockchain, and the created user information is saved in the user file, and the parallel capability of ParallelOk is enabled.。
+After the execution succeeds, ParallelOk is deployed on the blockchain, the created user information is saved in the user file, and the parallel capability of ParallelOk is enabled。

#### step2 Send parallel transfer transactions in batches

Note: Before sending in batches, please adjust the log level of the SDK to ERROR to have sufficient sending capacity。

```
-# Parameters: < groupID > transfer < total number of transactions > < TPS limit of this transfer operation request > < required user information file > < transaction mutual exclusion percentage: 0 ~ 10 >
+# Parameters: <groupID> transfer <total number of transactions> <TPS limit of this transfer operation> <required user information file> <transaction mutual exclusion percentage: 0 ~ 10>
java -cp conf/:lib/*:apps/* org.fisco.bcos.channel.test.parallel.parallelok.PerformanceDT 1 transfer 100000 4000 user 2

-# Sent 100,000 transactions to group1, the maximum TPS sent is 4000, using the user in the previously created user file.。
+# Sent 100,000 transactions to group1, the maximum TPS sent is 4000, using the users in the previously created user file。
```

#### step3 Verifying parallel correctness

-After the parallel transaction is executed, the Web3SDK prints the execution result。TPS is the TPS that the transaction sent by this SDK executes on the node。Validation is a check of the results of the
execution of the transfer transaction.。
+After the parallel transactions are executed, the Web3SDK prints the execution result。TPS is the TPS at which the transactions sent by this SDK execute on the node。Validation is a check of the execution results of the transfer transactions。

```
Total transactions: 100000
@@ -329,7 +329,7 @@ Calculate TPS from log file with script

```
cd tools
-sh get_tps.sh log/log_2019031821.00.log 21:26:24 21:26:59 # Parameters: < log file > < calculation start time > < calculation end time >
+sh get_tps.sh log/log_2019031821.00.log 21:26:24 21:26:59 # Parameters: <log file> <calculation start time> <calculation end time>
```

Get TPS (3 SDK, 4 nodes, 8 cores, 16G memory)

@@ -342,13 +342,13 @@ total transactions = 193332, execute_time = 34580ms, tps = 5590 (tx/s)

### Parallel precompiled contract: DagTransferPrecompiled

-Like the ParallelOk contract, FISCO BCOS has a built-in example of a parallel precompiled contract (DagTransferPrecompiled) that implements a simple account model-based transfer function.。The contract can manage the deposits of multiple users and provides a parallel transfer interface for parallel processing of transfer operations between users.。
+Like the ParallelOk contract, FISCO BCOS has a built-in example of a parallel precompiled contract (DagTransferPrecompiled) that implements a simple account-model-based transfer function。The contract can manage the deposits of multiple users and provides a parallel transfer interface for parallel processing of transfer operations between users。

-**Note: DagTransferPrecompiled is used as an example only and should not be used directly in the production environment.。**
+**Note: DagTransferPrecompiled is used as an example only and should not be used directly in the production environment。**

#### step1 Generate User

-Use the Web3SDK to send the operation of creating a user, and save the created user information in the user file。The command parameters are the same as parallelOk, except that the object called by the command is precompile.。
+Use the Web3SDK to send the user-creation operation and save the created user information in the user file。The command parameters are the same as for ParallelOk, except that the object called by the command is the precompiled contract。

```
java -cp conf/:lib/*:apps/* org.fisco.bcos.channel.test.parallel.precompile.PerformanceDT 1 add 10000 2500 user
@@ -358,7 +358,7 @@ java -cp conf/:lib/*:apps/* org.fisco.bcos.channel.test.parallel.precompile.Perf

Send parallel transfer transactions with Web3SDK。

-**Note: Before sending in batches, adjust the log level of the SDK to ERROR to ensure sufficient sending capability.。**
+**Note: Before sending in batches, adjust the log level of the SDK to ERROR to ensure sufficient sending capability。**

```
java -cp conf/:lib/*:apps/* org.fisco.bcos.channel.test.parallel.precompile.PerformanceDT 1 transfer 100000 4000 user 2
@@ -366,7 +366,7 @@ java -cp conf/:lib/*:apps/* org.fisco.bcos.channel.test.parallel.precompile.Perf

#### step3 Verifying parallel correctness

-After the parallel transaction is executed, the Web3SDK prints the execution result。TPS is the TPS that the transaction sent by this SDK executes on the node。Validation is a check of the results of the execution of the transfer transaction.。
+After the parallel transactions are executed, the Web3SDK prints the execution result。TPS is the TPS at which the transactions sent by this SDK execute on the node。Validation is a check of the execution results of the transfer transactions。

```
Total transactions: 80000
@@ -399,7 +399,7 @@ Calculate TPS from log file with script

```
cd tools
-sh get_tps.sh log/log_2019031311.17.log 11:25 11:30 # Parameters: < log file > < calculation start time > < calculation end time >
+sh get_tps.sh log/log_2019031311.17.log 11:25 11:30 # Parameters: <log file> <calculation start time> <calculation end time>
```

Get TPS (3 SDK, 4 nodes, 8 cores, 16G memory)

@@ -412,6 +412,6 @@ total transactions = 3340000, execute_time = 298945ms, tps = 11172 (tx/s)

## Result description

-The performance results in this example are
measured under 3SDK, 4 nodes, 8 cores, 16G memory, and 1G network.。Each SDK and node are deployed in a different VPS.。Actual TPS will vary based on your hardware configuration, operating system, and network bandwidth。
+The performance results in this example are measured with 3 SDKs, 4 nodes, 8 cores, 16G memory, and a 1G network。Each SDK and node is deployed on a different VPS。Actual TPS will vary based on your hardware configuration, operating system, and network bandwidth。

-**If you encounter obstacles or need to consult during the deployment process, you can enter the FISCO BCOS official technical exchange group for answers.。**(into the group, please long press the two-dimensional code below to identify the small assistant)
\ No newline at end of file
+**If you encounter obstacles or have questions during deployment, you can join the FISCO BCOS official technical exchange group for answers。**(to join the group, long-press the QR code below to add the community assistant)
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/31_performance/parallel_transformation.md b/3.x/en/docs/articles/3_features/31_performance/parallel_transformation.md
index 45b9769d4..07b3a4998 100644
--- a/3.x/en/docs/articles/3_features/31_performance/parallel_transformation.md
+++ b/3.x/en/docs/articles/3_features/31_performance/parallel_transformation.md
@@ -4,55 +4,55 @@

Author: Li Chen Xi | FISCO BCOS Core Developer

## Background

-The introduction of PTE (Parallel Transaction Executor, a parallel transaction executor based on the DAG model) gives FISCO BCOS the ability to execute transactions in parallel, significantly improving the efficiency of node transaction processing.。We are not satisfied with this stage result, and continue to dig deeper and find that the overall TPS of FISCO BCOS still has a lot of room for improvement.。 To use a barrel as an analogy: if all the modules of the transaction processing of the participating nodes
constitute a barrel, the transaction execution is just a piece of wood that makes up the entire barrel, and according to the short board theory, how much water a barrel can hold depends on the shortest piece on the barrel wall, by the same token.**FISCO BCOS performance is also determined by the slowest components**。 Despite the theoretically high performance capacity achieved by PTE, the overall performance of FISCO BCOS is still constrained by the slower transaction processing speeds of other modules。**In order to maximize the use of computing resources to further improve transaction processing capabilities, it is imperative to fully advance the parallelization transformation in FISCO BCOS。**
+The introduction of PTE (Parallel Transaction Executor, a parallel transaction executor based on the DAG model) gives FISCO BCOS the ability to execute transactions in parallel, significantly improving the efficiency of node transaction processing。We were not satisfied with this staged result;digging deeper, we found that the overall TPS of FISCO BCOS still had plenty of room for improvement。 To use a barrel as an analogy: if all the modules involved in a node's transaction processing form a barrel, then transaction execution is just one stave of that barrel。By the theory of the shortest stave, how much water a barrel can hold depends on the shortest stave in its wall;by the same token, **FISCO BCOS performance is also determined by its slowest component**。 Despite the theoretically high performance achieved by PTE, the overall performance of FISCO BCOS is still constrained by the slower transaction processing speed of other modules。**In order to make the most of computing resources and further improve transaction processing capability, it is imperative to fully advance the parallelization transformation of FISCO BCOS。**

## Data analysis

-According to the four-step principle of "analysis →
decomposition → design → verification" of parallel programming, it is first necessary to locate the precise location of the performance bottlenecks that still exist in the system in order to decompose the tasks more deeply and design the corresponding parallelization strategy.。**Using top-down analysis, we divide the transaction processing process into four modules for performance analysis**The four modules are:
+According to the four-step principle of parallel programming, "analysis → decomposition → design → verification", it is first necessary to pinpoint the performance bottlenecks that remain in the system in order to decompose the tasks further and design the corresponding parallelization strategy。**Using top-down analysis, we divide the transaction processing flow into four modules for performance analysis**。The four modules are:

-**Block decoding (decode)**: Blocks need to be sent from one node to another during consensus or synchronization between nodes. In this process, blocks are transmitted between networks in the form of RLP encoding。After the node receives the block encoding, it needs to decode it and restore the block to a binary object in memory before further processing.。
+**Block decoding (decode)**: Blocks need to be sent from one node to another during consensus or synchronization。In this process, blocks are transmitted across the network in the form of RLP encoding。After a node receives the block encoding, it needs to decode it and restore the block to a binary object in memory before further processing。

**Transaction verification (verify)**: A transaction is signed by its sender before being sent;the signature data consists of three parts, (v, r, s)。The main task of verification is, on receiving or executing a transaction, to recover the sender's public key from the (v, r, s) data in order to verify the sender's identity。

**Transaction execution (execute)**: Execute all transactions in the block and update the blockchain state。

-**Data drop (commit)**: After the block is executed, the block and related data need to be written to the disk for persistent storage.。
+**Data flushing (commit)**: After the block is executed, the block and related data need to be written to disk for persistent storage。

-Using a block containing 2,500 pre-compiled transfer contract transactions as the test object, the average time-consuming distribution of each phase in our test environment is shown in the following figure.
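To make the module breakdown concrete before looking at the measured figure, here is a small sketch with made-up stage timings (illustrative numbers only, not the measurements from this article), showing how the slowest stage, rather than execution, bounds block processing:

```python
# Made-up per-block stage timings in milliseconds; illustrative only,
# NOT the measured values from this article's test environment.
stages = {"decode": 100, "verify": 150, "execute": 50, "commit": 80}

total = sum(stages.values())
for name, ms in stages.items():
    print(f"{name}: {ms} ms ({ms / total:.0%} of block processing)")

# When execution is fast, the other stages dominate the block-processing
# time, so overall TPS is bounded by the slowest remaining stage:
bottleneck = max(stages, key=stages.get)
print("bottleneck:", bottleneck)
```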
+Using a block containing 2,500 precompiled transfer contract transactions as the test object, the average time distribution of each phase in our test environment is shown in the following figure:

![](../../../../images/articles/parallel_transformation/IMG_5182.JPG)

-As can be seen from the figure, the execution time of 2500 trades has been reduced to less than 50 milliseconds, which proves that PTE's optimization of the FISCO BCOS trade execution phase is effective.。However, the chart also reveals a very obvious problem: the time taken at other stages is much higher than the time taken for trade execution, resulting in the performance advantage of trade execution being severely offset and the PTE not being able to deliver its due value.。
+As can be seen from the figure, the execution time of the 2,500 transactions has been reduced to under 50 milliseconds, which proves that PTE's optimization of the FISCO BCOS transaction execution phase is effective。However, the chart also reveals a very obvious problem: the time taken by the other stages is much higher than that of transaction execution, so the performance advantage of transaction execution is severely offset and PTE cannot deliver its due value。

-As early as 1967, the law named after him by Amdahl, a veteran of computer architecture, has explained to us the rule of thumb for measuring the efficiency gains of processors after parallel computing.
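Amdahl's law, discussed next, can also be checked numerically。The sketch below uses SpeedUp = 1 / (Ws + Wp / N) with Ws + Wp = 1; the 10% serial fraction is an illustrative value, not a measurement from this article:

```python
# Amdahl's law: SpeedUp = 1 / (Ws + Wp / N), with Ws + Wp = 1.
# The 10% serial fraction below is an illustrative value only.
def speedup(serial_fraction, cpus):
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cpus)

# Even with very many CPUs, a 10% serial component caps the speedup near 10x:
for n in (2, 4, 8, 1000):
    print(n, round(speedup(0.1, n), 2))
```

This is why shrinking the serial portion matters more than adding CPUs once N is large.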
+As early as 1967, Amdahl's law, named after the computer architecture veteran Gene Amdahl, gave us the rule of thumb for measuring the speedup a program gains from parallel computing:

![](../../../../images/articles/parallel_transformation/IMG_5183.PNG)

-where SpeedUp is the speedup, Ws is the serial component of the program, Wp is the parallel component in the program, and N is the number of CPUs。It can be seen that in the case of a constant total amount of work, the more parallel parts of the code, the higher the overall performance of the system.。We need to take our thinking out of the linear model, continue to subdivide the entire processing flow, identify the program hotspots with the longest execution time, and parallelize these code segments to break all the bottlenecks one by one, which is the best way to maximize performance gains through parallelization.。
+where SpeedUp is the speedup, Ws is the serial fraction of the program, Wp is the parallel fraction of the program, and N is the number of CPUs。It can be seen that, for a constant total amount of work, the larger the parallel portion of the code, the higher the overall performance of the system。We need to move beyond the linear model, keep subdividing the entire processing flow, identify the program hotspots with the longest execution times, and parallelize those code segments to break the bottlenecks one by one;this is how parallelization yields the greatest performance gain。

## Root cause dismantling

### 1.
Serial block decoding

-The main performance problem of block decoding lies in the RLP coding method itself.。The full name of RLP is recursive length prefix coding, which is a coding method that uses length as a prefix to indicate the number of elements in the encoded object.。As shown in the following figure, the beginning of the RLP code is the number of objects in the code (Object num).。After the number, is the corresponding number of objects (Object)。Recursively, each object is also RLP encoded, and its format is also the same as the figure below。
+The main performance problem of block decoding lies in the RLP encoding method itself。RLP, short for Recursive Length Prefix, is an encoding method that uses a length prefix to indicate the number of elements in the encoded object。As shown in the following figure, the RLP encoding begins with the number of objects it contains (Object num)。After the number come the corresponding objects (Object)。Recursively, each object is itself RLP encoded, in the same format as the figure below。

-It is important to note that in RLP coding。The byte size of each object is not fixed. Object num only indicates the number of objects and does not indicate the byte length of an object.。
+It is important to note that, in RLP encoding, the byte size of each object is not fixed:Object num only indicates the number of objects, not the byte length of an object。

![](../../../../images/articles/parallel_transformation/IMG_5184.JPG)

-RLP can theoretically encode any number of objects by combining a length prefix with recursion.。The following figure shows the RLP encoding of a block. When encoding a block, it is recursive to the bottom layer to encode multiple sealers.
After the multiple sealers are encoded and the length prefix is added, the encoding becomes a string of RLP encodings (sealerList).。This is followed by layer-by-layer recursion and the final encoding becomes the RLP encoding of the block.。Because RLP encoding is recursive, the length after encoding cannot be known before encoding。
+RLP can theoretically encode any number of objects by combining a length prefix with recursion. The following figure shows the RLP encoding of a block. When encoding a block, the recursion goes down to the bottom layer to encode the individual sealers. Once the sealers are encoded and a length prefix is added, they become a single RLP encoding (sealerList). The recursion then proceeds layer by layer until the result is the RLP encoding of the whole block. Because RLP encoding is recursive, the encoded length cannot be known before encoding.

![](../../../../images/articles/parallel_transformation/IMG_5185.JPG)

-When decoding, because the length of each object in RLP encoding is uncertain, and RLP encoding only records the number of objects, not the byte length of the object, to obtain one of the encoded objects, you must recursively decode all the objects in its preamble, after decoding the preamble of the object, you can access the byte position of the encoded object that needs to be accessed.。For example, in the above figure, if you need to access the 0th transaction in the block, that is, tx0, you must first decode the blockHeader, and the decoding of the blockHeader needs to be recursive again, decoding the parentHash, stateRoot, and even the sealerList.。
+When decoding, because the length of each object in an RLP encoding is uncertain, and the encoding records only the number of objects, not their byte lengths, obtaining any one encoded object requires recursively decoding every object that precedes it; only after decoding this preamble can you reach the byte position of the object you actually need. For example, in the figure above, to access the 0th transaction in the block (tx0), you must first decode the blockHeader, and decoding the blockHeader in turn requires recursively decoding the parentHash, the stateRoot, and even the sealerList.

-The most important purpose of decoding a block is to decode the transactions contained in the block, and the codes of the transactions are independent of each other, but under the special coding method of RLP, the necessary condition for decoding a transaction is to decode the previous transaction, and the decoding tasks of the transaction are interlinked, forming a chain of dependencies.。It should be pointed out that this decoding method is not a defect of RLP, one of the design goals of RLP is to minimize the space occupation, make full use of each byte, although the codec has become less efficient, but the compactness of the encoding is obvious to all, so this encoding is essentially a time-for-space trade-off.。
+The main purpose of decoding a block is to decode the transactions it contains, and the transactions' encodings are independent of one another. Under RLP's particular encoding scheme, however, decoding a transaction requires first decoding the previous one, so the decoding tasks are chained into a sequence of dependencies. It should be pointed out that this is not a defect of RLP: one of RLP's design goals is to minimize space and make full use of every byte. Although encoding and decoding become less efficient, the compactness of the encoding speaks for itself, so RLP is essentially a time-for-space trade-off.

-Due to historical reasons, RLP coding is used in FISCO BCOS as a multi-site information exchange protocol, and the rush to switch to other parallelization-friendly serialization schemes may result in a greater development burden.。Based on this
consideration, we decided to slightly modify the original RLP codec scheme, by adding additional position offset information for each encoded element, we can decode the RLP in parallel without changing a lot of the original code.。
+For historical reasons, FISCO BCOS uses RLP as its protocol for information exchanged between parties, and rushing to switch to another, parallelization-friendly serialization scheme could impose a heavy development burden. With this in mind, we decided to modify the original RLP codec only slightly: by adding position-offset information for each encoded element, we can decode RLP in parallel without rewriting much of the existing code.

### 2. Transaction verification & high cost of data placement

-By breaking down the code for the trade check and data drop sections, we found that the main functions of both are concentrated in a time-consuming for loop。Transaction validation is responsible for taking out transactions in sequence and then from the signature data of the transaction.(v, r, s)data and restore the public key of the transaction sender from it, where the step of restoring the public key is time-consuming due to the cryptographic algorithm involved;The data drop disk is responsible for taking out the transaction-related data from the cache one by one, encoding it into a JSON string and writing it to disk, which is also a disaster area for performance loss due to the low efficiency of the JSON encoding process itself.。
+By breaking down the code of the transaction verification and data-persistence sections, we found that the main work of both is concentrated in a time-consuming for loop. Transaction verification takes the transactions out in sequence and recovers each sender's public key from the transaction's signature data (v, r, s); the public-key recovery step is expensive because of the cryptographic algorithm involved. Data persistence takes the transaction-related data out of the cache one item at a time, encodes it into a JSON string, and writes it to disk; the inefficiency of JSON encoding itself makes this another hotspot of performance loss.

The two codes are as follows:

@@ -78,21 +78,21 @@ for(int i = 0; i < datas.size(); ++i)
 }
 ```

-The common feature of both processes is that they both apply the same operations to different parts of the data structure, and for this type of problem, you can directly use data-level parallelism for transformation.。The so-called data-level parallelism, that is, the data as a partition object, by dividing the data into fragments of approximately equal size, by operating on different data fragments on multiple threads, to achieve the purpose of parallel processing of data sets.。
+The common feature of both processes is that they apply the same operation to different parts of a data structure, and this type of problem can be transformed directly with data-level parallelism. Data-level parallelism treats the data as the unit of partitioning: the data is divided into fragments of roughly equal size, and multiple threads operate on different fragments, processing the data set in parallel.

-The only additional requirement for data-level parallelism is that the tasks are independent of each other, and there is no doubt that in the FISCO BCOS implementation, both transaction validation and data drop meet this requirement.。
+The only additional requirement of data-level parallelism is that the tasks be independent of one another, and in the FISCO BCOS implementation both transaction verification and data persistence clearly meet this requirement.

## optimization practice

### 1. Block decoding parallelization

-During the transformation, we added an offset field to the common RLP encoding used in the system to index the location of each Object.。As shown in the following figure, the beginning of the modified encoding format is still the number of objects (Object num), but after the number field, it is an array (Offsets) that records the offset of the object.。
+During the transformation, we added an offset field to the common RLP encoding used in the system to index the location of each Object. As shown in the following figure, the modified encoding still begins with the number of objects (Object num), but the number field is now followed by an array (Offsets) that records each object's offset.

![](../../../../images/articles/parallel_transformation/IMG_5186.JPG)

-Each element in the array has a fixed length。Therefore, to read the value of an Offset, you only need to access the array, according to the serial number of the Offset direct index can be randomly accessed.。After Offsets, is a list of objects that are the same as the RLP encoding。Offset of the corresponding ordinal, pointing to the RLP-encoded byte position of the object of the corresponding ordinal。Therefore, to decode an object arbitrarily, you only need to find its offset based on the object's serial number, and then locate the RLP encoded byte position of the corresponding object based on the offset.。
+Each element of the array has a fixed length, so an Offset value can be read by indexing the array directly with its ordinal; access is random rather than sequential. After Offsets comes the list of objects, identical to plain RLP encoding, and the Offset with a given ordinal points to the byte position of the RLP encoding of the object with that ordinal. Therefore, to decode any object, you only need to look up its offset by the object's ordinal, and then locate the RLP encoded byte position of the
corresponding object based on the offset.

-The coding process has also been redesigned。The process itself is still based on the idea of recursion. For the input object array, first encode the size of the object array at the beginning of the output encoding. If the array size exceeds 1, take out the objects to be encoded one by one and cache their recursive encoding, and record the offset position of the object in the Offsets array. After the array is traversed, take out the cached object encoding for the first time and append it to the output encoding.;If the array size is 1, it is recursively encoded and written to the end of the output encoding, ending the recursion。
+The encoding process has also been redesigned, though it is still recursive. For an input object array, first write the size of the array at the beginning of the output encoding. If the array size exceeds 1, take out the objects one by one, cache their recursive encodings, and record each object's offset in the Offsets array; once the array has been traversed, append the cached object encodings to the output in order. If the array size is 1, the object is recursively encoded and written to the end of the output, ending the recursion.

**The pseudocode for the coding process is as follows:**

@@ -127,7 +127,7 @@ void encode(objs) //Input: objs = array of objects to be encoded
 }
 ```

-The introduction of offsets enables the decoding module to have random access to the element encoding。The array range of Offsets can be spread evenly across multiple threads, so that each thread can access different parts of the object array in parallel and decode them separately。Because it is read-only access, this parallel approach is thread-safe and only needs to summarize the output at the end.。
+The introduction of offsets gives the decoding module random access to each element's encoding. The range of the Offsets array can be split evenly across multiple threads, so that each thread accesses a different part of the object array and decodes it independently. Because the access is read-only, this parallel approach is thread-safe; the outputs only need to be gathered at the end.

**The pseudo-code for the decoding process is as follows:**

@@ -156,11 +156,11 @@ Objs decode(RLP Rlps)

### 2. Transaction Verification & Parallelization of Data Drop

-For data-level parallelism, there are a variety of mature multithreaded programming models in the industry.。While explicit multithreaded programming models such as Pthread can provide more granular control over threads, they require us to have skillful mastery of thread communication and synchronization.。The higher the complexity of the implementation, the greater the chance of making mistakes, and the more difficult it will be to maintain the code in the future.。Our main goal is to parallelize only intensive loops, so Keep It Simple & Stupid is our coding principle, so we use an implicit programming model to achieve our goal。
+For data-level parallelism, the industry offers a variety of mature multithreaded programming models. Explicit models such as Pthreads give finer-grained control over threads, but they demand real mastery of thread communication and synchronization: the more complex the implementation, the greater the chance of mistakes and the harder the code is to maintain later. Our goal is only to parallelize a few intensive loops, so Keep It Simple & Stupid is our coding principle, and we use an implicit programming model instead.

-After repeated trade-offs, we have chosen the Thread Building Blocks (TBB) open source library from Intel among the many implicit multithreaded programming models on the market.。In terms of data-level parallelism, TBB is a veteran, and the TBB runtime system not only masks the implementation details of the underlying worker threads, but also automatically balances workloads between processors based on the amount of tasks, thus making full use of the underlying CPU resources.。
+After weighing the trade-offs, among the many implicit multithreaded programming models on the market we chose Intel's open-source Threading Building Blocks (TBB) library. In terms of
data-level parallelism, TBB is a veteran: the TBB runtime system not only hides the implementation details of the underlying worker threads, but also automatically balances the workload across processors according to the number of tasks, making full use of the underlying CPU resources.

-**With TBB, the code for transaction validation and data drop is as follows.**
+**With TBB, the code for transaction verification and data persistence is as follows:**

```
// Parallel transaction verification
@@ -192,13 +192,13 @@ tbb::parallel_for(tbb::blocked_range(0, transactions.size()),
 });
 ```

-As you can see, in addition to using the tbb provided by the TBB::parallel _ for parallel loop and tbb::The code inside the loop body is almost unchanged outside the blocked _ range reference data shard, close to C.++Native syntax is exactly what makes TBB。TBB provides parallel interfaces with a high level of abstraction, such as generic parallel algorithms such as parallel _ for and parallel _ for _ each, which makes the transformation easier.。At the same time, TBB does not depend on any language or compiler, as long as it can support ISO C.++Standard compiler, there is TBB use。
+As you can see, apart from the TBB-provided tbb::parallel_for parallel loop and the tbb::blocked_range data shards it iterates over, the code inside the loop body is almost unchanged; staying this close to native C++ syntax is exactly TBB's appeal. TBB offers parallel interfaces at a high level of abstraction, such as the generic parallel algorithms parallel_for and parallel_for_each, which makes the transformation easier. At the same time, TBB is not tied to any particular language or compiler: wherever there is a compiler supporting the ISO C++ standard, TBB can be used.

Of course, using TBB is not entirely free of extra burden; thread safety, for instance, still has to be analyzed carefully by the developer. But TBB thoughtfully provides a set of convenient tools for handling mutual exclusion between threads, such as atomic variables, thread-local storage, and parallel containers. These parallel tools are also used widely in FISCO BCOS, safeguarding its stable operation.

#### Write at the end

-After a set of parallel optimization of the combination of fist, FISCO BCOS performance to a higher level。The results of the stress test show that the transaction processing capacity of FISCO BCOS has been successfully improved by 1.74 times compared to before the parallel transformation, basically achieving the expected effect of this link.。
+With this combination of parallel optimizations, FISCO BCOS performance has reached a new level. Stress-test results show that the transaction processing capacity of FISCO BCOS improved by a factor of 1.74 compared with before the parallel transformation, essentially achieving the effect expected of this stage.

-But we also deeply understand that the road to performance optimization is long, the shortest board of the barrel always alternates, the parallel way is, through repeated analysis, disassembly, quantification and optimization, so that the modules work together, the whole system to achieve an elegant balance, and the optimal solution is always in the "jump" to get the place.。
+But we also understand that the road to performance optimization is long, and the shortest stave of the barrel keeps changing. The way of parallelization is, through repeated analysis, decomposition, quantification, and optimization, to make the modules work together so that the whole system reaches an elegant balance, with the optimal solution always found one "jump" away.

diff --git a/3.x/en/docs/articles/3_features/31_performance/performance_optimization.md b/3.x/en/docs/articles/3_features/31_performance/performance_optimization.md
index 25055b9ce..3e067a62a 100644
--- a/3.x/en/docs/articles/3_features/31_performance/performance_optimization.md
+++ b/3.x/en/docs/articles/3_features/31_performance/performance_optimization.md
@@ -4,76 +4,76 @@

Author : SHI Xiang | FISCO BCOS Core Developer

-The last article said that the speed dilemma of the blockchain is "expensive" in trust, "slow" in the final analysis, the root cause is still in its "computing for trust" design ideas.。The industry generally praised the blockchain as a machine of trust, in order to achieve trust, the blockchain has to do a lot of complex and cumbersome operations, synchronization, verification, execution, consensus, etc., are essential links in the blockchain.。
+The last article explained that the blockchain's speed dilemma is that trust is "expensive" and everything, in the end, is "slow"; the root cause lies in its "computing for trust" design philosophy. The industry praises the blockchain as a machine of trust, but to achieve that trust the blockchain must perform many complex and cumbersome operations: synchronization, verification, execution, consensus, and so on are all indispensable links.

This is like the "traffic regulations" when driving, always telling us developers: for safety, please drive at the specified speed! However, the community still has a common voice: it is really too slow!
-So, can we upgrade this trusted machine to make it safe and fast??Through the team's in-depth exploration and practice, we have opened up a number of ways to the era of extreme speed.。Looking back at the whole process, it's like building a car with outstanding performance。
+So, can we upgrade this trust machine to make it both safe and fast? Through the team's in-depth exploration and practice, we have opened up a number of roads into the era of extreme speed. Looking back, the whole process is like building a car of outstanding performance.

- High-power engine: **DAG-based parallel execution engine for transactions**
- Fuel delivery unit: **distributed storage**
- Front and rear seats: **Process optimization for consensus and synchronization**
- Transmission: **omni-directional parallel processing**
- Hydrogen fuel: **Precompiled Contracts**
-- Monitoring instrument**Comprehensive performance analysis tools**
+- Monitoring instruments: **Comprehensive performance analysis tools**
- Exclusive steering wheel: **parallelizable contract development framework**

### High-power engine: DAG-based transaction parallel execution engine - extreme effort to make transactions execute in parallel

-Traditional transaction execution engines, which execute transactions in a serial manner, can only be executed one by one.。No matter how many transactions there are in a block, they need to be executed one by one.。This is like a low-power engine. Even if it is equipped with a giant fuel tank, it still cannot output powerful power.。One cylinder is not enough, change to 4 cylinders, 8 cylinders, ok?
+Traditional transaction execution engines execute transactions serially, one by one. No matter how many transactions a block contains, they must be executed in sequence. This is like a low-power engine: even with a giant fuel tank, it still cannot deliver strong power. If one cylinder is not enough, how about switching to 4 cylinders, or 8?
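The multi-cylinder idea above, spreading independent transactions across all available cores instead of running them one by one, can be sketched in plain standard C++ as follows. This is an illustrative sketch only, not FISCO BCOS code; the names `Transaction` and `executeAll` are hypothetical.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical sketch: independent "transactions" are spread across all
// hardware cores, replacing the serial one-by-one loop of a traditional engine.
struct Transaction {
    int input = 0;
    int result = 0;
};

void executeAll(std::vector<Transaction>& txs) {
    const std::size_t workers =
        std::max<std::size_t>(1, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (std::size_t w = 0; w < workers; ++w) {
        // Each worker handles a strided slice; the transactions are
        // independent, so no locking is needed when writing the results.
        pool.emplace_back([&txs, w, workers] {
            for (std::size_t i = w; i < txs.size(); i += workers) {
                txs[i].result = txs[i].input * 2;  // stand-in for real execution
            }
        });
    }
    for (auto& t : pool) t.join();
}
```

The sketch only shows the shape of data-level parallelism; the real engine must additionally guarantee that the parallel result equals the serial one, which is what the DAG described next is for.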
-FISCO BCOS implements a transaction parallel execution engine (PTE) that allows multiple transactions within a block to be executed simultaneously。If the machine has 4 cores, it can support the simultaneous execution of 4 transactions, and if it has 8 cores, it can support the simultaneous execution of 8 transactions.。Of course, under the control of "traffic regulations," the correctness of parallel execution needs to be guaranteed, that is, the results of parallel execution and serial execution need to be consistent.。In order to ensure the consistency of parallel execution, the transaction parallel execution engine (PTE) of FISCO BCOS introduces the data structure DAG (directed acyclic graph).。
+FISCO BCOS implements a parallel transaction execution engine (PTE) that allows multiple transactions within a block to be executed simultaneously. If the machine has 4 cores, 4 transactions can be executed at once; with 8 cores, 8 transactions. Of course, under the "traffic regulations" the correctness of parallel execution must be guaranteed: the result of parallel execution must be identical to that of serial execution. To guarantee this consistency, the PTE introduces the DAG (directed acyclic graph) data structure.

-Before executing transactions in a block, the execution engine automatically builds dependencies between transactions based on their mutual exclusions.。This dependency is a DAG that allows parallelizable transactions to be executed in parallel when the engine executes.。In this way, the consistency of transaction execution is guaranteed and the throughput of transaction execution is increased by orders of magnitude.。
+Before executing the transactions in a block, the execution engine automatically builds the dependencies between transactions from their mutual exclusions. These dependencies form a DAG that lets the engine run parallelizable transactions in parallel. In this way, the consistency of transaction execution is guaranteed while the throughput of transaction execution grows by orders of magnitude.

### Fuel delivery device: distributed storage - enough fuel for the engine

-The traditional blockchain storage model is a towering MPT tree。All the data on the blockchain is gathered on this tree。Every write or read of data is a long journey from branch to root (or from root to branch)。As more and more data is on the chain and the tree gets taller, the distance from the branch to the root becomes longer and longer。What is more troublesome is that although there are many branches, there is only one root.。The writing or reading of massive amounts of data on the chain is as tragic as a thousand troops grabbing a single-plank bridge, and the extent of the tragedy can be imagined.。So the traditional blockchain, choose one by one, one data to read, one transaction to execute。Figuratively speaking, it's a pipeline that delivers fuel to the engine。
+The traditional blockchain storage model is a towering MPT tree. All the data on the blockchain gathers on this one tree, and every read or write of data is a long journey from branch to root (or from root to branch). As more and more data goes on chain and the tree grows taller, the distance from branch to root keeps lengthening. Worse still, although there are many branches, there is only one root, so massive reads and writes of on-chain data are as tragic as a thousand troops fighting over a single-plank bridge; the extent of the tragedy can be imagined. Hence the traditional blockchain chooses to go one by one: read one piece of data, execute one transaction. Figuratively speaking, it is a single pipeline delivering fuel to the engine.

-This will definitely not work!We need multiple pipelines to deliver fuel to the engine.!This time, FISCO BCOS is not
crudely connecting multiple pipelines (MPT trees) to the engine because it is too slow to transport oil with pipelines (storing data with MPT)。We'll just ditch the oil pipeline and just soak the engine in the tank.!This analogy may not be appropriate, but understanding the execution engine and storage design of FISCO BCOS, I believe you will have the same feelings as me.。
+This will definitely not work! We need multiple pipelines delivering fuel to the engine! And this time FISCO BCOS does not crudely connect multiple pipelines (MPT trees) to the engine, because transporting oil through pipelines (storing data in an MPT) is simply too slow. We ditch the pipeline altogether and soak the engine directly in the fuel tank! The analogy may be imperfect, but once you understand the execution engine and storage design of FISCO BCOS, I believe you will feel the same way.

-We abandon the MPT tree and organize the data in a "table" way.。Execute the engine to read and write data, no longer need to MPT tree root to branch traversal, directly read and write on the "table"。In this way, the reading and writing of each piece of data does not depend on a global operation and can be done separately and independently.。This provides the basis for concurrent data reading and writing for the transaction parallel execution engine (PTE). Similar to an engine soaked in a fuel tank, gasoline flows directly into the cylinder, and no one shares whose fuel pipe。
+We abandon the MPT tree and organize data as "tables". To read or write data, the execution engine no longer traverses the MPT from root to branch; it reads and writes the "table" directly. The reading or writing of each piece of data thus no longer depends on a global operation and can be done separately and independently, which gives the parallel transaction execution engine (PTE) a basis for concurrent data access. It is like an engine soaked in the fuel tank: gasoline flows straight into each cylinder, and no cylinder shares a fuel line with another.

For detailed analysis of distributed storage, please click: [Distributed Storage Architecture Design](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485336&idx=1&sn=ea3a7119634c1c27daa4ec2b9a9f278b&chksm=9f2ef584a8597c9288f8c5000c7def47c3c5b9dc64f25221985cd9e3743b9364a93933e51833&token=942411972&lang=zh_CN#rd)

### Front and back seats: consensus and synchronization process optimization - no egalitarianism, let the first rich lead the rest

-In blockchain nodes, the synchronization module and the consensus module are inseparable twins, sometimes helping each other and sometimes fighting for resources.。In previous designs, there was no prioritization between the synchronization module and the consensus module。It's like riding in a car. There are no rules about who sits in the front row and who sits in the back row. As a result, the twins often waste a lot of time fighting for the order.。
+In blockchain nodes, the synchronization module and the consensus module are inseparable twins, sometimes helping each other and sometimes fighting over resources. In previous designs there was no prioritization between them. It is like riding in a car with no rule about who sits in the front row and who sits in the back: the twins often waste a lot of time fighting over the order.

Start from reality: let the first rich lead the rest to riches!
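One of the optimizations described in this section, verifying and decoding each transaction only once and caching the result, can be sketched as follows. This is an illustrative sketch under assumed names (`TxPool`, `checkOnce`), not a FISCO BCOS interface.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical sketch: the verdict of the expensive signature check is cached
// by transaction hash, so a transaction seen again via another path (e.g. both
// sync and consensus) is not verified or decoded a second time.
class TxPool {
public:
    explicit TxPool(std::function<bool(const std::string&)> verifier)
        : verify_(std::move(verifier)) {}

    // Verify on first sight; afterwards return the cached verdict.
    bool checkOnce(const std::string& txHash, const std::string& rawTx) {
        auto it = cache_.find(txHash);
        if (it != cache_.end()) return it->second;  // cache hit: skip the crypto
        bool ok = verify_(rawTx);                   // expensive path, run once
        cache_.emplace(txHash, ok);
        return ok;
    }

    std::size_t verifiedCount() const { return cache_.size(); }

private:
    std::function<bool(const std::string&)> verify_;
    std::unordered_map<std::string, bool> cache_;
};
```

The design choice mirrors the text: the CPU freed by skipping repeated verification goes to block execution in the consensus module.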
-The consensus module is responsible for dominating the rhythm of the entire blockchain, and the consensus module should be allowed to go first.。The synchronization module, on the other hand, should play a good role in coordination, assisting the consensus module to come out faster.。Based on this idea, FISCO BCOS optimizes the process of consensus and synchronization:
+The consensus module sets the rhythm of the whole blockchain, so the consensus module should go first. The synchronization module should play a supporting role, helping the consensus module move faster. Based on this idea, FISCO BCOS optimizes the consensus and synchronization processes:

-- First, the transaction verification operation in the synchronization module is stripped from the P2P callback thread, so that the consensus module can receive consensus messages more smoothly for faster consensus.。
-- Second, de-duplicate the transaction validation and cache the binary of the transaction。A transaction is verified and decoded only once, freeing up more CPU resources for the execution of blocks in the consensus module.。
-- Third, optimize the synchronization process, before the transaction synchronization, as far as possible to make the synchronization module run in front of the consensus module, so that the synchronization module priority to write transactions into the transaction pool, priority decoding and verification, so that the consensus module to get the transaction, eliminating the process of decoding and verification, faster into the block packaging stage.。
+- First, the transaction verification work in the synchronization module is moved off the P2P callback thread, so that the consensus module receives consensus messages more smoothly and consensus proceeds faster.
+- Second, transaction verification is deduplicated and the transaction's binary encoding is cached. A transaction is verified and decoded only once, freeing more CPU resources for block execution in the consensus module.
+- Third, the synchronization process is optimized so that, ahead of transaction synchronization, the synchronization module runs in front of the consensus module wherever possible: it writes transactions into the transaction pool first, decoding and verifying them first, so that when the consensus module fetches a transaction it skips decoding and verification and enters the block-packaging stage sooner.

In a word: everything serves the consensus process, making packaging, execution, consensus, and block production faster and smoother.

### Transmission device: all-round parallel processing - let the power output efficiently

-If you do not match the appropriate transmission device, no matter how high the power of the engine will not be able to effectively output power。Signature verification, encoding and decoding, and data placement are the more time-consuming parts of the blockchain, in addition to opening transactions.。In previous designs, signature verification, codec, and data drop were all performed serially.。Even if transactions are executed in parallel, the performance of this trust machine is subject to the performance of these three links.。
+Without a matching transmission, no matter how powerful the engine, its power cannot be delivered effectively. Apart from executing transactions, signature verification, encoding/decoding, and data persistence are the most time-consuming parts of the blockchain. In previous designs all three were performed serially, so even with transactions executed in parallel, the performance of this trust machine was still bounded by these three links.

-The performance problems of these three links are endless, and the performance will never rise.!Then equip the high-power
engine with a high-performance transmission device to release its power。 +If these three links remain bottlenecks, performance will never rise!So equip the high-power engine with a high-performance transmission device to release its power。 -FISCO BCOS introduces parallel containers, so that data reading and writing naturally supports concurrent access.。On this basis, for the verification and signing of transactions, the verification and signing of transactions are directly executed in parallel, and the verification and signing process between transactions does not affect each other.;For encoding and decoding, the encoding format of RLP has been modified so that the original RLP format, which can only be read and written sequentially, supports parallel encoding and decoding.;For block drop, the state change is encoded in parallel.。 +FISCO BCOS introduces parallel containers so that data reads and writes naturally support concurrent access。On this basis, the signing and verification of transactions are executed directly in parallel, and the signing and verification of different transactions do not affect each other;for encoding and decoding, the RLP encoding format has been modified so that the originally sequential-only RLP format supports parallel encoding and decoding;for block storage, state changes are encoded in parallel。 -Not only that, FISCO BCOS performs parallel processing where it can be parallelized, allowing the system CPU resources to be maximized.。Transactions are not only executed in parallel when they enter the contract engine, but are also processed in parallel in processes such as signature verification, coding and decoding, and data placement.。Powerful engine, coupled with high-performance transmission, the effect is remarkable!
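The independent, per-transaction parallel verification described above can be sketched minimally in C++ (an illustrative sketch only: `verifyTx` is a hypothetical placeholder, not FISCO BCOS's real verification routine):

```cpp
#include <cassert>
#include <future>
#include <vector>

// Hypothetical placeholder for a signature check; the real FISCO BCOS
// verification routine is not shown in this article.
bool verifyTx(int txPayload) { return txPayload % 2 == 0; }

// Verify a batch of transactions in parallel. Each verification is
// independent, so no synchronization between transactions is needed.
std::vector<char> verifyBatch(const std::vector<int>& txs) {
    std::vector<std::future<bool>> futs;
    futs.reserve(txs.size());
    for (int tx : txs)
        futs.push_back(std::async(std::launch::async, verifyTx, tx));
    std::vector<char> results;  // char instead of bool to avoid vector<bool> quirks
    results.reserve(futs.size());
    for (auto& f : futs) results.push_back(f.get() ? 1 : 0);
    return results;
}
```

Because the checks share no mutable state, adding threads scales the verification stage without the cross-transaction coordination costs discussed above.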
+Not only that: FISCO BCOS parallelizes wherever parallelization is possible, so that system CPU resources are used to the fullest。Transactions are not only executed in parallel inside the contract engine, but are also processed in parallel during signature verification, encoding and decoding, and data storage。A powerful engine coupled with a high-performance transmission: the effect is remarkable! ### Hydrogen Fuels: Precompiled Contracts - An Efficient Lightweight Contract Framework -As we all know, the blockchain runs smart contracts, which are written in the language of solidity.。Solidity contract is deployed to the chain, gas is burned, and the result is obtained。But have you ever thought about switching to a fuel that costs less but makes the car run faster?? +As we all know, blockchains run smart contracts, which are written in Solidity。A Solidity contract is deployed to the chain, gas is burned, and the result is obtained。But have you ever thought about switching to a fuel that costs less yet makes the car run faster? -And look at FISCO BCOS self-developed "hydrogen fuel" - pre-compiled contract.! +Now look at the "hydrogen fuel" FISCO BCOS developed in house - precompiled contracts!
-FISCO BCOS provides organizations with a high-performance, customized, lightweight contract framework。Organizations can build their own pre-compiled contracts into FISCO BCOS nodes according to their business needs.。Precompiled contracts with C.++Written with higher performance than the solidity engine, faster startup, leaner instructions, and less memory usage。Just like "hydrogen fuel," lower cost, higher calorific value, let the car run faster!Of course, extracting "hydrogen fuel" requires a little effort, and the implementation of pre-compiled contracts is relatively complex and has a high threshold.。To learn about precompiled contracts, click: [Precompiled Contract Architecture Design](http://mp.weixin.qq.com/s?__biz=MzU5NTg0MjA4MA==&mid=2247484055&idx=1&sn=2f33d5231784147ed61cb6da85e6d74d&chksm=fe6a87d8c91d0ece832d34c0345d1795c4b88daf9b4af815e94987d4f7abd899a464d0075e09&scene=21#wechat_redirect) +FISCO BCOS provides organizations with a high-performance, customizable, lightweight contract framework。Organizations can build their own precompiled contracts into FISCO BCOS nodes according to their business needs。Precompiled contracts are written in C++, offering higher performance than the Solidity engine, faster startup, leaner instructions, and lower memory usage。Just like "hydrogen fuel": lower cost, higher calorific value, making the car run faster!Of course, extracting "hydrogen fuel" takes some effort: implementing precompiled contracts is relatively complex and has a high entry threshold。To learn about precompiled contracts, click: [Precompiled Contract Architecture Design](http://mp.weixin.qq.com/s?__biz=MzU5NTg0MjA4MA==&mid=2247484055&idx=1&sn=2f33d5231784147ed61cb6da85e6d74d&chksm=fe6a87d8c91d0ece832d34c0345d1795c4b88daf9b4af815e94987d4f7abd899a464d0075e09&scene=21#wechat_redirect) ### Monitoring instrument: multi-dimensional performance analysis tool - gives people a sense of stability in the overall situation -FISCO BCOS uses a large number of
performance analysis tools in the development process, just like many index clear monitoring instruments are installed on the car。We have adopted mainstream performance analysis tools, such as perf and systemtap, to analyze the hotspots, locks, memory, etc. of the program. We have also developed customized performance analysis tools based on the characteristics of the blockchain program process to better evaluate data in consensus, block verification, storage modules and processes。The tool can analyze the time ratio and time change of each stage in the program.。With reliable quantification tools, developers can be aware of every optimization they do.。 +During development, FISCO BCOS makes heavy use of performance analysis tools, like a car fitted with many clear monitoring instruments。We adopted mainstream tools such as perf and systemtap to analyze the program's hotspots, locks, and memory, and also built customized analysis tools around the characteristics of blockchain workflows to better evaluate the consensus, block verification, and storage modules and processes。These tools report the time share and time variation of each stage of the program。With reliable quantification tools, developers are aware of the effect of every optimization they make。 Exclusive steering wheel: parallel contract development framework - to give developers a smooth operating experience -Everything is ready, get in the car!Sitting in the driving position, you will control the exclusive steering wheel provided by FISCO BCOS-the parallel contract development framework!How to operate this machine reasonably depends on this steering wheel。"Hand in hand" parallel contract development framework, in the development of parallel contracts, contract developers do not need to care about the specific underlying logic, but will focus more on their own contract logic.。When the contract is successfully deployed, the
parallel contract is automatically recognized by the underlying code and automatically executed in parallel.! +Everything is ready; get in the car!In the driver's seat, you take the exclusive steering wheel provided by FISCO BCOS: the parallel contract development framework!Operating this machine well depends on this steering wheel。With this "hand-holding" parallel contract development framework, contract developers need not care about the specific underlying logic and can focus on their own contract logic。Once the contract is successfully deployed, the parallel contract is automatically recognized by the underlying code and executed in parallel! -Now, finally got into the car。not enjoyable?It doesn't matter, the next few articles, please pick up, is the real hard core dry goods!We will introduce the parallel transaction executor (PTE) based on the DAG model in FISCO BCOS in the next article. \ No newline at end of file +Now we are finally in the car。Not satisfied yet?No matter: the next few articles are the real hard-core material!We will introduce the DAG-based parallel transaction executor (PTE) in FISCO BCOS in the next article。 \ No newline at end of file diff --git a/3.x/en/docs/articles/3_features/31_performance/performance_optimization_tools.md b/3.x/en/docs/articles/3_features/31_performance/performance_optimization_tools.md index c685e77d2..0d6ff36c0 100644 --- a/3.x/en/docs/articles/3_features/31_performance/performance_optimization_tools.md +++ b/3.x/en/docs/articles/3_features/31_performance/performance_optimization_tools.md @@ -6,17 +6,17 @@ We should forget about small efficiencies, say about 97% of the time: premature **"Premature optimization is the root of all evil."** -Donald Knuth, the computer science pioneer who said this, is not against optimization, but emphasizes optimizing key locations in the system。Assuming that a for
loop takes 0.01 seconds, even if you use various techniques such as loop expansion to improve its performance by 100 times and reduce the time taken to 0.00001 seconds, it is basically imperceptible to the user.。Before quantitative testing of performance issues, various flare-up optimizations at the code level may not only fail to improve performance, but may instead increase code maintenance or introduce more errors.。 +Donald Knuth, the computer-science pioneer who said this, is not against optimization, but emphasizes optimizing the key locations in a system。Suppose a for loop takes 0.01 seconds: even if you use techniques such as loop unrolling to improve its performance 100-fold and cut the time to 0.0001 seconds, the change is basically imperceptible to the user。Before performance problems are measured quantitatively, flashy code-level optimizations may not only fail to improve performance, but may increase maintenance cost or introduce more bugs。 **"Optimization without any evidence is the root of all evil."** -Before optimizing the system, be sure to conduct a detailed performance test on the system to identify the real performance bottlenecks.。Fighting on the front line of FISCO BCOS performance optimization, we have accumulated some experience on how to use performance testing tools to pinpoint performance hotspots.。This article summarizes the tools we use in the optimization process for the reader.。 +Before optimizing a system, be sure to run detailed performance tests to identify the real bottlenecks。Fighting on the front line of FISCO BCOS performance optimization, we have accumulated some experience in using performance-testing tools to pinpoint hotspots。This article summarizes the tools we used during optimization for the reader。 ------ ## 1.Poor Man's Profiler -The Poor's Analyzer, or PMP for short。Although the name is somewhat confusing, but people are
really a serious means of performance analysis, and even have their own official website https://poormansprofiler.org/。The principle of PMP is Stack Sampling. By calling a third-party debugger (such as gdb) and repeatedly obtaining the stack information of each thread in the process, PMP can obtain the hotspot distribution of the target process.。 +The Poor Man's Profiler, or PMP for short。Although the name sounds like a joke, it is a genuinely serious means of performance analysis, and it even has its own official website https://poormansprofiler.org/。The principle behind PMP is stack sampling: by invoking a third-party debugger (such as gdb) and repeatedly capturing the stack of every thread in the process, PMP obtains the hotspot distribution of the target process。 **The first step**to get a snapshot of a certain number of thread stacks: @@ -30,7 +30,7 @@ for x in $(seq 1 $(num)) done ``` -**Second step**function call stack information from the snapshot, sorted by call frequency. +**The second step** extracts the function call-stack information from the snapshots and sorts it by call frequency: ``` awk ' @@ -45,13 +45,13 @@ Finally, the output is obtained, as shown in the following figure: ![](../../../../images/articles/performance_optimization_tools/IMG_5240.PNG) -From the output, you can observe which functions of which threads are frequently sampled, and then you can follow the graph to find possible bottlenecks.。These few lines of shell scripts are where the whole essence of PMP lies.。Extremely simple and easy to use is the biggest selling point of PMP, in addition to relying on a ubiquitous debugger, PMP does not need to install any components, as the PMP author said in the introduction: "**Although more advanced analysis techniques exist, they are all too cumbersome to install without exception... Poor man doesn't have time.
Poor man needs food.**。 +From the output, you can see which threads' functions are frequently sampled, and then follow the trail to likely bottlenecks。These few lines of shell script are the whole essence of PMP。Extreme simplicity and ease of use is PMP's biggest selling point: apart from relying on a ubiquitous debugger, PMP needs no installed components, as the PMP author says in the introduction: "**Although more advanced analysis techniques exist, they are all too cumbersome to install without exception... Poor man doesn't have time. Poor man needs food.**。 -The disadvantages of PMP are also obvious: the startup of gdb is very time-consuming, which limits the sampling frequency of PMP to not be too high, so some important function call events may be missed, resulting in the final profile result is not accurate enough。But in some special occasions, PMP can still play a role, such as in some Chinese technology blogs, there are developers mentioned using PMP to successfully locate the deadlock problem in the online production environment, PMP authors also said that this technology in Facebook, Intel and other large factories have applications.。Anyway, this technique that flashes the programmer's little wisdom with a little humor is worth a glimpse.。 +The disadvantages of PMP are also obvious: gdb is slow to start, which caps PMP's sampling frequency, so some important function-call events may be missed and the final profile may not be accurate enough。Still, PMP can be useful in special situations: some Chinese technology blogs describe developers using PMP to locate deadlocks in online production environments, and the PMP author notes that the technique is applied at large companies such as Facebook and Intel。Anyway, this technique, which flashes the programmer's little wisdom with a
little humor is worth a glimpse。 ## 2.perf -Perf's full name is Performance Event, which is integrated in the Linux kernel after version 2.6.31. It is a powerful performance analysis tool that comes with Linux and uses special hardware PMU (Performance Monitor Unit) and kernel performance counters in modern processors to count performance data.。Perf works by sampling the interrupts of running processes at a certain frequency to obtain the name of the currently executing function and the call stack.。If most of the sample points fall on the same function, it indicates that the function takes a long time to execute or the function is frequently called, and there may be performance problems。 +Perf, short for Performance Events, has been integrated into the Linux kernel since version 2.6.31. It is a powerful performance analysis tool that ships with Linux, using the hardware PMU (Performance Monitoring Unit) of modern processors and kernel performance counters to collect performance data。Perf works by sampling a running process on interrupts at a given frequency to obtain the name of the currently executing function and its call stack。If most sample points fall on the same function, that function either takes long to execute or is called frequently, and there may be a performance problem。 Using perf requires first sampling the target process: @@ -59,7 +59,7 @@ Using perf requires first sampling the target process: $ sudo perf record -F 1000 -p `pidof fisco-bcos` -g -- sleep 60 ``` -In the above command, we use perf record to specify the statistics for recording performance;使用-F specifies that the sampling frequency is 1000Hz, that is, 1000 samples per second;使用-p specifies the process ID to be sampled (both fisco-bcos process ID), we can get it directly through the pidof command;使用-g indicates that call stack information is recorded;Use sleep to specify a sampling duration of 60 seconds。After sampling, perf writes the
collected performance data to the perf.data file in the current directory.。 +In the above command, we use perf record to record performance statistics;use -F to set a sampling frequency of 1000Hz, that is, 1000 samples per second;use -p to specify the process ID to sample (here, the process ID of fisco-bcos), which we can get directly from the pidof command;use -g to record call-stack information;and use sleep to set a sampling duration of 60 seconds。After sampling, perf writes the collected performance data to the perf.data file in the current directory。 ``` $ perf report -n @@ -69,11 +69,11 @@ The above command reads perf.data and counts the percentage of each call stack, ![](../../../../images/articles/performance_optimization_tools/IMG_5241.JPG) -The information is rich enough, but the readability is still not very friendly。Although the use of perf in the example is relatively simple, perf can actually do much more than this。With other tools, the data sampled by perf can be presented to us in a more intuitive and clear way, which is the performance analysis artifact we will introduce next - the flame chart.。 +The information is rich, but not yet very readable。Although this example uses perf in a simple way, perf can do far more。With other tools, the data perf samples can be presented more intuitively and clearly, which brings us to the performance-analysis workhorse we introduce next: the flame graph。 ## 3. Flame Diagram -Flame Graph, or Flame Graph, is powered by the dynamic tracking technology proposed by system performance giant Brendan Gregg, which is mainly used to visualize the data generated by performance analysis tools so that developers can locate performance problems at a glance.。The use of flame chart is relatively simple.
We only need to download a series of tools from github and place them in any local directory: +The flame graph (Flame Graph) builds on dynamic tracing technology promoted by system-performance expert Brendan Gregg and is mainly used to visualize the data generated by performance analysis tools, so that developers can locate performance problems at a glance。Using flame graphs is fairly simple: we only need to download a set of tools from GitHub and place them in any local directory: ``` wget https://github.com/brendangregg/FlameGraph/archive/master.zip && unzip master.zip @@ -81,7 +81,7 @@ wget https://github.com/brendangregg/FlameGraph/archive/master.zip && unzip mast ### 3.1 CPU flame diagram -When we find that FISCO BCOS performance is low, we intuitively want to figure out what part of the code is slowing down the overall speed, and the CPU is our primary focus.。 +When FISCO BCOS performance is low, we intuitively want to know which part of the code is slowing things down, and the CPU is our primary focus。 First use perf to sample the performance of the FISCO BCOS process: @@ -104,36 +104,36 @@ Finally, an image in SVG format is output to show the CPU call stack, as shown i ![](../../../../images/articles/performance_optimization_tools/IMG_5242.JPG) -**The vertical axis represents the call stack**。Each layer is a function and the parent function of its previous layer. The top is the function being executed at the time of sampling. The deeper the call stack, the higher the flame。**The horizontal axis represents the number of samples**。Note that it does not indicate execution time。If the width of a function is wider, it means that it has been drawn more times, and all call stacks will be aggregated and arranged in alphabetical sequence on the horizontal axis.。 +**The vertical axis represents the call stack**。Each layer is a function and the parent function of its previous layer.
The top is the function being executed at the time of sampling. The deeper the call stack, the higher the flame。**The horizontal axis represents the number of samples**。Note that it does not indicate execution time。The wider a function is, the more often it was sampled; all call stacks are aggregated and arranged in alphabetical order along the horizontal axis。 -The flame diagram uses the SVG format, and the interactivity is greatly improved。When opened in the browser, each layer of the flame is labeled with a function name, and when the mouse hovers over it, the full function name, the number of times sampled, and the percentage of the total number of words sampled are displayed, as follows. +The flame graph uses the SVG format, which greatly improves interactivity。When opened in a browser, each layer of the flame is labeled with a function name; hovering over it shows the full function name, its sample count, and its percentage of total samples, as follows: ![](../../../../images/articles/performance_optimization_tools/IMG_5243.JPG) -Click on a layer, the flame diagram will be horizontally enlarged, the layer will occupy all the width, and display detailed information, click on the upper left corner of the "Reset Zoom" can be restored。The following figure shows the percentage of samples for each function when the PBFT module executes the block.
+Clicking a layer zooms the flame graph horizontally: that layer occupies the full width and shows detailed information; clicking "Reset Zoom" in the upper left corner restores the view。The following figure shows each function's sample percentage while the PBFT module executes a block ![](../../../../images/articles/performance_optimization_tools/IMG_5244.JPG) As can be seen from the figure, the main overhead in block execution lies in transaction decoding。This is because, in traditional RLP encoding, the length of each object is indeterminate and the encoding records only the number of objects, not their byte lengths; to obtain any encoded object, all the objects preceding it must be decoded recursively。 -Therefore, the decoding process of RLP encoding is a serial process, and when the number of transactions in the block is large, the overhead of this part will become very large.。In this regard, we propose an optimization scheme for parallel decoding RLP encoding. For specific implementation details, please refer to the previous article ["Parallelization Practice in FISCO BCOS"](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485324&idx=1&sn=94cdd4e7944f1058ee01eadbb7b3ec98&source=41#wechat_redirect)。 +Therefore, decoding RLP is a serial process, and when a block contains many transactions the overhead of this part becomes very large。In this regard, we proposed an optimization scheme for decoding RLP encoding in parallel.
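Why length-prefixed layouts force serial decoding, and how recording per-object offsets unlocks parallelism, can be shown with a toy sketch in C++ (simplified single-byte length prefixes, not the real RLP wire format, and not FISCO BCOS's actual implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <future>
#include <string>
#include <vector>

// Toy length-prefixed stream: [len][payload][len][payload]...
// As with RLP, item N's start is unknown until items 0..N-1 are decoded,
// so decoding must walk the buffer serially.
std::vector<std::string> decodeSerial(const std::string& buf) {
    std::vector<std::string> items;
    size_t pos = 0;
    while (pos < buf.size()) {
        size_t len = static_cast<uint8_t>(buf[pos++]);
        items.push_back(buf.substr(pos, len));
        pos += len;
    }
    return items;
}

// With a precomputed offset table (the essence of the parallel-decode
// idea: store each object's byte offset), every item can be decoded
// independently, here via one async task per item.
std::vector<std::string> decodeParallel(const std::string& buf,
                                        const std::vector<size_t>& offsets) {
    std::vector<std::future<std::string>> futs;
    for (size_t off : offsets)
        futs.push_back(std::async(std::launch::async, [&buf, off] {
            size_t len = static_cast<uint8_t>(buf[off]);
            return buf.substr(off + 1, len);
        }));
    std::vector<std::string> items;
    for (auto& f : futs) items.push_back(f.get());
    return items;
}
```

The trade-off is a slightly larger encoding (the offset table) in exchange for decode work that scales with the number of cores rather than running on a single thread.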
For specific implementation details, please refer to the previous article ["Parallelization Practice in FISCO BCOS"](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485324&idx=1&sn=94cdd4e7944f1058ee01eadbb7b3ec98&source=41#wechat_redirect)。 -With a flame chart, it's easy to see where most of the CPU's time overhead is spent, and then optimize it.。 +With a flame graph, it is easy to see where most of the CPU's time is spent, and then optimize accordingly。 ### 3.2 Off-CPU Flame Chart -When implementing the parallel execution transaction function of FISCO BCOS, we found a confusing phenomenon: sometimes even if the transaction volume is very large and the load of the block is full, but the CPU utilization rate is still relatively low through the top command, usually the utilization rate of the 4-core CPU is less than 200%。After eliminating the possibility of dependencies between transactions, it is speculated that the CPU may be stuck in I / O or lock waiting, so you need to determine where the CPU is waiting.。 +While implementing parallel transaction execution in FISCO BCOS, we found a puzzling phenomenon: sometimes, even with very large transaction volume and fully loaded blocks, the top command showed relatively low CPU utilization, usually under 200% on a 4-core CPU。After ruling out dependencies between transactions, we suspected the CPU was stuck waiting on I/O or locks, so we needed to determine where it was waiting。 -Using perf, we can easily understand the sleep process of any process in the system, the principle is to use perf static tracer to grab the scheduling events of the process, and merge these events through perf inject, and finally get the call flow and sleep time that induce the process to sleep.。 +Using perf, we can easily examine how any process in the system sleeps: the principle is to use perf's static tracer
to capture the process's scheduling events, merge them with perf inject, and finally obtain the call flows that put the process to sleep and the corresponding sleep times。 -We're going to record sched separately through perf.:sched_stat_sleep、sched:sched_switch、sched:sched _ process _ exit three events, which represent the wait event when the process actively abandons the CPU and goes to sleep, the wait event when the process is switched to sleep by the scheduler due to I / O and lock waiting, and the exit event of the process.。 +We record three scheduler events with perf: sched:sched_stat_sleep, sched:sched_switch, and sched:sched_process_exit, which represent the process voluntarily giving up the CPU to sleep, the process being switched out by the scheduler due to I/O or lock waits, and the process exiting。 ``` perf record -e sched:sched_stat_sleep -e sched:sched_switch \ -e sched:sched_process_exit -p `pidof fisco-bcos` -g \ -o perf.data.raw sleep 60 perf inject -v -s -i perf.data.raw -o perf.data -# Generate Off-CPU Flame Chart +# Generate the Off-CPU flame graph perf script -f comm,pid,tid,cpu,time,period,event,ip,sym,dso,trace | awk ' NF > 4 { exec = $1; period_ms = int($5 / 1000000) } NF > 1 && NF <= 4 && period_ms > 0 { print $2 } @@ -142,7 +142,7 @@ perf script -f comm,pid,tid,cpu,time,period,event,ip,sym,dso,trace | awk ' ./flamegraph.pl --countname=ms --title="Off-CPU Time Flame Graph" --colors=io > offcpu.svg ``` -These commands may fail on newer Ubuntu or CentOS systems, which do not support logging
scheduling events for performance reasons。Fortunately, we can choose another profiling tool, OpenResty's SystemTap, instead of perf to collect performance data for the process scheduler。When using SystemTap under CentOS, we only need to install the kernel debuginfo dependencies。 ``` wget https://raw.githubusercontent.com/openresty/openresty-systemtap-toolkit/master/sample-bt-off-cpu @@ -153,29 +153,29 @@ chmod +x sample-bt-off-cpu ./flamegraph.pl --colors=io out.folded > offcpu.svg ``` -If 'sample' occurs due to network problems-bt-off-The cpu 'script failed to download for a long time. You can try the following command: +If the 'sample-bt-off-cpu' script fails to download for a long time due to network problems, try the following command: ```bash https://gitee.com/mirrors/openresty-systemtap-toolkit/raw/master/sample-bt-off-cpu ``` -Get the Off-The CPU flame diagram is shown in the following figure: +The obtained Off-CPU flame diagram is shown in the following figure: ![](../../../../images/articles/performance_optimization_tools/IMG_5245.JPG) -After expanding the core function of executing the transaction, a bunch of lock _ wait on the right side of the flame chart quickly caught our attention。After analyzing their call stack, we found that the root cause of these lock _ wait comes from the fact that we have a lot of print debug logs in our program.。 +After expanding the core transaction-execution function, a cluster of lock_wait frames on the right side of the flame graph quickly caught our attention。After analyzing their call stacks, we found the root cause of these lock_waits: our program prints a large number of debug logs。 -In the early development phase, we added a lot of log code to facilitate debugging, and did not delete it later.。Although we set the log level high during the test, these log-related codes still incur runtime overhead, such as accessing the log level status to determine whether to print the
log, etc.。Because these states require mutually exclusive access between threads, they cause threads to starve due to competing resources。 +In the early development phase, we added a lot of logging code to facilitate debugging and never deleted it。Although the log level was set high during testing, this logging code still incurs runtime overhead, such as reading the log-level state to decide whether to print a log。Because that state requires mutually exclusive access between threads, threads starve while competing for it。 -When we deleted these log codes, the utilization of the 4-core CPU instantly rose to 300% when the transaction was executed.+Given the overhead of scheduling and synchronization between threads, this utilization is already in the normal range。This debugging experience also reminds us that we must be careful to output logs in parallel code that pursues high performance to avoid unnecessary logs that introduce unnecessary performance losses.。 +When we deleted this logging code, 4-core CPU utilization instantly rose to 300%+ during transaction execution。Given the overhead of scheduling and synchronization between threads, this utilization is in the normal range。This debugging experience reminds us to log carefully in parallel code that pursues high performance, avoiding unnecessary logs that introduce unnecessary performance loss。 ### **3.3** Memory Flame Chart -In the early testing phase of FISCO BCOS, we used the test method of repeatedly executing the same block and then calculating the average time taken to execute a block, and we found that the first execution of a block takes much more time than the subsequent execution of a block.。On the surface, this seems to be the first time the block is executed, the program allocates the cache somewhere, but we don't know exactly where the cache is allocated, so we set out to study the memory flame map。 +In the
early testing phase of FISCO BCOS, our test method was to execute the same block repeatedly and compute the average execution time per block, and we found that the first execution of a block takes much longer than subsequent executions。On the surface it looked as though the program allocates some cache the first time a block is executed, but we did not know where that cache was allocated, so we turned to the memory flame graph。 -Memory flame graph is a non-intrusive bypass analysis method. Compared with Valgrid, which simulates memory analysis, and TC Malloc, which counts heap usage, memory flame graph can obtain the memory allocation of the target process without interfering with the operation of the program.。 +The memory flame graph is a non-intrusive, out-of-band analysis method. Compared with Valgrind, which simulates memory analysis, and TCMalloc, which counts heap usage, the memory flame graph captures the target process's memory allocations without interfering with the running program。 -To make a memory flame map, you first need to dynamically add a probe to perf to monitor the malloc behavior of the standard library and sample the call stack of the function that captures the memory request / release in progress.
+To make a memory flame map, you first need to dynamically add a probe to perf to monitor the malloc behavior of the standard library and sample the call stack of the function that captures the memory request / release in progress ``` perf record -e probe_libc:malloc -F 1000 -p `pidof fisco-bcos` -g -- sleep 60 @@ -193,7 +193,7 @@ The resulting flame diagram is shown below: ![](../../../../images/articles/performance_optimization_tools/IMG_5246.JPG) -We initially guessed that this unknown cache might be located in LevelDB's database connection module or JSON decoding module, but by comparing the memory flame maps of the first execution block and subsequent execution blocks, we found that the proportion of malloc samples in each module was approximately the same, so we quickly rejected these guesses。UNTIL COMBINATION OFF-After observing the CPU flame diagram, we noticed that the number of calls to sysmalloc was unusually high when the block was executed for the first time.。Considering the feature that malloc will pre-allocate memory when it is first called, we suspect that this may be the result of the more time-consuming first execution of the block.。 +We initially guessed that this unknown cache might be located in LevelDB's database connection module or JSON decoding module, but by comparing the memory flame maps of the first execution block and subsequent execution blocks, we found that the proportion of malloc samples in each module was approximately the same, so we quickly rejected these guesses。It was not until combined with the Off-CPU flame graph observation that we noticed an unusually high number of calls to sysmalloc when the block was first executed。Considering the feature that malloc will pre-allocate memory when it is first called, we suspect that this may be the result of the more time-consuming first execution of the block。 To test the conjecture, we lower the upper bound of malloc's pre-allocated space: @@ -201,19 +201,19 @@ To test the conjecture, 
we lower the upper bound of malloc's pre-allocated space:

```
export MALLOC_ARENA_MAX=1
```

-Then test again and draw Off-CPU flame graph, found that although the performance is reduced, the first execution of the block takes time and the number of sysmalloc calls, basically the same as the subsequent execution of the block.。From this, we can basically conclude that this interesting phenomenon is due to malloc's memory pre-allocation behavior.。
+Then we test again and draw the Off-CPU flame graph, and find that although performance drops, both the time taken by the first block execution and the number of sysmalloc calls are basically the same as for subsequently executed blocks。From this, we can basically conclude that this interesting phenomenon is due to malloc's memory pre-allocation behavior。

Of course, this behavior is introduced by the operating system to improve the overall performance of the program, and we do not need to interfere with it;a slow first block execution has hardly any negative impact on user experience. Still, a performance problem, however small, is still a problem, and as developers we should get to the bottom of it and know why。

-Although this memory flame chart does not help us directly locate the essential cause of the problem, through intuitive data comparison, we can easily rule out false cause guesses and reduce a lot of trial and error costs.。In the face of complex memory problems, not only need to have a keen sense of smell, but also need a good helper such as Memory flame diagram。
+Although this memory flame graph does not help us directly locate the essential cause of the problem, the intuitive data comparison lets us easily rule out false guesses and avoid a lot of trial-and-error cost。In the face of complex memory problems, we need not only a keen nose but also a good helper such as the memory flame graph。

## 4.
DIY Tools

Although many excellent analysis tools assist us along the road of performance optimization, their powerful features sometimes cannot keep up with the variability of performance problems;at such times we need to develop our own analysis tools around our own needs。

-During the stability test of FISCO BCOS, we found that as the test time increases, the performance of the FISCO BCOS node shows a trend of attenuation. We need to obtain the performance trend change graph of all modules to identify the culprit that causes the performance attenuation.。
+During the stability test of FISCO BCOS, we found that as the test time increased, node performance showed a decaying trend. We needed a performance-trend graph for every module to identify the culprit behind the decay。
-First, we insert a large number of stakes into the code, which are used to measure the execution time of the code segment we are interested in, and record it in the log with a special identifier attached to it.
+First, we insert a large number of instrumentation points ("stakes") into the code to measure the execution time of the code segments we are interested in, and record the results in the log with a special identifier attached ``` auto startTime = utcTime(); @@ -230,18 +230,18 @@ When the node performance has started to drop significantly, we export its log, ![](../../../../images/articles/performance_optimization_tools/IMG_5247.JPG)
-Where the abscissa is the block height, the ordinate is the execution time (ms), and the different color curves represent the performance changes of different modules.。
+Here the abscissa is the block height, the ordinate is the execution time (ms), and the different colored curves represent the performance changes of different modules。
-As can be seen from the figure, only the execution time of the block drop module represented by the red curve obviously increases rapidly with the increase of the amount of data in the database, thus it can be determined that the root cause of the node performance degradation problem lies in the block drop module.。Using the same method, we further dissect the functions of the block drop module, and we find that when the node submits new block data to the database, it calls LevelDB's update method, not the insert method.。
+As can be seen from the figure, only the execution time of the block-flushing module, represented by the red curve, increases rapidly as the amount of data in the database grows, so the root cause of the node performance degradation lies in that module。Using the same method, we further dissected the block-flushing module's functions and found that when the node commits new block data to the database, it calls LevelDB's update method rather than the insert method。
-The difference between the two is that since LevelDB takes K-The data is stored in the form of V, and the update method performs the select operation before writing the data, because
the data to be updated may already exist in the database, and the value data structure needs to be queried by Key before modification, and the query time is proportional to the amount of data, the insert method does not need this step at all.。Since we are writing brand new data, the query step is unnecessary, just change the way the data is written, and the problem of node performance degradation will be solved。
+The difference between the two is that, because LevelDB stores data as K-V pairs, the update method performs a select before writing: the data to be updated may already exist in the database, so the value must be queried by key before modification, and this query time grows with the amount of data, whereas the insert method skips this step entirely。Since we were writing brand-new data, the query step was unnecessary;simply changing the way the data is written solved the node performance degradation。
-A slight change in usage of the same tool can lead to other uses, such as: putting two batches of pile point performance data into the same Excel table, you can use the bar chart tool to clearly show the performance changes of the two test results.。
+With a slight change in usage, the same tool can serve other purposes: for example, by putting two batches of instrumentation data into the same Excel table, you can use the bar-chart tool to clearly show the performance difference between two test runs。
-The following figure shows a performance bar chart before and after optimization when we optimize the transaction decoding and validation process.
+The following figure shows a performance bar chart before and after optimization when we optimize the transaction decoding and validation process ![](../../../../images/articles/performance_optimization_tools/IMG_5248.JPG) -As can be seen from the figure, the optimized transaction decoding and validation process does take less time than it did before the optimization.。With the help of the bar chart, we can easily check whether the optimization method is effective, which plays an important guiding role in the process of performance optimization。 +As can be seen from the figure, the optimized transaction decoding and validation process does take less time than it did before the optimization。With the help of the bar chart, we can easily check whether the optimization method is effective, which plays an important guiding role in the process of performance optimization。 -In summary, DIY tools do not necessarily need to be complex, but they will certainly meet our customization needs as quickly as possible.。 \ No newline at end of file +In summary, DIY tools do not necessarily need to be complex, but they will certainly meet our customization needs as quickly as possible。 \ No newline at end of file diff --git a/3.x/en/docs/articles/3_features/31_performance/sync_and_its_performance_optimization.md b/3.x/en/docs/articles/3_features/31_performance/sync_and_its_performance_optimization.md index 2337834a1..adab972c2 100644 --- a/3.x/en/docs/articles/3_features/31_performance/sync_and_its_performance_optimization.md +++ b/3.x/en/docs/articles/3_features/31_performance/sync_and_its_performance_optimization.md @@ -2,47 +2,47 @@ Author : SHI Xiang | FISCO BCOS Core Developer -Synchronization is a very important process in the blockchain, which is functionally divided into "transaction synchronization" and "state synchronization."。Transaction synchronization is executed when the transaction is submitted, giving priority to ensuring that the transaction can be sent to all nodes 
and packaged for processing.。State synchronization occurs when a node finds that its block height lags behind the entire network, and quickly returns to the highest height of the entire network through state synchronization, so that it can participate in the latest consensus process as a consensus node, while non-consensus nodes can obtain the latest block data for storage and verification.。
+Synchronization is a very important process in the blockchain, functionally divided into "transaction synchronization" and "state synchronization"。Transaction synchronization runs when a transaction is submitted, giving priority to ensuring that the transaction can reach all nodes and be packaged for processing。State synchronization occurs when a node finds that its block height lags behind the rest of the network;through state synchronization it quickly catches up to the highest height of the network, so that a consensus node can take part in the latest consensus process, while non-consensus nodes obtain the latest block data for storage and verification。

## Transaction synchronization

-Transaction synchronization is to allow transactions on the blockchain to reach all nodes as much as possible, providing a basis for consensus to package transactions into blocks.。
+Transaction synchronization aims to let transactions on the blockchain reach all nodes as far as possible, providing the basis for consensus to package transactions into blocks。

![](../../../../images/articles/sync_and_its_performance_optimization/IMG_5249.PNG)

-A transaction (tx1) is sent from the client to a node. After receiving the transaction, the node puts the transaction into its own transaction pool (Tx Pool) for consensus packaging.。At the same time, the node broadcasts the transaction to other nodes, which receive the transaction and place it in their own transaction pool.。
+A transaction (tx1) is sent from the client to a node.
After receiving the transaction, the node puts it into its own transaction pool (Tx Pool) for consensus packaging。At the same time, the node broadcasts the transaction to other nodes, which receive it and place it in their own transaction pools。
-In order to make the transaction reach all nodes as much as possible, the node that receives the broadcast transaction will select one or more adjacent nodes according to its own network topology and network traffic strategy to relay the broadcast.。
+To make the transaction reach all nodes as far as possible, a node that receives a broadcast transaction selects one or more adjacent nodes, according to its own network topology and traffic strategy, to relay the broadcast。

### Transaction Broadcast Strategy

-If each node does not have a limit on forwarding / broadcasting received transactions, the bandwidth will be full and there will be an avalanche of transaction broadcasts.。In order to avoid the avalanche of transaction broadcasts, FISCO BCOS has designed a more sophisticated transaction broadcast strategy to minimize duplicate transaction broadcasts while ensuring transaction accessibility as much as possible.。
+If each node forwarded / broadcast every received transaction without limit, bandwidth would be saturated and transaction broadcasts would avalanche。To avoid such an avalanche, FISCO BCOS has designed a more refined transaction broadcast strategy that minimizes duplicate broadcasts while ensuring transaction reachability as far as possible。

- For transactions coming from the SDK, broadcast to all nodes
- For transactions broadcast from other nodes, randomly select 25% of the nodes to broadcast again
-- A transaction is broadcast only once on a node, and when a duplicate transaction is received, it is not broadcast twice.
+- A transaction is broadcast only once on a node, and when a duplicate transaction is received, it will not be broadcast twice
-Through the above strategy, the transaction can reach all nodes as far as possible, and the transaction will be packaged, agreed and confirmed as soon as possible, so that the transaction can be executed faster.。
+Through the above strategy, transactions reach all nodes as far as possible and are packaged, agreed upon and confirmed as soon as possible, so that they are executed faster。
-The broadcast strategy has pursued the network's final arrival rate as much as possible in complex networks, but there is also a very small probability that a transaction will not reach a node within a certain time window.。When the transaction does not reach a certain node, it will only make the time for the transaction to be confirmed longer, will not affect the correctness of the transaction, and will not miss the transaction, because there is a broadcast mechanism, there are more nodes in the network have the opportunity to continue to process the transaction.。
+The broadcast strategy pursues the highest possible final arrival rate in complex networks, but there is still a very small probability that a transaction fails to reach some node within a certain time window。When that happens, it only lengthens the time until the transaction is confirmed;it does not affect the transaction's correctness, and the transaction is not lost, because thanks to the broadcast mechanism more nodes in the network have the opportunity to continue processing it。

## Block synchronization

Block synchronization keeps the data state of blockchain nodes up to date。

-One of the most important signs of the new and old state of the blockchain is the block height, and the block contains the historical transactions on the chain.
If the block height of a node is aligned with the highest block height of the whole network, the node has the opportunity to backtrack the historical transactions to obtain the latest state of the blockchain.。
+The block height is one of the most important indicators of how current a blockchain node's state is, and blocks contain the historical transactions on the chain. If a node's block height is aligned with the highest block height of the whole network, the node can replay those historical transactions to obtain the latest state of the blockchain。

![](../../../../images/articles/sync_and_its_performance_optimization/IMG_5250.PNG)

-When a new node is added to the blockchain, or a node that has been disconnected restores the network, the block height of this node lags behind other nodes, and its state is not up-to-date.。At this time, block synchronization is required.。As shown in the preceding figure, the node that needs block synchronization (Node 1) actively requests other nodes to download blocks.。The entire download process spreads the network request load across multiple nodes。
+When a new node joins the blockchain, or a disconnected node recovers its network connection, its block height lags behind the other nodes and its state is not up to date。At this time, block synchronization is required。As shown in the preceding figure, the node that needs block synchronization (Node 1) actively requests block downloads from other nodes。The entire download process spreads the network request load across multiple nodes。

### Block Synchronization and Download Queue

-When a blockchain node is running, it regularly broadcasts its highest block height to other nodes.。After receiving the block height broadcast from other nodes, the node will compare it with its own block height.
If its own block height lags behind this block height, it will start the block download process.。The download of blocks is completed through the "request / response" method. The nodes that enter the download process will randomly select the nodes that meet the requirements and send the height interval of the blocks to be downloaded.。The node that receives the download request will respond to the corresponding block based on the content of the request.。
+When a blockchain node is running, it regularly broadcasts its highest block height to other nodes。After receiving such a broadcast, a node compares the advertised height with its own;if its own block height lags behind, it starts the block download process。Blocks are downloaded through a "request / response" pattern: a node entering the download process randomly selects qualifying peers and sends them the height interval of the blocks to be downloaded。A node receiving a download request responds with the corresponding blocks according to the request's content。

![](../../../../images/articles/sync_and_its_performance_optimization/IMG_5251.PNG)

-The node that receives the response block maintains a download queue locally to buffer and sort the downloaded blocks.。The download queue is a priority queue in order of block height。New blocks downloaded are continuously inserted into the download queue, sorted by height。The sorted blocks are executed and verified by the node in turn.。After the verification is passed, update the local data status to increase the block height until the latest block is updated and the block height reaches the highest。
+The node that receives the response blocks maintains a local download queue to buffer and sort the downloaded blocks。The download queue is a priority queue ordered by block height。Newly downloaded blocks are continuously inserted into the queue and sorted by height。The sorted
blocks are executed and verified by the node in turn。After verification passes, the node updates its local data state and raises its block height, until it has updated to the latest block and reached the highest height。

## Performance optimization

@@ -50,32 +50,32 @@ Performance optimization of synchronization can effectively improve system effic

### Encoding Cache

-In the transaction broadcast, the transaction needs to be encoded into binary data and sent to other nodes, which, after receiving the transaction binary data, need to be decoded into a program-recognizable data structure.。Codec becomes a performance bottleneck for transaction broadcasting when the transaction volume is large。FISCO BCOS caches the binary encoding of the transaction, and when the transaction is to be sent, the binary transmission is taken directly from the cache, reducing the frequency of encoding and decoding and increasing the rate of transaction broadcasting.。
+During transaction broadcast, a transaction must be encoded into binary data before being sent;the receiving nodes then decode the binary data into a program-recognizable structure。This codec work becomes a performance bottleneck of transaction broadcasting when the transaction volume is large。FISCO BCOS caches each transaction's binary encoding;when the transaction is to be sent, the binary form is taken directly from the cache, reducing the frequency of encoding and decoding and increasing the broadcast rate。

### Load balancing

-The node behind the block will download the block from other nodes by request.。After receiving the request, other nodes will send the blocks of the corresponding interval to the backward nodes.。When the block is far behind, the FISCO BCOS node will divide the download interval evenly, initiate requests to different nodes, and distribute the download load to different nodes to prevent a single requested node from affecting its
performance due to carrying a large number of data access requests.。
+A node whose blocks lag behind downloads blocks from other nodes by request。After receiving the request, the other nodes send the blocks of the corresponding interval to the lagging node。When a node is far behind, the FISCO BCOS node divides the download interval evenly, issues requests to different nodes, and distributes the download load among them, preventing any single requested node from having its performance hurt by carrying a large number of data access requests。

### **Callback Stripping**

-In the FISCO BCOS node, there are multiple callback threads that process packets received on the network。When the network traffic is large, the thread processing the network packet cannot handle it, and the network packet will be placed in the buffer queue.。The packets on the network are mainly synchronization packets and consensus packets, and consensus packets have a higher priority, which directly affects the block speed.。In order not to affect the processing of the consensus package, FISCO BCOS strips the processing logic of the synchronization package from the network callback thread and hands it over to another independent thread, which is coupled and parallelized with the consensus package.。
+In a FISCO BCOS node, multiple callback threads process packets received from the network。When network traffic is heavy, the packet-processing threads cannot keep up and packets are placed in a buffer queue。Network packets are mainly synchronization packets and consensus packets;consensus packets have the higher priority because they directly affect block production speed。So as not to affect the processing of consensus packets, FISCO BCOS strips the synchronization-packet processing logic out of the network callback thread and hands it to a separate thread, running decoupled from and in parallel with the consensus
package。

### **Signature Verification Deduplication**

-When the synchronization module receives the transaction, it needs to verify the transaction.。After the consensus module receives the block, it removes the transaction from the block and also needs to verify the transaction.。Although it was the same transaction, it was verified in both synchronization and consensus.。However, the verification is very time-consuming, which greatly affects the TPS of the transaction execution.。FISCO BCOS in the execution of the transaction to do a de-duplication logic, whether it is synchronization or consensus, before the check-and-sign record, if the transaction has been checked, then directly from the record to obtain the check-and-sign results, to ensure that the same transaction only check-and-sign once.。At the same time, FISCO BCOS allows synchronization to check signatures before consensus as much as possible, allowing consensus to obtain signature verification results as directly as possible, reducing the time-consuming process of signature verification in consensus.。Consensus is accelerated and the TPS performance of the chain is improved accordingly。
+When the synchronization module receives a transaction, it needs to verify the transaction's signature。When the consensus module receives a block, it takes the transactions out of the block and needs to verify them as well。Although it is the same transaction, it would be verified in both synchronization and consensus。Signature verification is very time-consuming and greatly affects the TPS of transaction execution, so FISCO BCOS applies deduplication logic at execution time: whether in synchronization or in consensus, verification results are recorded, and if a transaction has already been verified, the result is fetched directly from the record, ensuring that each transaction is verified only once。At the same time, FISCO BCOS lets synchronization verify signatures ahead of consensus as much as possible, so that consensus can obtain verification results directly, reducing the time spent on signature verification inside consensus。Consensus is thus accelerated and the chain's TPS improves accordingly。

## SUMMARY

-Consensus and synchronization are essential links in the blockchain。Consensus takes the lead, synchronous play auxiliary。The synchronization process enables all nodes in the entire blockchain network to achieve data consistency, ensuring that the data is verifiable across the network.。At the same time, without affecting the consensus, prepare the required data for the consensus in advance to make the consensus run faster and more stable.。
+Consensus and synchronization are essential links in the blockchain;consensus takes the lead, while synchronization plays a supporting role。The synchronization process brings all nodes in the blockchain network to data consistency, ensuring that data is verifiable across the network。At the same time, without disturbing consensus, it prepares the data consensus will need in advance, making consensus run faster and more stably。

#### Related reading

- [Chaplin deductive consensus and synchronization process optimization](./articles/3_features/31_performance/consensus_and_sync_process_optimization.md)
-- [Synchronization Module Documentation](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/sync/sync.html)
+- [Sync Module Documentation](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/sync/sync.html)

#### "Group chat interaction"

**Q** **Snow without trace**: What if you synchronize to a block on a branch during load-balancing synchronization? In addition, when a fork occurs, how long is the branch retained before it is discarded?
- **A** **Shi Xiang**: The consensus algorithm used by FISCO BCOS is pbft and will not fork.。Without rollback, synchronization can be load balanced, mpt tree can be gone, mpt is changed to table structure storage, there is no data conflict between table structures, you can execute transactions in parallel, so the alliance chain can be fast.。
\ No newline at end of file
+ **A** **Shi Xiang**: The consensus algorithm used by FISCO BCOS is PBFT, which does not fork。With no rollback, synchronization can be load-balanced;the MPT tree can be dropped in favor of table-structured storage, and since there are no data conflicts between tables, transactions can be executed in parallel, which is why the consortium chain can be fast。
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/31_performance/sync_optimization.md b/3.x/en/docs/articles/3_features/31_performance/sync_optimization.md
index 61f234a72..1d1c6be7e 100644
--- a/3.x/en/docs/articles/3_features/31_performance/sync_optimization.md
+++ b/3.x/en/docs/articles/3_features/31_performance/sync_optimization.md
@@ -4,26 +4,26 @@ Author : Chen Yujie | FISCO BCOS Core Developer

**Author language**

-In the FISCO BCOS blockchain system, the synchronization module is responsible。As a transaction transmission expert, the client sends transactions to all node transaction pools, continuously providing the consensus module with "raw materials" for packaging blocks.;It is also a "rescuer of needy households," synchronizing the latest blocks to the "needy households" node with high and backward blocks, so that they can participate in the consensus normally.。
+In the FISCO BCOS blockchain system, the synchronization module shoulders two responsibilities。As a transaction courier, it delivers the transactions sent by clients to the transaction pools of all nodes, continuously supplying the consensus module with "raw materials" for packaging blocks;it is also a "rescuer of needy households," synchronizing the latest blocks to nodes whose block height has fallen behind, so that they can participate in consensus normally。

-Of course, since most of the responsibilities of the synchronization module are related to the network, it is also a "big bandwidth consumer" of the system, which will lead to high bandwidth load on some nodes of the blockchain.。To this end, FISCO BCOS developers have also designed a series of strategies to optimize this "bandwidth consumer" so that it can serve the system more elegantly.。
+Of course, since most of the synchronization module's responsibilities involve the network, it is also a "big bandwidth consumer" of the system, which can put a high bandwidth load on some nodes of the blockchain。To this end, FISCO BCOS developers have designed a series of strategies to optimize this "bandwidth consumer" so that it can serve the system more elegantly。

-This article will detail the optimization of the FISCO BCOS synchronization module.。
+This article will detail the optimizations of the FISCO BCOS synchronization module。

------

## Initial synchronization module

-Transaction synchronization and block synchronization are the main responsibilities of the synchronization module of the FISCO BCOS blockchain system, both of which are network-related.。
+Transaction synchronization and block synchronization are the main responsibilities of the synchronization module of the FISCO BCOS blockchain system, and both are network-related。

![](../../../../images/articles/sync_optimization/IMG_5252.PNG)

-As shown in the figure above, transaction synchronization is responsible for sending transactions sent by the client to all other nodes, providing the consensus module with transactions for packaging blocks。To ensure that transactions can reach all nodes, transaction synchronization mainly includes two parts: transaction broadcast and transaction forwarding.
+As shown in the figure above, transaction synchronization is responsible for sending transactions received from the client to all other nodes, providing the consensus module with transactions for packaging blocks. To ensure that transactions reach all nodes, transaction synchronization comprises two parts: transaction broadcast and transaction forwarding.
- **Transaction Broadcast**: The client first sends the transaction to the client directly connected node, which broadcasts the received transaction to all other nodes;
-- **Transaction Forwarding**: In order to ensure that the transaction can reach all nodes in the case of network disconnection, the nodes that receive the broadcast transaction are randomly selected 25% of the nodes to forward the received transaction.。
+- **Transaction Forwarding**: To ensure that transactions can reach all nodes even when some network links are down, each node that receives a broadcast transaction randomly selects 25% of the nodes and forwards the received transaction to them.
-Block synchronization is responsible for saving the high and backward blocks"Difficult households"to synchronize the latest block high to the node behind the block high。When the block height of a node is lower than that of other nodes, a new block is actively pulled from the node with a higher block height.。
+Block synchronization is responsible for rescuing the "needy households" whose block height lags behind, synchronizing the latest blocks to them. When a node's block height is lower than that of other nodes, it actively pulls new blocks from nodes with a higher block height.
## Unreasonable network use posture of synchronization module
@@ -31,25 +31,25 @@ The previous section mentioned that the synchronization module is a "large bandw
### High network load of client directly connected nodes during transaction synchronization
-Considering the slow messaging speed of the Gossip protocol, the alliance chain scenario generally adopts the method of fully interconnected node networks to improve network efficiency.。To ensure that the transaction sent by the client can reach all nodes quickly, the client directly connected node will broadcast the transaction to all nodes after receiving the transaction.。Due to the limited bandwidth of the external network of blockchain nodes, as the size of the nodes increases, the client directly connected nodes will inevitably become a system bottleneck due to the high network load.。
+Considering the Gossip protocol's slow message propagation, consortium chain scenarios generally adopt fully interconnected node networks to improve network efficiency. To ensure that a transaction sent by the client reaches all nodes quickly, the node directly connected to the client broadcasts the transaction to all nodes upon receipt. Because the external network bandwidth of blockchain nodes is limited, as the number of nodes grows, the client's directly connected node inevitably becomes a system bottleneck due to its high network load.
### Low network utilization efficiency when forwarding transactions
-In order to ensure that transactions can still reach all nodes in the event that some nodes are disconnected from the network, the synchronization module introduces a transaction forwarding mechanism.。After a node receives a transaction, it randomly selects a number of nodes to broadcast the received transaction。
+To ensure that transactions can still reach all nodes when some nodes are disconnected from the network, the synchronization module introduces a transaction forwarding mechanism: after a node receives a transaction, it randomly selects a number of nodes and forwards the received transaction to them.
-When the network is fully connected, this will cause some nodes to frequently receive duplicate data packets, and the more nodes, the more redundant message packets due to transaction
forwarding, which will undoubtedly cause a huge waste of network bandwidth.。
+When the network is fully connected, this causes some nodes to frequently receive duplicate data packets, and the more nodes there are, the more redundant message packets transaction forwarding produces, which is an enormous waste of network bandwidth.
-### During block synchronization, the network load of some nodes is high, resulting in the node size is not scalable.
+### During block synchronization, high network load on some nodes makes the node count unscalable
-Considering the high complexity and non-infinite scalability of the blockchain of the BFT or CFT consensus algorithms currently in use, most business architectures have only some nodes as consensus nodes and other nodes as observation nodes.(Do not participate in consensus, but have the full amount of blockchain data), As shown in the figure below。
+Considering the high complexity and limited scalability of the BFT and CFT consensus algorithms currently in use, most business architectures designate only some nodes as consensus nodes and the rest as observation nodes (nodes that do not participate in consensus but hold the full blockchain data), as shown in the figure below.
![](../../../../images/articles/sync_optimization/IMG_5253.PNG)
-In this architecture, most observation nodes randomly download blocks from the consensus node with the latest block。In a blockchain system with n consensus nodes and m observation nodes, set each block size to block _ size, ideally(namely load balancing), one block per consensus, each consensus node needs to send blocks to m / n observation nodes, and the bandwidth out of the consensus node is approximately(m/n)*block_size;If the network bandwidth is bandwidth, each consensus node can send the(bandwidth/block_size)The maximum node size of the blockchain is(n*bandwidth/block_size)。When the bandwidth of the public network is small and the number of blocks is large, the number of nodes that can be accommodated is limited, so the random block synchronization strategy is not scalable.。
+In this architecture, most observation nodes randomly download blocks from a consensus node holding the latest block. In a blockchain system with n consensus nodes and m observation nodes, let each block's size be block_size. Ideally (that is, with load balancing), for each consensus round every consensus node sends the new block to m/n observation nodes, so a consensus node's outbound bandwidth is approximately (m/n)*block_size. If each node's network bandwidth is bandwidth, one consensus node can serve at most bandwidth/block_size observation nodes, so the maximum node count of the blockchain is n*bandwidth/block_size. When public network bandwidth is small and blocks are large, the number of nodes that can be accommodated is limited, so the random block synchronization strategy is not scalable.
## Optimization Strategy of Synchronization Module
-In order to improve the efficiency of system bandwidth use and the scalability of the system, FISCO BCOS developers have proposed a series of optimization strategies to "correct" the unreasonable network usage posture of the synchronization module, so that it can serve the FISCO BCOS blockchain system more elegantly and efficiently.。
+To improve the system's bandwidth efficiency and scalability, FISCO BCOS developers have proposed a series of optimization strategies to "correct" the synchronization module's unreasonable network usage, so that it can serve the FISCO BCOS blockchain system more elegantly and efficiently.
### Strategy 1: Trading Tree Broadcast
@@ -57,33 +57,33 @@ In order to reduce the network pressure caused by the transaction broadcast of c
![](../../../../images/articles/sync_optimization/IMG_5254.PNG)
-- **Before optimization:**After receiving the client transaction, the node broadcasts the full amount to other nodes.;
+- **Before optimization:** After receiving a client transaction, the node broadcasts it in full to all other nodes;
- **After optimization:**After the node receives the client transaction, it sends it to the child node, and after the child node receives the transaction, it continues to forward the transaction to its own child node。
-After the transaction tree broadcast is used, the client directly connected nodes shown in the figure above allocate part of the network load to the child nodes, reducing the bandwidth load to half of the original, achieving the goal of load balancing.。And, since the outgoing bandwidth of all node broadcast transactions is only related to the width of the tree topology, the transaction tree broadcast strategy is scalable。In addition, compared to the Gossip-based transaction broadcast mechanism, the tree broadcast strategy increases the transaction broadcast rate while reducing the number of redundant message packets in the network.。
+After transaction tree broadcast is adopted, the client's directly connected node shown in the figure above offloads part of its network load to its child nodes, halving its bandwidth load and achieving load balancing. Moreover, since every node's outbound broadcast bandwidth depends only on the width of the tree topology, the transaction tree broadcast strategy is scalable. In addition, compared with a Gossip-based broadcast mechanism, the tree broadcast strategy speeds up transaction propagation while reducing the number of redundant message packets in the network.
### Strategy 2: Transaction forwarding optimization based on state packets
-In order to eliminate the bandwidth consumption caused by transaction forwarding and improve network efficiency, FISCO BCOS proposes a transaction forwarding strategy based on state packets.。The node can obtain the missing transactions based on the received transaction status and the existing transactions in the
transaction pool, and pull the missing transactions directly to the corresponding node.。
+To eliminate the bandwidth consumption caused by transaction forwarding and improve network efficiency, FISCO BCOS proposes a transaction forwarding strategy based on status packets. A node compares a received transaction status packet against the transactions already in its pool to determine which transactions it is missing, and pulls those directly from the corresponding peer.
![](../../../../images/articles/sync_optimization/IMG_5255.PNG)
-In the preceding figure, the client is directly connected to node0, but node0 is disconnected from node1 and node4. In this case, node0 can only broadcast transactions to node2 and node3.。After receiving the transaction, node2 and node3 package the list of the latest transaction into a status package and send it to other nodes。After receiving the status package, node1 and node4 compare the list of transactions in the local transaction pool to obtain the list of missing transactions and request transactions from node2 or node3 that have these transactions in batches.。In a fully connected network environment, the transaction status of all nodes is basically the same, and there are fewer transaction requests between nodes, which greatly reduces the bandwidth waste caused by forwarding redundant transactions compared to the strategy of directly forwarding transactions.。
+In the preceding figure, the client is directly connected to node0, but node0 is disconnected from node1 and node4.
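On receiving a status packet, the node's work reduces to a set difference between the announced transaction hashes and its local transaction pool; only the difference is then requested from a peer. A minimal Python sketch of this idea, with hypothetical names rather than the actual FISCO BCOS interfaces:

```python
def missing_txs(status_packet_hashes, local_pool_hashes):
    """Return the transaction hashes a node still lacks.

    status_packet_hashes: hashes announced in a peer's status packet.
    local_pool_hashes: hashes already present in the local transaction pool.
    """
    # Set difference: everything announced that we do not already hold.
    return set(status_packet_hashes) - set(local_pool_hashes)

# A node already holding tx_a receives a status packet announcing three
# transactions; it requests only the two it is missing, instead of having
# all three forwarded to it again.
need = missing_txs({"tx_a", "tx_b", "tx_c"}, {"tx_a"})
```

Because only small hash lists travel in status packets, a fully synchronized network exchanges almost no redundant transaction payloads.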
In this case, node0 can only broadcast transactions to node2 and node3. After receiving the transactions, node2 and node3 package their latest transaction lists into a status packet and send it to the other nodes. After receiving the status packet, node1 and node4 compare it with the transaction lists in their local transaction pools to obtain the list of missing transactions, and request those transactions in batches from node2 or node3, which hold them. In a fully connected network environment, the transaction status of all nodes is basically the same and few transaction requests pass between nodes, which greatly reduces the bandwidth wasted on forwarding redundant transactions compared to the strategy of forwarding transactions directly.
### Strategy 3: Block synchronization scalability optimization
-In order to reduce the impact of the network output bandwidth of the consensus node on the network scale when multiple observation nodes synchronize blocks to a single consensus node, and improve the scalability of block synchronization in the blockchain system, FISCO BCOS designs and implements a block state tree broadcast strategy.。
+To reduce the impact of consensus nodes' outbound network bandwidth on network scale when multiple observation nodes synchronize blocks from a single consensus node, and to improve the scalability of block synchronization in the blockchain system, FISCO BCOS designs and implements a block status tree broadcast strategy.
The following figure shows the block synchronization of a blockchain system consisting of three consensus nodes and 18 observation nodes along a trigeminal tree:
![](../../../../images/articles/sync_optimization/IMG_5256.PNG)
-The strategy allocates the observation node to each consensus node and constructs a tritree with the consensus node as the vertex.。After the consensus node is out of the block, it gives priority to sending the latest block status to its child observation node, and after the child observation node synchronizes the latest block, it gives priority to sending the latest block status to its own child node, and so on.。After the block state tree broadcast strategy is adopted, each node preferentially sends the latest block state to the child node, and the child node preferentially synchronizes the latest block to the parent node, set the block size to block _ size and the width of the tree to w, then the network bandwidth for block synchronization is(block_size * w), regardless of the total number of nodes in the blockchain system, is scalable。
+The strategy assigns the observation nodes to the consensus nodes and constructs a ternary tree rooted at each consensus node. After a consensus node produces a block, it first sends the latest block status to its child observation nodes; once a child observation node has synchronized the latest block, it in turn sends the latest block status to its own children, and so on. Under this strategy, each node sends the latest block status preferentially to its children and each child synchronizes the latest block preferentially from its parent; letting the block size be block_size and the width of the tree be w, the network bandwidth a node spends on block synchronization is block_size * w, independent of the total number of nodes in the blockchain system, so the strategy is scalable.
-In addition, considering that in the tree topology, the disconnection of nodes may cause blocks to fail to reach some nodes, the block state tree broadcast optimization strategy also uses the gossip protocol to synchronize the block state regularly, so that the disconnected nodes in the tree topology can also synchronize the latest blocks from their neighbors, ensuring the robustness of the tree block state broadcast.。
+In addition, since node disconnections in the tree topology may prevent blocks from reaching some nodes, the block status tree broadcast strategy also uses the gossip protocol to synchronize block status periodically, so that disconnected nodes in the tree topology can still obtain the latest blocks from their neighbors, ensuring the robustness of tree-based block status broadcast.
## Summary
-The synchronization module is a small expert in transaction transmission, as well as a "rescuer of needy households.""Large bandwidth consumption", this"Large bandwidth consumption"Whether you are performing the task of synchronizing transactions or performing the task of synchronizing blocks, you are suspected of wasting bandwidth and overusing part of the node bandwidth.。
+The synchronization module is a small expert in transaction transmission and a "rescuer of needy households," but it is also a "big bandwidth consumer": whether synchronizing transactions or synchronizing blocks, it was liable to waste bandwidth and over-consume the bandwidth of some nodes.
-FISCO BCOS developers use a series of optimization strategies to standardize the bandwidth usage posture of the synchronization module, minimize redundant message packets of the synchronization module, and allocate the bandwidth pressure of high-load nodes to subordinate child nodes, which improves the scalability of the blockchain system. The optimized synchronization module can serve the blockchain system more elegantly, efficiently and robustly.。 \ No newline at end of file
+FISCO BCOS developers use a series of optimization strategies to standardize the synchronization module's bandwidth usage, minimize its redundant message packets, and spread the bandwidth pressure of high-load nodes across their child nodes, improving the scalability of the blockchain system.
The optimized synchronization module can serve the blockchain system more elegantly, efficiently, and robustly. \ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/32_consensus/consensus_optimization.md b/3.x/en/docs/articles/3_features/32_consensus/consensus_optimization.md
index e58aed3b4..ad45462c4 100644
--- a/3.x/en/docs/articles/3_features/32_consensus/consensus_optimization.md
+++ b/3.x/en/docs/articles/3_features/32_consensus/consensus_optimization.md
@@ -4,15 +4,15 @@ Author : Chen Yujie | FISCO BCOS Core Developer
**Author language**
-The original PBFT consensus algorithm has room for continuous optimization in terms of block packaging, transaction validation, block execution, and empty block processing, and in order to make the PBFT algorithm faster and more stable, FISCO BCOS has made a series of optimizations, including.
+The original PBFT consensus algorithm has room for optimization in block packaging, transaction validation, block execution, and empty-block processing. To make the PBFT algorithm faster and more stable, FISCO BCOS has made a series of optimizations, including:
-- Packaging and consensus proceed concurrently;
-- Do not repeat check and sign transactions;
+- Packaging and consensus performed concurrently;
+- No repeated checking and signature verification of transactions;
- Introduction of DAG parallel transaction execution framework for parallel execution of intra-block transactions;
-- Empty blocks quickly trigger view switching, and switch the leader, do not drop empty blocks, eliminate the storage overhead of empty blocks, and effectively prevent nodes from doing evil.;
+- Empty blocks quickly trigger view switching and a Leader change without being written to disk, eliminating the storage overhead of empty blocks and effectively preventing nodes
from acting maliciously;
+- Solving the problem that a node cannot quickly catch up with the other nodes' views after going down, ensuring the availability of the system.
-This article details the consensus optimization scheme of FISCO BCOS from three aspects: performance, storage and availability.。
+This article details the consensus optimization scheme of FISCO BCOS from three aspects: performance, storage, and availability.
## Performance optimization
@@ -20,66 +20,66 @@ Taking into account**Leader Rotating Serial Packaging Transactions**、**Slow tr
### Packaging and consensus concurrent execution
-PBFT consensus algorithm in each round of consensus, including**Packaging Phase**和**Consensus phase**When the Leader packages a new block, all consensus nodes are in the state of waiting for the Prepae package and cannot enter the consensus phase.;When the consensus node is in the consensus phase, the leader's packaging thread does not work, but packaging blocks and consensus are two independent and mutually exclusive processes that can be executed concurrently.。
+Each round of PBFT consensus includes a **packaging phase** and a **consensus phase**. While the Leader packages a new block, all consensus nodes wait for the Prepare packet and cannot enter the consensus phase; while the consensus nodes are in the consensus phase, the Leader's packaging thread sits idle. Yet packaging blocks and reaching consensus are two independent processes that can be executed concurrently.
![](../../../../images/articles/consensus_optimization/IMG_4897.PNG)
-Let the time overhead of the packaging phase be t, the time overhead of the consensus phase be u, and the time overhead of n rounds of consensus be n ∗(t+u);However, if the leader of the next round of consensus participates in the consensus phase and also packages the blocks in advance and broadcasts the already packaged blocks at the time of the next round of consensus, the consensus time overhead can be reduced to n ∗ u.+t, time overhead is reduced(n-1)*t, can effectively improve the performance of PBFT consensus algorithm。
+Let the time overhead of the packaging phase be t and that of the consensus phase be u, so n rounds of consensus cost n ∗ (t+u). If, however, the Leader of the next round packages its block in advance while participating in the current consensus phase, and broadcasts the already-packaged block when the next round of consensus starts, the consensus time overhead can be reduced to n ∗ u + t, a saving of (n-1) ∗ t, which effectively improves the performance of the PBFT consensus algorithm.
### Avoid duplicate trade checks
![](../../../../images/articles/consensus_optimization/IMG_4898.PNG)
-After receiving the Prepare packet sent by the leader, the consensus node will take out the block and verify the validity of each transaction signature in the block. However, transaction verification is a time-consuming operation, which will increase the time cost of the PBFT Prepare phase and reduce performance.。
+After receiving the Prepare packet sent by the Leader, a consensus node extracts the block and verifies the validity of every transaction signature in it. Transaction verification is a time-consuming operation, which increases the time cost of the PBFT Prepare phase and reduces performance.
-Considering that when a transaction is inserted into the transaction pool, a validation is performed, as shown in the following figure, the FISCO BCOS system has been optimized to prevent duplicate validation of transactions, and the following is a detailed description of the FISCO BCOS process to prevent duplicate validation of transactions, taking into account the entire transaction flow process.
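The check-once policy being described can be pictured as a cache keyed by transaction hash: a signature is verified the first time a transaction is seen, and a block's transactions are re-verified only if they are absent from the pool. A toy sketch with hypothetical names, not the real FISCO BCOS code:

```python
class TxPool:
    """Toy transaction pool that remembers already-verified transactions."""

    def __init__(self):
        self.verified = {}          # tx hash -> transaction record
        self.signature_checks = 0   # counts the expensive verifications

    def _verify_signature(self, tx_hash):
        # Stand-in for the costly cryptographic signature check.
        self.signature_checks += 1
        return True

    def insert(self, tx_hash):
        """Called on RPC receipt or sync broadcast: verify once, then cache."""
        if tx_hash not in self.verified and self._verify_signature(tx_hash):
            self.verified[tx_hash] = True

    def verify_block(self, block_tx_hashes):
        """Called on a Prepare packet: only unseen transactions are checked."""
        for h in block_tx_hashes:
            if h not in self.verified:
                self.insert(h)

pool = TxPool()
for h in ["tx1", "tx2", "tx3"]:
    pool.insert(h)  # three verifications, one per first sighting
pool.verify_block(["tx1", "tx2", "tx3", "tx4"])  # only tx4 needs checking
```

With this shape, a Prepare packet containing N transactions costs at most as many signature checks as there are transactions the node has never seen, rather than N checks per block.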
+Considering that a transaction is already validated when it is inserted into the transaction pool, the FISCO BCOS system has been optimized to avoid validating the same transaction twice, as shown in the following figure. The following walks through the entire transaction flow and how duplicate validation is avoided:
![](../../../../images/articles/consensus_optimization/IMG_4899.PNG)
-1. After RPC receives the transaction sent by the client, it checks and signs the transaction.;
+1. After the RPC module receives a transaction sent by the client, it verifies the transaction signature;
2. After the transaction is verified, it is inserted into the transaction pool, and the synchronization module broadcasts the transaction;
-3. After receiving the transaction from other nodes, the synchronization module of other nodes checks the transaction and inserts the valid transaction into the transaction pool.;
-4. After receiving the Prepare package, the consensus module solves the blocks in the Prepare package, determines whether the transactions in the block are in the transaction pool, and verifies only the transaction signatures that are not included in the transaction pool.。
+3. When the synchronization module of another node receives the transaction, it verifies the transaction and inserts the valid transaction into its transaction pool;
+4. After receiving the Prepare packet, the consensus module decodes the block it contains, checks which of the block's transactions are already in the transaction pool, and verifies only the signatures of the transactions not found there.
-After the above optimization, the block decoding and verification time of 10000 transactions in the Prepare request is reduced from 2s to 200ms, which greatly reduces the time overhead of the Prepare phase.。
+After the above optimization, decoding and verifying a Prepare request block containing 10000 transactions takes 200ms instead of 2s, greatly reducing the time overhead of the Prepare phase.
### block parallel execution
-Block execution is one of the main time overheads of the PBFT consensus algorithm.
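The intuition behind executing intra-block transactions in parallel is that transactions touching disjoint state have no dependency between them and can run concurrently. A toy sketch of grouping by touched account, a deliberate simplification of the real DAG dependency analysis, with hypothetical names:

```python
from collections import defaultdict

def conflict_groups(txs):
    """Group transactions so that only those touching the same account
    stay ordered; distinct groups are free to execute in parallel.

    txs: list of (tx_id, touched_account) pairs -- a simplification of
    the real analysis, which inspects each transaction's full
    read/write set to build a dependency DAG.
    """
    groups = defaultdict(list)
    for tx_id, account in txs:
        groups[account].append(tx_id)  # same account => serial dependency
    return list(groups.values())

# Transfers on three distinct accounts form three independent groups,
# so a block of these four transactions parallelizes three ways.
groups = conflict_groups([("t1", "A"), ("t2", "B"), ("t3", "A"), ("t4", "C")])
```

Each group preserves its in-block order, while a scheduler may dispatch the groups to separate worker threads.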
Without any parallel optimization, the PBFT consensus algorithm can hardly reach consensus on a block containing tens of thousands of transactions.。
+Block execution is one of the main time overheads of the PBFT consensus algorithm. Without any parallel optimization, the PBFT consensus algorithm can hardly reach consensus on a block containing tens of thousands of transactions.
-In order to improve the TPS of the blockchain system, the FISCO BCOS system has developed a DAG-based transaction parallel execution engine and introduced a parallelizable contract development framework to support parallel execution of transactions, reaching tens of thousands of TPS.。For details, please refer to: [Blockchain Performance Take Off: DAG-based Parallel Transaction Execution Engine]。](https://mp.weixin.qq.com/s?__biz=MzU5NTg0MjA4MA==&mid=2247484211&idx=1&sn=73591fef0a1a7cc683fd6577b362efca&chksm=fe6a867cc91d0f6aad155a2b7ecd2e077ff35af41e088533626ede34af24a57f3613e197af2d&mpshare=1&scene=21&srcid=0806kJGQCVXQewNJU9ZsRQ2w&sharer_sharetime=1565076787459&sharer_shareid=bc5c95f953e1901389b9c82c159fbb6b&rd2werd=1#wechat_redirect)
+In order to improve the TPS of the blockchain system, FISCO BCOS has developed a DAG-based parallel transaction execution engine and introduced a parallelizable contract development framework to support parallel execution of transactions, reaching tens of thousands of TPS. For details, please refer to [Blockchain Performance Takes Off: DAG-based Parallel Transaction Execution Engine](https://mp.weixin.qq.com/s?__biz=MzU5NTg0MjA4MA==&mid=2247484211&idx=1&sn=73591fef0a1a7cc683fd6577b362efca&chksm=fe6a867cc91d0f6aad155a2b7ecd2e077ff35af41e088533626ede34af24a57f3613e197af2d&mpshare=1&scene=21&srcid=0806kJGQCVXQewNJU9ZsRQ2w&sharer_sharetime=1565076787459&sharer_shareid=bc5c95f953e1901389b9c82c159fbb6b&rd2werd=1#wechat_redirect).
## Storage optimization
![](../../../../images/articles/consensus_optimization/IMG_4900.PNG)
-In order to ensure the normal operation of the system, confirm that the Leader is available, and prevent the Leader from deliberately doing evil, the blockchain system based on the PBFT consensus algorithm will generate empty blocks when there are no transactions and reach a consensus on the empty blocks.。
-Although the empty block consensus is necessary, considering that the QPS of the current blockchain system is not large, the empty block will consume storage space and reduce the efficiency of hard disk utilization.(Number of transactions that can be stored)。
-FISCO BCOS implements an efficient empty block processing method based on the PBFT consensus algorithm, ensuring that empty blocks participate in the PBFT consensus process while not falling empty blocks, improving disk utilization efficiency.。Detailed scheme can refer to here: ["FISCO BCOS PBFT empty block processing"](https://mp.weixin.qq.com/s?__biz=MzU5NTg0MjA4MA==&mid=2247485288&idx=2&sn=35e32f22cda893e7f02fe58369000164&chksm=fe6a8227c91d0b31133d7302b25decb6f6bba08a8d70848fcaf6573e6983a8e69885d2ed7fa3&mpshare=1&scene=21&srcid=&sharer_sharetime=1565077005952&sharer_shareid=bc5c95f953e1901389b9c82c159fbb6b&rd2werd=1#wechat_redirect)。
+To ensure the normal operation of the system, confirm that the Leader is available, and prevent the Leader from deliberately acting maliciously, blockchain systems based on the PBFT consensus algorithm generate empty blocks when there are no transactions and reach consensus on them.
+Although empty-block consensus is necessary, given that the QPS of current blockchain systems is not high, empty blocks consume storage space and reduce hard disk utilization efficiency (the number of transactions that can be stored).
+FISCO BCOS implements an efficient empty-block handling method based on the PBFT consensus algorithm, which lets empty blocks participate in the PBFT consensus process without persisting them to disk, improving disk utilization efficiency. For the detailed scheme, see ["FISCO BCOS PBFT empty block
processing"](https://mp.weixin.qq.com/s?__biz=MzU5NTg0MjA4MA==&mid=2247485288&idx=2&sn=35e32f22cda893e7f02fe58369000164&chksm=fe6a8227c91d0b31133d7302b25decb6f6bba08a8d70848fcaf6573e6983a8e69885d2ed7fa3&mpshare=1&scene=21&srcid=&sharer_sharetime=1565077005952&sharer_shareid=bc5c95f953e1901389b9c82c159fbb6b&rd2werd=1#wechat_redirect).
## Availability Optimization
![](../../../../images/articles/consensus_optimization/IMG_4901.PNG)
-When a newly started node or a new node joins the blockchain network, if it cannot immediately reach an agreement with other node views, it will affect the fault tolerance of the system.。
+When a newly started node or a newly added node joins the blockchain network, failure to promptly reach view agreement with the other nodes hurts the fault tolerance of the system.
-- case1: 4-node blockchain system, node0 downtime, the number of fault-tolerant nodes of the remaining three nodes is 0;If node0 restarts and cannot quickly catch up with other node views, the number of fault-tolerant nodes in the system is still 0, and node0 is in the consensus exception state。
-- case2: The 2-node blockchain system is running normally, and the new node node2 is added. If node2 cannot quickly catch up with other node views, the system will be abnormal due to one node(New Joined Node)while in a consensus abnormal state。
+- case1: In a 4-node blockchain system with node0 down, the remaining three nodes can tolerate 0 faulty nodes; if node0 restarts but cannot quickly catch up with the other nodes' views, the system can still tolerate 0 faulty nodes and node0 remains in a consensus-exception state.
+- case2: A 2-node blockchain system is running normally and a new node, node2, is added.
If node2 cannot quickly catch up with the views of the other nodes, the system will be in a consensus-exception state because of this one node (the newly joined node).

-To solve the above problems, the FISCO BCOS PBFT consensus algorithm introduces a fast view catch-up mechanism. The newly started node sends a view switching packet to all other consensus nodes, and other nodes reply to the latest view after receiving the packet, so that the newly started node can quickly reach a consistent view with other consensus nodes, and the system will not have a consensus exception after adding a new node.。
+To solve the above problems, the FISCO BCOS PBFT consensus algorithm introduces a fast view catch-up mechanism: a newly started node sends a view-switching packet to all other consensus nodes, and each node replies with its latest view on receipt, so the newly started node quickly reaches a view consistent with the other consensus nodes, and adding a new node no longer causes a consensus exception.

![](../../../../images/articles/consensus_optimization/IMG_4902.PNG)

As shown in the figure above, the core process is as follows:

-- The newly started node broadcasts the view switch request package ViewChange to all other nodes, and the view ViewChange.toView in the request package is 1;
-- When another node receives a ViewChange request with a toView much smaller than the current node view, the reply contains the current view(view)ViewChange package for;
-- Just started node collection full 2*f+After 1 ViewChange package, switch to a view consistent with other consensus nodes。
+- The newly started node broadcasts the view-switching request package ViewChange to all other nodes; the view ViewChange.toView in the request package is 1;
+- When another node receives a ViewChange request whose toView is much smaller than its own current view, it replies with a ViewChange package containing its current view (view);
+- After the newly started node has collected 2*f+1 ViewChange packages, it switches to a view consistent with the other consensus nodes.

## SUMMARY

-The above details the optimization strategy of FISCO BCOS on the consensus algorithm, FISCO BCOS uses a systematic approach to make the PBFT algorithm more efficient and usable.。
-Of course, in addition to the problems mentioned above, the PBFT algorithm also has room for continuous optimization in terms of network complexity. The FISCO BCOS development team is also actively investigating the latest consensus algorithm and consensus algorithm optimization strategies, and seeking solutions for large-scale node consensus.。
+The above details FISCO BCOS's optimization strategies for the consensus algorithm; FISCO BCOS takes a systematic approach to making the PBFT algorithm more efficient and more usable.
+Of course, beyond the problems mentioned above, the PBFT algorithm still has room for continuous optimization in terms of network complexity. The FISCO BCOS development team is also actively investigating the latest consensus algorithms and optimization strategies, seeking solutions for large-scale node consensus.

diff --git a/3.x/en/docs/articles/3_features/32_consensus/pbft_empty_block_processing.md b/3.x/en/docs/articles/3_features/32_consensus/pbft_empty_block_processing.md
index 682c16cc6..6e13f42f3 100644
--- a/3.x/en/docs/articles/3_features/32_consensus/pbft_empty_block_processing.md
+++ b/3.x/en/docs/articles/3_features/32_consensus/pbft_empty_block_processing.md
@@ -6,25 +6,25 @@ Author : Chen Yujie | FISCO BCOS Core Developer

In order to ensure the normal operation of the system, confirm that the leader is available, and prevent the leader from deliberately doing evil, the blockchain system based on the PBFT consensus algorithm(Such as Algorand) In the absence of a transaction, an empty block is generated。

-In common blockchain networks, the bookkeeper usually continues to block according to the algorithm, in order to ensure the normal operation
of the system, to prevent evil, etc., even if the block does not contain transactions, empty blocks will be consensus confirmation and drop storage.。 +In common blockchain networks, the bookkeeper usually continues to block according to the algorithm, in order to ensure the normal operation of the system, to prevent evil, etc., even if the block does not contain transactions, empty blocks will be consensus confirmation and drop storage。 While consensus on empty blocks has a role to play, falling empty blocks consumes storage space and reduces hard drive utilization(Number of transactions that can be stored)and to some extent affect the efficiency of block-oriented data backtracking and retrieval。 -Therefore, FISCO BCOS is based on the PBFT consensus algorithm, which implements an efficient empty block processing method to ensure that each block participates in the PBFT consensus process without falling into the empty block, which improves the efficiency of disk utilization and ensures the security and robustness of the system.。 +Therefore, FISCO BCOS is based on the PBFT consensus algorithm, which implements an efficient empty block processing method to ensure that each block participates in the PBFT consensus process without falling into the empty block, which improves the efficiency of disk utilization and ensures the security and robustness of the system。 ## noun explanation ### Node Type -- **Leader/Primary**: The consensus node is responsible for packaging transactions into blocks and block consensus. There is only one leader in each round of consensus. To prevent leaders from forging blocks, the leader is switched after each round of PBFT consensus.; +- **Leader/Primary**: The consensus node is responsible for packaging transactions into blocks and block consensus. There is only one leader in each round of consensus. 
To prevent leaders from forging blocks, the leader is switched after each round of PBFT consensus; - **Replica**: Replica node, which is responsible for block consensus. There are multiple Replica nodes in each round of consensus. The process of each Replica node is similar; -- **Observer**: The observer node, which is responsible for obtaining the latest block from the consensus node or the replica node, and after executing and verifying the block execution result, the resulting block is on the chain.。 +- **Observer**: The observer node, which is responsible for obtaining the latest block from the consensus node or the replica node, and after executing and verifying the block execution result, the resulting block is on the chain。 where Leader and Replica are collectively referred to as consensus nodes。 ### View(view) -PBFT consensus algorithm using**The view records the consensus phase of each node, and the same view node maintains the same list of Leader and Replica nodes.**。When a Leader fails, a view switch occurs and a new Leader is selected based on the new view。 +PBFT consensus algorithm using**The view records the consensus phase of each node, and the same view node maintains the same list of Leader and Replica nodes**。When a Leader fails, a view switch occurs and a new Leader is selected based on the new view。 In the FISCO BCOS system, the calculation formula of Leader is as follows: @@ -38,40 +38,40 @@ leader_idx = (view + block_number) % node_num ![](../../../../images/articles/pbft_empty_block_processing/IMG_5292.PNG) -As shown in the preceding figure, node0 is an untrusted current leader. 
If you do not switch the leader after the consensus is empty, the node always broadcasts empty blocks to other nodes, making the leader always node0, causing the system to be unable to process normal transactions.。After an empty block is agreed upon, switch the leader to a trusted node, effectively preventing the leader from doing evil.。 +As shown in the preceding figure, node0 is an untrusted current leader. If you do not switch the leader after the consensus is empty, the node always broadcasts empty blocks to other nodes, making the leader always node0, causing the system to be unable to process normal transactions。After an empty block is agreed upon, switch the leader to a trusted node, effectively preventing the leader from doing evil。 ### Prevent system exceptions caused by switching to no-trade leader ![](../../../../images/articles/pbft_empty_block_processing/IMG_5293.PNG) -As shown in the figure above, node0 is a trusted current leader, but the number of transactions that can be packaged in its trading pool is 0. If you do not switch the leader after the consensus empty block, the node will always broadcast empty blocks to other nodes, and the system leader will always be node0, unable to process normal transactions.。After the consensus is empty, switch the leader, which can be switched to a node with transactions, to ensure the normal operation of the system.。 +As shown in the figure above, node0 is a trusted current leader, but the number of transactions that can be packaged in its trading pool is 0. 
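The leader switching described here is driven by the rotation formula given earlier, `leader_idx = (view + block_number) % node_num`: after an empty-block consensus the view is incremented, so the leader index advances even though the block height is unchanged. A minimal sketch (illustrative only, not the FISCO BCOS source):

```python
def leader_idx(view: int, block_number: int, node_num: int) -> int:
    """Leader selection formula from the text: rotates with view and height."""
    return (view + block_number) % node_num

# 4-node chain at height 100, view 0: node0 is the leader.
assert leader_idx(0, 100, 4) == 0
# After an empty block triggers a view change (view 0 -> 1) at the same
# height, the leader rotates to node1, so an untrusted or transaction-less
# node0 cannot hold the leadership.
assert leader_idx(1, 100, 4) == 1
```

Because both the view and the block height feed into the index, the leader rotates on every committed block as well as on every view change.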
If you do not switch the leader after consensus on an empty block, the node will keep broadcasting empty blocks to the other nodes, the leader will always remain node0, and the system will be unable to process normal transactions. Switching the leader after an empty block is agreed upon makes it possible to switch to a node that has transactions, ensuring normal operation of the system.

## Problems with the empty block of the falling plate

### Waste of storage space

-Some businesses have busy periods of the day, for example, in the middle of the night, there may be a large period of time and no users in the transaction, this time if the continued out of the block, will continue to have free blocks generated.。
+Some businesses have busy and idle periods during the day. For example, in the middle of the night there may be long stretches with no user transactions; if blocks continue to be produced during this time, empty blocks will keep being generated.

-Example: a blockchain system 1s out of a block, 1 day 50% of the time there is no transaction, each empty block size is 1KB, if these empty blocks are down, then a sky block occupies the disk space: 3600s / h.* 24h * 50% * 1KB ≈ 43.2MB, 1 year empty blocks occupy approximately 15.7GB of disk space。Assuming an average transaction size of 1KB, this 15.7GB of disk space can be used to store 15.7GB / 1KB = 15,700 transactions。
+Example: suppose a blockchain system produces one block per second, 50% of the day has no transactions, and each empty block is 1KB. If these empty blocks are written to disk, one day's empty blocks occupy 3600s/h * 24h * 50% * 1KB ≈ 43.2MB of disk space, and one year's empty blocks occupy approximately 15.7GB. Assuming an average transaction size of 1KB, that 15.7GB could instead store about 15.7 million transactions.

![](../../../../images/articles/pbft_empty_block_processing/IMG_5294.PNG)

## FISCO BCOS PBFT Empty Block Processing Scheme

-As shown in the figure below, the FISCO BCOS PBFT consensus algorithm triggers fast view switching through empty blocks to achieve the purpose of switching leaders without falling into empty blocks.。
+As shown in the figure below, the FISCO BCOS PBFT consensus algorithm triggers fast view switching on empty blocks, so the leader is switched without the empty block ever being written to disk.

![](../../../../images/articles/pbft_empty_block_processing/IMG_5295.PNG)

### Core Process

-In conjunction with the above figure, the following describes the main process of the FISCO BCOS PBFT empty block processing algorithm.
+In conjunction with the figure above, the following describes the main process of the FISCO BCOS PBFT empty block processing algorithm.

1. Leader(node0)node at a specified interval(Currently 1 second)Without packaging to transactions, an empty block is constructed based on the highest block, with 0 transactions;
2. Leader encapsulates the empty block in the Prepare package and broadcasts it to all other consensus nodes;
3. After receiving the Prepare packet, other consensus nodes take out the block, and if the block is empty, set the view toView to be switched by the node to the current view plus one, and broadcast the view switch request to each other viewchange _ request, the view in viewchange _ request is toView, that is, the view is increased by the current view;
-4. Consensus node collection view switching package: node collection full n.- f (n is the number of consensus nodes, at least 3*f+1;f is the number of fault-tolerant nodes in the system) After a view switch request from a different node, the view and the node toView value are consistent, the view switch is triggered and the current view is switched to toView.;
+4.
Consensus node collection view switching package: node collection full n-f(n is the number of consensus nodes, at least 3*f+1;f is the number of fault-tolerant nodes in the system) After a view switch request from a different node, the view and the node toView value are consistent, the view switch is triggered and the current view is switched to toView; 5. Due to view switching, the Leader of the next round of consensus switches to the next node(That is, node1)。 ## SUMMARY -In summary, the FISCO BCOS PBFT consensus algorithm triggers fast view switching through empty blocks, and switches the leader to optimize the empty block processing process, which solves the system exception caused by the consensus empty block not switching the leader, and realizes the storage of empty blocks without falling to disk, which improves disk utilization efficiency, accelerates the efficiency of data traceability, and reduces the complexity of data analysis.。 \ No newline at end of file +In summary, the FISCO BCOS PBFT consensus algorithm triggers fast view switching through empty blocks, and switches the leader to optimize the empty block processing process, which solves the system exception caused by the consensus empty block not switching the leader, and realizes the storage of empty blocks without falling to disk, which improves disk utilization efficiency, accelerates the efficiency of data traceability, and reduces the complexity of data analysis。 \ No newline at end of file diff --git a/3.x/en/docs/articles/3_features/32_consensus/rpbft_design_analysis.md b/3.x/en/docs/articles/3_features/32_consensus/rpbft_design_analysis.md index 553987db4..4a29900ec 100644 --- a/3.x/en/docs/articles/3_features/32_consensus/rpbft_design_analysis.md +++ b/3.x/en/docs/articles/3_features/32_consensus/rpbft_design_analysis.md @@ -4,31 +4,31 @@ Author : Chen Yujie | FISCO BCOS Core Developer ## Foreword -The consensus module is the engine of the blockchain system and plays a vital role in 
ensuring the data consistency of each blockchain node.。After FISCO BCOS introduces a pluggable consensus engine, it supports both PBFT and Raft consensus algorithms. Compared with blockchain systems using POW consensus algorithm, transaction confirmation latency is lower and throughput is higher, which can meet the performance requirements of most current alliance chain systems.。 +The consensus module is the engine of the blockchain system and plays a vital role in ensuring the data consistency of each blockchain node。After FISCO BCOS introduces a pluggable consensus engine, it supports both PBFT and Raft consensus algorithms. Compared with blockchain systems using POW consensus algorithm, transaction confirmation latency is lower and throughput is higher, which can meet the performance requirements of most current alliance chain systems。 -The PBFT consensus algorithm is more naturally applicable to blockchain systems because of its tolerance of Byzantine errors.。The PBFT consensus algorithm also has the problem of low scalability.。The FISCO BCOS team has been working on new consensus algorithms since 2019 to simultaneously guarantee the performance and scalability of blockchain systems.。The RPBFT consensus algorithm released in FISCO BCOS version 2.3 is one of the research results.。This paper will introduce the design purpose and technical implementation of RPBFT consensus algorithm in detail.。 +The PBFT consensus algorithm is more naturally applicable to blockchain systems because of its tolerance of Byzantine errors。The PBFT consensus algorithm also has the problem of low scalability。The FISCO BCOS team has been working on new consensus algorithms since 2019 to simultaneously guarantee the performance and scalability of blockchain systems。The RPBFT consensus algorithm released in FISCO BCOS version 2.3 is one of the research results。This paper will introduce the design purpose and technical implementation of RPBFT consensus algorithm in detail。 ## PBFT Consensus 
Algorithm Challenge

-Before introducing the RPBFT consensus algorithm, let's take a look at the challenges of the PBFT consensus algorithm and the corresponding solutions from academia.。PBFT consensus algorithm originated in the last century。In 1999, Miguel Castro(Castro)and Barbara Liskov(Liskov)The PBFT consensus algorithm is proposed to reduce the complexity of the BFT algorithm from the exponential level to the polynomial level, so that the PBFT consensus algorithm can be applied to the actual system.。Unlike the POW consensus algorithm, PBFT guarantees the ultimate consistency of distributed systems based on the principle of distributed consistency.。Due to the abandonment of computing power, the PBFT consensus algorithm has higher performance and lower transaction confirmation latency, and the anti-evil mechanism based on cryptography technology is naturally suitable for alliance blockchain systems.。The PBFT consensus algorithm also has"soft rib", a glimpse of its three-stage consensus process。
+Before introducing the RPBFT consensus algorithm, let's first look at the challenges facing the PBFT consensus algorithm and the solutions proposed by academia. The PBFT consensus algorithm dates back to the last century: in 1999, Miguel Castro and Barbara Liskov proposed PBFT, reducing the complexity of BFT algorithms from exponential to polynomial and making BFT consensus applicable to real systems. Unlike POW consensus algorithms, PBFT guarantees the final consistency of a distributed system based on distributed-consistency principles. Because it abandons proof-of-work computation, the PBFT consensus algorithm offers higher performance and lower transaction-confirmation latency, and its cryptography-based anti-malice mechanism is naturally suited to consortium blockchain systems. However, PBFT also has a weak spot, which can be glimpsed from its three-phase consensus process.

![](../../../../images/articles/rpbft_design_analysis/IMG_5296.PNG)

-As can be seen from the above figure, in the PBFT consensus process, nodes need to broadcast consensus packets to each other, and the network complexity is proportional to the square of the number of nodes, which severely limits the scalability of PBFT.。
+As the figure above shows, in the PBFT consensus process nodes must broadcast consensus packets to each other, so the network complexity is proportional to the square of the number of nodes, which severely limits the scalability of PBFT.

-The following chart, compiled from IBM researcher Marko research, reflects the relationship between node size and transaction latency in blockchain systems using different consensus algorithms.
+The following chart, compiled from research by IBM researcher Marko, reflects the relationship between node scale and transaction latency in blockchain systems using different consensus algorithms.

![](../../../../images/articles/rpbft_design_analysis/IMG_5297.PNG)

-It can be seen that the BFT class consensus algorithm has high performance, but it supports a maximum of 1000 nodes.。In 2019, HotStuff will become a consensus algorithm for blockchain platforms to study, and compared to PBFT, HotStuff has many advantages such as simple algorithm, linear relationship between network complexity and node size, and so on.。The following figure shows the core process of HotStuff:
+It can be seen that BFT-class consensus algorithms offer high performance but support at most about 1000 nodes. In 2019, HotStuff became a consensus algorithm widely studied by blockchain platforms; compared with PBFT, HotStuff has advantages such as algorithmic simplicity and network complexity that grows linearly with node scale. The following figure shows the core process of HotStuff:

![](../../../../images/articles/rpbft_design_analysis/IMG_5298.PNG)

-Because the complexity of HotStuff is still proportional to the
size of the node, the scalability of the consensus algorithm cannot be fundamentally solved, and each stage of HotStuff relies on the Leader to collect and broadcast message packets, the Leader will become the bottleneck of each round of consensus.。Based on the above research, the FISCO BCOS team has implemented the PBFT packet consensus algorithm and the HotStuff consensus algorithm.。However, as the node size increases, the performance and throughput of these consensus algorithms gradually decline。Therefore, we began to explore a consensus mechanism that will not cause a rapid linear decline in the performance of the blockchain system due to the increase in the number of nodes. The RPBFT consensus algorithm came into being in this case。 +Because the complexity of HotStuff is still proportional to the size of the node, the scalability of the consensus algorithm cannot be fundamentally solved, and each stage of HotStuff relies on the Leader to collect and broadcast message packets, the Leader will become the bottleneck of each round of consensus。Based on the above research, the FISCO BCOS team has implemented the PBFT packet consensus algorithm and the HotStuff consensus algorithm。However, as the node size increases, the performance and throughput of these consensus algorithms gradually decline。Therefore, we began to explore a consensus mechanism that will not cause a rapid linear decline in the performance of the blockchain system due to the increase in the number of nodes. 
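The scalability limit discussed here can be made concrete with a rough message count: in PBFT's prepare and commit phases every node broadcasts to every other node, giving O(n²) messages per round, while a leader-collected linear protocol such as HotStuff exchanges O(n) messages per phase. A back-of-the-envelope sketch (the counts are simplified, not exact protocol traffic):

```python
def pbft_round_msgs(n: int) -> int:
    # prepare + commit phases: each of the n nodes broadcasts to the n-1 others
    return 2 * n * (n - 1)

def linear_round_msgs(n: int, phases: int = 3) -> int:
    # leader-based linear protocol: per phase, leader -> replicas and
    # replicas -> leader, i.e. 2 * (n - 1) messages
    return phases * 2 * (n - 1)

# Growing from 10 to 100 nodes multiplies PBFT's per-round traffic by 110x,
# but the linear protocol's by only 11x.
assert pbft_round_msgs(100) / pbft_round_msgs(10) == 110.0
assert linear_round_msgs(100) / linear_round_msgs(10) == 11.0
```

This quadratic-versus-linear gap is why decoupling network complexity from node scale became the design goal described next.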
The RPBFT consensus algorithm came into being in this context.

## The core idea of RPBFT consensus algorithm

-The goal of the RPBFT consensus algorithm is to decouple the network complexity of the consensus algorithm from the consensus node size and improve the scalability of the blockchain system while ensuring the performance and security of the blockchain system.。In order to achieve this goal, the FISCO BCOS team refers to the DPOS idea and randomly selects some nodes as "consensus member nodes" to participate in each round of PBFT consensus under the large node scale.。In addition, in order to ensure the security of the system and prevent the consensus member nodes from joining forces, the RPBFT algorithm periodically replaces the consensus member nodes, as shown in the following figure
+The goal of the RPBFT consensus algorithm is to decouple the network complexity of the consensus algorithm from the consensus node scale, improving the scalability of the blockchain system while ensuring its performance and security. To achieve this goal, the FISCO BCOS team drew on the DPOS idea: under a large node scale, some nodes are randomly selected as "consensus member nodes" to participate in each round of PBFT consensus. In addition, to ensure system security and prevent the consensus member nodes from colluding, the RPBFT algorithm periodically replaces the consensus member nodes, as shown in the following figure

![](../../../../images/articles/rpbft_design_analysis/IMG_5299.PNG)

@@ -36,8 +36,8 @@ The goal of the RPBFT consensus algorithm is to decouple the network complexity

The RPBFT algorithm mainly includes two system parameters:

-- epoch _ sealer _ num: The number of nodes participating in the consensus process in each round of consensus. You can dynamically configure this parameter by sending transactions on the console.。
-- epoch_block_num: The consensus node replacement cycle. To prevent the selected consensus nodes from jointly doing evil, RPBFT replaces several consensus member nodes for each epoch _ block _ num block. This parameter can be dynamically configured by issuing transactions on the console.。
+- epoch_sealer_num: the number of nodes participating in the consensus process in each round of consensus. This parameter can be dynamically configured by sending transactions on the console.
+- epoch_block_num: the consensus node replacement cycle. To prevent the selected consensus nodes from colluding, RPBFT replaces several consensus member nodes every epoch_block_num blocks. This parameter can also be dynamically configured by sending transactions on the console.

These two configuration items are recorded in the system configuration table. The configuration table mainly includes three fields: configuration keyword, configuration corresponding value, and effective block height. The effective block height records the latest effective block height configured with the latest value. For example, in a 100-block transaction, set epoch _ sealer _ num and epoch _ block _ num to 4 and 10000 respectively. The system configuration table is as follows:

@@ -57,18 +57,18 @@ Sort the NodeIDs of all consensus nodes, as shown in the following figure. The N

### chain initialization

-During chain initialization, RPBFT needs to select epoch _ sealer _ num consensus nodes to participate in consensus among consensus members. Currently, the initial implementation is to select the index from 0 to epoch _ sealer _ num.-1 node participates in pre-epoch _ block _ num block consensus。
+During chain initialization, RPBFT needs to select epoch_sealer_num consensus nodes from the consensus members to participate in consensus. The current implementation initially selects the nodes with indexes 0 to epoch_sealer_num-1 to participate in consensus for the first epoch_block_num blocks.

-### The consensus member node runs the PBFT consensus algorithm.
+### The consensus member node runs the PBFT consensus algorithm The selected epoch _ sealer _ num consensus committee nodes run the PBFT consensus algorithm to verify node synchronization and verify the blocks generated by the consensus of these verification nodes. The verification steps include: -- Check the block signature list: each block must contain at least two-thirds of the signatures of the consensus member nodes -- Check the block execution result: the local execution result must be consistent with the block execution result generated by the consensus committee. +- Checked list of block signatures: each block must contain at least the signatures of two-thirds of the consensus member nodes +- Check the block execution result: the local execution result must be consistent with the block execution result generated by the consensus committee ### Dynamic replacement consensus member node list -To ensure system security, the RPBFT algorithm removes several nodes from the consensus member list and adds several validation nodes after each epoch _ block _ num block. 
+To ensure system security, the RPBFT algorithm removes several nodes from the consensus member list and adds several validation nodes after each epoch _ block _ num block ![](../../../../images/articles/rpbft_design_analysis/IMG_5301.PNG) @@ -76,11 +76,11 @@ In the current implementation of the RPBFT algorithm, the consensus committee li ## RPBFT Network Optimization -Considering that Prepare packets are large and account for a large portion of the network overhead, in order to further improve the scalability of the RPBFT consensus algorithm, we introduced Prepare packet broadcast optimization in FISCO BCOS 2.3.。Allocate the outgoing bandwidth generated by the leader's broadcast of the Prepare packet to its subordinate child nodes, that is, after the leader generates the Prepare packet, it propagates the packet to other nodes along the tree topology, as shown in the following figure: +Considering that Prepare packets are large and account for a large portion of the network overhead, in order to further improve the scalability of the RPBFT consensus algorithm, we introduced Prepare packet broadcast optimization in FISCO BCOS 2.3。Allocate the outgoing bandwidth generated by the leader's broadcast of the Prepare packet to its subordinate child nodes, that is, after the leader generates the Prepare packet, it propagates the packet to other nodes along the tree topology, as shown in the following figure: ![](../../../../images/articles/rpbft_design_analysis/IMG_5302.PNG) -To ensure that the Prepare packet can still reach each node when the tree broadcast is turned on in the event of node disconnection, RPBFT introduces a fault tolerance mechanism based on state packets, as shown in the following figure. 
+To ensure that the Prepare packet can still reach each node when the tree broadcast is turned on in the event of node disconnection, RPBFT introduces a fault tolerance mechanism based on state packets, as shown in the following figure ![](../../../../images/articles/rpbft_design_analysis/IMG_5303.PNG) @@ -88,7 +88,7 @@ The main processes include: 1. After receiving Prepare, node A randomly selects 33% of nodes to broadcast the Prepare packet status, which is recorded as prepareStatus, including{blockNumber, blockHash, view, idx}。 -2. After receiving the prepareStatus randomly broadcast by node A, node B determines whether the status of the Prepare package of node A is newer than the localPrepare status of the current Prepare package of node B.。The main judgment basis includes: +2. After receiving the prepareStatus randomly broadcast by node A, node B determines whether the status of the Prepare package of node A is newer than the localPrepare status of the current Prepare package of node B。The main judgment basis includes: (1) Is prepareStatus.blockNumber greater than the current block height @@ -96,11 +96,11 @@ The main processes include: (3) Is prepareStatus.view greater than localPrepare.view when prepareStatus.blockNumber equals localPrepare.blockNumber - Any of the above conditions holds, indicating that the Prepare package state of node A is newer than the state of node B.。 + Any of the above conditions holds, indicating that the Prepare package state of node A is newer than the state of node B。 3. If the state of node B lags behind that of node A and node B is disconnected from its parent node, node B sends a prepareRequest request to node A, requesting the corresponding Prepare package。 -4. If the state of node B is behind node A, but node B is connected to its parent node, if node B waits up to 100ms(Can be matched)node B sends a prepareRequest request to node A, requesting the corresponding Prepare package.。 +4. 
If the state of node B is behind node A, but node B is connected to its parent node, if node B waits up to 100ms(Can be matched)node B sends a prepareRequest request to node A, requesting the corresponding Prepare package。 5. After receiving the prepareRequest request from node A, node B replies to the corresponding Prepare message package。 @@ -110,8 +110,8 @@ After the block is released, in order to reduce the impact of the network bandwi ## Prospect of RPBFT Algorithm Optimization -FISCO BCOS 2.3 initially implements the RPBFT consensus algorithm, eliminating the impact of node size on the complexity of the consensus algorithm。However, the current implementation of the RPBFT consensus algorithm, there is still room for improvement, such as: consensus committee node selection replacement rules are relatively simple.。Future plans to introduce VRF verifiable random number algorithm to achieve private, random, non-interactive consensus committee node selection method, welcome to experience and feedback。 +FISCO BCOS 2.3 initially implements the RPBFT consensus algorithm, eliminating the impact of node size on the complexity of the consensus algorithm。However, the current implementation of the RPBFT consensus algorithm, there is still room for improvement, such as: consensus committee node selection replacement rules are relatively simple。Future plans to introduce VRF verifiable random number algorithm to achieve private, random, non-interactive consensus committee node selection method, welcome to experience and feedback。 ## Summary -This paper describes the challenges of BFT-like algorithms and the initial results of the FISCO BCOS team's exploration in the field of consensus algorithms.。Distributed system consensus is a large and complex field. The RPBFT algorithm released by FISCO BCOS 2.3 only decouples the influence of node size on network complexity, which is the first step to realize high-security and scalable consensus algorithm. 
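The epoch-based committee replacement at the heart of RPBFT (NodeIDs sorted, a committee of epoch_sealer_num members, rotation every epoch_block_num blocks) can be sketched roughly as follows. The text does not fully specify the replacement rule, so the shift-by-one-per-epoch rotation below is an assumption for illustration only:

```python
def select_committee(sorted_node_ids, epoch_sealer_num, epoch_block_num, block_number):
    """Pick the consensus committee for the epoch containing block_number.

    Assumption: the committee window shifts by one position per epoch;
    the real FISCO BCOS replacement rule may differ.
    """
    epoch = block_number // epoch_block_num
    n = len(sorted_node_ids)
    start = epoch % n  # hypothetical rotation offset
    return [sorted_node_ids[(start + i) % n] for i in range(epoch_sealer_num)]

nodes = ["node0", "node1", "node2", "node3", "node4"]
# Chain initialization (epoch 0): indexes 0..epoch_sealer_num-1, as in the text.
assert select_committee(nodes, 4, 10000, 0) == ["node0", "node1", "node2", "node3"]
# Next epoch: one committee member is replaced by a former validation node.
assert select_committee(nodes, 4, 10000, 10000) == ["node1", "node2", "node3", "node4"]
```

Under any such rotation rule the network complexity of each consensus round depends only on epoch_sealer_num, not on the total node count.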
In the future, VRF algorithm will be introduced to ensure the security of consensus committee node selection.。 \ No newline at end of file +This paper describes the challenges of BFT-like algorithms and the initial results of the FISCO BCOS team's exploration in the field of consensus algorithms。Distributed system consensus is a large and complex field. The RPBFT algorithm released by FISCO BCOS 2.3 only decouples the influence of node size on network complexity, which is the first step to realize high-security and scalable consensus algorithm. In the future, VRF algorithm will be introduced to ensure the security of consensus committee node selection。 \ No newline at end of file diff --git a/3.x/en/docs/articles/3_features/33_storage/crud_guidance.md b/3.x/en/docs/articles/3_features/33_storage/crud_guidance.md index a122dd5c6..f149fdf96 100644 --- a/3.x/en/docs/articles/3_features/33_storage/crud_guidance.md +++ b/3.x/en/docs/articles/3_features/33_storage/crud_guidance.md @@ -2,33 +2,33 @@ Author : Liao Feiqiang | FISCO BCOS Core Developer -This article will introduce the CRUD function of FISCO BCOS to help developers develop blockchain applications more efficiently and easily.。 +This article will introduce the CRUD function of FISCO BCOS to help developers develop blockchain applications more efficiently and easily。 ## Why Design CRUD Features? 
-In FISCO BCOS 1.0, nodes use MPT data structure to store data locally through LevelDB, which is limited by the size of the local disk, and when the volume of business increases, the data will expand dramatically, and data migration is also very complex, bringing greater cost and maintenance difficulty to data storage.。 +In FISCO BCOS 1.0, nodes use the MPT data structure to store data locally through LevelDB. This is limited by the size of the local disk; as business volume grows, the data expands dramatically and data migration becomes very complex, bringing greater cost and maintenance difficulty to data storage。 -In order to break through the capacity and performance bottlenecks, FISCO BCOS 2.0 has been redesigned for the underlying storage to achieve distributed storage, bringing capacity and performance improvements.。FISCO BCOS 2.0 designs a CRUD thanks to the use of a library table structure for distributed storage(Create Add, Read, Update Update, and Delete Delete)Let the interface be more natural.。CRUD's library-table-oriented development approach is in line with business development habits and also provides another option for business development.(Previously, only Solidity contracts could be used.), thus making blockchain application development more convenient。 +In order to break through the capacity and performance bottlenecks, FISCO BCOS 2.0 redesigned the underlying storage to achieve distributed storage, bringing improvements in both capacity and performance。Thanks to the table-based structure of distributed storage, FISCO BCOS 2.0 provides a natural CRUD (Create, Read, Update, Delete) interface。CRUD's table-oriented development approach is in line with common business development habits and provides another option for business development (previously, only Solidity contracts could be used), thus making blockchain application development more convenient。 ## **What are the
advantages of CRUD?** -The core design idea of CRUD is to provide a blockchain application development specification for SQL programming.。The benefits are obvious, mainly in two liters and two drops.。 +The core design idea of CRUD is to provide a SQL-style programming specification for blockchain application development。The benefits are obvious, and can be summarized as two improvements and two reductions。 ### Improve the efficiency of developing blockchain applications -CRUD is similar to the traditional business SQL programming development model, greatly reducing the difficulty of contract development.。Developers use contracts as stored procedures in the database to convert the read and write operations of blockchain data into table-oriented read and write operations, which is simple and easy to use, greatly improving the efficiency of developing blockchain applications.。 +CRUD is similar to the traditional SQL programming development model, greatly reducing the difficulty of contract development。Developers use contracts like stored procedures in a database, converting reads and writes of blockchain data into table-oriented operations, which is simple and easy to use and greatly improves the efficiency of developing blockchain applications。 ### Improve the performance of blockchain applications -The underlying logic of CRUD is implemented based on pre-compiled contracts, and its data storage adopts distributed storage. Transactions generated by reading and writing data are no longer executed by slow EVM virtual machines, but by high-speed pre-compiled contract engines, thus improving the efficiency of contract data reading and writing, making blockchain applications developed based on CRUD have higher performance.。 +The underlying logic of CRUD is implemented based on pre-compiled contracts, and its data storage adopts distributed storage.
Transactions that read and write data are no longer executed by the slow EVM virtual machine but by the high-speed pre-compiled contract engine, which improves the efficiency of reading and writing contract data and gives CRUD-based blockchain applications higher performance。 ### Reduce contract maintenance and upgrade complexity -The logic of CRUD contracts is separated from storage, so that developers only need to care about the core business logic, and data is no longer bound to specific contracts, making it easier to upgrade contracts.。When the contract logic needs to be changed, a new contract can be deployed. The new contract can read and write the tables and data created by the old contract.。 +The logic of CRUD contracts is separated from storage, so developers only need to care about the core business logic. Data is no longer bound to specific contracts, making contracts easier to upgrade。When the contract logic needs to be changed, a new contract can be deployed, and the new contract can read and write the tables and data created by the old contract。 ### Reduce migration costs for SQL business -A large part of traditional business applications manage data through database tables, and CRUD contracts can smoothly migrate business applications designed for SQL to the blockchain, reducing business migration costs.。 +Many traditional business applications manage data through database tables, and CRUD contracts allow business applications designed around SQL to be migrated smoothly to the blockchain, reducing business migration costs。 ## How to use CRUD?
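Before walking through each interface in detail, the overall shape of a CRUD contract can be sketched as follows. This is a minimal, illustrative example only, assuming the Table.sol interfaces described in the sections below; the contract name AssetDemo and the table t_asset are hypothetical, not part of FISCO BCOS itself:

```solidity
pragma solidity ^0.4.24;

import "./Table.sol";

// Illustrative sketch: AssetDemo and t_asset are hypothetical names.
contract AssetDemo {
    // Create the user table t_asset: primary key "account", one other field "asset_value"
    function create() public returns (int) {
        TableFactory tf = TableFactory(0x1001); // TableFactory lives at the fixed address 0x1001
        return tf.createTable("t_asset", "account", "asset_value");
    }

    // Insert a record; returns the number of affected records (1 on success)
    function register(string account, int asset_value) public returns (int) {
        TableFactory tf = TableFactory(0x1001);
        Table table = tf.openTable("t_asset"); // open the table inside the method each time
        Entry entry = table.newEntry();
        entry.set("account", account);
        entry.set("asset_value", asset_value);
        return table.insert(account, entry); // the primary key value is passed in
    }

    // Query a record by primary key value; returns -1 if not found
    function query(string account) public constant returns (int) {
        TableFactory tf = TableFactory(0x1001);
        Table table = tf.openTable("t_asset");
        Condition condition = table.newCondition();
        condition.EQ("account", account);
        Entries entries = table.select(account, condition);
        if (entries.size() == 0) {
            return -1;
        }
        return entries.get(0).getInt("asset_value");
    }
}
```

Note that each method opens the table again instead of caching the table object, and the primary key value (not the primary key name) is passed to insert and select; the best-practice clauses later in this article explain both points.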
@@ -36,9 +36,9 @@ Introducing the two-improvement, two-reduction advantages of CRUD, I'm sure you're very con ### CRUD Contracts -The medium for developers to interact with the blockchain is mainly smart contracts, and the CRUD function will naturally be integrated into smart contracts.。The integration method is very light, and the Solidity contract only needs to introduce the Table.sol abstract interface contract file provided by FISCO BCOS.。Table.sol contains a smart contract interface dedicated to distributed storage. The interface is implemented on the blockchain node. You can create tables and add, delete, and query tables.。Contracts that introduce this abstract interface are called CRUD contracts to distinguish them from Solidity contracts that do not reference the interface.。 +The medium through which developers interact with the blockchain is mainly smart contracts, so the CRUD function is naturally integrated into smart contracts。The integration is very lightweight: a Solidity contract only needs to import the Table.sol abstract interface contract file provided by FISCO BCOS。Table.sol contains a smart contract interface dedicated to distributed storage. The interface is implemented on the blockchain node.
With it, you can create tables and add, delete, modify, and query records in them。Contracts that introduce this abstract interface are called CRUD contracts, to distinguish them from Solidity contracts that do not reference the interface。 -The Table.sol abstract interface contract file includes the following abstract contract interfaces, which are described separately below.。 +The Table.sol abstract interface contract file includes the following abstract contract interfaces, which are described separately below。 #### TableFactory Contract @@ -46,8 +46,8 @@ Used to create and open a table, its fixed contract address is 0x1001, and the i | Interface| Function| Parameters| Return value| |-------------------------------------|--------|------------------------------------------------------------------------------|-----------------------------------------------| -| createTable(string ,string, string) | Create Table| Table name, primary key name (currently only a single primary key is supported), and other field names of the table (separated by commas)| The error code (int256) is returned. For details, see the following table.| -| opentTable(string) | Open Table| Table Name| Returns the address of the contract table. If the table name does not exist, an empty address is returned.| +| createTable(string, string, string) | Create Table| Table name, primary key name (currently only a single primary key is supported), and other field names of the table (separated by commas)| The error code (int256) is returned. For details, see the following table| +| openTable(string) | Open Table| Table Name| Returns the address of the contract table. If the table name does not exist, an empty address is returned| **The createTable interface returns:** @@ -135,7 +135,7 @@ In the TableTest.sol contract file, the core code for creating a table is as fol ```solidity // Create a TableFactory object whose fixed address on the blockchain is 0x1001 TableFactory tf = TableFactory(0x1001); -/ / Create the t _ test table.
The primary key of the table is name, and the other fields are item _ id and item _ name. +// Create the t_test table. The primary key of the table is name, and the other fields are item_id and item_name int count = tf.createTable("t_test", "name","item_id,item_name"); // Check whether the creation is successful if(count >= 0) @@ -146,8 +146,8 @@ if(count >= 0) **Note:** -- CreateTable execution principle: After createTable is successfully executed, it will be displayed in the blockchain system table _ sys _ tables _(The blockchain startup automatically creates the table, specifically recording the information of all tables in the blockchain)The table information for t _ test is inserted into the table name, primary key name, and other field names, but the table is not formally created。When adding, deleting, and modifying the t _ test table, it first determines whether the t _ test table exists. If it does not exist, it queries the _ sys _ tables _ table to obtain information about the t _ test table. If the query contains information about the t _ test table, it creates the table. Otherwise, the execution fails.。If the t _ test table exists, add, delete, modify, and query operations continue。 -- This step is optional: for example, if the new contract only reads and writes the table created by the old contract, you do not need to create the table.。 +- createTable execution principle: After createTable executes successfully, the information of the t_test table (its table name, primary key name, and other field names) is inserted into the blockchain system table _sys_tables_ (created automatically at blockchain startup to record the information of all tables on the chain), but the table itself is not yet formally created。When adding, deleting, or modifying records in the t_test table, the node first determines whether the t_test table exists.
If it does not exist, the node queries the _sys_tables_ table for information about the t_test table; if the information is found, the table is created, otherwise execution fails。If the t_test table exists, the add, delete, modify, and query operations proceed。 +- This step is optional: for example, if the new contract only reads and writes the table created by the old contract, you do not need to create the table。 #### Step 3: CRUD the table @@ -169,7 +169,7 @@ entry.set("name", name); entry.set("item_id",item_id); entry.set("item_name",item_name); // Call the insert method of Table to insert records -/ / Return the number of records affected by the insertion. If the value is 1, the insertion is successful. Otherwise, the insertion fails. +// Return the number of records affected by the insertion. If the value is 1, the insertion is successful. Otherwise, the insertion fails int count = table.insert(name, entry); ``` @@ -193,12 +193,12 @@ bytes32[] memory user_name_bytes_list = new bytes32[](uint256(size)); int[] memory item_id_list = new int[](uint256(size)); bytes32[] memory item_name_bytes_list = new bytes32[](uint256(size)); // Traverse the record collection -/ / Store the values of the three fields of the record into three arrays, which is convenient to return the data of the query. +// Store the values of the three fields of the record into three arrays, which is convenient for returning the query results for(int i = 0; i < size; ++i) { // Get the records in the record collection based on the index Entry entry = entries.get(i); // Query the field value based on the field name of the record -/ / Note that the types of field values are different and the corresponding get method needs to be selected.
+// Note that the types of field values differ, so the corresponding get method needs to be selected user_name_bytes_list[uint256(i)] = entry.getBytes32("name"); item_id_list[uint256(i)] = entry.getInt("item_id"); item_name_bytes_list[uint256(i)] = entry.getBytes32("item_name"); @@ -222,7 +222,7 @@ Condition condition = table.newCondition(); condition.EQ("name", name); condition.EQ("item_id", item_id); // Call the update method of Table to update the record -/ / The number of records affected by the update. If the value is greater than 0, the update succeeds. Otherwise, the update fails. +// The number of records affected by the update. If the value is greater than 0, the update succeeds. Otherwise, the update fails int count = table.update(name, entry, condition); ``` @@ -240,15 +240,15 @@ Condition condition = table.newCondition(); condition.EQ("name", name); condition.EQ("item_id", item_id); // Call the remove method of Table to delete records -/ / Returns the number of records affected by deletion. If the value is greater than 0, the deletion is successful.
Otherwise, the deletion fails int count = table.remove(name, condition); ``` ### SDK CRUD Service Interface -Through the CRUD contract, we can see that as long as the contract method involves data read and write operations, first open the corresponding table, and then call the read and write related interfaces to read and write blockchain data.。At the same time, we note that the development of CRUD contracts is still inseparable from writing contracts, compiling contracts, deploying contracts, and finally calling contracts to implement related functions.。So is there a more convenient, more concise way?For example, developers can read and write data on the blockchain without writing contracts or deploying contracts.。The answer is obvious, we have achieved such extreme ease of use requirements。 +From the CRUD contract we can see that whenever a contract method involves data read or write operations, it first opens the corresponding table and then calls the relevant read/write interfaces to access blockchain data。At the same time, developing a CRUD contract is still inseparable from writing, compiling, and deploying the contract, and finally calling it to implement the related functions。Is there a more convenient and concise way? For example, could developers read and write data on the blockchain without writing or deploying any contract? The answer is yes: we have met this extreme ease-of-use requirement。 -FISCO BCOS SDK provides CRUD Service data link ports. These interfaces are implemented by calling a pre-compiled CRUD contract built into the blockchain, which is responsible for adding, deleting, modifying and checking user tables.。The Java SDKCRUD service is implemented in the org.fisco.bcos.web3j.precompile.crud.CRUDService class (the Python SDK is similar to the Nodejs SDK). Its interface is as follows: +The FISCO BCOS SDK provides CRUD Service interfaces.
These interfaces are implemented by calling a pre-compiled CRUD contract built into the blockchain, which is responsible for adding, deleting, modifying and querying user tables。The Java SDK CRUD service is implemented in the org.fisco.bcos.web3j.precompile.crud.CRUDService class (the Python SDK and Node.js SDK are similar). Its interface is as follows: | Interface| Function| Parameters| Return value| |--------------------------------------------------------|--------------|----------------------------------------------------------------------------------------------|--------------------------| @@ -259,45 +259,45 @@ FISCO BCOS SDK provides CRUD Service data link ports. These interfaces are imple | remove(Table table, Condition condition) | Delete Data| Table object, Condition object| error code| | desc(String tableName) | Query table information| Table Name| KeyField and valueField for tables| -The above interfaces cover the creation, viewing, addition, deletion, modification and query of tables.。Users only need to call the SDK interface to complete the relevant operations, [see the following address for specific examples](https://github.com/FISCO-BCOS/web3sdk/blob/master/src/integration-test/java/org/fisco/bcos/precompile/CRUDServiceTest.java) [gitee address view](https://gitee.com/FISCO-BCOS/web3sdk/blob/master/src/integration-test/java/org/fisco/bcos/precompile/CRUDServiceTest.java)。 +The above interfaces cover creating and viewing tables as well as adding, deleting, modifying, and querying data。Users only need to call the SDK interfaces to complete these operations; [specific examples are available on GitHub](https://github.com/FISCO-BCOS/web3sdk/blob/master/src/integration-test/java/org/fisco/bcos/precompile/CRUDServiceTest.java) [or on Gitee](https://gitee.com/FISCO-BCOS/web3sdk/blob/master/src/integration-test/java/org/fisco/bcos/precompile/CRUDServiceTest.java)。 -The call to the write interface will generate the equivalent transaction to the call to
the CRUD contract interface, which will not be stored until the consensus node consensus is consistent.。It is worth noting that using the CRUD Service interface, the FISCO BCOS console implements easy-to-use sql statement commands for each interface, [Welcome to the following address experience](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/console/console.html)。 +Calling a write interface generates a transaction equivalent to calling the corresponding CRUD contract interface, and the data is not stored until the consensus nodes reach agreement。It is worth noting that, on top of the CRUD Service interfaces, the FISCO BCOS console implements easy-to-use SQL-style commands for each interface; [you are welcome to try them here](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/console/console.html)。 ### Comparison of Two Kinds of CRUD Usage Users may have a question: since the CRUD Service interfaces are so simple and easy to use, can we rely on this group of interfaces alone to complete blockchain business? In fact, we cannot. Compared with the CRUD contract, their limitations lie mainly in two points: -- Limited data access: The CRUDService interface is very specific and is essentially a set of read and write interfaces for blockchain user table data。So for non-user table data, this set of interfaces is powerless。For example, the state variable of the Solidity contract is not stored in a user-created table, and the data for the state variable cannot be queried through this set of interfaces.。However, the CRUD contract is essentially a Solidity contract, except that an additional Table.sol contract file is introduced to give it CRUD functionality.。Solidity state data and user table data can be manipulated through CRUD contracts。 - - Limited business processing capacity: Many blockchain businesses not only use blockchain data, but also need to design relevant contract logic to submit data to the blockchain only if the
relevant conditions and verifications are met.。 +- Limited data access range: The CRUD Service interfaces are essentially a set of read and write interfaces for blockchain user table data。So for non-user-table data, this set of interfaces is powerless。For example, the state variables of a Solidity contract are not stored in user-created tables, so their data cannot be queried through these interfaces。However, the CRUD contract is essentially a Solidity contract, except that an additional Table.sol contract file is introduced to give it CRUD functionality。Both Solidity state data and user table data can be manipulated through CRUD contracts。 +- Limited business processing capacity: Many blockchain businesses not only use blockchain data, but also need to design relevant contract logic so that data is submitted to the blockchain only when the relevant conditions and verifications are met。 ## What are the effective CRUD best practices? -Although the CRUD function is simple and easy to use, it also requires a clear understanding of how to use CRUD efficiently.。 +Although the CRUD function is simple and easy to use, using it efficiently requires a clear understanding of the following practices。 #### Clause 1: In most cases, please choose CRUD contract to develop blockchain applications -In the development of blockchain applications, if it is determined that the data is only on-chain data and the data is only stored in user tables and does
not rely on contracts to execute relevant business logic, then consider using the CRUD Service interface。Otherwise, CRUD contracts are recommended。In addition, in order to facilitate development and debugging, you can always use the CRUD Service read interfaces (select and desc) or the corresponding console commands to query data。 #### Clause 2: Contents of Table.sol file cannot be modified -The Table.sol file defines abstract interfaces related to CRUD functions, and each interface blockchain has a corresponding concrete implementation.。If the user modifies the file, it will cause problems such as transaction execution failure or abnormal function.。 +The Table.sol file defines abstract interfaces related to CRUD functions, and each interface has a corresponding concrete implementation on the blockchain。If the user modifies the file, it will cause problems such as transaction execution failures or abnormal behavior。 #### Clause 3: Primary key of CRUD user table is not unique -Users are accustomed to having unique primary key fields in relational databases, so the primary key in the default CRUD user table is also unique, which is not。When inserting multiple records, you can specify the same primary key。The primary key is not unique because the CRUD table stores the mapping from the primary key to the corresponding Entries, and then adds, deletes, and checks based on the primary key.。Therefore, the CRUD interface is called(Insert, update, query, and delete records)The primary key value needs to be passed in(Not a primary key name)。In addition, the primary key value is too long, which will affect the efficiency of reading and writing.
It is recommended not to set the primary key value too long。 +Users are accustomed to unique primary key fields in relational databases and may assume that the primary key of a CRUD user table is also unique, but it is not: when inserting multiple records, you can specify the same primary key。The primary key is not unique because a CRUD table stores a mapping from each primary key to the corresponding Entries, and adds, deletes, modifies, and queries records based on the primary key。Therefore, when calling the CRUD interfaces (to insert, update, query, or delete records), the primary key value (not the primary key name) must be passed in。In addition, an overly long primary key value will reduce read/write efficiency, so it is recommended not to set the primary key value too long。 #### Article 4: When a CRUD contract's table has many fields, use an array or struct to encapsulate the field parameters -Due to the limitations of the Solidity contract language, the number of local variables in a contract method must not exceed 16, otherwise there will be"Stacktoo deep, try removing local variables"Compilation Error。Therefore, for the case of more table fields, you can consider using arrays or struct encapsulation for multiple field parameters to reduce the number of local variables and make them meet the compilation conditions.。 +Due to a limitation of the Solidity language, a contract method may use at most 16 local variables; otherwise the compiler reports the error "Stack too deep, try removing local variables"。Therefore, when a table has many fields, consider encapsulating multiple field parameters in arrays or structs to reduce the number of local variables so that the contract compiles。 #### Clause 5: When using a CRUD contract, if table operations are involved in the method, the table needs to be opened first -TableFactory abstract contract has openTable(string)method, which returns the open
table object。We may set the table object as a global variable of the contract for direct use the next time we operate the table.。Note that this is an error operation, because each time the table is opened, the table object it gets is registered at a temporary address, which is no longer valid after the next block, so the table object cannot be set to be used as a global variable, but the table needs to be opened before the method needs to manipulate the table.。In addition, because TableFactory is registered in the blockchain with the fixed address 0x1001, you can set the TableFactory object as a global variable without having to create the object every time。 +The TableFactory abstract contract has an openTable(string) method that returns an open table object。We might be tempted to keep this table object in a contract global variable for direct reuse the next time we operate on the table。Note that this is wrong: each time a table is opened, the returned table object is registered at a temporary address that is no longer valid after the next block, so the table object must not be kept in a global variable; instead, the table must be opened inside each method that needs to operate on it。In contrast, because TableFactory is registered on the blockchain at the fixed address 0x1001, the TableFactory object can be kept in a global variable without being recreated every time。 #### Article 6: Using CRUD contracts, you can store data using a mixture of tables and state variables -The CRUD contract is a Solidity contract with CRUD functionality, so the state variable storage method before
Solidity can still be used。When storing a single variable value, it may be more convenient to use a state variable directly, so you can define several state variables and create several user tables for data storage as appropriate。Table storage is recommended for structured data。 #### Clause 7: Adopt permission control to manage user tables -The logic of the CRUD contract is separated from the user table data, so the write operations of the user table (including insert, update, and delete operations) are no longer controlled by the restrictions of the contract interface.。To prevent any account from being able to write user tables, it is recommended to use permission control to manage user tables ([click Reference Permission Control for specific use](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/security_control/permission_control.html)), specify that a specific account has permission to write to the user table.。 +The logic of the CRUD contract is separated from the user table data, so write operations on a user table (including insert, update, and delete) are no longer restricted by the contract interface。To prevent arbitrary accounts from writing to user tables, it is recommended to manage user tables with permission control ([see the permission control documentation for details](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/security_control/permission_control.html)) and grant write permission only to specific accounts。 diff --git a/3.x/en/docs/articles/3_features/33_storage/data_chain_or_database.md b/3.x/en/docs/articles/3_features/33_storage/data_chain_or_database.md index 7ebfd4dad..9fd88cd25 100644 --- a/3.x/en/docs/articles/3_features/33_storage/data_chain_or_database.md +++ b/3.x/en/docs/articles/3_features/33_storage/data_chain_or_database.md @@ -8,35 +8,35 @@ Before you answer this question, you must first clarify.
"**Blockchain data**" "Blockchain data" broadly includes blockchain **block data** and blockchain **state data**: -- Block data records every transaction that occurs on the blockchain, such as Xiaoming transferring $50 to Xiao Wang, Xiao Wang recharging $20, and so on.; +- Block data records every transaction that occurs on the blockchain, such as Xiao Ming transferring 50 yuan to Xiao Wang, or Xiao Wang recharging 20 yuan; - State data records the current state of each account or smart contract on the blockchain, such as Xiao Ming's current balance of 50 yuan and Xiao Wang's current balance of 100 yuan。 Both block data and state data are used and stored by blockchain nodes。A blockchain node is a program that runs on our personal computer, virtual machine, or server。Multiple blockchain nodes distributed on different computers or servers are connected to each other through the network to form a complete blockchain network。 Blockchain nodes typically store blockchain data on a PC, virtual machine, or server, and the most common medium for storing blockchain data is disk。 -Blockchain nodes do not have direct access to disks, they manipulate data through a specific database, such as a stand-alone or distributed database like LevelDB, RocksDB, or MySQL。Compared with direct disk operation, the database abstracts a specific data access model and is more friendly to blockchain nodes.。 +Blockchain nodes do not access the disk directly; they manipulate data through a specific database, such as a standalone or embedded database like MySQL, LevelDB, or RocksDB。Compared with operating the disk directly, a database abstracts a data access model and is friendlier to blockchain nodes。 -Therefore, when we say: "Blockchain data is stored in a
database," it can be considered that the blockchain node stores the blockchain data in MySQL (or other database), and MySQL stores the blockchain data on disk.。 +Therefore, when we say "blockchain data is stored in a database," we mean that the blockchain node stores the blockchain data in MySQL (or another database), and MySQL stores the blockchain data on disk。 ![](../../../../images/articles/data_chain_or_database/IMG_5304.JPG) Databases can be divided into **standalone** and **embedded** types: -- Standalone databases, such as MySQL and Oracle, are commonly understood databases that run as independent processes and need to be deployed and started and stopped separately.。The standalone database can be deployed on the same server as the blockchain node or on a different server. It also supports distributed and clustered deployment.。Regardless of the deployment method, the standalone database is the storage component of the blockchain node, belongs to the blockchain node, and has nothing to do with the blockchain network。 -- Embedded databases, such as LevelDB and RocksDB, are integrated with blockchain nodes in the same process in the form of dynamic dependency libraries or static dependency libraries, and start and stop at the same time, without users noticeably feeling their presence.。 +- Standalone databases, such as MySQL and Oracle, are what people commonly understand as databases. They run as independent processes and must be deployed, started, and stopped separately。A standalone database can be deployed on the same server as the blockchain node or on a different server.
It also supports distributed and clustered deployment. Regardless of the deployment method, the standalone database is the storage component of the blockchain node: it belongs to the blockchain node and has nothing to do with the blockchain network.
+- Embedded databases, such as LevelDB and RocksDB, are integrated into the blockchain node's process as dynamic or static dependency libraries. They start and stop together with the node, and users are barely aware of their existence.

## On-chain data

-The block data and state data of blockchain data are not generated out of thin air.。The transaction in the block data is generated by the user of the block chain. The user sends the transaction to the block chain node. The block chain node packages multiple transactions into the block. The block will broadcast and agree on the block. After the block chain network reaches a consensus on the block, it agrees with the transaction in the block and saves the execution result of the transaction in the status data.。
+Neither block data nor state data is generated out of thin air. Transactions in block data are created by blockchain users: a user sends a transaction to a blockchain node, the node packages multiple transactions into a block, and the block is broadcast for consensus.
After the blockchain network reaches consensus on a block, the transactions in it are confirmed and their execution results are saved into the state data.

-Assuming that the original status data of the blockchain is: Xiao Ming's current balance is 50 yuan, Xiao Wang's current balance is 100 yuan, then after the execution of the "Xiao Ming to Xiao Wang transferred 50 yuan" transaction, the status data will change, Xiao Ming's current balance will become 0 yuan, Xiao Wang's current balance becomes 150 yuan.。
+Suppose the blockchain's original state data is: Xiao Ming's current balance is 50 yuan and Xiao Wang's current balance is 100 yuan. After the transaction "Xiao Ming transfers 50 yuan to Xiao Wang" is executed, the state data changes: Xiao Ming's balance becomes 0 yuan and Xiao Wang's becomes 150 yuan.

-Blocks require blockchain consensus, and state data is generated by executing transactions in the block, both of which are directly or indirectly related to blockchain consensus and can be referred to as "on-chain data."。Well, the clear definition of "on-chain data" is: on-chain data is data generated directly or indirectly by blockchain consensus.。
+Blocks require blockchain consensus, and state data is generated by executing the transactions in a block; both are directly or indirectly related to blockchain consensus and can be called "on-chain data." A clear definition, then: on-chain data is data generated, directly or indirectly, by blockchain consensus.

**Back to the original question**

-Obviously, "on-chain data" and "database" are not the same level of concept, "blockchain data is there on the chain or there is a database."?This problem is not true, blockchain data, whether stored in LevelDB, RocksDB, MySQL database or directly stored on disk, as long as it is directly or indirectly generated by blockchain consensus, can be regarded as on-chain data.。
+Obviously, "on-chain data" and "database" are not concepts at the same level, so the question "is blockchain data on the chain or in a database?" poses a false choice. Blockchain data, whether stored in LevelDB, RocksDB, or a MySQL database, or written directly to disk, can be regarded as on-chain data as long as it is directly or indirectly generated by blockchain consensus.

## On-chain data for FISCO BCOS

@@ -54,18 +54,18 @@ The blockchain data of FISCO BCOS is saved in the disk through RocksDB by defaul

Among them:

-- type indicates the storage type of the blockchain node, which is set to MySQL. This indicates that MySQL is used to store blockchain data.;
-- db _ ip is the IP address of the MySQL database. If the database is deployed on this computer, it is 127.0.0.1.;
-- db _ port indicates the port of the MySQL database. The default value is 3306.;
-- db _ username is the login user name of the MySQL database;
-- db _ name is the name of the database used to store blockchain data in the MySQL database.;
-- db _ passwd is the login password for the MySQL database。
+- type indicates the storage type of the blockchain node; setting it to mysql means MySQL is used to store blockchain data;
+- db_ip is the IP address of the MySQL database; if the database is deployed on the same machine, it is 127.0.0.1;
+- db_port is the port of the MySQL database; the default is 3306;
+- db_username is the login user name of the MySQL database;
+- db_name is the name of the database used to store blockchain data in MySQL;
+- db_passwd is the login password of the MySQL database.

-For other unmentioned configuration items, you can leave the default values unchanged.
After completing the filling of these information, ensure that the database is running normally, and then restart the blockchain node, the blockchain node will save the blockchain data to the MySQL database.。FISCO BCOS's blockchain, whether stored in RocksDB or MySQL, can be considered as on-chain data。Using MySQL, you can easily view the size and structure of the data on the chain, such as the size of the block, the size of the account, and so on.。
+For the other, unmentioned configuration items, the default values can be kept. After filling in this information, make sure the database is running normally, then restart the blockchain node; the node will now save its blockchain data to the MySQL database. FISCO BCOS blockchain data, whether stored in RocksDB or MySQL, can be considered on-chain data. With MySQL, you can easily inspect the size and structure of the on-chain data, such as block sizes and account sizes.

## SUMMARY

-FISCO BCOS provides a flexible data storage mechanism, for the pursuit of convenience and performance scenarios, you can use the default RocksDB;For scenarios that focus on auditing and governance, you can use MySQL to meet different needs.。
+FISCO BCOS provides a flexible data storage mechanism that meets different needs: for scenarios that prize convenience and performance, use the default RocksDB; for scenarios that emphasize auditing and governance, use MySQL.

**For FISCO BCOS storage, refer to the [FISCO BCOS Distributed Storage Documentation](../../../manual/distributed_storage.html)**

@@ -75,10 +75,10 @@ FISCO BCOS provides a flexible data storage mechanism, for the pursuit of conven

【Q】**What a small world**: Data on the blockchain is only ever appended and can never be deleted, so after long-term use, will node efficiency keep declining?
-【A】 **CUHK Liu Zhongnan**The transaction and status data stored in the node header, which is the root hash value with a limited length.。
+【A】**CUHK Liu Zhongnan**: What the block header stores for transaction and state data is only their root hash, which has a fixed length.

【A】**Mo Nan**: The blockchain does keep growing as it is used, but its data access model is usually key-value, and the query efficiency of a KV model is barely affected by data volume, so performance is not significantly affected.

【Q】**What a small world**: If I add a node to the blockchain network, will the node automatically copy data from the other nodes via broadcast?

-【A】**Mo Nan**Add a node to the blockchain network. This node automatically synchronizes the data of other nodes.。
\ No newline at end of file
+【A】**Mo Nan**: Yes. When a node is added to the blockchain network, it automatically synchronizes the data of the other nodes.
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/33_storage/storage_by_table_structure.md b/3.x/en/docs/articles/3_features/33_storage/storage_by_table_structure.md
index 33e630da1..0f439fe1b 100644
--- a/3.x/en/docs/articles/3_features/33_storage/storage_by_table_structure.md
+++ b/3.x/en/docs/articles/3_features/33_storage/storage_by_table_structure.md
@@ -2,20 +2,20 @@

Author : YIN Qiangwen | FISCO BCOS Core Developer

-The underlying storage data structure of FISCO BCOS does not use the traditional MPT storage structure, but uses a table-based structure.。On the one hand, it avoids the problem of performance degradation caused by the rapid expansion of the world state;On the other hand, the table structure can be compatible with various storage engines, making business development more convenient.。
+The underlying storage of FISCO BCOS does not use the traditional MPT structure but a table-based structure. On the one hand, this avoids the performance degradation caused by rapid expansion of the world state; on the other hand, the table structure is compatible with a variety of storage engines, making business development more convenient.

## Classification of FISCO BCOS tables

-Each FISCO BCOS table has a primary key field and one or more value fields. Tables are divided into system tables (beginning with _ sys _), user tables (beginning with _ user _), and StorageState account tables (beginning with _ contract _ data _).。
+Each FISCO BCOS table has a primary key field and one or more value fields. Tables are divided into system tables (prefixed with `_sys_`), user tables (prefixed with `_user_`), and StorageState account tables (prefixed with `_contract_data_`).

All records in these tables carry the built-in fields `_id_`, `_status_`, `_num_`, and `_hash_`. In user tables and StorageState account tables, the key field is of type varchar(255) and the value field is of type mediumtext.

## System tables

-System tables exist by default, by node process or amdb-When the proxy process is started, the system tables are created. The description of each table is as follows。
+System tables exist by default: they are created when the node process or the amdb-proxy process starts. Each table is described below.

- **_sys_tables_**

-Stores the structure of all tables. Each table has a record in this table to record the structure of the table, including the key and field fields of the table.。The table structure is as follows:
+Stores the structure of all tables.
Each table has one record here describing its structure, including its key and value fields. The table structure is as follows:

| Field | table_name | key_field | value_field |
| ---- | ---------- | --------- | ------------------------ |

@@ -27,7 +27,7 @@ Taking the table name _ sys _ tx _ hash _ 2 _ block _ as an example, the data of

```
table_name=_sys_tx_hash_2_block_
key_field=hash
value_field=value,index
```

-The underlying creation table and read structure table are based on the _ sys _ tables _ table. As can be seen from the created _ sys _ tx _ hash _ 2 _ block _ table, this table contains three fields, namely the main key field hash, and the value field has two values, namely value and index.。
+Creating tables and reading table structures at the underlying layer are both based on the _sys_tables_ table. As can be seen from the created _sys_tx_hash_2_block_ table, it contains three fields: the primary key field hash, and two value fields, value and index.

- **_sys_consensus_**

Stores the list of consensus and observer nodes. The table structure is as follows:

| Field | name | type | node_id | enable_num |
| ---- | ---------------- | ---------------------------------------------- | ------- | ---------- |
-| Description| primary key, fixed to node| The node type. The sealer is the consensus node and the observer is the observation node.| node id| Effective block height|
+| Description | primary key, fixed to node | node type: sealer is a consensus node, observer is an observer node | node id | effective block height |

For example, for a chain of four nodes all initialized as consensus nodes, you can see that the four nodes are sealers (consensus nodes) with an effective block height of 0, as shown in the figure:

@@ -47,7 +47,7 @@ Remove the '149f3777a0...' node from the console and add it to the observer list

- **_sys_current_state_**

-Stores the latest status of the current blockchain. Every time block data is stored, this table will update the information, including the current allocated self-increasing id, the current block height, the number of failed transactions, and the total number of transactions.。The table structure is as follows:
+Stores the latest state of the blockchain. Each time block data is written, this table is updated with the currently allocated auto-increment id, the current block height, the number of failed transactions, and the total number of transactions. The table structure is as follows:

| Field | key | value |
| ---- | ---- | ----- |

The stored information is as follows:

| key | Meaning |
| ------------------------------ | -------------------- |
-| current_id | the auto - increment id currently allocated.|
+| current_id | the auto-increment id currently allocated |
| current_number | Current block height |
| total_failed_transaction_count | Number of failed transactions |
| total_transaction_count | Total transactions |

- **_sys_config_**

-Stores the group configuration items that require consensus. The table structure is the same as _ sys _ current _ state _. Currently, two numeric items are configured, which are the maximum number of transactions contained in a block and the value of gas.。When writing to the Genesis block, two configuration items, consensus.max _ trans _ num and tx.gas _ limit, are read from the group. [groupid] .genesis file and written to the table。The stored information is as follows:
+Stores the group configuration items that require consensus. The table structure is the same as _sys_current_state_.
Two numeric items are currently configured: the maximum number of transactions contained in a block and the gas limit. When the genesis block is written, the two configuration items consensus.max_trans_num and tx.gas_limit are read from the group.[groupid].genesis file and written into this table. The stored information is as follows:

| key | Meaning |
| -------------- | ------------------------ |

@@ -79,11 +79,11 @@ Store external account addresses with write permissions。The table structure is

| ---- | ---------- | -------------------- | ---------- |
| Description | Table name | External address with write permission | Effective block height |

-By default, this table has no data, indicating that all external accounts have read and write permissions. If you use the 'grantDeployAndCreateManager' command to authorize an account in the console, an entry will be added to the '_ sys _ table _ access _' table.。
+By default, this table contains no data, meaning all external accounts have read and write permissions. If you use the 'grantDeployAndCreateManager' command in the console to authorize an account, an entry is added to the '_sys_table_access_' table.

![](../../../../images/articles/storage_by_table_structure/IMG_4905.JPG)

-At the same time, you can see that in addition to authorized external accounts that can deploy contracts, other accounts will be prompted to deploy contracts without permission.。
+At the same time, you can see that apart from the authorized external account, which can deploy contracts, other accounts attempting to deploy a contract are told they lack permission.

![](../../../../images/articles/storage_by_table_structure/IMG_4906.JPG)

@@ -93,7 +93,7 @@ Storage block number to block hash mapping, can be mapped to block hash value ba

| Field | number | value |
| ---- | ------ | ---------- |
-| Description| Block No.| Block hash value|
+| Description | Block number | Block hash value |

- **_sys_hash_2_block_**

Store hash to serialized block data mapping, which can be mapped to block values

- **_sys_block_2_nonces_**

-The nonces of the transaction in the storage block, which can be mapped to the nonces value used when the block is generated based on the block number.。The table structure is as follows:
+Stores the nonces of the transactions in a block; given a block number, it can be mapped to the nonce values used when the block was generated. The table structure is as follows:

| Field | number | value |
| ---- | ------ | ------------------------ |
-| Description| Block No.| nonces value used to generate the block|
+| Description | Block number | nonce values used to generate the block |

- **_sys_tx_hash_2_block_**

The mapping from a transaction hash to a block number. The table structure is as follows:

| Field | hash | value | index |
| ---- | -------- | ------ | -------------------- |
-| Description| Transaction hash| Block No.| Number of the transaction in the block|
+| Description | Transaction hash | Block number | Index of the transaction within the block |

-A block may include multiple transactions. Therefore, a block hash and a transaction hash are in a one-to-many relationship. Therefore, a block generates multiple pieces of data in this table.。
+A block may include multiple transactions, so a block hash maps to transaction hashes one-to-many, and one block produces multiple records in this table.

- **_sys_cns_**

Store the mapping from contract name to contract address. The table structure is as follows:

| Field | name | version | address | abi |
| ---- | ------------- | ---------- | -------- | ------------------------------------------------------------ |
-| Description| Primary key, contract name| Contract version number| Contract Address| The interface description of the contract, which describes the contract field name, field type, method name, parameter name, parameter type, and method return value type.|
+| Description | Primary key, contract name | Contract version number | Contract address | The contract's interface description: field names, field types, method names, parameter names, parameter types, and method return types |

Contracts deployed with CNS can be called by contract name: a list of contract addresses with different version numbers is looked up by contract name, the address with the matching version number is filtered out, and then '_contract_data_' + `Address` + '_' is used as the table name to query the code value and execute the contract code.

@@ -137,7 +137,7 @@ For example, through the TableTest contract deployed by CNS, you can query the f

## User Table

-A table created by calling the CRUD interface.
The name of the table is' _ user _ < TableName > '. The prefix' _ user _ 'is automatically added to the underlying layer.。
+A user table is created when a user calls the CRUD interface. For a table named `<TableName>`, the underlying layer automatically adds the `_user_` prefix, so the stored table name is `_user_<TableName>`.

The table name and table structure are determined by the contract. For example, the contract code for creating a table is:

@@ -174,15 +174,15 @@ At the same time, you can see that a table named '_ contract _ data _ a582f529ff

## SUMMARY

-The table-based storage method abstracts the underlying storage model of the blockchain, implements an SQL-like abstract storage interface, and supports a variety of back-end databases.。
-After the introduction of table-based storage, data read and write requests directly access the storage without MPT, combined with the cache mechanism, the storage performance is greatly improved compared to MPT-based storage.。MPT data structure remains as an option。
+Table-based storage abstracts the blockchain's underlying storage model, implements an SQL-like abstract storage interface, and supports a variety of back-end databases.
+With table-based storage, read and write requests access storage directly without going through MPT; combined with the cache mechanism, storage performance is greatly improved over MPT-based storage. The MPT data structure remains available as an option.

### "Group Questions"

**Q**: Tenglong (He Zhiqun): Is there a comparison of the advantages and disadvantages of tables versus a smart contract's internal storage?
-**A**: Wheat: table is a design similar to the traditional database usage, which can make the development of writing business logic easier to get started.。Table-based data storage, management is more convenient。Table format performance is also higher than contract mpt format for storing data。
+**A**: Wheat: Tables are a design close to how traditional databases are used, which makes writing business logic easier to pick up. Table-based data storage is more convenient to manage, and the table format also performs better than storing data in the contract MPT format.

**Q**: Mr. Wang: I'm a little confused. Isn't table storage just the same as traditional database storage? Then what is the blockchain for?

-**A**: Yin Qiangwen: What storage structure to use, essentially does not change the blockchain has the characteristics of decentralization, non-tampering, irreversible, anonymous, etc.。Just use table-based storage, there are some advantages, one is the data is based on table storage, management is more convenient。Compared with contract mpt format to store data, table format performance is also higher, while the table structure can be compatible with a variety of storage engines, making business development more convenient.。
+**A**: Yin Qiangwen: The choice of storage structure does not change the blockchain's essential characteristics of decentralization, tamper resistance, irreversibility, anonymity, and so on. Table-based storage simply brings some advantages: data organized in tables is more convenient to manage; the table format performs better than storing data in the contract MPT format; and the table structure is compatible with a variety of storage engines, making business development more convenient.
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/33_storage/why_switch_to_rocksdb.md b/3.x/en/docs/articles/3_features/33_storage/why_switch_to_rocksdb.md
index
02417343f..3cfa4dffa 100644
--- a/3.x/en/docs/articles/3_features/33_storage/why_switch_to_rocksdb.md
+++ b/3.x/en/docs/articles/3_features/33_storage/why_switch_to_rocksdb.md
@@ -2,21 +2,21 @@

Author : Bai Xingqiang | FISCO BCOS Core Developer

-The storage module is one of the cores of the underlying blockchain platform and is responsible for storing all the data in the blockchain that needs to be persisted to disk.。An excellent blockchain underlying platform must have a strong storage module support。FISCO BCOS storage modules have been refactored and optimized many times to provide strong support for performance breakthroughs.。Currently, FISCO BCOS single chain TPS reaches 20,000+and supports parallel expansion of parallel multi-chain。
+The storage module is one of the cores of an underlying blockchain platform: it is responsible for persisting to disk all blockchain data that needs to be persisted. An excellent underlying blockchain platform needs strong storage module support. The FISCO BCOS storage module has been refactored and optimized many times, providing strong support for performance breakthroughs. Currently, FISCO BCOS reaches 20,000+ TPS on a single chain and supports scaling out via parallel multi-chain deployment.

-2.0.0-Before rc3, FISCO BCOS supported LevelDB and MySQL as data storage engines. After rc3, the embedded storage engine was switched from LevelDB to RocksDB.。Why Switch??What can you bring after switching RocksDB?This article will take you all together to review our considerations in making this decision.。
+Before version 2.0.0-rc3, FISCO BCOS supported LevelDB and MySQL as data storage engines.
After rc3, the embedded storage engine was switched from LevelDB to RocksDB. Why switch? What does switching to RocksDB bring? This article reviews the considerations behind this decision.

## FISCO BCOS Storage Module Overview

### Data submission process

-The data that needs to be stored in FISCO BCOS can be divided into two parts, one is consensus-based on-chain data, including transaction, receipt, block, and contract data.;The other part is the data required by each node to maintain the operation of the blockchain, including the current block height, the number of transactions on the chain, and the index information related to some transaction blocks.。New blocks on the blockchain come from the synchronization module and the consensus module。Take the synchronization module as an example, when a new block is obtained, the synchronization module calls the BlockVerifier module to execute and verify the block, and if the verification passes, the BlockChain module is called to submit the data generated by the block and the execution block to the storage module, which is responsible for serializing the data into the database.。
+The data FISCO BCOS needs to store falls into two parts. One is consensus-based on-chain data, including transactions, receipts, blocks, and contract data; the other is the data each node needs to keep the blockchain running, including the current block height, the number of on-chain transactions, and some transaction-to-block index information. New blocks come from the synchronization module and the consensus module. Taking the synchronization module as an example: when a new block is received, the synchronization module calls the BlockVerifier module to execute and verify it; if verification passes, the BlockChain module is called to submit the block and the data produced by executing it to the storage module, which is responsible for serializing the data into the database.

![](../../../../images/articles/why_switch_to_rocksdb/IMG_5305.PNG)

### Storage Module Overview

-After the data is submitted to the storage module, it is an abstract table structure. The storage module first adds the submitted data to the cache layer to improve query performance。After the cache is updated, the data to be submitted is added to the submission queue, and the cache layer is responsible for asynchronous submission to the adaptation layer.。
+Data submitted to the storage module arrives as an abstract table structure. The storage module first adds the submitted data to the cache layer to improve query performance; once the cache is updated, the data is added to the commit queue, and the cache layer commits it asynchronously to the adaptation layer.

![](../../../../images/articles/why_switch_to_rocksdb/IMG_5306.PNG)

@@ -24,11 +24,11 @@ The adaptation layer needs to convert the submitted data from the abstract table

![](../../../../images/articles/why_switch_to_rocksdb/IMG_5307.PNG)

-For KV storage modes such as RocksDB or LevelDB, the table name and the primary key set during insertion are combined as the database KEY, and the corresponding data is serialized as VALUE。The data corresponding to the table _ sys _ config _ and the main key tx _ conut _ limit. The KEY in the KV database is _ sys _ config _ _ tx _ conut _ limit.。
+For KV storage engines such as RocksDB or LevelDB, the table name and the primary key set at insertion are concatenated as the database KEY, and the corresponding data is serialized as the VALUE. For example, for the record in table _sys_config_ with primary key tx_count_limit, the KEY in the KV database is _sys_config__tx_count_limit.

## Why choose RocksDB?
-FISCO BCOS has been using LevelDB as the underlying data storage engine since version 1.0, and we have encountered some minor problems during use, such as high memory footprint, overrun file descriptors causing processes to be killed, and possible DB damage after nodes are killed.。
+FISCO BCOS has used LevelDB as its underlying storage engine since version 1.0. Along the way we ran into some nagging problems, such as high memory footprint, exceeding the file descriptor limit causing the process to be killed, and possible DB corruption after a node is killed.

When refactoring version 2.0, we needed a better-performing storage engine, one that met the following conditions:

@@ -42,13 +42,13 @@ When refactoring version 2.0, for higher performance, we need a better storage e

Based on the above conditions, RocksDB entered our field of vision.

-RocksDB fork comes from LevelDB and is open source and maintained by backbook. Compared with LevelDB, it has obvious performance improvement, maintains the same interface as LevelDB, and has extremely low migration cost.。From the data, it is very consistent with our needs.。
+RocksDB is a fork of LevelDB, open-sourced and maintained by Facebook. Compared with LevelDB it offers a clear performance improvement while keeping the same interface, so the migration cost is extremely low. On paper, it matched our needs very well.

### Performance comparison between LevelDB and RocksDB

-The following test data is in a 4 vCPU E5-26xx 2.4GHz 8G 500GB Tengxun cloud hard disk machine, provided by Yin Qiangwen, the core developer of FISCO BCOS。
+The following test data was obtained on a machine with 4 vCPU E5-26xx 2.4GHz, 8G RAM, and a 500GB Tencent Cloud disk, provided by FISCO BCOS core developer Yin Qiangwen.

-The length of the test key is 16 bytes, the length of VALUE is 100 bytes, the compression algorithm uses Snappy, and other parameters use default values. In the case of 10 million pieces of data and 100 million pieces of data, we can see the performance comparison between LevelDB and RocksDB: under the two data volumes, RocksDB has achieved no worse or better performance than LevelDB in all scenarios.。
+The test keys are 16 bytes long and the VALUEs 100 bytes; the compression algorithm is Snappy and other parameters are defaults. Comparing LevelDB and RocksDB at 10 million and at 100 million records, RocksDB performs as well as or better than LevelDB in every scenario at both data volumes.

![](../../../../images/articles/why_switch_to_rocksdb/IMG_5308.PNG)

@@ -57,17 +57,17 @@

### Using RocksDB in FISCO BCOS

-On the official wiki of RocksDB, there is a page called Features Not in LevelDB. This page describes all the new features in RocksDB, such as support for column families, support for logical database partitioning, support for backup and checkpoint, support for backup to HDFS, and two compaction methods. It allows users to choose between STD compression algorithms such as read amplification, write amplification, and space amplification. Statistics comes with modules for easy tuning.。
+RocksDB's official wiki has a page called Features Not in LevelDB, which describes everything new in RocksDB: support for column families, logical database partitioning, backup and checkpoints, backup to HDFS, two compaction styles, and the ability to trade off among read amplification, write amplification, and space amplification.
Statistics comes with modules for easy tuning。 -The official wiki also mentions RocksDB's optimizations to improve performance, including multi-threaded Compact, multi-threaded memtable insertion, reduced DB lock holding time, write lock optimization, and fewer comparison operations when skipping table searches.。According to the official documentation, RocksDB uses multi-threaded compaction in scenarios where the insertion key is ordered, making RocksDB's performance significantly higher than LevelDB's.。 +The official wiki also mentions RocksDB's optimizations to improve performance, including multi-threaded Compact, multi-threaded memtable insertion, reduced DB lock holding time, write lock optimization, and fewer comparison operations when skipping table searches。According to the official documentation, RocksDB uses multi-threaded compaction in scenarios where the insertion key is ordered, making RocksDB's performance significantly higher than LevelDB's。 -When using RocksDB, FISCO BCOS only uses the default parameters and the read-write interface compatible with LevelDB, and does not do further parameter tuning. RocksDB has pointed out in the official document that the default parameters can already achieve good performance, and further tuning parameters cannot bring significant performance improvement.。 +When using RocksDB, FISCO BCOS only uses the default parameters and the read-write interface compatible with LevelDB, and does not do further parameter tuning. RocksDB has pointed out in the official document that the default parameters can already achieve good performance, and further tuning parameters cannot bring significant performance improvement。 In the future, as we learn more about RocksDB, if we find better parameter settings, we will also use the。 ## SUMMARY -Why change to RocksDB? 
In fact, in a word, RocksDB has higher performance!Anything that can make FISCO BCOS better, we are willing to do。Recently, FISCO BCOS released version v2.2.0, which has been further optimized in terms of performance. Every time the performance improvement is the result of the developers of FISCO BCOS, we will continue to do this. I hope that the students in the community will also participate in it.!
+Why switch to RocksDB? In a word: RocksDB is faster. We are willing to do anything that makes FISCO BCOS better. FISCO BCOS recently released v2.2.0, which is further optimized for performance. Every performance improvement is the work of the FISCO BCOS developers, and we will keep at it. We hope community members will join in too!

------

diff --git a/3.x/en/docs/articles/3_features/34_protocol/amop_introduction.md b/3.x/en/docs/articles/3_features/34_protocol/amop_introduction.md
index 92b9dc98d..2da12389b 100644
--- a/3.x/en/docs/articles/3_features/34_protocol/amop_introduction.md
+++ b/3.x/en/docs/articles/3_features/34_protocol/amop_introduction.md
@@ -2,7 +2,7 @@

Author : YIN Qiangwen | FISCO BCOS Core Developer

-**Introduction to AMOP**: The Advanced Messages Onchain Protocol (AMOP) is designed to provide a secure and efficient message transmission channel for all agencies in the alliance chain, supporting real-time message communication across agencies, point-to-point, and providing a standardized interface for interaction between off-chain systems.
AMOP is based on SSL communication encryption to ensure that messages cannot be eavesdropped.。
+**Introduction to AMOP**: The Advanced Messages Onchain Protocol (AMOP) provides a secure and efficient message transmission channel for all agencies in a consortium chain, supporting real-time, point-to-point message communication across agencies and providing a standardized interface for interaction with off-chain systems. AMOP communication is SSL-encrypted to prevent eavesdropping.

## logical architecture

@@ -12,15 +12,15 @@ AMOP uses the underlying P2P communication of FISCO BCOS. The logical architectu

The regions are summarized as follows:

-- **out-chain region**: The business service area within the organization. The business subsystems in this area use the blockchain SDK to connect to the blockchain nodes.。
+- **out-chain region**: The business service area within an organization. The business subsystems in this area use the blockchain SDK to connect to blockchain nodes.

-- **P2P network area inside blockchain**: This area is a logical area. The blockchain nodes of each organization are deployed. The blockchain nodes can also be deployed inside the organization.。
+- **P2P network area inside blockchain**: This is a logical area in which each organization's blockchain nodes are deployed. Blockchain nodes may also be deployed inside an organization.

## **Core implementation**

AMOP's messaging is based on a **Sub-Pub subscription mechanism**: the server first sets a topic, the client sends a message to that topic, and the server receives it.

-AMOP supports multiple topics in the same blockchain network to send and receive messages, and supports any number of servers and clients.
When multiple servers pay attention to the same topic, the messages of the topic are randomly distributed to one of the available servers.。
+AMOP supports sending and receiving messages on multiple topics in the same blockchain network, with any number of servers and clients. When multiple servers subscribe to the same topic, each message on that topic is randomly delivered to one of the available servers.

**AMOP includes two processes**:

@@ -28,9 +28,9 @@ AMOP supports multiple topics in the same blockchain network to send and receive

2. The client sends a message to the topic.

-The following is an example to illustrate the internal implementation. As shown in the following figure, there are two SDKs, SDK1 and SDK2, and two nodes, Node1 and Node2, respectively.。Set the topic T1 in the SDK1 connection to Node1, and send the message that the topic is T1 in the SDK2 connection to Node2。
+The following example illustrates the internal implementation. As shown in the figure below, there are two SDKs (SDK1 and SDK2) and two nodes (Node1 and Node2). SDK1's connection to Node1 sets topic T1, and SDK2's connection to Node2 sends a message with topic T1.

-### 1. The server sets the topic to listen to the message timing of the topic.
+### 1. Sequence of the server setting a topic and listening for its messages

![](../../../../images/articles/amop_introduction/IMG_5316.JPG)

@@ -38,23 +38,23 @@ The following is an example to illustrate the internal implementation. As shown

1. SDK1 sends a request to listen on a topic to its directly connected node, Node1, which maintains a mapping from nodes to topic lists. This mapping is used for message routing; it is a map whose key is a NodeId and whose value is a set of the topics that node can receive messages for.

2. After Node1 adds the new topic, it updates the node-to-topic mapping table.

-3.
Node1 updates seq: seq is mainly used to ensure that the mapping table of each node is consistent. After adding a topic, the seq of this node will be increased by 1, and the heartbeat packet between nodes will be sent to other nodes. After receiving the heartbeat packet, other nodes (Node2) will compare the seq in the parameter with the seq of this node.(Node1)Request the mapping relationship between the node and the topic list, update the latest mapping relationship to this node and update seq。This ensures the consistency of the global mapping relationship.。
+3. Node1 updates its seq. The seq keeps every node's mapping table consistent: after a topic is added, the node increments its seq by 1 and carries it in the heartbeat packets sent to other nodes. When another node (Node2) receives a heartbeat and finds that the seq in it differs from its own, it requests the node-to-topic-list mapping from the sender (Node1), updates its local mapping, and updates its seq. This keeps the global mapping consistent.

### 2. Sequence of the client sending a message to the topic

![](../../../../images/articles/amop_introduction/IMG_5317.JPG)

-- SDK2 sends a message to Node2。
+- SDK2 sends a message to Node2.

- Node2 looks up, in the node-to-topic mapping, the list of nodes that can receive the topic, and randomly selects one node (Node1) to send to.

-- Node 1 receives the message and pushes it to SDK1。
+- After receiving the message, Node1 pushes it to SDK1.

## Profile Configuration

-AMOP does not require any additional configuration. The following is the reference configuration of Web3Sdk. Please refer to [Reference Document] for details.(https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/sdk.html)。
+AMOP does not require any additional configuration. The following is the reference Web3Sdk configuration.
Please refer to the [Reference Document](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/sdk.html) for details.

![](../../../../images/articles/amop_introduction/IMG_5318.PNG)

-The configuration files of different SDKs are different because of the different node addresses. Take the logical architecture diagram at the beginning of the article as an example, assume that the listening address of Node1 is 127.0.0.1.:20200, the listening address of Node2 is 127.0.0.1:20201, then SDK1 is configured as 127.0.0.1:20200, SDK2 Configuration 127.0.0.1:20201。
+The SDKs' configuration files differ because the node addresses differ. Taking the logical architecture diagram at the beginning of the article as an example, assume Node1 listens on 127.0.0.1:20200 and Node2 on 127.0.0.1:20201; then SDK1 is configured with 127.0.0.1:20200 and SDK2 with 127.0.0.1:20201.

## TEST

@@ -80,14 +80,14 @@ The client and server have the following effect after execution:

## Common error codes and problem location

-- **99**Failed to send the message. After AMOP attempts to send the message through all links, the message cannot be sent to the server. It is recommended to use the 'seq' generated during sending to check the processing of each node on the link.。
+- **99**: Failed to send the message: after AMOP tried all links, the message could not be delivered to the server. Use the `seq` generated when sending to check how each node on the link processed the message.

-- **100**: After attempting to pass through all links between blockchain nodes, the message cannot be sent to the node that can receive the message.
Like the error code '99', it is recommended to use the 'seq' generated at the time of sending to check the processing of each node on the link.。
+- **100**: After trying all links between blockchain nodes, the message could not be delivered to a node able to receive it. As with error code `99`, use the `seq` generated when sending to check how each node on the link processed the message.

-- **101**: The blockchain node pushes the message to the Sdk. After attempting to pass through all links, the message fails to reach the Sdk. Like the error code '99', it is recommended to use the 'seq' generated during sending to check the processing status of each node on the link and the Sdk.。
+- **101**: A blockchain node failed to push the message to the SDK after trying all links. As with error code `99`, use the `seq` generated when sending to check how each node on the link and the SDK processed the message.

- **102**: The message timed out. Check whether the server handled the message correctly and whether bandwidth is sufficient.

## Future plans

-In the future, we will continue to enrich the functions of AMOP, supporting binary transmission, message multicast protocol and topic authentication mechanism. We also welcome you to use AMOP and put forward optimization suggestions.。
\ No newline at end of file
+In the future, we will continue to enrich AMOP's functionality, adding binary transmission, a message multicast protocol, and a topic authentication mechanism.
We also welcome you to use AMOP and suggest improvements.
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/34_protocol/network_compression.md b/3.x/en/docs/articles/3_features/34_protocol/network_compression.md
index a6ba4b840..7f72746f0 100644
--- a/3.x/en/docs/articles/3_features/34_protocol/network_compression.md
+++ b/3.x/en/docs/articles/3_features/34_protocol/network_compression.md
@@ -4,19 +4,19 @@

Author : Chen Yujie | FISCO BCOS Core Developer

**Author language**

-In the external network environment, the performance of the blockchain system is limited by the network bandwidth. In order to minimize the impact of the network bandwidth on the system performance, FISCO BCOS-2.0.0-rc2 began to support the network compression function, which mainly compresses network packets at the sending end, unpacks the data at the receiving end, and passes the unpacked data to the upper module.。
+In external network environments, blockchain system performance is limited by network bandwidth. To minimize this impact, FISCO BCOS has supported network compression since release 2.0.0-rc2: packets are compressed on the sending side, decompressed on the receiving side, and the decompressed data is passed to the upper-layer module.

-This article is about the FISCO BCOS network compression function, the author from the FISCO BCOS system framework, core implementation, processing flow, test results and other aspects of the analysis.。
+This article analyzes the FISCO BCOS network compression feature in terms of system framework, core implementation, processing flow, and test results.

## Part 1. System framework

-Network compression is mainly implemented at the P2P network layer, and the system framework is as follows.
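The mechanism this article goes on to describe — compress packets above a size threshold on the sending side, flag them via the high bit of the packet header's Version field, and decompress on receipt — can be sketched roughly as follows. This is an illustrative sketch only: FISCO BCOS implements it in C++ with Snappy, while Python's zlib stands in here, and the constants (version value, header layout) are hypothetical.

```python
import struct
import zlib

# Hypothetical constants for illustration; the 1 KB threshold mirrors the
# c_compressThreshold idea discussed in this article.
COMPRESS_THRESHOLD = 1024
VERSION = 0x02          # hypothetical protocol version
COMPRESS_FLAG = 0x80    # highest bit of the one-byte Version field

def encode(payload: bytes) -> bytes:
    """Send side: compress payloads above the threshold and flag the header."""
    version = VERSION
    if len(payload) > COMPRESS_THRESHOLD:
        payload = zlib.compress(payload)   # Snappy in the real implementation
        version |= COMPRESS_FLAG           # mark the packet as compressed
    return struct.pack("!B", version) + payload

def decode(packet: bytes) -> bytes:
    """Receive side: check the flag, decompress if needed, hand data upward."""
    version, payload = packet[0], packet[1:]
    if version & COMPRESS_FLAG:
        payload = zlib.decompress(payload)
    return payload

small = b"tx" * 10       # under the threshold: sent as-is
large = b"block" * 1000  # over the threshold: compressed before sending
assert decode(encode(small)) == small
assert decode(encode(large)) == large
assert len(encode(large)) < len(large)  # compression actually saved bytes
```

The flag-in-the-version-byte trick keeps the wire format backward compatible: a packet from an old peer simply never has the high bit set, so it is delivered uncompressed.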
+Network compression is implemented mainly at the P2P network layer; the system framework is shown below.

![](../../../../images/articles/network_compression/IMG_5310.JPG)

Network compression consists of two main processes:

- **Send-side compression**: when the group layer sends data through the P2P layer, packets larger than 1KB are compressed before being sent to the destination node;

-- **Receiver Decompressed Data Packets**After the node receives the data packet, it first determines whether the received data packet is compressed. If the data packet is a compressed data packet, it will be decompressed and passed to the specified group.。
+- **Receive-side decompression**: after a node receives a packet, it first checks whether the packet is compressed; if so, the packet is decompressed and then passed to the specified group.

## Part 2. Core implementation

@@ -42,31 +42,31 @@ Considering that compressing and decompressing small data packets cannot save da

![](../../../../images/articles/network_compression/IMG_5312.PNG)

-- The highest bit of Version is 0, indicating that the data data corresponding to the data packet is uncompressed data;
-- The highest bit of Version is 1, indicating that the data data corresponding to the data packet is compressed data。
+- If the highest bit of Version is 0, the packet's Data field is uncompressed;
+- If the highest bit of Version is 1, the packet's Data field is compressed.

## Part 3. Process Flow

-The following is an example of a node in group 1 sending a network message packet packetA to other nodes in the group (for example, sending transactions, blocks, consensus message packets, etc.)
to describe in detail the key processing flow of the network compression module.。
+The following example, in which a node in group 1 sends a network message packet packetA to the other nodes in the group (for example a transaction, block, or consensus message), describes the key processing flow of the network compression module in detail.

#### Send-side processing flow:

-- The group module of group 1 passes packetA to the P2P layer.;
-If P2P determines that the packet of packetA is greater than 'c _ compressThreshold', it calls the compression interface to compress packetA, otherwise it directly passes packetA to the encoding module.;
-The encoding module adds a packet header to packetA, along with data compression information, that is, if packetA is a compressed packet, set the highest position of the packet header Version to 1, otherwise set it to 0.;
-P2P transmits the encoded data packet to the destination node。
+- The group module of group 1 passes packetA to the P2P layer;
+- If P2P determines that packetA is larger than `c_compressThreshold`, it calls the compression interface to compress packetA; otherwise it passes packetA directly to the encoding module;
+- The encoding module adds a packet header to packetA carrying the compression flag: if packetA is compressed, the highest bit of the header's Version field is set to 1, otherwise 0;
+- P2P delivers the encoded packet to the destination node.

#### Receiving end processing flow:

-- After the target machine receives the data packet, the decoding module separates the packet header and determines whether the network data is compressed by whether the highest bit of the packet header Version field is 1.;
-If the network data packet has been compressed, the decompression interface is called to decompress the data part, and according to the GID and PID attached to the data packet header, the decompressed data is passed to the specified
module of the specified group.;Otherwise, the data packet is directly passed to the upper module.。
+- After the destination machine receives the packet, the decoding module strips off the packet header and checks whether the highest bit of the header's Version field is 1 to determine whether the data is compressed;
+- If the packet is compressed, the decompression interface is called to decompress the data part, and the decompressed data is passed to the specified module of the specified group according to the GID and PID in the packet header; otherwise, the packet is passed directly to the upper module.

## Part 4. Configuration and Compatibility

#### Configuration Description

-- Compression on: 2.0.0-rc2 and later versions support network compression. Set 'config.ini' to '[p2p] .enable _ compresss = true'
-- Turn off compression: '[p2p] .enable _ compresss = false for' config.ini'
+- Enable compression: 2.0.0-rc2 and later versions support network compression; set `[p2p].enable_compresss=true` in `config.ini`
+- Disable compression: set `[p2p].enable_compresss=false` in `config.ini`

#### Compatibility Description

@@ -75,7 +75,7 @@ The following is an example of a node in group 1 sending a network message packe

## Part 5. Test Results

-To test the effect of network compression, respectively.**Intranet and Extranet**environment, to**Same pressure test procedure and QPS**Pressure measurement**Network compression turned on and not turned on**of the four-node blockchain, the test results are as follows。
+To test the effect of network compression, we stress-tested a four-node blockchain **with and without network compression enabled**, in both **intranet and extranet** environments, using the **same stress-test program and QPS**. The results are as follows.

The test results show:

@@ -94,7 +94,7 @@ As can be seen from Figure 1, the implementation of**Serial Solidity Contract**,

![](../../../../images/articles/network_compression/IMG_5314.JPG)

-As can be seen from Figure 2,**In the intranet environment, turning on compression has little effect on the performance of the blockchain system.**;In the external network environment, the performance of the blockchain is improved by about one third because the compression can process more transactions under the limited bandwidth limit.。
+As can be seen from Figure 2, **in the intranet environment, enabling compression has little effect on the performance of the blockchain system**; in the external network environment, performance improves by about one third, because compression lets more transactions be processed within the limited bandwidth.

### Figure 3: Detailed Data

@@ -125,17 +125,17 @@ As can be seen from Figure 2,**In the intranet environment, turning on compressi

**@ nameless**: What software is used to test bandwidth?
-**@ Chen Yujie**: At that time, when testing bandwidth, it was an exclusive machine, directly using nload, of course, in a multi-process environment, you can also use nethogs, etc.。
+**@ Chen Yujie**: When we tested bandwidth the machine was dedicated, so we used nload directly; in a multi-process environment you can also use nethogs and similar tools.

**@elikong** asked two questions:

-1. Why choose snappy? Have you done compression performance analysis and comparison, including compression rate, cpu time, typical messages, etc.。
+1. Why choose snappy? Did you analyze and compare compression performance, including compression ratio, CPU time, and typical messages?

2. The intranet bandwidth differs greatly before and after compression, so why is the TPS increase not obvious?

**@ Chen Yujie**

-1, there is a preliminary research, when the research of various compression library compression ratio, compression and decompression speed, license and so on.。The primaries are lz4 and snappy, and a version that supports both library compression algorithms is implemented, and pressure tests are performed, which show that the test results of the two libraries are not much different.。Since snappy is already integrated into our system, in order to avoid introducing additional libraries, snappy was eventually chosen.。
+1. We did some preliminary research comparing compression ratio, compression/decompression speed, and licenses across various compression libraries. The finalists were lz4 and snappy; we implemented a version supporting both and ran stress tests, which showed little difference between them. Since snappy was already integrated into our system, we chose it to avoid introducing an additional library.

-2.
In the case of intranet, the performance bottleneck is CPU(Including transaction execution speed, checking performance, etc.), IO, etc., the network is not a bottleneck, so even if compression is turned on, it saves network resources and has little impact on performance。Of course, this also shows that compression and decompression themselves have little performance loss.;In the external network environment, the network is the bottleneck, at this time most of the time is spent on the network, open compression, saving a lot of network bandwidth, so that in the same time, more packets can be transmitted between nodes, thus improving performance.。
+2. On an intranet, the bottleneck is CPU (transaction execution, verification, etc.) and I/O rather than the network, so enabling compression saves network resources but has little impact on performance; this also shows that compression and decompression themselves cost little. On an external network, the network is the bottleneck and most of the time is spent on transmission; enabling compression saves substantial bandwidth, so more packets can be transmitted between nodes in the same time, thus improving performance.

diff --git a/3.x/en/docs/articles/3_features/34_protocol/network_interface.md b/3.x/en/docs/articles/3_features/34_protocol/network_interface.md
index cac367717..bb108ea66 100644
--- a/3.x/en/docs/articles/3_features/34_protocol/network_interface.md
+++ b/3.x/en/docs/articles/3_features/34_protocol/network_interface.md
@@ -4,8 +4,8 @@

Author: Zhang Kaixiang | Chief Architect, FISCO BCOS

**Author language**

-The blockchain network consists of multiple interconnected nodes, each of which is in turn connected to client browser monitoring tools, etc.;Clarifying the existence of various network ports and achieving smooth network flow while ensuring security is the basis for building a
blockchain network.。
-At the same time, there are some hot issues in the process of chaining, such as why the node opens so many ports?Or why the network doesn't work?Node cannot connect?No consensus out of blocks?is the so-called"General rule does not hurt",**A smooth network can link everything.**。
+A blockchain network consists of multiple interconnected nodes, each of which also connects to clients, browsers, monitoring tools, and so on. Understanding the various network ports, and keeping the network flowing smoothly while ensuring security, is the basis for building a blockchain network.
+At the same time, some questions come up again and again when building a chain: why does a node open so many ports? Why doesn't the network work? Why can't a node connect? Why is there no consensus and no new blocks? As the saying goes, "where things flow, nothing hurts": **a smooth network links everything**.

This article focuses on network port interworking. The author covers the FISCO BCOS network ports, typical FISCO BCOS 2.0 network configurations, strategies for designing network security groups, and more.

@@ -18,8 +18,8 @@ FISCO BCOS 2.0 network includes P2P port, RPC port, Channel port。

### 1. P2P Port

-P2P ports for interconnection between blockchain nodes, including multiple nodes within an institution, and interconnection of nodes and nodes across multiple institutions。If other nodes are outside the organization, the connection should listen to the public network address, or listen to the intranet, and the gateway connected to the public network (such as nginx) forwards the network connection.。
-Connections between nodes are controlled by the admission mechanism of the federation chain, and connections between nodes rely on node certificate verification to exclude unauthorized dangerous connections.。The data on this link is encrypted by SSL, using strong keys, which can effectively protect the security of communication.。
+The P2P port interconnects blockchain nodes, both among multiple nodes within an institution and across institutions. When peer nodes are outside the organization, the connection should listen on a public network address, or listen on the intranet with a public-facing gateway (such as nginx) forwarding the connection.
+Connections between nodes are controlled by the consortium chain's admission mechanism and rely on node certificate verification to exclude unauthorized, dangerous connections. Traffic on this link is SSL-encrypted with strong keys, effectively protecting communication security.

[P2P network detailed design](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/p2p/p2p.html)

@@ -36,15 +36,15 @@ The channel port should only listen to the intranet IP address for other applica

### 3. RPC Port

-RPC is a set of protocols and interfaces for the client to interact with the blockchain system. Users can query blockchain-related information (such as block height, block, node connection, etc.)
and send transactions through the RPC interface.。
+RPC is a set of protocols and interfaces for clients to interact with the blockchain system. Through the RPC interface, users can query blockchain information (such as block height, blocks, and node connections) and send transactions.

-RPC port accepts JSON-RPC format requests, the format is more intuitive and clear, using CURL, JavaScript, Python, Go and other languages can be assembled JSON format requests, sent to the node to process.。Of course, when sending a transaction, you need to implement a transaction signature on the client side.。It should be noted that the RPC connection does not do certificate verification, and the network transmission is clear by default, the security is relatively low, it is recommended to only listen to the intranet port, for monitoring, operation management, status query and other internal workflow.。Currently, in the monitoring script, the blockchain browser is connected to the RPC port.。
+The RPC port accepts JSON-RPC requests, a format that is intuitive and clear; requests can be assembled with curl or in JavaScript, Python, Go, and other languages and sent to the node for processing. When sending a transaction, the transaction must be signed on the client side. Note that RPC connections perform no certificate verification and transmit in plaintext by default, so security is relatively low; it is recommended to listen only on an intranet port and use RPC for internal workflows such as monitoring, operations management, and status queries. Currently, the monitoring scripts and the blockchain browser connect to the RPC port.

[RPC Port Documentation](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/api.html)

## A typical network configuration for FISCO BCOS 2.0

-A typical network configuration for FISCO BCOS 2.0 is shown below.
You can see that RPC and channel ports share the same IP, and P2P connections listen to one IP separately, that is, a blockchain node uses two IPs and three ports.。
+A typical FISCO BCOS 2.0 network configuration is shown below. The RPC and Channel ports share one IP, while the P2P connection listens on a separate IP; in other words, a blockchain node uses two IPs and three ports.

The config.ini file under the node:

@@ -54,25 +54,25 @@ The config.ini file under the node:

## Typical network addresses of several computers

-**1. Special address**: 0.0.0.0, which means that all local addresses, including local, intranet, and public (if any) addresses, are monitored.。This address should generally not be monitored unless it is convenient and safe to do so.。
+**1. Special address**: 0.0.0.0 means listening on all local addresses, including loopback, intranet, and public (if any) addresses. Generally, do not listen on this address unless it is both convenient and safe to do so.

-**2. Local address**: 127.0.0.1 (some configurations can be written as localhost), only other processes on the same machine can connect to this address, other machines are not connected.。For security and simplicity, this address is written by default in some sample scripts of FISCO BCOS, including the default configuration of the build _ chain script.。Users sometimes find that other machines running client programs can not connect, the probability is that this is the reason, or you can also check whether the network policy has enabled the interconnection, it is recommended that you can use the system's telnet [ip] [port] command to quickly check whether the first can connect。
+**2. Local address**: 127.0.0.1 (sometimes written as localhost). Only processes on the same machine can connect to this address; other machines cannot. For security and simplicity, some FISCO BCOS sample scripts, including the default configuration of the build_chain script, use this address by default. When a client program on another machine cannot connect, this is often the reason; also check whether the network policy allows the connection. The system's telnet [ip] [port] command is a quick way to check connectivity.

**3. Intranet address**: Addresses beginning with 192.168.xxx.xxx, 172.xxx.xxx.xxx, or 10.xxx.xxx.xxx are intranet addresses; if you listen on such an address, only machines in the same LAN can access it.

-**4. External network address**: The public address exposed on the Internet, or the address that can be accessed from the external network of the organization, in short, the external server can connect to the external address.。
+**4. External network address**: A public address exposed on the Internet, or an address reachable from outside the organization; in short, an address that external servers can connect to.

## Some strategies for designing a network security group

-In different network topologies, this may be involved: although the server can access the external network, it is forwarded by the gateway, router, NAT, and then you need to understand the specific network structure and configure it.。If you listen to an intranet address, configure the intranet address and the listening port on the forwarder, you can also receive connections from the external network.。
+Different network topologies can complicate this: a server may reach the external network only via a gateway, router, or NAT, in which case you need to understand the specific network structure before configuring. If you listen on an intranet address and configure that address and port on the forwarder, you can still receive connections from the external network.

-In terms of network security, it is necessary to carefully design the network security group policy, IP and port black and white list, and accurately control the two-way connection.。Including but not limited to the following strategies:
+For network security, carefully design the security group policy and the IP and port blacklists and whitelists, and control connections precisely in both directions. Strategies include, but are not limited to:

-- **1. Set an external IP whitelist**Only these external IPs (usually other organizations that have established alliances) can be connected.;
+- **1. Set an external IP whitelist**: only these external IPs (usually allied organizations) may connect;

- **2. Set an IP blacklist**: deny connections from specific IPs outright, instead of waiting until they connect to the node to make an admission-control judgment;

- **3. Control the RPC port**: the RPC port (such as 8545) is open only to the local machine; other intranet servers cannot connect to it;

-- **4. Control Channel Port**It is only open to a certain intranet network segment or a few IPs, and its application is deployed to the server corresponding to the open network segment or IP, and other applications in the intranet cannot access the blockchain node.;
-- **5. Whenever there is an external network port**It is recommended to set anti-DDoS measures to avoid frequent connections and massive concurrent connection attacks.。
+- **4. Control the Channel port**: open it only to a specific intranet segment or a few IPs, and deploy the applications on servers within that segment or at those IPs; other intranet applications cannot access the blockchain node;
+- **5. Whenever a port faces the external network**: set up anti-DDoS measures to guard against frequent connections and massive concurrent connection attacks.

## SUMMARY

@@ -86,11 +86,11 @@ IP addresses, ports, roles, and security considerations of the three network por

**Q**: With P2P communication and SSL authentication between nodes, how does each node obtain and verify the peer node's public key certificate, the preset root certificate, and the certificate chain? When a node communicates with other nodes, which side is the Server and which side is the Client?
-**A**When the chain is created, the root certificate of the chain is assigned.;When each node communicates with other nodes, each node is a Server Client。
+**A**: When the chain is created, the chain's root certificate is distributed. When nodes communicate with each other, each node acts as both Server and Client.

**Q**: Anti-DDoS protection is recommended for the external IP, which is a standard centralized defense approach. Blockchain is decentralized by design, so is attacking a single node pointless? Can the mechanism accommodate more nodes?

-**A**On the issue of DDoS, for the alliance chain, an organization generally deploys two nodes, if attacked, it may affect the organization's business, but not the entire network.。The mechanism can arrange multiple nodes, such as 4, 5, or 10。
+**A**: On DDoS, in a consortium chain an organization generally deploys two nodes; if attacked, that organization's business may be affected, but not the entire network. The mechanism can accommodate multiple nodes, such as 4, 5, or 10.

Thanks to all the friends who joined this topic discussion! The open source community is better because of you!

diff --git a/3.x/en/docs/articles/3_features/35_contract/16skills_to_high-level_smart_contracts.md b/3.x/en/docs/articles/3_features/35_contract/16skills_to_high-level_smart_contracts.md
index e4ace2a27..72a8ee2b4 100644
--- a/3.x/en/docs/articles/3_features/35_contract/16skills_to_high-level_smart_contracts.md
+++ b/3.x/en/docs/articles/3_features/35_contract/16skills_to_high-level_smart_contracts.md
@@ -5,91 +5,91 @@ Author : ZHANG Long | FISCO BCOS Core Developer

## What is a smart contract?
A smart contract consists of contract/agreement terms placed on the blockchain network electronically, in the form of code; it is executed by calling the interfaces of the relevant terms, so it can also be regarded as an automatically executable program fragment. As a participant in the blockchain, a smart contract can receive and store value, and can also send out information and value.

-In the blockchain, smart contracts are very important and run through the entire blockchain application development process.。
+In the blockchain, smart contracts are very important and run through the entire blockchain application development process.

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5392.PNG)

-From another perspective, let's look at the importance of smart contracts in the execution of transactions.。
-First of all, in the transaction construction, we will carry out contract deployment and function calls, contract deployment depends on the binary encoding of smart contracts.。Function calls rely on the ABI of the smart contract, where the ABI is generated during the smart contract compilation phase.。
-Transaction signing is the signing of a constructed transaction, which is then broadcast and packaged on the network.。
-Before the transaction is executed, the contract deployment needs to be completed and the binary encoding of the smart contract is stored on the blockchain network.。During the transaction execution phase, the callback will also obtain the binary code of the entire smart contract, parse the corresponding binary fragment according to the constructed transaction, the binary fragment corresponds to the execution instruction set of the transaction, and the transaction will be executed according to the instruction set.。
+From another perspective, consider the importance of smart contracts in transaction execution.
+First, during transaction construction we perform contract deployment and function calls. Contract deployment depends on the smart contract's binary encoding; function calls rely on the smart contract's ABI, which is generated during the compilation phase.
+Transaction signing signs the constructed transaction, which is then broadcast and packaged on the network.
+Before a transaction executes, contract deployment must be complete and the contract's binary encoding stored on the blockchain network. During the execution phase, the node also retrieves the contract's full binary code and, according to the constructed transaction, parses out the corresponding binary fragment; this fragment is the transaction's instruction set, and the transaction executes according to it.

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5393.JPG)

-Thus, smart contracts are also very important in the entire transaction process.。
+Thus, smart contracts are also very important throughout the entire transaction process.

## **Types of smart contracts for the FISCO BCOS platform**

-The FISCO BCOS platform currently supports two main types of smart contracts: Solidity smart contracts and pre-compiled smart contracts.。
-Precompiled contracts are mainly used for the underlying blockchain platform, such as the implementation of system contracts.。At the application development level, we recommend using Solidity contracts and pre-compiled contracts based on the CRUD interface.。
-The biggest difference between the two is that the Solidity contract uses the EVM engine, and the precompiled contract and the precompiled contract developed based on the CRUD contract interface use the precompiled engine.。There is a very big difference in execution effectiveness between the two engines, which will be described in detail later.。
-The Solidity contract originated from Ethereum and is now supported on many platforms。Similar to other development languages, writing a Solidity contract requires writing the contract name before defining its state variable, which is similar to a member variable in the java language, except that it defines modifiers, which are used for conditional or permission verification.。
+The FISCO BCOS platform currently supports two main types of smart contracts: Solidity smart contracts and precompiled smart contracts.
+Precompiled contracts are mainly used by the underlying blockchain platform, for example to implement system contracts. At the application development level, we recommend Solidity contracts and precompiled contracts based on the CRUD interface.
+The biggest difference between the two is the execution engine: Solidity contracts run on the EVM, while precompiled contracts, including those developed against the CRUD contract interface, run on the precompiled engine. The two engines differ greatly in execution efficiency, as described in detail later.
+The Solidity language originated from Ethereum and is now supported on many platforms. As in other development languages, writing a Solidity contract starts with the contract name, followed by its state variables, which resemble member variables in Java; it also defines modifiers, which are used for condition or permission checks.

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5394.PNG)

-This is followed by the definition of function events, which focus on the execution of method calls to facilitate business-level monitoring of the transaction execution of smart contracts.。Defining a constructor is the same as creating an instantiated object of a class。
-Finally, we'll do some operations or business processing on the defined state variables, and we'll need to write some contract functions.。This is the structure of the Solidity contract.。
-The advantage of the Solidity contract is that it covers a wide range of users and applications, it is powerful, and after years of development, it has gradually matured and stabilized.。
+Next come event definitions, which record the execution of method calls so that the business layer can monitor the contract's transaction execution. Defining a constructor is like creating an instantiated object of a class.
+Finally, to operate on the defined state variables or implement business processing, we write contract functions. That is the structure of a Solidity contract.
+The advantages of Solidity contracts are a broad base of users and applications and powerful features; after years of development, the language has gradually matured and stabilized.

-However, compared to native contracts, the Solidity language has a certain learning threshold for developers.;At the same time, EVM needs to be used during execution, performance is limited, and EVM objects have a large memory overhead.;Finally, the data and logic of smart contracts are relatively coupled, making it difficult to upgrade contracts and expand storage capacity.。
+However, compared with native contracts, the Solidity language has a learning curve for developers; execution requires the EVM, which limits performance, and EVM objects carry a large memory overhead; finally, a contract's data and logic are tightly coupled, making contract upgrades and storage expansion difficult.

FISCO BCOS designed precompiled contracts to address these shortcomings of Solidity contracts.

-Pre-compiled contracts also have some shortcomings, such as assigning some fixed contract addresses, compiling the underlying source code。In order to solve these problems, we have developed the CRUD contract interface, the development process as long as the user inherits the Table contract, by introducing the abstract interface file Table.sol can develop pre-compiled contracts based on the CRUD interface.。
+Precompiled contracts have their own drawbacks, such as fixed contract addresses and the need to compile the underlying source code. To solve these problems, we developed the CRUD contract interface: the user simply inherits the Table contract and, by importing the abstract interface file Table.sol, can develop precompiled contracts based on the CRUD interface.

-Pre-compiled contracts based on the CRUD interface are not particularly different from Solidity contracts in nature.。There are three main differences:
+Precompiled contracts based on the CRUD interface are not fundamentally different from Solidity contracts. There are three main differences:

-1. The Table.sol contract interface needs to be introduced.。
-2. When conducting an on-chain transaction, first create a table with the functionality provided by the interface, so that the data and logic can be separated.。
-3. During the operation of contract-related state variables, use the Table contract-related interface to manipulate contract data.。For example, insert through the Table.insert interface.。
+1. The Table.sol contract interface must be imported.
+2. Before putting business data on chain, first create a table using the functionality the interface provides, so that data and logic are separated.
+3. When operating on contract state, use the Table contract interfaces to manipulate contract data; for example, insert data through the Table.insert interface.

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5395.PNG)

The advantages of precompiled contracts based on the CRUD interface are clear:

1. Database-like, interface-oriented programming, which lowers the learning threshold and cost.
-2. The bottom layer is executed by the reservation engine and can be executed in parallel, so its performance is very high.。
-3. The underlying layer stores data in the form of tables, separating data and logic to facilitate contract upgrades and storage expansion.。
+2. The bottom layer is executed by the precompiled engine and can run in parallel, so performance is very high.
+3. The underlying layer stores data as tables, separating data from logic, which facilitates contract upgrades and storage expansion.

But it also has some shortcomings.

1. It is tied to the FISCO BCOS platform and is not cross-platform.
-2. Applicable to some scenarios with simple business logic, such as the certificate deposit business.。
+2. It suits scenarios with simple business logic, such as the evidence-storage business.

## 16 tips for writing smart contracts quickly

In smart contract development, we often face three mountains.

-1. Contract security。Security is the foundation and lifeblood of smart contracts and blockchain applications.。Throughout the history of blockchain development, there have been many incidents that have caused significant losses to users and platforms due to smart contract vulnerabilities.。
-2. Contract Performance。Performance is an important indicator to measure the availability of blockchain applications, which determines the load capacity and user experience of the system.。
-3. Scalability。Scalability is an effective means for smart contracts and blockchain application systems to respond to business changes and upgrades, ensuring the timeliness and cost of system upgrades.。
+1. Contract security. Security is the foundation and lifeblood of smart contracts and blockchain applications. Throughout blockchain history, many incidents caused by smart contract vulnerabilities have inflicted significant losses on users and platforms.
+2. Contract performance. Performance is a key measure of a blockchain application's availability; it determines the system's load capacity and user experience.
+3. Scalability. Scalability is how smart contracts and blockchain application systems respond to business changes and upgrades, keeping system upgrades timely and affordable.

-For these three mountains, we have compiled 16 tips to help you quickly get started with smart contract development.。
+To tackle these three mountains, we have compiled 16 tips to help you get started quickly with smart contract development.

### Contract Safety

Here we summarize several smart contract security issues.

-- Program error: Incorrect initialization method, variable hiding causing mix-up
-- Insufficient checks: Insufficient permissions and boundary checks
-- Logical defects: the block can be manipulated, re-entry attacks
+- Program errors: wrong initialization methods; variable shadowing causing mix-ups
+- Insufficient checks: missing permission and boundary checks
+- Logic flaws: block production can be manipulated; reentrancy attacks
- Malicious contracts: tx.origin scams, RTLO character attacks

For these security issues in smart contract development, here are a few suggestions.

-#### Tip 1: Do a good job of encrypting private data.
+#### Tip 1: Do a good job of encrypting private data

-The data on smart contracts is completely transparent, so some data privacy protection schemes are needed to ensure data security。For example, the data on the chain is encrypted by hash, homomorphic encryption or zero knowledge proof.。
+Data in smart contracts is completely transparent, so data privacy protection schemes are needed to keep data secure. For example, data can be put on chain as a hash, or protected with homomorphic encryption or zero-knowledge proofs.

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5396.PNG)

-The two methods in this contract are simple: add an employee。In the above method, the contract-related information is written directly, while the following method writes the contract hash.。Encrypt contracts to ensure user privacy。We recommend the second way。
+The two methods in this contract both simply add an employee. The first writes the contract-related information directly, while the second writes only the contract's hash, encrypting the contract to protect user privacy. We recommend the second way.

#### Tip 2: Set the visible range of state variables and functions reasonably

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5397.PNG)

-Here are two modifyScores, one using public and the other using internal。The difference between the two is that the following Test contract inherits the Base contract and uses onlyOwner to call modifyScore in testFunction.。
-If you directly use the public modifier, it does not work here, it will not check onlyOwner, because the public function method is exposed to the outside world, the user does not need to call onlyOwner through testFunction, but directly call。So be sure to pay attention to the visible range of state variables and functions.。
+Here are two modifyScore functions, one public and one internal. The difference is that the Test contract below inherits the Base contract and calls modifyScore, guarded by onlyOwner, inside testFunction.
+With the public modifier the guard does not work: onlyOwner is never checked, because a public function is exposed to the outside world, so a user can call modifyScore directly instead of going through testFunction. So be sure to pay attention to the visibility of state variables and functions.

#### Tip 3: Function Permissions and Variable Boundary Checking

@@ -97,73 +97,73 @@ Here's an example of variable boundary checking:

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5398.PNG)

-We want to add points to the student's score, assuming that its type is uint8, in Solidity uint8 according to the data conversion is 0 to 255 data range, if you do not do verification to add the two, may cause finalScore direct overflow, resulting in incorrect results.。
-So we need to check the legality after addition or similar variable operations.。If you add a require condition here, you can check in time to ensure the correctness of the business logic.。
+We want to add points to a student's score of type uint8; in Solidity, uint8 holds values from 0 to 255. Adding the two without verification may overflow finalScore and produce an incorrect result.
+So we need to check validity after addition or similar operations on variables. Adding a require condition here catches the problem in time and keeps the business logic correct.

#### Tip 4: Learn to use security tools

-Use Securify, Mythx, Slither and other tools to scan smart contracts for security, there are many such tools, some of which are completely free, you can learn about and try to use the tools you are interested in。Other smart contract security implementation techniques will not be repeated here.。
+Use tools such as Securify, MythX, or Slither to scan smart contracts for security issues. There are many such tools, some completely free; explore and try the ones that interest you. Other smart contract security techniques are not repeated here.

### Contract Performance

Ensuring the performance of smart contracts is critical: if the system's performance does not meet requirements, the availability of the entire system suffers.

-Performance depends on the use of machine resources during code execution.。Machine resources mainly include CPU, memory, network, etc.。
-Unlike centralized systems, blockchain requires each node to execute each transaction during the consensus phase, and the machine configuration of each node may be different, while the shortest machine affects the performance of the entire blockchain network。So under certain machine configuration, some resource consumption can be saved through smart contracts.。
-Here we also give you some optimization suggestions.。
+Performance depends on how machine resources are used during code execution. Machine resources mainly include CPU, memory, and network.
+Unlike centralized systems, blockchain requires every node to execute every transaction during the consensus phase, and node configurations may differ, so the weakest machine limits the performance of the entire blockchain network. Under a given machine configuration, well-written smart contracts can save resources.
+Here are some optimization suggestions.

#### Tip 5: Reduce CPU overhead by reducing unnecessary computation and validation logic

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5399.PNG)

Both methods sum the squares of an array, squaring each number in errorMethod.

-The following method does not calculate and directly outputs the results, because a large number of complex calculations will have a great impact on the blockchain network and performance, it is recommended that such complex calculations and verification logic do not need to appear in the blockchain smart contract, but in the chain or business system implementation.。
+The second method skips the calculation and outputs the results directly. Because heavy, complex calculations weigh on the blockchain network and its performance, such calculation and verification logic should not live in the smart contract; implement it off chain or in the business system instead.

#### Tip 6: Reduce unnecessary data and reduce memory, network, and storage overhead

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5400.PNG)

-The state variables of the two companies are defined here, and the companyInduction variable in Company1 holds the company's profile.。This is commented out in Company2。This is because the company profile does not have much impact on the on-chain transaction logic, but from a performance perspective, it occupies a large amount of blockchain network node memory, network and storage overhead, thus causing great pressure on the overall network performance.。
-Therefore, it is suggested that you only need to link the core data and lightweight data associated with the business.。
+State variables for two company contracts are defined here; the companyInduction variable in Company1 holds the company profile, which is commented out in Company2. The company profile has little effect on on-chain transaction logic, but from a performance perspective it consumes substantial node memory, network, and storage, putting great pressure on overall network performance.
+Therefore, put on chain only the core, lightweight data associated with the business.

#### Tip 7: Use different forms of data assembly to reduce cross-contract calls

-In cross-contract calls, the blockchain network node will rebuild an EVM, a time-consuming process that has a significant impact on memory overhead and blockchain network performance.。
-Therefore, it is recommended to use different forms of data assembly flexibly, for example, using structs to avoid cross-contract calls, thus saving blockchain node memory, network and time overhead.。
+For a cross-contract call, the blockchain node rebuilds an EVM, a time-consuming process with significant memory overhead and impact on network performance.
+Therefore, assemble data flexibly, for example using structs, to avoid cross-contract calls, saving node memory, network, and time overhead.

#### Tip 8: Reduce cross-contract calls through advanced features provided by smart contracts, such as inheritance

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5401.PNG)

-The above two examples, one is to introduce direct calls through contracts, and the other is to call getName in contracts through inheritance.。
-Inheritance in a smart contract refers to the compilation phase, where all the parent contract code is copied into the child contract for compilation.。That is, in the final contract, the parent contract is integrated into the child contract。When the parent contract is called, it is not a cross-contract call.。
+In the two examples above, one imports a contract and calls it directly; the other calls getName through inheritance.
+Inheritance in a smart contract happens at compilation: all parent contract code is copied into the child contract and compiled together. That is, in the final contract the parent is merged into the child, so calling the parent's functions is not a cross-contract call.

#### Tip 9: Change the data type and learn to trade space for time

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5402.PNG)

-The above example avoids the use of arrays by mapping and improves query performance.。However, according to past experience, mapping takes up four times as much space as an array, and whether it is used depends on the specific needs of the business: for performance reasons, mapping can be used to change the data type to improve the efficiency of smart contract execution.。
+The example above replaces an array with a mapping and improves query performance. However, experience shows a mapping takes about four times the space of an array; whether to use one depends on the business. When performance matters, switching the data type to a mapping can improve the contract's execution efficiency.

#### **Tip 10: Compact state variable packaging to reduce memory and storage overhead**

-What is a compact state variable?The execution of smart contracts in the EVM is based on the stack, which has corresponding card slots, each of which is about 32 bits.。If you do not pay attention to the order of variables, it will occupy more card slots and consume more resources.。The following gasUsed is the cost of computer resources。
+What is compact state variable packing? Smart contract execution in the EVM is stack-based, with storage slots of 32 bytes each. If you do not pay attention to variable order, the contract occupies more slots and consumes more resources. The gasUsed below is the cost in computing resources.

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5403.PNG)

-In the above example, if you define a bytes1, bytes31, bytes32, the same is occupied by 64 bytes, here because bytes1 and bytes31 in the same position, EVM automatically into a card slot。In the wrong way, the EVM is placed in two different card slots.。Therefore, the upper structure occupies two card slots, and the lower one occupies three card slots. They use different resources. In actual operation, you need to pay attention to these details。
+In the example above, declaring bytes1, bytes31, bytes32 in that order still occupies only 64 bytes, because bytes1 and bytes31 are adjacent and the EVM automatically packs them into one slot. Ordered the wrong way, the EVM places them in two different slots. Thus the first layout occupies two slots and the second occupies three; they consume different resources. In practice, pay attention to these details.

#### Tip 11: Pay attention to function modifiers to reduce unnecessary execution

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5404.PNG)

-Function modifiers generally include pure, view, etc. If these modifiers are not added, the blockchain network will automatically understand the smart contract as a transaction。According to the definition of Ethereum Yellow Book, query operations are calls, and changes to state data can be understood as transactions。
-Transactions need to go through the execution, consensus process, call without.。In a smart contract, if the view modifier is set, it is a call that does not need to execute consensus and enter the EVM, but directly queries the local node data, so the performance will be greatly improved.。
+Function modifiers include pure, view, and so on. Without these modifiers, the blockchain network treats the smart contract invocation as a transaction. Per the Ethereum Yellow Paper, query operations are calls, while changes to state data are transactions.
+Transactions must go through execution and consensus; calls do not. In a smart contract, a function marked view is a call: it skips consensus and directly queries the local node's data, so performance improves greatly.

### Expandable

-In the smart contract development process, after the deployment of the chain, the upgrade of the smart contract is a very complex matter.。The value of smart contract scalability lies in the following:
+Once a smart contract is deployed on chain, upgrading it is very complex. The value of smart contract scalability lies in the following:

-- minimize cost (time+manpower) for business upgrades
-- As an emergency treatment method for system abnormality
+- Minimize the cost (time + manpower) of business upgrades
+- Serve as an emergency handling path for system anomalies
- Easy for others to take over and maintain

Here are some tips to help you improve smart contract scalability.

@@ -172,16 +172,16 @@ Here are some tips to help you improve smart contract scalability。

![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5405.PNG)

-The smart contract on the left manages the results through setScore, but if you want to add other attributes to the student's results, the entire contract needs to be redefined and deployed, resulting in the inability to use the data on the previous chain.。
+The smart contract on the left manages scores through setScore, but adding other attributes to a student's record means redefining and redeploying the entire contract, and the data already on chain can no longer be used.
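To make the coupling concrete, here is a minimal, hypothetical Solidity sketch (contract and field names are illustrative, not taken from the article) of such a monolithic contract, where data and logic live together so that any logic change forces redeployment and orphans the stored data:

```solidity
pragma solidity ^0.4.25;

// Hypothetical monolithic contract: data and logic are coupled.
contract ScoreAll {
    // Score data lives inside the same contract as the logic.
    mapping(string => uint256) private scores;

    // Adding a new attribute (e.g. a studentId) means changing and
    // redeploying this contract; the old instance's `scores` mapping
    // stays at the old address and can no longer be used.
    function setScore(string name, uint256 score) public {
        scores[name] = score;
    }

    function getScore(string name) public view returns (uint256) {
        return scores[name];
    }
}
```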
Adopting a three-tier architecture can solve this problem。 -First, we put the data separately in the Score contract, manage it through datamap, and then operate on the data through the ScoreManager.。This is the classic three-tier architecture, which ensures the separation of logic and data.。 -If you want to add other fields in Score, such as studentid, we only need to update the Score contract and the ScoreManager contract to be compatible.。Since the data in the datamap is completely immutable, we only need to perform different logical processing on different data entities in the Manage contract to ensure the scalability of the contract.。 +First, we put the data separately in the Score contract, manage it through datamap, and then operate on the data through the ScoreManager。This is the classic three-tier architecture, which ensures the separation of logic and data。 +If you want to add other fields in Score, such as studentid, we only need to update the Score contract and the ScoreManager contract to be compatible。Since the data in the datamap is completely immutable, we only need to perform different logical processing on different data entities in the Manage contract to ensure the scalability of the contract。 #### Tip 13: Abstract General Logic ![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5406.PNG) -In the contract on the left, each contract has the onlyOwner modifier, and if 10 contracts use this modifier at the same time, the maintenance cost will be very high.。Therefore, it is recommended that you refactor the Base contract and inherit it in a specific business contract so that it can be reused.。This is the extensibility of the abstract general logic implementation contract.。When the next contract is upgraded, simply modify the Base contract。 +In the contract on the left, each contract has the onlyOwner modifier, and if 10 contracts use this modifier at the same time, the maintenance cost will be very high。Therefore, it is 
recommended that you refactor the shared logic into a Base contract and inherit it in each business contract so that it can be reused. This is how abstracting general logic makes a contract extensible: at the next upgrade, simply modify the Base contract.
#### Tip 14: Modular Programming: Single Responsibility Model
@@ -189,20 +189,20 @@ The single responsibility model is a coding specification。
![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5407.PNG)
-The Rolemanager on the top left includes two roles, account and company, as well as operations on both, which clearly violates the single responsibility model。Once the operation of the account is modified, the operation of the company will also change, resulting in a greater impact.。
-Separate the operations of different entities by means of the lower right.。When the account operation is modified, the company operation is not affected, thereby reducing the maintenance cost of the smart contract。
+The Rolemanager on the top left includes two roles, account and company, plus the operations on both, which clearly violates the single responsibility model. Once account operations are modified, company operations change as well, so the impact of a change is larger.
+Instead, separate the operations of different entities as shown on the lower right. Then, when account operations are modified, company operations are unaffected, reducing the contract's maintenance cost.
#### Tip 15: Try to reuse mature libraries
-The first benefit of reusing mature libraries as much as possible is to improve the efficiency of smart contract development.;The second is to reduce the loopholes in the writing of smart contracts, after all, the mature library is summed up by a large number of previous business practices, and its security is guaranteed.。
+The first benefit of reusing mature libraries as much as possible is higher smart contract development efficiency; the
second is to reduce the loopholes in the writing of smart contracts; after all, a mature library distills a large amount of prior business practice, so its security is well proven.
#### Tip 16: Reserve free fields appropriately
![](../../../../images/articles/16skills_to_high-level_smart_contracts/IMG_5408.PNG)
-In the ScoreManager contract above, the Score structure consists of two fields, the score itself and the status score status.。If you need to add studentid or other comments, you need to redeploy and upgrade the smart contract。Therefore, its usability can be improved by adding a reserved field for resever。But in fact, this way will also affect the safety and performance。
+In the ScoreManager contract above, the Score struct has two fields: the score itself and its status. If you later need to add a studentid or other comment fields, you must redeploy and upgrade the smart contract. Usability can therefore be improved by adding a reserved field (named resever in the figure). Note, however, that this approach also affects security and performance.
-Today, I mainly share with you two smart contracts of FISCO BCOS.。At the same time, for smart contract security, performance and scalability, to provide the corresponding development skills。The development process of smart contracts is a game of security, performance and scalability.。Developers should choose the applicable skills and solutions according to the actual business needs.。
+Today, I mainly shared two FISCO BCOS smart contracts, together with development tips for smart contract security, performance and scalability. Smart contract development is a trade-off among security, performance and scalability; developers should choose the techniques and solutions that fit their actual business needs.
------
@@ -210,7 +210,7 @@ Today, I mainly share with you two smart contracts of FISCO BCOS.。At the same
**Q** Contract upgrade and redeployment, how to
make data reusable? With two contract addresses, my SDK layer needs to adapt.
- **A** After the contract is logically separated from the data through the three-tier model, if the contract needs to be upgraded, different contract data needs to be processed differently at the data processing layer, and the SDK level also needs to be adapted.。
+ **A** After logic and data are separated through the three-tier model, upgrading a contract means the data processing layer must handle each contract's data differently, and the SDK layer must also be adapted.
**Q** : For a uint256, how many slots are used in the EVM stack?
@@ -220,12 +220,12 @@ Today, I mainly share with you two smart contracts of FISCO BCOS.。At the same
**Q** : Can smart contracts do fuzzy queries? How should traceability be handled?
- **A** Fuzzy queries and historical data queries on the blockchain are not recommended because the blockchain is not suitable for large data processing.。At present, we provide data export tools, which are open source and can help businesses quickly process big data.。
+ **A** Fuzzy queries and historical data queries on the blockchain are not recommended, because the blockchain is not suited to heavy data processing. We currently provide open-source data export tools that help businesses process big data quickly.
**Q** : How do you handle percentages safely in smart contracts?
- **A** Smart contracts do not have decimal types, they can be multiplied by 100 or 1000 according to the accuracy before they are chained, and they can be processed under the chain by dividing them.
+ **A** Smart contracts have no decimal types. Values can be multiplied by 100 or 1000, according to the required precision, before going on-chain, and divided back off-chain when read.
**Q** Can you bind a smart contract to run on a specific node? Can the same smart contract open multiple instances on a node?
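The scaled-integer approach from the percentages answer above can be sketched in Python (the helper names and the SCALE constant are illustrative assumptions, not FISCO BCOS APIs):

```python
SCALE = 10000  # 4 decimal digits of precision: 12.34% is stored as 123400

def to_chain(percent: float) -> int:
    """Multiply by the scale factor before writing the value on-chain."""
    return round(percent * SCALE)

def from_chain(value: int) -> float:
    """Divide the scale factor back out when reading the value off-chain."""
    return value / SCALE

def apply_percent(amount: int, percent_scaled: int) -> int:
    """Integer-only arithmetic, as it would run inside a contract."""
    return amount * percent_scaled // (100 * SCALE)

assert to_chain(12.34) == 123400
assert apply_percent(1000, 123400) == 123  # 12.34% of 1000, truncated
```

Multiplying before dividing keeps the intermediate result exact, which matters because integer division truncates.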
diff --git a/3.x/en/docs/articles/3_features/35_contract/abi_of_contract.md b/3.x/en/docs/articles/3_features/35_contract/abi_of_contract.md
index 68760443f..e598bab40 100644
--- a/3.x/en/docs/articles/3_features/35_contract/abi_of_contract.md
+++ b/3.x/en/docs/articles/3_features/35_contract/abi_of_contract.md
@@ -4,26 +4,26 @@ Author : WANG Zhang | FISCO BCOS Core Developer
## Introduction
-When the contract interface is called, you can send a transaction to the blockchain and obtain the transaction receipt, which saves the input parameters, output, Event log, execution status and other information of the transaction.。An example of a transaction receipt is shown in the following figure.
+When a contract interface is called, a transaction is sent to the blockchain and a transaction receipt is returned; the receipt records the transaction's input parameters, output, event logs, execution status and other information. An example of a transaction receipt is shown in the following figure.
![](../../../../images/articles/abi_of_contract/IMG_5500.PNG)
-[Transaction Receipt Details](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/api.html#gettransactionreceipt)如下。 In the transaction receipt, the input and output fields can represent the input parameters of the transaction and the return value after the EVM executes the transaction, respectively.。
+The [transaction receipt details](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/api.html#gettransactionreceipt) are as follows. In the receipt, the input and output fields carry the transaction's input parameters and the return value after the EVM executes the transaction, respectively.
## What is contract ABI?
-"Contract ABI is the standard way to interact with contracts in the Ethereum ecosystem, whether it's external client interaction with contracts or contract-to-contract interaction.。The above is the definition given by the official Ethereum document, which is more popular and contains two aspects:
+"Contract ABI is the standard way to interact with contracts in the Ethereum ecosystem, both from outside the blockchain and for contract-to-contract interaction." This definition from the official Ethereum documentation covers two aspects:
-1. ABI is a description of the contract interface.。
-2. ABI defines data encoding rules for interacting with contracts.。
+1. ABI is a description of the contract interface.
+2. ABI defines the data encoding rules for interacting with contracts.
-Below we will explain ABI from these two aspects.。
+Below we explain ABI from these two aspects.
### ABI Interface Description
-ABI is the description of the contract interface, including the contract interface list, interface name, parameter name, parameter type, return type, etc.。This information is saved in JSON format and can be generated by the contract compiler when the solidity file is compiled.(https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/console/console.html#id12)。
+ABI describes the contract interface, including the interface list, interface names, parameter names, parameter types, return types, and so on. This information is saved in JSON format and can be [generated by the contract compiler](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/console/console.html#id12) when the Solidity file is compiled.
-Take the Asset.sol contract as an example.:
+Take the Asset.sol contract as an example:
![](../../../../images/articles/abi_of_contract/IMG_5501.PNG)
@@ -164,7 +164,7 @@ Take the Asset.sol contract as an example.:
]
```
-As you can see, the ABI is a JSON object array that contains
information about the interface and Event.。
+As you can see, the ABI is a JSON array of objects describing the contract's interfaces and events. The transfer interface of the Asset contract and its ABI are as follows.
#### Interface:
@@ -194,15 +194,15 @@ Assuming that the user needs to call the transfer interface of the Asset contrac
`BigInteger amount = 10000;`
-How the user passes these parameters to the EVM that finally executes the transaction, so that the EVM knows that the interface called by the user is the transfer interface, and the EVM can correctly read the parameters entered by the user?The return value of EVM and how the user should use it.?
+How does the user pass these parameters to the EVM that ultimately executes the transaction, so that the EVM knows the called interface is transfer and can correctly read the user's input? And how should the user consume the EVM's return value?
This is another role of ABI: defining the encoding format of the data.
-Here, the input field of the transaction receipt in the introduction is used as an example to analyze the input code of the transaction.
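These encoding rules can be sketched in Python. The head/tail layout and 32-byte padding below follow the Solidity ABI specification; the selector constant 9b80b050 for transfer(string,string,uint256) is taken from the receipt shown in this article (normally it is the first 4 bytes of keccak256 of the signature), and the helper names are hypothetical:

```python
def enc_uint256(n: int) -> bytes:
    # A static type occupies one 32-byte word, big-endian, left-padded with zeros.
    return n.to_bytes(32, "big")

def enc_string(s: str) -> bytes:
    # A dynamic type: a length word, then the bytes right-padded to a 32-byte boundary.
    data = s.encode("utf-8")
    return enc_uint256(len(data)) + data + b"\x00" * (-len(data) % 32)

def encode_transfer(from_acc: str, to_acc: str, amount: int) -> str:
    # Head: one 32-byte slot per argument. A dynamic argument's slot holds the
    # byte offset of its tail; a static argument is encoded inline.
    tails = [enc_string(from_acc), enc_string(to_acc)]
    head_size = 3 * 32
    head = (enc_uint256(head_size)                    # offset of from_acc's tail
            + enc_uint256(head_size + len(tails[0]))  # offset of to_acc's tail
            + enc_uint256(amount))                    # uint256 encoded inline
    selector = "9b80b050"  # from the receipt above; first 4 bytes of keccak256 of the signature
    return "0x" + selector + (head + tails[0] + tails[1]).hex()
```

encode_transfer("Alice", "Bob", 10000) reproduces the input field of the receipt: offsets 0x60 and 0xa0, the value 0x2710, then the length-prefixed, padded strings.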
+Here, we take the input field of the transaction receipt from the introduction as an example and analyze how the transaction input is encoded.
`"input": "0x9b80b050000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000027100000000000000000000000000000000000000000000000000000000000000005416c6963650000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000003426f620000000000000000000000000000000000000000000000000000000000"`
-Input data can be divided into two parts: function selector and parameter encoding.。
+Input data can be divided into two parts: the function selector and the parameter encoding.
### 1. Function Selector
@@ -214,7 +214,7 @@ In the transfer interface call:
### 2. Parameter coding
-Encoding of parameters(Decoding also applies)You need to combine the content of the ABI description information and encode the parameters according to the list of interface types in the ABI description information.。
+Parameter encoding (and likewise decoding) relies on the ABI description information: parameters are encoded according to the interface's type list in the ABI description.
#### List of transfer types:
@@ -238,19 +238,19 @@ Merge the Function selector with the parameter encoding to get the input。
### Why contract ABI?
-As can be seen from the definition of ABI, ABI is a standard form of interaction with contracts, which is equivalent to defining the interface protocol specification for access contracts, unifying the form of interaction between contracts and contracts, and between clients and contracts on different platforms.。
+As the definition shows, ABI is the standard form of interaction with contracts: it defines the interface protocol specification for accessing contracts, unifying how contracts interact with each other and how clients on different platforms interact with contracts.
### Limitations of Contract ABI
Here are some of the limitations of contract ABI coding:
-- The rules of the ABI encoding itself are complex, which makes it more difficult for users to implement, but except for individual ABI library authors, ordinary users do not need to implement them themselves.。
-- ABI's encoding forces 32-byte alignment on all data encodings, which eventually need to be persisted with the transaction, wasting a lot of storage space.。
-- Difficult to upgrade: When ABI adds new type support or even new rules, the implementation of all platforms needs to be upgraded, and these new features are not necessarily easy to support on some platforms.。For example: ABIEncoderV2 So far, the support of each library is still not very perfect.。
+- The ABI encoding rules themselves are complex and therefore hard to implement; however, apart from the authors of ABI libraries, ordinary users never need to implement them.
+- ABI encoding forces 32-byte alignment on all data, and the encoded data must eventually be persisted with the transaction, wasting considerable storage space.
+- Hard to upgrade: when ABI adds new type support or even new rules, the implementation on every platform needs to be upgraded.
These new features are not always easy to support on every platform. For example, library support for ABIEncoderV2 is still imperfect today.
# SUMMARY
-This paper introduces the concept of contract ABI, the JSON description information of ABI and ABI codec, and finally analyzes the advantages and limitations of ABI codec, so that users have a preliminary understanding of contract ABI.。If you have more in-depth requirements, you can check [ABI's official document](https://solidity.readthedocs.io/en/develop/abi-spec.html)。
+This article introduced the concept of contract ABI, the ABI JSON description and the ABI codec, then analyzed the codec's advantages and limitations, giving readers a first understanding of contract ABI. For deeper requirements, see the [official ABI specification](https://solidity.readthedocs.io/en/develop/abi-spec.html).
------
diff --git a/3.x/en/docs/articles/3_features/35_contract/contract_design_practice_deposit&points_scene.md b/3.x/en/docs/articles/3_features/35_contract/contract_design_practice_deposit&points_scene.md
index 45f3101d7..a987e9c31 100644
--- a/3.x/en/docs/articles/3_features/35_contract/contract_design_practice_deposit&points_scene.md
+++ b/3.x/en/docs/articles/3_features/35_contract/contract_design_practice_deposit&points_scene.md
@@ -4,145 +4,145 @@ Author : MAO Jiayu | FISCO BCOS Core Developer
## Scenario 1: Blockchain+Compilation of authority contract for certificate of deposit
-Electronic data storage is a record of "user authentication."-Data creation-Storage-The whole process of transmission, the application of a series of security technologies to ensure the authenticity, integrity and security of electronic data, with complete legal effect in the judiciary.。
+Electronic data evidence storage records the whole process of "user authentication - data creation - storage - transmission," applying a series of security technologies to ensure the
authenticity, integrity and security of electronic data in an all-round way, and carries full legal effect in the judicial system.
![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5409.PNG)
The following features of blockchain technology help reduce costs, improve efficiency, and ensure the security of stored data.
-- Improved tamper-proof mechanism: using blockchain technology to preserve evidence, further strengthening the immutability of evidence;
-- The validity of the evidence is recognized by the institution: the judiciary, as the node on the chain, participates in the recognition and signature of the chain data, and can subsequently confirm the true validity of the data from the chain.;
-- The service continues to be effective: after the data is linked by multi-party consensus, even if some of the consensus parties exit, the data will not be lost or invalidated.。
+- Improved tamper-proofing: using blockchain technology to preserve evidence further strengthens its immutability;
+- Institutional recognition of evidence validity: the judiciary participates as a node on the chain, recognizing and signing the on-chain data, whose validity can later be confirmed from the chain;
+- Continuously effective service: once data reaches multi-party consensus on the chain, it is not lost or invalidated even if some consensus parties exit.
### Brief business process of certificate deposit scenario
-Three types of typical users can be abstracted in the certificate storage scenario.**Depository, Auditor and Forensics**。
+Three typical user roles can be abstracted in the certificate storage scenario: **Depository, Auditor and Forensics**.
-- The depositary submits an application for the need to deposit the certificate.。
-- The reviewer reviews and signs the certificate data based on the content.。In actual business scenarios, the reviewer may be involved in the
multi-sign process of voting and multi-party review.。
+- The depositor submits an application for the certificate to be stored.
+- The reviewer reviews and signs the certificate data based on its content. In actual business scenarios, this may involve a multi-signature process of voting and multi-party review.
+- After the certificate is on the chain, the forensics party can query the depositor's address, the timestamp, the audit details and other relevant information at any time for verification.
![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5410.PNG)
### Example Explanation of Permission Contract in Certificate Deposit Scenario
-Let's explain it with the permission contract in the deposit scenario.。
+Let's walk through the permission contract in the deposit scenario.
#### Summary Design of Deposit Contract
-**First separate the logic and data layers**。Because the Solidity smart contract language does not have an independent data layer, in order to facilitate the subsequent expansion and upgrade of the contract, the logic and data layer need to be separated, as reflected in the figure below is to distinguish the data layer and the control layer.。
+**First, separate the logic and data layers.** Because Solidity has no independent data layer, the logic and data layers need to be separated to ease later extension and upgrading of the contract; in the figure below this appears as the distinction between the data layer and the control layer.
-**Secondly, the introduction of permission layer**。All nodes on a consortium chain have free access to the data on the chain, and smart contracts provide a decorator mechanism that controls the access of contracts to designated authorized users, abstracting this layer according to the principle of
a single responsibility of the contract.。
+**Second, introduce a permission layer.** All nodes on a consortium chain can freely access the data on the chain; smart contracts provide a modifier mechanism that restricts contract access to designated authorized users, and we abstract this into its own layer following the contract single-responsibility principle.
+At the same time, we need to control permissions on the data layer so that the interfaces that write to it are not open to everyone; the data layer therefore depends on and imports the permission contract.
![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5411.PNG)
#### Permission contract example explanation
-Permission contracts are relatively simple, do not need to rely on other contracts, in many contract development needs, can be reused.。
+Permission contracts are relatively simple and depend on no other contracts, so they can be reused across many contract development needs.
![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5412.PNG)
-The Authentication contract first defines two member variables, the contract owner and the permission control mapping list acl.(access control list)。
+The Authentication contract first defines two member variables: the contract owner and the permission control mapping acl (access control list).
-- **owner**: Owner is automatically assigned to msg.sender during contract construction, which is the caller of the contract, and the decorator onlyOwner can be used to determine whether the subsequent caller is the creator of the original contract.。
-- **acl**: The acl variable is a mapping of the address to bool type. Let's look at the allow and deny functions.
Allow sets the bool value mapped by the address of the incoming parameter to true and deny to false. After setting, you can use the auth decorator to determine the contract visitor permissions.。
+- **owner**: owner is automatically assigned msg.sender, the caller, during contract construction; the onlyOwner modifier can then check whether a later caller is the contract's original creator.
+- **acl**: the acl variable maps an address to a bool. Looking at the allow and deny functions: allow sets the bool mapped to the given address to true, and deny sets it to false. Once set, the auth modifier can check a visitor's permissions.
##### Depository Data
-The following figure shows the evidence data layer, the code of the deposit data contract.。
+The following figure shows the evidence data layer: the code of the deposit data contract.
![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5413.PNG)
-EvidenceRepository is a depository data warehouse that inherits the rights contract, and the methods and modifiers in the rights contract can be used in the depository contract.。
+EvidenceRepository is the depository data warehouse. It inherits the permission contract, so the methods and modifiers of the permission contract can be used in the depository contract.
-- The deposit data contract defines a struct structure, EvidenceData, which is used to store the deposit data.。To simplify the model, we have defined only three core data fields: the Hash value of the depository data, the submitter address, and the depository timestamp, which can be extended according to the required fields in the actual business depository scenario.。
-- The mapping variable, which is the mapping relationship between byte32 and the structure, is actually the data of the structure mapped with the deposit hash as the main key.。The key is a hash, and the value is
the above structure. You can use the hash value to retrieve, query, and save the stored certificate data.。
-Only the core setData and getData functions are determined in the defined functions.。Note that the contract itself inherits the Authentication contract, so you can use the auth decorator in setData to control access only to authorized users, preventing malicious attacks or calls to the smart contract after it is deployed on the chain.。The getData function queries the stored data as a whole according to the incoming hash and returns。
+- The deposit data contract defines a struct, EvidenceData, used to store the deposit data. To keep the model simple, we define only three core fields: the hash of the depository data, the submitter address, and the depository timestamp; these can be extended with whatever fields the actual business scenario requires.
+- The mapping variable maps a bytes32 key to the struct; in effect, the struct's data is keyed by the depository hash. The key is a hash and the value is the struct above, so the hash can be used to store, retrieve and query the certificate data.
+- Only the core setData and getData functions appear among the defined functions. Note that the contract inherits the Authentication contract, so the auth modifier on setData restricts access to authorized users, preventing malicious attacks on or calls to the smart contract once it is deployed on the chain. The getData function looks up the stored record by the given hash and returns it.
-As you can see, all the depository data is saved to the data contract.。This can play a unified storage, unified management effect。Of course, this is not necessarily the optimal solution。
-In business scenarios, if the contract has a large amount of certificate data, it may become a performance bottleneck, and it is more reasonable to adopt a split design scheme.。
+As you can see, all the depository data is saved in the data contract, which gives unified storage and unified management. Of course, this is not necessarily the optimal solution.
+In business scenarios where a contract holds a large amount of certificate data, it may become a performance bottleneck, and a split (sharded) design is more reasonable.
##### Request Data
-When the depositary begins to submit the depositary data, it will not be written directly into the depositary warehouse, but will only be submitted after the signature of the auditor.。
+When the depositor submits depository data, it is not written directly into the depository warehouse; it is committed only after the reviewer signs off.
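This submit-then-approve flow — a request is written to the evidence store only once enough reviewers have signed — can be sketched off-chain in Python (the class and field names and the threshold mechanics here are illustrative assumptions, not the article's Solidity code):

```python
class SaveRequestBook:
    """Off-chain model of a request store with threshold-gated commits."""
    def __init__(self, voters, threshold):
        self.voters = set(voters)   # addresses allowed to review
        self.threshold = threshold  # approvals required before commit
        self.requests = {}          # pending: hash -> {submitter, approved set}
        self.evidence = {}          # committed: hash -> submitter

    def create(self, data_hash, submitter):
        assert data_hash not in self.requests, "request already exists"
        self.requests[data_hash] = {"submitter": submitter, "approved": set()}

    def vote(self, data_hash, voter):
        req = self.requests.get(data_hash)
        assert req is not None, "no such request"
        assert voter in self.voters, "not a reviewer"
        assert voter not in req["approved"], "reviewer already voted"
        req["approved"].add(voter)
        # Once approvals reach the threshold, persist the evidence and
        # drop the pending request.
        if len(req["approved"]) >= self.threshold:
            self.evidence[data_hash] = req["submitter"]
            del self.requests[data_hash]
```

Nothing reaches the evidence mapping until the threshold is met, which is the property the on-chain permission checks enforce.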
data.。The first part is the data structure, SaveRequest is a structure, which defines a detailed data structure of the depository request, including the core depository hash, submitters, and approved votes.。 -- ext is a description field. Status records the reviewers who have passed the voting signature. Threshold is the voting threshold. Voters is the address list of the reviewers. SaveRequests defines the mapping between the request hash and the request itself.。Similar to a depository data contract, in a request data contract, all request data is also stored in a single data warehouse contract.。 -- Finally, let's look at the two core functions: creating requests and voting reviews.。auth modifier control permissions in both functions。When creating a request, the function uses the require statement to check whether the request already exists.;When voting, the function uses the require statement to check whether the reviewer has voted, whether the request itself exists, and whether the reviewer is legal.。If the check is passed, the number of audit votes is increased by one, marking that the auditor has signed。 +- The contract structure of the requested data is consistent with the deposited data。The first part is the data structure, SaveRequest is a structure, which defines a detailed data structure of the depository request, including the core depository hash, submitters, and approved votes。 +-ext is a description field, status records the reviewers who have passed the voting signature, threshold is the voting threshold, voters is the list of addresses of the reviewers, saveRequests defines the mapping between the request hash and the request itself。Similar to a depository data contract, in a request data contract, all request data is also stored in a single data warehouse contract。 +- Finally, look at the two core functions: create request and vote review。auth modifier control permissions in both functions。When creating a request, the function uses the require 
statement to check whether the request already exists;When voting, the function uses the require statement to check whether the reviewer has voted, whether the request itself exists, and whether the reviewer is legal。If the check is passed, the number of audit votes is increased by one, marking that the auditor has signed。 ##### Controller ![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5415.PNG) -The controller introduces two data warehouse contracts, and we can complete all user interface interactions by simply calling the controller;Its constructor parameter variables contain the parameters required to request contract construction: the list of auditors and the voting threshold, and this constructor automatically constructs and creates the contract.。 +The controller introduces two data warehouse contracts, and we can complete all user interface interactions by simply calling the controller;Its constructor parameter variables contain the parameters required to request contract construction: the list of auditors and the voting threshold, and this constructor automatically constructs and creates the contract。 -The controller defines two methods, one is to create a certificate request, and the other is for the reviewer to vote based on the request.。 -The creation request function is simpler and will directly call the creation request function in the request data warehouse contract.。 -Dealing with voting functions is relatively complex。After verifying that the hash data is not empty, the audit interface will be called, and if the audit is successful, it will trigger a check to see if the number of passes of the current request exceeds the threshold, and once it does, it will be automatically saved to the certificate data contract and the request will be deleted.。 +The controller defines two methods, one is to create a certificate request, and the other is for the reviewer to vote based on the request。 +The creation request function 
is simpler and will directly call the creation request function in the request data warehouse contract。 +Dealing with voting functions is relatively complex。After verifying that the hash data is not empty, the audit interface will be called, and if the audit is successful, it will trigger a check to see if the number of passes of the current request exceeds the threshold, and once it does, it will be automatically saved to the certificate data contract and the request will be deleted。 -In addition, three event events are defined in this contract, which have the following effects. +In addition, three event events are defined in this contract, which have the following effects -- Record the parameters defined by the event and store them in the blockchain transaction log, providing cheap storage。 -- Provides a callback mechanism. After the event is successfully executed, the node sends a callback notification to the SDK registered for listening, triggering the callback function to be executed。 -- Provides a filter that supports parameter retrieval and filtering。 +- Record event-defined parameters and store them in the blockchain transaction log, providing cheap storage。 +-Provide a callback mechanism. 
After the event is successfully executed, the node sends a callback notification to the SDK registered for listening, triggering the callback function to be executed。 +- Provide a filter that supports parameter retrieval and filtering。 -For example, the createSaveRequest log records the hash and call address。If we cooperate with the SDK, we can listen to this specific event and automatically trigger a custom callback function.。 +For example, the createSaveRequest log records the hash and caller address。Working with the SDK, we can listen for this specific event and automatically trigger a custom callback function。 #### Summary of Examples of Depository Contracts -The above is a complete deposit certificate scenario permission contract demo.。In order to facilitate understanding, we did not design the example to cover all aspects, I hope you can better understand the design idea of demo: +The above is a complete permission contract demo for the deposit certificate scenario。To facilitate understanding, the example does not cover every aspect; we hope it helps you better understand the design ideas of the demo: - Separation of data and logic; -- It is recommended to adopt bottom-up development, first develop the least dependent part, modular, hierarchical design and implementation.; -- Pay attention to permission control and inspection to avoid unauthorized access; +- Adopt bottom-up development: build the least dependent parts first, with modular, hierarchical design and implementation; +- Focus on permission control and checks to avoid unauthorized access; - Define uniform and clear interfaces; -- Deposit certificate data hash chain。 +- Put the hash of deposit certificate data on the chain。 ## Scenario 2: Blockchain+Examples of Points Contracts -Here's another typical application scenario for smart contracts - the integration scenario.。 +Here's another typical application scenario for smart contracts - the points scenario。
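+The require-based checks and threshold voting summarized above can be sketched in Solidity. This is a minimal illustration only; all contract, function, and variable names below (RequestRepository, createRequest, vote, threshold) are assumptions for the sketch, not the demo's actual code:
+
+```solidity
+pragma solidity ^0.4.25;
+
+// Data-layer sketch: stores pending deposit requests and audit votes.
+// Names and interfaces are illustrative; the real demo may differ.
+contract RequestRepository {
+    struct Request {
+        address creator;
+        uint256 voteCount;
+        bool exists;
+        mapping(address => bool) voted;
+    }
+
+    mapping(bytes32 => Request) private requests;
+    mapping(address => bool) public auditors;
+    uint256 public threshold;
+
+    constructor(address[] auditorList, uint256 _threshold) public {
+        for (uint256 i = 0; i < auditorList.length; i++) {
+            auditors[auditorList[i]] = true;
+        }
+        threshold = _threshold;
+    }
+
+    // Create a deposit request; require guards against duplicates.
+    function createRequest(bytes32 hash) public {
+        require(!requests[hash].exists, "request already exists");
+        requests[hash].creator = msg.sender;
+        requests[hash].exists = true;
+    }
+
+    // Vote on a request; require checks existence, auditor identity,
+    // and double voting, mirroring the checks described above.
+    function vote(bytes32 hash) public returns (bool passed) {
+        Request storage r = requests[hash];
+        require(r.exists, "request not found");
+        require(auditors[msg.sender], "not an auditor");
+        require(!r.voted[msg.sender], "already voted");
+        r.voted[msg.sender] = true;
+        r.voteCount++;
+        return r.voteCount >= threshold;
+    }
+}
+```
+
+A controller in this design would call vote and, once it returns true, write the hash into the certificate data contract and delete the request, keeping data and logic separated.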
![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5416.PNG) How can blockchain technology solve these pain points in the points scenario? -- Increase brand exposure: multiple agencies form a points alliance, points can be effectively exchanged, to achieve customer resource drainage, improve marketing effectiveness.。 -- Ensure the safety of points: all points are generated and transferred to the chain to prevent merchants from tampering and denying。 -- Improve the user experience: different merchants and users to achieve the flow of points, interoperability, more convenient。 +- Increase brand exposure: multiple institutions form a points alliance in which points can be effectively exchanged, channeling customer resources to one another and improving marketing effectiveness。 +- Ensure the security of points: all points are generated and transferred on the chain, preventing merchants from tampering and repudiation。 +- Improve user experience: points flow and interoperate across different merchants and users, which is more convenient。 ![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5417.PNG) Figure: Example of a typical points business scenario -One idea is based on blockchain technology, where multiple merchants form a points alliance to achieve points pass-through and mutual diversion of customer resources.。We abstract a manager who deploys and manages contracts, and merchants have the authority to issue points, pull in other merchants, and revoke the issuer's identity.;Consumers have the right to open an account, close an account, spend points and transfer points.。 +One idea is based on blockchain technology: multiple merchants form a points alliance to make points interchangeable and to channel customer resources to one another。We abstract a manager who deploys and manages contracts;Merchants have the authority to issue points, bring in other merchants, and revoke their own issuer identity;Consumers have the right to open an account, close
an account, spend points and transfer points。 ### Points Scenario Contract Example Explanation -Let's start with the summary design of the points contract.。In the deposit contract, we introduce the idea of separation of data and logic;In the points contract, we will introduce the idea of separation of management, data and logic。 -Why add a management contract??In the original two-tier structure, the control contract automatically creates the data contract, while the data-tier contract says that the owner is the control contract.。 -With the introduction of management contracts, a similar effect of control reversal is achieved, and both control and data contracts are created by management contracts.;At the same time, the management contract can also set the address of the control contract in the data contract at any time.。In this way, control contracts can be upgraded smoothly to business logic at any time.;Separating management contracts also facilitates on-chain authority governance。 -In addition, we abstract common permissions, role functions into contracts, and abstract the library for permission mapping and data calculation.。 +Let's start with the high-level design of the points contract。In the deposit contract, we introduced the idea of separating data and logic;In the points contract, we introduce the idea of separating management, data and logic。 +Why add a management contract?In the original two-tier structure, the control contract automatically creates the data contract, and the owner of the data-tier contract is the control contract。 +Introducing a management contract achieves an effect similar to inversion of control: both the control and data contracts are created by the management contract;At the same time, the management contract can reset the address of the control contract held in the data contract at any time。In this way, the control contract (the business logic) can be upgraded smoothly at any time;Separating management contracts
also facilitates on-chain authority governance。 +In addition, we abstract common permission and role functions into contracts, and extract libraries for permission mapping and data calculation。 ![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5418.PNG) -Below we will look at the specific code implementation of the points contract.。 +Below we will look at the specific code implementation of the points contract。 #### Contract Library - Secure Computing ![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5419.PNG) -Secure computing is very important in Solidity。When it comes to numerical calculations, preference may be given to using mature open source libraries.。Share a tip, because the chain resources are very valuable, it is recommended that you use the library can cut off redundant code, saving resources。 +Secure computing is very important in Solidity。For numerical calculations, prefer mature open source libraries。One tip: since on-chain resources are very valuable, it is recommended to trim redundant code when using a library, to save resources。 Secure computing library, will re-check the value after execution, to avoid overflow, to avoid attacks。 #### Library - Role Management -The role management library provides the functions of creating roles, deleting roles, and querying roles.。There is a basic mapping bearer, which is the mapping from address to bool and maintains the role identity in mapping.。 +The role management library provides functions for creating, deleting, and querying roles。There is a basic mapping, bearer, which maps address to bool and maintains role identities in the mapping。 ![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5420.PNG) @@ -154,35 +154,35 @@ In BasicAuth's underlying permissions contract, we provide a judgment of the own
contract relies on LibRole's contract above.。To simplify the rules and make them easy to understand, let's define it this way: issuers are allowed to add new issuers and can also revoke their own issuer identity.。With the publisher role, we can release points。 +The issuer contract relies on the LibRole contract above。To keep the rules simple and easy to understand, we define them as follows: an issuer may add new issuers and may also revoke its own issuer identity。With the issuer role, points can be issued。 ![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5422.PNG) #### Points Data Contract -Now go to the main body of the points contract - admin.-controller-data three-tier architecture。 -First of all, the points data contract is introduced.。It will save all user credits, as well as role information, into the credits data contract.。 +Now we enter the main body of the points contract - the admin-controller-data three-tier architecture。 +First, the points data contract is introduced。It saves all user points, as well as role information, into the points data contract。 ![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5423.PNG) -- balances maintains the balance of each user; -- accounts maintain registered accounts; -- The totalAmount system is the total number of points issued.; -- description comment or note information -- latestVersion implements access control; -- The method of upgradeVersion, which can be called by authorized users to upgrade the contract, here initiated and called by the Admin contract.; -- setBalance method, set the balance of an account must be checked by onlyLatestVersion, only the owner with permission can call this data contract.。 +- balances maintains the balance of each user; +- accounts maintains registered accounts; +- totalAmount is the total number of points issued in the system; +- description holds comment or note information; +- latestVersion for access
control; +- upgradeVersion method: authorized users can call it to upgrade the contract; here it is initiated and called by the Admin contract; +- setBalance method: setting an account's balance must pass the onlyLatestVersion check; only the owner with permission can call this data contract。 #### Management contract -The role of management contracts is to create all contracts.。The constructor updates the version number held in the data contract, and once the controller needs to be upgraded, just call this method.。 +The role of the management contract is to create all the contracts。The constructor updates the version number held in the data contract; once the controller needs to be upgraded, just call this method。 ![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5424.PNG) #### control contract -Finally, the control contract is introduced.。Due to the long controller code, only the two most typical functions are shown here.。 -Balance check balance adjustment is the interface of data check balance.。The consumption of points is realized through transfer, where there will be many modifiers to check whether the account has been registered, whether it is valid, etc. In addition, smart contract events are used to output and print logs。 +Finally, the control contract is introduced。Since the controller code is long, only the two most typical functions are shown here。 +Balance query and balance adjustment are interfaces into the data contract。Point consumption is realized through transfer, where many modifiers check whether the account has been registered, whether it is valid, etc.
In addition, smart contract events are used to output and print logs。 ![](../../../../images/articles/contract_design_practice_deposit&points_scene/IMG_5425.PNG) @@ -191,32 +191,32 @@ Balance check balance adjustment is the interface of data check balance.。The c Summarize some design ideas in the points scenario demo: - Reference to a three-tier architecture: using data, logic, and management to manage contracts; -- Abstracts libraries and encapsulates commonly used contracts for reuse.; -- You must always pay attention to contract security, such as whether the contract value calculation is correct and whether the permissions are appropriate.; -- Abstracting responsibilities on a contractual basis to achieve as single contractual responsibilities as possible。 +- Abstract libraries and encapsulate commonly used contracts for reuse; +- Always pay attention to contract security, such as whether value calculations are correct and whether permissions are appropriate; +- Abstract responsibilities at the contract level so that each contract's responsibility is as single as possible。 ## How to Write High Quality Design Documentation ### The "Technique" of Document Writing -The "technique" is mainly the structural elements needed to write a document, which is divided into three categories.
+The "technique" mainly covers the structural elements needed to write a document, which fall into three categories: -- Business background: First of all, do not presuppose that others can understand all the technical terms when writing documents, and second, clearly introduce the pain points of the business, whether the blockchain can solve these pain points.。 -- Technical scheme design: need to clearly explain the basic business requirements (such as participants, scenarios, activities), design ideas summary, detailed contract responsibilities, functions, parameters, etc.。 -- Instructions for use: the actual use of the scene, the finger on the south and the use of manual。 +- Business background: first, do not assume that readers understand all the technical terms, and second, clearly introduce the pain points of the business and whether the blockchain can solve them。 +- Technical scheme design: clearly explain the basic business requirements (such as participants, scenarios, activities), a summary of the design ideas, and detailed contract responsibilities, functions, parameters, etc。 +- Instructions for use: the actual usage scenarios, operation guides, user manuals, etc。 ### The "Way" of Document Writing -With the technique, the integrity of the document's content structure is ensured, and the document has a skeleton and flesh and blood.;However, the "Tao" is the soul and essence of the document, and here are five concerns.
+With the "technique", the integrity of the document's content structure is ensured, giving the document a skeleton and flesh;However, the "Tao" is the soul and essence of the document, and here are five concerns: - Highlights: show the uniqueness of the program (creative / design / function / specification / documentation); - Pain point: explain the problem and the solution; -- Focus: code, comments, documentation to be clear, readable, around the problem and solution, not to show off; -- Difficulty: Weighing specification, efficiency, and security based on smart contract features; -- Key points: take explanation as the basic guide, do not presume that others can understand all business and technical terms。 +- Focus: code, comments, and documentation should be clear and readable, centered on the problem and solution, not showing off; +- Difficulty: based on smart contract features, trade off specification, efficiency and security; +- Key points: take explanation as the basic guide; do not presume that others understand all business and technical terms。 -This article mainly shares the contract design ideas and example code analysis of two typical application scenarios, namely, deposit certificate and integration, and summarizes the relevant development skills and document writing skills for everyone.。 -In the smart contract development process, developers need to choose the applicable skills and solutions according to the actual business needs.。As the so-called "soldiers are impermanent, water is impermanent," there is no optimal design, only the most suitable design.。 +This article shares the contract design ideas and example code analysis of two typical application scenarios, deposit certificate and points, and summarizes the relevant development skills and document-writing skills。 +In the smart contract development process, developers need to choose the applicable skills and solutions according to actual business needs。As the
so-called "soldiers are impermanent, water is impermanent," there is no optimal design, only the most suitable design。 ------ @@ -224,12 +224,12 @@ In the smart contract development process, developers need to choose the applica **Q** How to get previous data in a smart contract? -**A** : There are two main methods: 1. Define the function that needs to query historical data in the smart contract, and query it through the contract query interface。2. Use WeBASE-Collect-The Bee data export component exports the on-chain data to the off-chain database, and can query all the data。 +**A** : There are two main methods: 1. Define the function that needs to query historical data in the smart contract, and query it through the contract query interface。2. Use the WeBASE-Collect-Bee data export component to export the on-chain data to the off-chain database, where all the data can be queried。 **Q** : The variable in the contract defines a private attribute, can it also be publicly available on the chain? -**A** : All data on the chain is public, even if a variable with a private attribute is defined, it can be obtained by technical means.。 +**A** : All data on the chain is public, even if a variable with a private attribute is defined, it can be obtained by technical means。 **Q** : Can other external interfaces be called within the smart contract? @@ -239,25 +239,25 @@ In the smart contract development process, developers need to choose the applica **Q** : If the points will decrease within a certain period of time, how can this points scenario be achieved?
-**A** : First define a business rule with reduced points.;Secondly, it is technically feasible to implement this scenario, for example, you can design a points destruction function, which can be called to reduce the number of points for a given account and reduce the total number of points at the same time.。Finally, the specific implementation logic and approach depends on the business rules。In addition, it should be noted that smart contracts do not support similar timing scripting mechanisms and require external calls to trigger。 +**A** : First, define a business rule for reducing points;Secondly, it is technically feasible to implement this scenario: for example, you can design a points destruction function, which can be called to reduce the number of points for a given account and reduce the total number of points at the same time。Finally, the specific implementation logic and approach depend on the business rules。In addition, note that smart contracts do not support timer-script-like mechanisms and require an external call to trigger。 **Q** : Do you have to write smart contracts according to a three-tier structure? -**A** : Not necessarily, depending on the specific business scenario, you need to analyze the pros and cons from the actual business scenario and choose the appropriate solution。In general scenarios, we recommend layering, which is more flexible and more conducive to contract upgrades and maintenance.。 +**A** : Not necessarily; you need to analyze the pros and cons from the actual business scenario and choose the appropriate solution。In general scenarios, we recommend layering, which is more flexible and more conducive to contract upgrades and maintenance。 **Q** How to control the data read and write permissions of different users in the same group through smart contracts?
Therefore, it is not feasible to control the data read and write permissions of different users in the same group on the chain.。But we can achieve a similar effect by encrypting the data itself before it is put on the chain.。 +**A** : Once the data is on the chain, it is open and transparent to all participants on the chain. Therefore, it is not feasible to control the data read and write permissions of different users in the same group on the chain。But we can achieve a similar effect by encrypting the data itself before it is put on the chain。 **Q** Is the hash value in the certificate of deposit the hash value of the contract or invoice? -**A** This rule depends on the specific application requirements. It can be a hash of the file or a hash value calculated after assembling other information. It depends on the specific scenario requirements.。 +**A** This rule depends on the specific application requirements. It can be a hash of the file or a hash value calculated after assembling other information. It depends on the specific scenario requirements。 **Q** : What to do if the data on the chain is wrong?
-**A** Once the data is on the chain, it cannot be tampered with and physically deleted.;However, you can design a contract logic deletion mechanism, such as adding a status field to a specific data contract to mark whether the data has been deleted.。 \ No newline at end of file +**A** Once the data is on the chain, it cannot be tampered with or physically deleted;However, you can design a logical deletion mechanism in the contract, such as adding a status field to a specific data contract to mark whether the data has been deleted。 \ No newline at end of file diff --git a/3.x/en/docs/articles/3_features/35_contract/contract_name_service.md b/3.x/en/docs/articles/3_features/35_contract/contract_name_service.md index 04dc3d6a9..a6ff6d110 100644 --- a/3.x/en/docs/articles/3_features/35_contract/contract_name_service.md +++ b/3.x/en/docs/articles/3_features/35_contract/contract_name_service.md @@ -7,35 +7,35 @@ Author : CHEN Yu | FISCO BCOS Core Developer The original FISCO BCOS call smart contract process is: 1. Preparation of contracts; -2. Compile the contract to get the contract interface abi description.; -3. Deploy the contract to get the contract address.; -4. Encapsulate the abi and address of the contract and call the contract through the SDK.。 +2. Compile the contract to get the contract interface abi description; +3. Deploy the contract to get the contract address; +4. 
Encapsulate the abi and address of the contract and call the contract through the SDK。 -As can be seen from the above contract call process, the business party must obtain the contract abi and the contract address before calling the contract, which is a common method for calling contracts in the industry.。 +As can be seen from the above contract call process, the business party must obtain the contract abi and the contract address before calling the contract, which is a common method for calling contracts in the industry。 However, through follow-up user research, we have collected the following suggestions from the business side: -1. For longer contract abi strings, a location needs to be provided for storage instead of the business party's own storage.; -2. For a 20-byte contract address magic number, its loss will result in inaccessibility of the contract, reducing the cost of memory for the business side.; -3. After the contract is redeployed, the relevant multiple businesses can quickly and imperceptibly update the contract address.; +1. For longer contract abi strings, storage should be provided instead of requiring the business party to store them itself; +2. Losing the 20-byte contract address magic number makes the contract inaccessible; the memorization burden on the business side should be reduced; +3. After a contract is redeployed, the related businesses should be able to update the contract address quickly and imperceptibly; 4. Easy version management of contracts。 -In order to provide a better experience for business parties to invoke smart contracts, FISCO BCOS proposes a CNS contract naming service solution.。 +In order to provide a better experience for business parties invoking smart contracts, FISCO BCOS proposes the CNS contract naming service solution。 ## How the CNS is implemented?
-CNS provides a record of the mapping between the contract name and the contract address on the chain and the corresponding query function, which facilitates the business party to call the contract on the chain by memorizing the simple contract name.。In order to facilitate the business party to call the contract, the SDK encapsulates the CNS way to call the contract interface, the interface internal implementation of the contract address to find, the business party is not aware of this.。 +CNS provides a record of the mapping between the contract name and the contract address on the chain and the corresponding query function, which facilitates the business party to call the contract on the chain by memorizing the simple contract name。In order to facilitate the business party to call the contract, the SDK encapsulates the CNS way to call the contract interface, the interface internal implementation of the contract address to find, the business party is not aware of this。 ### Information record -CNS records include: contract name, contract version, contract address, and contract abi。where contract abi refers to the interface description of the contract, describing the contract field name, field type, method name, parameter name, parameter type, method return value type。The above CNS information is stored as a system table, and the nodes in the ledger are consistent, but each ledger is independent.。The CNS table is defined as follows: +CNS records include: contract name, contract version, contract address, and contract abi。where contract abi refers to the interface description of the contract, describing the contract field name, field type, method name, parameter name, parameter type, method return value type。The above CNS information is stored as a system table, and the nodes in the ledger are consistent, but each ledger is independent。The CNS table is defined as follows: ![](../../../../images/articles/contract_name_service/IMG_5496.PNG) ### Interface Description 
-The interface between the SDK and the blockchain node is provided in the form of a contract.。The CNS contract is logically implemented as a precompiled contract, declaring the following interfaces. +The interface between the SDK and the blockchain node is provided in the form of a contract。The CNS contract is logically implemented as a precompiled contract, declaring the following interfaces: ``` pragma solidity ^0.4.2; @@ -43,17 +43,17 @@ contract CNS { / / CNS information on the chain function insert(string name, string version, string addr, string abi) public returns(uint256); - / / The query returns all the records of different versions of the contract in the table, in JSON format. + // The query returns all the records of different versions of the contract in the table, in JSON format function selectByName(string name) public constant returns(string); - / / The query returns the unique address of the contract version in the table. + // The query returns the unique address of the contract version in the table function selectByNameAndVersion(string name, string version) public constant returns(string); } ``` -The SDK provides the CnsService class corresponding to the precompiled contract to support the CNS.。CnsService can be called by business parties to configure and query CNS information.
Its API is as follows: - `String registerCns(String name, String version, String address, String abi)': Registration of CNS information according to contract name, contract version, contract address and contract abi。 -- `String getAddressByContractNameAndVersion(String contractNameAndVersion)': Query the contract address based on the contract name and contract version (the contract name and contract version are connected by a colon)。If the contract version is missing, the latest contract version is used by default.。 +- `String getAddressByContractNameAndVersion(String contractNameAndVersion)': Query the contract address based on the contract name and contract version (the contract name and contract version are connected by a colon)。If the contract version is missing, the latest contract version is used by default。 - `List queryCnsByName(String name)': Query CNS information based on contract name。 - `List queryCnsByNameAndVersion(String name, String version)': Query CNS information based on contract name and contract version。 @@ -64,21 +64,21 @@ The SDK provides the CnsService class corresponding to the precompiled contract #### Deployment contract -The process for a business party to deploy a contract through the CNS consists of two steps, both of which are performed by the SDK。The first is to send the deployment contract on the transaction chain.;The second is to associate the contract name with the contract address by sending an on-chain transaction。 +The process for a business party to deploy a contract through the CNS consists of two steps, both of which are performed by the SDK。The first is to send the deployment contract on the transaction chain;The second is to associate the contract name with the contract address by sending an on-chain transaction。 ![](../../../../images/articles/contract_name_service/IMG_5497.PNG) #### Call Contract -When the SDK receives a request from a business party to invoke a contract based on the CNS, it first queries for the 
contract address corresponding to the contract name, and then invokes the contract based on the contract address.。 +When the SDK receives a request from a business party to invoke a contract based on the CNS, it first queries for the contract address corresponding to the contract name, and then invokes the contract based on the contract address。 ![](../../../../images/articles/contract_name_service/IMG_5498.PNG) ## CNS use demonstration -Let's take the console that calls CnsService as an example to describe the CNS-related registration, invocation, and query functions.。 +Let's take the console that calls CnsService as an example to describe the CNS-related registration, invocation, and query functions。 #### deployByCNS @@ -128,7 +128,7 @@ Run callByCNS to invoke the contract with CNS, that is, invoke the contract dire When a contract version is omitted, such as HelloWorld or HelloWorld.sol, the latest version of the contract is invoked。 -- Contract Interface Name: The name of the contract interface to call。 +- Contract interface name: the name of the contract interface called。 - Parameters: Determined by contract interface parameters。 @@ -154,4 +154,4 @@ Hello,CNS2 ## SUMMARY -FISCO BCOS simplifies the way business parties invoke contracts through CNS, and facilitates business parties to manage and upgrade contracts.。At the same time, CNS focuses on implementing address mapping functions。The address type mapped by CNS can be mapped to an account address in addition to a contract address.。When CNS maps the account address, the contract abi content is empty。 \ No newline at end of file +FISCO BCOS simplifies the way business parties invoke contracts through CNS, and facilitates business parties to manage and upgrade contracts。At the same time, CNS focuses on implementing address mapping functions。The address type mapped by CNS can be mapped to an account address in addition to a contract address。When CNS maps the account address, the contract abi content is empty。 \ 
No newline at end of file diff --git a/3.x/en/docs/articles/3_features/35_contract/entry_quick_guide.md b/3.x/en/docs/articles/3_features/35_contract/entry_quick_guide.md index 03a41a424..69822721e 100644 --- a/3.x/en/docs/articles/3_features/35_contract/entry_quick_guide.md +++ b/3.x/en/docs/articles/3_features/35_contract/entry_quick_guide.md @@ -2,17 +2,17 @@ Author : ZHANG Long | FISCO BCOS Core Developer -Currently, the FISCO BCOS platform supports two types of smart contracts, Solidity and Precompiled. At the same time, it provides an interactive console tool (Console) to facilitate developers to interact with the chain, deploy and invoke smart contracts.。 -In order to let everyone quickly get started with smart contracts, FISCO BCOS has launched a series of smart contract tutorials. This article will take you to get started quickly and use FISCO BCOS to develop and deploy a simple smart contract.。 +Currently, the FISCO BCOS platform supports two types of smart contracts, Solidity and Precompiled. At the same time, it provides an interactive console tool (Console) to facilitate developers to interact with the chain, deploy and invoke smart contracts。 +In order to let everyone quickly get started with smart contracts, FISCO BCOS has launched a series of smart contract tutorials. 
This article will take you to get started quickly and use FISCO BCOS to develop and deploy a simple smart contract。

## Introduction to Smart Contracts

-As we all know, the emergence of smart contracts enables blockchain not only to handle simple transfer functions, but also to implement complex business logic, which greatly promotes the development of blockchain technology and accelerates application landing.。
+As we all know, the emergence of smart contracts enables blockchain not only to handle simple transfer functions, but also to implement complex business logic, which greatly promotes the development of blockchain technology and accelerates application landing。

-Currently, most of the numerous blockchain platforms integrate the Ethereum Virtual Machine and use Solidity as the smart contract development language.。As a contract-oriented high-level programming language, Solidity draws on C.++The design of languages such as Python, Python, and JavaScript uses static typing, which not only supports basic / complex data type operations and logical operations, but also provides high-level language-related features such as inheritance, overloading, libraries, and user-defined types.。
+Currently, most blockchain platforms integrate the Ethereum Virtual Machine and use Solidity as the smart contract development language。As a contract-oriented high-level programming language, Solidity draws on the design of languages such as C++, Python, and JavaScript and uses static typing; it not only supports operations on basic and complex data types as well as logical operations, but also provides high-level language features such as inheritance, overloading, libraries, and user-defined types。

-As the largest and most active domestic open source alliance chain community, FISCO BCOS seamlessly supports Solidity contracts and provides full-link tools and complete solutions from development, compilation, deployment to invocation, making smart contract and blockchain
application development simple.。
-In addition, based on a lot of exploration and practice, FISCO BCOS not only supports Solidity contracts, but also supports Precompiled contracts, and provides CRUD contract interfaces at the user level.。CRUD contracts for library table development are not only more in line with user development habits, further reducing the difficulty of contract development, improving performance, and enabling blockchain applications to meet the demands of high concurrency scenarios.。
+As the largest and most active domestic open source alliance chain community, FISCO BCOS seamlessly supports Solidity contracts and provides full-link tools and complete solutions from development, compilation, deployment to invocation, making smart contract and blockchain application development simple。
+In addition, based on extensive exploration and practice, FISCO BCOS supports not only Solidity contracts but also Precompiled contracts, and provides CRUD contract interfaces at the user level。CRUD contracts, developed against library tables, better match user development habits, further reduce the difficulty of contract development, improve performance, and enable blockchain applications to meet the demands of high-concurrency scenarios。

## Smart Contract Classification

@@ -20,47 +20,47 @@ The FISCO BCOS platform supports two types of smart contracts: Solidity contract

### Solidity Contract

-The Solidity contract runs on the EVM, which is an Ethereum virtual machine with a Harvard architecture that completely separates instructions, data, and stack.。
-During the running of the smart contract, first create a sandbox environment (EVM instance), the sandbox environment is completely isolated from the external environment, can not access the network, file system and other processes, the smart contract in the EVM only allows limited operations.。When the transaction is executed, the EVM obtains the opcode of the contract, converts the opcode
into the corresponding EVM instruction, and executes it in accordance with the instruction.。
+The Solidity contract runs on the EVM, which is an Ethereum virtual machine with a Harvard architecture that completely separates instructions, data, and stack。
+During the running of a smart contract, a sandbox environment (an EVM instance) is first created; the sandbox is completely isolated from the external environment, cannot access the network, the file system, or other processes, and allows the smart contract only a limited set of operations inside the EVM。When the transaction is executed, the EVM obtains the opcode of the contract, converts the opcode into the corresponding EVM instruction, and executes it in accordance with the instruction。

In terms of the number of applications landed, Solidity contracts are the most widely used, supported by almost all blockchain platforms, but Solidity also has many shortcomings。As follows:

- Contracts execute serially in EVM, poor performance;
-- Cross-contract calls create a new EVM with high memory overhead;
-- Contract variables and data exist in the MPT number, which is not convenient for contract upgrade.;
-- Logic and data coupling, not easy to expand storage capacity。
+- Cross-contract calls will create a new EVM, resulting in high memory overhead;
+- Contract variables and data exist in the MPT tree, which is not convenient for contract upgrade;
+- Logic and data coupling, not convenient for storage expansion。

### Precompiled contract

-Precompiled contracts are precompiled contracts.。The precompiled contract is executed through the precompiled engine, using C++Write contract logic, contract compilation integrated into the FISCO BCOS underlying node。
-Call contracts do not enter the EVM and can be executed in parallel to break the EVM performance bottleneck.;Provide a standard development framework, just inherit the base class and implement the call interface;Suitable for scenarios where logic is relatively certain and high
concurrency is sought.;Data exists in the table, separated from the contract, and the contract logic can be upgraded.。
+Precompiled contracts are contracts compiled in advance: their logic is written in C++, executed by the precompiled engine, and compiled directly into the FISCO BCOS underlying node。
+Calls to precompiled contracts do not enter the EVM and can be executed in parallel, breaking the EVM performance bottleneck; they offer a standard development framework (just inherit the base class and implement the call interface); they suit scenarios where the logic is relatively fixed and high concurrency is sought; and their data lives in tables, separated from the contract, so the contract logic can be upgraded。

Of course, there are certain thresholds for the use of precompiled contracts。As follows:

- For data storage, you need to create a FISCO BCOS-specific table structure;
- Inherit the Precompiled class when writing contracts, and then implement the Call interface function;
-- After completing the contract development, you need to register the address for the precompiled contract at the bottom level.;
+- After completing the contract development, you need to register the address for the precompiled contract at the bottom level;
- After writing the contract, you need to recompile the FISCO BCOS source code。

-In order to mask the threshold of pre-compiled contracts in development and use, FISCO BCOS designed the CRUD contract interface based on pre-compiled contracts and distributed storage.。When writing a Solidity contract, you only need to introduce the abstract contract interface file Table.sol to use the CRUD function, and you don't need to care about the underlying implementation.。
+In order to mask the threshold of pre-compiled contracts in development and use, FISCO BCOS designed the CRUD contract interface based on pre-compiled contracts and distributed storage。When writing a Solidity contract, you only need to introduce the abstract contract
interface file Table.sol to use the CRUD function, and you don't need to care about the underlying implementation。 ## Smart Contract Development -This section will be based on the global English certification test score management as a scenario, based on the FISCO BCOS platform for the development of smart contracts.。Global certification exams include GRE, TOEFL, IELTS and more。In order to simplify the contract logic, all scores are issued and managed by the examination management center, and students can query their test scores according to their account (address).。 +This section will be based on the global English certification test score management as a scenario, based on the FISCO BCOS platform for the development of smart contracts。Global certification exams include GRE, TOEFL, IELTS and more。In order to simplify the contract logic, all scores are issued and managed by the examination management center, and students can query their test scores according to their account (address)。 ### Solidity Contract Development In Solidity, a contract is similar to a class in an object-oriented programming language。The Solidity contract has its own code structure and consists of several parts, as shown below。 -- State variables: State variables are values that are permanently stored in the contract +- State variable: The state variable is the value permanently stored in the contract - Constructor: used to deploy and initialize the contract -- Events: Events are interfaces that make it easy to call the logging function of Ethereum virtual machines -- Decorators: Function decorators can be used to change the behavior of functions, such as auto-checking, similar to Spring's AOP -- Functions: A function is an executable unit of code in a contract +- Events: Events are interfaces that can easily call the logging function of Ethereum virtual machines +- Decorators: Function decorators can be used to change the behavior of functions, such as automatic checking, similar to Spring's 
AOP
+- Function: A function is an executable unit of code in a contract

#### Create contract

-Start by creating a contract called StudentScoreBySol to manage students' grades。As shown in the following code, the contract version needs to be introduced at the beginning.。
+Start by creating a contract called StudentScoreBySol to manage students' grades。As shown in the following code, the Solidity compiler version (the pragma) needs to be declared at the beginning of the contract。

![](../../../../images/articles/entry_quick_guide/IMG_4913.PNG)

@@ -73,19 +73,19 @@ Define two variables in the current scenario, where _ owner is the creator of th

#### Define events

-Define an event setScoreEvent to track the addition / modification of student scores, which can be monitored at the business level。The definition of the event is optional, and it doesn't matter if there is no definition, at the business level you can judge whether the transaction is successful based on the return value of the method, but you can't do a more granular problem positioning.。
+Define an event setScoreEvent to track the addition / modification of student scores, which can be monitored at the business level。Defining the event is optional: without it, the business layer can still judge whether a transaction succeeded from the method's return value, but it cannot perform finer-grained problem diagnosis。

![](../../../../images/articles/entry_quick_guide/IMG_4915.PNG)

#### Define decorator

-Modifiers in smart contracts are similar to AOP in object-oriented programming, and follow-up actions are performed only when conditions are met.。As shown below, the decorator requires the owner of the contract to follow up, where the owner is the exam management center。
+Modifiers in smart contracts are similar to AOP in object-oriented programming: the decorated body executes only when the modifier's conditions are met。As shown below, the decorator requires the owner of the contract to follow up,
where the owner is the exam management center。 ![](../../../../images/articles/entry_quick_guide/IMG_4916.PNG) #### Defining Construction Methods -The construction method is used to instantiate the contract, and in the current construction method, the Owner is specified as the deployer of the contract.。 +The construction method is used to instantiate the contract, and in the current construction method, the Owner is specified as the deployer of the contract。 ![](../../../../images/articles/entry_quick_guide/IMG_4917.PNG) @@ -95,15 +95,15 @@ In the current contract, two functions are defined, the setScore function is use ![](../../../../images/articles/entry_quick_guide/IMG_4918.PNG) -The complete code for the Solidity contract is shown below。Contract development based on the Solidity language seems simple, but requires in-depth study of the Solidity programming language in order to write highly available contracts, with a certain learning cost.。 -Through the smart contract topic launched by FISCO BCOS open source community, developers can learn more about the methods and techniques of using Solidity to write smart contracts.。 +The complete code for the Solidity contract is shown below。Contract development based on the Solidity language seems simple, but requires in-depth study of the Solidity programming language in order to write highly available contracts, with a certain learning cost。 +Through the smart contract topic launched by FISCO BCOS open source community, developers can learn more about the methods and techniques of using Solidity to write smart contracts。 For more details, please refer to the official Solidity documentation: https://solidity-cn.readthedocs.io/zh/develop/ ![](../../../../images/articles/entry_quick_guide/IMG_4919.PNG) ### CRUD Contract Development -The CRUD contract is the core of the CRUD function. 
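The actual contract code lives in the screenshots above, so before turning to the CRUD version, here is a hedged reconstruction of what a StudentScoreBySol contract along the described lines might look like. Only the names StudentScoreBySol, setScoreEvent, setScore, getScore, and _owner come from the text; the types, mapping layout, and function bodies are illustrative assumptions, not the original code:

```solidity
pragma solidity ^0.4.25;

contract StudentScoreBySol {
    // state variables: the owner (exam management center) and a score store
    address private _owner;
    mapping(address => mapping(string => uint8)) private _scores; // student => exam => score

    // event for tracking score additions / modifications at the business level
    event setScoreEvent(address indexed student, string exam, uint8 score);

    // modifier, similar to Spring's AOP: body runs only for the contract owner
    modifier onlyOwner() {
        require(msg.sender == _owner);
        _;
    }

    // constructor: the deployer (exam management center) becomes the owner
    constructor() public {
        _owner = msg.sender;
    }

    // add / modify a score; restricted to the exam management center
    function setScore(address student, string exam, uint8 score) public onlyOwner returns (bool) {
        _scores[student][exam] = score;
        emit setScoreEvent(student, exam, score);
        return true;
    }

    // any account can query a score by student address and exam name
    function getScore(address student, string exam) public view returns (uint8) {
        return _scores[student][exam];
    }
}
```

The onlyOwner modifier is what lets a student account read scores via getScore while rejecting its calls to setScore, matching the behavior demonstrated in the console later in the article.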
Users can directly reference the CRUD interface file Table.sol in the contract and call the CRUD interface in the Solidity contract.。The development of CRUD contracts fully complies with the operating habits of the database and is easier to understand and use.。
+The CRUD contract is the core of the CRUD function. Users can directly reference the CRUD interface file Table.sol in the contract and call the CRUD interface in the Solidity contract。The development of CRUD contracts fully complies with the operating habits of the database and is easier to understand and use。

For more development details on CRUD contracts, refer to:
https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/smart_contract.html#crud

@@ -115,20 +115,20 @@ The contract created by CRUD is not much different from that created by Solidity

#### Event Definition

-In the Solidity contract, you can add / modify / delete scores through setScore, but in the CRUD contract, you need to use different interfaces of the CRUD interface file to implement different functions, so you need to define different events for different functions, as shown below.。
+In the Solidity contract, you can add / modify / delete scores through setScore, but in the CRUD contract, you need to use different interfaces of the CRUD interface file to implement different functions, so you need to define different events for different functions, as shown below。

![](../../../../images/articles/entry_quick_guide/IMG_4921.PNG)

-- createEvent: used to track table creation operations;
-- insertEvent: Used to track insert grade actions;
-- updateEvent: Used to track updated grade actions;
-- removeEvent: Used to track delete grade actions。
+- createEvent: used to track table creation operations;
+- insertEvent: used to track insert grade operations;
+- updateEvent: used to track update grade operations;
+- removeEvent: used to track delete grade operations。

#### Create Table Function

-The CRUD contract implements business functions by
first creating a table for storing data, just like a database operation.。
-The underlying layer of FISCO BCOS provides the TableFactory contract, the address of which is fixed at 0x1001, and the table can be created by the method provided by the TableFactory object.(createTable)and open(openTable), as shown below。
-If the value returned by the createTable interface is 0, the creation is successful。Note that in order for the created table to be shared by multiple contracts, the table name must be globally visible and unique within the group. You cannot create multiple tables with the same name in the same group on the same chain.。
+The CRUD contract implements business functions by first creating a table for storing data, just like a database operation。
+The underlying layer of FISCO BCOS provides the TableFactory contract, whose address is fixed at 0x1001; a table can be created with the createTable method of the TableFactory object and opened with openTable, as shown below。
+If the value returned by the createTable interface is 0, the creation is successful。Note that in order for the created table to be shared by multiple contracts, the table name must be globally visible and unique within the group.
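As a rough sketch of the create-table flow just described, based on the Table.sol interface of FISCO BCOS 2.x (the table name t_student_score and its fields are illustrative assumptions; the article's actual code is in the screenshots):

```solidity
pragma solidity ^0.4.25;

import "./Table.sol"; // the CRUD interface file shipped with FISCO BCOS

contract StudentScoreByCRUD {
    event createEvent(int count);

    function create() public returns (int) {
        // TableFactory is a precompiled contract fixed at address 0x1001
        TableFactory tf = TableFactory(0x1001);
        // arguments: table name, key field, comma-separated value fields
        int count = tf.createTable("t_student_score", "student_id", "exam,score");
        emit createEvent(count);
        return count; // 0 indicates the table was created successfully
    }
}
```

Later operations reopen the same table with tf.openTable("t_student_score"), which is why the table name must be unique within the group.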
You cannot create multiple tables with the same name in the same group on the same chain。

![](../../../../images/articles/entry_quick_guide/IMG_4922.PNG)

@@ -138,21 +138,21 @@ When operating on a table, you first need to open the corresponding table throug

![](../../../../images/articles/entry_quick_guide/IMG_4923.PNG)

-Note that the return values of the INSERT, REMOVE, UPDATE, and SELECT functions of the Table interface contract are similar to those of a database, all of which are the number of affected record rows, and the key in the interface is of type string.。
+Note that the return values of the INSERT, REMOVE, UPDATE, and SELECT functions of the Table interface contract are similar to those of a database, all of which are the number of affected record rows, and the key in the interface is of type string。

In the current scenario, the student's studentId is of the address type, so you need to convert the address type to the string type inside the function. The code is as follows。

![](../../../../images/articles/entry_quick_guide/IMG_4924.PNG)

#### Update grade function

-The steps to update the grade include opening the table through the TableFactory object and then constructing the filter criteria like a database.。
+The steps to update the grade include opening the table through the TableFactory object and then constructing the filter criteria like a database。
In the CRUD contract interface, a Condition object offers a series of conditional methods such as greater than, equal to, and less than。After constructing the condition object, you can call the update interface of the table object to complete the update operation. The code is as follows。

![](../../../../images/articles/entry_quick_guide/IMG_4925.PNG)

#### Delete Grade Operation

-The delete operation is similar to the update operation.
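The update and remove flows described above can be sketched as follows, again against the FISCO BCOS 2.x Table.sol interface (table and field names are the same illustrative assumptions as before, not the article's original code):

```solidity
pragma solidity ^0.4.25;

import "./Table.sol";

contract ScoreUpdater {
    // update one exam score for a student; returns the number of affected rows
    function updateScore(string studentId, string exam, int score) public returns (int) {
        TableFactory tf = TableFactory(0x1001);
        Table table = tf.openTable("t_student_score");

        Entry entry = table.newEntry();
        entry.set("score", score); // the new value to write

        Condition condition = table.newCondition();
        condition.EQ("exam", exam); // filter rows, like a SQL WHERE clause

        return table.update(studentId, entry, condition);
    }

    // remove follows the same open-table / build-condition pattern
    function removeScore(string studentId, string exam) public returns (int) {
        TableFactory tf = TableFactory(0x1001);
        Table table = tf.openTable("t_student_score");

        Condition condition = table.newCondition();
        condition.EQ("exam", exam);

        return table.remove(studentId, condition);
    }
}
```

Both functions return the affected row count, mirroring the database-style return values the article notes for the Table interface.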
You need to call the table.remove interface to complete the operation。

![](../../../../images/articles/entry_quick_guide/IMG_4926.PNG)

@@ -162,13 +162,13 @@ Query results operation is very simple, you need to call the table.select interf

![](../../../../images/articles/entry_quick_guide/IMG_4927.PNG)

-The CRUD-based contract development is now complete.。
+The CRUD-based contract development is now complete。

-In terms of the number of lines of code in the current scenario, the CRUD contract is more complex and the Solidity contract is relatively simple.。But it's just an illusion, and that may not be the case.。And the development of CRUD contracts is more in line with developer habits, no extra learning costs, easier to understand and get started.。
+Judging by line count alone in the current scenario, the CRUD contract looks more complex and the Solidity contract relatively simple。But that is largely an illusion: line count does not tell the whole story。Moreover, CRUD contract development better matches developers' existing habits, carries no extra learning cost, and is easier to understand and get started with。

## Contract deployment and invocation

-After the smart contract is developed, the contract needs to be compiled and deployed before it can be called.。The FISCO BCOS platform provides interactive Console tools that make it easy to interact with the chain。The following will take the above smart contract as an example, using the console tool for deployment and invocation.。
+After the smart contract is developed, the contract needs to be compiled and deployed before it can be called。The FISCO BCOS platform provides interactive Console tools that make it easy to interact with the chain。The following will take the above smart contract as an example, using the console tool for deployment and invocation。

For console installation and usage, refer to: https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/console/console.html

### Preparations

@@ -180,13 +180,13 @@ Three things to do
before deploying and invoking a contract。First copy the wri

Secondly, compile the contract. You can use the sol2java.sh script in the console directory to compile the contract. After compilation, the following folder will be generated in the console / contracts / sdk directory, as shown in the following figure。
```eval_rst
.. note::
-    If the console version is greater than or equal to v2.8.0, run bash sol2java.sh-H command to view the script usage
+    If the console version is greater than or equal to v2.8.0, run the bash sol2java.sh -h command to view the usage of the script
```

![](../../../../images/articles/entry_quick_guide/IMG_4929.PNG)

-Where abi stores the ABI of the contract, bin stores the secondary coding file of the contract.(BINARY)The corresponding JAVA contract is in the JAVA folder, which is easy to interact with the chain through the SDK.。
-Note that when the CRUD contract is compiled。You must put the CRUD interface contract Table.sol in the console / contracts / consolidation directory. Otherwise, an error will be reported.。
+Where the abi folder stores the contract's ABI and the bin folder stores the contract's binary (BIN); the corresponding Java contract files are in the java folder, which makes it easy to interact with the chain through the SDK。
+Note that when compiling a CRUD contract, you must put the CRUD interface contract Table.sol in the console / contracts / solidity directory. Otherwise, an error will be reported。

Finally, when deploying contracts, you rely on external accounts, so you first need to generate accounts。The account generation tool get _ account.sh is provided in the console.
Running the script generates the account in the console / accounts directory。 We use the account generation tool to generate two accounts。An account for Exam Management Center to deploy and add / modify / delete student scores;An account for students to view test scores。As shown below。 @@ -203,17 +203,17 @@ Then use the deploy command to deploy the contract. After the contract is succes ![](../../../../images/articles/entry_quick_guide/IMG_4932.PNG) -After the contract is deployed, you can call the contract function through the call command in the console.。As shown in the figure below, the new student's GRE score is 70 (both modification and deletion can be operated by calling the setScore method), and the function return value is true, which means that the transaction was successful。The specific usage of the call command can be passed through the call-h View。 +After the contract is deployed, you can call the contract function through the call command in the console。As shown in the figure below, the new student's GRE score is 70 (both modification and deletion can be operated by calling the setScore method), and the function return value is true, which means that the transaction was successful。The specific usage of the call command can be viewed through call -h。 ![](../../../../images/articles/entry_quick_guide/IMG_4933.PNG) -Use the student account to start the console, check the score through the getScore function, as shown in the figure below, the return value is 70, indicating that there is no problem。You can also use the student account to call the setScore method, which will report an error and print without permission, so I won't repeat it.。 +Use the student account to start the console, check the score through the getScore function, as shown in the figure below, the return value is 70, indicating that there is no problem。You can also use the student account to call the setScore method, which will report an error and print without permission, so I won't 
repeat it。

![](../../../../images/articles/entry_quick_guide/IMG_4934.PNG)

### CRUD contract deployment and invocation

-The deployment and invocation of the CRUD contract is no different from the Solidity contract, which is also done in the console.。
+The deployment and invocation of the CRUD contract is no different from the Solidity contract, which is also done in the console。

Start the console with the exam management center account and deploy the StudentScoreByCRUD contract。As shown in the following figure。

@@ -227,11 +227,11 @@ After creating the table, you can call the relevant interface to operate on stud

![](../../../../images/articles/entry_quick_guide/IMG_4937.PNG)

-After the score is inserted successfully, close the current console, log in to the console with the student account, and call the select function to query the score, as shown in the figure below, return 70, indicating that the query was successful。The residual function test can be done on its own and will not be repeated.。
+After the score is inserted successfully, close the current console, log in to the console with the student account, and call the select function to query the score; as shown in the figure below, it returns 70, indicating that the query was successful。The remaining functions can be tested on your own and will not be covered here。

![](../../../../images/articles/entry_quick_guide/IMG_4938.PNG)

## Conclusion

-This article focuses on the development of smart contracts on the FISCO BCOS platform.。In the FISCO BCOS platform, you can develop smart contracts using either the native Solidity language or the precompiled contract model.。Solidity contracts have poor performance and high learning costs;Precompiled contracts, using precompiled engines, support parallel computing, higher performance, and support storage expansion.。
-However, due to the use of pre-compiled contracts there is a certain threshold, based on this, FISCO BCOS platform developed the CRUD contract interface, users do
not need to care about the underlying implementation logic, only need to introduce the Table.sol contract interface file, by calling the relevant interface to complete the development of the contract.。
+This article focuses on the development of smart contracts on the FISCO BCOS platform。On FISCO BCOS, you can develop smart contracts using either the native Solidity language or the precompiled contract model。Solidity contracts have poorer performance and a higher learning cost; precompiled contracts, executed by the precompiled engine, support parallel computing, deliver higher performance, and support storage expansion。
+However, precompiled contracts come with a certain usage threshold。For this reason, the FISCO BCOS platform developed the CRUD contract interface: users do not need to care about the underlying implementation logic; they only need to introduce the Table.sol contract interface file and call the relevant interfaces to complete contract development。

diff --git a/3.x/en/docs/articles/3_features/35_contract/outside_account_generation.md b/3.x/en/docs/articles/3_features/35_contract/outside_account_generation.md
index 0e4bb7a83..f057d581f 100644
--- a/3.x/en/docs/articles/3_features/35_contract/outside_account_generation.md
+++ b/3.x/en/docs/articles/3_features/35_contract/outside_account_generation.md
@@ -4,23 +4,23 @@ Author : Bai Xingqiang | FISCO BCOS Core Developer

## What is an account?
-FISCO BCOS uses accounts to identify and differentiate each individual user。In a blockchain system that uses a public-private key system, each account corresponds to a pair of public and private keys.。where the address string obtained by the public key calculated by a secure one-way algorithm such as hashing is used as the account name of the account, i.e., the account address。The private key known only to the user corresponds to the password in the traditional authentication model.。Such accounts with private keys are also often referred to as external accounts or accounts.。
+FISCO BCOS uses accounts to identify and differentiate each individual user。In a blockchain system that uses a public-private key system, each account corresponds to a pair of public and private keys。The address string derived from the public key by a secure one-way algorithm such as hashing serves as the account's name, that is, the account address。The private key, known only to the user, corresponds to the password in the traditional authentication model。Accounts that hold private keys in this way are also often referred to as external accounts, or simply accounts。

-The smart contracts deployed to the chain in FISCO BCOS also correspond to an account in the underlying storage, which we call contract accounts.。The difference with an external account is that the address of the contract account is determined at the time of deployment, calculated from the deployer's account address and the information in its account, and the contract account does not have a private key.。
+The smart contracts deployed to the chain in FISCO BCOS also correspond to an account in the underlying storage, which we call contract accounts。The difference with an external account is that the address of the contract account is determined at the time of deployment, calculated from the deployer's account address and the information in its account, and the contract account does not have a private key。

-This article will
focus on the generation of external accounts and will not discuss contract accounts. For more information about how to use the generated external accounts, please refer to the documentation of each SDK of FISCO BCOS.。 +This article will focus on the generation of external accounts and will not discuss contract accounts. For more information about how to use the generated external accounts, please refer to the documentation of each SDK of FISCO BCOS。 ## Account usage scenarios In FISCO BCOS, the account has the following usage scenarios: -- The SDK must hold an external account private key. Use the external account private key to sign transactions.。In a blockchain system, each call to the contract write interface is a transaction, and each transaction needs to be signed with the account's private key.。 -- Permission control requires the address of an external account。FISCO BCOS permission control model, based on the external account address of the sender of the transaction, to determine whether there is permission to write data.。 -- Contract account address uniquely identifies the contract on the blockchain。After each contract is deployed, the underlying node generates a contract address for it, which needs to be provided when calling the contract interface.。 +- The SDK needs to hold the external account private key and use the external account private key to sign the transaction。In a blockchain system, each call to the contract write interface is a transaction, and each transaction needs to be signed with the account's private key。 +- Permission control requires the address of an external account。FISCO BCOS permission control model, based on the external account address of the sender of the transaction, to determine whether there is permission to write data。 +- Contract account address uniquely identifies the contract on the blockchain。After each contract is deployed, the underlying node generates a contract address for it, which needs to be provided when calling 
the contract interface.
## **Generation of external accounts**
-For convenience, references to external accounts are referred to below as simply accounts.。FISCO BCOS provides the get _ account.sh script and the Web3SDK interface to create an account, and the console and Web3SDK also support loading the created account private key for transaction signing.。Users can store the account private key as a file in PEM or PKCS12 format。The PEM format uses plaintext to store the private key, while the PKCS12 format uses the password provided by the user to encrypt the private key.(https://zh.wikipedia.org/wiki/PKCS_12)。
+For convenience, external accounts are referred to below simply as accounts. FISCO BCOS provides the get _ account.sh script and the Web3SDK interface for creating an account, and the console and Web3SDK also support loading a created account's private key for transaction signing. Users can store the account private key as a file in PEM or PKCS12 format; the PEM format stores the private key in plaintext, while the PKCS12 format encrypts it with a user-supplied password (https://zh.wikipedia.org/wiki/PKCS_12).
### Use the get _ account.sh script to generate an account
@@ -69,11 +69,11 @@ Verifying - Enter Export Password:
[INFO] Private Key (p12) : accounts/0x02f1b23310ac8e28cb6084763d16b25a2cc7f5e1.p12
```
-### Using Java-SDK interface generation account
+### Generate accounts using the Java-SDK interface
-Sometimes we need to generate a new account in the code, this time we need to use Java-SDK(The project is called web3SDK)Provided Interface。
+Sometimes we need to generate a new account in code; in that case we use the interface provided by the Java-SDK (the project is called web3SDK).
-As shown below, Java-The SDK provides functions such as generating an account, calculating an account address, and obtaining a public key.
Compared with the get _ account.sh script, the SDK supports the generation of state secret accounts.。
+As shown below, the Java-SDK provides functions such as generating an account, calculating an account address, and obtaining a public key. Compared with the get _ account.sh script, the Java-SDK additionally supports generating Chinese national cryptography ("guomi") accounts.
```
import org.fisco.bcos.web3j.crypto.EncryptType
@@ -82,7 +82,7 @@ import org.fisco.bcos.web3j.crypto.gm.GenCredential
// Create a regular account
EncryptType.encryptType = 0;
-/ / To create a national secret account and send transactions to the national secret blockchain node, you need to use the national secret account.
+// To create a national cryptography (guomi) account; sending transactions to a guomi blockchain node requires a guomi account
// EncryptType.encryptType = 1;
Credentials credentials = GenCredential.create();
// Account address
@@ -93,15 +93,15 @@ String privateKey = credentials.getEcKeyPair().getPrivateKey().toString(16);
String publicKey = credentials.getEcKeyPair().getPublicKey().toString(16);
```
-The above interfaces can be used directly in Java business code, while Java-The SDK also provides the function of loading private keys stored in PEM format or PKCS12 format.
For more information, see here.(https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/sdk.html#id5)。
+The above interfaces can be used directly in Java business code. The Java-SDK also provides the ability to load private keys stored in PEM or PKCS12 format; for details, [please refer to here](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/sdk.html#id5).
## Account address calculation method
-The FISCO BCOS account address is calculated from the ECDSA public key.-256sum hash, taking the hexadecimal representation of the last 20 bytes of the calculation result as the account address, each byte requires two hexadecimal representations, so the account address length is 40。FISCO BCOS account address compatible with Ethereum。
+The FISCO BCOS account address is calculated from the ECDSA public key: the keccak-256 hash of the public key is computed, and the hexadecimal representation of the last 20 bytes of the result is taken as the account address. Each byte requires two hexadecimal characters, so the account address is 40 characters long. FISCO BCOS account addresses are compatible with Ethereum.
The following is a brief demonstration of the account address calculation steps:
-- Use OpenSSL to generate the elliptic curve private key, and the parameters of the elliptic curve use secp256k1。Run the following command to generate the private key in PEM format and save it in the ecprivkey.pem file。
+- Use OpenSSL to generate an elliptic curve private key with the secp256k1 curve parameters. Run the following command to generate the private key in PEM format and save it to the ecprivkey.pem file.
```
openssl ecparam -name secp256k1 -genkey -noout -out ecprivkey.pem
@@ -121,5 +121,5 @@ dcc703c0e500b653ca82273b7bfad8045d85a470
## **SUMMARY**
-This article briefly introduces the definition, generation and calculation method of FISCO BCOS external account.。In the future, we will also open more useful supporting components
to help developers manage their accounts more easily and securely.。
+This article briefly introduced the definition, generation, and address calculation of FISCO BCOS external accounts. In the future, we will also open-source more useful supporting components to help developers manage their accounts more easily and securely.
diff --git a/3.x/en/docs/articles/3_features/35_contract/pre-compiled_contract_architecture_design.md b/3.x/en/docs/articles/3_features/35_contract/pre-compiled_contract_architecture_design.md
index 7b29c8e4b..622400fbd 100644
--- a/3.x/en/docs/articles/3_features/35_contract/pre-compiled_contract_architecture_design.md
+++ b/3.x/en/docs/articles/3_features/35_contract/pre-compiled_contract_architecture_design.md
@@ -2,48 +2,48 @@ Author : Bai Xingqiang | FISCO BCOS Core Developer
-FISCO BCOS 2.0 proposes a pre-compiled contract framework that allows users to use C++Write a smart contract.。Precompiled contracts can achieve higher performance because they do not enter EVM execution, which is suitable for scenarios where the contract logic is simple but frequently called, or where the contract logic is fixed and computationally intensive.。
+FISCO BCOS 2.0 introduces a precompiled contract framework that allows users to write smart contracts in C++. Precompiled contracts achieve higher performance because they do not enter the EVM for execution, which suits scenarios where the contract logic is simple but frequently called, or fixed but computationally intensive.
-This article describes the origin and implementation of precompiled contracts, including the following.
+This article describes the origin and implementation of precompiled contracts, covering the following:
- Solidity contract usage and the problems encountered;
-FISCO BCOS 2.0 adds precompiled contracts, its architecture design and execution process flow;
+The new precompiled contracts in FISCO BCOS 2.0, their architecture design, and their execution flow;
- Why precompiled contracts outperform Solidity in certain scenarios;
-- Use of Precompiled Contracts in FISCO BCOS Version 2.0。
+- The use of precompiled contracts in FISCO BCOS 2.0.
## Use and Deficiency of Solidity Contract
-Using the Solidity contract in the FISCO BCOS platform generally requires the following five steps。After the Solidity contract is developed, the compiled contract must be deployed to the underlying platform, and the contract interface can be called based on the address returned by the platform.。
+Using a Solidity contract on the FISCO BCOS platform generally involves the following five steps. After the Solidity contract is developed, the compiled contract must be deployed to the underlying platform, and the contract interfaces can then be called via the address returned by the platform.
![](../../../../images/articles/pre-compiled_contract_architecture_design/IMG_5426.PNG)
-The advantage of the Solidity contract is that it is fully compatible with Ethereum, rich in development resources and more general, but the Solidity contract also has the problems of low virtual machine execution performance, high cost and complex development.。Especially for the scenario of alliance chain governance, some parameters need to be consistent for all nodes on the chain, which is very suitable for contract management, but if you use Solidity implementation, the deployment steps are not!Often!Compound!miscellaneous!
+The advantages of Solidity contracts are full compatibility with Ethereum, rich development resources, and greater generality; their drawbacks are low virtual-machine execution performance, high cost, and complex development. This is especially true for consortium-chain governance, where some parameters must be consistent across all nodes on the chain; such parameters are well suited to contract-based management, but with a Solidity implementation the deployment steps are extremely complicated!
FISCO-BCOS version 1.3 uses Solidity to implement a set of system contracts, using a proxy contract to manage the other system contracts. The deployment process is shown in the following figure:
![](../../../../images/articles/pre-compiled_contract_architecture_design/IMG_5427.PNG)
-After deploying the system contract, you need to configure the system contract address in the proxy contract, and then configure the proxy contract address in the node configuration file and restart, in order to call this set of system governance contracts, and the subsequent node expansion also needs to be based on the creation node configuration operations, in order to be consistent.。
+After deploying the system contracts, you must configure the system contract addresses in the proxy contract, then configure the proxy contract address in each node's configuration file and restart before this set of governance contracts can be called; subsequent node expansion must repeat the genesis node's configuration steps to stay consistent.
## FISCO BCOS 2.0 adds precompiled contracts
-FISCO BCOS 2.0, inspired by Ethereum's built-in contracts, implements a pre-compiled contract framework。In the future, we will also try to abstract the existing typical business scenarios and develop them into pre-compiled contract templates as the basic capability provided by the underlying layer to help users use FISCO BCOS in their business faster and more
conveniently.。
+FISCO BCOS 2.0, inspired by Ethereum's built-in contracts, implements a precompiled contract framework. In the future, we will also try to abstract typical business scenarios into precompiled contract templates, provided as a basic capability of the underlying layer, to help users apply FISCO BCOS to their business faster and more conveniently.
### Benefits of Precompiled Contracts
-**Access to distributed storage interfaces**Based on this framework, users can access the local DB storage state and implement any logic they need.。
+**Access to distributed storage interfaces.** Based on this framework, users can access the local DB storage state and implement any logic they need.
**Better performance.** Since the implementation is C++ code that is compiled into the underlying layer and does not enter the EVM for execution, it delivers better performance.
-**Get started without learning Solidity language**Based on the FISCO BCOS pre-compiled contract framework, developers can use C.++Develop your own pre-compiled contracts to quickly implement the required business logic without learning the Solidity language。
+**Get started without learning Solidity.** Based on the FISCO BCOS precompiled contract framework, developers can write their own precompiled contracts in C++ and quickly implement the required business logic without learning the Solidity language.
-**Parallel models greatly improve processing power**In version 2.0, we implemented parallel execution of contracts based on precompiled contracts and DAGs.
Users only need to specify the interface conflict domain, and the underlying layer will automatically build a transaction dependency graph according to the conflict domain, and execute transactions in parallel as much as possible according to the dependencies, thus greatly improving the transaction processing capacity.。
+**Parallel model greatly improves processing capacity.** In version 2.0, we implemented parallel contract execution based on precompiled contracts and DAGs. Users only need to specify each interface's conflict domain; the underlying layer automatically builds a transaction dependency graph from the conflict domains and executes transactions in parallel as far as the dependencies allow, greatly improving transaction throughput.
### Precompiled Contracts vs. Ethereum Built-in Contracts
-As mentioned above, the FISCO BCOS precompiled contract is inspired by the Ethereum built-in contract, but the implementation principle is very different.。
+As mentioned above, FISCO BCOS precompiled contracts are inspired by Ethereum's built-in contracts, but the implementation principles are very different.
-Ethereum uses built-in contracts to avoid the cost of complex calculations in EVM. Ethereum currently implements 8 functions using built-in contracts (as shown in the following table)。As you can see, the Ethereum built-in contract takes up 0x1-0x8 These 8 addresses, each built-in contract is actually a local function call, can only be used for state-independent calculations。
+Ethereum uses built-in contracts to avoid the cost of complex calculations in EVM.
Ethereum currently implements 8 functions as built-in contracts (as shown in the following table). As you can see, Ethereum's built-in contracts occupy the eight addresses 0x1-0x8, and each built-in contract is actually a local function call that can only be used for state-independent calculations.
![](../../../../images/articles/pre-compiled_contract_architecture_design/IMG_5428.PNG)
@@ -53,7 +53,7 @@ call(gasLimit, to, value, inputOffset, inputSize, outputOffset, outputSize)
Including built-in contract address, input parameter offset, input parameter size, output parameter offset, and output parameter size, this is not a simple matter for users.
-The pre-compiled contract framework of FISCO BCOS supports complex parameter types and supports reading and storing data through AMDB.。The address of each pre-compiled contract is fixed, and multiple interfaces can be implemented in the contract, which is called in exactly the same way as the native Solidity.。
+The FISCO BCOS precompiled contract framework supports complex parameter types and supports reading and storing data through AMDB. Each precompiled contract has a fixed address and can implement multiple interfaces, and it is called in exactly the same way as a native Solidity contract.
**The following figure is a more intuitive comparison**:
@@ -63,32 +63,32 @@ The pre-compiled contract framework of FISCO BCOS supports complex parameter typ
## FISCO BCOS Precompiled Contract Architecture
-This section gives you a clear understanding of the location of the precompiled contract module in FISCO BCOS and the execution process of the precompiled contract.。
+This section clarifies where the precompiled contract module sits within FISCO BCOS and how a precompiled contract is executed.
-As shown in the following figure, the precompiled contract is called by the block execution
engine. When the execution engine executes the block, it determines whether to use the EVM or the precompiled contract engine based on the address of the called contract.。 +As shown in the following figure, the precompiled contract is called by the block execution engine, and the block validator executes the block through the block execution engine. When the execution engine executes the block, it determines whether to use the EVM or the precompiled contract engine based on the address of the called contract。 ![](../../../../images/articles/pre-compiled_contract_architecture_design/IMG_5430.PNG) -When the called contract address is an EVM contract, the execution engine creates and executes the EVM to execute the transaction;When the called contract address is a registered precompiled contract address, the execution engine executes the transaction by calling the precompiled contract interface corresponding to the address.。 +When the called contract address is an EVM contract, the execution engine creates and executes the EVM to execute the transaction;When the called contract address is a registered precompiled contract address, the execution engine executes the transaction by calling the precompiled contract interface corresponding to the address。 -**The precompiled contract execution process is shown in the following figure.**: +**The precompiled contract execution process is shown in the following figure**: ![](../../../../images/articles/pre-compiled_contract_architecture_design/IMG_5431.PNG) -The execution engine first gets the contract object based on the precompiled contract address, and then gets the execution result by calling the call interface of the contract object.。The operations in the call interface mainly include: +The execution engine first gets the contract object based on the precompiled contract address, and then gets the execution result by calling the call interface of the contract object。The operations in the call interface mainly include: -1. 
Resolve the called interface based on the call parameters.
-2. Parse the incoming parameters according to the ABI encoding.
-3. Execute the called contract interface.
-4. The result ABI encoding will be executed and returned.
+1. Resolve the called interface from the call parameters
+2. Parse the input parameters according to the ABI encoding
+3. Execute the called contract interface
+4. ABI-encode the execution result and return it
-Therefore, if a developer wants to develop a precompiled contract, he only needs to implement the call interface of his precompiled contract and register the address of the implemented contract in the execution engine.。
+Therefore, to develop a precompiled contract, a developer only needs to implement its call interface and register the contract's address with the execution engine.
## Application of Precompiled Contracts in FISCO BCOS 2.0
### System contract
-FISCO BCOS 2.0 implements a set of system contracts based on pre-compiled contracts, which are used to manage the chain configuration that requires consensus, including the addition and deletion of nodes in the group, the transformation of node identities, the management of CNS services, the management of chain permissions, and the use of CRUD contracts.。
+FISCO BCOS 2.0 implements a set of system contracts based on precompiled contracts. They manage chain configuration that requires consensus, including adding and removing nodes in a group, changing node identities, managing CNS services, managing chain permissions, and supporting CRUD contracts.
**The current system contract and address of FISCO BCOS are as follows**:
@@ -96,4 +96,4 @@ FISCO BCOS 2.0 implements a set of system contracts based on pre-compiled contra
### CRUD Contract Support
-FISCO BCOS 2.0 implements a pre-compiled contract corresponding to AMDB storage based on a pre-compiled
contract, enabling users to access AMDB storage in Solidity, which is the CRUD contract of FISCO BCOS 2.0.。In this way, users can store contract data in the underlying AMDB storage, separating contract logic from data, improving contract processing performance on the one hand, and making it easier to upgrade contract logic on the other.。
\ No newline at end of file
+Based on the precompiled contract framework, FISCO BCOS 2.0 implements a precompiled contract backed by AMDB storage, enabling users to access AMDB storage from Solidity; this is the CRUD contract of FISCO BCOS 2.0. In this way, users can store contract data in the underlying AMDB storage, separating contract logic from data, which improves contract processing performance on the one hand and makes contract logic easier to upgrade on the other.
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/35_contract/pre-compiled_contract_rapid_development.md b/3.x/en/docs/articles/3_features/35_contract/pre-compiled_contract_rapid_development.md
index 6492345c2..7cd16e0a8 100644
--- a/3.x/en/docs/articles/3_features/35_contract/pre-compiled_contract_rapid_development.md
+++ b/3.x/en/docs/articles/3_features/35_contract/pre-compiled_contract_rapid_development.md
@@ -2,9 +2,9 @@ Author : Bai Xingqiang | FISCO BCOS Core Developer
-In the previous article, we highlighted [the architectural design of the FISCO BCOS precompiled contract](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485333&idx=1&sn=5561ae72507526380381856c307ffe61&chksm=9f2ef589a8597c9f6ed68bd2eb7f46fb8083f302dfdd47ae5ef75dd9d5f114631d21dfbedc9c&token=422221390&lang=zh_CN#rd)The framework has many advantages such as fixed address, no need to deploy, and higher local execution performance.。Because precompiled contracts are used in exactly the same way as ordinary Solidity contracts, the framework can achieve extremely high running speeds without changing the client developer experience, which can be described as a butcher's
knife for scenarios with relatively certain logic and the pursuit of high speed and concurrency.。
+In the previous article, we introduced [the architectural design of the FISCO BCOS precompiled contract](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485333&idx=1&sn=5561ae72507526380381856c307ffe61&chksm=9f2ef589a8597c9f6ed68bd2eb7f46fb8083f302dfdd47ae5ef75dd9d5f114631d21dfbedc9c&token=422221390&lang=zh_CN#rd). The framework has many advantages, such as fixed addresses, no deployment step, and higher native execution performance. Because precompiled contracts are used in exactly the same way as ordinary Solidity contracts, the framework achieves extremely high execution speed without changing the client developer experience, making it a powerful tool for scenarios with relatively fixed logic that demand high speed and concurrency.
-Today, I will use the HelloWorld contract as an example to show you how to use the pre-compiled contract version of HelloWorld.。Note that this chapter requires you to have a certain C++Development experience, and read in detail [FISCO BCOS 2.0 Principle Analysis:](http://mp.weixin.qq.com/s?__biz=MzU5NTg0MjA4MA==&mid=2247483970&idx=1&sn=eb2049961515acafe8a2d29e8b0e28e9&chksm=fe6a870dc91d0e1b016fe96e97d519ff1e65bd7d79143f94467ff15e0cdf79ccb44293e52a7b&scene=21#wechat_redirect)[Distributed Storage Architecture Design](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485336&idx=1&sn=ea3a7119634c1c27daa4ec2b9a9f278b&chksm=9f2ef584a8597c9288f8c5000c7def47c3c5b9dc64f25221985cd9e3743b9364a93933e51833&token=422221390&lang=zh_CN#rd)。The five steps shown in the following figure are the only way to develop a precompiled contract.
I will implement the HelloWorld precompiled contract step by step, and then use the console and Solidity contract to call the HelloWorld precompiled contract.。
+Today, I will use the HelloWorld contract as an example to show how to build the precompiled version of HelloWorld. Note that this chapter assumes some C++ development experience and a careful reading of [FISCO BCOS 2.0 Principle Analysis:](http://mp.weixin.qq.com/s?__biz=MzU5NTg0MjA4MA==&mid=2247483970&idx=1&sn=eb2049961515acafe8a2d29e8b0e28e9&chksm=fe6a870dc91d0e1b016fe96e97d519ff1e65bd7d79143f94467ff15e0cdf79ccb44293e52a7b&scene=21#wechat_redirect) [Distributed Storage Architecture Design](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485336&idx=1&sn=ea3a7119634c1c27daa4ec2b9a9f278b&chksm=9f2ef584a8597c9288f8c5000c7def47c3c5b9dc64f25221985cd9e3743b9364a93933e51833&token=422221390&lang=zh_CN#rd). The five steps shown in the following figure are the standard path for developing a precompiled contract; I will implement the HelloWorld precompiled contract step by step, and then call it from the console and from a Solidity contract.
![](../../../../images/articles/pre-compiled_contract_rapid_development/IMG_5433.PNG)
@@ -12,7 +12,7 @@ Today, I will use the HelloWorld contract as an example to show you how to use t
## HelloWorld Precompiled Contract Development
-Let's first look at the Solidity version of the HelloWorld contract we want to implement.。Solidity version of HelloWorld, there is a member name for storing data, two interfaces get(),set(string)for reading and setting the member variable respectively。
+Let's first look at the Solidity version of the HelloWorld contract we want to implement. It has a member, name, for storing data, and two interfaces, get() and set(string), for reading and setting the member variable respectively.
```
pragma solidity ^0.4.24;
@@ -33,7 +33,7 @@ contract HelloWorld{
### step1 Defining the HelloWorld
Interface
-Solidity's interface calls are encapsulated as a transaction, where transactions that call read-only interfaces are not packaged into blocks, while write-interface transactions are packaged into blocks.。Since the underlying layer needs to determine the called interface and parse the parameters based on the ABI code in the transaction data, the interface needs to be defined first。The ABI interface rules for precompiled contracts are exactly the same as Solidity. When defining a precompiled contract interface, you usually need to define a Solidity contract with the same interface.**Interface Contract**。The interface contract needs to be used when calling the precompiled contract.。
+Solidity interface calls are encapsulated as transactions: transactions that call read-only interfaces are not packaged into blocks, while transactions that call write interfaces are. Since the underlying layer determines the called interface and parses the parameters from the ABI encoding in the transaction data, the interface must be defined first. The ABI rules for precompiled contract interfaces are exactly the same as Solidity's, so when defining a precompiled contract interface you usually also define a Solidity **interface contract** with the same interface; this interface contract is needed when calling the precompiled contract.
```
pragma solidity ^0.4.24;
@@ -46,7 +46,7 @@ contract HelloWorldPrecompiled{
### step2 Design storage structure
-When precompiled contracts involve storage operations, you need to determine the stored table information.(Table name and table structure.
The stored data is abstracted into a table structure in FISCO BCOS.)。This is in the previous article [Distributed Storage Architecture Design](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485336&idx=1&sn=ea3a7119634c1c27daa4ec2b9a9f278b&chksm=9f2ef584a8597c9288f8c5000c7def47c3c5b9dc64f25221985cd9e3743b9364a93933e51833&token=422221390&lang=zh_CN#rd)have introduced。If variable storage is not involved in the contract, you can ignore this step。For HelloWorld, we design the following table。The table only stores a pair of key-value pairs. The key field is hello _ key, and the value field is hello _ value to store the corresponding string value.(string)Interface modification, through get()interface acquisition。
+When a precompiled contract involves storage operations, you need to determine the table information to be stored (table name and table structure; stored data is abstracted into a table structure in FISCO BCOS). This was introduced in the previous article, [Distributed Storage Architecture Design](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485336&idx=1&sn=ea3a7119634c1c27daa4ec2b9a9f278b&chksm=9f2ef584a8597c9288f8c5000c7def47c3c5b9dc64f25221985cd9e3743b9364a93933e51833&token=422221390&lang=zh_CN#rd). If the contract does not involve variable storage, you can skip this step. For HelloWorld, we design the following table. The table stores only a single key-value pair.
Its key field is hello _ key, and its value field is hello _ value, which stores the string value that is modified through the set(string) interface and retrieved through the get() interface.
![](../../../../images/articles/pre-compiled_contract_rapid_development/IMG_5434.PNG)
@@ -60,7 +60,7 @@ virtual bytes call(std::shared_ptr _context,
bytesConstRef _param, Address const& _origin) = 0;
```
-The call function has three parameters, _ context saves the context of the transaction execution, _ param is the parameter information of the calling contract, the corresponding contract interface and the parameters of the interface can be obtained from _ param parsing, _ origin is the transaction sender, used for permission control.。 Next, we have the source code**FISCO-BCOS/libprecompiled/extension**directory implements the HelloWorldPrecompiled class, overloads the call function, and implements get()/set(string)Two interfaces。
+The call function has three parameters: _ context holds the transaction execution context; _ param carries the call's parameter information, from which the target contract interface and its parameters can be parsed; _ origin is the transaction sender, used for permission control. Next, in the **FISCO-BCOS/libprecompiled/extension** directory of the source code, we implement the HelloWorldPrecompiled class, override the call function, and implement the get()/set(string) interfaces.
##### Interface Registration:
@@ -121,13 +121,13 @@ else
##### Parsing and returning parameters:
-The parameters when calling the contract are included in the _ param parameter of the call function and are encoded in the Solidity ABI format,
using dev::eth::The ContractABI tool class can parse parameters, and the return value of the same interface needs to be encoded according to the encoding grid.。
+The parameters of a contract call are contained in the _ param argument of the call function and are encoded in the Solidity ABI format; the dev::eth::ContractABI utility class can parse them, and the interface's return value must likewise be encoded in the same format.
In the dev::eth::ContractABI class, we use two interfaces, abiIn and abiOut: the former serializes user parameters, and the latter parses parameters from serialized data.
##### HelloWorldPrecompiled implementation:
-Considering the reading experience on the mobile phone, we introduce the internal implementation of the call interface in blocks and omit some error handling logic. For detailed code implementation, please refer to the FISCO BCOS 2.0 document user manual.-> Smart Contract Development-> [Precompiled Contract Development](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/smart_contract.html#id2)。
+For a better reading experience on mobile, we present the internal implementation of the call interface in blocks and omit some error-handling logic. For the full code, please refer to the FISCO BCOS 2.0 user manual -> Smart Contract Development -> [Precompiled Contract Development](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/smart_contract.html#id2).
```
bytes HelloWorldPrecompiled::call(dev::blockverifier::ExecutiveContext::Ptr _context,
@@ -201,13 +201,13 @@ else if (func == name2Selector[HELLO_WORLD_METHOD_SET])
### step4 Assign and register a contract address
-When FSICO BCOS 2.0 executes a transaction, the contract address is used to distinguish whether it is a pre-compiled contract, so after the pre-compiled contract is developed, it needs to be registered as the pre-compiled
contract registration address at the bottom。The version 2.0 address space is divided as follows: ![](../../../../images/articles/pre-compiled_contract_rapid_development/IMG_5435.PNG) -The user-allocated address space is 0x5001-0xffff, the user needs to assign an unused address to the newly added precompiled contract.**Precompiled contract addresses must be unique and non-conflicting**。 +The user-allocated address space is 0x5001-0xffff, and the user needs to allocate an unused address for the newly added precompiled contract**Precompiled contract addresses must be unique and non-conflicting**。 -Developers need to modify FISCO-BCOS / cmake / templates / UserPrecompiled.h.in file to register the address of the HelloWorldPrecompiled contract in the registerUserPrecompiled function(**Requires v2.0.0-Rc2 and above versions**)register the HelloWorldPrecompiled contract as follows: +Developers need to modify the FISCO-BCOS / cmake / templates / UserPrecompiled.h.in file to register the address of the HelloWorldPrecompiled contract in the registerUserPrecompiled function(**Requires v2.0.0-rc2 or later**)register the HelloWorldPrecompiled contract as follows: ``` void ExecutiveContextFactory::registerUserPrecompiled(ExecutiveContext::Ptr context) @@ -219,7 +219,7 @@ void ExecutiveContextFactory::registerUserPrecompiled(ExecutiveContext::Ptr cont ### Step5 compiled source code -Refer to FISCO BCOS 2.0 manual-> Get Executable Program-> [source code compilation](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/get_executable.html)。Note that the implementations of HelloWorldPrecompile.cpp and HelloWorldPrecompile.h need to be placed in the FISCO-BCOS / libprecompiled / extension directory。 +Refer to FISCO BCOS 2.0 manual ->Get executable program ->[source code compilation](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/get_executable.html)。Note that the implementations of HelloWorldPrecompiled.cpp and HelloWorldPrecompiled.h 
need to be placed in the FISCO-BCOS/libprecompiled/extension directory. ## HelloWorld precompiled contract call @@ -266,5 +266,5 @@ Deploy the HelloWorldHelper contract, and then call the HelloWorldHelper contrac ![](../../../../images/articles/pre-compiled_contract_rapid_development/IMG_5437.JPG) -Here, you can congratulate you on the smooth completion of the development of the HelloWorld precompiled contract, the development process of other precompiled contracts is the same.。 +At this point, congratulations on smoothly completing the development of the HelloWorld precompiled contract; the development process for other precompiled contracts is the same. diff --git a/3.x/en/docs/articles/3_features/35_contract/smart_contract_concept_and_evolution.md b/3.x/en/docs/articles/3_features/35_contract/smart_contract_concept_and_evolution.md index 35c6c66d7..ae7d2347d 100644 --- a/3.x/en/docs/articles/3_features/35_contract/smart_contract_concept_and_evolution.md +++ b/3.x/en/docs/articles/3_features/35_contract/smart_contract_concept_and_evolution.md @@ -5,71 +5,71 @@ Author: Chu Yuzhi | FISCO BCOS Core Developer ## Foreword Since Bitcoin started the blockchain era in 2009, in the past 10 years, with the development of technology and ecology, blockchain-based distributed applications (dapps) have shown explosive growth, and the underlying technology supporting dapps is "blockchain + smart contracts". -The combination of smart contracts and blockchain is widely regarded as a landmark upgrade in the blockchain world.。The first platform that combines blockchain and smart contract technology--The birth of Ethereum is believed to have started"Blockchain 2.0"Times。 +The combination of smart contracts and blockchain is widely regarded as a landmark upgrade in the blockchain world. The birth of Ethereum, the first platform to combine blockchain and smart contract technology, is believed to have opened the "Blockchain 2.0" era. ## What is a smart contract?
In 1996, Nick Szabo introduced the concept of smart contracts in his article "Smart Contracts: Building Blocks for Digital Markets". -The so-called "contract" is the provisions, contracts and other things, which record the conditions of occurrence and the corresponding implementation of the terms, in order to support the right and other operations.;Called"Intelligent", which means automated, programmable。 +A so-called "contract" refers to stipulations, agreements, and the like, which record the conditions under which terms take effect and how they are then executed, in order to support operations such as exercising rights; "smart" means automated and programmable. -So, a smart contract is a programmable contract, which can also be understood as an automatically executed clause contract, which in a computer is an automatically executed program fragment.。It is easier to save the contract and is run by a defined algorithm, given the input, you get the corresponding output, which greatly guarantees the execution of the contract.。 +So a smart contract is a programmable contract; it can also be understood as a contract whose clauses execute automatically, which in a computer is an automatically executed program fragment. Such a contract is easy to store and is run by a well-defined algorithm: given an input, you get the corresponding output, which strongly guarantees the execution of the contract. The analogy of a vending machine can help us better understand the core features of smart contracts. -When the user selects the goods to be purchased and completes the payment, the shipping logic is triggered and the user gets the goods he wants, a process that does not require manual intervention and saves the labor cost of selling the goods.。If you're going to break this contract, you're going to have to physically destroy the vending machine.。Like POS card swipers, EDI (Electronic Data Interchange), etc., can also be used for this type of ratio.。 +When the user selects the goods to be purchased and completes the payment, the shipping logic is triggered and the user gets the goods he wants; this process requires no manual intervention and saves the labor cost of selling the goods. To break this contract, you would have to physically destroy the vending machine. POS terminals, EDI (Electronic Data Interchange), and the like can serve as similar analogies. ## Smart Contracts and Blockchain -Smart contracts were proposed in the last century, and blockchain was only born in 2009, and by definition, smart contracts have little to do with blockchain.。 -Why are smart contracts and blockchain so closely related in these 10 years??Because the blockchain can ensure that the smart contract can not be tampered with, not only the content of the contract can not be tampered with, each call record can not be tampered with.。 +Smart contracts were proposed in the last century, while blockchain was only born in 2009; by definition, smart contracts have little to do with blockchain. +Why, then, have smart contracts and blockchain become so closely related over these 10 years? Because the blockchain can ensure that a smart contract cannot be tampered with: not only the content of the contract, but also every call record, is tamper-proof. The most basic prerequisite for smart contracts to generate value is a strong underlying storage medium, so that they cannot be physically destroyed. -However, the ontology of a smart contract is a piece of code that can be easily tampered with, and how to provide a powerful storage medium for it becomes a problem.。This is exactly what blockchain is good at solving - through the practice of Bitcoin, it proves that blockchain can make electronic records immutable in a distributed environment.。 -At the same time, smart contracts are also feeding the blockchain, which greatly expands the business scenario of the blockchain.。 +However, the body
of a smart contract is a piece of code that can easily be tampered with, so providing a robust storage medium for it becomes a problem. This is exactly what blockchain is good at solving: the practice of Bitcoin proves that blockchain can make electronic records immutable in a distributed environment. +At the same time, smart contracts feed back into the blockchain, greatly expanding its business scenarios. -Combined with smart contracts, blockchain no longer serves a single currency payment, which can be extended to all aspects of life.。The rich application scenarios also create new challenges to the capabilities of the blockchain.。 +Combined with smart contracts, blockchain no longer serves currency payment alone and can be extended to all aspects of life. The rich application scenarios also pose new challenges to the capabilities of the blockchain. ## Blockchain 2.0: The Birth of Ethereum Bitcoin, born in 2009, uses blockchain and other technologies to secure its ecosystem, opening the era of blockchain 1.0. -Users can customize some content through script code, such as how to unlock a fund。These script codes are saved with the transaction, thus enjoying the immutable qualities and being deterministic。So in a way, these scripts can also be seen as smart contracts.。But they don't work。 -First, the script code is not Turing-complete, which limits the functionality of the implementation;Secondly, the development threshold is high, the experience of writing complex logic will be very poor, such as using JVM bytecode to write programs.。 +Users can customize some behavior through script code, such as how to unlock a fund. These scripts are saved with the transaction, and thus enjoy immutability and determinism. So, in a way, these scripts can also be seen as smart contracts. But they are hard to use. +First, the script code is not Turing-complete, which limits the functionality that can be implemented; secondly, the development threshold is high, and the experience of writing complex logic is very poor, comparable to writing programs in JVM bytecode. -In 2013, a young V-god proposed Ethereum, the core of which is to update and verify blockchain data through the state of the world。The biggest difference between Ethereum and Bitcoin is that complex logical operations can be performed through smart contracts.。 -On Ethereum, the language of smart contracts is Solidity, which is a Turing-complete and upper-level language, which greatly expands the scope of smart contracts and reduces the difficulty of writing smart contracts.。 -Because of this, the birth of Ethereum also marks the beginning of the blockchain 2.0 era.。Subsequently, smart contract technology has gradually penetrated multiple business scenarios such as traceability, depository, and supply chain.。 +In 2013, the young Vitalik Buterin proposed Ethereum, whose core is to update and verify blockchain data through a world state. The biggest difference between Ethereum and Bitcoin is that complex logical operations can be performed through smart contracts. +On Ethereum, the smart contract language is Solidity, a Turing-complete high-level language, which greatly expands the scope of smart contracts and lowers the difficulty of writing them. +Because of this, the birth of Ethereum also marks the beginning of the blockchain 2.0 era. Subsequently, smart contract technology gradually penetrated multiple business scenarios such as traceability, evidence storage, and supply chain. ## Status and Prospects of Smart Contracts From a programming perspective, a smart contract is a piece of code. Smart contracts have many differences and limitations compared to conventional code, such as: -- single-threaded execution +- Single-threaded execution - Code execution consumes resources and cannot exceed resource limits -- It is currently difficult to obtain off-chain data, such as weather information, race
results, etc. -+ Other restrictions, such as TPS +- It is currently difficult to obtain off-chain data, such as weather information, race results, etc. +- Other restrictions, such as TPS -These characteristics make the current smart contract ecology take the governance of resources on the chain as the core.。Like the various ERC standards and governance solutions on Ethereum;There are various resource models on EOS, such as CPU, RAM, economic model, Rex, Bancor protocol, etc.。 -Clearly, with the current ecology, smart contracts have limited impact on the real world.。 -But things are always evolving。There has been a lot of research dedicated to breaking through these limitations, typically Oracle (the oracle, but often called the oracle), which allows smart contracts to interact with off-chain, thus greatly improving the use of smart contracts, as if a computer were connected to the Internet.;Another example is those attempts to break through the performance bottlenecks of the chain itself, such as payment channels, cross-chain, plasma, rollups, all of which break the shackles of security and performance from different perspectives.。 +These characteristics make the current smart contract ecology center on the governance of on-chain resources. Examples include the various ERC standards and governance solutions on Ethereum, and the various resource models on EOS, such as CPU, RAM, the economic model, REX, and the Bancor protocol. +Clearly, with the current ecology, smart contracts have limited impact on the real world. +But things are always evolving. A lot of research is dedicated to breaking through these limitations. A typical example is the oracle, which allows smart contracts to interact with the off-chain world and thus greatly expands their usefulness, as if a
computer were connected to the Internet; another example is the set of attempts to break through the performance bottlenecks of the chain itself, such as payment channels, cross-chain solutions, Plasma, and rollups, all of which break the shackles of security and performance from different perspectives. +There is no doubt that smart contracts will play an increasingly important role; in the future, with the landing of Ethereum 2.0, a new blockchain era may be opened. ## Smart Contract Technology -Ethereum uses Solidity as the smart contract language, a high-level programming language created to implement smart contracts that can run on nodes that allow Ethereum programs。The language absorbs C.++Some features of JavaScript, for example, it is a statically typed language, supports inheritance, libraries, etc.。 -In addition to Solidity, the smart contract technology of each platform is also different. Next, we will introduce the technology adopted by other platforms from the public chain and alliance chain.。 +Ethereum uses Solidity as its smart contract language, a high-level programming language created to implement smart contracts and run on Ethereum nodes. The language absorbs some features of C++ and JavaScript; for example, it is a statically typed language and supports inheritance, libraries, and so on. +In addition to Solidity, the smart contract technology of each platform is also different.
Next, we will introduce the technologies adopted by other platforms, covering both public chains and alliance chains. ### Public Chain -First of all, you might want to know the smart contract technology of the three public chains.。 +First, you might want to know the smart contract technologies of the three major public chains. ![](../../../../images/articles/smart_contract_concept_and_evolution/IMG_5438.PNG) ### Alliance Chain In addition to the public chain, the alliance chain is also an important type of blockchain. Compared to the public chain, the complexity of consensus in an alliance chain is greatly reduced, so it has higher execution efficiency. -Alliance chains are favored by enterprise-level organizations, and in general, alliances are formed between relevant organizations to share data through alliance chains.。The alliance chain can cover supply chain finance, judicial deposit, traceability and other scenarios, and will be combined with IOT, AI and other technologies in the future.。 -In today's alliance chain ecology, except for the Fabric that uses chaincode, most platforms use Solidity as a smart contract language, as is the case with FISCO BCOS.。 +Alliance chains are favored by enterprise-level organizations; in general, alliances are formed between relevant organizations to share data through the alliance chain. Alliance chains can cover supply chain finance, judicial evidence storage, traceability, and other scenarios, and will be combined with IoT, AI, and other technologies in the future. +In today's alliance chain ecology, except for Fabric, which uses chaincode, most platforms use Solidity as their smart contract language, as is the case with FISCO BCOS. -Nowadays, Solidity can be said to occupy the C position of smart contracts, and mastering Solidity is an important part of learning smart contracts and blockchain.。Later series will also provide an in-depth introduction to how to write, run, and test smart contracts with Solidity.。 -In addition to Solidity, 
some smart contract languages such as WebAssembly and Libra's Move are also in development, so you can keep an eye on them.。 +Nowadays, Solidity can be said to take center stage among smart contract languages, and mastering Solidity is an important part of learning smart contracts and blockchain. Later articles in this series will also provide an in-depth introduction to how to write, run, and test smart contracts with Solidity. +In addition to Solidity, some smart contract languages such as WebAssembly and Libra's Move are also in development, so you can keep an eye on them. ## Smart contract operation analysis @@ -91,27 +91,27 @@ contract HelloWorld{ } ``` -The function of this Solidity code is to access the _ num field。This field is called a "state variable" and is persisted by the blockchain.。 -Users can deploy this code on Ethereum or similar blockchains, and successful deployment means that the smart contract can no longer be modified, as long as the underlying blockchain is not destroyed, the contract will always exist.。Anyone can call the contract interface through the "contract address," and each call will be recorded on the chain.。 -Before explaining how this code works, let's review how traditional java programs work.。 +The function of this Solidity code is to access the _num field. This field is called a "state variable" and is persisted by the blockchain. +Users can deploy this code on Ethereum or similar blockchains; successful deployment means the smart contract can no longer be modified, and as long as the underlying blockchain is not destroyed, the contract will always exist. Anyone can call the contract's interfaces through the "contract address," and each call is recorded on the chain. +Before explaining how this code works, let's review how traditional Java programs work. First, after the user compiles the Java code, it is saved on disk as bytecode; the user then invokes the program, whose execution is hosted by the JVM; call parameters may be logged
during program execution, or IO may be performed with the disk. -The execution of Solidity is similar to this。The difference is that the media has changed from hard drives to blockchains and from stand-alone to distributed.。 -After the code is deployed, it is stored as bytecode on each node。When the user asks to call a function, the call request will be included in the transaction and packaged on a block, which means that the call is legal once the whole network has reached a consensus on the block.。 -Next, the EVM calls the bytecode, which is responsible for accessing the underlying state variables, like the IO of traditional programming.。 +The execution of Solidity is similar. The difference is that the medium has changed from hard drives to the blockchain, and from stand-alone to distributed. +After the code is deployed, it is stored as bytecode on each node. When a user requests a function call, the call request is included in a transaction and packaged into a block; once the whole network reaches consensus on the block, the call is considered legal. +Next, the EVM executes the bytecode, which accesses the underlying state variables, much like the IO of a traditional program. ![](../../../../images/articles/smart_contract_concept_and_evolution/IMG_5439.PNG) -From the code alone, contract development seems to be nothing more than that, a single contract only needs to operate around the field, for many simple businesses, it's just CRUD.。 +From the code alone, contract development seems simple: a single contract only needs to operate on its fields, and for many simple businesses it is just CRUD. But its complexity lies precisely here: the contract runs in the blockchain environment and cannot be modified. -So if a bug occurs, a new contract must be deployed, which challenges the maintainability of the contract。And, once the business becomes complex, it is prone to security vulnerabilities, resulting in the loss of assets on
the chain.。Also consider the cost of completing code writing, logic execution, and data storage。 -In summary, writing a contract is not difficult, but writing a good contract requires a certain level of skill.。 +So if a bug occurs, a new contract must be deployed, which challenges the maintainability of the contract. Moreover, once the business becomes complex, security vulnerabilities become likely, resulting in the loss of on-chain assets. The costs of code writing, logic execution, and data storage must also be considered. +In summary, writing a contract is not difficult, but writing a good contract requires a certain level of skill. ## Conclusion -This article introduces the concept and historical evolution of smart contracts.。 -Smart contracts are a technology proposed in the last century that has taken on new life under the blockchain wave。On the other hand, the wide application of smart contracts has greatly promoted the development of blockchain.。 +This article introduced the concept and historical evolution of smart contracts. +Smart contracts are a technology proposed in the last century that has taken on new life in the blockchain wave. In turn, the wide application of smart contracts has greatly promoted the development of blockchain. -To learn smart contracts, it is recommended to choose the Solidity language because it has some characteristics of traditional languages, and the execution environment is completely based on the blockchain, so the actual business development experience will be different from the previous programming experience.。 +To learn smart contracts, it is recommended to choose the Solidity language: it has some characteristics of traditional languages, yet its execution environment is completely based on
the blockchain, so the actual business development experience will differ from previous programming experience. +Readers can try quickly building a blockchain environment based on FISCO BCOS, deploying the simplest contract, and getting familiar with deployment and invocation, before going deeper into the world of Solidity. diff --git a/3.x/en/docs/articles/3_features/35_contract/smart_contract_test_practice.md b/3.x/en/docs/articles/3_features/35_contract/smart_contract_test_practice.md index 59de30507..f87bc32fa 100644 --- a/3.x/en/docs/articles/3_features/35_contract/smart_contract_test_practice.md +++ b/3.x/en/docs/articles/3_features/35_contract/smart_contract_test_practice.md @@ -4,27 +4,27 @@ Author : MAO Jiayu | FISCO BCOS Core Developer ## Foreword -The development of blockchain is accompanied by the topic of information security。In the short history of Solidity, there has been more than one appalling and far-reaching security attack that has caused irreparable damage to some institutions and organizations.。"Misfortune is born of negligence, testing precedes delivery," and if these defects and vulnerabilities are found in the testing process, losses can be effectively avoided.。Testing is a vital part of smart contract development and delivery。It can effectively check whether the actual results meet the design expectations, help identify errors, check for gaps and leaks.。At the same time, high-quality, reusable testing also helps to improve overall development efficiency.。The previous section describes Solidity's past lives, syntax features, design patterns, programming strategies, and underlying principles.。As the final part of the series, this article will focus on and share Solidity's test scenarios, methods, and practices.。 +The development of blockchain has always been accompanied by the topic of information security. In the short history of Solidity, more than one appalling and far-reaching security attack has caused irreparable damage
to some institutions and organizations. "Misfortune is born of negligence; testing precedes delivery." If such defects and vulnerabilities are found during testing, losses can be effectively avoided. Testing is a vital part of smart contract development and delivery. It effectively checks whether actual results meet design expectations and helps identify errors and fill in gaps. At the same time, high-quality, reusable tests also help improve overall development efficiency. The previous articles described Solidity's history, syntax features, design patterns, programming strategies, and underlying principles. As the final part of the series, this article focuses on and shares Solidity's test scenarios, methods, and practices. ## Pre-preparation Before testing, you need to complete the following steps: building the chain, installing the console, developing the smart contract, compiling and deploying the smart contract, and developing an application using the Java or another SDK. -Detailed preparation can refer to the "[FISCO BCOS zero-based entry, five-step easy to build applications](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485305&idx=1&sn=5a8dc012880aac6f5cd3dacd7db9f1d9&chksm=9f2ef565a8597c73b87fd248c41d1a5b9b0e6a6c6c527baf873498e351e3cb532b77eda9377a&scene=21#wechat_redirect)and FISCO BCOS official documentation, which will not be repeated here.。[FISCO BCOS Official Documentation](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/installation.html)Address。 +For detailed preparation, refer to "[FISCO BCOS zero-based entry, five-step easy to build applications](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485305&idx=1&sn=5a8dc012880aac6f5cd3dacd7db9f1d9&chksm=9f2ef565a8597c73b87fd248c41d1a5b9b0e6a6c6c527baf873498e351e3cb532b77eda9377a&scene=21#wechat_redirect)" and the FISCO BCOS official documentation, which will not be repeated here. [FISCO BCOS Official
Documentation](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/installation.html) address. ![](../../../../images/articles/smart_contract_test_practice/IMG_5483.PNG) ## Test Environment -FISCO BCOS provides console, WeBASE-Front and SDK code testing, these three environments are suitable for different test scenarios: +FISCO BCOS provides console, WeBASE-Front, and SDK code testing. These three environments are suitable for different test scenarios: -- Console: Provides a command line interface for simple debugging by creating contracts and entering call and query instructions in the console.。For very simple contracts。 -- WeBASE-Front: Provides a visual interface and a simple IDE environment。Applicable to contracts with uncomplicated business logic. It is recommended that developers perform some debugging.。 -- SDK: For example, integrate with the Java SDK, create a Java project, and write applications and test code.。For scenarios that require high quality smart contracts, reusable test cases, complex business logic, or continuous integration。 +- Console: provides a command-line interface for simple debugging by creating contracts and entering call and query instructions in the console. Suitable for very simple contracts. +- WeBASE-Front: provides a visual interface and a simple IDE environment. Applicable to contracts with uncomplicated business logic; recommended for developers doing some debugging. +- SDK: for example, integrate the Java SDK, create a Java project, and write applications and test code. Suitable for scenarios that require high-quality smart contracts, reusable test cases, complex business logic, or continuous integration. ### Console Test -FISCO BCOS 2.0 and above provides an easy-to-use command line terminal and an "out-of-the-box" blockchain tool. 
For more information, please refer to the [FISCO BCOS console for detailed explanation, flying general blockchain experience](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485275&idx=1&sn=95e1cb1a961041d5800b76b4a407d24e&chksm=9f2ef547a8597c51a8940548dd1e30f22eb883dd1864371e832bc50188c153989050244f31e5&scene=21#wechat_redirect)》。Next, a contract example will be used to explain how to use the console for testing.。First, we write a HelloWorld contract: +FISCO BCOS 2.0 and above provides an easy-to-use command-line terminal, an "out-of-the-box" blockchain tool. For more information, please refer to "[FISCO BCOS console explained: a flying blockchain experience](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485275&idx=1&sn=95e1cb1a961041d5800b76b4a407d24e&chksm=9f2ef547a8597c51a8940548dd1e30f22eb883dd1864371e832bc50188c153989050244f31e5&scene=21#wechat_redirect)". Next, a contract example will be used to explain how to use the console for testing. First, we write a HelloWorld contract: ``` pragma solidity ^0.4.25; @@ -64,7 +64,7 @@ contract address: 0x34e95689e05255d160fb96437a11ba97bb31809f [group:1]> ``` -After the contract is successfully deployed, you can start testing.。We first print the value of name in this contract, then modify it to a new value, and finally re-query the value in name。 +After the contract is successfully deployed, you can start testing. We first print the value of name in this contract, then modify it to a new value, and finally query name again. ``` [group:1]> call HelloWorld 0x34e95689e05255d160fb96437a11ba97bb31809f name @@ -79,13 +79,13 @@ Hello, test! 
[group:1]> ``` -The above example demonstrates how to deploy and debug a contract using the console。The console design is simple and elegant, and the experience is silky smooth。However, when dealing with complex scenarios, such as when you need to switch external accounts or operate through a visual interface, WeBASE-Front does its part to carry the flag。 +The above example demonstrates how to deploy and debug a contract using the console. The console design is simple and elegant, and the experience is silky smooth. However, for complex scenarios, such as needing to switch external accounts or operate through a visual interface, WeBASE-Front steps up to carry the banner. ### WeBASE-Front Test -WeBASE-Front provides developers with the visual operation of running core information, the IDE environment developed by Solidity, and the private key management function, making it easier for everyone to start the blockchain journey.。About WeBASE-For the introduction of Front, please refer to [WeBASE Node Front Component Function Analysis](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485288&idx=1&sn=d4a69c02496591e9bbf3fa4de150aa5b&chksm=9f2ef574a8597c6210f742514a71537e49bd8f56017d53b48b441ac7c40f65bb7b66b6049aeb&scene=21#wechat_redirect)and [Installation and Deployment Instructions](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Install/developer.html#)》。 +WeBASE-Front provides developers with visualization of core runtime information, a Solidity development IDE environment, and private key management, making it easier for everyone to start their blockchain journey. For an introduction to WeBASE-Front, please refer to "[Function Analysis of WeBASE Node Front Components](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485288&idx=1&sn=d4a69c02496591e9bbf3fa4de150aa5b&chksm=9f2ef574a8597c6210f742514a71537e49bd8f56017d53b48b441ac7c40f65bb7b66b6049aeb&scene=21#wechat_redirect)" and "[Installation and Deployment 
Instructions](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Install/developer.html#).

-Next, we demonstrate a test case that requires switching external accounts, here is the contract code.
+Next, we demonstrate a test case that requires switching external accounts; here is the contract code:

```

@@ -112,11 +112,11 @@ contract BasicAuth {
}
```

-In this example, the contract owner is automatically assigned the contract deployer。The decorator onlyOwner determines that the setOwner function can only be initiated by the _ owner user。in contract management-In the contracts IDE, create the test folder and copy the contract code:
+In this example, the contract owner is automatically set to the contract deployer. The onlyOwner modifier ensures that the setOwner function can only be invoked by the _owner user. In the Contract Management - Contracts IDE, create the test folder and copy in the contract code:

![](../../../../images/articles/smart_contract_test_practice/IMG_5484.PNG)

-Then, click Private key management-Add a new user and create two users, user1 and user2。
+Then, click Private Key Management - Add User and create two users, user1 and user2.

![](../../../../images/articles/smart_contract_test_practice/IMG_5485.PNG)

@@ -124,7 +124,7 @@ At this point, select the deployment contract, the user address window will pop

![](../../../../images/articles/smart_contract_test_practice/IMG_5486.PNG)

-After the contract is deployed, the contract address, contract name, abi, and contract binary are displayed. 
+After the contract is deployed, the contract address, contract name, abi, and contract binary are displayed:

![](../../../../images/articles/smart_contract_test_practice/IMG_5487.PNG)

@@ -132,7 +132,7 @@ Click on the contract call, the call window pops up, the "method" drop-down box

![](../../../../images/articles/smart_contract_test_practice/IMG_5488.PNG)

-Now let's test the setOwner()Function。As mentioned above, the _ owner of this contract is user1, which is called by switching user user2, and the expected result is that the call fails.。We choose the setOwner method and select the private key address as user2:
+Now let's test the setOwner() function. As mentioned above, the _owner of this contract is user1; we switch to user2 to call it, and the expected result is that the call fails. We choose the setOwner method and select the private key address of user2:

![](../../../../images/articles/smart_contract_test_practice/IMG_5489.PNG)

@@ -140,26 +140,26 @@ As expected, the call to this function failed:

![](../../../../images/articles/smart_contract_test_practice/IMG_5490.PNG)

-The above execution results print out the TransactionReceipt of the entire transaction, click Restore to convert to the original output value.。"What you see is what you get," WeBASE-Front makes blockchain easier to use。Using WeBASE-The biggest weakness of Front test is that test cases cannot be reused.。If the contract is very complex, then all test cases have to be manually entered over and over again, and the original operation is inefficient.。
+The above execution results print the TransactionReceipt of the entire transaction; click Restore to convert it back to the original output value. "What you see is what you get": WeBASE-Front makes blockchain easier to use. The biggest weakness of testing with WeBASE-Front is that test cases cannot be reused. If the contract is very complex, all test cases have to be entered manually over and over again, and this manual operation is 
inefficient.

### SDK Test

In system testing, you need to follow the classic AIR practice principles:

-- Automatic: Testing should be fully automated, which is a prerequisite for continuous integration.。
-- Independent: Test cases remain independent of each other, there are no interdependencies and calls.。
-- Repeatable: Test cases must be reusable。Can be repeated across hardware and software environments。
+- Automatic: Tests should run fully automatically, which is also a prerequisite for continuous integration.
+- Independent: Test cases remain independent of each other, with no interdependencies or cross-calls.
+- Repeatable: Test cases must be reusable and repeatable across hardware and software environments.

-To meet the above and even more test practice principles, use the console or WeBASE-The Front approach is somewhat inadequate, and the way to integrate the SDK and write test code is more recommended.。Although this approach takes longer upfront and costs more;However, the late test can greatly reduce the repetitive workload and significantly improve the overall test efficiency.。This is also in line with IT companies' current software testing practices.。This part of the code is usually written by the company's development and testing personnel or quality assurance (QA) engineers.。But in many companies, this part of the code is written by the developer.。Good test code can improve code quality, reduce the difficulty of code refactoring, and improve development efficiency.。
+To meet these test practice principles, the console and WeBASE-Front approaches fall short; integrating the SDK and writing test code is the more recommended way. Although this approach takes longer and costs more upfront, it greatly reduces repetitive workload later and significantly improves overall test efficiency. This is also in line with current software testing practice in IT companies. This part of the code is usually written 
by the company's development and testing personnel or quality assurance (QA) engineers, though in many companies it is written by the developers themselves. Good test code improves code quality, reduces the difficulty of refactoring, and raises development efficiency.

-FISCO BCOS provides multi-language SDKs, such as Java, Python, Node.js, etc. The most mature and commonly used is the Java SDK.。[Using JavaSDK in IDE](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/java_sdk/quick_start.html)The details of creating a new project in the IDE and importing an already provided sample project into the IDE.。In Java development practice, the use of Springboot is more popular, FISCO BCOS also provides the corresponding use case, the relevant configuration documents can be referred to: Spring Boot project configuration, through the spring-boot-Starter developers can quickly download sample projects and import them into their preferred IDE。
+FISCO BCOS provides multi-language SDKs, such as Java, Python, Node.js, etc. 
The most mature and commonly used is the Java SDK. [Using the Java SDK in an IDE](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/java_sdk/quick_start.html) details how to create a new project in the IDE and how to import an already provided sample project. In Java development practice, Spring Boot is popular, and FISCO BCOS provides a corresponding use case; for the relevant configuration, refer to the Spring Boot project configuration documents. Through spring-boot-starter, developers can quickly download the sample project and import it into their preferred IDE.

-After configuring the basic environment of the engineering chain, the following spring-boot-Starter project as an example, introduce specific test steps and key points。
+After configuring the basic project and chain environment, the following takes the spring-boot-starter project as an example to introduce the specific test steps and key points.

1. Write the contract: the HelloWorld contract.
-2. Compile the smart contract and turn it into a Java file, from the above WeBASE-In Front, you can also compile and export Java files。
+2. Compile the smart contract and convert it to a Java file; WeBASE-Front above can also compile and export the Java file.
3. Import the Java file generated by compilation into the project, such as HelloWorld.java.
-4. Based on the above documents, call the contract function and interface, and write relevant test cases, such as ContractTest.。
+4. Based on the above documents, call the contract functions and interfaces and write relevant test cases, such as ContractTest.
5. Based on the gradle plugin provided by Spring, we can use the "./gradlew test" command to run all test cases.
6. 
If continuous integration is required, you can add the step 5 command to the automation script after configuring and initializing FISCO BCOS.

@@ -179,9 +179,9 @@ public void deployAndCallHelloWorld() throws Exception {
}
```

-- Line 4, the HelloWorld contract is deployed。To comply with the principle of independence, it is recommended to deploy a separate contract in each test case to avoid interference from the test case execution sequence to normal testing.。Except where simulated contract dependencies are required。
-- Lines 9 and 11 call set and get, respectively。In order to comply with the principle of repeatability, the test case must be designed to be idempotent, i.e., the expected results of the test case are consistent in any hardware and software environment.。
-- Lines 7 and 12 use the assertion method provided by the Junit framework to determine whether the smart contract execution results meet expectations.。
+- Line 4 deploys the HelloWorld contract. To comply with the independence principle, it is recommended to deploy a separate contract in each test case (except where simulated contract dependencies are required), so that the execution order of test cases cannot interfere with normal testing.
+- Lines 9 and 11 call set and get, respectively. To comply with the repeatability principle, test cases must be designed to be idempotent, i.e., their expected results are consistent in any hardware and software environment.
+- Lines 7 and 12 use the assertion methods provided by the JUnit framework to determine whether the smart contract execution results meet expectations.

It is worth mentioning that in the Java SDK, after any transaction is put on the chain, a TransactionReceipt object is returned; it contains the return status and error message (null if the transaction succeeded), which can be used to determine whether the transaction executed normally, for example:

@@ -194,19 +194,19 @@ The above is 
based on the testing features provided by Springboot and implements

## Type of test

-Like traditional software, smart contract testing can also be divided into functional testing, non-functional testing, security testing and regression testing, which will be described below.。
+Like traditional software, smart contract testing can be divided into functional testing, non-functional testing, security testing and regression testing, which are described below.

### Function test

-Functional testing includes, but is not limited to, unit testing, integration testing, smoke testing, and user acceptance testing.。In addition to user acceptance testing, other tests can be implemented by code written by developers or testers.。One of the important purposes of smart contract testing is to detect the correctness of the contract code and check whether the output value meets expectations given a predetermined input value.。
+Functional testing includes, but is not limited to, unit testing, integration testing, smoke testing, and user acceptance testing. Except for user acceptance testing, these tests can be implemented by code written by developers or testers. One important purpose of smart contract testing is to verify the correctness of the contract code: given a predetermined input value, check whether the output value meets expectations.

-Above we introduced the console, WeBASE-Front and SDK three test environments。In some logically complex smart contracts, one of the test difficulties is constructing test cases。In this scenario, using smart contracts can better simulate and construct test data, and writing smart contracts directly using Solidity is more native and friendly.。
+Above, we introduced three test environments: the console, WeBASE-Front, and the SDK. In some logically complex smart contracts, one of the test difficulties is constructing test cases. In this scenario, smart contracts themselves are better for simulating and constructing test data, and writing smart 
contracts directly using Solidity is more native and friendly.

-Finally, testing is not outside of smart contract development, but an important part of it, and testing also follows the dependency principle, which means that developers need to consider the "testability" of smart contracts when developing.。For example, if the test code is written entirely using the SDK, then the modification of the smart contract may cause the test code to need to make corresponding changes, which will affect the test effect and increase the test cost.。Based on the principle of "non-reliance on changeable parts" in software design, testability also cannot rely on changeable parts.。
+Finally, testing is not outside smart contract development but an important part of it, and it also follows the dependency principle: developers need to consider the "testability" of smart contracts during development. For example, if the test code is written entirely with the SDK, a modification of the smart contract may force corresponding changes in the test code, which affects the test result and increases the test cost. Following the software design principle of "do not depend on changeable parts," testability likewise cannot rely on changeable parts.

-In order to solve the above problems, we introduce test code in the smart contract layer.。This code is designed only as a test component and will not be released to the online environment to decouple the effects of test case changes and encapsulate them in the smart contract layer.。Test contracts as stand-alone components to support development and testing。
+To solve the above problems, we introduce test code at the smart contract layer. This code is designed purely as a test component and is not released to the production environment; the effects of test case changes are thus decoupled and encapsulated in the smart contract layer. Test contracts serve as stand-alone components to support development and testing.
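This decoupling can be sketched from the SDK side: the Java test only triggers a contract-layer test entry point and checks a boolean result, so contract-internal test logic can change without touching the SDK code. The names below (TestDemo, runTests) are hypothetical stand-ins, not an actual generated wrapper API, and a tiny fake implementation is used to keep the sketch self-contained and runnable.

```java
// Sketch of an SDK-side test that stays decoupled from contract-internal test
// logic. "TestDemo" and "runTests" are hypothetical stand-ins for a generated
// contract wrapper; a fake implementation keeps the example self-contained.
public class ContractLayerTestSketch {

    /** Minimal stand-in for a generated contract wrapper. */
    interface TestDemo {
        boolean runTests(); // the contract-layer test entry point
    }

    /** SDK-side test: trigger the contract-layer entry point, check the result. */
    static boolean sdkSideTest(TestDemo demo) {
        return demo.runTests();
    }

    public static void main(String[] args) {
        // Fake "contract" whose internal test logic (here: a uint8-style
        // addition check) could change freely without touching sdkSideTest.
        TestDemo fake = () -> ((1 + 2) & 0xFF) == 3;
        System.out.println(sdkSideTest(fake) ? "contract tests passed" : "contract tests failed");
    }
}
```

The point of the sketch is the shape of the dependency: the SDK side knows only the stable entry point, while everything changeable stays behind it in the contract layer.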
-The test component can first abstract and define some test tool contracts, such as. +The test component can first abstract and define some test tool contracts, such as ``` pragma solidity 0.4.25; @@ -233,7 +233,7 @@ library LibAssert { } ``` -This is the simplest test contract library that provides equal and notEqual methods to determine uint8 variables, and developers can extend their own test tool contracts based on this tool.。 +This is the simplest test contract library that provides equal and notEqual methods to determine uint8 variables, and developers can extend their own test tool contracts based on this tool。 Second, relying on tool contracts, we can write independent test contract cases。 @@ -274,23 +274,23 @@ Event logs --------------------------------------------------------------------------------------------- ``` -In addition to relying on custom smart contract test code, you can also write test cases using the smart contract itself.。In the SDK layer, we only need to implement the test function code in TestDemo.。Even if the test logic changes in the future, there is no need to change the SDK code, thus ensuring the robustness of the test code.。The contract test code component needs to implement the design principles followed by the smart contract in the overall design.。 +In addition to relying on custom smart contract test code, you can also write test cases using the smart contract itself。In the SDK layer, we only need to implement the test function code in TestDemo。Even if the test logic changes in the future, there is no need to change the SDK code, thus ensuring the robustness of the test code。The contract test code component needs to implement the design principles followed by the smart contract in the overall design。 ### Non-functional test -Non-functional testing mainly includes performance testing, capacity testing, usability testing, etc.。Since smart contracts run on the underlying nodes of FISCO BCOS, capacity testing and usability testing 
are more relevant to the underlying nodes, so for users, the focus of non-functional testing is on performance testing.。
+Non-functional testing mainly includes performance testing, capacity testing, usability testing, and so on. Since smart contracts run on the underlying nodes of FISCO BCOS, capacity testing and usability testing are more relevant to the underlying nodes, so for users the focus of non-functional testing is performance testing.

Although we can use a series of general performance testing tools to test smart contracts, there will be some pain points in the actual pressure testing process, such as:

-- Testing for specific contract scenarios, there is a large and repetitive test code, time-consuming and laborious;
-- Performance indicators lack a unified measure and cannot be compared horizontally.;
-- Showing results is not intuitive enough。
+- For specific contract scenario testing, a large amount of repetitive test code is needed, which is time-consuming and laborious;
+- Performance indicators lack a uniform measurement and cannot be compared horizontally;
+- Presentation of results is not intuitive enough.

-In order to solve the above pain points, FISCO BCOS is adapted to the professional blockchain testing tool Caliper, allowing users to perform performance tests elegantly.。For more details and content, please refer to: [Practice of Performance Test Tool Caliper in FISCO BCOS Platform](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485260&idx=1&sn=118e20d331f2dc51e033e12402868cc5&scene=21#wechat_redirect)》。
+To address the above pain points, FISCO BCOS has been adapted to the professional blockchain testing tool Caliper, allowing users to perform performance tests elegantly. For more details, please refer to [Practice of Performance Test Tool Caliper in FISCO BCOS Platform](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485260&idx=1&sn=118e20d331f2dc51e033e12402868cc5&scene=21#wechat_redirect).

### Security test

-Smart 
contracts require rigorous security testing before they go live。Security testing methods include: disclosing smart contracts and issuing rewards, hiring a dedicated smart contract security agency to detect and evaluate contracts, and using smart contract-specific tools for auditing.。You can choose the corresponding security test level depending on the importance and logical complexity of your contract.。For individual developers or non-major business smart contracts, choose the free smart contract tool to detect.。The following uses VS Code as an example to demonstrate how to use the smart contract security plug-in for contract security detection.。Open VS Code and search for Beosin in its plugin library-VaaS: ETH, select install。Subsequently, open the smart contract file, right-click and select Beosin-VaaS:ETH option, select the current contract code version in the pop-up window。After the installation is complete, the following interface will pop up automatically:
+Smart contracts require rigorous security testing before they go live. Security testing methods include disclosing the smart contract and offering rewards, hiring a dedicated smart contract security agency to audit and evaluate the contract, and auditing with smart-contract-specific tools. You can choose the appropriate level of security testing depending on the importance and logical complexity of your contract. Individual developers or non-critical business smart contracts can use a free smart contract tool for detection. The following uses VS Code as an example to demonstrate how to use a smart contract security plug-in for contract security detection. Open VS Code, search for Beosin-VaaS: ETH in its plugin library, and select install. Next, open the smart contract file, right-click, select the Beosin-VaaS:ETH option, and select the current contract code version in the pop-up window. The following interface then pops up automatically:
![](../../../../images/articles/smart_contract_test_practice/IMG_5491.PNG) @@ -319,24 +319,24 @@ warningType: Info line: 5 ``` -By reporting the error message, we can find that the function neither reads nor uses the state on the blockchain, which meets the conditions for the use of the pure modifier. We recommend that you use the "pure" keyword to modify the。Security testing is an integral part of the contract testing process and needs to be highly valued.。In strict engineering practice, the safety test report must be issued by the relevant test leader before production on the contract.。In addition, developers can collect and summarize Solidity security programming rules and vulnerability risk tips, and dynamically update maintenance。Before delivering the test, the developer can organize code review and walk-through, based on the summary, centralized, item-by-item risk troubleshooting.。Security testing is an essential part of detecting and evaluating the security of smart contracts, but contract security cannot be placed solely on testing.。More importantly, at all stages of design, development and testing, users are required to maintain a sense of security at all times and to establish and develop secure coding habits.。After the contract is released and launched, you cannot relax your vigilance, always pay attention to the latest security warnings in the industry, and regularly and dynamically detect and scan all codes。 +By reporting the error message, we can find that the function neither reads nor uses the state on the blockchain, which meets the conditions for the use of the pure modifier. 
We recommend using the "pure" keyword to modify it. Security testing is an integral part of the contract testing process and needs to be taken seriously. In strict engineering practice, a security test report must be issued by the responsible test leader before the contract goes to production. In addition, developers can collect and summarize Solidity secure-programming rules and vulnerability risk tips, and keep them dynamically updated and maintained. Before delivering for testing, developers can organize code reviews and walk-throughs, and use the summary for centralized, item-by-item risk troubleshooting. Security testing is an essential means of detecting and evaluating the security of smart contracts, but contract security cannot rest solely on testing. More importantly, at all stages of design, development and testing, users must maintain security awareness at all times and establish secure coding habits. After the contract is released and launched, you cannot relax your vigilance: always pay attention to the latest security warnings in the industry, and regularly and dynamically scan all code.

### Regression test

-Regression testing usually includes automated test cases that perform continuous integration and manual testing before the contract goes live.。The above SDK and smart contract test cases can effectively cover the execution path and scope of the test cases.。Automated regression testing helps quickly detect and identify problems, saves a lot of duplication, and ensures that contracts do not deviate from the backbone function after a series of modifications and tests.。Similarly, developers can build and test with tools like Jenkins, Travis, and more。In addition to cases where individual automated tests cannot be performed, manual testing is more about confirming whether the previously modified code meets expectations and whether the smart contract runs as originally designed.。In addition, before the final release of the contract, 
manual testing also plays a role of audit and inspection, so it must be taken seriously.。
+Regression testing usually includes automated test cases run in continuous integration, plus manual testing before the contract goes live. The above SDK and smart contract test cases can effectively cover the execution paths and scope of the tests. Automated regression testing helps detect and identify problems quickly, saves a lot of repeated work, and ensures that the contract does not deviate from its backbone function after a series of modifications and tests. Similarly, developers can build and test with tools like Jenkins, Travis, and more. Apart from cases that automated tests cannot cover, manual testing is more about confirming whether the modified code meets expectations and whether the smart contract runs as originally designed. In addition, before the final release of the contract, manual testing also plays an audit and inspection role, so it must be taken seriously.

## Test points

Regardless of the type of system test, special attention should be paid to the following test points.

-- Pay attention to the testing of boundary values, such as number overflow, special values, cycle boundary values, etc.。
-- Pay attention to check whether the implementation of the smart contract meets expectations, and whether its operating logic and process are consistent with the design.。
-- In addition to the normal process, it is also necessary to simulate and test whether the smart contract operates normally and whether it can achieve the expected processing results in various abnormal environments, even extreme environments.。
-- Around the smart contract unchanged business logic, ignore the change value, the corresponding test.。
+- Pay attention to testing boundary values, such as number overflow, special values, loop boundary values, etc.
+- Pay attention to whether the implementation of the smart contract meets expectations, and 
whether its operating logic and process are consistent with the design.
+- In addition to the normal flow, it is also necessary to simulate and test whether the smart contract operates normally, and whether it achieves the expected processing results, in various abnormal and even extreme environments.
+- Test around the contract's unchanging business logic, ignoring changeable values.

### Boundary value

-Boundary value testing is a common method of software testing.。According to statistics, the vast majority of bugs occur at the boundary of the input or output range, not within it。Let's demonstrate a typical numerical overflow test case.。The following simple contract implements the addition of two uint8 values and returns the result.
+Boundary value testing is a common method of software testing. According to statistics, the vast majority of bugs occur at the boundary of the input or output range, not within it. Let's demonstrate a typical numerical overflow test case. The following simple contract implements the addition of two uint8 values and returns the result:

```
pragma solidity 0.4.25;
@@ -364,25 +364,25 @@
return value: (0)
---------------------------------------------------------------------------------------------
```

-You can see that the final result becomes 0 instead of the expected 256. Obviously, this is because the result overflows。Of course, when taking values using the boundary value test method, not only valid boundary values are taken, but also invalid boundary values are included.。Boundary value testing helps find more errors and defects。
+You can see that the final result becomes 0 instead of the expected 256. 
Obviously, this is because the result overflows. Of course, when choosing values with the boundary value test method, not only valid boundary values but also invalid boundary values should be included. Boundary value testing helps find more errors and defects.

### Whether in line with expectations

-After the construction company delivers the house, the developer will check the construction of the house according to the design criteria.。Similarly, in addition to checking whether the contract execution results are correct, the test also needs to check whether the contract interaction, operation, data flow direction, performance performance and other aspects of the implementation are in line with the design.。For example, in a financial industry scenario, we will use Solidity to develop specific on-chain business logic。Among other things, inevitably dealing with amounts requires testing whether the corresponding asset size meets the design criteria。
+After the construction company delivers a house, the developer checks the construction against the design criteria. Similarly, in addition to checking whether the contract execution results are correct, testing also needs to check whether the contract's interaction, operation, data flow, performance and other aspects of the implementation are in line with the design. For example, in a financial industry scenario, we use Solidity to develop specific on-chain business logic. Dealing with amounts is inevitable there, which requires testing whether the corresponding asset size meets the design criteria.

-During the requirements and design phase, we estimate the size of the assets running on the chain and make design requirements, such as requiring a supply chain finance blockchain business system to operate normally at a scale of hundreds of billions of assets.。Assuming that the field defining the amount in a smart contract is uint32, the maximum value that can be calculated 
is 232, which is less than 4.3 billion.。Although most of the time after the launch of the smart contract, the problem will not cause any impact;However, once the business grows rapidly, there will be serious failures when the asset size exceeds 4.3 billion。These problems are very hidden, so more comprehensive and careful testing is needed to detect and locate them early.。
+During the requirements and design phase, we estimate the size of the assets running on the chain and set design requirements, such as requiring a supply chain finance blockchain business system to operate normally at a scale of hundreds of billions of assets. Assuming the field defining the amount in a smart contract is uint32, the maximum value it can hold is 2^32, which is less than 4.3 billion. Most of the time after the smart contract goes live, this causes no impact; however, once the business grows rapidly, there will be serious failures when the asset size exceeds 4.3 billion. Such problems are very well hidden, so more comprehensive and careful testing is needed to detect and locate them early.

### Abnormal process

-In actual business scenarios, even after a series of tests to ensure that the smart contract runs in line with the requirements of most scenarios, it may be due to negligence or omission, resulting in problems in specific abnormal processes, which has a significant impact after the release of the smart contract.。Therefore, developers need to pay special attention to the search and coverage of exception processes in the scope and content of the test, and cover all exception processes as much as possible.。
+In actual business scenarios, even after a series of tests ensures that the smart contract meets the requirements of most scenarios, problems may still arise in specific abnormal flows due to negligence or omission, with significant impact after the release of the smart contract. Therefore, developers need 
to pay special attention, within the scope and content of testing, to finding and covering exception flows, and to cover all of them as much as possible.

### Change and unchanged

-Using Solidity testing is also significantly different from other languages in that many transactions and values in contracts cannot be reproduced.。Just as "one cannot step into the same river twice in one's life," as the private key, certificate, environment, etc. change, the transaction hash value, block hash, contract address, etc. will also be different, which also brings some objective difficulties to the test.。Solidity, on the other hand, has stringent requirements for consistency, such as consistency in the semantics of EVM execution instructions, "exclusion" of external random variables, and so on.。The key to mastering contract testing is to capture the unchanging aspects of it.。
+Testing with Solidity also differs significantly from other languages in that many transactions and values in contracts cannot be reproduced. Just as "one cannot step into the same river twice," as the private key, certificate, environment, etc. change, the transaction hash, block hash, contract address, etc. 
will also be different, which brings some practical difficulties to testing. Solidity, on the other hand, has stringent consistency requirements, such as consistent semantics for EVM execution instructions and the "exclusion" of external random variables. The key to mastering contract testing is to capture these unchanging aspects.

## Testing Skills

-As a dedicated programming language on the blockchain, Solidity has many limitations compared to other high-level languages.。For example, lack of support for sophisticated testing tools to debug at runtime and view status and data within EVM virtual machines。At the same time, Solidity lacks an independent data abstraction layer, which makes it impossible to view the detailed variable status in the contract directly by connecting to the database like traditional applications, and can only "save the country by adding query calls in the contract."。 Of course, we can also take some skills, as far as possible to avoid the above problems。
+As a dedicated programming language for the blockchain, Solidity has many limitations compared with other high-level languages. For example, it lacks sophisticated tooling to debug at runtime and inspect state and data inside the EVM. It also lacks an independent data abstraction layer, so you cannot inspect detailed variable state by connecting to a database the way traditional applications do; the only workaround is to add query functions to the contract. Of course, some techniques can help us avoid these problems as far as possible.

### How to show more internal variables in a contract?
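As a minimal sketch of the technique this section describes (the exact shape of the original HelloWorld contract is an assumption), an event can be declared and emitted wherever internal state changes, so the console can display values that are otherwise invisible:

```
pragma solidity ^0.4.25;

contract HelloWorld {
    string private name = "Hello, World!";

    // Hypothetical LogSet event: surfaces the caller and the new value
    // each time set() runs, so internal state changes can be observed
    // from the console or the transaction receipt
    event LogSet(address indexed from, string value);

    function set(string n) public {
        emit LogSet(msg.sender, n);
        name = n;
    }

    function get() public view returns (string) {
        return name;
    }
}
```

After deploying the contract and calling set from the console, the LogSet entry in the transaction receipt shows both msg.sender and the stored value.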
@@ -413,13 +413,13 @@ contract HelloWorld{ } ```

-Then, execute it in the console and you can see that the LogSet event you just defined is printed.。We also define the address where msg.sender is printed in the event, as shown in the figure below, and the corresponding address is also printed。
+Then, execute it in the console and you can see that the LogSet event just defined is printed. The event also logs msg.sender, and as the figure below shows, the corresponding address is printed as well.

![](../../../../images/articles/smart_contract_test_practice/IMG_5492.PNG)

-Event provides a simple mechanism for logging. When a situation cannot be resolved and the internal state of the contract needs to be displayed, Event provides a suitable method and mechanism.。
+Events provide a simple logging mechanism. When the internal state of a contract needs to be surfaced and no other approach works, events offer a suitable method and mechanism.

-### How to get the full data model on the chain.?
+### How to get the full data model on the chain?
#### Problem scenario:

@@ -427,20 +427,20 @@ It is said that Solidity is a Turing complete language, EVM is a simple design o

#### Solution:

-Solidity lacks an independent, externally accessible data layer to directly capture details of each transaction or each status。However, we can export all on-chain data through the WeBASE data export component。WeBankBlockchain-Data-Export can export basic data on the blockchain, such as current block height, total transaction volume, etc.。If all contracts running on FISCO BCOS are configured correctly, the data-Export can export the business data of contracts on the blockchain, including events, constructors, contract addresses, and information about executing functions.。Data Export Components Data-The purpose of Export is to lower the development threshold for obtaining blockchain data and improve R & D efficiency.。Developers almost no need to write any code, only a simple configuration, you can export data to the Mysql database。
+Solidity lacks an independent, externally accessible data layer from which to directly capture the details of each transaction or state. However, we can export all on-chain data through the WeBASE data export component. WeBankBlockchain-Data-Export can export basic blockchain data, such as the current block height and total transaction volume. If the contracts running on FISCO BCOS are configured correctly, Data-Export can also export the business data of on-chain contracts, including events, constructors, contract addresses, executed functions, and so on. The purpose of the Data-Export component is to lower the development threshold for obtaining blockchain data and improve R&D efficiency. Developers need to write almost no code; with a simple configuration, data can be exported to a MySQL database.

-Or take the above HelloWorld as an example, refer to the installation documentation: Data-Export Rapid Deployment。The installation process skips the table, and the
following table is automatically generated in the database after the export:
+Take the above HelloWorld as an example and refer to the installation document "Data-Export Rapid Deployment". The installation steps are skipped here; after the export, the following tables are automatically generated in the database:

![](../../../../images/articles/smart_contract_test_practice/IMG_5493.PNG)

The database tables function as follows:

-- account_info: Records all contract information deployed on the chain, including contract name, contract address, block height, and chain time.。
-- blcok_detail_info: Records the information of each block, including the block hash, the number of transactions, the block height, and the time of the chain.。
-- block _ task _ pool: the task table for data export, which records the task details and status information.。
-- block _ tx _ detail _ info: records the information of each transaction, including the contract name, function name, transaction sender address, transaction receiver address, transaction hash, etc.。
-- hello _ world _ log _ set _ event: Data export automatically generates a database table for each event. The naming rule is the contract name.+Event Name。The table automatically generated by the event defined in the above example includes all variables and transaction information defined in the event.。
-- hello _ world _ set _ method: Data export automatically generates database tables for different transactions.
The naming rule is the contract name.+Function Name。The set function table defined in the above example contains the function's input parameters, return codes, and transaction hashes.。
+- account_info: records all contracts deployed on the chain, including contract name, contract address, block height, and on-chain time.
+- block_detail_info: records the information of each block, including block hash, number of transactions, block height, and on-chain time.
+- block_task_pool: the task table for data export, recording task details and status.
+- block_tx_detail_info: records each transaction, including contract name, function name, sender address, receiver address, transaction hash, etc.
+- hello_world_log_set_event: Data-Export automatically generates a database table for each event, named contract name + event name. The table generated for the event defined in the example above includes all variables defined in the event plus the transaction information.
+- hello_world_set_method: Data-Export automatically generates a database table for each transaction (function), named contract name + function name. The table for the set function defined in the example above contains the function's input parameters, return code, and transaction hash.

hello_world_log_set_event displays the information logged by the LogSet event in all HelloWorld contracts.

@@ -450,11 +450,11 @@ hello _ world _ set _ method displays information about all functions called his

![](../../../../images/articles/smart_contract_test_practice/IMG_5495.PNG)

-With the above-mentioned database export status and information, the use of blockchain instantly becomes easy, testing is more handy, like a tiger.。The database will faithfully export and record all operations on the blockchain, so that all data on the chain is under control.!
+With the status and information exported to the database as described above, using the blockchain instantly becomes easier and testing becomes far more manageable. The database faithfully exports and records every operation on the blockchain, so that all on-chain data is under control!

## SUMMARY

-High-quality testing can improve the quality of smart contract writing, eliminate significant losses caused by contract writing loopholes, and slow down the contract development curve, helping to improve the efficiency of smart contract development.。"Over the mountains, we meet the sea.。Travel across the snowfield, coinciding with the flowering period.。"Only after a hard, rigorous, careful, and repeated test cycle can we welcome the dawn of smart contract release.。At this point, the smart contract series is also coming to an end.。This tutorial introduces the concept of smart contracts, introduces the basic language features, design patterns, programming strategies of Solidity, and then goes deep into the EVM core, Solidity testing.。In just a few small articles, it is difficult to exhaust all the details of Solidity programming.
We look forward to the community of developers to participate and share more knowledge and experience to build a better FISCO BCOS community together.。
+High-quality testing improves the quality of smart contract code and prevents the significant losses that contract vulnerabilities can cause, while easing the contract development curve and improving development efficiency. "Over the mountains, we meet the sea; across the snowfield, we meet the flowering season." Only after hard, rigorous, careful, and repeated test cycles can we welcome the dawn of a smart contract release. At this point, the smart contract series is coming to an end. This tutorial introduced the concept of smart contracts, covered Solidity's basic language features, design patterns, and programming strategies, and then went deep into the EVM core and Solidity testing. A few short articles can hardly exhaust every detail of Solidity programming. We look forward to community developers participating and sharing more knowledge and experience, to build a better FISCO BCOS community together.

------

diff --git a/3.x/en/docs/articles/3_features/35_contract/smart_contract_write_elegantly.md b/3.x/en/docs/articles/3_features/35_contract/smart_contract_write_elegantly.md
index 826ef9f65..8b93d7e6c 100644
--- a/3.x/en/docs/articles/3_features/35_contract/smart_contract_write_elegantly.md
+++ b/3.x/en/docs/articles/3_features/35_contract/smart_contract_write_elegantly.md
@@ -4,17 +4,17 @@ Author : ZHANG Long | FISCO BCOS Core Developer

## Write at the beginning

-As we all know, the emergence of smart contracts allows blockchain to handle not only simple transfer functions, but also complex business logic processing, the core of which lies in the account model.。
+As we all know, the emergence of smart contracts allows the blockchain to handle not only simple transfers but also complex business logic, the core of which lies in
the account model.

-Most of the current blockchain platforms integrate the Ethereum Virtual Machine and use Solidity as the development language for smart contracts.。The Solidity language not only supports basic / complex data type operations and logical operations, but also provides features related to high-level languages, such as inheritance and overloading.。
+Most current blockchain platforms integrate the Ethereum Virtual Machine and use Solidity as the smart contract development language. Solidity supports not only operations on basic and complex data types and logical operations, but also high-level language features such as inheritance and overloading.

-In addition, the Solidity language also has many common methods built in, such as a complete set of encryption algorithm interfaces, making data encryption and decryption very simple.;Provides event events to track the execution status of transactions and facilitate business logic processing, monitoring, and O & M。
+In addition, Solidity has many common facilities built in, such as a complete set of cryptographic algorithm interfaces that make data encryption and decryption simple, and events to track the execution status of transactions, which facilitates business processing, monitoring, and operations.

-However, when we write smart contract code, we still encounter various problems, including code bugs, scalability, maintainability, and business interoperability friendliness.。At the same time, the Solidity language is not perfect, needs to be executed on the EVM, the language itself and the execution environment will also bring us some holes.。
+However, when writing smart contract code we still encounter various problems, including code bugs, scalability, maintainability, and business-interoperability friendliness. At the same time, Solidity is not perfect and must be executed on the EVM; the language itself and the execution
environment will also bring us some pitfalls.

-Based on this, we combine the previous projects and experience to sort out, hoping to summarize the problems encountered before, and provide reference for the follow-up development.。
+Based on this, we have drawn on previous projects and experience to summarize the problems encountered, hoping to provide a reference for subsequent development.

-**⊙ Note**: Smart contract security is not discussed in this article, the smart contract code is written in version 0.4.。
+**⊙ Note**: Smart contract security is not discussed in this article; the smart contract code targets Solidity version 0.4.

## Solidity FAQ

@@ -22,7 +22,7 @@ Based on this, we combine the previous projects and experience to sort out, hopi

The stack depth of the EVM is 1024, but the maximum access depth of the EVM instruction set is 16, which imposes many restrictions on smart contract writing. A common error is: stack overflow.

-This error occurs during the smart contract compilation phase.。We know that the EVM stack is used to store temporary or local variables, such as function parameters or variables inside functions.。Optimization is generally from these two aspects.。
+This error occurs during the smart contract compilation phase. We know that the EVM stack stores temporary or local variables, such as function parameters or variables inside functions. Optimization therefore generally starts from these two aspects.

The following code snippet may have a stack overflow problem:

@@ -47,7 +47,7 @@ function addStudentScores(

Function parameters and local variables cannot exceed 16 in total; the general recommendation is no more than 10. Problems with too many parameters:

1. Easy to overflow the stack;
-2. Writing code is difficult and error-prone.;
+2. Writing code is difficult and error-prone;
3. Not conducive to business understanding and maintenance;
4.
Not easy to extend.

@@ -67,22 +67,22 @@ function addStudentScores(

### BINARY FIELD EXTRA LONG

-The smart contract is compiled by the JAVA compiler to generate the corresponding JAVA contract, in which there is an important constant field BINARY, which is the code of the smart contract, that is, the contract code.。The contract code is used to sign when the contract is deployed, and the BINARY corresponding to each contract change will be different.。
+The smart contract is compiled to generate a corresponding Java contract class, which contains an important constant field, BINARY: the compiled code of the smart contract, that is, the contract code. The contract code is signed when the contract is deployed, and BINARY differs after every contract change.

-When writing smart contracts, if a single smart contract code is long, the compiled BINARY field will be large。In the JAVA contract, the BINARY field is stored in the String type, and the maximum length of the String type is 65534. If the smart contract code is too much, the length of the BINARY field will exceed the maximum length of the String type, causing the String type to overflow and reporting an error.。
+When writing smart contracts, if a single contract's code is long, the compiled BINARY field becomes large. In the Java contract, BINARY is stored as a String, whose maximum length is 65534; if there is too much contract code, BINARY exceeds this limit, overflowing the String constant and causing an error.

The solution is also very simple:

-1. Reuse the code as much as possible, for example, some judgments appear multiple times in different methods and can be extracted, which is also convenient for subsequent maintenance.;
-2.
Contract split, the split of a contract into multiple contracts, the general occurrence of String out of bounds, basically can show that the contract design is unreasonable.。
+1. Reuse code as much as possible; for example, checks that appear in several methods can be extracted, which also eases subsequent maintenance;
+2. Split the contract into multiple contracts; in general, the occurrence of a String out-of-bounds error is itself a sign that the contract design is unreasonable.

### Use string type with caution

-The string type is a special dynamic byte array that cannot be directly converted to a fixed-length array, and its parsing and array conversion are also very complex.。
+The string type is a special dynamic byte array that cannot be directly converted to a fixed-length array, and parsing it or converting it to an array is complex.

-In addition, the string type wastes space, is very expensive (consumes a lot of gas), and cannot be passed between contracts (except for the new experimental ABI compiler), so it is recommended to use bytes instead, except for special scenarios, such as unknown length byte arrays or reserved fields.。
+In addition, the string type wastes space, is expensive (consumes a lot of gas), and cannot be passed between contracts (except with the new experimental ABI encoder), so bytes is recommended instead, except in special scenarios such as byte arrays of unknown length or reserved fields.

-⊙ **Remarks**The string type can be passed between contracts by adding a new experimental ABI compiler (code below) to the contract.。
+⊙ **Remarks**: The string type can be passed between contracts by enabling the new experimental ABI encoder in the contract (code below).

```
pragma experimental ABIEncoderV2;

@@ -92,14 +92,14 @@ pragma experimental ABIEncoderV2;

### Layered design

-Most examples of smart contracts on the Internet, such as the famous ERC20, are usually written in a
smart contract file, which is not a problem in itself, but inevitably in the face of complex business.
+Most smart contract examples on the Internet, such as the famous ERC20, are written in a single contract file. This is not a problem in itself, but in the face of complex business it inevitably means:

1. All the code lives in one file, which becomes very large, hard to read and understand, and error-prone to modify;
-2. It is not easy for multiple people to collaborate and maintain, especially when business changes or code vulnerabilities occur, the deployment contract needs to be re-upgraded, resulting in the invalidation of the previous contract and the loss of relevant business data or assets.。
+2. It is hard for multiple people to collaborate on and maintain; in particular, when the business changes or a code vulnerability appears, the contract must be redeployed and upgraded, invalidating the previous contract and losing the related business data or assets.

So, is there a way to upgrade a smart contract without affecting the original account (address)?
-First answer: No.!(Except for CRUD based on the underlying distributed storage, currently FISCO BCOS 2.0 supports distributed storage, which can be upgraded directly through the CRUD operation database.。)
+First answer: no! (Except for CRUD based on the underlying distributed storage: FISCO BCOS 2.0 currently supports distributed storage, where data can be upgraded directly through CRUD operations on the database.)

But "no" does not mean it cannot be upgraded; the biggest problem in a smart contract upgrade is the data, so the key is to keep the data intact.

@@ -118,7 +118,7 @@ contract Students { } ```

-In this way, the code is all in a smart contract, if the existing smart contract can no longer meet the business requirements, such as the type of uint32 field needs to be upgraded to uint64, or a new field is added to the contract, such as sex, then the smart contract is useless and needs to be redeployed.。However, due to the redeployment, the contract address has changed and the previous data cannot be accessed.。
+In this way, all the code sits in one smart contract. If the existing contract can no longer meet the business requirements, for example a uint32 field needs to be upgraded to uint64, or a new field such as sex is added, then the contract is useless and must be redeployed. However, after redeployment the contract address changes, and the previous data can no longer be accessed.

One approach is to layer contracts, separating business logic and data, as follows:

@@ -136,11 +136,11 @@ contract Student { } ```

-This writing method makes the logic and data separate. When you need to add a sex field, you can write two StudentController contracts for the original data. By version distinction, the new student adopts the new logic, which requires compatibility processing at the business level.
The biggest problem is that the interactive operation of the original data needs to be completed across contracts, which is very inconvenient, such as querying all student information.。
+This layering separates logic from data. When you need to add a sex field, you can write a second StudentController contract over the original data; distinguished by version, new students adopt the new logic, which requires compatibility handling at the business level. The biggest problem is that operations on the original data must be completed across contracts, which is very inconvenient, for example when querying all student information.

-We layered again, with an extra map layer dedicated to contract data management, even if there are problems with both the business logic layer and the data layer, it doesn't matter, just rewrite the business logic layer and the data layer, and perform special processing on the original data to be compatible.。However, this approach requires version control (version) in the data contract in advance, using different logic for different data.。
+We layer once more, adding a map layer dedicated to contract data management. Even if problems appear in both the business logic layer and the data layer, it does not matter: simply rewrite those layers and apply special processing to keep the original data compatible. However, this approach requires building version control (a version field) into the data contract in advance, so that different logic can be applied to different data.

-The biggest advantage of this approach is that all data is stored in StudentMap, changes to data contracts and logical contracts will not affect the data, and in subsequent upgrades, a controller contract can be used to achieve compatibility with old and new data, as shown below.
+The biggest advantage of this approach is that all data is stored in StudentMap; changes to the data and logic contracts do not affect the data, and in subsequent upgrades a controller contract can be used to stay compatible with both old and new data, as shown below:

``` contract StudentController { @@ -172,9 +172,9 @@ contract Student { } ```

### Unified Interface

-Smart contracts have many high-level language features, but they still have many limitations.。For accurate business processing, you need to use Event events for tracking. For different contracts and methods, you can write different Event events, as follows:
+Smart contracts have many high-level language features, but they still have many limitations. For accurate business processing, you need events for tracking; for different contracts and methods, you can write different events, as follows:

-PS: You can also use the require method to process, but the require method does not support dynamic variables, each require after processing needs to fill in a specific error content, in the SDK level coupling is too heavy, and is not easy to expand.。
+PS: You can also handle this with require, but require does not support dynamic variables: each require must be filled with a specific error message, the coupling at the SDK level is too heavy, and it is not easy to extend.
code is too intrusive and error-prone。 -In addition, SDK development based on smart contracts requires writing a lot of non-reusable code to parse the Event event for each transaction (method) due to the different Event events.。This way of writing, the understanding and maintenance of the code is very poor.。To solve this problem, we just need to write a base contract CommonLib as follows: +In addition, SDK development based on smart contracts requires writing a lot of non-reusable code to parse the Event event for each transaction (method) due to the different Event events。This way of writing, the understanding and maintenance of the code is very poor。To solve this problem, we just need to write a base contract CommonLib as follows: ``` contract CommonLib { @@ -229,26 +229,26 @@ contract StudentController is CommonLib { } ``` -When adding a modifyStudentName method or other contract, the original method is to define multiple Event events according to the possible situation of the method, and then write the parsing method for different events in the SDK, which is a lot of work.。Now you only need to define a pair of constants in CommonLib, and the SDK code can be completely reused with almost no new work。 +When adding a modifyStudentName method or other contract, the original method is to define multiple Event events according to the possible situation of the method, and then write the parsing method for different events in the SDK, which is a lot of work。Now you only need to define a pair of constants in CommonLib, and the SDK code can be completely reused with almost no new work。 -⊙ **Note**: In the above example, commonEvent contains three parameters, where txCode is the transaction type, which is the transaction method called, and rtnCode is the return code, which indicates what happens when the transaction method represented by txCode is executed.。There is also an Id field in commonEvent, which is used to associate the business field studentId. 
In a specific project, the associated business field can be defined and adjusted by itself.。 +⊙ **Note**: In the above example, commonEvent contains three parameters, where txCode is the transaction type, which is the transaction method called, and rtnCode is the return code, which indicates what happens when the transaction method represented by txCode is executed。There is also an Id field in commonEvent, which is used to associate the business field studentId. In a specific project, the associated business field can be defined and adjusted by itself。 ### Code Details -Code details can experience a coder's ability and professional ethics.。When the business is in a hurry, code details are often overlooked, and code details (style) vary from person to person.。For a multi-person collaborative project, a unified code style and code specification can greatly improve R & D efficiency, reduce R & D and maintenance costs, and reduce code error rates.。 +Code details can experience a coder's ability and professional ethics。When the business is in a hurry, code details are often overlooked, and code details (style) vary from person to person。For a multi-person collaborative project, a unified code style and code specification can greatly improve R & D efficiency, reduce R & D and maintenance costs, and reduce code error rates。 #### Naming Specification -There is no standard for naming smart contracts, but the team can follow an industry consensus specification.。After actual combat, recommend the following style (not mandatory), the following code block。 +There is no standard for naming smart contracts, but the team can follow an industry consensus specification。After actual combat, recommend the following style (not mandatory), the following code block。 -1. Contract naming: the use of hump naming, capital initials, and can express the corresponding business meaning.; -2. 
Method naming: the use of hump naming, the first letter is lowercase, and can express the corresponding business meaning.; +1. Contract naming: the use of hump naming, capital initials, and can express the corresponding business meaning; +2. Method naming: the use of hump naming, the first letter is lowercase, and can express the corresponding business meaning; 3. Event naming: Hump naming, initial lowercase, and can express the corresponding business meaning, ending with Event; -4. Contract variables: named after the hump, starting with _, with lowercase initials, and expressing the corresponding business meaning.; -5. Method entry: the use of hump naming, the first letter is lowercase, and can express the corresponding business meaning.; -6. Method parameters: It is recommended to write only the parameter type, without naming, except in special cases.; -7. Event parameters: the same method into the reference.; -8. Local variables: the same method into the reference.。 +4. Contract variables: named after the hump, starting with _, with lowercase initials, and expressing the corresponding business meaning; +5. Method entry: the use of hump naming, the first letter is lowercase, and can express the corresponding business meaning; +6. Method parameters: It is recommended to write only the parameter type, without naming, except in special cases; +7. Event parameters: the same method into the reference; +8. 
Local variables: the same method into the reference。 ``` contract Student { @@ -262,7 +262,7 @@ contract Student { #### conditional judgment -In smart contracts, conditions can be judged through logical control, such as if statements, or built-in methods provided by the solidity language, such as require.。 +In smart contracts, conditions can be judged through logical control, such as if statements, or built-in methods provided by the solidity language, such as require。 There are some differences between the two in execution, in general, there is no problem using require, but require does not support parameter transfer, if the business needs to give a clear exception in the case of exception, it is recommended to use the if statement combined with the event, as follows。 @@ -277,9 +277,9 @@ if(_studentMapping.studentExist(studentId)){ #### Constants and Notes -In smart contracts, constants, like other programming languages, need to be named in uppercase and underscore, and the naming needs to have business implications, and the need to use the constant keyword modification, it is recommended to place at the beginning of the contract.。 +In smart contracts, constants, like other programming languages, need to be named in uppercase and underscore, and the naming needs to have business implications, and the need to use the constant keyword modification, it is recommended to place at the beginning of the contract。 -Constants also need to be distinguished, and external interface constants are decorated with public and placed in the base contract.。Business-related constants are decorated with private and placed in specific business logic contracts.。As follows: +Constants also need to be distinguished, and external interface constants are decorated with public and placed in the base contract。Business-related constants are decorated with private and placed in specific business logic contracts。As follows: ``` contract CommonLib { @@ -319,13 +319,13 @@ contract 
StudentController {

#### Fallback scheme

-In the smart contract design process, no one can guarantee that their code will meet the business requirements, because business changes are absolute.。At the same time, no one can guarantee that the business and operators will not make mistakes, for example, the business does not check certain fields resulting in illegal data on the chain, or because the business operators hand errors, malicious operations, resulting in wrong data on the chain.。
+In the smart contract design process, no one can guarantee that the code will always meet business requirements, because business change is a constant。At the same time, no one can guarantee that the business and its operators will never make mistakes; for example, the business may fail to validate certain fields and put illegal data on the chain, or operator slips and malicious operations may put wrong data on the chain。

-Unlike other traditional systems, blockchain systems can modify data by manually modifying libraries or files, and blockchains must modify data through transactions.。
+Unlike traditional systems, where data can be corrected by manually modifying the database or files, a blockchain can only modify data through transactions。

-For business changes, you can add some reserved fields when writing smart contracts for possible subsequent business changes.。Generally defined as a generic data type is more appropriate, such as string, on the one hand, string type storage capacity is large, on the other hand, almost anything can be stored.。
+For business changes, you can add some reserved fields when writing smart contracts to accommodate possible subsequent changes。A generic data type such as string is usually most appropriate: on the one hand, the string type can hold a large amount of data; on the other hand, almost anything can be stored in it。

-We can store the extended data into the string field through data processing at the SDK level, and provide the corresponding data processing reverse operation to parse the data when using it, for example, in the Student contract, add the reserved field, as shown below.。At this stage, reserved has no effect and is empty in smart contracts.。
+We can store extended data into the string field through data processing at the SDK level, and provide the corresponding reverse operation to parse the data when it is used。For example, add the reserved field to the Student contract, as shown below。At this stage, reserved has no effect and remains empty in the smart contract。

```
contract Student {
@@ -343,7 +343,7 @@ contract Student {
}
```

-For data errors caused by manual errors or illegal operations, be sure to reserve the relevant interfaces so that in an emergency, you can not modify the contract, but update the SDK to fix the data on the chain (SDK can not be implemented first)。For example, for the owner field in the Student contract, add the set operation.。
+For data errors caused by manual mistakes or illegal operations, be sure to reserve the relevant interfaces so that in an emergency you can fix the on-chain data by updating the SDK instead of modifying the contract (the SDK part can be left unimplemented at first)。For example, for the owner field in the Student contract, add a set operation。

```
contract Student {
@@ -356,8 +356,8 @@ contract Student {
}
```

-Special attention should be paid to the fact that for reserved fields and reserved methods, their operation rights must be ensured to prevent the introduction of more problems。At the same time, reserved fields and reserved methods are an abnormal design, with advance consciousness, but must avoid over-design, which will lead to a waste of storage space of smart contracts, and improper use of reserved methods will bring hidden dangers to the security of the business.。
+Special attention should be paid to the access rights of reserved fields and reserved methods, to prevent them from introducing more problems。At the same time, reserved fields and methods are a defensive design that requires foresight; over-design must be avoided, because it wastes the contract's storage space, and improper use of reserved methods can endanger business security。

## Write at the end

-The development of blockchain applications involves many aspects, smart contracts are the core, this article gives some suggestions and optimization methods in the process of developing smart contracts, but it is not complete and perfect, and essentially can not eliminate the emergence of bugs, but through optimization methods, you can make the code more robust and easy to maintain, from this point of view, has the basic conscience requirements of the industry.。
\ No newline at end of file
+The development of blockchain applications involves many aspects, and smart contracts are at the core。This article offers some suggestions and optimization methods for developing smart contracts。It is neither complete nor perfect, and it cannot fundamentally eliminate bugs, but these methods can make the code more robust and easier to maintain, which meets the basic quality bar of the industry。
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/35_contract/solidity_advanced_features.md b/3.x/en/docs/articles/3_features/35_contract/solidity_advanced_features.md
index a15143171..aada198df 100644
--- a/3.x/en/docs/articles/3_features/35_contract/solidity_advanced_features.md
+++ b/3.x/en/docs/articles/3_features/35_contract/solidity_advanced_features.md
@@ -4,11 +4,11 @@ Author : MAO Jiayu | FISCO BCOS Core Developer

## **Foreword**

-FISCO BCOS uses Solidity language for smart contract development。Solidity is a Turing-complete programming language designed for blockchain platforms, supporting features of multiple high-level languages such as function
calls, modifiers, overloads, events, inheritance, and libraries.。The first two articles in this series introduced the concept of smart contracts and the basic features of Solidity.。This article will introduce some advanced features of Solidity to help readers get started quickly and write high-quality, reusable Solidity code.。
+FISCO BCOS uses the Solidity language for smart contract development。Solidity is a Turing-complete programming language designed for blockchain platforms, supporting features of many high-level languages such as function calls, modifiers, overloading, events, inheritance, and libraries。The first two articles in this series introduced the concept of smart contracts and the basic features of Solidity。This article introduces some advanced features of Solidity to help readers get started quickly and write high-quality, reusable Solidity code。

## **Rational control of types of functions and variables**

-Based on the classic object-oriented programming principles of the Least Knowledge Principle, an object should have minimal knowledge of other objects.。Good Solidity programming practices should also be consistent with this principle: each contract clearly and reasonably defines the visibility of functions, exposes minimal information to the outside, and manages the visibility of internal functions.。At the same time, correctly modifying the types of functions and variables can provide different levels of protection for data within the contract to prevent unintended operations in the program from causing data errors;It also improves code readability and quality, reducing misunderstandings and bugs.;It is more conducive to optimizing the cost of contract execution and improving the efficiency of the use of resources on the chain.。
+According to the Least Knowledge Principle, a classic object-oriented programming principle, an object should have minimal knowledge of other objects。Good Solidity programming practice should follow this principle: each contract clearly and reasonably defines the visibility of its functions, exposes minimal information to the outside, and manages the visibility of internal functions。At the same time, correctly qualifying the types of functions and variables provides different levels of protection for data within the contract, preventing unintended operations from corrupting data; it also improves code readability and quality, reducing misunderstandings and bugs; and it helps optimize contract execution costs and improve the efficiency of on-chain resource usage。

### Hold the door to function operations: Function visibility

@@ -16,7 +16,7 @@ Solidity has two function calls:

- Internal call: also known as "message call"。Common calls to contract internal functions, parent contract functions, and library functions。(For example, suppose there is an f function in contract A, then inside contract A, other functions call the f function as f()。)
-- External call: also known as "EVM call"。Generally cross-contract function calls.。Within the same contract, external calls can also be made.。(For example, assuming that there is an f-function in contract A, you can use A.f in contract B.()Call。Inside contract A, you can use this.f.()to call。)。
+- External call: also known as "EVM call"。Generally used for cross-contract function calls。Within the same contract, external calls can also be made。(For example, if contract A has a function f, contract B can call it as A.f()。Inside contract A, it can be called externally as this.f()。)

Functions can be modified by specifying external, public, internal, or private identifiers。

@@ -27,21 +27,21 @@ Functions can be modified by specifying external, public, internal, or private i

| internal | Only internal calls are supported。 |
| private | Used only in the current contract and cannot be inherited。 |

-Based on the above table, we can derive the visibility of the function public > external
> internal > private。Also, if the function does not use the above type identifier, the function type is public by default。
+Based on the above table, we can derive the function visibility order public > external > internal > private。Also, if a function does not use one of the above identifiers, its type defaults to public。

To sum up, we can summarize the different usage scenarios of the above identifiers:

-- public, public function, system default。Usually used to embellish**A function that can be exposed to the outside world, and the function may be called internally at the same time.。**
+- public: public function, the system default。Usually used for **functions that are exposed externally and may also be called internally**。

-- external, external function, recommended**Exposed to the outside only**The function uses the。When a parameter of a function is very large, if you explicitly mark the function as external, you can force the function storage location to be set to calldata, which saves the storage or computing resources required for function execution.。
+- external: external function。Recommended for functions that are **only exposed externally**。When a function parameter is very large, explicitly marking the function external forces the parameter storage location to calldata, which saves the storage and computing resources required for execution。

-- internal, internal functions, recommended for all contracts**Not exposed outside the contract**function to avoid the risk of being attacked due to permission exposure.。
+- internal: internal function。Recommended for all functions that are **not exposed outside the contract**, to avoid the risk of attack due to exposed permissions。

-- private, private functions, in very few strictly protected contract functions**Not open to outside of contract and not inheritable**used in the scene.。
+- private: private function。Used in the rare scenarios where a strictly protected function must be **neither accessible outside the contract nor inheritable**。

-However, it should be noted that no matter what identifier is used, even private, the entire function execution process and data are visible to all nodes, and other nodes can verify and replay arbitrary historical functions.。In fact, all the data of the entire smart contract is transparent to the participating nodes of the blockchain.。
+However, note that no matter which identifier is used, even private, the entire function execution process and its data are visible to all nodes, and other nodes can verify and replay any historical function call。In fact, all data of a smart contract is transparent to the participating nodes of the blockchain。

-Users who are new to the blockchain often misunderstand that the privacy of the data on the blockchain can be controlled and protected through permission control operations.。This is a wrong view。In fact, under the premise that the blockchain business data is not specially encrypted, all the data in the same ledger of the blockchain is agreed to fall on all nodes, and the data on the chain is globally public and the same, and smart contracts can only control and protect the execution rights of contract data.。How to correctly select function modifiers is a "required course" in contract programming practice, only to master the true meaning of this section can freely control the contract function access rights, improve contract security.。
+Users who are new to blockchain often mistakenly believe that permission controls can protect the privacy of on-chain data。This is a wrong view。Unless the business data is specially encrypted, all data in the same ledger reaches every node through consensus, and on-chain data is globally public and identical; smart contracts can only control and protect the execution rights over contract data。Correctly choosing function modifiers is a "required course" in contract programming practice; only by mastering it can you properly control access to contract functions and improve contract security。

### Exposing the least necessary information to the outside world: Visibility of variables

@@ -63,11 +63,11 @@ contract Caller {
}
```

-This mechanism is a bit like the @ Getter annotation provided by the lombok library in the Java language, which generates a get function for a POJO class variable by default, greatly simplifying the writing of some contract code.。Similarly, the visibility of variables needs to be reasonably modified, and variables that should not be exposed should be decisively modified with private to make contract code more in line with the "least known" design principle.。
+This mechanism is a bit like the @Getter annotation provided by the Lombok library in Java, which generates a getter for a POJO field by default, greatly simplifying some contract code。Similarly, variable visibility should be set deliberately: variables that should not be exposed should be decisively declared private, making the contract code better conform to the principle of least knowledge。

### Precise classification of functions: types of functions

-Functions can be declared as pure and view, both of which can be seen in the figure below.。
+Functions can be declared as pure or view; the role of each is shown in the table below。

| Function Type| Role|
| -------- | ---------------------- |

@@ -78,10 +78,10 @@ So, what is reading or modifying state? In simple terms, the two states are re

In FISCO BCOS, the read status might be:

-1. Read the state variable.。
2. Access any member in block, tx, msg (except msg.sig and msg.data)。
-3. Call any function that is not marked as pure.。
-4. Use inline assembly that contains some opcodes.。
+3. Call any function that is not marked as pure。
+4. Use inline assembly that contains certain opcodes。

And the modification status might be:

@@ -89,19 +89,19 @@ And the modification status might be:

2. Generate events。
3. Create other contracts。
4. Use selfdestruct。
-5. Call any function that is not marked as view or pure.。
-6. Use the underlying call.。
-7. Use an inline assembly that contains a specific opcode.。
+5. Call any function that is not marked as view or pure。
+6. Use low-level calls。
+7. Use inline assembly that contains certain opcodes。

Note that in some versions of the compiler, there are no mandatory syntax checks for these two keywords。It is recommended to use pure and view to declare functions whenever applicable; for example, declaring library functions that neither read nor modify any state as pure not only improves code readability but also makes the code cleaner, so why not?

### Value determined at compile time: state constant

-The so-called state constant refers to the state variable declared as constant.。Once a state variable is declared constant, the value of the variable can only be determined at compile time and cannot be modified。The compiler will generally calculate the actual value of this variable in the compiled state and will not reserve storage space for the variable.。Therefore, constant only supports decorated value types and strings.。State constants are generally used to define well-defined business constant values。
+A state constant is a state variable declared as constant。Once a state variable is declared constant, its value is fixed at compile time and cannot be modified。The compiler generally computes the actual value of the variable at compile time and does not reserve storage space for it。Therefore, constant can only modify value types and strings。State constants are
generally used to define well-defined business constant values。

## Aspect-Oriented Programming: Function Modifiers

-Solidity provides a powerful syntax for changing the behavior of functions: function modifiers。Once a function is decorated, the code defined within the decorator can be executed as a decoration of the function, similar to the concept of decorators in other high-level languages.。This is very abstract, let's look at a concrete example:
+Solidity provides a powerful syntax for changing the behavior of functions: function modifiers。Once a function is decorated with a modifier, the code defined in the modifier runs around the function, similar to decorators in other high-level languages。This is rather abstract, so let's look at a concrete example:

```
pragma solidity ^0.4.11;
@@ -128,9 +128,9 @@ As shown above, after defining the onlyOwner decorator, within the decorator, th

So, the actual execution order of the code becomes:

1. Execute the statement of the onlyOwner modifier, first executing the require statement。(execute line 9)
-2. Execute the statement of the changeOwner function.。(Execute line 15)
+2. Execute the statement of the changeOwner function。(Execute line 15)

-Because the changeOwner function is modified by the onlyOwner function, this function can only be called successfully if msg.sender is the owner, otherwise an error will be reported and rolled back.。At the same time, the decorator can also pass in parameters, for example, the above decorator can also be written as:
+Because the changeOwner function is decorated by the onlyOwner modifier, it can only be called successfully when msg.sender is the owner; otherwise an error is reported and the transaction is rolled back。A modifier can also take parameters; for example, the modifier above can also be written as:

```
modifier onlyOwner(address sender) {
@@ -143,7 +143,7 @@ function changeOwner(address _owner) public onlyOwner(msg.sender) {
}
```

-The same function can have multiple modifiers, with spaces in between, and the modifiers check for execution in turn.。In addition, decorators can be inherited and overridden。Because of the power it provides, decorators are also often used for permission control, input checking, logging, etc.。For example, we can define a modifier for the execution of a trace function:
+The same function can have multiple modifiers, separated by spaces; the modifiers are checked and executed in order。In addition, modifiers can be inherited and overridden。Because of the power they provide, modifiers are often used for permission control, input checking, logging, and so on。For example, we can define a modifier to trace function execution:

```
event LogStartMethod();
@@ -156,13 +156,13 @@ modifier logMethod {
}
```

-In this way, any function decorated with the logMethod decorator can log its
function before and after execution to achieve the log wrap effect。If you are used to AOP using the Spring framework, you can also try to implement a simple AOP function with modifier。

-The most common way to open a modifier is through a validator that provides a function。In practice, some of the check statements of the contract code are often abstracted and defined as a modifier, such as the onlyOwner in the above example is a classic permission checker.。In this way, even the logic of the check can be quickly reused, and users no longer have to worry about the smart contract being full of parameter checks or other validation code.。
+The most common use of a modifier is as a function validator。In practice, the check statements in contract code are often abstracted and defined as modifiers; the onlyOwner in the example above is a classic permission check。In this way, checking logic can be quickly reused, and users no longer have to worry about the smart contract being cluttered with parameter checks or other validation code。

## Logs that can be debugged: Events in the contract

-After introducing functions and variables, let's talk about one of Solidity's more unique advanced features - the event mechanism.。
+After introducing functions and variables, let's talk about one of Solidity's more distinctive advanced features: the event mechanism。

Events allow us to easily use EVM's logging infrastructure, while Solidity's events have the following effects:

@@ -172,7 +172,7 @@ Events allow us to easily use EVM's logging infrastructure, while Solidity's eve

Using events is very simple and takes only two steps。

-- The first step is to define an event using the keyword "event."。It is recommended that the naming of the event start with a specific prefix or end with a specific suffix, which is easier to distinguish from the function, in this article we will unify the "Log" prefix to name the event。Below, we use "event" to define an event that is tracked by a function call.
+- The first step is to define an event using the keyword "event"。It is recommended that event names begin with a specific prefix or end with a specific suffix, making them easier to distinguish from functions; in this article we uniformly use the "Log" prefix to name events。Below, we use "event" to define an event that traces a function call:

```
event LogCallTrace(address indexed from, address indexed to, bool result);
@@ -181,7 +181,7 @@ event LogCallTrace(address indexed from, address indexed to, bool result);

Events can be inherited in a contract。When they are called, the parameters are stored in the transaction's log。These logs are saved to the blockchain, associated with the address。In the above example, the parameters can be searched thanks to the indexed tag; otherwise, these parameters are stored in the log data and cannot be searched。

-- The second step is to trigger the defined event within the corresponding function。When calling an event, add the "emit" keyword before the event name:
+- The second step is to trigger the defined event within the corresponding function。When calling an event, add the "emit" keyword before the event name:

```
function f() public {
@@ -191,19 +191,19 @@ In this way, when the function body is executed, it will trigger the execution of LogCallTrace。

-Finally, in the Java SDK of FISCO BCOS, the contract event push function provides an asynchronous push mechanism for contract events. The client sends a registration request to the node, which carries the contract event parameters that the client is concerned about.。For more details, please refer to the contract event push function document.。In the SDK, you can search by a specific value based on the indexed property of the event。[Contract Event Push Function Document](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/java_sdk.html#id14):
+Finally, in the Java SDK of FISCO BCOS, the contract event push feature provides an asynchronous push mechanism for contract events: the client sends a registration request to the node carrying the contract event parameters it is interested in。For more details, please refer to the contract event push documentation。In the SDK, you can search by a specific value based on the indexed property of the event。[Contract Event Push Function Document](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/java_sdk.html#id14):

-However, logs and events cannot be accessed directly, not even in contracts created。But the good news is that the definition and declaration of the log is very useful for tracing and exporting "after the fact."。For example, we can define and bury enough events in the writing of contracts, and through WeBASE's
data export subsystem we can export all logs to databases such as MySQL。This is particularly applicable to scenarios such as generating reconciliation files, generating reports, and OLTP queries for complex businesses。In addition, WeBASE provides a dedicated code generation subsystem to help analyze specific business contracts and automatically generate the appropriate code。

-- [Data Export Subsystem for WeBASE](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Collect-Bee/index.html)
+- [WeBASE's Data Export Subsystem](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Collect-Bee/index.html)
- [Code Generation Subsystem](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Codegen-Monkey/index.html)

-In Solidity, events are a very useful mechanism. If the biggest difficulty in developing smart contracts is debug, then making good use of the event mechanism allows you to quickly subdue Solidity development.。
+In Solidity, events are a very useful mechanism; if the biggest difficulty in developing smart contracts is debugging, then making good use of the event mechanism will let you quickly master Solidity development。

## Object-Oriented Overloading

-Overloading refers to a function with the same name whose contract has multiple different parameters.。For the caller, you can use the same function name to call multiple functions with the same function but different parameters.。In some scenarios, this operation can make the code clearer and easier to understand, and I believe readers with some programming experience will have a deep understanding of this.。Here's a typical overload syntax:
+Overloading means that a contract has multiple functions with the same name but different parameters。For the caller, the same function name can be used to call functions that do the same job but take different parameters。In some scenarios, this can make the code clearer and easier to understand, and I believe readers with some programming experience will have a deep understanding of this。Here's a typical overload syntax:

```
pragma solidity ^0.4.25;
@@ -219,11 +219,11 @@ contract Test {
}
```

-Note that there is only one constructor per contract, which means that contract constructors are not overloaded.。We can imagine a world without overloading, programmers must be racking their brains and trying to name functions, and everyone may have to lose a few more hairs!。
+Note that each contract can have only one constructor, which means constructors cannot be overloaded。Imagine a world without overloading: programmers would have to rack their brains to name functions, and everyone might lose a few more hairs!

## Object-Oriented Inheritance

-Solidity uses "is" as the inheritance key。Thus, the following code indicates that contract B inherits contract A.
+Solidity uses "is" as the inheritance keyword。Thus, the following code indicates that contract B inherits contract A:

```
pragma solidity ^0.4.25;
@@ -235,11 +235,11 @@ contract B is A {
}
```

-Inherited contract B has access to all non-private functions and state variables of inherited contract A.。In Solidity, the underlying implementation principle of inheritance is that when a contract inherits from multiple contracts, only one contract is created on the blockchain, and the code of all base contracts is copied into the created contract.。Compared to C++Or the inheritance mechanism of languages such as Java, Solidity's inheritance mechanism is somewhat similar to Python, supporting multiple inheritance mechanisms.。Therefore, one contract can be used in Solidity to inherit multiple contracts。In some high-level languages, such as Java, only single inheritance is supported for security and reliability reasons, and multiple inheritance is implemented by using the interface mechanism.。For most scenarios, a single inheritance mechanism is sufficient.。Multiple inheritance will bring a lot of complex technical problems, such as the so-called
"diamond inheritance" and so on, it is recommended to avoid complex multiple inheritance as much as possible in practice.。Inheritance simplifies the understanding and description of abstract contract models, clearly reflects the hierarchical relationships between related contracts, and provides software reuse capabilities.。This avoids code and data redundancy and increases program reusability。 +Inherited contract B has access to all non-private functions and state variables of inherited contract A。In Solidity, the underlying implementation principle of inheritance is that when a contract inherits from multiple contracts, only one contract is created on the blockchain, and the code of all base contracts is copied into the created contract。Compared to C++Or the inheritance mechanism of languages such as Java, Solidity's inheritance mechanism is somewhat similar to Python, supporting multiple inheritance mechanisms。Therefore, one contract can be used in Solidity to inherit multiple contracts。In some high-level languages, such as Java, only single inheritance is supported for security and reliability reasons, and multiple inheritance is implemented by using the interface mechanism。For most scenarios, a single inheritance mechanism is sufficient。Multiple inheritance will bring a lot of complex technical problems, such as the so-called "diamond inheritance" and so on, it is recommended to avoid complex multiple inheritance as much as possible in practice。Inheritance simplifies the understanding and description of abstract contract models, clearly reflects the hierarchical relationships between related contracts, and provides software reuse capabilities。This avoids code and data redundancy and increases program reusability。 ## **Object-Oriented Abstract Classes and Interfaces** -According to the dependency inversion principle, smart contracts should be as interface-oriented as possible, independent of implementation details.。Solidity supports mechanisms for abstract 
contracts and interfaces。If a contract has unimplemented methods, then it is an abstract contract.。For example: +According to the dependency inversion principle, smart contracts should be as interface-oriented as possible and independent of implementation details. Solidity supports mechanisms for abstract contracts and interfaces. If a contract has unimplemented methods, then it is an abstract contract. For example: ``` pragma solidity ^0.4.25; @@ -261,25 +261,25 @@ interface Vehicle { } ``` -Interfaces are similar to abstract contracts, but cannot implement any functions, with further limitations. +Interfaces are similar to abstract contracts, but they cannot implement any functions and have further limitations: 1. Cannot inherit other contracts or interfaces。 -2. The constructor cannot be defined.。 +2. A constructor cannot be defined. 3. Unable to define variable。 4. Unable to define structure -5. Enumeration cannot be defined.。 +5. An enumeration cannot be defined. -Appropriate use of interfaces or abstract contracts helps enhance scalability of contract designs。However, due to the limitations of computing and storage resources on the blockchain EVM, it is important not to overdesign, which is also the sinkhole that old drivers who move from the high-level language technology stack to Solidity development often fall into.。 +Appropriate use of interfaces or abstract contracts helps enhance the scalability of contract designs. However, given the limited computing and storage resources of the blockchain EVM, it is important not to overdesign; this is a pitfall that veteran developers moving from high-level language stacks to Solidity often fall into. ## **Avoid remaking wheels: library(Library)** -In software development, many classic principles can improve the quality of software, the most classic of which is to reuse tried and tested, repeatedly polished, rigorously tested high-quality code as much as possible.。In addition, reusing mature library 
code can improve code readability, maintainability, and even scalability。 +In software development, many classic principles can improve software quality; the most classic is to reuse tried-and-tested, repeatedly polished, rigorously tested high-quality code as much as possible. In addition, reusing mature library code improves code readability, maintainability, and even scalability. -Like all major languages, Solidity provides a library mechanism.。Solidity's library has the following basic features: +Like all major languages, Solidity provides a library mechanism. Solidity's library has the following basic features: -1. Users can use the keyword library to create contracts as they do with contracts.。 -Libraries cannot be inherited or inherited.。 -3. The internal function of the library is visible to the caller.。 +1. Users can create a library with the keyword library, just as they create contracts. +2. Libraries cannot inherit or be inherited. +3. The internal functions of a library are visible to the caller. 4. The library is stateless and state variables cannot be defined, but state variables explicitly provided by the calling contract can be accessed and modified。 Next, let's look at a simple example, the following is a LibSafeMath code base in the FISCO BCOS community。We've streamlined this, retaining only the functionality of addition: @@ -299,7 +299,7 @@ library LibSafeMath { } ``` -We just import the library file in the contract and use L.f()way to call the function, (e.g. LibSafeMath.add(a,b))。Next, we write a test contract that calls this library, which reads as follows. +We just import the library file in the contract and call its functions in the L.f() style (e.g. LibSafeMath.add(a,b)). Next, we write a test contract that calls this library, which reads as follows: ``` pragma solidity ^0.4.25; @@ -346,7 +346,7 @@ return value: (2020) [group:1]> ``` -With the above example, we can clearly understand how the library should be used in Solidity.。Like Python, in some scenarios, the directive "using A for B";"Can be used to attach library functions (from library A) to any type (B)。These functions will receive the object that called them as the first argument (like Python's self variable)。This feature makes the use of the library easier and more intuitive。 +With the above example, we can clearly understand how a library should be used in Solidity. As in Python, in some scenarios the directive "using A for B;" can be used to attach library functions (from library A) to any type (B). These functions receive the object that called them as their first argument (like Python's self variable). This feature makes using the library easier and more intuitive. For example, we make the following simple changes to the code: @@ -367,7 +367,7 @@ contract TestAdd { } ``` -Verify that the results are still correct.。 +Verify that the results are still correct. ``` ============================================================================================= @@ -399,9 +399,9 @@ return value: (2020) [group:1]> ``` -Better use of Solidity library helps developers reuse code better。In addition to the large number of open source, high-quality code libraries provided by the Solidity community, the FISCO BCOS community also plans to launch a new Solidity code library, open to community users, so stay tuned。Of course, you can also do it yourself, write reusable code library components, and share them with the community.。 +Making good use of Solidity libraries helps developers reuse code better. In addition to the large number of open-source, high-quality code libraries provided by the Solidity community, the FISCO BCOS community also plans to launch a new Solidity code library open to community users, so stay tuned. Of course, you can also write reusable library components yourself and share them with the community. ## SUMMARY -This article introduces several high-level syntax features of Solidity contract writing, aiming to help readers quickly immerse themselves in the Solidity programming world.。The trick to writing high-quality, reusable Solidity code is to look at the community's best code, practice coding, summarize and evolve.。Looking forward to more friends in the community to share Solidity's valuable experience and wonderful stories, have fun:) +This article introduces several advanced syntax features of Solidity contract writing, aiming to help readers quickly immerse themselves in the Solidity programming world. The trick to writing high-quality, reusable Solidity code is to study the community's best code, practice coding, and keep summarizing and evolving. We look forward to more friends in the community sharing their valuable Solidity experience and wonderful stories. Have fun :) diff --git a/3.x/en/docs/articles/3_features/35_contract/solidity_basic_features.md b/3.x/en/docs/articles/3_features/35_contract/solidity_basic_features.md index 989dd2f22..3fdd326c6 100644 --- a/3.x/en/docs/articles/3_features/35_contract/solidity_basic_features.md +++ b/3.x/en/docs/articles/3_features/35_contract/solidity_basic_features.md @@ -2,11 +2,11 @@ Author: Chu Yuzhi | FISCO BCOS Core Developer -As mentioned in the previous article, most of the current alliance chain platforms, including FISCO BCOS, use Solidity as a smart contract development language, so it is necessary to be familiar with and get started with Solidity.。As a Turing-complete programming language designed for blockchain platforms, Solidity supports a variety of features such as function calls, modifiers, overloads, events, inheritance, etc., and has a wide range of influence and active community support in the blockchain community.。But for those new 
to blockchain, Solidity is an unfamiliar language。The smart contract writing phase will start from the basic features, advanced features, design patterns and programming strategies of Solidity, taking readers to understand Solidity and master its application to better develop smart contracts.。This article will focus on the basic features of Solidity, take you to develop a basic smart contract.。 +As mentioned in the previous article, most current consortium chain platforms, including FISCO BCOS, use Solidity as the smart contract development language, so it is necessary to become familiar with and get started with Solidity. As a Turing-complete programming language designed for blockchain platforms, Solidity supports a variety of features such as function calls, modifiers, overloading, events, and inheritance, and enjoys wide influence and active community support in the blockchain world. But for those new to blockchain, Solidity is an unfamiliar language. The smart contract writing series starts from Solidity's basic features, advanced features, design patterns, and programming strategies, helping readers understand Solidity and master its application to develop better smart contracts. This article focuses on the basic features of Solidity and takes you through developing a basic smart contract. ## Smart contract code structure -Any programming language has a canonical code structure that expresses how code is organized and written in a code file, as does Solidity.。In this section, we'll look at the code structure of a smart contract through a simple contract example.。 +Any programming language has a canonical code structure that expresses how code is organized and written in a code file, and so does Solidity. In this section, we'll look at the code structure of a smart contract through a simple contract example. ``` pragma solidity ^0.4.25; @@ -45,19 +45,19 @@ contract Sample{ The above procedure includes the following functions: -- Deploy contract through 
constructor -- Setting contract status via setValue function -- Query contract status through getValue function +- Deploy the contract via the constructor +- Set the contract state via the setValue function +- Query the contract state via the getValue function -The entire contract is divided into the following components. +The entire contract is divided into the following components: - **State variable** - _ admin, _ state, these variables will be permanently saved, can also be modified by the function - **Constructor** - Used to deploy and initialize contracts -- **Event** - SetState, which functions like a log, records the occurrence of an event -- **Modifier** - onlyAdmin, used to add a layer to the function"coat" -- **Function** - setState, getState, for reading and writing state variables +- **Event** - SetState, which functions like a log and records the occurrence of an event +- **Modifier** - onlyAdmin, used to add a layer of "coat" around a function +- **Function** - setState, getState, for reading and writing state variables -The above components will be described one by one below.。 +The above components will be described one by one below. ### State variable @@ -67,11 +67,11 @@ The state variable is the bone marrow of the contract, which records the busines uint private _state; ``` -The state variable is declared as: [type] [access modifier-Optional] [Field Name] +State variables are declared as: [type] [access modifier - optional] [field name] ### Constructor -The constructor is used to initialize the contract, which allows the user to pass in some basic data and write it to the state variable.。In the above example, the _ admin field is set as a prerequisite for the other functions shown later.。 +The constructor is used to initialize the contract; it allows the user to pass in some basic data and write it to state variables. In the above example, setting the _ admin field is a prerequisite for the other functions shown later. ``` constructor() public{ @@ -83,7 +83,7 @@ Unlike java, 
constructors do not support overloading and only one constructor ca ### Function -function is used to read and write state variables。Modifications to variables will be included in the transaction and will not take effect until confirmed by the blockchain network。After taking effect, the changes will be permanently saved in the blockchain ledger.。The function signature defines the function name, input and output parameters, access modifiers, and custom modifiers.。 +Functions are used to read and write state variables. Modifications to variables are included in a transaction and do not take effect until confirmed by the blockchain network. After taking effect, the changes are permanently saved in the blockchain ledger. The function signature defines the function name, input and output parameters, access modifiers, and custom modifiers. ``` function setState(uint value) public onlyAdmin; @@ -97,7 +97,7 @@ function functionSample() public view returns(uint, uint){ } ``` -In this contract, there is also a function with the view modifier.。This view indicates that the function does not modify any state variables。Similar to view, there is the modifier pure, which indicates that the function is a pure function, even the state variables are not read, the operation of the function depends only on the parameters.。 +In this contract, there is also a function with the view modifier. view indicates that the function does not modify any state variables. Similar to view is the modifier pure, which indicates that the function is a pure function: it does not even read state variables, and its result depends only on its parameters. ``` function add(uint a, uint b) public pure returns(uint){ @@ -105,7 +105,7 @@ function add(uint a, uint b) public pure returns(uint){ } ``` -If you try to modify the state variable in the view function or access the state variable in the pure function, the compiler will report an error.。 +If you try to modify a state variable in a view function or access a state variable in a pure function, the compiler will report an error. ### Event @@ -125,9 +125,9 @@ emit SetState(value); Here are a few points to note: -- The name of the event can be specified arbitrarily and does not have to be linked to the function name, but it is recommended to hook both in order to clearly express what happened. +- The name of the event can be specified arbitrarily and does not have to be linked to the function name, but it is recommended to connect the two in order to clearly express what happened. -- When constructing an event, you can also not write emit, but because the event and the function are highly related in both name and parameter, it is easy to write the event as a function call by mistake, so it is not recommended.。 +- When constructing an event, you may also omit emit, but because an event and a function are highly similar in both name and parameters, it is easy to mistakenly write an event as a function call, so omitting emit is not recommended. ``` function setState(uint value) public onlyAdmin{ @@ -142,7 +142,7 @@ function setState(uint value) public onlyAdmin{ } ``` ### Modifier -Modifiers are a very important part of a contract.。It hangs on the function declaration and provides some additional functionality for the function, such as checking, cleaning, etc.。In this case, the modifier onlyAdmin requires that before the function is called, you need to check whether the caller of the function is the administrator set at the time of the function deployment.(That is, the deployer of the contract)。 +Modifiers are a very important part of a contract. A modifier hangs on the function declaration and provides some additional functionality for the function, such as checks and cleanup. In this case, the modifier onlyAdmin requires that, before the function is called, the caller is checked to be the administrator set at deployment time (that is, the deployer of the contract). ``` //Modifer @@ -158,15 +158,15 @@ function setState(uint value) public onlyAdmin{ } ``` -It is worth noting that the underscore "_" defined in the modifier indicates the call of the function and refers to the function modified by the developer with the modifier.。In this case, the expression is the setState function call。 +It is worth noting that the underscore "_" in the modifier body stands for the call to the modified function, that is, the function the developer decorated with the modifier. In this case, it represents the setState function call. ## Operation of Smart Contracts -Knowing the structure of the above smart contract example, you can run it directly, and there are many ways to run the contract, and you can take any one of them. +Knowing the structure of the above smart contract example, you can run it directly. There are many ways to run the contract, and you can take any one of them: -- Method 1: You can use [FISCO BCOS Console](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/installation.html#id7)The way to deploy the contract +- Method 1: Deploy the contract using the [FISCO BCOS Console](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/installation.html#id7) -- Method 2: Use the online ide WEBASE provided by the FISCO BCOS open source project WeBASE-front run +- Method 2: Run with WeBASE-Front, the online IDE provided by the FISCO BCOS open source project WeBASE - Method 3: Deploy and run the contract through the online ide remix, [remix address](http://remix.ethereum.org/) @@ -181,7 +181,7 @@ First, after typing the code in the file ide of remix, compile it through the co ### Deploy -After the compilation is successful, you can deploy the contract instance.。 +After the compilation is successful, you can deploy the contract instance. ![](../../../../images/articles/solidity_basic_features/IMG_5446.PNG) @@ -191,7 +191,7 @@ After the contract is deployed, we call setState(4)。Upon successful 
execution, ![](../../../../images/articles/solidity_basic_features/IMG_5447.PNG) -Here, the user can see the transaction execution status(status), Transaction Executor(from), transaction input and output(decoded input, decoded output)Transaction overhead(execution cost)and the transaction log(logs)。In logs, we see that the SetState event is thrown, and the parameter inside also records the value 4 passed in by the event.。If we change the account to execute, the call will fail because the onlyAdmin modifier prevents the user from calling the。 +Here, the user can see the transaction execution status (status), the transaction sender (from), the transaction input and output (decoded input, decoded output), the transaction overhead (execution cost), and the transaction logs (logs). In logs, we see that the SetState event is thrown, and its parameter records the value 4 that was passed in. If we switch to another account and execute, the call will fail because the onlyAdmin modifier prevents that user from calling the function. ![](../../../../images/articles/solidity_basic_features/IMG_5448.JPG) @@ -203,11 +203,11 @@ After calling getState, you can directly see that the value obtained is 4, which ## Solidity data type -In the previous example, we used data types such as uint.。Due to the special design of the Solidity type, the data type of Solidity is also briefly introduced here.。 +In the previous example, we used data types such as uint. Because Solidity's type system has some special design, Solidity's data types are also briefly introduced here. ### Integer series -Solidity provides a set of data types to represent integers, including unsigned integers and signed integers.。Each type of integer can also be subdivided according to length, the specific subdivision type is as follows。 +Solidity provides a set of data types to represent integers, including unsigned and signed integers. Each integer type can be further subdivided by length; the specific subdivisions are as follows. | Type| Length(Bit) | Signed| | ------- | -------- | ------ | @@ -243,7 +243,7 @@ Also, you can convert an integer type to bytes。 bytes32 b = bytes32(s); ``` -Here is a key detail, Solidity takes the big endian encoding, the high address is stored in the small endian of the integer.。For example, b [0] is the low address side, which stores the high side of the integer, so the value is 0;B [31] is 1.。 +Here is a key detail: Solidity uses big-endian encoding, so the low-address bytes store the high-order bytes of the integer. For example, b[0] is the low-address end, which stores the integer's high-order byte, so its value is 0; b[31] is 1. ``` function bytesSample() public pure returns(byte, byte){ @@ -257,11 +257,11 @@ Here is a key detail, Solidity takes the big endian encoding, the high address i ### variable length bytes -From the above, the reader can understand the fixed-length byte array.。In addition, Solidity provides a variable-length byte array: bytes。The use is similar to an array, which will be described later.。 +From the above, the reader can understand fixed-length byte arrays. In addition, Solidity provides a variable-length byte array: bytes. Its use is similar to an array, which will be described later. ### string -The string provided by Solidity is essentially a string of UTF-8-encoded byte array that is compatible with variable-length bytes。Currently Solidity has poor support for strings and no concept of characters。User can convert string to bytes。 +The string type provided by Solidity is essentially a UTF-8 encoded byte array, compatible with variable-length bytes. Currently Solidity has poor support for strings and no concept of characters. Users can convert a string to bytes. ``` function stringSample() public view returns(bytes){ @@ -289,10 +289,10 @@ address represents the account address, which is indirectly generated by the pri ### mapping -Mapping represents mapping and is an extremely important data 
structure.。There are several differences between it and the mapping in java: +Mapping represents a mapping and is an extremely important data structure. There are several differences between it and the Map in Java: -- It cannot iterate keys because it only saves the hash of the key, not the key value, if you want to iterate, you can use the open source iterable hash class library -If a key is not stored in mapping, the corresponding value can be read normally, except that the value is null (all bytes are 0).。So it does not need to put, get and other operations, the user can directly operate it。 +- It cannot iterate over keys, because it only saves the hash of each key, not the key itself; if you want to iterate, you can use an open-source iterable mapping library +- If a key has never been saved in the mapping, the corresponding value can still be read normally, except that the value is null (all bytes are 0). So no put, get, or similar operations are needed; the user can operate on it directly. ``` contract Sample{ @@ -362,17 +362,17 @@ This section only describes the more common data types, a more complete list can #### global variable -In the constructor of the sample contract code, include msg.sender。It belongs to global variable。In smart contracts, global variables or global methods can be used to obtain some basic information about the current block and transaction, such as block height, block time, contract caller, etc.。 +The constructor of the sample contract code uses msg.sender, which is a global variable. In smart contracts, global variables or global functions can be used to obtain basic information about the current block and transaction, such as block height, block time, and contract caller. -The most commonly used global variable is the msg variable, which represents the calling context, and the common global variables are as follows. 
+The most commonly used global variable is the msg variable, which represents the calling context. The common global variables are as follows: - **msg.sender**: the direct caller of the contract。 - Since it is a direct caller, when in user A.-> Contract 1-> Under the contract 2 call chain, if you use msg.sender in contract 2, you will get the address of contract 1.。If you want to get user A, you can use tx.origin. + Since msg.sender is the direct caller, in the call chain user A -> contract 1 -> contract 2, using msg.sender in contract 2 yields the address of contract 1. To get user A, use tx.origin. - **tx.origin**: the "initiator" of the transaction, the starting point of the entire call chain. -- **msg.calldata**Contains complete call information, including function identifiers and parameters.。The first 4 bytes of calldata are the function ID, which is the same as msg.sig。 +- **msg.calldata**: contains the complete call information, including the function identifier and parameters. The first 4 bytes of calldata are the function ID, the same as msg.sig. - **msg.sig**: the first 4 bytes of msg.calldata, used to identify the function. - **block.number**: indicates the height of the current block。 @@ -383,4 +383,4 @@ Only some common global variables are listed here, please refer to [Full Version ## Conclusion -This article introduces a simple example contract and introduces the basics of using Solidity to develop smart contracts.。Readers can try to run the contract and feel the development of smart contracts.。If you want to learn more about smart contract examples, recommend [official website](https://solidity.readthedocs.io/en/v0.6.2/solidity-by-example.html)Examples for readers to learn, or follow in a follow-up series on this topic。In the example of the official website, a number of cases such as voting, bidding, and micro-payment channels are provided, which are close to real life and are good learning materials.。 +This article introduces a simple example contract and covers the basics of using Solidity to develop smart contracts. Readers can try running the contract to get a feel for smart contract development. To learn more smart contract examples, we recommend the [official website](https://solidity.readthedocs.io/en/v0.6.2/solidity-by-example.html) examples, or the follow-up articles in this series. The official website provides a number of cases, such as voting, bidding, and micro-payment channels, which are close to real life and are good learning materials. diff --git a/3.x/en/docs/articles/3_features/35_contract/solidity_design_patterns.md b/3.x/en/docs/articles/3_features/35_contract/solidity_design_patterns.md index 3156d70e8..414b3e0dd 100644 --- a/3.x/en/docs/articles/3_features/35_contract/solidity_design_patterns.md +++ b/3.x/en/docs/articles/3_features/35_contract/solidity_design_patterns.md @@ -4,11 +4,11 @@ Author: Chu Yuzhi | FISCO BCOS Core Developer ## Foreword -With the development of blockchain technology, more and more enterprises and individuals begin to combine blockchain with their own business。The unique advantages of blockchain, for example, data is open, transparent and immutable, which can facilitate business.。But at the same time, there are some hidden dangers。The transparency of the data means that anyone can read it.;Cannot be tampered with, meaning that information cannot be deleted once it is on the chain, and even the contract code cannot be changed。In addition, the openness of the contract, the callback mechanism, each of the characteristics can be used as an attack technique, a little careless, light contract is useless, heavy to face the risk of disclosure of corporate secrets.。Therefore, before the business contract is put on the chain, the security and maintainability of the contract need to be fully considered in advance.。Fortunately, through a lot of practice of Solidity language in recent years, developers continue 
to refine and summarize, has formed some"Design Pattern"To guide the daily development of common problems。 +With the development of blockchain technology, more and more enterprises and individuals are beginning to combine blockchain with their own business. The unique advantages of blockchain, such as data being open, transparent, and immutable, can facilitate business. But at the same time there are hidden dangers. The transparency of the data means that anyone can read it; immutability means that information cannot be deleted once it is on the chain, and even the contract code cannot be changed. In addition, the openness of contracts and the callback mechanism can each be used as an attack vector; with a little carelessness, at best the contract becomes useless, and at worst you face the risk of leaking corporate secrets. Therefore, before a business contract goes on the chain, its security and maintainability need to be fully considered in advance. Fortunately, through extensive practice with the Solidity language in recent years, developers have continued to refine and summarize their experience and have formed a number of "design patterns" to guide the handling of common problems in daily development. ## Smart Contract Design Patterns Overview -In 2019, the IEEE included a paper from the University of Vienna entitled "Design Patterns For Smart Contracts In the Ethereum Ecosystem."。This paper analyzes the hot Solidity open source projects, combined with previous research results, sorted out 18 design patterns。These design patterns cover security, maintainability, lifecycle management, authentication, and more.。 +In 2019, the IEEE accepted a paper from the University of Vienna entitled "Design Patterns For Smart Contracts In the Ethereum Ecosystem". This paper analyzes popular Solidity open-source projects and, combined with previous research results, distills 18 design patterns. These design patterns cover security, maintainability, lifecycle management, authentication, and more. | 
Type| Mode| | ------------------ | ------------------------------------------------------------ | @@ -22,9 +22,9 @@ Next, this article will select the most common and common of these 18 design pat ## Security(Security) -Smart contract writing, the primary consideration is security issues。In the blockchain world, there are countless malicious codes。If your contract contains cross-contract calls, be especially careful to verify that external calls are credible, especially if their logic is out of your control.。If you're not defensive, those "malicious" external codes could ruin your contract.。For example, external calls can cause code to be executed repeatedly through malicious callbacks, thus destroying the contract state, an attack known as Reentrance Attack.。Here, a small experiment in reentry attacks is introduced to give the reader an understanding of why external calls can lead to contract breaches, while helping to better understand the two design patterns that will be introduced to improve contract security.。 +When writing smart contracts, the primary consideration is security. In the blockchain world, malicious code is everywhere. If your contract contains cross-contract calls, be especially careful to verify that the external calls are trustworthy, especially when their logic is out of your control. If you are not defensive, that "malicious" external code could ruin your contract. For example, an external call can cause code to be executed repeatedly through a malicious callback, thus corrupting the contract state, an attack known as a Reentrancy Attack. Here, a small reentrancy-attack experiment is introduced to help the reader understand why external calls can lead to contract breaches, and to better understand the two design patterns introduced later that improve contract security. -Here's a condensed example of a re-entry attack。The AddService contract is a simple counter, and each external contract can call addByOne of the AddService contract to increment the field _ count by one, while requiring each external contract to call the function at most once.。In this way, the _ count field reflects exactly how many contracts AddService has been called by.。At the end of the addByOne function, AddService calls the callback function notify for the external contract。The code for AddService is as follows: +Here's a condensed example of a reentrancy attack. The AddService contract is a simple counter: each external contract can call AddService's addByOne to increment the field _ count by one, while each external contract is required to call the function at most once. In this way, the _ count field reflects exactly how many contracts have called AddService. At the end of the addByOne function, AddService calls the external contract's callback function notify. The code for AddService is as follows: ``` contract AddService{ @@ -50,7 +50,7 @@ contract AdderInterface{ } ``` -If AddService is deployed as such, a malicious attacker can easily control the number of _ count in AddService, invalidating the counter altogether。The attacker only needs to deploy a contract BadAdder, which can be used to call AddService, which can achieve the attack effect.。The BadAdder contract is as follows: +If AddService is deployed as is, a malicious attacker can easily control the value of _ count in AddService, invalidating the counter altogether. The attacker only needs to deploy a contract BadAdder and use it to call AddService to achieve the attack. The BadAdder contract is as follows: ``` @@ -79,17 +79,17 @@ BadAdder in the callback function notify, in turn, continue to call AddService, ![](../../../../images/articles/solidity_design_patterns/IMG_5450.PNG) -In this example, AddService had difficulty knowing the caller's callback logic, but still trusted the external call, and the attacker took advantage of AddService's poor code arrangement, resulting in tragedy.。In this example, 
the actual business significance is removed, and the only consequence of the attack is the distortion of the _ count value.。Genuine re-entry attacks that can have serious business consequences。For example, in counting the number of votes, the number of votes will be changed beyond recognition.。If you want to block this type of attack, the contract needs to follow a good coding pattern.。 +In this example, AddService could not know the caller's callback logic in advance, yet still trusted the external call; the attacker exploited AddService's poor code arrangement, with tragic results。Here the actual business meaning has been stripped away, so the only consequence of the attack is a distorted _count value。A genuine re-entrancy attack, however, can have serious business consequences: in vote counting, for instance, the tally could be altered beyond recognition。To block this type of attack, the contract needs to follow a good coding pattern。 -### Checks-Effects-Interaction - Ensure that the state is complete, and then make external calls.
+### Checks-Effects-Interaction - Ensure the state is complete before making external calls This pattern is a coding style constraint that effectively avoids re-entrancy attacks。Typically, a function might have three parts: -- Checks: Parameter Validation -- Effects: Modify contract status +- Checks: Parameter Validation +- Effects: Modify contract status - Interaction: external interaction -This model requires contracts to follow Checks-Effects-The order of the interaction to organize the code。The benefit of it is that before making an external call, Checks-Effects has completed all work related to the state of the contract itself, making the state complete and logically self-consistent, so that external calls cannot exploit the incomplete state for attacks.。Review the previous AddService contract, did not follow this rule, in the case of its own state has not been updated to call the external code, the external code can naturally cross a knife, so that _ adders [msg.sender] = true permanently not called, thus invalidating the require statement.。We check-effects-Review the original code from the perspective of interaction: +This pattern requires contracts to organize code in the order Checks-Effects-Interaction。The advantage is that all work related to the contract's own state (the Checks and Effects) is completed before the external call, leaving the state complete and logically self-consistent, so the external call cannot exploit an incomplete state for an attack。Looking back, the previous AddService contract did not follow this rule: it called external code before its own state had been updated, so the external code could cut in and prevent _adders[msg.sender] = true from ever executing, thus invalidating the require statement。Let's review the original code in terms of Checks-Effects-Interaction: ``` //Checks @@ -103,7 +103,7 @@ This model requires contracts to follow Checks-Effects-The order of the interact _adders[msg.sender] = true; ```
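To make the recommended ordering concrete, here is a minimal standalone sketch of the counter laid out in Checks-Effects-Interaction order (a hypothetical SafeAddService contract written for this illustration, not code from the patch; the pragma version is an assumption):

```
pragma solidity ^0.4.25;

contract AdderInterface {
    function notify() public;
}

// Hypothetical illustration: the AddService counter rearranged in
// Checks-Effects-Interaction order
contract SafeAddService {
    uint private _count;
    mapping(address => bool) private _adders;

    function addByOne() public {
        // Checks: each caller may increment at most once
        require(!_adders[msg.sender], "you have added already");
        // Effects: complete every state change before any external call
        _count++;
        _adders[msg.sender] = true;
        // Interaction: a re-entrant callback can no longer pass the require above
        AdderInterface(msg.sender).notify();
    }

    function getCount() public view returns(uint) {
        return _count;
    }
}
```

Because both state writes precede the external call, a recursive call to addByOne from within notify is stopped at the require check.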
-As long as the order is slightly adjusted to meet the Checks-Effects-Interaction mode, tragedy is avoided: +With a slight adjustment of the order to satisfy the Checks-Effects-Interaction pattern, the tragedy is avoided: ``` //Checks @@ -116,7 +116,7 @@ As long as the order is slightly adjusted to meet the Checks-Effects-Interaction adder.notify(); ``` -Since the _ adders mapping has been modified, when a malicious attacker wants to recursively call addByOne, the require line of defense will work to intercept the malicious call.。Although this pattern is not the only way to resolve reentry attacks, it is still recommended that developers follow。 +Since the _adders mapping has already been modified, when a malicious attacker tries to recursively call addByOne, the require line of defense kicks in and intercepts the malicious call。Although this pattern is not the only way to prevent re-entrancy attacks, it is still recommended that developers follow it。 ### Mutex - Prohibit Recursion @@ -140,19 +140,19 @@ contract Mutex { } ``` -In this example, before calling the some function, the noReancy modifier is run to assign the locked variable to true。If some is called recursively at this point, the logic of the modifier is activated again, and the first line of code for the modifier throws an error because the locked property is already true.。 +In this example, before the some function runs, the noReancy modifier assigns the locked variable to true。If some is called recursively at this point, the modifier's logic is activated again, and the first line of the modifier throws an error because locked is already true。 ## **Maintainability** In blockchain, contracts cannot be changed once they are deployed。When a contract has a bug, you usually have to face the following problems: -1. How to deal with the business data already on the contract.? +1. How to deal with the business data already on the contract? 2.
How to reduce the impact of the upgrade as much as possible, so that the rest of the functions are not affected? -3. What to do with other contracts that rely on it.? +3. What to do with other contracts that rely on it? Reviewing object-oriented programming, the core idea is to separate the changing things from the unchanging things in order to block the spread of change in the system。As a result, well-designed code is usually organized to be highly modular, with high cohesion and low coupling。Using this classic idea can solve the above problems。 -### Data segregation - Separation of data and logic +### Data segregation - separation of data and logic Before understanding the design pattern, take a look at the following contract code: @@ -171,11 +171,11 @@ contract Computer{ } ``` -This contract contains two capabilities, one is to store data(setData function)The other is the use of data for calculation.(Compute function)。If the contract is deployed for a period of time and you find that the compute is incorrectly written, for example, you should not multiply by 10, but multiply by 20, it will lead to the question of how to upgrade the contract as described above.。At this point, you can deploy a new contract and try to migrate the existing data to the new contract, but this is a heavy operation, on the one hand, to write the code of the migration tool, on the other hand, the original data is completely obsolete, empty of valuable node storage resources。 +This contract contains two capabilities: one is to store data (the setData function), the other is to use the data for calculation (the compute function)。If, some time after the contract is deployed, you find that compute is written incorrectly - for example, it should multiply by 20 rather than 10 - you face the contract upgrade problem described above。At this point, you can deploy a new contract and try to migrate the existing data to it, but this is a heavy operation: on
the one hand, you have to write the code of a migration tool; on the other hand, the original data becomes completely obsolete, wasting valuable node storage resources。 -Therefore, it is necessary to be modular in advance when programming。If we will"Data"Seen as unchanging things, will"Logic"Seeing as something that can change, you can perfectly avoid the above problems。The Data Segregation (which means data separation) pattern is a good implementation of this idea.。The model requires a business contract and a data contract: the data contract is only for data access, which is stable.;Business contracts, on the other hand, perform logical operations through data contracts.。 +Therefore, it is necessary to modularize in advance when programming。If we treat "data" as the unchanging part and "logic" as the part that may change, we can avoid the above problems entirely。The Data Segregation (data separation) pattern is a good implementation of this idea。The pattern calls for a business contract and a data contract: the data contract is only for data access and remains stable; the business contract performs logical operations through the data contract。 -In conjunction with the previous example, we transfer data read and write operations specifically to a contract DataRepository.
+In conjunction with the previous example, we move the data read and write operations into a dedicated contract, DataRepository: ``` contract DataRepository{ @@ -208,11 +208,11 @@ contract Computer{ } ``` -In this way, as long as the data contract is stable, the upgrade of the business contract is very lightweight.。For example, when I want to replace Computer with ComputerV2, the original data can still be reused。 +In this way, as long as the data contract is stable, upgrading the business contract is very lightweight。For example, when you want to replace Computer with ComputerV2, the original data can still be reused。 -### Satellite - Breaking down contract functions +### Satellite - Decompose contract functions -A complex contract usually consists of many functions, if these functions are all coupled in a contract, when a function needs to be updated, you have to deploy the entire contract, normal functions will be affected.。The Satellite model addresses these issues using the single-duty principle, advocating the placement of contract subfunctions into subcontracts, with each subcontract (also known as a satellite contract) corresponding to only one function.。When a sub-function needs to be modified, just create a new sub-contract and update its address to the main contract.。 +A complex contract usually consists of many functions。If these functions are all coupled in one contract, then when a single function needs updating you have to redeploy the entire contract, and the normal functions are affected as well。The Satellite pattern addresses these issues with the single-responsibility principle: it advocates placing contract sub-functions into sub-contracts, with each sub-contract (also known as a satellite contract) corresponding to only one function。When a sub-function needs to be modified, just create a new sub-contract and update its address in the main contract。 For a simple example, the setVariable function of the following contract is to calculate the input data (compute function)
and store the calculation result in the contract state _variable: @@ -231,7 +231,7 @@ contract Base { } ``` -After deployment, if you find that the compute function is incorrectly written and you want to multiply by a factor of 20, you must redeploy the entire contract.。However, if you initially operate in Satellite mode, you only need to deploy the corresponding subcontract。 +After deployment, if you find that the compute function is written incorrectly and should multiply by a factor of 20, you must redeploy the entire contract。However, if you had adopted the Satellite pattern from the start, you would only need to redeploy the corresponding subcontract。 First, let's strip the compute function into a separate satellite contract: @@ -243,7 +243,7 @@ contract Satellite { } ``` -The main contract then relies on the subcontract to complete setVariable. +The main contract then relies on the subcontract to complete setVariable: ``` contract Base { @@ -271,9 +271,9 @@ contract Satellite2{ } ``` -### Contract Registry - Track the latest contracts +### Contract Registry - Track Latest Contracts -In Satellite mode, if a primary contract depends on a subcontract, when the subcontract is upgraded, the primary contract needs to update the address reference to the subcontract, which is done through updateXXX, for example, the updateSatellite function described earlier.。This type of interface is a maintainable interface and has nothing to do with the actual business.
Too much exposure of this type of interface will affect the aesthetics of the main contract and greatly reduce the caller's experience.。The Contract Registry design pattern elegantly solves this problem。In this design mode, there is a special contract Registry to track each upgrade of a subcontract, and the main contract can obtain the latest subcontract address by querying this Registyr contract.。After the satellite contract is redeployed, the new address is updated via the Registry.update function。 +In Satellite mode, if a primary contract depends on a subcontract, then when the subcontract is upgraded, the primary contract needs to update its address reference to the subcontract through an updateXXX function, for example the updateSatellite function described earlier。Such interfaces exist purely for maintenance and have nothing to do with the actual business; exposing too many of them mars the main contract and degrades the caller's experience。The Contract Registry design pattern elegantly solves this problem。In this pattern, a special Registry contract tracks each upgrade of a subcontract, and the main contract obtains the latest subcontract address by querying this Registry contract。After the satellite contract is redeployed, the new address is recorded via the Registry.update function。 ``` contract Registry{ @@ -281,7 +281,7 @@ contract Registry{ address _current; address[] _previous; - / / If the subcontract is upgraded, update the address through the update function. + // If the subcontract is upgraded, update the address through the update function function update(address newAddress) public{ if(newAddress != _current){ _previous.push(_current); @@ -312,7 +312,7 @@ contract Base { } ### **Contract Relay - Agent invokes latest contract** -This design pattern solves the same problem as Contract Registry, i.e.
the main contract can call the latest subcontract without exposing the maintenance interface.。In this mode, there is a proxy contract, and the subcontract shares the same interface, responsible for passing the call request of the main contract to the real subcontract.。After the satellite contract is redeployed, the new address is updated via the SatelliteProxy.update function。 +This design pattern solves the same problem as Contract Registry: the main contract can call the latest subcontract without exposing any maintenance interface。In this pattern, a proxy contract shares the same interface as the subcontract and is responsible for forwarding the main contract's call requests to the real subcontract。After the satellite contract is redeployed, the new address is updated via the SatelliteProxy.update function。 ``` contract SatelliteProxy{ @@ -322,7 +322,7 @@ contract SatelliteProxy{ return satellite.compute(a); } - / / If the subcontract is upgraded, update the address through the update function.
+ // If the subcontract is upgraded, update the address through the update function function update(address newAddress) public{ if(newAddress != _current){ _current = newAddress; @@ -355,7 +355,7 @@ contract Base { } ## Life Cycle (Lifecycle) -By default, the life of a contract is nearly infinite - unless the blockchain on which it depends is eliminated。But many times, users want to shorten the life of the contract。This section will introduce two simple patterns to end contract life early.。 +By default, the life of a contract is nearly infinite - unless the blockchain on which it depends is eliminated。But in many cases, users want to shorten a contract's life。This section introduces two simple patterns for ending contract life early。 ### Mortal - Allow contracts to self-destruct @@ -371,9 +371,9 @@ contract Mortal{ } ``` -### Automatic Deprecation - Allow contracts to automatically stop services +### Automatic Deprecation - Allow contracts to stop service automatically -If you want a contract to be out of service after a specified period without human intervention, you can use the Automatic Deprecation pattern.。 +If you want a contract to go out of service after a specified period without human intervention, you can use the Automatic Deprecation pattern。 ``` contract AutoDeprecated{ @@ -395,15 +395,15 @@ contract AutoDeprecated{ } ``` -When the user calls service, the notExpired modifier will first perform date detection, so that once a specific time has passed, the call will be intercepted at the notExpired layer due to expiration.。 +When the user calls service, the notExpired modifier first performs a date check, so once the specified time has passed, the call is intercepted at the notExpired layer due to expiration。 ## Permissions (Authorization) -There are many administrative interfaces in the previous article, which can have serious consequences if they can be called by anyone, such as the self-destruct function above, which assumes that anyone can
access it, and its severity is self-evident.。Therefore, a set of permission control design patterns that ensure that only specific accounts can access is particularly important。 +The previous sections contain many administrative interfaces; if anyone could call them, the consequences could be serious - take the self-destruct function above: were anyone able to access it, the severity would be self-evident。Therefore, a set of permission control design patterns ensuring that only specific accounts have access is particularly important。 ### Ownership -For permission control, you can use the ownership mode.。This pattern guarantees that only the owner of the contract can call certain functions.。First you need an Owned contract: +For permission control, you can use the Ownership pattern。This pattern guarantees that only the owner of the contract can call certain functions。First you need an Owned contract: ``` contract Owned{ @@ -430,15 +430,15 @@ contract Biz is Owned{ } ``` -Thus, when the manage function is called, the onlyOwner modifier runs first and detects whether the caller is consistent with the contract owner, thus intercepting unauthorized calls.。 +Thus, when the manage function is called, the onlyOwner modifier runs first and checks whether the caller is the contract owner, thereby intercepting unauthorized calls。 ## Action and Control These patterns are typically used in specific scenarios, and this section will focus on privacy-based coding patterns and design patterns for interacting with off-chain data。 -### Commit - Reveal - Delayed Secret Leak +### Commit-Reveal - Delayed Secret Reveal -On-chain data is open and transparent, once some private data on the chain, anyone can see, and can never withdraw。Commit And Reveal mode allows users to convert the data to be protected into unrecognizable data, such as a string of hash values, until a certain point to reveal the meaning of the hash value, revealing the true original
value.。In the voting scenario, for example, suppose that the voting content needs to be revealed after all participants have completed the voting to prevent participants from being affected by the number of votes during this period.。We can look at the specific code used in this scenario: +On-chain data is open and transparent; once private data goes on the chain, anyone can see it, and it can never be withdrawn。The Commit-Reveal pattern lets users convert the data to be protected into unrecognizable data, such as a hash value, and only reveal the meaning of the hash at a certain point, exposing the true original value。Take the voting scenario: suppose the voting content must be revealed only after all participants have finished voting, to prevent participants from being influenced by the running vote count。Let's look at the specific code used in this scenario: ``` contract CommitReveal { @@ -472,9 +472,9 @@ contract CommitReveal { } ``` -### Oracle - Read off-chain data +### Oracle - Read Off-Chain Data -At present, the ecosystem of smart contracts on the chain is relatively closed, and it is impossible to obtain off-chain data, which affects the application scope of smart contracts.。Off-chain data can greatly expand the use of smart contracts, such as in the insurance industry, where smart contracts can automatically execute claims if they can read unexpected events that occur in reality.。Fetching external data is performed through an off-chain data layer called Oracle。When a business contract attempts to obtain external data, the query request is first placed in an Oracle-specific contract;Oracle listens to the contract, reads the query request, executes the query, and calls the business contract response interface to get the contract results.。 +At present, the ecosystem of smart contracts on the chain is relatively closed, and it is impossible to obtain off-chain data, which affects the application scope of smart
contracts。Off-chain data can greatly expand the use of smart contracts; in the insurance industry, for example, a smart contract could automatically execute claims if it could read accident events occurring in the real world。Fetching external data is performed through an off-chain data layer called an Oracle。When a business contract wants external data, the query request is first placed in an Oracle-specific contract; the Oracle listens to that contract, reads the query request, executes the query, and calls the business contract's response interface to deliver the result。 ![](../../../../images/articles/solidity_design_patterns/IMG_5451.PNG) @@ -535,4 +535,4 @@ contract BizContract { ## SUMMARY -This article covers a variety of design patterns such as security and maintainability, some of which are more principled, such as the Security and Maintenance design patterns.;Some are partial practices, such as Authorization, Action And Control。These design patterns, especially practice classes, do not cover all scenarios。As you explore the actual
business, you will encounter more and more specific scenarios and problems, and developers can refine and sublimate these patterns to precipitate design patterns for certain types of problems.。The above design patterns are a powerful weapon for programmers, mastering them can deal with many known scenarios, but more should master the method of refining design patterns, so as to calmly deal with unknown areas, this process can not be separated from the in-depth exploration of the business, in-depth understanding of software engineering principles.。 \ No newline at end of file +This article has covered design patterns for security, maintainability, and more。Some are principle-oriented, such as the security and maintainability patterns; some lean toward practice, such as Authorization and Action and Control。These design patterns, especially the practice-oriented ones, do not cover all scenarios。As you explore real business, you will encounter ever more specific scenarios and problems, and you can refine and distill these experiences into design patterns for particular classes of problems。The design patterns above are a powerful weapon for programmers: mastering them handles many known scenarios, but it is even more important to master the method of distilling design patterns, so you can deal calmly with unknown territory - a process inseparable from deep exploration of the business and a deep understanding of software engineering principles。 \ No newline at end of file diff --git a/3.x/en/docs/articles/3_features/35_contract/solidity_design_programming_strategy.md b/3.x/en/docs/articles/3_features/35_contract/solidity_design_programming_strategy.md index 4ca1635dd..649f3195c 100644 --- a/3.x/en/docs/articles/3_features/35_contract/solidity_design_programming_strategy.md +++ b/3.x/en/docs/articles/3_features/35_contract/solidity_design_programming_strategy.md @@ -4,7 +4,7 @@ Author : MAO Jiayu | FISCO BCOS Core Developer ## **Preface** -As a veteran code farmer who has been moving bricks for many years, I felt helpless when I first came into contact with Solidity: expensive computing and storage resources, rudimentary syntax features, maddening debug experience, almost barren class library support, inserting assembly statements if I didn't agree......
makes one wonder if it's been 9012 years and there's even this anti-human language.?For code farmers who are accustomed to using all kinds of increasingly "silly" class libraries and automated advanced frameworks, the process of learning Solidity is an endless journey of persuasion.。But as we learn more about the underlying technology of blockchain, we will gradually understand the design principles that must be strictly followed and the price that must be paid after weighing the Solidity language running on "The World Machine."。As the famous slogan in the Matrix: "Welcome to the dessert of the real," in the face of harsh and difficult circumstances, the most important thing is to learn how to adapt to the environment, preserve yourself and evolve quickly.。This article summarizes some of the Solidity programming strategy, looking forward to the readers will not hesitate to share the exchange, to achieve the effect of throwing bricks and mortar。 +As a veteran code farmer who has been moving bricks for many years, I felt helpless when I first came into contact with Solidity: expensive computing and storage resources, rudimentary syntax features, maddening debug experience, almost barren class library support, inserting assembly statements if I didn't agree...... 
makes one wonder how, in this day and age, such an anti-human language can still exist? For code farmers accustomed to ever more "foolproof" class libraries and automated advanced frameworks, learning Solidity is a constant battle against the urge to give up。But as we learn more about the underlying technology of blockchain, we gradually come to understand the design principles Solidity must strictly follow, and the price it must pay, to run on "the World Machine"。As the famous line in The Matrix goes, "Welcome to the desert of the real": in a harsh and difficult environment, the most important thing is to learn to adapt, preserve yourself, and evolve quickly。This article summarizes some Solidity programming strategies; I look forward to readers sharing and discussing their own, in the spirit of throwing out a brick to attract jade。 ## Principles of putting data on the chain @@ -12,21 +12,21 @@ As a veteran code farmer who has been moving bricks for many years, I felt helpl Based on the current development of blockchain technology and smart contracts, the following principles should be followed when putting data on the chain: -- Important data that requires distributed collaboration is chained, and non-essential data is not chained.; -- Sensitive data is desensitized or encrypted and then linked (depending on the degree of data confidentiality, select an encryption algorithm that meets the requirements of the privacy protection security level); +- Important data that requires distributed collaboration goes on the chain; unnecessary data does not; +- Sensitive data is desensitized or encrypted before going on the chain (depending on the degree of data confidentiality, select an encryption algorithm that meets the required privacy protection security level); - On-chain authentication, off-chain authorization。 -When using blockchain, developers don't need to put all their business
and data on the chain。Instead, "good steel is on the cutting edge," and smart contracts are more suitable for use in distributed collaboration business scenarios.。 +When using blockchain, developers don't need to put all their business and data on the chain。Instead, as the saying goes, "use good steel on the blade's edge": smart contracts are better suited to distributed collaboration business scenarios。 ## Slim down function parameters and variables -If complex logic is defined in a smart contract, especially if complex function parameters, variables, and return values are defined in the contract, you will encounter the following errors at compile time. +If complex logic is defined in a smart contract, especially complex function parameters, variables, and return values, you will encounter the following error at compile time: ``` Compiler error: Stack too deep, try removing local variables. ``` -This is also one of the high-frequency technical issues in the community.。The reason for this problem is that EVM is designed for a maximum stack depth of 16。All calculations are performed within a stack, and access to the stack is limited to the top of the stack in such a way as to allow one of the top 16 elements to be copied to the top of the stack, or to swap the top of the stack with one of the 16 elements below.。All other operations can only take the top few elements, and after the operation, the result is pushed to the top of the stack.。Of course, you can put the elements on the stack into storage or memory.。However, you cannot access only the element on the stack at the specified depth unless you first remove the other elements from the top of the stack。If the size of the input parameters, return values, and internal variables in a contract exceeds 16, it clearly exceeds the maximum depth of the stack.。Therefore, we can use structs or arrays to encapsulate input or return values to reduce the use of elements at the top of the stack, thereby avoiding this error。For
example, the following code encapsulates the original 16 bytes variables by using the bytes array.。 +This is also one of the high-frequency technical issues in the community。The reason is that the EVM is designed with a maximum stack depth of 16。All calculations are performed on a stack, and access is limited to its top: you may copy one of the topmost 16 elements to the top of the stack, or swap the top element with one of the 16 elements below it。All other operations take only the topmost few elements and push their result onto the top of the stack。You can, of course, move stack elements into storage or memory。However, you cannot access an element at an arbitrary depth without first removing the elements above it。If the combined number of input parameters, return values, and internal variables in a function exceeds 16, the maximum stack depth is clearly exceeded。Therefore, we can use structs or arrays to encapsulate inputs or return values, reducing the number of stack-top elements used and thereby avoiding this error。For example, the following code packs what were originally 16 bytes variables into a bytes array。 ``` function doBiz(bytes[] paras) public { @@ -37,11 +37,11 @@ ## Guaranteed parameters and behavior as expected -With the lofty ideal of "Code is law," geeks design and create smart contracts for blockchain。In the alliance chain, different participants can use smart contracts to define and write the logic of a part of a business or interaction to complete a part of a social or commercial activity.。 +With the lofty ideal of "Code is law," geeks design and create smart contracts for blockchain。In the consortium chain, different participants can use smart contracts to define and write the logic of part of a business or interaction, completing part of a social or
commercial activity。 -Compared to traditional software development, smart contracts have more stringent security requirements for function parameters and behavior。Mechanisms such as identity real names and CA certificates are provided in the federation chain to effectively locate and regulate all participants。However, smart contracts lack prior intervention mechanisms for vulnerabilities and attacks.。As the so-called word Breguet, if you do not rigorously check the smart contract input parameters or behavior, it may trigger some unexpected bugs.。 +Compared to traditional software development, smart contracts have more stringent security requirements for function parameters and behavior。Mechanisms such as real-name identity and CA certificates are provided in the consortium chain to effectively identify and regulate all participants。However, smart contracts lack prior intervention mechanisms against vulnerabilities and attacks。As the saying goes, even the most careful plan can have a lapse: if you do not rigorously check smart contract input parameters or behavior, unexpected bugs may be triggered。 -Therefore, when writing smart contracts, it is important to pay attention to the examination of contract parameters and behavior, especially those contract functions that are open to the outside world.。Solidity provides keywords such as require, revert, and assert to detect and handle exceptions.。Once the error is detected and found, the entire function call is rolled back and all state modifications are rolled back as if the function had never been called。The following uses three keywords to achieve the same semantics.。 +Therefore, when writing smart contracts, pay close attention to checking contract parameters and behavior, especially in functions open to external callers。Solidity provides the keywords require, revert, and assert to detect and handle exceptions。Once an error is detected, the entire function call is rolled back and all state
modifications are undone as if the function had never been called. The following snippets use the three keywords to achieve the same semantics. ``` require(_data == data, "require data is valid"); @@ -53,13 +53,13 @@ assert(_data == data); However, these three keywords generally apply to different usage scenarios: -- require: The most commonly used detection keyword to verify whether the input parameters and the result of calling the function are legitimate.。 -- revert: Applicable to a branch judgment scenario。 +- require: The most commonly used detection keyword, verifying that input parameters and the results of function calls are legitimate. +- revert: Applicable in branch judgment scenarios. - assert: Checks whether a result is correct and legal, generally used at the end of a function. -In a function of a contract, you can use the function decorator to abstract part of the parameter and condition checking。Within the function body, you can use if for the running state-Else and other judgment statements to check, the abnormal branch using revert fallback。You can use assert to check the execution result or intermediate state before the function runs。In practice, it is recommended to use the require keyword and move the condition check to the function decorator.;This allows the function to have more single responsibilities and focus more on the business logic.。At the same time, condition codes such as function modifiers are easier to reuse, and contracts are more secure and hierarchical.。 +In a contract function, you can use a function modifier to abstract away part of the parameter and condition checking. In the function body, you can check the running state with judgment statements such as if-else, and roll back abnormal branches with revert. You can use assert to check the execution result or an intermediate state before the function finishes. In practice, it is recommended to use the require keyword and move condition checks into function modifiers; this
gives the function a more single responsibility and keeps it focused on the business logic. At the same time, condition code such as function modifiers is easier to reuse, and contracts become more secure and better layered. -In this paper, we use a fruit store inventory management system as an example to design a fruit supermarket contract.。This contract only contains the management of all fruit categories and inventory quantities in the store, and the setFruitStock function provides a function corresponding to the fruit inventory settings.。In this contract, we need to check the incoming parameters, i.e. the fruit name cannot be empty。 +In this article, we use a fruit store inventory management system as an example to design a fruit supermarket contract. The contract only manages the fruit categories and stock quantities in the store, and the setFruitStock function sets the stock of the corresponding fruit. In this contract, we need to check the incoming parameter, i.e.
the fruit name cannot be empty. ``` pragma solidity ^0.4.25; @@ -76,11 +76,11 @@ contract FruitStore { } ``` -As mentioned above, we added a function decorator for parameter checking before function execution。Similarly, by using function decorators that check before and after function execution, you can ensure that smart contracts are safer and clearer.。The writing of smart contracts requires strict pre-and post-function checks to ensure their security.。 +As mentioned above, we added a function modifier that checks the parameter before the function executes. Similarly, by using function modifiers that run checks before and after function execution, you can make smart contracts safer and clearer. Writing smart contracts requires strict pre- and post-function checks to ensure security. ## Strictly control the execution permission of functions -If the parameters and behavior detection of smart contracts provide static contract security measures, then the mode of contract permission control provides control of dynamic access behavior.。Since smart contracts are published on the blockchain, all data and functions are open and transparent to all participants, and any node participant can initiate a transaction, which does not guarantee the privacy of the contract.。Therefore, the contract publisher must design a strict access restriction mechanism for the function。Solidity provides syntax such as function visibility modifiers and modifiers, which can be used flexibly to help build a smart contract system with legal authorization and controlled calls.。Or take the fruit contract just now as an example.。Now getStock provides a function to query the inventory quantity of specific fruits.。 +If the parameter and behavior checks of smart contracts provide static security measures, then contract permission control governs dynamic access behavior. Since smart contracts are published on the blockchain, all data and functions are open
and transparent to all participants, and any node participant can initiate a transaction, so the privacy of the contract is not guaranteed. Therefore, the contract publisher must design a strict access restriction mechanism for its functions. Solidity provides syntax such as function visibility specifiers and modifiers, which can be used flexibly to help build a smart contract system with legitimate authorization and controlled calls. Take the fruit contract from before as an example: getStock now provides a function to query the stock quantity of a specific fruit. ``` pragma solidity ^0.4.25; @@ -100,7 +100,7 @@ contract FruitStore { } ``` -The fruit store owner posted the contract on the chain.。However, after publication, the setFruitStock function can be called by any other affiliate chain participant。Although the participants in the alliance chain are real-name authenticated and can be held accountable afterwards.;However, once a malicious attacker attacks the fruit store, calling the setFruitStock function can modify the fruit inventory at will, or even clear all the fruit inventory, which will have serious consequences for the normal operation and management of the fruit store.。Therefore, it is necessary to set up some prevention and authorization measures: for the function setFruitStock that modifies the inventory, the caller can be authenticated before the function executes.。Similarly, these checks may be reused by multiple functions that modify the data, using an onlyOwner decorator to abstract this check。The owner field represents the owner of the contract and is initialized in the contract constructor.。Using public to modify the getter query function, you can pass _ owner()function to query the owner of a contract。 +The fruit store owner published the contract on the chain. However, after publication, the setFruitStock function can be called by any other consortium chain participant. Although participants in the consortium chain are real-name authenticated and
can be held accountable afterwards, a malicious attacker who targets the fruit store could still call the setFruitStock function to modify the fruit stock at will, or even clear it entirely, with serious consequences for the store's normal operation and management. Therefore, some prevention and authorization measures are necessary: for the stock-modifying function setFruitStock, the caller can be authenticated before the function executes. Since such checks may be reused by multiple data-modifying functions, an onlyOwner modifier can abstract this check. The owner field represents the owner of the contract and is initialized in the contract constructor. With the getter declared public, the owner of the contract can be queried via the _owner() function. ``` contract FruitStore { @@ -131,11 +131,11 @@ contract FruitStore { } ``` -In this way, we can encapsulate the corresponding function call permission check into the decorator, the smart contract will automatically initiate the caller authentication check, and only allow the contract deployer to call the setFruitStock function, thus ensuring that the contract function is open to the specified caller.。 +In this way, we encapsulate the function call permission check in a modifier; the smart contract automatically authenticates the caller and only allows the contract deployer to call the setFruitStock function, ensuring that the contract function is open only to the specified callers. ## abstract generic business logic -Analyzing the above FruitStore contract, we found that there seems to be something strange mixed in with the contract.。Referring to the programming principle of single responsibility, the fruit store inventory management contract has more logic than the above function function check, so that the contract can not focus all the code in its own business logic.。In this
regard, we can abstract reusable functions and use Solidity's inheritance mechanism to inherit the final abstract contract.。Based on the above FruitStore contract, a BasicAuth contract can be abstracted, which contains the previous onlyOwner's decorator and related functional interfaces.。 +Analyzing the above FruitStore contract, we find that something extraneous is mixed into it. By the programming principle of single responsibility, the fruit store inventory management contract contains logic beyond its own business, such as the permission checks above, so the contract cannot focus all of its code on its own business logic. We can therefore abstract the reusable functionality and use Solidity's inheritance mechanism to inherit from the resulting abstract contract. Based on the above FruitStore contract, a BasicAuth contract can be extracted, containing the previous onlyOwner modifier and related functional interfaces. ``` contract BasicAuth { @@ -174,11 +174,11 @@ contract FruitStore is BasicAuth { } ``` -In this way, the logic of FruitStore is greatly simplified, and the contract code is more streamlined, focused and clear.。 +In this way, the logic of FruitStore is greatly simplified, and the contract code is more streamlined, focused, and clear. ## Prevent loss of private keys -There are two ways to call contract functions in the blockchain: internal calls and external calls.。For privacy protection and permission control, a business contract defines a contract owner。Suppose user A deploys the FruitStore contract, then the above contract owner is the external account address of deployer A.。This address is generated by the private key calculation of the external account.。However, in the real world, the phenomenon of private key leakage, loss abound。A commercial blockchain DAPP needs to seriously consider issues such as private key replacement and reset.。The simplest and most intuitive solution to this problem is to add an alternate private key。This
alternate private key supports the operation of the permission contract modification owner. The code is as follows: +There are two ways to call contract functions on the blockchain: internal calls and external calls. For privacy protection and permission control, a business contract defines a contract owner. Suppose user A deploys the FruitStore contract; the contract owner above is then the external account address of deployer A. This address is derived from the private key of the external account. In the real world, however, private key leakage and loss abound. A commercial blockchain DApp needs to seriously consider issues such as private key replacement and reset. The simplest and most intuitive solution to this problem is to add an alternate private key that is authorized to change the contract owner. The code is as follows: ``` contract BasicAuth { @@ -213,11 +213,11 @@ ontract BasicAuth { } ``` -In this way, when we find that the private key is lost or leaked, we can use the standby external account to call setOwner to reset the account to restore and ensure the normal operation of the business.。 +In this way, when we find that a private key has been lost or leaked, we can use the standby external account to call setOwner and reset the account, restoring and ensuring normal business operation. ## interface-oriented programming -The above-mentioned private key backup concept is worthy of praise, but its specific implementation has certain limitations, in many business scenarios, it is too simple and crude.。For actual business scenarios, the dimensions and factors that need to be considered for the backup and preservation of private keys are much more complex, and the corresponding key backup strategies are more diversified.。Take fruit stores as an example, some chain fruit stores may want to manage private keys through brand headquarters, some may reset their accounts through social
relationships, and some may bind a social platform management account...... Interface-oriented programming, without relying on specific implementation details, can effectively circumvent this problem。For example, we use the interface function to first define an abstract interface for judging permissions: +The above-mentioned private key backup concept is commendable, but its implementation has limitations and is too simplistic for many business scenarios. In real business scenarios, the dimensions and factors to consider for backing up and safeguarding private keys are far more complex, and the corresponding key backup strategies are more diverse. Take fruit stores as an example: some chain fruit stores may want to manage private keys through brand headquarters, some may reset accounts through social relationships, and some may bind a social platform management account... Interface-oriented programming, which does not rely on specific implementation details, can effectively sidestep this problem. For example, we first use the interface syntax to define an abstract interface for judging permissions: ``` contract Authority { @@ -227,7 +227,7 @@ contract Authority { } ``` -This canCall function covers the function caller address, the address of the target call contract, and the function signature, and the function returns the result of a bool。This contains all the necessary parameters for contract authentication。We can further modify the previous rights management contract and rely on the Authority interface in the contract, and when authenticated, the decorator calls the abstract methods in the interface.
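To make this pattern concrete, here is a minimal sketch of how such an Authority interface and a pluggable permission check might fit together. The function bodies, the AuthSketch contract name, and the isAuthorized helper are illustrative assumptions for this sketch, not the document's actual code:

```solidity
pragma solidity ^0.4.25;

// Abstract permission-judging interface, as described in the text.
contract Authority {
    function canCall(address src, address dst, bytes4 sig) public view returns (bool);
}

// Hypothetical base contract that delegates checks to a pluggable Authority.
contract AuthSketch {
    address public _owner = msg.sender;
    Authority public _authority;

    // The modifier delegates the permission check to the Authority contract.
    modifier auth {
        require(isAuthorized(msg.sender, msg.sig), "unauthorized");
        _;
    }

    // Swap in a different judgment logic by passing another implementation.
    function setAuthority(Authority authority) public auth {
        _authority = authority;
    }

    function isAuthorized(address src, bytes4 sig) internal view returns (bool) {
        if (src == _owner) {
            return true;
        } else if (address(_authority) == address(0)) {
            return false;
        } else {
            // canCall receives caller address, target contract address,
            // and function signature — all the authentication parameters.
            return _authority.canCall(src, address(this), sig);
        }
    }
}
```

A concrete Authority implementation (for example, an ACL contract) can then be passed to setAuthority, changing the judgment logic without touching the business contract.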
+The canCall function covers the caller address, the address of the target contract, and the function signature, and returns a bool result. This contains all the parameters necessary for contract authentication. We can further modify the previous permission management contract to rely on the Authority interface; during authentication, the modifier calls the abstract method of the interface. ``` contract BasicAuth { @@ -259,17 +259,17 @@ contract BasicAuth { } ``` -In this way, we only need to flexibly define the contract that implements the canCall interface and define the specific judgment logic in the canCall method of the contract.。Business contracts, such as FruitStore, which inherit the BasicAuth contract, can be created with different judgment logic as long as the specific implementation contract is passed in.。 +In this way, we only need to define a contract that implements the canCall interface and place the specific judgment logic in its canCall method. Business contracts such as FruitStore, which inherit from the BasicAuth contract, can be given different judgment logic simply by passing in a specific implementation contract. ## Reserve events appropriately -So far, we have implemented a strong and flexible permission management mechanism, and only pre-authorized external accounts can modify the contract owner attribute.。However, with the above contract code alone, we cannot record and query the history and details of modifications and calls to functions.。And such needs abound in real business scenarios.。For example, FruitStore needs to check the historical inventory modification records to calculate the best-selling and slow-selling fruits in different seasons.。 +So far, we have implemented a strong and flexible permission management mechanism, in which only pre-authorized external accounts can modify the contract owner attribute. However, with the above contract code alone, we
cannot record or query the history and details of function modifications and calls. Such needs abound in real business scenarios. For example, FruitStore needs to examine historical stock modification records to work out the best-selling and slow-selling fruits in different seasons. -One way is to rely on the chain to maintain an independent ledger mechanism.。However, there are many problems with this approach: the cost overhead of keeping the off-chain ledger and on-chain records consistent is very high.;At the same time, smart contracts are open to all participants in the chain, and once other participants call the contract function, there is a risk that the relevant transaction information will not be synchronized.。For such scenarios, Solidity provides the event syntax。Event not only has the mechanism for SDK listening callback, but also can record and save event parameters and other information to the block with low gas cost.。FISCO BCOS community, there is also WEBASE-Collect-A tool like Bee that enables the complete export of block history event information after the fact.。 +One way is to maintain an independent off-chain ledger mechanism. However, this approach has many problems: the cost of keeping the off-chain ledger consistent with on-chain records is very high; at the same time, smart contracts are open to all participants on the chain, and once other participants call the contract functions, the related transaction information risks not being synchronized. For such scenarios, Solidity provides the event syntax. Events not only provide a mechanism for SDK listener callbacks, but can also record and save event parameters and other information into the block at low gas cost. In the FISCO BCOS community, tools such as WEBASE-Collect-Bee enable the complete export of historical block event information after the fact. [WEBASE-Collect-Bee Tool
Reference](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Collect-Bee/index.html) -Based on the above permission management contract, we can define the corresponding permission modification event, other events and so on.。 +Based on the above permission management contract, we can define a corresponding permission modification event, among others. ``` event LogSetAuthority (Authority indexed authority, address indexed from); @@ -288,21 +288,21 @@ function setAuthority(Authority authority) } ``` -When the setAuthority function is called, LogSetAuthority is triggered at the same time, and the Authority contract address and caller address defined in the event are recorded in the blockchain transaction receipt.。When the setAuthority method is called from the console, the corresponding event LogSetAuthority is also printed。Based on WEBASE-Collect-Bee, we can export all the historical information of the function to the database.。Also available based on WEBASE-Collect-Bee for secondary development, to achieve complex data query, big data analysis and data visualization functions。 +When the setAuthority function is called, LogSetAuthority is triggered at the same time, and the Authority contract address and caller address defined in the event are recorded in the blockchain transaction receipt. When the setAuthority method is called from the console, the corresponding LogSetAuthority event is also printed. Based on WEBASE-Collect-Bee, we can export all of this function's historical information to a database. WEBASE-Collect-Bee can also serve as a basis for secondary development, enabling complex data queries, big data analysis, and data visualization. ## Follow security programming specifications -Each language has its own coding specifications, and we need to follow Solidity's official programming style guidelines as strictly as possible to make the code easier to read, understand, and maintain, effectively reducing the number of contract
bugs.。[Solidity Official Programming Style Guide Reference](https://solidity.readthedocs.io/en/latest/style-guide.html)。In addition to programming specifications, the industry has also summarized many secure programming guidelines, such as re-entry vulnerabilities, data structure overflows, random number errors, runaway constructors, storage pointers for initialization, and so on.。To address and prevent such risks, it is critical to adopt industry-recommended security programming specifications, such as the [Solidity Official Security Programming Guide](https://solidity.readthedocs.io/en/latest/security-considerations.html)。At the same time, after the contract is released and launched, you also need to pay attention to and subscribe to all kinds of security vulnerabilities and attack methods released by security organizations or institutions in the Solidity community, and make up for problems in a timely manner.。 +Each language has its own coding conventions, and we should follow Solidity's official programming style guide as strictly as possible to make code easier to read, understand, and maintain, effectively reducing the number of contract bugs. [Solidity Official Programming Style Guide Reference](https://solidity.readthedocs.io/en/latest/style-guide.html). Beyond style conventions, the industry has also distilled many secure programming guidelines covering, for example, re-entrancy vulnerabilities, data structure overflows, random number errors, runaway constructors, uninitialized storage pointers, and so on. To address and prevent such risks, it is critical to adopt industry-recommended secure programming practices, such as the [Solidity Official Security Programming Guide](https://solidity.readthedocs.io/en/latest/security-considerations.html). At the same time, after a contract is released and launched, you also need to follow and subscribe to the security vulnerabilities and attack techniques published by security
organizations or institutions in the Solidity community, and remedy problems in a timely manner. -For important smart contracts, it is necessary to introduce auditing。Existing audits include manual audits, machine audits and other methods to ensure contract security through code analysis, rule validation, semantic validation and formal validation.。Although emphasized throughout this article, modularity and reuse of smart contracts that are highly reviewed and widely validated are best practice strategies。But in the actual development process, this assumption is too idealistic, each project will more or less introduce new code, or even from scratch。However, we can still grade audits based on how much code is reused, explicitly label referenced code, and focus audits and inspections on new code to save on audit costs。 +For important smart contracts, auditing should be introduced. Existing audits include manual audits, machine audits, and other methods that ensure contract security through code analysis, rule validation, semantic validation, and formal verification. Although this article has emphasized that modularizing and reusing well-reviewed, widely validated smart contracts is a best-practice strategy, in actual development this assumption is too idealistic: every project introduces new code to some degree, or even starts from scratch. However, we can still grade audits by how much code is reused, explicitly label referenced code, and focus audits and inspections on new code to save audit costs. -Finally, we need to constantly summarize and learn the best practices of our predecessors, dynamically and sustainably improve coding engineering, and continue to apply them to specific practices.。 +Finally, we need to continually absorb and summarize the best practices of our predecessors, dynamically and sustainably improve our coding practices, and keep applying them in concrete projects. ### Accumulate and reuse mature code
-The previous ideas in interface-oriented programming reduce code coupling, making contracts easier to extend and easier to maintain.。In addition to following this rule, there is another piece of advice: reuse existing code bases as much as possible.。Smart contracts are difficult to modify or withdraw after they are released, and when they are released on an open and transparent blockchain environment, they mean that bugs can cause more damage and risk than traditional software.。So, reusing some better and safer wheels is far better than rebuilding them.。In the open source community, there are already a large number of business contracts and libraries available, such as excellent libraries such as OpenZeppelin。If you can't find suitable reusable code in the open source world and past team code repositories, it is recommended to test and refine the code design as much as possible when writing new code。In addition, historical contract codes are regularly analyzed and reviewed to be templated for easy scaling and reuse。 +The interface-oriented programming ideas above reduce code coupling, making contracts easier to extend and maintain. Beyond this rule, there is another piece of advice: reuse existing code bases as much as possible. Smart contracts are difficult to modify or withdraw once released, and because they run in an open and transparent blockchain environment, bugs can cause more damage and risk than in traditional software. So reusing better, safer wheels is far preferable to rebuilding them. The open source community already offers a large number of business contracts and libraries, such as the excellent OpenZeppelin. If you cannot find suitable reusable code in the open source world or in your team's past repositories, it is recommended to test and refine the code design as thoroughly as possible when writing new code. In addition, historical contract code should be regularly
analyzed and reviewed so that it can be templated for easy extension and reuse. -For example, for the above BasicAuth, refer to Firewall Classic ACL(Access Control List)design, we can further inherit and extend BasicAuth to abstract the implementation of ACL contract control.。 +For example, following the classic firewall ACL (Access Control List) design, we can further inherit and extend the above BasicAuth to abstract an ACL-based contract access control implementation. ``` contract AclGuard is BasicAuth { @@ -351,7 +351,7 @@ contract AclGuard is BasicAuth { } ``` -In this contract, there are three main parameters: the caller address, the called contract address, and the function signature.。By configuring ACL access policies, you can precisely define and control function access behavior and permissions。The contract has built-in ANY constants that match arbitrary functions, making access granularity control easier.。This template contract is powerful and flexible enough to meet the needs of all similar permission control scenarios.。 +In this contract, there are three main parameters: the caller address, the called contract address, and the function signature. By configuring ACL access policies, you can precisely define and control function access behavior and permissions. The contract has a built-in ANY constant that matches any function, making fine-grained access control easier. This template contract is powerful and flexible enough to meet the needs of similar permission control scenarios. ## Improve storage and compute efficiency @@ -359,11 +359,11 @@ So far, in the above deduction process, more is to do the addition of smart cont ### Select the appropriate variable type -Explicit problems can be detected and reported by the EVM compiler.;But a large number of performance issues can be hidden in the details of the code.。Solidity provides very precise base types, which is very different from traditional programming languages。Here are a few tips on the basic types of
Solidity。In C, you can use short\ int\ long to define integer types on demand, and in Solidity, not only distinguish between int and uint, but even define the length of uint, such as uint8 is one byte, uint256 is 32 bytes。This design warns us that what can be done with uint8 should never be done with uint16.!Almost all basic types of Solidity, whose size can be specified at declaration time。Developers must make effective use of this syntax feature, writing code as small as possible to meet the needs of the variable type。The data type bytes32 can hold 32 (raw) bytes, but unless the data is a fixed-length data type such as bytes32 or bytes16, it is more recommended to use bytes that can vary in length.。Bytes is similar to byte [], but it will be automatically compressed and packaged in external functions, which is more space-saving。If the variable content is in English, you do not need to use UTF-8 encoding, here, recommend bytes instead of string。string defaults to UTF-8 encoding, so the storage cost of the same string will be much higher。 +Explicit problems can be detected and reported by the EVM compiler, but a large number of performance issues can hide in the details of the code. Solidity provides very precise basic types, which differs greatly from traditional programming languages. Here are a few tips on Solidity's basic types. In C, you can choose among short, int, and long to define integer types on demand; in Solidity, you not only distinguish between int and uint, but can even specify the width of uint: for example, uint8 is one byte and uint256 is 32 bytes. This design reminds us that what can be done with uint8 should never be done with uint16! Almost all of Solidity's basic types can have their size specified at declaration time. Developers should make effective use of this syntax feature and declare each variable with the smallest type that meets its needs. The data type bytes32 can hold 32 (raw) bytes, but unless the data is of a fixed length such as bytes32 or
bytes16, the variable-length bytes type is recommended instead. bytes is similar to byte[], but it is automatically compressed and packed in external functions, which saves more space. If the variable content is English text that does not need UTF-8 encoding, bytes is recommended over string: string uses UTF-8 encoding by default, so the storage cost of the same string will be much higher. ### compact state variable packing -In addition to using as small data types
as possible to define variables, sometimes the order in which variables are arranged is also very important and may affect program execution and storage efficiency。The root cause is still the EVM: whether in an EVM storage slot (Storage Slot) or on the stack, each element is one word in length (256 bits, 32 bytes)。When allocating storage, all variables (except for non-static types such as maps and dynamic arrays) are written down in order of declaration, starting at position 0。When processing state variables and structure member variables, the EVM packs multiple elements into one storage slot, thereby merging multiple reads or writes into a single operation on storage。It is worth noting that when using elements smaller than 32 bytes, the gas usage of the contract may be higher than when using 32-byte elements。This is because the EVM operates on 32 bytes at a time, so if an element is smaller than 32 bytes, extra operations are needed to reduce it to the required width。This also explains why the most common data types in Solidity, such as int, uint, and bytes32, all occupy just 32 bytes。Therefore, when a contract or structure declares multiple state variables, it is important to arrange the storage state variables and structure member variables sensibly so that they take up less storage space。For example, in the following two contracts, the Test1 contract consumes less storage and computing resources than the Test2 contract。 ``` contract Test1 { @@ -393,11 +393,11 @@ contract Test2 { ### Optimize Query Interface -There are many optimization points of the query interface, for example, you must add the view modifier to the function declaration that is only responsible for the query, otherwise the query function will be packaged as a transaction and sent to the consensus queue, executed by the whole network and recorded in the block.;This will greatly increase the burden on the blockchain and take up valuable on-chain resources。For example, don't add complex query
logic to a smart contract, because any complex query code will make the entire contract longer and more complex.。Readers can use the WeBASE data export component mentioned above to export on-chain data to a database for off-chain query and analysis。 +There are many optimization points for the query interface。For example, you must add the view modifier to a function declaration that is only responsible for queries;otherwise the query function will be packaged as a transaction, sent to the consensus queue, executed by the whole network and recorded in the block。This will greatly increase the burden on the blockchain and take up valuable on-chain resources。Also, don't add complex query logic to a smart contract, because any complex query code will make the entire contract longer and more complex。Readers can use the WeBASE data export component mentioned above to export on-chain data to a database for off-chain query and analysis。 ### Reduced contract binary length -The Solidity code written by the developer is compiled into binary code, and the process of deploying the smart contract is actually storing the binary code on the chain through a transaction and obtaining the address specific to the contract.。Reducing the length of binary code can save the overhead of network transmission and consensus packed data storage.。For example, in a typical deposit business scenario, a new deposit contract is created each time a customer deposits a certificate, so the length of the binary code should be reduced as much as possible。The common idea is to cut unnecessary logic and remove redundant code.。Especially when reusing code, some non-rigid code may be introduced。In the above example, ACL contracts support permissions to control the granularity of contract functions.。 +The Solidity code written by the developer is compiled into binary code, and the process of deploying the smart contract is actually storing the binary code on the chain through a transaction and
obtaining the contract's unique address。Reducing the length of the binary code saves network transmission overhead and the cost of consensus, packaging, and data storage。For example, in a typical deposit business scenario, a new deposit contract is created each time a customer deposits a certificate, so the length of the binary code should be reduced as much as possible。The common idea is to cut unnecessary logic and remove redundant code。In particular, when reusing code, some code that is not strictly necessary may be introduced。In the above example, the ACL contract supports permission control at the granularity of contract functions。 ``` function canCall( @@ -414,7 +414,7 @@ function canCall( } ``` -However, in specific business scenarios, you only need to control the contract visitors, and further simplify the usage logic by deleting the corresponding code.。In this way, the length of the binary code of the corresponding contract will be greatly reduced.。 +However, in a specific business scenario you may only need to control who can call the contract, so the logic can be further simplified by deleting the corresponding code。In this way, the length of the binary code of the corresponding contract will be greatly reduced。 ``` function canCall( @@ -426,7 +426,7 @@ function canCall( } ``` -Another way to reduce binary code is to use a more compact writing method.。It has been measured that the binary length of the judgment statement using the short-circuit principle as above will be longer than if-Shorter Else Syntax。Similarly, if-Else's structure will also be better than if-if-If the structure generates shorter binary code。Finally, in scenarios where the length of the binary code is extreme, you should avoid creating new contracts in the contract as much as possible, which will significantly increase the length of the binary.。For example, a contract has the following constructor: +Another way to reduce binary code is to use a more compact writing method。It has been measured that a judgment statement that adopts
the short-circuit principle as above will have a shorter binary length than one that adopts the if-else syntax。Similarly, using the if-else structure will generate shorter binary code than the if-if-if structure。Finally, in scenarios where binary length really matters, you should avoid creating new contracts inside a contract as much as possible, since doing so significantly increases the length of the binary。For example, a contract has the following constructor: ``` constructor() public { @@ -435,7 +435,7 @@ constructor() public { } ``` -We can circumvent this problem by constructing the A object off-chain and based on address transmission and fixed validation.。 +We can circumvent this problem by constructing the A object off-chain, then passing its address in and validating it。 ``` constructor(address a) public { @@ -444,23 +444,23 @@ constructor(address a) public { } ``` -Of course, this can also complicate the way contracts interact.。However, it provides a shortcut to effectively shorten the length of binary code, which requires trade-offs in specific business scenarios.。 +Of course, this can also complicate the way contracts interact。However, it provides a shortcut to effectively shorten the length of binary code, which requires trade-offs in specific business scenarios。 ## Guaranteed contracts can be upgraded ### classic three-layer structure -Through the above, we do our best to maintain the flexibility of the contract design.;The wheels were reused when turning over boxes and cabinets.;Also conduct all-round, dead-end-free testing of release contracts。In addition, as business needs change, we will also face a problem: how to ensure a smooth and smooth upgrade of the contract.?As a high-level programming language, Solidity supports running some complex control and calculation logic, and also supports storing the state and business data after the smart contract is run.。Different from the application of WEB development and other scenarios-Database
hierarchical architecture, Solidity language does not even abstract a layer of independent data storage structure, data are saved to the contract.。However, this model becomes a bottleneck once the contract needs to be upgraded。 +Through the above, we have done our best to keep the contract design flexible;we searched high and low to reuse existing wheels;and we tested the released contracts thoroughly, leaving no blind spots。In addition, as business needs change, we also face a problem: how to ensure a smooth upgrade of the contract?As a high-level programming language, Solidity supports running complex control and calculation logic, and also supports storing the state and business data produced as the smart contract runs。Unlike the layered application-database architecture common in web development and other scenarios, the Solidity language does not even abstract an independent data storage layer;data is saved directly in the contract。However, this model becomes a bottleneck once the contract needs to be upgraded。 -In Solidity, once a contract is deployed and released, its code cannot be modified and can only be modified by releasing a new contract.。If the data is stored in the old contract, there will be a so-called "orphan data" problem, the new contract will lose the historical business data previously run.。In this case, developers can consider migrating the old contract data to the new contract, but this operation has at least two problems: +In Solidity, once a contract is deployed and released, its code cannot be modified;changes can only be made by releasing a new contract。If the data is stored in the old contract, there is a so-called "orphan data" problem: the new contract loses the historical business data accumulated earlier。In this case, developers can consider migrating the old contract data to the new contract, but this operation has at least two problems: 1. Migrating data will increase the burden on the
blockchain, resulting in waste and consumption of resources, and even introduce security issues; -2. Pull the whole body, will introduce additional migration data logic, increase contract complexity.。 +2. It has knock-on effects: additional data migration logic must be introduced, which increases contract complexity。 -A more reasonable approach is to abstract a separate contract storage layer.。This storage layer only provides the most basic way to read and write contracts, and does not contain any business logic.。In this model, there are three contract roles: +A more reasonable approach is to abstract a separate contract storage layer。This storage layer only provides the most basic interfaces for reading and writing data, and does not contain any business logic。In this model, there are three contract roles: -- Data contract: Save data in a contract and provide an interface for data manipulation。 -- Manage contracts: Set control permissions to ensure that only control contracts have permission to modify data contracts.。 +- Data contract: Saves data in the contract and provides the interfaces for operating on that data。 +- Management contract: Sets access permissions to ensure that only the control contract has permission to modify the data contract。 - Control contracts: Contracts that really need to initiate operations on data。 Specific code examples are as follows: @@ -508,15 +508,15 @@ contract FruitStoreController is BasicAuth { } ``` -Once the control logic of the function needs to be changed, the developer simply modifies the FruitStoreController control contract logic, deploys a new contract, and then uses the management contract Admin to modify the new contract address parameters to easily complete the contract upgrade.。This approach eliminates data migration hazards due to changes in business control logic in contract upgrades。But there is no such thing as a free lunch, and this kind of operation requires a basic trade-off between scalability and complexity.。First, the separation of data and
logic reduces operational performance。Second, further encapsulation increases program complexity。Finally, more complex contracts increase the potential attack surface, and simple contracts are safer than complex contracts.。 +Once the control logic of the function needs to be changed, the developer simply modifies the FruitStoreController control contract logic, deploys a new contract, and then uses the management contract Admin to modify the new contract address parameters to easily complete the contract upgrade。This approach eliminates data migration hazards due to changes in business control logic in contract upgrades。But there is no such thing as a free lunch, and this kind of operation requires a basic trade-off between scalability and complexity。First, the separation of data and logic reduces operational performance。Second, further encapsulation increases program complexity。Finally, more complex contracts increase the potential attack surface, and simple contracts are safer than complex contracts。 ### general data structure So far, there is a question of what to do if the data structure itself in the data contract needs to be upgraded? 
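To make the three-layer pattern above concrete before moving on, here is a minimal, hypothetical sketch. The contract and function names, and the simplified permission checks, are assumptions for illustration only; the article's actual FruitStore contracts build on BasicAuth and have richer interfaces:

```
pragma solidity ^0.4.25;

// Hypothetical sketch of the data/management/control split described above.
// Data contract: holds state and exposes only basic read/write interfaces.
contract FruitStoreStorage {
    address public admin;       // management role, fixed at deployment
    address public controller;  // the only address allowed to write data
    mapping(bytes32 => uint256) public stock;

    constructor() public { admin = msg.sender; }

    // Management: only the admin may swap in a new controller (the upgrade step).
    function setController(address newController) public {
        require(msg.sender == admin, "only admin");
        controller = newController;
    }

    // Data access: only the current controller contract may modify state.
    function setStock(bytes32 fruit, uint256 amount) public {
        require(msg.sender == controller, "only controller");
        stock[fruit] = amount;
    }
}

// Control contract: business logic only; can be redeployed without touching data.
contract FruitStoreControllerV1 {
    FruitStoreStorage store;
    constructor(FruitStoreStorage s) public { store = s; }

    function sell(bytes32 fruit, uint256 n) public {
        store.setStock(fruit, store.stock(fruit) - n);
    }
}
```

Under this sketch, upgrading amounts to deploying a new controller and calling setController with its address; the data in the storage contract never moves, so no migration is needed.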
-For example, in FruitStore, originally only inventory information was kept, but now, as the fruit store business has grown, a total of ten branches have been opened, and each branch, each fruit's inventory and sales information needs to be recorded.。In this case, one solution is to use external association management: create a new ChainStore contract, create a mapping in this contract, and establish the relationship between the branch name and FruitStore.。 +For example, originally FruitStore only kept inventory information, but now, as the fruit store business has grown, a total of ten branches have been opened, and the inventory and sales information of each fruit in each branch needs to be recorded。In this case, one solution is external association management: create a new ChainStore contract, create a mapping in it, and establish the relationship between each branch name and its FruitStore。 -In addition, different stores need to create a FruitStore contract。In order to record new sales information and other data, we also need to create a new contract to manage。If you can preset different types of reserved fields in FruitStore, you can avoid the overhead of creating new sales information contracts and still reuse FruitStore contracts.。But this approach will increase the storage overhead at the beginning.。A better idea is to abstract a more underlying and generic storage structure。The code is as follows: +In addition, each store needs its own FruitStore contract。To record new data such as sales information, we would also need to create yet another contract to manage it。If reserved fields of different types could be preset in FruitStore, the overhead of creating new sales-information contracts could be avoided and the FruitStore contract could still be reused。But this approach increases storage overhead from the beginning。A better idea is to abstract a lower-level, generic storage structure。The code is as follows: ``` contract commonDB is BasicAuth
{ @@ -533,7 +533,7 @@ contract commonDB is BasicAuth { } ``` -Similarly, we can add all data type variables to help commonDB cope with and meet different data type storage needs.。The corresponding control contract may be modified as follows: +Similarly, we can add variables for every data type so that commonDB can meet the storage needs of different data types。The corresponding control contract can be modified as follows: ``` contract FruitStoreControllerV2 is BasicAuth { @@ -545,8 +545,8 @@ contract FruitStoreControllerV2 is BasicAuth { } ``` -Using the above storage design patterns can significantly improve the flexibility of contract data storage and ensure that contracts can be upgraded.。As we all know, Solidity neither supports databases, uses code as a storage entity, nor provides the flexibility to change schemas。However, with this KV design, the storage itself can be made highly scalable。Anyway,**No strategy is perfect, and good architects are good at weighing**。Smart contract designers need to fully understand the pros and cons of various solutions and choose the right design based on the actual situation。 +Using the above storage design pattern can significantly improve the flexibility of contract data storage and ensure that contracts can be upgraded。As we know, Solidity does not support databases;it uses code as the storage entity and offers no flexibility to change schemas。However, with this KV design, the storage itself can be made highly scalable。In any case,**no strategy is perfect, and good architects are good at making trade-offs**。Smart contract designers need to fully understand the pros and cons of various solutions and choose the right design based on the actual situation。 ## SUMMARY -As for this, I hope to arouse the reader's interest in survival and evolution in the Solidity world.。"If there is perfection, there must be lies," the world of software development has no silver bullet。The process of writing this article is the process of gradual improvement
and evolution from the simplest contract.。In the Solidity programming world, survival and evolution are inseparable from three key words: security, reusability, and efficiency.。Life goes on, evolution goes on。It is difficult to exhaust all the skills of survival and evolution in a short essay. I hope these three key words can help you soar in the world of Solidity and keep writing brilliant stories and legends:) +At this point, I hope to have aroused the reader's interest in survival and evolution in the Solidity world。As the saying goes, "if there is perfection, there must be lies": the world of software development has no silver bullet。The process of writing this article mirrored the process of gradual improvement and evolution from the simplest contract。In the Solidity programming world, survival and evolution are inseparable from three key words: security, reusability, and efficiency。Life goes on, and evolution goes on。It is difficult to exhaust all the skills of survival and evolution in a short essay. I hope these three key words can help you soar in the world of Solidity and keep writing brilliant stories and legends:) diff --git a/3.x/en/docs/articles/3_features/35_contract/solidity_operation_principle.md b/3.x/en/docs/articles/3_features/35_contract/solidity_operation_principle.md index 42540ac80..48989b806 100644 --- a/3.x/en/docs/articles/3_features/35_contract/solidity_operation_principle.md +++ b/3.x/en/docs/articles/3_features/35_contract/solidity_operation_principle.md @@ -4,15 +4,15 @@ Author: Chu Yuzhi | FISCO BCOS Core Developer ## Introduction -As a language for smart contracts, Solidity has both differences and similarities with other classic languages.。On the one hand, the properties that serve the blockchain make it different from other languages。For example, the deployment and invocation of contracts are confirmed by the blockchain network.;Execution costs need to be tightly controlled to prevent malicious code from consuming node resources。On the other hand, as a programming
language, the implementation of Solidity does not deviate from the classical language, for example, Solidity contains a stack, heap-like design, the use of stacked virtual machines for bytecode processing.。The previous articles in this series have described how to develop Solidity programs, and in order to give readers a better understanding of why, this article will further introduce the inner workings of Solidity, focusing on the life cycle of Solidity programs and how EVM works.。 +As a language for smart contracts, Solidity has both differences and similarities with other classic languages。On the one hand, the properties that serve the blockchain make it different from other languages。For example, the deployment and invocation of contracts are confirmed by the blockchain network;execution costs need to be tightly controlled to prevent malicious code from consuming node resources。On the other hand, as a programming language, the implementation of Solidity does not deviate from classical languages;for example, Solidity includes stack- and heap-like designs and uses a stack-based virtual machine to process bytecode。The previous articles in this series have described how to develop Solidity programs;to give readers a better understanding of the why, this article further introduces the inner workings of Solidity, focusing on the life cycle of Solidity programs and how the EVM works。 ## Solidity Life Cycle -Like other languages, Solidity's code life cycle is inseparable from the four stages of compilation, deployment, execution, and destruction.。The following figure shows the complete life cycle of the Solidity program: +Like other languages, Solidity's code life cycle is inseparable from the four stages of compilation, deployment, execution, and destruction。The following figure shows the complete life cycle of a Solidity program: ![](../../../../images/articles/solidity_operation_principle/IMG_5474.PNG) -When compiled, the Solidity file generates
bytecode。This is a kind of code similar to jvm bytecode。At deployment, the bytecode and construction parameters are built into a transaction, which is packaged into a block, which is passed through a network consensus process, and finally the contract is built on each block chain node and the contract address is returned to the user.。When the user is ready to call the function on the contract, the call request will also go through the process of transaction, block, consensus, and finally be executed by the EVM virtual machine on each node.。 +When compiled, the Solidity file generates bytecode。This is a kind of code similar to jvm bytecode。At deployment, the bytecode and construction parameters are built into a transaction, which is packaged into a block, which is passed through a network consensus process, and finally the contract is built on each block chain node and the contract address is returned to the user。When the user is ready to call the function on the contract, the call request will also go through the process of transaction, block, consensus, and finally be executed by the EVM virtual machine on each node。 Here is a sample program, we explore its life cycle through remix。 @@ -44,7 +44,7 @@ You can also get the corresponding bytecode (OpCode): PUSH1 0x80 PUSH1 0x40 MSTORE CALLVALUE DUP1 ISZERO PUSH2 0x10 JUMPI PUSH1 0x0 DUP1 REVERT JUMPDEST POP PUSH1 0x40 MLOAD PUSH1 0x20 DUP1 PUSH2 0xED DUP4 CODECOPY DUP2 ADD DUP1 PUSH1 0x40 MSTORE DUP2 ADD SWAP1 DUP1 DUP1 MLOAD SWAP1 PUSH1 0x20 ADD SWAP1 SWAP3 SWAP2 SWAP1 POP POP POP DUP1 PUSH1 0x0 DUP2 SWAP1 SSTORE POP POP PUSH1 0xA4 DUP1 PUSH2 0x49 PUSH1 0x0 CODECOPY PUSH1 0x0 RETURN STOP PUSH1 0x80 PUSH1 0x40 MSTORE PUSH1 0x4 CALLDATASIZE LT PUSH1 0x3F JUMPI PUSH1 0x0 CALLDATALOAD PUSH29 0x100000000000000000000000000000000000000000000000000000000 SWAP1 DIV PUSH4 0xFFFFFFFF AND DUP1 PUSH4 0x60FE47B1 EQ PUSH1 0x44 JUMPI JUMPDEST PUSH1 0x0 DUP1 REVERT JUMPDEST CALLVALUE DUP1 ISZERO PUSH1 0x4F JUMPI PUSH1 0x0 DUP1 
REVERT JUMPDEST POP PUSH1 0x6C PUSH1 0x4 DUP1 CALLDATASIZE SUB DUP2 ADD SWAP1 DUP1 DUP1 CALLDATALOAD SWAP1 PUSH1 0x20 ADD SWAP1 SWAP3 SWAP2 SWAP1 POP POP POP PUSH1 0x6E JUMP JUMPDEST STOP JUMPDEST DUP1 PUSH1 0x0 DUP2 SWAP1 SSTORE POP POP JUMP STOP LOG1 PUSH6 0x627A7A723058 KECCAK256 0x4e 0xd9 MOD DIFFICULTY 0x4c 0xc4 0xc9 0xaa 0xbd XOR EXTCODECOPY MSTORE 0xb2 0xd4 DUP7 0xdf 0xc5 0xde 0xa9 DUP1 SLT PUSH1 0xC3 CALLDATACOPY XOR 0x5d 0xad KECCAK256 0xe1 0x1f DUP2 SHL STOP 0x29 ``` -The following instruction set is the code corresponding to the set function, which will be explained later on.。 +The following instruction set is the code corresponding to the set function;it will be explained later on。 ``` JUMPDEST DUP1 PUSH1 0x0 DUP2 SWAP1 SSTORE POP POP JUMP STOP @@ -52,7 +52,7 @@ JUMPDEST DUP1 PUSH1 0x0 DUP2 SWAP1 SSTORE POP POP JUMP STOP ### Deploy -After the compilation, you can deploy the code on remix and pass the construction parameters to 0x123.: +After compiling, you can deploy the code on Remix, passing 0x123 as the constructor parameter: ![](../../../../images/articles/solidity_operation_principle/IMG_5475.PNG) @@ -60,39 +60,39 @@ After the deployment is successful, you can get a transaction receipt: ![](../../../../images/articles/solidity_operation_principle/IMG_5476.PNG) -Click on input to see the specific transaction input data.
+Click on input to see the specific transaction input data: ![](../../../../images/articles/solidity_operation_principle/IMG_5477.PNG) -In the above data, the yellow part happens to be the contract binary from the previous section.;The purple part, on the other hand, corresponds to the incoming construct parameter 0x123。These all suggest that contract deployments use transactions as a medium。Combined with blockchain transaction knowledge, we can restore the entire deployment process: +In the above data, the yellow part happens to be the contract binary from the previous section;the purple part corresponds to the incoming constructor parameter 0x123。These all show that contract deployment uses a transaction as its medium。Combined with blockchain transaction knowledge, we can reconstruct the entire deployment process: - Client will deploy the request(contract binary, construction parameters)as input data for the transaction to construct a transaction - The transaction is rlp encoded and then signed by the sender with the private key - Signed transactions are pushed to nodes on the blockchain - After the blockchain node verifies the transaction, it is deposited into the transaction pool -- When it's the node's turn to block, package the transaction to build the block and broadcast it to other nodes.
-- Other nodes verify blocks and achieve consensus。Different blockchains may use different consensus algorithms, and PBFT is used in FISCO BCOS to achieve consensus, which requires a three-stage submission (pre-prepare,prepare, commit) - The node executes the transaction, as a result, the smart contract Demo is created, the storage space of the status field _ state is allocated, and is initialized to 0x123 +- When it is the node's turn to produce a block, it packages the transaction into a block and broadcasts the block to other nodes +- Other nodes validate the block and reach consensus。Different blockchains may use different consensus algorithms;FISCO BCOS uses PBFT to reach consensus, which requires a three-stage commit (pre-prepare, prepare, commit) +- The node executes the transaction, and the result is that the smart contract Demo is created, the storage space of the state field _state is allocated, and it is initialized to 0x123 ### Execute -Depending on whether or not we have the modifier view, we can divide functions into two categories: calls and transactions.。Since it is determined at compile time that the call will not cause a change in the contract state, for such function calls, the node can directly provide a query without confirming with other blockchain nodes.。And because the transaction may cause a state change, it will be confirmed between networks.。The following will call set with the user.(0x10)For assumptions, look at the specific running process。First, the function set is not configured with the view / pure modifier, which means it may change the contract state。So this call information will be put into a transaction, through the transaction code, transaction signature, transaction push, transaction pool cache, packaging out of the block, network consensus and other processes, and finally handed over to the EVM of each node for execution.。In EVM, parameter 0xa is stored by SSTORE bytecode into contract field _ state。The bytecode first gets the
address of the status field _ state and the new value 0xa from the stack, and then completes the actual storage。The following figure shows the running process: +Depending on whether the view modifier is present, we can divide functions into two categories: calls and transactions。Since it is determined at compile time that a call will not change the contract state, the node can answer such function calls directly as queries, without confirming with other blockchain nodes。A transaction, however, may cause a state change, so it must be confirmed across the network。The following takes a user (0x10) calling set as an example and looks at the specific running process。First, the function set is not marked with the view / pure modifier, which means it may change the contract state。So this call is put into a transaction and goes through transaction encoding, transaction signing, transaction pushing, transaction pool caching, block packaging, network consensus and other steps, before finally being handed over to the EVM of each node for execution。In the EVM, the parameter 0xa is stored into the contract field _state by the SSTORE bytecode。The bytecode first gets the address of the state field _state and the new value 0xa from the stack, and then completes the actual storage。The following figure shows the running process: ![](../../../../images/articles/solidity_operation_principle/IMG_5478.PNG) -Here is only a rough introduction to set(0xa)The next section will further introduce the working mechanism of EVM and the data storage mechanism.。 +Here is only a rough introduction to the execution of set(0xa)。The next section will further introduce the working mechanism of the EVM and its data storage mechanism。 ### Destruction -Since the contract cannot be tampered with once it is on the chain, the life of the contract can last until the underlying blockchain is completely shut down.。To manually destroy a contract, use the bytecode selfdestruct。Destruction contracts also require
transaction confirmation and will not be repeated here.。 +Since a contract cannot be tampered with once it is on the chain, its life can last until the underlying blockchain is completely shut down。To manually destroy a contract, use the selfdestruct bytecode。Destroying a contract also requires transaction confirmation and will not be repeated here。 ## Principle of EVM -In the previous article, we introduced how the Solidity program works.。After the transaction is confirmed, the bytecode is finally executed by the EVM.。For EVM, the above is just a passing note, and this section will detail its working mechanism.。 +In the previous article, we introduced how a Solidity program works。After the transaction is confirmed, the bytecode is finally executed by the EVM。From the EVM's perspective, the above was only a brief overview;this section details its working mechanism。 ### Operation principle -An EVM is a stacked virtual machine whose core feature is that all operands are stored on the stack。Let's look at how it works through a simple piece of Solidity statement code. +The EVM is a stack-based virtual machine whose core feature is that all operands are stored on the stack。Let's look at how it works through a simple piece of Solidity code: ``` uint a = 1; @@ -100,7 +100,7 @@ uint b = 2; uint c = a + b; ``` -After this code is compiled, the resulting bytecode is as follows.
+After this code is compiled, the resulting bytecode is as follows: ``` PUSH1 0x1 @@ -108,12 +108,12 @@ PUSH1 0x2 ADD ``` -For the reader to better understand the concept, this is reduced to the above three statements, but the actual bytecode may be more complex and will be doped with statements such as SWAP and DUP.。We can see that in the above code, there are two instructions: PUSH1 and ADD, which have the following meanings: +To help the reader understand the concept, this has been reduced to the three statements above, but the actual bytecode may be more complex and will be interspersed with instructions such as SWAP and DUP。We can see that the above code uses two instructions, PUSH1 and ADD, which have the following meanings: -- PUSH1: Push data to the top of the stack。 -- ADD: POP two top stack elements, add them and press them back to the top of the stack。 +- PUSH1: Pushes data onto the top of the stack。 +- ADD: Pops the two top stack elements, adds them, and pushes the result back onto the top of the stack。 -The execution process is explained here in a semi-animated way.。In the following figure, sp represents the top of the stack pointer and pc represents the program counter.。After executing push1 0x1, both pc and sp move down: +The execution process is explained here in a semi-animated way。In the following figures, sp represents the stack-top pointer and pc represents the program counter。After executing push1 0x1, both pc and sp move down: ![](../../../../images/articles/solidity_operation_principle/IMG_5479.PNG) @@ -121,7 +121,7 @@ Similarly, after executing push1 0x2, the pc and sp states are as follows: ![](../../../../images/articles/solidity_operation_principle/IMG_5480.PNG) -Finally, when add is executed, both operands at the top of the stack are popped up as input to the add instruction, and the sum of the two is pushed onto the stack.
+Finally, when ADD is executed, the two operands at the top of the stack are popped as input to the ADD instruction, and their sum is pushed onto the stack. ![](../../../../images/articles/solidity_operation_principle/IMG_5481.PNG) @@ -147,23 +147,23 @@ contract Demo{ #### Stack -The stack is used to store the operands of a bytecode instruction。In Solidity, local variables of types such as integers and fixed-length byte arrays are pushed into and out of the stack as instructions are run.。For example, in the following simple statement, the variable value 1 is read and pushed to the top of the stack by the PUSH operation: +The stack is used to store the operands of bytecode instructions. In Solidity, local variables of types such as integers and fixed-length byte arrays are pushed onto and popped off the stack as instructions run. For example, in the following simple statement, the value 1 is read and pushed onto the top of the stack by a PUSH operation: ``` uint i = 1; ``` -For such variables, you cannot forcibly change how they are stored, and if you place the memory modifier before them, the compiler will report an error.。 +For such variables you cannot forcibly change how they are stored; if you place the memory modifier before them, the compiler reports an error. #### Memory -Memory is similar to the heap in java, which is used to store"Object"。In Solidity programming, if a local variable is of a variable-length byte array, string, structure, etc., it is usually modified by the memory modifier to indicate that it is stored in memory.。 +Memory is similar to the heap in Java: it is used to store "objects". In Solidity programming, a local variable that is a variable-length byte array, string, struct, etc. is usually marked with the memory modifier to indicate that it is stored in memory. -In this section, we will use strings as an example to analyze how memory stores these objects.。 +In this section, we will use strings as an
example to analyze how memory stores these objects. ##### 1. Object storage structure -The following will use the assembly statement to analyze the storage method of complex objects.。The assembly statement is used to invoke bytecode operations。The mload instruction will be used to call these bytecodes。mload(p)indicates that 32 bytes of data are read from address p。Developers can pass object variables directly into mload as pointers.。In the following code, after the mload call, the data variable holds the first 32 bytes of the string str in memory.。 +Below, assembly statements are used to analyze how complex objects are stored. An assembly block invokes bytecode operations directly; here we will use the mload instruction. mload(p) reads 32 bytes of data starting at address p, and developers can pass an object variable into mload directly as a pointer. In the following code, after the mload call, the data variable holds the first 32 bytes of the string str in memory. ``` string memory str = "aaa"; @@ -173,7 +173,7 @@ assembly{ } ``` -Mastering mload, you can use this to analyze how string variables are stored.。The following code reveals how string data is stored: +Having mastered mload, we can use it to analyze how string variables are stored. The following code reveals the layout of string data: ``` function strStorage() public view returns(bytes32, bytes32){ @@ -188,18 +188,18 @@ function strStorage() public view returns(bytes32, bytes32){ } ``` -The data variable represents 0 to 31 bytes of str, and data2 represents 32 to 63 bytes of str.。The result of running the strStorage function is as follows: +The data variable holds bytes 0 to 31 of str, and data2 holds bytes 32 to 63. The result of running the strStorage function is as follows: ``` 0: bytes32: 0x0000000000000000000000000000000000000000000000000000000000000006 1: bytes32: 0xe4bda0e5a5bd0000000000000000000000000000000000000000000000000000 ``` -As you can
see, the first data word gets a value of 6, which is exactly the string"Hello"Via UTF-8 Number of bytes after encoding。The second data word is saved as"Hello"UTF itself-8 Code。After mastering the storage format of strings, we can use assembly to modify, copy, and splice strings.。Readers can search Solidity's string library to learn how to implement string concat。 +As you can see, the first data word holds the value 6, which is exactly the number of bytes of the string after UTF-8 encoding, while the second data word holds the UTF-8 encoding of the string itself. Having mastered the storage format of strings, we can use assembly to modify, copy, and splice them; readers can study Solidity string libraries to see how string concat is implemented. ##### 2. Memory allocation method -Since memory is used to store objects, it necessarily involves how memory is allocated。The way memory is allocated is very simple, that is, sequential allocation.。Below we will assign two objects and look at their addresses: +Since memory is used to store objects, the question of how memory is allocated naturally arises. The allocation scheme is very simple: sequential allocation. Below we allocate two objects and look at their addresses: ``` function memAlloc() public view returns(bytes32, bytes32){ @@ -222,19 +222,19 @@ After running this function, the return result will contain two data words: 1: bytes32: 0x00000000000000000000000000000000000000000000000000000000000000c0 ``` -This means that the starting address of the first string str1 is 0x80 and the starting address of the second string str2 is 0xc0, between 64 bytes, which is exactly the space occupied by str1 itself.。The memory layout at this point is as follows, where one grid represents 32 bytes (a data word, and EVM uses 32 bytes as a data word instead of 4 bytes).
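The length+body layout and the sequential allocation just described can be sketched as a toy memory model in Python. The `Memory` class, its method names, and the starting reserve below 0x80 are illustrative assumptions for this sketch, not part of the article; the addresses 0x40 (free pointer), 0x80, and 0xc0 follow the example in the text, and strings longer than one word are not handled.

```python
# Toy model of EVM memory as described above: a free pointer at 0x40,
# sequential allocation in 32-byte words, strings stored as length word + body.
# Assumption for this sketch: string bodies fit in a single 32-byte word.
WORD = 32

class Memory:
    def __init__(self):
        self.mem = bytearray(0x80)        # words below 0x80 are reserved
        self.mstore(0x40, 0x80)           # free pointer: next object goes at 0x80

    def mstore(self, addr, value):
        if len(self.mem) < addr + WORD:
            self.mem.extend(b"\x00" * (addr + WORD - len(self.mem)))
        self.mem[addr:addr + WORD] = value.to_bytes(WORD, "big")

    def mload(self, addr):
        return int.from_bytes(self.mem[addr:addr + WORD], "big")

    def alloc_string(self, data: bytes):
        ptr = self.mload(0x40)                            # next free address
        self.mstore(ptr, len(data))                       # length word
        body = int.from_bytes(data.ljust(WORD, b"\x00"), "big")
        self.mstore(ptr + WORD, body)                     # body, zero-padded
        self.mstore(0x40, ptr + 2 * WORD)                 # bump free pointer
        return ptr

m = Memory()
str1 = m.alloc_string(b"aaa")
str2 = m.alloc_string(b"bbb")
print(hex(str1), hex(str2), hex(m.mload(0x40)))  # 0x80 0xc0 0x100
```

The two strings land 64 bytes apart (length word plus body), and after both allocations the free pointer at 0x40 holds 0x100, matching the figures in the text.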
+This means the first string str1 starts at address 0x80 and the second string str2 starts at 0xc0, 64 bytes apart, which is exactly the space occupied by str1 itself. The memory layout at this point is as follows, where one cell represents 32 bytes (one data word; the EVM uses 32-byte data words rather than 4-byte ones). ![](../../../../images/articles/solidity_operation_principle/IMG_5482.PNG) -- 0x40 ~ 0x60: free pointer to save the available address, in this case 0x100, indicating that the new object will be allocated from 0x100。You can use mload.(0x40)Get the allocation address of the new object。 -- 0x80 ~ 0xc0: Start address of object allocation。Here the string aaa is assigned -- 0xc0 ~ 0x100: The string bbb is allocated -- 0x100 ~...: Because it is sequential allocation, new objects will be allocated here.。 +- 0x40 ~ 0x60: the free pointer, which holds the next available address, 0x100 in this case, indicating that the next object will be allocated starting at 0x100. You can call mload(0x40) to obtain the allocation address for a new object. +- 0x80 ~ 0xc0: start of object allocation; the string aaa is allocated here. +- 0xc0 ~ 0x100: the string bbb is allocated here. +- 0x100 ~ ...: because allocation is sequential, new objects will be allocated from here. #### State Storage -As the name suggests, the state store is used to store the contract's state field。From the model, storage consists of multiple 32-byte storage slots。In the previous article, we introduced the set function of the Demo contract, where 0x0 represents the storage slot of the state variable _ state.。All fixed-length variables are placed sequentially into this set of slots。For mapping and arrays, the storage is more complicated. It occupies one slot and contains data that occupies other slots according to the corresponding rules.
For example, in mapping, the storage slot of a data item is calculated by the key value k and the mapping's own slot p by keccak.。In terms of implementation, different chains may use different implementations, and the more classic is the MPT tree used by Ethereum.。Due to MPT tree performance, scalability and other issues, FISCO BCOS abandoned this structure, and adopted distributed storage, through rocksdb or mysql to store state data, so that the storage performance, scalability has been improved.。 +As the name suggests, the state store is used to store the contract's state field。From the model, storage consists of multiple 32-byte storage slots。In the previous article, we introduced the set function of the Demo contract, where 0x0 represents the storage slot of the state variable _ state。All fixed-length variables are placed sequentially into this set of slots。For mapping and arrays, the storage is more complicated. It occupies one slot and contains data that occupies other slots according to the corresponding rules. 
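The slot rules just described (fixed-length variables in sequential slots, mapping items in hashed slots derived from the key and the mapping's own slot p) can be sketched as a toy model. Caveats: `hashlib.sha3_256` (NIST SHA3) is used here only as a stand-in for the Keccak-256 that Ethereum actually uses (the two differ in padding, so the slot numbers are illustrative only), and the variable names and slot assignments are assumptions for this sketch, not taken from the article.

```python
# Toy model of contract state storage: a dict of 32-byte slots.
# Fixed-length state variables occupy slots 0, 1, ... in declaration order;
# a mapping item's slot is derived by hashing the key with the mapping's slot p.
import hashlib

def mapping_item_slot(key: int, p: int) -> int:
    # slot = hash(pad32(key) || pad32(p)) -- mirrors Solidity's rule in spirit.
    # NOTE: sha3_256 (NIST SHA3) stands in for Ethereum's Keccak-256 here.
    data = key.to_bytes(32, "big") + p.to_bytes(32, "big")
    return int.from_bytes(hashlib.sha3_256(data).digest(), "big")

storage = {}                  # slot number -> stored word ("state storage")

storage[0] = 42               # e.g. uint256 _state  -> slot 0
storage[1] = 7                # a second fixed-length variable -> slot 1

p = 2                         # a mapping declared next occupies slot 2 itself,
storage[mapping_item_slot(0xABC, p)] = 100   # while its items live at
storage[mapping_item_slot(0xDEF, p)] = 200   # pseudo-random hashed slots

print(sorted(storage)[:2])    # [0, 1] -- the sequential fixed-variable slots
```

Because the hashed slots are effectively random 256-bit numbers, mapping items never collide with the small sequential slots in practice, which is what lets both kinds of data share one flat slot space.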
For example, in mapping, the storage slot of a data item is calculated by the key value k and the mapping's own slot p by keccak。In terms of implementation, different chains may use different implementations, and the more classic is the MPT tree used by Ethereum。Due to MPT tree performance, scalability and other issues, FISCO BCOS abandoned this structure, and adopted distributed storage, through rocksdb or mysql to store state data, so that the storage performance, scalability has been improved。 ## Conclusion -This article describes the operating principles of Solidity, which are summarized as follows。First, the Solidity source code is compiled into bytecode, and when deployed, the bytecode is confirmed across the network using the transaction as a carrier and a contract is formed on the node.;The contract function call, if it is a transaction type, is confirmed by the network and eventually executed by the EVM.。The EVM is a stacked virtual machine that reads the bytecode of the contract and executes the。During execution, it interacts with stack, memory, and contract storage。where the stack is used to store ordinary local variables, which are the operands of the bytecode;Memory is used to store objects, using length+body for storage, sequential allocation for memory allocation;State storage is used to store state variables。Understanding how Solidity works and the principles behind it is the only way to become a master of Solidity programming。 \ No newline at end of file +This article describes the operating principles of Solidity, which are summarized as follows。First, the Solidity source code is compiled into bytecode, and when deployed, the bytecode is confirmed across the network using the transaction as a carrier and a contract is formed on the node;The contract function call, if it is a transaction type, is confirmed by the network and eventually executed by the EVM。The EVM is a stacked virtual machine that reads the bytecode of the contract and executes 
it. During execution, it interacts with the stack, memory, and contract storage: the stack stores ordinary local variables, which are the operands of the bytecode; memory stores objects, using a length+body layout and sequential allocation; state storage stores the state variables. Understanding how Solidity works and the principles behind it is the only way to become a master of Solidity programming. \ No newline at end of file diff --git a/3.x/en/docs/articles/3_features/35_contract/solidity_presensation.md b/3.x/en/docs/articles/3_features/35_contract/solidity_presensation.md index f6159e9e9..54f155a31 100644 --- a/3.x/en/docs/articles/3_features/35_contract/solidity_presensation.md +++ b/3.x/en/docs/articles/3_features/35_contract/solidity_presensation.md @@ -2,21 +2,21 @@ Author : SHI Xiang | FISCO BCOS Core Developer -In the Bitcoin-only era, blockchain enabled simple value generation and transfer, but no more business models emerged。Ethereum has brought a dimensional improvement to the blockchain, blockchain-based applications are becoming more and more abundant, and various business models of blockchain are emerging at an accelerated pace.。The reason for this is that Ethereum has brought a Turing-complete programming language to the blockchain.。 +In the Bitcoin-only era, blockchain enabled simple generation and transfer of value, but no further business models emerged. Ethereum brought a dimensional upgrade to blockchain: blockchain-based applications are becoming more and more abundant, and blockchain business models are emerging at an accelerating pace. The reason is that Ethereum brought a Turing-complete programming language to the blockchain. -The main function of the blockchain is to achieve consensus among multiple parties.。In Bitcoin, the operations that require consensus are fixed and non-Turing complete.。The consensus is only a change in the value owner.。However, on Ethereum,
developers can write their own logic that requires consensus, and Ethereum has customized the consensus logic through the smart contract language Solidity.。 +The main function of a blockchain is to achieve consensus among multiple parties. In Bitcoin, the operations requiring consensus are fixed and non-Turing-complete; the consensus concerns only changes of value ownership. On Ethereum, however, developers can write their own logic requiring consensus: Ethereum made the consensus logic customizable through the smart contract language Solidity. ## Introduction to Solidity Solidity and Java have some similarities。Among the many programming languages, Java is the more mature。Java code is executed in the Java Virtual Machine (JVM)。The JVM masks operating system differences, making Java a cross-platform language。A set of Java code can be used on Windows, Linux, Mac, without worrying about operating system differences。 -Solidity is similar to Java。After the code is written, you need to convert the code into binary through the compiler, which is Javac in Java and solc in Solidity.。The generated binary code will be executed in the virtual machine.。Java code is executed in the Java Virtual Machine (JVM), which in Solidity is a virtual machine EVM on the blockchain。 +Solidity is similar to Java. After the code is written, it must be converted into binary by a compiler: javac for Java, solc for Solidity. The generated binary is then executed in a virtual machine: Java code runs in the JVM, while for Solidity the virtual machine is the EVM on the blockchain. ![](../../../../images/articles/solidity_presensation/IMG_5440.PNG) -Solidity differs from Java in that Solidity is a language that serves the blockchain, and the code is executed on the blockchain.。EVM is an executor on the blockchain。Each blockchain node has an EVM。After Solidity is executed in the EVM, the EVM makes changes to the blockchain's data。These data
changes are referred to the consensus algorithm.。At the same time, Solidity's operation is limited to the inside of EVM, and cannot access external uncertain systems or data, such as system clock, network file system, etc.。 +Solidity differs from Java in that Solidity serves the blockchain, and its code is executed on the blockchain. The EVM is the executor on the blockchain, and every blockchain node has one. After Solidity code runs in the EVM, the EVM makes changes to the blockchain's data, and these data changes are submitted to the consensus algorithm. At the same time, Solidity execution is confined to the EVM: it cannot access external, non-deterministic systems or data such as the system clock or the network file system. -Solidity is designed to provide a unified set of logic for the blockchain, so that the same code runs on each node of the blockchain, and with the help of consensus algorithms, the blockchain data can be changed in a unified way to achieve globally consistent results.。 +Solidity is designed to provide a unified set of logic for the blockchain, so that the same code runs on every node and, with the help of the consensus algorithm, blockchain data is changed uniformly to achieve globally consistent results. ## Solidity implementation details @@ -24,35 +24,35 @@ Take the example of the Demo contract here, which has a global variable m and a ![](../../../../images/articles/solidity_presensation/IMG_5441.PNG) -This contract can be compiled into binary by the
contract compiler solc。Each word of binary (8 bit) represents an EVM opcode (OPCODE)。The binary compiled by the Demo contract and its corresponding OPCODE are as follows, which implements the functions of the complete Demo contract, including the loading of the contract, the invocation of the contract interface and the logic of exception handling.。Among them, the red part is add()method implementation。 +This contract can be compiled into binary by the contract compiler solc. Each byte of the binary (8 bits) represents an EVM opcode (OPCODE). The binary compiled from the Demo contract and its corresponding OPCODEs are shown below; it implements the complete Demo contract, including contract loading, contract interface dispatch, and exception-handling logic. The part in red is the implementation of the add() method. ![](../../../../images/articles/solidity_presensation/IMG_5442.PNG) -Add()function of the OPCODE**Red part**Excerpt, you can see that its specific implementation idea is the same as assembly code, is a stack-based operation.。The SLOAD reads the data at the specified position on the blockchain into the top of the stack, ADD adds the two data at the top of the stack, and SSTORE writes the result of the addition to the top of the stack into the data of the next block of the blockchain to prepare for the consensus of the next block.。 +The OPCODEs of the add() function (the **red part**, excerpted) show that its implementation follows the same idea as assembly code: it is stack-based. SLOAD reads the data at the specified position on the blockchain onto the top of the stack, ADD adds the two items at the top of the stack, and SSTORE writes the sum at the top of the stack into the data of the next block, preparing for the next round of consensus. ![](../../../../images/articles/solidity_presensation/IMG_5443.PNG) -After the contract binary is deployed on the blockchain, the method in the contract is called by sending a transaction.。The node loads the contract code into the EVM based on the transaction and executes the corresponding function add on the contract based on the transfer of the transaction()。 +After the contract binary is deployed on the blockchain, a method in the contract is called by sending a transaction. The node loads the contract code into the EVM based on the transaction and executes the corresponding
function add() on the contract according to the content of the transaction. -The EVM executes the contract code, reads the data of the current block from the blockchain, performs the addition operation, and writes the result to the state data corresponding to the next block (the block waiting for consensus).。 +The EVM executes the contract code, reads the current block's data from the blockchain, performs the addition, and writes the result into the state data of the next block (the block awaiting consensus). -After that, the consensus algorithm drops the block consensus to be executed, the block height increases, and the data on the blockchain is updated.。 +After that, the consensus algorithm commits the pending block, the block height increases, and the data on the blockchain is updated. ![](../../../../images/articles/solidity_presensation/IMG_5444.PNG) -As can be seen from the above steps, the implementation of Solidity has many similarities to existing practices today.。Compile, using traditional routines that convert code into binary executable by a virtual machine;execution, also in the same way as the traditional way, executing binary code with the stack as a buffer。 +As the steps above show, Solidity's implementation has much in common with existing practice: compilation follows the traditional route of turning code into binary executable by a virtual machine, and execution likewise runs the binary with the stack as a working buffer. ## Solidity Limitations and Improvements -Solidity is the first smart contract language to be applied on a large scale, and there are some areas for improvement.。 +Solidity is the first smart contract language to be applied at scale, and it has some room for improvement. -**Solidity is not flexible enough。**The Solidity language is limited by its own stack depth, and the total number of function parameters and local
parameters cannot exceed 16.。To implement some more complex functions, it is inevitable that some chicken ribs。Solidity is a strongly typed language, but its type conversion is more troublesome。When converting an integer to a string, it needs to be converted to binary and then spliced.。On string manipulation, some convenient functions are missing。 +**Solidity is not flexible enough.** The Solidity language is limited by its stack depth: the total number of function parameters and local variables cannot exceed 16, so implementing more complex functions inevitably becomes awkward. Solidity is strongly typed, but its type conversions are cumbersome; converting an integer to a string requires converting it to binary and splicing, and convenient string-manipulation functions are missing. **Poor performance of Solidity.** In execution, OPCODEs are run by a program-simulated interpreter rather than directly on the CPU. In storage, Solidity's underlying storage unit is 32 bytes (256 bits), which places heavy demands on disk reads and writes and wastes a great deal of storage. **In response to these two points, FISCO BCOS provides a way to write contracts in C++: the precompiled contract. Developers can write smart contract logic in C++ and build it into the node.** -The precompiled contract is called in the same way as the Solidity contract, and can be called directly from the contract address.。FISCO BCOS provides parameter parsing to resolve the parameters of the call to C++Recognizable format。 +A precompiled contract is called in the same way as a Solidity contract and can be invoked directly through its contract address. FISCO BCOS provides parameter parsing to convert call parameters into a format recognizable by C++. -Precompiled contracts break through the limitations of the Solidity language, with the help of the powerful C++Language, can be flexible to achieve a
variety of logic, flexibility is greatly improved。Meanwhile, c++The performance advantages of the are also well utilized, and the logic written through precompiled contracts is improved compared to the Solidity language.。 +Precompiled contracts break through the limitations of the Solidity language: with the powerful C++ language, all kinds of logic can be implemented flexibly, greatly improving flexibility. Meanwhile, C++'s performance advantages are put to good use, and logic written as a precompiled contract performs better than its Solidity counterpart. diff --git a/3.x/en/docs/articles/3_features/36_cryptographic/ecdsa_analysis.md b/3.x/en/docs/articles/3_features/36_cryptographic/ecdsa_analysis.md index 525583390..5f9b229e5 100644 --- a/3.x/en/docs/articles/3_features/36_cryptographic/ecdsa_analysis.md +++ b/3.x/en/docs/articles/3_features/36_cryptographic/ecdsa_analysis.md @@ -2,7 +2,7 @@ Author : LI Hui-zhong | Senior Architect, FISCO BCOS -The FISCO BCOS transaction signature algorithm is designed based on the ECDSA principle, which is also the transaction signature algorithm used by Bitcoin and Ethereum.。This paper introduces the knowledge of ECDSA and Elliptic Curve Encryption (ECC), the Recover mechanism and implementation of ECDSA, and the underlying principles of FISCO BCOS transaction signing and verification.。Content hard (shu) core (xue), welcome developers interested in cryptography principles, blockchain underlying principles to share。 +The FISCO BCOS transaction signature algorithm is designed on ECDSA principles; ECDSA is also the transaction signature algorithm used by Bitcoin and Ethereum. This article covers background on ECDSA and elliptic curve cryptography (ECC), the Recover mechanism of ECDSA and its implementation, and the underlying principles of FISCO BCOS transaction signing and verification. The content is hardcore (and math-heavy); we welcome developers interested in cryptography principles and blockchain
underlying principles to share. ## STORY BEGINS @@ -10,17 +10,17 @@ The story starts with a magic number in Ethereum。 ![](../../../../images/articles/ecdsa_analysis/IMG_5504.JPG) -In the Ethereum Yellow Book, the description of transaction signatures talks about two special numbers "27, 28," which actually evolve from "0, 1" by adding a 27 to get "27, 28," so it is essentially a special number 27.。What does this particular number 27 mean??A detective journey begins... +In the Ethereum Yellow Paper, the description of transaction signatures mentions two special numbers, "27, 28", which actually evolve from "0, 1" by adding 27; so essentially there is one special number, 27. What does this particular number 27 mean? A detective journey begins... ## **It's like a bug** -The search found that there had been many previous discussions about the issue, including a Stack Exchange post stating that it was a design bug.。There is also a related issue on the Ethereum source code github, which is labeled "type:The bug label.。 +The search turned up many earlier discussions of the issue, including a Stack Exchange post stating that it was a design bug. There is also a related issue in the Ethereum source repository on GitHub, carrying a "type: bug" label. ![](../../../../images/articles/ecdsa_analysis/IMG_5505.PNG) ![](../../../../images/articles/ecdsa_analysis/IMG_5506.JPG) -There is a link in the Stack Exchange post that gives the code to fix the bug, see screenshot below (red box)。As can be seen in the comments and code, the fromRpcSig function has a special treatment for the magic number 27.。In the signature from RPC, the value of v is less than 27 (possibly 0-3), then directly add 27 as the new v value, the fromRpcSig function is compatible with the ECDSA original v value (that is, recoveryID) and the Ethereum v value in this way。 +There is a link in the Stack Exchange post to the code that fixes the bug; see the screenshot below (red box). As can
be seen from the comments and code, the fromRpcSig function treats the magic number 27 specially: if the v value in a signature from RPC is less than 27 (it may be 0-3), 27 is added to it directly to form the new v value. In this way, fromRpcSig stays compatible with both the original ECDSA v value (i.e. the recoveryID) and the Ethereum v value. ![](../../../../images/articles/ecdsa_analysis/IMG_5507.JPG) @@ -32,11 +32,11 @@ So, more questions, what is the magic number 35?What is ChainID?? ## It's not like a bug -With these questions in mind, once again reviewing the relevant design materials, we see that the design of ChainID is described in Ethereum EIP155。In order to prevent the transaction of one chain from being submitted to another chain and causing replay attack, the design of ChainID is introduced, and the fork implementation is carried out at the position of block height 2,675,000.。 +With these questions in mind, we went back over the design materials and found the design of ChainID described in Ethereum's EIP-155. To prevent a transaction from one chain being submitted to another chain as a replay attack, the ChainID design was introduced, with the fork implemented at block height 2,675,000. ![](../../../../images/articles/ecdsa_analysis/IMG_5509.JPG) -Understand the role of ChainID, another question arises - in Ethereum, there is NetworkID to distinguish between different networks, why do you need ChainID?This is explained from the scope of NetworkID and ChainID。NetworkID is mainly used to isolate the chain at the network level. Nodes need to exchange NetworkID when they are connected to each other.
Only when they have the same NetworkID can they complete the handshake connection.。ChainID is the transaction layer that prevents transactions across different networks from being cross-duplicated。The main network NetworkID of Ethereum (ETH) and Classic Ethereum (ETC) is 1, and the ChainID mechanism is required to prevent cross-replay of transactions between ETH and ETC networks. The ChainID of the ETH main network is 1, and the ChainID of the ETC main network is 61。At this point, I still don't understand why it's 27 and why it's 35.?Our Issue at EIP github#Seeing the exchange record of Jan and Buterin in 155, it seems that 27 is a product from Bitcoin。 +Having understood the role of ChainID, another question arises: Ethereum already has a NetworkID to distinguish different networks, so why is ChainID needed? The answer lies in the scopes of NetworkID and ChainID. NetworkID mainly isolates chains at the network level: nodes exchange NetworkIDs when connecting, and only nodes with the same NetworkID can complete the handshake. ChainID works at the transaction layer to prevent transactions from being cross-replayed between different networks. The mainnet NetworkID of both Ethereum (ETH) and Ethereum Classic (ETC) is 1, so the ChainID mechanism is needed to prevent cross-replay of transactions between the ETH and ETC networks.
The ChainID of the ETH mainnet is 1, and the ChainID of the ETC mainnet is 61. At this point, it was still unclear why 27, and why 35. In the GitHub issue thread for EIP-155, the exchange between Jan and Buterin suggests that 27 is a carry-over from Bitcoin. ![](../../../../images/articles/ecdsa_analysis/IMG_5510.PNG) @@ -50,25 +50,25 @@ Following the trail to Electrum's GitHub, we find the following code in the ![](../../../../images/articles/ecdsa_analysis/IMG_5514.PNG) -As can be seen from the code, when signing, the electric is originally only 0-The recid (recoveryID) between 3, plus 27, and a compression mark, plus 4 if there is compression, the value range of recid is 27-34。So far, 27 and 35 probably come from this, Ethereum inherited the design of Bitcoin, in the Bitcoin source code Bitcoin / src / key.cpp CKey.::The implementation is also determined in the SignCompact function, but why Bitcoin is designed this way is still unknown。 +As the code shows, when signing, Electrum takes the recid (recoveryID), originally between 0 and 3, adds 27, plus a compression flag: 4 more is added if the public key is compressed, so recid ranges from 27 to 34. 27 and 35 most likely come from here: Ethereum inherited Bitcoin's design, and the same handling appears in CKey::SignCompact in the Bitcoin source (bitcoin/src/key.cpp), but why Bitcoin was designed this way remains unknown. -## **ECDSA is a bug.** +## **ECDSA is a bug** -At this point in the story, we have a general understanding of the past lives of the magic number 27 in the Ethereum code, but this is only the beginning of the story, which leads us to think further about the question: what is recoveryID??In order to explain this problem, we need to start with the ECDSA algorithm and understand the principles behind it mathematically.。ECDSA is the transaction signature algorithm used by FISCO BCOS, from which we will find that the ECDSA
algorithm has a Recover mechanism, which is the real "bug" level function.。 +At this point, we have a general picture of the past lives of the magic number 27 in the Ethereum code, but this is only the beginning of the story; it leads to a further question: what exactly is recoveryID? To answer it, we need to start from the ECDSA algorithm and understand the mathematical principles behind it. ECDSA is the transaction signature algorithm used by FISCO BCOS, and we will find that its Recover mechanism is the truly "bug-level" feature. -ECDSA (Elliptic Curve Digital Signature Algorithm) is a digital signature algorithm based on elliptic curves.。Digital signature algorithm is the use of public and private key system similar to the ordinary signature written on paper, used to identify digital information methods, common digital signature algorithms include DSA, RSA and ECDSA.。Elliptic curve cryptography (ECC) is a public key encryption algorithm based on elliptic curve mathematics, based on the elliptic curve discrete logarithm difficult problem, commonly used protocols such as ECDH, ECDSA and ECIES.。The parameters of elliptic curves can be configured in a variety of ways, and there are many different curves, such as secp256k1, secp256r1, Curve25519, etc.
There are some differences in the security of different curves, which are described in SafeCurves.。
+ECDSA (Elliptic Curve Digital Signature Algorithm) is a digital signature algorithm based on elliptic curves. A digital signature algorithm uses a public/private key pair, much like a handwritten signature on paper, to authenticate digital information; common digital signature algorithms include DSA, RSA and ECDSA. Elliptic curve cryptography (ECC) is public-key cryptography built on elliptic curve mathematics and the hardness of the elliptic curve discrete logarithm problem; commonly used protocols include ECDH, ECDSA and ECIES. Elliptic curve parameters can be configured in many ways, giving many different curves, such as secp256k1, secp256r1 and Curve25519. The curves differ somewhat in security, as described in SafeCurves.

The ECDSA algorithm mainly comprises the following four key functions:

### Generate Key GenKey

- Select an elliptic curve E_P(a,b) and a base point G on it; the order of G is n
-- Select the random number d ∈ n as the private key and calculate the public key Q = d ⋅ G.
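The GenKey step can be sketched in a few lines of Python. This is an illustrative toy using the published secp256k1 constants from SEC 2: textbook double-and-add, not constant-time, and no substitute for a vetted library such as libsecp256k1.

```python
# Illustrative sketch of "Generate Key GenKey" on secp256k1 (parameters
# from SEC 2). Textbook double-and-add, NOT constant-time -- real systems
# should use a vetted library such as libsecp256k1.
import secrets

P = 2**256 - 2**32 - 977  # prime field modulus
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # order n of G
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p1, p2):
    """Point addition on y^2 = x^3 + 7 over F_P; None is the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # p2 == -p1
    if p1 == p2:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P  # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P   # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Scalar multiplication k * point by double-and-add."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, point)
        point = ec_add(point, point)
        k >>= 1
    return acc

def gen_key():
    """GenKey: private key d in [1, n-1], public key Q = d * G."""
    d = secrets.randbelow(N - 1) + 1
    return d, ec_mul(d, G)

d, Q = gen_key()
```

Note that `pow(x, -1, P)` (Python 3.8+) computes a modular inverse; a production signer would additionally need strongly random or deterministic k and constant-time arithmetic.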
+- Select a random number d ∈ [1, n-1] as the private key and compute the public key Q = d ⋅ G

### Signature Algorithm Sign

- Use a message digest algorithm on the message m to get z = hash(m)
-- Generate random number k ∈ n, calculate the point(x, y)=k⋅G
+- Generate a random number k ∈ [1, n-1] and compute the point (x, y) = k ⋅ G
- Take r = x mod n; reselect the random number k if r = 0
- Compute s = k^−1 (z + r·d) mod n; reselect the random number k if s = 0
- The pair (r, s) above is the ECDSA signature

@@ -77,57 +77,57 @@ ECDSA algorithm mainly includes the following four key functions:

Using the public key Q and the message m, validate the signature (r, s).

-- Verify r, s ∈ n
+- Verify r, s ∈ [1, n-1]
- Compute z = hash(m)
- Compute u_1 = z·s^−1 mod n and u_2 = r·s^−1 mod n
-- Calculate(x, y) = u1⋅G+u2⋅Q mod n
+- Compute (x, y) = u_1 ⋅ G + u_2 ⋅ Q
- Check whether r == x mod n; if equal, the signature verification succeeds

### Recovery Algorithm Recover

Given the message m and the signature (r, s), recover the public key Q.

-- Verify r, s ∈ n
+- Verify r, s ∈ [1, n-1]
- Compute R = (x, y), where x may be r, r+n, r+2n, ...; substitute each candidate into the elliptic curve equation to obtain R
- Compute z = hash(m)
-- Compute u _ 1 = − zr ^ − 1 mod n and u _ 2 = sr ^ − 1 mod n
+- Compute u_1 = −z·r^−1 mod n and u_2 = s·r^−1 mod n
- Compute the public key Q = (x’, y’) = u_1 ⋅ G + u_2 ⋅ R

-To answer the question of recoveryID, we focus on "Recovery Algorithm Recover"。In the step of calculating R, we can see that there are
+To answer the question of recoveryID, we focus on the "Recovery Algorithm Recover". In the step of calculating R, we can see that there are
multiple candidates for the value of x, hence possibly multiple valid R, and hence multiple possible results for the recovered Q; these must be compared against the known public key to determine which Q is correct. If no correct Q is found after trying every candidate x, then the message does not correspond to the signature, or the public key is unknown.

-In order to determine the correct Q, you need to traverse all possible values of x and run multiple rounds of the Recover algorithm, which is expensive.。**In order to improve the time efficiency of Recover, the idea of space-for-time is used to add a v value to the signature to quickly determine x and avoid traversal search heuristics, which is the recoveryID.。**
+To determine the correct Q, one must traverse all possible values of x and run multiple rounds of the Recover algorithm, which is expensive. **To improve the time efficiency of Recover, a space-for-time trade-off is applied: a v value is added to the signature so that x can be determined immediately, avoiding the trial-and-error search. This v is the recoveryID.**

-In a blockchain system, the client signs each transaction and the node verifies the transaction signature.。If the "verification algorithm is used," the node must first know the public key corresponding to the transaction, so it needs to carry the public key in each transaction, which requires a lot of bandwidth and storage.。If you use the "Recover algorithm" and carry the recoveryID in the generated signature, you can quickly recover the public key corresponding to the transaction, calculate the user address based on the public key, and then perform the corresponding operation in the user address space.。
+In a blockchain system, the client signs each transaction and the nodes verify the transaction signatures. If the "Verify algorithm" were used, a node would first need to know the public key corresponding to each transaction, so every transaction would have to carry its public key, costing considerable bandwidth and storage. With the "Recover algorithm", the signature carries the recoveryID instead: a node can quickly recover the public key of the transaction, derive the user address from it, and then perform the corresponding operations in that user's address space.

-A blockchain design philosophy is hidden here, the resources (assets, contracts) on the blockchain belong to a user, if you can construct a signature that matches the user's address, it is equivalent to mastering the user's private key, so the node does not need to determine the user's public key in advance, only from the signature to recover the public key, and then calculate the user address, you can perform the corresponding operation of the user address space.。**FISCO BCOS designs and implements transaction signatures and checks based on this principle.**。
+A piece of blockchain design philosophy is hidden here: resources (assets, contracts) on the blockchain belong to a user, and being able to construct a signature matching the user's address is equivalent to holding the user's private key. A node therefore need not know the user's public key in advance; it only has to recover the public key from the signature and derive the user address in order to operate within that address space. **FISCO BCOS designs and implements transaction signing and verification on this principle.**

## **Calculation of recoveryID**

-Article on JavaSDK Performance Optimization ([Remember the Process of Improving JavaSDK Performance from 8000 to 30000](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485438&idx=1&sn=2d68d0f051dd42a0c68dc9da39538ea9&chksm=9f2ef5e2a8597cf4a96620f95b16b734b0efd55d7463c4d0bf04b46b51acce4cf68794a480af&scene=21#wechat_redirect)) mentioned a key optimization point - the calculation of recoveryID, which is discussed carefully here.。
+The article on JavaSDK performance optimization ([Remember the
Process of Improving JavaSDK Performance from 8000 to 30000](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485438&idx=1&sn=2d68d0f051dd42a0c68dc9da39538ea9&chksm=9f2ef5e2a8597cf4a96620f95b16b734b0efd55d7463c4d0bf04b46b51acce4cf68794a480af&scene=21#wechat_redirect)) mentioned a key optimization point, the calculation of recoveryID, which we now examine in detail.

-ECDSA signature (r, s), where r is a point kG on an elliptic curve(x, y)The corresponding x mod n, which is equivalent to leaving only the X-axis coordinate-related values in the signature information and discarding the Y-axis-related values。In "Recovery Algorithm Recover," try to retrieve the value corresponding to the Y-axis to construct R, and then recover the public key.。
+In an ECDSA signature (r, s), r is x mod n for the point kG = (x, y) on the elliptic curve; in effect, the signature keeps only the information related to the X coordinate and discards the Y-related value. The "Recovery Algorithm Recover" tries to retrieve the corresponding Y value in order to reconstruct R and then recover the public key.

Since r = x mod n, any of r, r+n, r+2n, ... may be the legal original value of x, and different elliptic curves admit different numbers of such legal candidates; for the secp256k1 curve used by FISCO BCOS there are two possibilities, r and r+n.

-Each X-axis coordinate corresponds to two possible Y-coordinates, so there are four possible Rs in FISCO BCOS.(r, y) (r, -y) (r+n, y’) (r+n, -y’)。However, the probability of two X-axis coordinates for an r value is extremely low, so low that it can be ignored.
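Just how small these probabilities are can be checked numerically from the published secp256k1 domain parameters; a quick sketch, where the constants are the SEC 2 values:

```python
# Sanity-check of the probability claim, using the secp256k1 domain
# parameters published in SEC 2 (illustrative sketch, not production code).
p = 2**256 - 2**32 - 977  # prime field modulus of secp256k1
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # order of G
gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

# The base point G satisfies the curve equation y^2 = x^3 + 7 (mod p).
assert (gy * gy - (gx**3 + 7)) % p == 0

# r + n can only be a second valid x-coordinate when r < p - n,
# i.e. with probability roughly (p - n) / p.
prob = (p - n) / p
print(f"p - n       = {p - n}")
print(f"(p - n) / p = {prob:.2e}")  # about 3.73e-39, as the article states
```

The second candidate r+n therefore occurs with probability on the order of 10^-39, which justifies ignoring it in practice.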
These two small probability events are ignored in Ethereum。
+Each X-axis coordinate corresponds to two possible Y coordinates, so in FISCO BCOS there are four possible values of R: (r, y), (r, -y), (r+n, y’) and (r+n, -y’). However, the probability that a given r has two valid X-coordinate candidates is extremely low, so low that it can be ignored; Ethereum likewise ignores these two small-probability events.

How small is this small probability? It starts with the parameters of the secp256k1 curve. A point on an elliptic curve is usually described as (x, y), where the values of x and y are results mod p; p is a curve parameter and a large prime. The previously mentioned n is also a curve parameter, related to the number of points on the curve (the number of points is n*h, where h is another curve parameter; for this curve h = 1). In secp256k1 the values of n and p are very close, as shown in the figure below.

![](../../../../images/articles/ecdsa_analysis/IMG_5515.JPG)

-Since r = x mod n, x is the result of mod p, r is the result of mod n, and the range of x values is [0, p-1], the range of r values is [0, n-1]。if r+n is also a point on the curve, then the value of r must be less than p-n, the probability is(p-n) / p, approximately 3.73*10^-39, this probability is very small。
+Since r = x mod n, with x a result mod p and r a result mod n, the range of x is [0, p-1] and the range of r is [0, n-1]. For r+n to also be a valid x coordinate on the curve, r must be less than p-n, which happens with probability (p-n)/p, approximately 3.73×10^-39; this probability is vanishingly small.

Based on the signature result (r, s) and the y value of the random point (x, y) generated during signing, recoveryID is calculated as follows:

1. id = y & 1; // set according to the parity of the y coordinate of the point kG from the "Signature Algorithm Sign"; since y is a result mod p, its parity corresponds exactly to which of the two candidate Y values was used
2. id |= (x != r ? 2 : 0); // the small-probability event explained earlier
-3. if (s > n / 2) id = id ^ 1; / / If s calculated by the signature is greater than n / 2, it will take n-s as the value of s, so the corresponding conversion is done here, and the two conversions occur at the same time.
+3. if (s > n / 2) id = id ^ 1; // if the s produced by signing is greater than n/2, n-s is used as the value of s instead, so the corresponding flip is applied to id here; the two conversions always happen together

-[JavaSDK Performance Optimization](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485438&idx=1&sn=2d68d0f051dd42a0c68dc9da39538ea9&chksm=9f2ef5e2a8597cf4a96620f95b16b734b0efd55d7463c4d0bf04b46b51acce4cf68794a480af&scene=21#wechat_redirect)The article is based on this calculation formula, the traversal search recoveryID to calculate to obtain, greatly improve the performance.。
+The [JavaSDK Performance Optimization](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485438&idx=1&sn=2d68d0f051dd42a0c68dc9da39538ea9&chksm=9f2ef5e2a8597cf4a96620f95b16b734b0efd55d7463c4d0bf04b46b51acce4cf68794a480af&scene=21#wechat_redirect) article applies exactly this formula, replacing the traversal search for recoveryID with a direct calculation and greatly improving performance.

## Afterword

-Start with a magical number, access relevant information, understand the design principles, and then break into the world of ECDSA, confused and wandering in a bunch of mathematical formulas, problem after problem.。At first, I looked at the flowers in the fog, like I didn't understand them, and by virtue of Virgo's cleanliness, I finally resolved my doubts one by one.。Exquisite cryptographic protocols, inscrutable mathematical theories, and a lot to learn as a blockchain code farmer。Only bitter its heart, its bones and muscles, treat every doubt, do not let go of every detail。There will come a day when the clouds will be lifted to see the sun, and the clouds will be kept to see the moon.。
+Start with a magical number, access relevant information,
understand the design principles, and then step into the world of ECDSA, wandering in confusion among a pile of mathematical formulas, one question after another. At first everything was like flowers seen through fog, half understood at best, but thanks to a Virgo's fastidiousness I finally resolved the doubts one by one. Exquisite cryptographic protocols, inscrutable mathematical theories: there is much for a blockchain programmer to learn. Steel the heart, toughen the sinews, face every doubt, let no detail slip. One day the clouds will lift to reveal the sun; keep watch and the moon will shine through.

------

diff --git a/3.x/en/docs/articles/3_features/36_cryptographic/elliptic_curve.md b/3.x/en/docs/articles/3_features/36_cryptographic/elliptic_curve.md
index d057466c6..8b50e1ac5 100644
--- a/3.x/en/docs/articles/3_features/36_cryptographic/elliptic_curve.md
+++ b/3.x/en/docs/articles/3_features/36_cryptographic/elliptic_curve.md
@@ -2,11 +2,11 @@

Author : LI Hui-zhong | Senior Architect, FISCO BCOS

-This paper introduces the common elliptic curves in cryptography and the relationship between them, introduces the naming rules of different standard systems, and attempts to describe the family relationship between elliptic curves.。The article attempts to clarify the elliptic curve related concepts and functions, does not involve complex
mathematical proof and reasoning, welcome interested students to read。The author mainly refers to Wikipedia and related organizations to organize the information on the website, do not rule out the possibility of mistakes, welcome experts to criticize and correct。
+This paper introduces the common elliptic curves in cryptography and the relationships between them, explains the naming rules of the different standards systems, and attempts to describe the family relationships among elliptic curves. It tries to clarify elliptic-curve concepts and functions without involving complex mathematical proofs and reasoning; interested readers are welcome. The author mainly consulted Wikipedia and the websites of the relevant organizations, so mistakes cannot be ruled out; experts are welcome to criticize and correct.

-## A question you may not have cared about.
+## A question you may not have cared about

-In [A Number-Triggered Exploration - ECDSA Analysis](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485497&idx=1&sn=44ac5977abbf92bd81e9013433a59f69&chksm=9f2efa25a8597333575c8bf0642c2e312e54d23867644021c9b2e353963d405e1d945e2a067d&scene=21#wechat_redirect)The elliptic curve secp256k1 mentioned in, it has some characteristics that can quickly calculate the recoveryID。Why is this secp256k1 so named?Not afraid of your jokes, I often misspelled it before figuring it out, writing sec256pk1, seck256p1, etc.。
+The elliptic curve secp256k1 mentioned in [A Number-Triggered Exploration - ECDSA Analysis](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485497&idx=1&sn=44ac5977abbf92bd81e9013433a59f69&chksm=9f2efa25a8597333575c8bf0642c2e312e54d23867644021c9b2e353963d405e1d945e2a067d&scene=21#wechat_redirect) has properties that allow the recoveryID to be computed quickly. But why is secp256k1 so named? At the risk of being laughed at: before figuring it out, I often misspelled it as sec256pk1, seck256p1 and so on.

## Bite the word secp256k1

@@ -16,20 +16,20 @@ To figure out the meaning of the name secp256k1 is actually very simple, search

### 1、Cryptographic Protocol Standards

-The first part is "sec," short for Standards for Efficient Cryptography, a cryptographic protocol standard published by SECG.。SECG published "SEC 1" and "SEC 2" two elliptic curve protocol standards, in "SEC 2" in detail secp256k1 and other curve parameter definition。In addition to "sec," there are many other protocol standards for elliptic curves.
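As a memory aid, the five-part decomposition this article walks through (standard prefix, field type, field size, curve family, parameter index) can be captured in a tiny sketch; this reflects the convention as summarized here, not an official grammar:

```python
import re

# Sketch: split a SEC-style curve name into the parts discussed in this
# article -- standard prefix, field type, field size, curve family, index.
SEC_NAME = re.compile(r"^(sec)([pt])(\d+)([kr])(\d+)$")

def parse_sec_name(name):
    m = SEC_NAME.match(name)
    if not m:
        raise ValueError(f"not a SEC-style curve name: {name}")
    std, field, size, family, index = m.groups()
    return {
        "standard": std,  # SEC = Standards for Efficient Cryptography
        "field": "prime field Fp" if field == "p" else "binary field F2^m",
        "size_bits": int(size),            # finite-field size
        "family": "Koblitz" if family == "k" else "pseudo-random",
        "index": int(index),               # recommended-parameter index
    }

print(parse_sec_name("secp256k1"))
print(parse_sec_name("sect163r2"))
```

Misspellings such as sec256pk1 or seck256p1 simply fail to parse, which is one way to remember the correct order of the parts.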
From "SafeCurve," you can see that there are the following different types of standards。
+The first part is "sec", short for Standards for Efficient Cryptography, a cryptographic protocol standard published by SECG. SECG published two elliptic-curve standards, "SEC 1" and "SEC 2"; "SEC 2" defines in detail the parameters of secp256k1 and other curves. Besides "sec" there are many other elliptic-curve standards; "SafeCurve" lists the following different types of standards.

![](../../../../images/articles/elliptic_curve/IMG_5517.PNG)

-"SafeCurve" has not been updated for a long time, and some standards have been updated many times. For example, NIST's standard FIPS 186 on digital signatures is currently in use in the fourth edition, and the fifth edition is also being drafted.。NIST is the National Institute of Standards and Technology in the United States, so NIST's standards are also American standards.。
+"SafeCurve" has not been updated for a long time, while some standards have been revised many times. For example, NIST's digital-signature standard FIPS 186 is currently used in its fourth edition, with a fifth edition being drafted. NIST is the National Institute of Standards and Technology of the United States, so NIST standards are also US national standards.

![](../../../../images/articles/elliptic_curve/IMG_5518.PNG)

-「NIST FIPS 186-4 "standard defines several elliptic curve standards, such as NIST P-256、NIST P-384, etc., where the beginning NIST also represents the name of the cryptographic protocol standard。Subsequent descriptions are based on these two criteria.。
+The "NIST FIPS 186-4" standard defines several elliptic curves, such as NIST P-256 and NIST P-384, where the leading "NIST" likewise denotes the name of the cryptographic protocol standard. The descriptions below are based on these two standards.

### 2.
Finite field

-The second part is "p," where p indicates that the elliptic curve is based on a prime finite field Fp.。A finite field is a concept in discrete mathematics, which is not expanded here; in simple terms, it is a set of a finite number of elements that can be added and multiplied with some unique properties.。Elliptic curves used in cryptography are based on finite fields, and in addition to the prime finite field Fp, there is another finite field F2m with a characteristic of 2 (due to format problems, 2m should be 2 to the power of m, the same below), Fp's size (number of elements) is p, F2m's size is 2m.。The elliptic curve based on Fp is:
+The second part is "p", indicating that the elliptic curve is defined over a prime finite field Fp. A finite field is a concept from discrete mathematics that we will not expand on here; in simple terms, it is a finite set of elements supporting addition and multiplication with certain special properties. Elliptic curves used in cryptography are defined over finite fields. Besides the prime field Fp there is another finite field, F2m, of characteristic 2 (owing to formatting limitations, "2m" here means 2 to the power m, likewise below); the size (number of elements) of Fp is p, and the size of F2m is 2^m. The elliptic curve over Fp is:

![](../../../../images/articles/elliptic_curve/IMG_5519.PNG)

@@ -40,19 +40,19 @@

The elliptic curve based on F2m is:

![](../../../../images/articles/elliptic_curve/IMG_5520.JPG)

-The sect163k1, sect163r1, etc. curves are also defined in "SEC 2," where t indicates that the curve is based on F2m。in "NIST FIPS 186-4 "in the set p-256、B-163 equal curve, P-representation based on Fp, B-Representation based on F2m。
+Curves such as sect163k1 and sect163r1 are also defined in "SEC 2", where "t" indicates that the curve is based on F2m. "NIST FIPS 186-4" defines curves such as P-256 and B-163, with the prefix P- indicating a curve over Fp and B- a curve over F2m.

### 3.
Finite field size

-Each elliptic curve E has a number of key parameters, including the base point G of order n and the coefficient h, where n is a large prime number, n*h is the number of points on the elliptic curve。For computational efficiency considerations, h is usually set to 1, 2, or 4。In layman's terms, the more points on an elliptic curve, the more secure that elliptic curve is, so the value of n is the key to affecting the safety of the curve。Elliptic curves are based on finite fields, and the points on the curve are elements of the finite field, so the size of the finite field determines the safety of the curve.。The third part "256" is a representation of the finite field size, and there are many more such as 192, 224, 384, etc., in "NIST FIPS 186."-4 "There is a table showing the different size configurations of Fp and F2m domains.。
+Each elliptic curve E has several key parameters, including the base point G of order n and the cofactor h, where n is a large prime and n*h is the number of points on the curve. For computational efficiency, h is usually set to 1, 2 or 4. In layman's terms, the more points on an elliptic curve, the more secure it is, so the value of n is the key factor in the curve's security. Since the curve is defined over a finite field and its points are built from field elements, the size of the finite field determines the curve's security. The third part, "256", expresses the finite-field size; other sizes include 192, 224, 384, etc. "NIST FIPS 186-4" contains a table showing the different size configurations of the Fp and F2m fields.

![](../../../../images/articles/elliptic_curve/IMG_5521.JPG)

-The SEC standard in this
setting is similar to the NIST standard: the p-series curves include p192, p224, p256 (secp256k1 among them), p384 and p521, while the t/B series includes t163/B-163, t233/B-233, etc.

### 4、Koblitz Curve

-The fourth part "k" indicates that the curve is a Koblitz Curve, and from "SEC 2" you can see that there are also curves marked here as r (e.g. secp256r1), r indicates that the curve is a pseudo-random curve Pesudo.-Random Curve。The name Koblitz Curve is derived from the mathematician "Neal Koblitz," which is a special kind of curve, some of its parameters are carefully selected and set。Koblitz Curve has the property of self-homomorphism, which can greatly improve the computational efficiency through optimization.。In contrast, Pesudo-The corresponding parameters of Random Curve are calculated by random seeds, and there are standard test algorithms that can detect that all parameters are generated by random seeds.。corresponds to "**2. Finite field**"The two elliptic curves in the Koblitz Curve are reduced to
+The fourth part, "k", indicates that the curve is a Koblitz Curve; in "SEC 2" there are also curves marked "r" (e.g. secp256r1), where r indicates a pseudo-random curve (Pseudo-Random Curve). The name Koblitz Curve comes from the mathematician Neal Koblitz; it is a special kind of curve whose parameters are carefully chosen, and it possesses an efficiently computable endomorphism that allows substantial optimization of the computation. In contrast, the parameters of a Pseudo-Random Curve are derived from random seeds, and there are standard verification algorithms to check that all parameters were indeed generated from the seeds. Corresponding to "**2.
Finite field**", in the Koblitz case the two elliptic curves reduce to

![](../../../../images/articles/elliptic_curve/IMG_5522.PNG)

@@ -64,19 +64,19 @@

For example, secp256k1 corresponds to the curve with b = 7, which is represented as

![](../../../../images/articles/elliptic_curve/IMG_5524.PNG)

-in "NIST FIPS 186-4 "in the Koblitz Curve curve with" K "-"Mark the beginning, respectively, with K-163、K-233 et al.。
+In "NIST FIPS 186-4," the Koblitz curves begin with the "K-" prefix, e.g. K-163, K-233, etc.

### **5. Last Mark**

-To the fifth part "1," which represents the first four conditions to provide a variety of recommended parameter settings, in the SEC standard most of the bit is 1, that is, only one recommended parameter, sect163r2 is an exception.。Below, the curves recommended by the SEC and NIST standards are listed separately, and the larger part of the two are the same parameter settings.。
+Finally, the fifth part, "1", is an index: a given combination of the first four parts may have several recommended parameter sets. In the SEC standard this digit is mostly 1, i.e. there is only one recommended parameter set; sect163r2 is an exception. Below, the curves recommended by the SEC and NIST standards are listed side by side; most of them share the same parameter settings.

![](../../../../images/articles/elliptic_curve/IMG_5525.JPG)

-In the above table, both SEC and NIST appear in the same row, and although the two curves have different names, they have exactly the same parameters, which means they are actually the same.。Several SEC curves with orange shading do not have corresponding NIST curves, so the SEC standard contains more curves than NIST, and the secp256k1 mentioned at the beginning of this article is the SEC alone.。Speaking of which, I have to mention a serious gossip.。It is said that NIST recommended Pesudo-Random Curve, also known as the P and B series, does not publish random number selection rules, there is a suspicion that the NSA (National Security
Agency) may have a backdoor and can easily crack these cryptographic protocols。Interested students can search for "Dual _ EC _ DRBG backdoor," the bigger gossip is that it is said that Satoshi Nakamoto chose secp256k1 as the curve of the Bitcoin signature algorithm instead of the more commonly used secp256r1, also because of this hidden risk.。
+In the table above, where SEC and NIST entries appear in the same row, the two curves have different names but exactly the same parameters, which means they are in fact the same curve. Several SEC curves with orange shading have no corresponding NIST curve, so the SEC standard contains more curves than NIST; the secp256k1 mentioned at the beginning of this article is one of the SEC-only curves. Speaking of which, I have to pass on a serious piece of gossip. It is said that NIST published its recommended pseudo-random curves (the P and B series) without the random-seed selection rules, raising the suspicion that the NSA (National Security Agency) may hold a backdoor and could easily break these cryptographic protocols. Interested readers can search for "Dual_EC_DRBG backdoor". The bigger gossip is that Satoshi Nakamoto allegedly chose secp256k1, rather than the more common secp256r1, as the curve for Bitcoin's signature algorithm precisely because of this hidden risk.

## **Elliptic curve family tree**

-The research found that "STD" records more detailed standards and curves than "SafeCurve," which feels like an elliptic curve family tree.。Looking through all the curves recorded by the site, it is found that most of them are still based on the curves in the "(2) finite domain," and the recommended parameters are different.。However, there are several exceptions in "other," E-222 with Edward Curve, Curve25519 with Montgomery Curve, Ed448 with Twisted Edward Curve。
+Research shows that "STD" records more detailed standards and curves than "SafeCurve", feeling like an elliptic-curve family tree. Looking through all
the curves recorded by the site, most are still based on the curves of "(2) Finite field" above, differing only in the recommended parameters. However, there are several exceptions under "other": E-222 uses an Edwards curve, Curve25519 uses a Montgomery curve, and Ed448 uses a twisted Edwards curve.

What is an Edwards curve? What is a Montgomery curve? What does the Edwards curve have to do with the twisted Edwards curve? These questions once again hit my blind spots, so what follows is mostly screenshots, with content taken from Wikipedia; if you feel dizzy, skip straight to the conclusion. The Edwards curve is defined as follows:

@@ -103,22 +103,22 @@ According to Wikipedia, you can probably put together a few pieces of information:

1. The Edward Curve is a Twisted Edward Curve
2. Twisted Edward Curve and Montgomery Curve can be converted to each other
3. Both Edward Curve and Montgomery Curve have special properties, such as the ability to speed up calculations
-Curve25519 is a curve, Ed25519 is a signature algorithm.
+4. Curve25519 is a curve, Ed25519 is a signature algorithm
5. Curve25519 is also the selected Montgomery Curve, with higher computational efficiency
-6. The curves used by Curve25519 and Ed25519 are consistent, one is the Montgomery representation and the other is the Twisted Edward Curve representation.
-7. The name of 25519 comes from the finite field parameter p-2255 of the curve.- 19
+6. The curves used by Curve25519 and Ed25519 are consistent: one is the Montgomery representation, the other the Twisted Edward Curve representation
+7.
The name "25519" comes from the curve's finite-field parameter p = 2^255 - 19

While reading Wikipedia I came across the name "Weierstrass equation"; it turns out to be the ancestor of all these curves: over a field k, any such plane curve can be expressed as a Weierstrass equation.

![](../../../../images/articles/elliptic_curve/IMG_5531.PNG)

-It is not difficult to find that each of the formulas mentioned above is an evolutionary version of Weierstrass equation (Twisted Edward Curve doesn't seem to be, but it can be converted to Montgomery Curve, essentially the same).。
+It is not difficult to see that every formula mentioned above is an evolved form of the Weierstrass equation (the Twisted Edward Curve does not appear to be, but it can be converted to a Montgomery Curve, so it is essentially the same).

"STD" lists a number of other standards, such as the Brainpool, BN and MNT curve series. Each series embodies a distinct curve-generation philosophy: some aim to provide verifiable random numbers, some to provide pairing-friendly properties, some to strengthen resistance to attacks. Every set of carefully selected parameters is the design work of a group of mathematicians.

## **Afterword**

-In ancient times, there were words to be interpreted, and the mystery of Chinese characters was revealed, which revealed the natural mechanism of life.;Now bite the words, find out the ellipse principle, uncover the curve family tree。Start with the name, decode secp256k1, clarify the standard;Finally name, ask Zu Weierstrass, pay tribute to the great god。
+In ancient times scholars glossed written characters, revealing the mysteries hidden in them; today we chew over a name, work out the principles of the ellipse, and uncover the curve family tree. Starting from the name, we decoded secp256k1 and clarified the standards; ending with a name, we traced the lineage back to Weierstrass and paid tribute to the great master.

By understanding the intrinsic
relationship between elliptic curves, I now understand their design a little better. Yet the more I know, the more I realize I don't: what is the mathematical principle behind those special curves? Why are they computationally more efficient? How much performance can be gained? ... It is another late night, and I embrace the latest gains mixed with fresh confusion. The computer is playing "Cut off the nerves that are too fine, will you sleep better..." diff --git a/3.x/en/docs/articles/3_features/36_cryptographic/national_cryptography_deployment_example.md b/3.x/en/docs/articles/3_features/36_cryptographic/national_cryptography_deployment_example.md index ec63709d8..b9fc4ae50 100644 --- a/3.x/en/docs/articles/3_features/36_cryptographic/national_cryptography_deployment_example.md +++ b/3.x/en/docs/articles/3_features/36_cryptographic/national_cryptography_deployment_example.md @@ -2,78 +2,78 @@ Author: Liu Haifeng | FISCO BCOS Open Source Community Contributor -As the underlying technology of blockchain continues to receive attention, more and more projects are using blockchain to solidify data。Recently, due to work needs, compared a number of domestic blockchain products, the actual deployment of several blockchain systems, and finally chose FISCO BCOS。In the actual deployment process, I have summarized this set of deployment processes that can be used in actual projects for reference by blockchain enthusiasts.。 +As blockchain infrastructure continues to attract attention, more and more projects are using blockchain to anchor data. Recently, for work, I compared a number of domestic blockchain products, actually deployed several blockchain systems, and finally chose FISCO BCOS. From the deployment process I have distilled this set of steps, usable in real projects, for the reference of blockchain enthusiasts. ## Design a set of scenarios that cover actual project usage -When I decided to use FISCO BCOS, in order to cover the actual project usage as much as possible, I designed a set of scenarios, which contained most of the actual situation, and encountered a pit to fill the pit, and finally deployed successfully according to the designed scenario.。Here is the scenario I designed: +When I decided to use FISCO BCOS, in order to cover actual project usage as fully as possible, I designed a set of scenarios covering most real-world situations, filled each pit as I hit it, and finally deployed successfully according to the designed scenario. Here is the scenario I designed: -1. Assume that multiple agencies form a coalition and that each agency sends a representative to the coalition committee.(In this test networking deployment, it is assumed that there are four ABCD agencies)。 +1. Assume that multiple institutions form an alliance, and each institution sends a representative to the alliance committee (this test deployment assumes four institutions: A, B, C, and D). 2. The alliance decides to use the state-secret edition of FISCO BCOS to form a consortium chain. -3. The alliance committee determines the alliance chain networking mode(The number of nodes, node distribution, group composition and information exchange methods are mainly confirmed.)。 +3. The alliance committee determines the consortium-chain networking mode (mainly confirming the number of nodes, node distribution, group composition, and information-exchange methods). -In this test, different ports of the same server are used to simulate different nodes, and copy instructions are used to simulate information exchange.(The actual deployment can be delivered through the trust of alliance members such as U disk, mail, network message transmission, etc.), the networking mode assumes the following.
+In this test, different ports of the same server are used to simulate different nodes, and copy commands are used to simulate information exchange (in an actual deployment, information can be delivered through channels the alliance members trust, such as USB drives, email, or network transmission). The networking mode is assumed as follows: ![](../../../../images/articles/national_cryptography_deployment_example/640.jpeg) -4. The alliance committee provides the chain certificate of the alliance chain.(Self-built or apply to CA, this test is self-built using the enterprise deployment tool provided by FISCO BCOS.)。 +4. The alliance committee provides the chain certificate of the consortium chain (self-built or applied for from a CA; in this test it is self-built using the enterprise deployment tool provided by FISCO BCOS). -5. Each institution of the Alliance applies to the Alliance Committee for the institution certificate of the corresponding institution.(This test was built using the enterprise-class deployment tools provided by FISCO BCOS)。 +5. Each institution in the alliance applies to the alliance committee for its institution certificate (in this test, also self-built using the enterprise deployment tool provided by FISCO BCOS). -6. Each organization of the alliance, according to the alliance chain networking mode determined by the alliance committee, uses the enterprise deployment tool provided by FISCO BCOS to generate the node deployment program under the organization, and modifies the configuration file of the corresponding node deployment program according to the networking mode.。 +6. Each institution, following the networking mode determined by the alliance committee, uses the enterprise deployment tool provided by FISCO BCOS to generate the deployment packages for its nodes, and modifies each node's configuration files accordingly. -7. All agencies of the alliance deploy and start the subordinate node program to form the alliance chain.。 +7. All institutions in the alliance deploy and start their node programs, forming the consortium chain. 8. Networking changes: -- Organization A believes that it has too many subordinate nodes and consumes resources, and is ready to remove node 2 from group 1 to become a free node and not participate in consensus.。At the same time, in order to improve the efficiency of business processing, node 1 is set as an observation node and does not participate in consensus.。 +- Institution A decides it has too many nodes consuming resources, and plans to remove node 2 from group 1 so that it becomes a free node and no longer participates in consensus. At the same time, to improve business-processing efficiency, node 1 is set as an observer node that does not participate in consensus. - Institution D wants to join group 2 and add an Institution D node: ![](../../../../images/articles/national_cryptography_deployment_example/640-2.png) -- Institution D and Institution B want to form a new group with nodes 3 and 5: Group 3。 +- Institution D and Institution B want to form a new group, group 3, with node 3 and node 5. 9. Institution A and Institution D submit network-change applications to the alliance committee. -10. The Alliance Committee agrees to the network change application of Agency A and Agency D.。 +10. The alliance committee approves the network-change applications of Institution A and Institution D. 11. Institution A starts the network-change operation: -- Organization A or B of Group 1 uses the console or SDK to send instructions for node 2 to exit and node 1 to convert to an observation node based on the organization A's network change request approved by the Alliance Committee to complete the organization A's network change request.。 +- Based on Institution A's network-change application approved by the alliance committee, Institution A or B in group 1 uses the console or SDK to send the commands for node 2 to exit and for node 1 to become an observer node, completing Institution A's network change. 12. Institution D starts the network-change operation: -- In step 5, institution D has applied to the Alliance Committee for the institution certificate of institution D. Now, institution D uses the enterprise deployment tool provided by FISCO BCOS to generate the node deployment program under institution D. According to the new networking mode, modify the configuration file of the node deployment program, start the node, and prepare to join the alliance chain。 +- In step 5, Institution D already applied to the alliance committee for its institution certificate. Now Institution D uses the enterprise deployment tool provided by FISCO BCOS to generate its node deployment packages, modifies the node configuration files according to the new networking mode, starts the nodes, and prepares to join the consortium chain. -- Organization D sends a node 5 network access request to organization B or C in group 2, and waits for organization B or C to use the console or SDK to send an instruction for node 5 to join group 2 consensus。 +- Institution D sends node 5's network-access request to Institution B or C in group 2, and waits for Institution B or C to use the console or SDK to send the command for node 5 to join group 2's consensus. -- Institution B or C uses the console or sdk to send an order for node 5 to join group 2 consensus based on the organization D network change application approved by the Alliance Committee.。 +- Based on Institution D's network-change application approved by the alliance committee, Institution B or C uses the console or SDK to send the command for node 5 to join group 2's consensus. -- Agency D and Agency B form Group 3 based on the Group 3 networking change application agreed by the Alliance Committee.。 +- Based on the group-3 networking-change application approved by the alliance committee, Institution D and Institution B form group 3. -13 and ended.。 +13. The process ends here. ## Deployment Alliance Chain Preparation -FISCO BCOS provides an enterprise-class deployment tool for deploying and using the alliance chain in real projects, which contains the various operations required to deploy the alliance chain.。Alliance committees can use enterprise-level deployment tools to generate self-built chain certificates and issue self-built institution certificates to alliance member institutions.
Alliance member institutions can use enterprise-level deployment tools to generate creation blocks of groups and node deployment procedures of institutions.。 +FISCO BCOS provides an enterprise-grade deployment tool for deploying and using a consortium chain in real projects; it covers all the operations required to deploy the chain. The alliance committee can use it to generate a self-built chain certificate and issue self-built institution certificates to member institutions, and member institutions can use it to generate group genesis blocks and their node deployment packages. -This test simulates the actual build process of the alliance chain and tests the deployment directory(Linux)For: / usr / local / rc3-test-BCOS /, hereinafter referred to as directory rc3-test-BCOS。 +This test simulates the actual build process of the consortium chain; the test deployment directory (Linux) is /usr/local/rc3-test-BCOS/, hereinafter referred to as the directory rc3-test-BCOS. -- Environment requirements: Python 2.7+/3.6+,openssl 1.0.2k+。 +- Environment requirements: Python 2.7+/3.6+, OpenSSL 1.0.2k+. -- Get the Enterprise Deployment Tool: git clone https://github.com/FISCO-BCOS / generator.git, get the enterprise deployment tool root generator。 +- Get the enterprise deployment tool: git clone https://github.com/FISCO-BCOS/generator.git, which yields the tool's root directory, generator. ```eval_rst .. note:: - If you cannot download the Enterprise Deployment Tool for a long time due to network problems, try 'git clone https://gitee.com/FISCO-BCOS/generator.git` + If the enterprise deployment tool cannot be downloaded due to network problems, try `git clone https://gitee.com/FISCO-BCOS/generator.git` ``` -- Upload the generator(or direct clone)for / usr / local / rc3-test-BCOS / generator /, hereinafter referred to as the directory generator。 +- Upload (or clone directly) the generator to /usr/local/rc3-test-BCOS/generator/, hereinafter referred to as the directory generator. -- Ensure the operation permission of the generator: chmod-R 777 /usr/local/rc3-test-BCOS/generator/ +- Ensure the generator is executable: chmod -R 777 /usr/local/rc3-test-BCOS/generator/ - Install the enterprise deployment tool (be sure to install it correctly): @@ -93,9 +93,9 @@ This test simulates the actual build process of the alliance chain and tests the ./generator --download_fisco ./meta -g ``` --The g parameter indicates the state secret, removed-G, the normal FISCO BCOS is downloaded. After executing this command, the national secret FISCO BCOS will be downloaded to the generator / meta /。 +The -g option selects the state-secret edition; if the -g parameter is removed, the standard FISCO BCOS is downloaded.
After the command is executed, the state-secret FISCO BCOS is downloaded to generator/meta/. -- View the FISCO BCOS version: +- View the state-secret FISCO BCOS version: ``` cd generator @@ -110,47 +110,47 @@ cd generator > Git Commit Hash : a43952c544aa8252f7ac965e310148c099510410 ``` -As you can see, this test deployment uses' v1.0.0-rc3 'version of Enterprise Deployment Tools,' 2.0.0-rc3 gm 'version of FISCO BCOS(Seems to be the same as the deployment process for the rc2 version)。 +As you can see, this test deployment uses the `v1.0.0-rc3` enterprise deployment tool and the `2.0.0-rc3 gm` FISCO BCOS (the process seems to be the same as for the rc2 version). ## Certificate Description -Two sets of chain certificates and agency certificates are required for the FISCO BCOS State Secret Edition, namely the State Secret Certificate and the General Edition Certificate.。 +The state-secret edition of FISCO BCOS requires two sets of chain and institution certificates: the state-secret certificates and the standard certificates. - Certificate chain: ``` -> Chain Certificate - Authority Certificate----Node Certificate +> chain certificate - institution certificate - node certificate > -> Chain Certificate - Authority Certificate----sdk certificate +> chain certificate - institution certificate - sdk certificate ``` -- Certificates can be self-built and applied by CA companies. The enterprise deployment tool uses the 'openssl' tool to generate the certificate. If you want to use the certificate applied by CA companies, you can directly change the name of the certificate file to the specified name.。 +- Certificates can be self-built or applied for from a CA. The enterprise deployment tool uses the `openssl` tool to generate certificates; to use CA-issued certificates instead, simply rename the certificate files to the specified names. - The standard chain certificate is `ca.crt` with private key `ca.key`; the state-secret chain certificate is `gmca.crt` with private key `gmca.key`. -- Common version of the certificate authority 'agency.crt', corresponding to the private key 'agency.key', national version of the certificate authority 'gmagency.crt', corresponding to the private key 'gmagency.key'。 -- (Chain certificate private key needs to be kept by the Union Committee, the private key of the agency certificate is kept by the institutions themselves, and the significance of the certificate and the private key is not extended here)。 -- When using the FISCO BCOS Enterprise Deployment Tool, all operations that require a certificate require a chain certificate and an authority certificate.(.crt files)placed under 'generator / meta /'。 +- The standard institution certificate is `agency.crt` with private key `agency.key`; the state-secret institution certificate is `gmagency.crt` with private key `gmagency.key`. +- Note: the chain certificate's private key must be kept by the alliance committee, and each institution keeps its own institution-certificate private key; the meaning of certificates and private keys is not expanded upon here. +- When using the FISCO BCOS enterprise deployment tool, every operation that requires certificates needs the chain certificate and institution certificate (.crt files) placed under `generator/meta/`. ## Detailed steps to deploy the consortium chain (with code for each step) -After trying, I performed the following sequence of operations and successfully deployed a federation chain that met the above scenario description。Because the steps are very detailed and contain a lot of code, they are placed separately in the link here: ["Hands-on
Deploy FISCO BCOS Alliance Chain"](https://blog.csdn.net/FISCO_BCOS/article/details/95496272)(Warm Tip: This article is attached to each step of the code demonstration, copy the link to the PC side to open the experience is better.。) +After these attempts, I performed the following sequence of operations and successfully deployed a consortium chain matching the scenario described above. Because the steps are very detailed and contain a lot of code, they are published separately here: ["Hands-on Deployment of a FISCO BCOS Consortium Chain"](https://blog.csdn.net/FISCO_BCOS/article/details/95496272) (tip: the article attaches a code demonstration to each step; opening the link on a PC gives a better experience.) ## Key Steps Summary 1. The alliance committee and relevant members determine the networking mode; 2. The relevant certificates are issued; 3. The FISCO BCOS enterprise deployment tool is prepared; -4. According to the networking mode, each organization collects the p2p connection information of the subordinate nodes of the group to be formed.; +4. According to the networking mode, each institution collects the p2p connection information of the nodes in the groups to be formed; 5. Each group generates and distributes its genesis block (file); -6. Each organization generates node deployment procedures for adding points under the organization.; -7. Each organization modifies the configuration file in the subordinate node deployment program according to the networking mode and starts the alliance chain.。 +6. Each institution generates the deployment packages for its nodes; +7. Each institution modifies the configuration files in its node deployment packages according to the networking mode and starts the consortium chain. ## Write at the end -At this point, the deployment process for the actual project is complete.。At the beginning of the deployment, I felt that the deployment of the entire system was very complicated. After many deployments, I felt that the process was quite reasonable and clear, because in the actual scenario, not one person was operating, but the entire alliance was working together, and it was actually very fast to deploy according to the organization form that the alliance chain should have. I also summed up some experience in many deployments: +At this point, the deployment process for an actual project is complete. At first the deployment of the whole system felt very complicated, but after many deployments the process came to feel reasonable and clear: in a real scenario it is not one person operating but the whole alliance working together, and deploying according to the organizational form a consortium chain should have is actually very fast. I also summed up some lessons from the many deployments: -- The first is to read more official technical documents and sort out the general process。 -- The second is to solve the problem of certificates, this problem is like a throat for me, in the actual business, the certificate must be legally valid, issued by CA company.。I think the generation of certificates can be considered separate from the enterprise deployment tool as a tool for building test chains.。Of course, in practice, if you don't pay attention to the legal effects of the data, you can also treat the certificate as just an encrypted public-private key pair.。 -- FISCO BCOS is currently in the stage of rapid iterative development, and some of the features introduced are very good, such as the blockchain browser。 -- WeBASE, a blockchain middleware platform that supports the underlying FISCO BCOS platform, is fully functional, quick to get started, and saves a lot of time.。 +- First, read the official technical documentation thoroughly and sort out the overall process. +- Second, solve the certificate problem, which was a sticking point for me: in real business, certificates must be legally valid and issued by a CA. I think certificate generation could be separated from the enterprise deployment tool and treated as a tool for building test chains. Of course, if the legal effect of the data does not matter in practice, the certificates can also be treated as just encrypted public/private key pairs. +- FISCO BCOS is in a stage of rapid iterative development, and some of the features introduced are very good, such as the blockchain browser. +- WeBASE, the blockchain middleware platform adapted to the FISCO BCOS underlying platform, is comprehensive in functionality, quick to get started with, and saves a lot of time. Overall, the FISCO BCOS experience is very good; I look forward to seeing more FISCO BCOS breakthroughs. \ No newline at end of file diff --git a/3.x/en/docs/articles/3_features/36_cryptographic/national_cryptography_features.md b/3.x/en/docs/articles/3_features/36_cryptographic/national_cryptography_features.md index 0d0d4412c..d5ceb93fc 100644 --- a/3.x/en/docs/articles/3_features/36_cryptographic/national_cryptography_features.md +++ b/3.x/en/docs/articles/3_features/36_cryptographic/national_cryptography_features.md @@ -3,9 +3,9 @@ Author : LI Hao Xuan | FISCO BCOS Core Developer -> Security is the guarantee of trust, algorithm is the basis of security, the establishment of node communication channel, signature generation, data encryption, etc., all need to use the corresponding algorithm of cryptography。FISCO BCOS uses the national secret algorithm to achieve a secure and controllable blockchain architecture.。 +> Security is the guarantee of trust, and algorithms are the basis of security: establishing node communication channels, generating signatures, encrypting data, and so on all rely on the corresponding cryptographic algorithms. FISCO BCOS uses the state-secret algorithms to achieve a secure and controllable blockchain architecture. -This article explains the concepts related to the state secret algorithm and the application of the state secret algorithm in the blockchain.。 +This article explains the concepts related to the state-secret algorithms and their application in the blockchain. ## Cryptography in Blockchain @@ -13,20 +13,20 @@ In the blockchain, cryptography is generally used in the following scenarios: ### 1.
Data hashing algorithm -Hash function is a kind of one-way function, the role is to convert any length of the message into a fixed length of the output value, with one-way, collision-free, deterministic, irreversible properties.。 +A hash function is a one-way function that converts a message of any length into a fixed-length output; it is one-way, collision-resistant, deterministic, and irreversible. In the blockchain, the hash function is used to compress messages into a fixed-length output and to guarantee data authenticity, ensuring the data has not been modified. ### 2. Data encryption and decryption algorithm -Encryption and decryption algorithms are mainly divided into two types: symmetric encryption and asymmetric encryption. -- Symmetric encryption has the characteristics of fast speed, high efficiency, high encryption strength, the use of the need to negotiate the key in advance, mainly for large-scale data encryption, such as FISCO BCOS node data when the encryption.。 -- Asymmetric encryption has the characteristics of no need to negotiate the key, compared to symmetric encryption calculation efficiency is lower, there are defects such as man-in-the-middle attacks, mainly used in the process of key agreement.。 +Encryption and decryption algorithms fall into two types: symmetric and asymmetric. +- Symmetric encryption is fast, efficient, and strong, but the key must be negotiated in advance; it is mainly used for large-scale data encryption, such as the disk encryption of FISCO BCOS node data. +- Asymmetric encryption requires no prior key negotiation, but is computationally less efficient than symmetric encryption and is exposed to issues such as man-in-the-middle attacks; it is mainly used during key negotiation. -For different needs, the two can be used in combination with each other.。 +For different needs, the two can be used in combination. -### 3. Generation and verification of message signatures. +### 3. Generation and verification of message signatures -In the blockchain, messages need to be signed for message tamper resistance and authentication。For example, in the process of node consensus, the identity of other nodes needs to be verified, and nodes need to verify the transaction data on the chain.。 +In the blockchain, messages are signed for tamper resistance and authentication. For example, during node consensus the identities of other nodes need to be verified, and nodes need to verify the transaction data on the chain. ### 4. Handshake establishment process @@ -34,14 +34,14 @@ We talked earlier [**Handshake Process of Node TLS**](https://mp.weixin.qq.com/s ## FISCO BCOS's State Secret Algorithm -The national secret algorithm is issued by the National Cryptographic Bureau, including SM1\ SM2\ SM3\ SM4\ and so on, for China's independent research and development of cryptographic algorithm standards.。 -In order to fully support domestic cryptography algorithms, based on domestic cryptography standards, Jinchainmeng has implemented the national secret encryption and decryption, signature, signature verification, hash algorithm, national secret SSL communication protocol, and integrated it into the FISCO BCOS platform to achieve full support for commercial passwords recognized by the National Cryptographic Bureau.。 -The state secret version of FISCO BCOS replaces the cryptographic algorithms of the underlying modules such as transaction signature verification, p2p network connection, node connection, data drop encryption, etc. with the state secret algorithm.。 +The state-secret algorithms, including SM1, SM2, SM3, SM4, and so on, are cryptographic algorithm standards independently developed in China and issued by the State Cryptography Administration. +To fully support domestic cryptographic algorithms, the Golden Chain Alliance (FISCO), based on the domestic cryptography standards, implemented state-secret encryption/decryption, signing, signature verification, hashing, and the state-secret SSL communication protocol, and integrated them into the FISCO BCOS platform, achieving full support for the commercial cryptography approved by the State Cryptography Administration. +The state-secret edition of FISCO BCOS replaces the cryptographic algorithms of the underlying modules, such as transaction signing and verification, p2p network connections, node connections, and disk encryption, with the state-secret algorithms. -1. The state-secret SSL algorithm is used in the node TLS handshake.; +1. The state-secret SSL algorithm is used in the node TLS handshake; 2. The state-secret SM2 algorithm is used to generate and verify transaction signatures; -3. The national secret SM4 algorithm is used in the data encryption process.; -4. The data summary algorithm uses the national secret SM3 algorithm.。 +3. The state-secret SM4 algorithm is used for data encryption; +4. The state-secret SM3 algorithm is used as the message-digest algorithm. The ECDHE_SM4_SM3 cipher suite of state-secret SSL 1.1 is used to establish SSL links for authentication between FISCO BCOS nodes. The differences are shown in the following table: @@ -49,8 +49,8 @@ The ECDHE _ SM4 _ SM3 cipher suite of State Secret SSL 1.1 is used to establish | ------------ | ---------------------------------------- | ---------------------------------------- | | Cipher suite| Uses ECDH, RSA, SHA-256, AES256, and other cryptographic algorithms| Uses the state-secret algorithms| | PRF algorithm| SHA-256 | SM3 | -| Key exchange mode| Transmission elliptic curve parameters and the signature of the current message| The signature and encryption certificate of the current message.| -| Certificate Mode| OpenSSL certificate mode| The dual certificate model of the State Secret, which is an encryption certificate and a signature certificate, respectively.| +| Key exchange mode| Transmits the elliptic-curve parameters and the signature of the current message| Transmits the signature of the current message and the encryption certificate| +| Certificate mode| OpenSSL certificate mode| State-secret dual-certificate mode: an encryption certificate and a signature certificate| The data structure differences between the state-secret edition and the standard edition of FISCO BCOS are as follows: @@ -59,7 +59,7 @@ The data structure differences between the State Secret Edition and the Standard | Signature| ECDSA (Public key length: 512 bits, private key length: 256 bits) | SM2 (Public key length: 512 bits, private key length: 256 bits) | | Hash| SHA3 (Hash string length: 256 bits) | SM3 (Hash string length: 256 bits) | | Symmetric encryption/decryption| AES (Encryption key length: 256 bits) | SM4 (Symmetric key length: 128 bits) | -| Transaction length| 520bits(The identifier is 8bits and the signature length is 512bits.) | 1024bits(128 bytes, including public key 512bits, signature length 512bits) | +| Transaction length| 520 bits (8-bit identifier plus 512-bit signature) | 1024 bits (128 bytes, including a 512-bit public key and a 512-bit signature) | ## Turn on the national secret feature @@ -71,15 +71,15 @@ There are two main ways to build the state-secret FISCO BCOS blockchain: ##### (1). Use the build_chain.sh script to build -of the buildchain.sh script-g is the state secret compilation option. After successful use, the state secret version of the node will be generated.。By default, download the latest stable executable from GitHub. +-g in the build_chain.sh script is the state-secret compilation option; with it, state-secret nodes are generated. By default, the latest stable executable is downloaded from GitHub. ```bash -curl -LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v2.9.1/build_chain.sh && chmod u+x build_chain.sh +curl -LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v2.11.0/build_chain.sh && chmod u+x build_chain.sh ``` ```eval_rst ..
note:: - - If the build _ chain.sh script cannot be downloaded for a long time due to network problems, try 'curl-#LO https://gitee.com/FISCO-BCOS/FISCO-BCOS/raw/master-2.0/tools/build_chain.sh && chmod u+x build_chain.sh` + -If the build _ chain.sh script cannot be downloaded for a long time due to network problems, please try 'curl-#LO https://gitee.com/FISCO-BCOS/FISCO-BCOS/raw/master-2.0/tools/build_chain.sh && chmod u+x build_chain.sh` ``` Run the following command to build a four-node state secret FISCO BCOS alliance chain @@ -92,7 +92,7 @@ After successful execution, a local four-node state-secret FISCO BCOS alliance c ##### (2)Use enterprise deployment tools to build -Enterprise Deployment Tools for Generator-The g command is related to the national secret operation, the user needs to generate the relevant certificate, download the binary and other operations in the process of accompanying.-g option, operation mode: +The -g command of the enterprise deployment tool generator is related to national secrets. You need to attach the -g option to operations such as generating relevant certificates and downloading binaries **Reference Tutorial**:https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/tutorial/enterprise_quick_start.html @@ -115,28 +115,28 @@ The following operations are similar to those in the tutorial The operation commands of the state secret SDK and console are the same as those of the normal version. The Web3SDK and 1.x console need to do the following when enabling the state secret feature: - (1). Set the encryptType in applicationContext.xml to 1; -- (2). When loading the private key, the state secret version of the private key needs to be loaded.; +- (2). When loading the private key, the state secret version of the private key needs to be loaded; - (3). The console also needs to obtain the state secret version jar package。 ```eval_rst .. 
note::
- Java SDK and 2.6.0+No additional configuration is required when the version of the console enables the national secret algorithm.
+ No additional configuration is required when the Java SDK, or console versions 2.6.0 and above, enable the national secret algorithm
```

### 3. Turn on the encryption function of the state secret version

-In the state-secret version, you need to encrypt the nodes conf / gmnode.key and conf / origin _ cert / node.key at the same time. Other operations are the same as those in the normal version.。
+In the state-secret version, you need to encrypt both conf/gmnode.key and conf/origin_cert/node.key of the node. Other operations are the same as those in the normal version。

## SUMMARY

-The main algorithmic features in FISCO BCOS are compared as follows.
+The main algorithmic features in FISCO BCOS are compared as follows

| | **Standard Edition****FISCO BCOS** | **State Secret Edition****FISCO BCOS** |
| ------------ | ------------------------ | ------------------------ |
| SSL Link| OpenSSL TLSv1.2 Protocol| State Secret TLSv1.1 Protocol|
| Signature Verification| ECDSA Signature Algorithm| SM2 Signature Algorithm|
| message digest algorithm| SHA-256 SHA-3 | SM3 Message Digest Algorithm|
-| falling disk encryption algorithm| AES-256 encryption algorithm| SM4 Encryption Algorithm|
+| falling disk encryption algorithm| AES-256 Encryption Algorithm| SM4 Encryption Algorithm|
| Certificate Mode| OpenSSL certificate mode| State Secret Dual Certificate Mode|
| contract compiler| Ethereum Solidity Compiler| State Secret Solidity Compiler|

@@ -146,7 +146,7 @@ The main algorithmic features in FISCO BCOS are compared as follows.

**Q**:**Wang Gang+Yunfei Micro-networking+Zhuhai**: Does the Solidity compiler also need to use cryptographic algorithms?
-**A**:**Li Haoxuan**: The abi in solidity will use the hash, which requires both the underlying and contract compilers to use the same SM3 algorithm.。 +**A**:**Li Haoxuan**: The abi in solidity will use the hash, which requires both the underlying and contract compilers to use the same SM3 algorithm。 **Q**:**Chen Xiaojun-Jiangnan Keyou-Guangzhou**: Can you tell me whether the National Secret TLS protocol suite is implemented by itself or is it open source?? @@ -154,8 +154,8 @@ The main algorithmic features in FISCO BCOS are compared as follows. **Q**:**Tenglong(He Zhiqun)**If you use a state-secret node, because the signature algorithm is changed, will the RPC SDK be different?? -**A**:**Li Haoxuan**Yes, the SDK also needs to enable the national secret feature.。 +**A**:**Li Haoxuan**Yes, the SDK also needs to enable the national secret feature。 -**Q**:**KHJ**: How the private key of the current drop encryption is handled.? +**Q**:**KHJ**: How the private key of the current drop encryption is handled? 
-**A**:**meng**: To save it yourself, please refer to [instructions] here.(https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/storage_security.html)
\ No newline at end of file
+**A**:**meng**: You need to keep it yourself; please refer to the [instructions](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/storage_security.html)
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/37_safety/access_control_glance.md b/3.x/en/docs/articles/3_features/37_safety/access_control_glance.md
index f334e4ec5..1a9b84e07 100644
--- a/3.x/en/docs/articles/3_features/37_safety/access_control_glance.md
+++ b/3.x/en/docs/articles/3_features/37_safety/access_control_glance.md
@@ -4,48 +4,48 @@ Author: Zhang Kaixiang | Chief Architect, FISCO BCOS

**Author's note**

-In the multi-party alliance chain, the division of labor and cooperation between the parties should also be done.**Clear responsibilities, each perform their own duties**。There is no need for chain managers to "be both referees and athletes" to participate in business transactions, and users who only participate in transactions do not have to worry about the development and deployment of smart contracts.。At the same time,"DO separation"(Separation of development and operation and maintenance) is a mature practice in the industry, and overstepping your authority poses risks that could ultimately undermine your reputation and cause loss of assets.。
+In a multi-party alliance chain, the division of labor and cooperation between the parties should follow the principle of **clear responsibilities, with each performing their own duties**。Chain managers need not "be both referee and athlete" by participating in business transactions, and users who only send transactions need not worry about developing and deploying smart contracts。At the same time, "DO separation" (separation of development and operations) is a mature practice in the 
industry, and overstepping your authority poses risks that could ultimately undermine your reputation and cause loss of assets。 Clear, easy to use, comprehensive**Permission control ability**, both for information security, and for improving the governance of the alliance, are essential。 -This article is about FISCO BCOS permission control this matter, the author from the FISCO BCOS permission classification, typical alliance chain role design, permission control operation basic steps and so on.。 +This article is about FISCO BCOS permission control this matter, the author from the FISCO BCOS permission classification, typical alliance chain role design, permission control operation basic steps and so on。 ```eval_rst .. note:: - Starting from "v1.0.9," the console no longer supports the "grantPermissionManager" command. Use the commands related to blockchain committee permission management such as "grantCommitteeMembers" and "grantOperator" instead of this command.。 + Starting from "v1.0.9," the console no longer supports the "grantPermissionManager" command. 
Use the commands related to blockchain committee permission management, such as "grantCommitteeMembers" and "grantOperator", instead。
```

## Permission classification of FISCO BCOS

-FISCO BCOS in the chain just set up, in order to facilitate rapid development and experience, the default does not do any permission control。However, if this chain is used to provide enterprise-level services, it is important to design and implement a permissions control strategy from the outset.。 Permission classification of FISCO BCOS:
+When a FISCO BCOS chain has just been set up, no permission control is applied by default, to facilitate rapid development and experience。However, if the chain is used to provide enterprise-level services, it is important to design and implement a permission control strategy from the outset。 The permission classification of FISCO BCOS:

![](../../../../images/articles/access_control_glance/IMG_4967.PNG)

### 1. Chain administrator permissions

-即**Permissions to assign permissions**If you define account A as the chain administrator, A can assign permissions to accounts B, C, and D.;You can set up multiple administrators. If you do not set up an administrator, any account can modify various permissions indiscriminately.。
+That is, the **permission to assign permissions**. If account A is defined as the chain administrator, A can assign permissions to accounts B, C, and D;multiple administrators can be set up. If no administrator is set up, any account can modify the various permissions indiscriminately。

### 2. System management permissions

Currently includes 4:

-- Node management permissions (adding or deleting consensus nodes or observing nodes)
+- Node management permissions (adding or removing consensus nodes or observer nodes)
- Permission to modify system parameters
- Modify CNS contract naming permissions
-- Can contracts be deployed and table creation permissions
+- Permission to deploy contracts and create tables

-The deployment contract and table creation are "two-in-one" controls, when using the CRUD contract, we recommend that the deployment contract together with the table used in the contract built (written in the contract constructor), otherwise the next read and write table transactions may encounter "missing table" error.。If the business process requires dynamic table creation, the permissions for dynamic table creation should also be assigned to only a few accounts, otherwise various obsolete tables may appear on the chain。
+Contract deployment and table creation are a "two-in-one" control. When using CRUD contracts, we recommend creating the tables a contract uses at deployment time (in the contract constructor); otherwise, subsequent transactions that read or write the table may encounter a "missing table" error。If the business process requires dynamic table creation, that permission should likewise be assigned to only a few accounts; otherwise, all kinds of obsolete tables may appear on the chain。

### 3. 
User Table Permissions -At the granularity of the user table, control whether certain accounts can**Overwrite a user table**to prevent the user table from being accidentally modified by others, this permission depends on the FISCO BCOS CRUD contract writing。In addition,**Read User Table**Not controlled by permissions;If you want to control the privacy of data, you need to introduce technologies such as data encryption and zero knowledge.。 +At the granularity of the user table, control whether certain accounts can**Overwrite a user table**to prevent the user table from being accidentally modified by others, this permission depends on the FISCO BCOS CRUD contract writing。In addition,**Read User Table**Not controlled by permissions;If you want to control the privacy of data, you need to introduce technologies such as data encryption and zero knowledge。 ### 4. Contract Interface Permissions -A contract can include multiple interfaces, because the logic in the contract is closely related to the business, the interface granularity of the permission control is implemented by the developer, the developer can judge the msg.sender or tx.organ, decide whether to allow this call to continue processing.。 +A contract can include multiple interfaces, because the logic in the contract is closely related to the business, the interface granularity of the permission control is implemented by the developer, the developer can judge the msg.sender or tx.organ, decide whether to allow this call to continue processing。 -The FISCO BCOS console provides a series of commands to control permissions, which can be easily used by users.**Grant, Cancel(revoke), View(list)**For various permissions, see the documentation on the console。 +The FISCO BCOS console provides a series of commands to control permissions, which can be easily used by users**Grant, Cancel(revoke), View(list)**For various permissions, see the documentation on the console。 ## Typical Rights Management Role Design in Alliance 
Chain @@ -53,21 +53,21 @@ In the alliance chain, different roles perform their duties, division of labor a ### 1. Chain Manager -A committee is usually selected by multiple parties involved in the chain, and one or more agencies can be granted administrator privileges for personnel management and authority allocation。The chain administrator is not responsible for node management, modifying system parameters, deploying contracts and other system management operations.。 +A committee is usually selected by multiple parties involved in the chain, and one or more agencies can be granted administrator privileges for personnel management and authority allocation。The chain administrator is not responsible for node management, modifying system parameters, deploying contracts and other system management operations。 ### 2. System Administrator -Designated business operators or system operation and maintenance personnel, assign various permissions as needed, responsible for daily on-chain management, including node addition and deletion, system parameter modification, etc.。The chain administrator assigns permissions according to the governance rules agreed upon by everyone, for example, only the specified accounts are allowed to deploy contracts, and they are given contract deployment permissions so that other accounts cannot deploy contracts at will.。 +Designated business operators or system operation and maintenance personnel, assign various permissions as needed, responsible for daily on-chain management, including node addition and deletion, system parameter modification, etc。The chain administrator assigns permissions according to the governance rules agreed upon by everyone, for example, only the specified accounts are allowed to deploy contracts, and they are given contract deployment permissions so that other accounts cannot deploy contracts at will。 ### 3. Transaction Users -Users send business transaction requests to the blockchain. 
Business transactions mainly call contracts and read and write user tables, which can be flexibly controlled according to business logic, combined with user table permissions and contract interface permissions.。 +Users send business transaction requests to the blockchain. Business transactions mainly call contracts and read and write user tables, which can be flexibly controlled according to business logic, combined with user table permissions and contract interface permissions。 ### 4. Regulators Which system and user table permissions are assigned to the supervisor, you can refer to the specific regulatory rules, such as the supervisor read-only all data, there is no need to set special permissions。 -Managing accounts with different roles is another complex issue, one that needs to be clearly differentiated, easy to use, and secure;In case the account is lost, you need to support recovery. If the account is leaked, reset it. We will introduce it in another article later.。 +Managing accounts with different roles is another complex issue, one that needs to be clearly differentiated, easy to use, and secure;In case the account is lost, you need to support recovery. If the account is leaked, reset it. 
We will introduce it in another article later。 ## Basic steps for privilege control operations @@ -91,13 +91,13 @@ The command line to assign administrator privileges is: grantPermissionManager 0xf1585b8d0e08a0a00fff662e24d67ba95a438256 ``` -When this account gets the chain administrator permissions, exit the current console or switch to another terminal window, log in once with the private key of this account, and you can perform subsequent operations as a chain administrator.。 +When this account gets the chain administrator permissions, exit the current console or switch to another terminal window, log in once with the private key of this account, and you can perform subsequent operations as a chain administrator。 -**Tips**: Be sure to remember the correspondence between the administrator address and the private key, otherwise once the administrator permissions are set, only the administrator can assign permissions to other accounts, and the settings of other accounts will report no permissions.。 +**Tips**: Be sure to remember the correspondence between the administrator address and the private key, otherwise once the administrator permissions are set, only the administrator can assign permissions to other accounts, and the settings of other accounts will report no permissions。 ### step2 -Log in to the console with the chain administrator account, and assign node management permissions, system parameter modification permissions, CNS permissions, deployment contract and table creation permissions to other system administrator accounts in turn according to the management policy.。Then log on to the console with the private key of a system administrator account with the appropriate permissions, such as an account with deployment and table creation permissions, for the next step.。 +Log in to the console with the chain administrator account, and assign node management permissions, system parameter modification permissions, CNS permissions, deployment contract and table 
creation permissions to other system administrator accounts in turn, according to the management policy。Then log in to the console with the private key of a system administrator account that has the appropriate permissions, such as an account with deployment and table creation permissions, for the next step。

### step3

@@ -111,13 +111,13 @@ Authorize 0xf1585b8d0e08a0a00fff662e24d67ba95a438256 to operate this account**t_

### step4

-For an interface in the Solidity contract, you can refer to this code for control.
+For an interface in a Solidity contract, you can refer to this code for access control

```
function testFunction() public returns(int256) {
-    require(msg.sender == tx.origin); / / The effect of this line is to prohibit contract adjustment.
-    if(msg.sender != address(0x156dff526b422b17c4f576e6c0b243179eaa8407) ) / / Here is an example, the account address is written directly in clear text, which can actually be handled flexibly during development.。
+    require(msg.sender == tx.origin); // This line forbids calls made through another contract
+    if(msg.sender != address(0x156dff526b422b17c4f576e6c0b243179eaa8407) ) // Example only: the account address is hard-coded in plain text here; in practice it can be handled flexibly during development
    { return -1; } // Return an error if the caller differs from the preset authorized caller
}
```

@@ -126,7 +126,7 @@ msg.sender is the address of the caller of the current contract, either the user

## Summary and references

-This article describes some of the interfaces and capabilities that FISCOBCOS provides at the basic level, and the reasonableness and sophistication of permission control will ultimately depend on the user, and you can continue to explore the scenario governance and security control of different chains in depth to arrive at best practices.。
+This article describes some of the interfaces and capabilities that FISCO BCOS provides at the basic level. The reasonableness and sophistication of permission 
control will ultimately depend on the user; you can continue to explore the governance and security control of different chain scenarios in depth to arrive at best practices。

#### References

@@ -154,7 +154,7 @@ This article describes some of the interfaces and capabilities that FISCOBCOS pr

If the answer to the above two questions is yes, is it possible to modify the data of the entire network as long as you have the super permissions of one node?

-**@ Light Path**Yes. Before establishing the chain, you must first negotiate which account or accounts will assume the role of the chain administrator. The roles will be assigned as soon as the chain is established. For details, see FISCO BCOS permission control related documents.。
+**@Light Path**: Yes. Before establishing the chain, you must first negotiate which account or accounts will assume the role of the chain administrator; the roles are assigned as soon as the chain is established. For details, see the FISCO BCOS permission control documentation。

Thanks to everyone who took part in this topic discussion! The open source community is better because of you!
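As a companion to the `testFunction` snippet above, the same interface-level check can be written with the authorized address kept in contract state rather than hard-coded. This is only an illustrative sketch — the contract and function names are invented for this example and are not part of FISCO BCOS:

```solidity
pragma solidity ^0.4.25;

contract PermissionedCounter {
    address private owner;   // illustrative allow-list of one: the deployer
    int256 private counter;

    constructor() public {
        owner = msg.sender;
    }

    function increment() public returns (int256) {
        // Reject calls relayed through another contract, as in the article
        require(msg.sender == tx.origin);
        // Unauthorized direct callers get an error code instead of reverting
        if (msg.sender != owner) {
            return -1;
        }
        counter += 1;
        return counter;
    }
}
```

Storing the authorized address in state (set in the constructor, or via an admin-only setter) avoids redeploying the contract when the authorized account changes.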
diff --git a/3.x/en/docs/articles/3_features/37_safety/certificate_description.md b/3.x/en/docs/articles/3_features/37_safety/certificate_description.md index b1e2c2d07..b00a46c44 100644 --- a/3.x/en/docs/articles/3_features/37_safety/certificate_description.md +++ b/3.x/en/docs/articles/3_features/37_safety/certificate_description.md @@ -2,37 +2,37 @@ Author : LI Hao Xuan | FISCO BCOS Core Developer -alliance chain, multi-party participation on the chain is a collaborative relationship;The alliance chain is open to authorized organizations or institutions, with access mechanisms。In the admission mechanism, the certificate is an important credential for the parties to authenticate each other.;So to speak,**Certificate mechanism is the cornerstone of alliance chain network security**。 +alliance chain, multi-party participation on the chain is a collaborative relationship;The alliance chain is open to authorized organizations or institutions, with access mechanisms。In the admission mechanism, the certificate is an important credential for the parties to authenticate each other;So to speak,**Certificate mechanism is the cornerstone of alliance chain network security**。 ## Part1: FISCO BCOS Certificate Structure -FISCO BCOS network adopts CA-oriented access mechanism, uses the certificate format of x509 protocol, supports any multi-level certificate structure, and ensures information confidentiality, authentication, integrity and non-repudiation.。According to existing business scenarios, FISCO BCOS adopts a three-level certificate structure by default, with chain certificates, agency certificates, and node certificates from top to bottom.。 +FISCO BCOS network adopts CA-oriented access mechanism, uses the certificate format of x509 protocol, supports any multi-level certificate structure, and ensures information confidentiality, authentication, integrity and non-repudiation。According to existing business scenarios, FISCO BCOS adopts a three-level certificate structure 
by default, with chain certificates, agency certificates, and node certificates from top to bottom。 ![](../../../../images/articles/certificate_description/IMG_5540.PNG) ▲ Figure: Certificate format of x509 protocol -The certificate content includes the certificate version number, serial number, certificate signing algorithm, message digest algorithm and other generation information.;It also includes information such as the issuer, validity period, user, public key information, and cipher suites required for SSL communication.。The node loads the certificate and, when receiving the packet, verifies the certificate carried in the packet according to the cipher suite specified in the certificate and its message fields.。 +The certificate content includes the certificate version number, serial number, certificate signing algorithm, message digest algorithm and other generation information;It also includes information such as the issuer, validity period, user, public key information, and cipher suites required for SSL communication。The node loads the certificate and, when receiving the packet, verifies the certificate carried in the packet according to the cipher suite specified in the certificate and its message fields。 ## Part2: Role Definition -There are four roles in the certificate structure of FISCO BCOS, namely, the consortium chain committee, the consortium chain member body, and the consortium chain participant (node and SDK).。 +There are four roles in the certificate structure of FISCO BCOS, namely, the consortium chain committee, the consortium chain member body, and the consortium chain participant (node and SDK)。 ### 1. Alliance Chain Committee -The affiliate chain committee has the root certificate of the affiliate chain, ca.crt, and the private key, ca.key. 
By using ca.key to issue the certificate to the affiliate chain member institutions, the affiliate chain committee is responsible for the admission and removal of the affiliate chain member institutions.。 +The affiliate chain committee has the root certificate of the affiliate chain, ca.crt, and the private key, ca.key. By using ca.key to issue the certificate to the affiliate chain member institutions, the affiliate chain committee is responsible for the admission and removal of the affiliate chain member institutions。 ### 2. Alliance chain member institutions -Alliance chain member institutions are those that have been approved by the Alliance Chain Committee to join the Alliance Chain.。Consortium chain member institutions have the institution private key agency.key and the institution certificate agency.crt issued by the root private key ca.key.。Alliance chain member organizations can issue node certificates through the private key of the organization, so as to configure the nodes and SDKs of the organization.。 +Alliance chain member institutions are those that have been approved by the Alliance Chain Committee to join the Alliance Chain。Consortium chain member institutions have the institution private key agency.key and the institution certificate agency.crt issued by the root private key ca.key。Alliance chain member organizations can issue node certificates through the private key of the organization, so as to configure the nodes and SDKs of the organization。 ### 3. Alliance Chain Participants -Consortium participants can interact with each other through running nodes or SDK Consortium. 
They have the node certificate node.crt and the private key node.key that communicate with other nodes.。When a federation chain participant runs a node or SDK, the root certificate ca.crt, the corresponding node certificate node.crt, and the private key node.key must be loaded.;Use pre-loaded certificates for authentication when communicating with other members。 +Consortium participants can interact with each other through running nodes or SDK Consortium. They have the node certificate node.crt and the private key node.key that communicate with other nodes。When a federation chain participant runs a node or SDK, the root certificate ca.crt, the corresponding node certificate node.crt, and the private key node.key must be loaded;Use pre-loaded certificates for authentication when communicating with other members。 ## Part3: Certificate Generation Process -### 1. The Alliance Chain Committee initializes the root certificate ca.crt. +### 1. The Alliance Chain Committee initializes the root certificate ca.crt - locally generated private key ca.key;Self-signed root certificate generation ca.crt。 @@ -40,11 +40,11 @@ Consortium participants can interact with each other through running nodes or SD ### 2. 
Alliance chain member institutions obtain institutional certificates agency.crt -- Locally generated private key agency.key; -- Generate the certificate request file agency.csr from the local private key; -- Send the certificate request file agency.csr to the federation chain committee; -- The consortium chain committee uses ca.key to issue the certificate request file agency.csr to obtain the certificate agency.crt of the consortium chain member body.; -- The consortium chain committee sends the consortium chain member body certificate agency.crt to the corresponding member。 +- locally generated private key agency.key; +- Generate certificate request file agency.csr from local private key; +- send the certificate request file agency.csr to the federation chain committee; +- The consortium chain committee uses ca.key to issue the certificate request file agency.csr to obtain the consortium chain member certificate agency.crt; +- the federation chain committee sends the federation chain member body certificate agency.crt to the corresponding member。 ![](../../../../images/articles/certificate_description/IMG_5542.PNG) @@ -52,7 +52,7 @@ Consortium participants can interact with each other through running nodes or SD ![](../../../../images/articles/certificate_description/IMG_5543.PNG) -- Locally generated private key node.key; +- locally generated private key node.key; - Generate certificate request file node.csr from local private key; @@ -60,7 +60,7 @@ Consortium participants can interact with each other through running nodes or SD - Alliance chain member organizations use agency.key to issue the certificate request file node.csr to obtain the node / SDK certificate node.crt; -- The consortium chain member organization sends the node certificate node.crt to the corresponding entity.。 +- The consortium chain member organization sends the node certificate node.crt to the corresponding entity。 ![](../../../../images/articles/certificate_description/IMG_5544.PNG) @@ 
-70,13 +70,13 @@ Consortium participants can interact with each other through running nodes or SD Take the generation of the consortium chain member institution certificate as an example: -1. The institution first uses the institution's private key agency.key locally to generate the certificate request file agency.csr.; +1. The institution first uses the institution's private key agency.key locally to generate the certificate request file agency.csr; ``` openssl req -new -sha256 -subj "/CN=$name/O=fisco-bcos/OU=agency" -key ./agency.key -config ./cert.cnf -out ./agency.csr ``` -2. The Alliance Chain Committee generates the certificate agency.crt from the certificate request file.; +2. The Alliance Chain Committee generates the certificate agency.crt from the certificate request file; ``` openssl x509 -req -days 3650 -sha256 -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -in ./agency.csr -out ./agency.crt -extensions v4_req -extfile ./cert.cnf @@ -86,7 +86,7 @@ cert.cnf in the above process is a certificate information configuration item, w ## Part5: Summary and Reference Documents -This paper introduces the relevant certificate description and its corresponding hierarchical architecture adopted by FISCO BCOS.;In a follow-up article, we'll explain how to use these digital certificates during the SSL handshake.。 +This paper introduces the relevant certificate description and its corresponding hierarchical architecture adopted by FISCO BCOS;In a follow-up article, we'll explain how to use these digital certificates during the SSL handshake。 ------ diff --git a/3.x/en/docs/articles/3_features/37_safety/disk_encryption.md b/3.x/en/docs/articles/3_features/37_safety/disk_encryption.md index 40404b0c8..dbba433b2 100644 --- a/3.x/en/docs/articles/3_features/37_safety/disk_encryption.md +++ b/3.x/en/docs/articles/3_features/37_safety/disk_encryption.md @@ -2,35 +2,35 @@ Author : SHI Xiang | FISCO BCOS Core Developer -Blockchain deployment involves multiple parties. 
To simplify the construction of a multi-party collaboration environment, a public cloud is usually used to deploy blockchain.。Organizations deploy their own nodes to the cloud, allowing services to interact with nodes on the cloud to achieve multi-party collaboration.。In this architecture, security within the institution is high, especially in financial institutions。 +Blockchain deployment involves multiple parties. To simplify the construction of a multi-party collaboration environment, a public cloud is usually used to deploy blockchain。Organizations deploy their own nodes to the cloud, allowing services to interact with nodes on the cloud to achieve multi-party collaboration。In this architecture, security within the institution is high, especially in financial institutions。 Although the nodes are restricted to the "intranet" through the network isolation mechanism, data cannot be easily stolen through the network, but all data is hosted on the cloud, because all participants will save a copy of the data, in the network and system security measures There are omissions or improper operation and other extreme circumstances, there may be a data access situation。 In order to prevent data disks from being breached or stolen and avoid data leakage, FISCO BCOS introduces the function of "disk encryption."。 ## Background Architecture -In the architecture of the alliance chain, a blockchain is built between institutions, and data is visible within each institution of the alliance chain.。 -In some scenarios with high data security requirements, members within the alliance do not want organizations outside the alliance to have access to data on the alliance chain。At this point, you need to access the data on the federation chain.。 +In the architecture of the alliance chain, a blockchain is built between institutions, and data is visible within each institution of the alliance chain。 +In some scenarios with high data security requirements, members within the alliance do not 
want organizations outside the alliance to have access to data on the alliance chain。At this point, you need to access the data on the federation chain。 -Access control for federated chain data is divided into two main areas. +Access control for federated chain data is divided into two main areas - Access control of communication data on the chain -- Access Control of Node Storage Data +- Access control for node storage data -For access control of on-chain communication data, FISCO BCOS is done through node certificates and SSL.。This section focuses on access control for node storage data, i.e., drop disk encryption。 +For access control of on-chain communication data, FISCO BCOS is done through node certificates and SSL。This section focuses on access control for node storage data, i.e., drop disk encryption。 ![](../../../../images/articles/disk_encryption/IMG_4939.PNG) ## main idea Falling disk encryption is performed inside the institution。In the organization's intranet environment, each organization independently encrypts the hard drive data of the node。 -When the hard disk of the machine where the node is located is taken away from the organization and the node is started on a network outside the organization's intranet, the hard disk data cannot be decrypted, the node cannot be started, and thus the data on the alliance chain cannot be stolen.。 +When the hard disk of the machine where the node is located is taken away from the organization and the node is started on a network outside the organization's intranet, the hard disk data cannot be decrypted, the node cannot be started, and thus the data on the alliance chain cannot be stolen。 ## Programme architecture ![](../../../../images/articles/disk_encryption/IMG_4940.PNG) Drop-disk encryption is performed within the organization, and each organization independently manages the security of its own hard drive data。In the intranet, the hard drive data of each node is encrypted。Access to all encrypted data, managed 
through Key Manager。
-Key Manager is deployed in the organization's intranet and is a service dedicated to managing node hard disk data access keys.。When a node in the intranet is started, it obtains the access key for the encrypted data from the Key Manager to access its own encrypted data.。
+Key Manager is deployed in the organization's intranet and is a service dedicated to managing node hard disk data access keys。When a node in the intranet is started, it obtains the access key for the encrypted data from the Key Manager to access its own encrypted data。
Cryptographically protected objects include:
@@ -39,22 +39,22 @@ Cryptographically protected objects include:
## realization principle
-The specific implementation process is accomplished through the dataKey held by the node itself and the global superKey managed by the Key Manager.。
+The specific implementation process is accomplished through the dataKey held by the node itself and the global superKey managed by the Key Manager。
### Node
-- The node uses its own dataKey to encrypt and decrypt its own encrypted data (Encrypted Space).。
+- The node uses its own dataKey to encrypt and decrypt its own encrypted data (Encrypted Space)。
-- The node itself does not store the dataKey on the local disk, but stores the encrypted cipherDataKey of the dataKey.。
+- The node itself does not store the dataKey on the local disk, but stores the encrypted cipherDataKey of the dataKey。
- When the node is started, request the cipherDataKey from the Key Manager to obtain the dataKey。
-- The dataKey is only in the node's memory. When the node is closed, the dataKey is automatically discarded.。
+- The dataKey is only in the node's memory. When the node is closed, the dataKey is automatically discarded。
### Key Manager
-Holds the global superKey, which is responsible for responding to authorization requests when all nodes are started.。
+Holds the global superKey, which is responsible for responding to authorization requests when all nodes are started。
-- Key Manager must be online at node startup to respond to node startup requests。
+- Key Manager must be online when the node is started to respond to node startup requests。
- When the node is started, the cipherDataKey is sent. The key manager decrypts the cipherDataKey with the superKey. If the decryption is successful, the node's dataK is returned to the node。
-- Key Manager can only be accessed from the intranet. Key Manager cannot be accessed from the intranet outside the organization.。
+- Key Manager can only be accessed from the intranet, and cannot be accessed from the extranet outside the organization。
![](../../../../images/articles/disk_encryption/IMG_4941.PNG)
@@ -62,7 +62,7 @@ Holds the global superKey, which is responsible for responding to authorization
### Key Manager Actions
-Start a key on each institution-The manger program is started with the following command, specifying the Key Manager:
+Start a key-manager program in each organization with the following command, specifying the port and the superKey:
```
# Parameters: port, superkey
@@ -78,7 +78,7 @@ Start a key on each institution-The manger program is started with the following
bash key-manager/scripts/gen_data_secure_key.sh 127.0.0.1 31443 12345
```
-Obtain the cipherDataKey. The script automatically prints the ini configuration required for disk encryption.(如下)。The cipherDataKey of the node is obtained: cipher _ data _ key = ed157f4588b86d61a2e1745efe71e6ea。
+Obtain the cipherDataKey. The script automatically prints the ini configuration required for disk encryption (as shown below)。The cipherDataKey of the node is obtained: cipher _ data _ key = ed157f4588b86d61a2e1745efe71e6ea。
```
[storage_security]
@@ -88,7 +88,7 @@ key_manager_port=31443
cipher_data_key=ed157f4588b86d61a2e1745efe71e6ea
```
-Write the resulting encrypted ini configuration to the node configuration file (config.ini).。
+Write the resulting encrypted ini configuration to the node configuration file (config.ini)。
#### (2) Encrypt the private key of the new node
@@ -116,9 +116,9 @@ Start the node directly。If the hard disk where this node is located is taken o
## Precautions
-- Key Manager is a demo version. Currently, the superkey is specified at startup through the command line. In practical applications, you need to customize the way to load the superkey according to the security requirements, such as using an encryption machine to manage it.。
-- Disk encryption is configured only for newly generated nodes. Once a node is started, it cannot be converted to a node with disk encryption.。
-- The state secret version encrypts one more private key than the non-state secret version.。
+- Key Manager is a demo version. Currently, the superKey is specified at startup through the command line. In practical applications, you need to customize the way the superKey is loaded according to your security requirements, such as managing it with an encryption machine。
+- Disk encryption is configured only for newly generated nodes.
Once a node is started, it cannot be converted to a node with disk encryption。
+- The national cryptography version encrypts one more private key than the non-national-cryptography version。
------
diff --git a/3.x/en/docs/articles/3_features/37_safety/role_authority_model_realization.md b/3.x/en/docs/articles/3_features/37_safety/role_authority_model_realization.md
index d8295b35c..da2bec7fa 100644
--- a/3.x/en/docs/articles/3_features/37_safety/role_authority_model_realization.md
+++ b/3.x/en/docs/articles/3_features/37_safety/role_authority_model_realization.md
@@ -4,26 +4,26 @@ Author : Bai Xingqiang | FISCO BCOS Core Developer
## Introduction
-The permission control of FISCO BCOS is realized by controlling the account's write permission to the table in the system.。This permission control model is very flexible and powerful, and users can control almost any permission, for example, by controlling the write permission management of the permission table to assign permissions.;By controlling the write permission management chain configuration, node identity management, contract deployment, user table creation, etc. of the table corresponding to the system contract.;Manage the call of the contract write interface by controlling the write permission of the contract table.。
+The permission control of FISCO BCOS is realized by controlling the account's write permission to the tables in the system。This permission control model is very flexible and powerful, and users can control almost any permission: for example, assigning permissions by controlling write permission to the permission table; managing chain configuration, node identity, contract deployment, user table creation, etc. by controlling write permission to the corresponding system tables; and managing calls to a contract's write interfaces by controlling write permission to the contract table。
-However, absolute perfection does not exist.。Powerful and flexible permission control also brings high learning costs: users need to understand the content of each permission item control and how to set it up, understand the difference between chain administrators and system administrators... A large number of concepts and operations are extremely demanding on users。
+However, absolute perfection does not exist。Powerful and flexible permission control also brings high learning costs: users need to understand what each permission item controls and how to set it up, and understand the difference between chain administrators and system administrators... A large number of concepts and operations are extremely demanding on users。
-In order to reduce the difficulty of use and improve the user experience, FISCO BCOS v2.5 has optimized this function and added role-based permission control。Attributing different permissions to different roles, users can determine the permissions they have based on the roles to which the account belongs.。At the same time, v2.5 introduces a role-based on-chain governance voting model to make governance operations more convenient.。
+In order to reduce the difficulty of use and improve the user experience, FISCO BCOS v2.5 has optimized this function and added role-based permission control。By attributing different permissions to different roles, users can determine an account's permissions from the roles it belongs to。At the same time, v2.5 introduces a role-based on-chain governance voting model to make governance operations more convenient。
## What is the role permission model?
-After using the role permission model, users only need to remember the role, and the permissions corresponding to the role are self-evident, for example, the governance committee members have chain governance-related permissions, which greatly reduces the difficulty of user understanding and learning costs.。 +After using the role permission model, users only need to remember the role, and the permissions corresponding to the role are self-evident, for example, the governance committee members have chain governance-related permissions, which greatly reduces the difficulty of user understanding and learning costs。 ![](../../../../images/articles/role_authority_model_realization/IMG_5553.PNG) ### Permissions corresponding to roles -Participants on the blockchain can be divided into governance, operation and maintenance, regulatory and business parties according to their roles.。In order to avoid being both a referee and an athlete, the governance and operation and maintenance parties should be separated from each other's responsibilities and roles should be mutually exclusive.。 +Participants on the blockchain can be divided into governance, operation and maintenance, regulatory and business parties according to their roles。In order to avoid being both a referee and an athlete, the governance and operation and maintenance parties should be separated from each other's responsibilities and roles should be mutually exclusive。 -- Governance: The role is called the governance committee member, referred to as the member, responsible for blockchain governance.。 +- Governance side: the role is called the governance committee member, referred to as the member, responsible for blockchain governance。 - Operation and maintenance side: responsible for blockchain operation and maintenance, this role is added by the committee。 -- Business side: The business side account is added to a contract by O & M, and the write interface of the contract can be called.。 -- Regulator: Regulator 
monitors the operation of the chain and is able to obtain records of changes in permissions and data to be audited during the chain operation.。
+- Business side: The business side account is added to a contract by O & M, and the write interface of the contract can be called。
+- Regulator: The regulator monitors the operation of the chain and can obtain records of changes in permissions and data to be audited during the chain operation。
The permissions corresponding to each role are shown in the following table。
@@ -31,21 +31,21 @@ The permissions corresponding to each role are shown in the following table。
### Details of Role Permissions Implementation
-This section will briefly introduce the details of the implementation of permissions for members, operations and business roles, as well as the principles behind them, in order to better understand and use the role permissions feature.。
+This section will briefly introduce the details of the implementation of permissions for members, operations and business roles, as well as the principles behind them, in order to better understand and use the role permissions feature。
-There is no member account at the beginning of the chain, and when there is at least one member account, the rights of the member begin to be controlled.。In the practical application of the alliance chain, the technical strength of multiple participants may not be the same, starting from the actual application scenario, we introduced the chain governance voting model, all governance operations need the number of valid votes / number of members > effective threshold to take effect, the user through the new chain governance pre-compiled contract can achieve the addition and deletion of members, weight modification, voting effective threshold modification and other operations.。
+There is no member account at the beginning of the chain, and when there is at least one member account, the rights of the member begin to be controlled。In the practical application of the alliance chain, the technical strength of the participants may differ. Starting from practical application scenarios, we introduced the on-chain governance voting model: a governance operation takes effect only when the number of valid votes / number of members > effective threshold. Through the new chain governance precompiled contract, users can add or delete members, modify weights, modify the voting threshold, and so on。
There are several points worth noting about the voting model:
-- For each voting operation, if it is a member voting, record the operation content and voting members, and do not repeat the counting of votes
-- For each voting operation, after the counting of votes, the number of valid votes / members is calculated, and if it is greater than the effective threshold of this operation, the corresponding operation takes effect
+- For each voting operation, if a member votes, the operation content and the voting member are recorded, and votes are not counted twice
+- For each voting operation, after the votes are counted, the number of valid votes / number of members is calculated; if it is greater than the effective threshold of this operation, the operation takes effect
- Vote set expiration time, according to the block height, blockLimit 10 times, fixed can not be changed
-The addition and revocation of the operation and maintenance role must be operated by the member role.。There is no O & M account at the beginning of the chain. When at least one O & M account exists, the permissions of O & M are controlled.。The business account can call the query interface on the chain and the write interface of the specified contract for operation and maintenance.。
+The addition and revocation of the operation and maintenance role must be operated by the member role。There is no O & M account at the beginning of the chain.
When at least one O & M account exists, the permissions of O & M are controlled。The business account can call the query interface on the chain and the write interface of contracts specified by O & M。
### Compatibility Description
-Currently, the role permission model is based on write permission control for various types of tables in the system。We have done our best to keep the same experience as the previous version, but for the sake of complete and strict permission control, the new chain of FISCO BCOS v2.5, the console grantPermissionManager command is no longer valid, the original PermissionManager permissions belong to the role of the committee.。For pre-v2.5 chains, the directive is still valid。
+Currently, the role permission model is based on write permission control for the various types of tables in the system。We have done our best to keep the same experience as the previous version, but for the sake of complete and strict permission control, on chains newly created with FISCO BCOS v2.5 the console grantPermissionManager command is no longer valid, and the original PermissionManager permissions belong to the committee role。For pre-v2.5 chains, the command is still valid。
## How to use role permissions?
@@ -53,7 +53,7 @@ This section will take "committee member addition and deletion" and "operation a
### Add and delete members
-Use the get _ account.sh script included in console v1.0.10 or later to generate the following three accounts.。After configuring the console, use the console's-pem option loads 3 private keys separately to start 3 consoles。
+Use the get _ account.sh script included in console v1.0.10 or later to generate the following three accounts。After configuring the console, use the console's -pem option to load the 3 private keys separately and start 3 consoles。
```
# Account number 10x61d88abf7ce4a7f8479cff9cc1422bef2dac9b9a.pem# Account number 20x85961172229aec21694d742a5bd577bedffcfec3.pem# Account number 30x0b6f526d797425540ea70becd7adac7d50f4a7c0.pem
@@ -61,19 +61,19 @@ Use the get _ account.sh script included in console v1.0.10 or later to generate
#### Add account 1 as a member
-Additional members require a vote by the Chain Governance Committee, and valid votes greater than the threshold are valid.。Since only account 1 is a member, the vote on account 1 will take effect.。
+Adding a member requires a vote by the chain governance committee, and the operation takes effect when the valid votes exceed the threshold。Since account 1 is the only member, the vote of account 1 takes effect。
![](../../../../images/articles/role_authority_model_realization/IMG_5555.PNG)
#### Use account 1 to add account 2 as a member
-Since only account 1 is a member here, the judgment that the threshold is met takes effect immediately after voting with account 1.。
+Since account 1 is the only member here, the threshold is judged to be met and the operation takes effect immediately after account 1 votes。
![](../../../../images/articles/role_authority_model_realization/IMG_5556.PNG)
#### Revoke the member authority of account number 2
-At this time, there are two members in the system, account 1 and account 2, and the default voting effective threshold is 50%, so both members need to vote to revoke the member
authority of account 2, valid votes / total votes = 2 / 2 = 1 > 0.5 to meet the conditions。
+At this time, there are two members in the system, account 1 and account 2, and the default voting effective threshold is 50%, so both members need to vote to revoke the member authority of account 2: valid votes / total votes = 2 / 2 = 1 > 0.5, which meets the condition。
Account 1 votes to revoke the member authority of account 2, as shown in the following figure:
@@ -85,7 +85,7 @@ Account 2 operation vote to revoke the membership of account 2, as shown in the
### Add and delete operation and maintenance
-Members can add and revoke O & M roles. The permissions of O & M roles include deploying contracts, creating tables, freezing and unfreezing deployed contracts, and using CNS services.。
+Members can add and revoke O & M roles. The permissions of O & M roles include deploying contracts, creating tables, freezing and unfreezing deployed contracts, and using CNS services。
#### Use account 1 to add account 3 as operation and maintenance
@@ -107,10 +107,10 @@ Account 1 is a member and does not have the permission to deploy the contract. D
#### Use account 1 to revoke the operation and maintenance permission of account 3
-Account 1 is a member who can revoke operations, as shown below.
+Account 1 is a member and can revoke the O & M role of account 3, as shown below。
![](../../../../images/articles/role_authority_model_realization/IMG_5562.PNG)
## SUMMARY
-As an important feature of the alliance chain, permission control needs to be flexible and powerful, but how to achieve a good user experience on this basis requires continuous improvement and optimization.。The pre-FISCO BCOS v2.5 permission control is flexible and powerful, but at the same time, the community has received a lot of feedback that the threshold for understanding the use of permission control is too high。Through the role permissions, we hope to maintain the original function at the same time, lower the learning threshold, improve the user experience。The work of integrating and optimizing permission control is still in progress, and it is hoped that in the future, a permission control solution with full coverage from the bottom layer to the application will be realized.。Welcome everyone to discuss the exchange, positive feedback experience and improvement suggestions。
\ No newline at end of file
+As an important feature of the alliance chain, permission control needs to be flexible and powerful, but achieving a good user experience on this basis requires continuous improvement and optimization。Permission control before FISCO BCOS v2.5 was flexible and powerful, but the community gave a lot of feedback that the threshold for understanding and using it was too high。Through role permissions, we hope to keep the original functions while lowering the learning threshold and improving the user experience。The work of integrating and optimizing permission control is still in progress, and we hope in the future to realize a permission control solution with full coverage from the bottom layer to the application。Everyone is welcome to discuss, share experience, and offer suggestions for improvement。
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/37_safety/third-party-CA_node_deployment.md b/3.x/en/docs/articles/3_features/37_safety/third-party-CA_node_deployment.md
index 55908d4e8..22069ecbb 100644
--- a/3.x/en/docs/articles/3_features/37_safety/third-party-CA_node_deployment.md
+++ b/3.x/en/docs/articles/3_features/37_safety/third-party-CA_node_deployment.md
@@ -8,11 +8,11 @@ How to Generate CA Certificate?Do nodes cross-validate certificates when they
First, explain the background and reasons for my third-party CA certificate transformation:
-- People in the community often ask about the transformation of third-party CA certificates, and I personally feel that this is a point of concern for everyone.。
-- In some of our projects, the business specified to use a third-party CA certificate, and actual production needs required us to perform a CA transformation。
-- In the judicial domain blockchain depository scenario, a certificate issued by a certificate authority with an electronic certification license is required to be used as an electronic certification。
+- People in the community often ask about the transformation of third-party CA certificates, and I personally feel that this is a point of concern for everyone。
+- In some of our projects, the business specified the use of a third-party CA certificate, and actual production needs also required us to carry out CA transformation。
+- In the judicial blockchain depository scenario, a certificate issued by a certificate authority with an electronic certification license is required to be used for electronic certification。
-In view of the above three points, I think everyone is very concerned about how to carry out third-party CA certificate transformation.。
+In view of the above three points, I think everyone is very concerned about how to carry out third-party CA certificate transformation。
The FISCO BCOS technical document provides a case of CFCA certificate transformation, but some details have yet to be
improved, so I want to write a tutorial that combines production environment transformation, third-party CA cooperation, compliance, technical implementation, etc. Specific instructions to see if it can be helpful to other community users。
@@ -20,28 +20,28 @@ The FISCO BCOS technical document provides a case of CFCA certificate transforma
Basically, third-party CA certificates may be used in blockchain scenarios that use CA certificates. Consider whether to use third-party CA certificates:
-- Whether the alliance chain requires the relevant qualifications behind the third-party CA organization。
-- In the alliance chain, the participants control whether the third-party CA institution is required to issue certificates as an impartial institution for node access management and subsequent control, so as to prevent problems such as arbitrary issuance of certificates in the self-built CA system that lead to node malfeasance.。
+- Whether the alliance chain requires the relevant qualifications behind the third-party CA organization。
+- Whether, in the alliance chain, the participants require a third-party CA institution, as an impartial institution, to issue certificates for node access management and subsequent control, so as to prevent problems such as node malfeasance caused by arbitrary certificate issuance in a self-built CA system。
### Why do I need to configure the whitelist list in the two-level certificate mode?What will be the problem if it is not configured?
-In the two-level certificate mode, a CA certificate provided by a third party is used as a chain certificate. If a whitelist is not configured, any CA certificate issuing node certificate can be connected to the chain.。
+In the two-level certificate mode, a CA certificate provided by a third party is used as a chain certificate. If a whitelist is not configured, a node whose certificate is issued by any CA can connect to the chain。
## practical operation step teaching
-Next, let's take a look at the specific practical steps for deploying the underlying node using a third-party CA certificate.。
+Next, let's take a look at the specific practical steps for deploying the underlying node using a third-party CA certificate。
The main points of the transformation are:
-- The underlying CA of FISCO BCOS provides a three-level mode by default, chain certificate--> Agency Certificate--> Node Certificate;
+- The underlying CA of FISCO BCOS provides a three-level mode by default: chain certificate --> agency certificate --> node certificate;
- In the real world, although the CA can provide a certificate issued at level 3, there are compliance risks in some scenarios;
-- Our current practice is to remove the agency certificate from the chain certificate.--> Issuance of node certificates. The chain certificate is issued by the CA. Crt provided by the CA. It is used in conjunction with the whitelist mechanism to complete the deployment of basic underlying nodes.。
+- Our current practice is to remove the agency tier, so that the chain certificate directly issues the node certificates. The chain certificate is the CA.crt provided by the CA, used in conjunction with the whitelist mechanism to complete the basic underlying node deployment。
### Environmental preparation
1. Two test servers: 118.25.208.8, 132.232.115.126
-The operating system is Ubuntu.:18.04
+The operating system is Ubuntu 18.04
3.
openssl tool Ubuntu 18.04 comes with openssl 1.1.1 @@ -51,97 +51,97 @@ The operating system is Ubuntu.:18.04 https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/certificate_list.html#id2 -Note: During the test, the node private key and the request certificate file are managed uniformly, but in the production environment, the node private key should be generated by the administrator of each institution and submitted to the CA, and the private key should be retained separately.。 +Note: During the test, the node private key and the request certificate file are managed uniformly, but in the production environment, the node private key should be generated by the administrator of each institution and submitted to the CA, and the private key should be retained separately。 ### Foundation Certificate Preparation ##### Generate the base node private key and node certificate request file -Use the openssl tool to generate the corresponding node private key and node certificate request file, as well as the corresponding node.nodeid (nodeid is the hexadecimal representation of the public key).。(Note: The node.key in the node.nodeid generated in the fourth step of each node is modified by the cert _ IP _ port.key of the corresponding node. This operation is required by the underlying layer。) +Use the openssl tool to generate the corresponding node private key and node certificate request file, as well as the corresponding node.nodeid (nodeid is the hexadecimal representation of the public key)。(Note: The node.key in the node.nodeid generated in the fourth step of each node is modified by the cert _ IP _ port.key of the corresponding node. 
This operation is required by the underlying layer。)
-- Production node 1 node _ 118.25.208.8 _ 30300 Related Files
+- Production node 1 node _ 118.25.208.8 _ 30300 related files
![](../../../../images/articles/third-party-CA_node_deployment/IMG_5545.PNG)
-- Production node 2 node _ 118.25.208.8 _ 30301 Related Files
+- Production node 2 node _ 118.25.208.8 _ 30301 related files
![](../../../../images/articles/third-party-CA_node_deployment/IMG_5546.PNG)
-- Production node 3 node _ 132.232.115.126 _ 30300 Related Files
+- Production node 3 node _ 132.232.115.126 _ 30300 related files
![](../../../../images/articles/third-party-CA_node_deployment/IMG_5547.PNG)
-- Production node 4 node _ 132.232.115.126 _ 30301 Related Files
+- Production node 4 node _ 132.232.115.126 _ 30301 related files
![](../../../../images/articles/third-party-CA_node_deployment/IMG_5548.PNG)
(FISCO BCOS V2.5 version, using the private key and EC secp 256k1 curve algorithm)。
-### The CA side issues the node certificate.
+### The CA side issues the node certificate
-Submit the node.csr file of each node to the CA. The CA returns a CA.crt certificate as a chain certificate and four node certificates in pem format.。(Note: In FISCO BCOS, the CA returns the certificate in the following mode: root-> node -issuer: the content of the issuer certificate is mixed in the node certificate.。)
+Submit the node.csr file of each node to the CA. The CA returns a CA.crt certificate as a chain certificate and four node certificates in pem format。(Note: In FISCO BCOS, the CA returns the certificate in the mode root -> node; issuer: the content of the issuer certificate is mixed in the node certificate。)
### building chain
-- Step 1: Download the domestic image, cd ~ / & & git clone https://gitee.com/FISCO-BCOS/generator.git
+- Step 1: Download from the domestic mirror: cd ~ / & & git clone https://gitee.com/FISCO-BCOS/generator.git
-- Step 2: Complete the installation, cd ~ / generator & & bash.
/ scripts / install.sh complete the installation, if the output usage: generator xxx, the installation is successful
+- Step 2: Complete the installation: cd ~ / generator & & bash. / scripts / install.sh. If "usage: generator xxx" is output, the installation is successful
-- Step 3: Get the node binary and pull the latest fisco-bcos binary file to meta (domestic cdn), if FISCO output-BCOS Version : x.x.x-x indicates success.
+- Step 3: Obtain the node binary and pull the latest fisco-bcos binary file to meta (domestic cdn). If "FISCO-BCOS Version: x.x.x-x" is output, it means success
-- Step 4: Agency Assignment
+- Step 4: Agency assignment
- 118.25.208.8 is selected as Institution A, and Institution A is responsible for the generation of the Genesis block.
+ 118.25.208.8 is selected as Institution A, and Institution A is responsible for the generation of the Genesis block
Select 132.232.115.126 as institution B
-- Step 5: Use the CA.crt certificate provided by the CA as the chain certificate
+- Step 5: Use the CA.crt certificate provided by the CA as the chain certificate
Manually create the dir _ chain _ ca directory in the directory of organization A and place CA.crt in the dir _ chain _ ca directory
-- Step 6: Perform node certificate migration in the meta directories of institution A and institution B
+- Step 6: Perform node certificate migration in the meta directories of institution A and institution B
In the meta directory, manually create the corresponding node directory, where institution A is node _ 118.25.208.8 _ 30300 and node _ 118.25.208.8 _ 30301, and institution B is node _ 132.232.115.126 _ 30300 and node _ 132.232.115.126 _ 30301
- Each directory needs to store the corresponding node certificate, node private key and node Id, and distribute the node certificate generated by the CA, as well as the initially prepared node id, node private key and other files to the corresponding node directory, as shown in the figure.
+ Each directory needs to store the corresponding node certificate, node private key, and node ID; distribute the node certificate generated by the CA, together with the previously prepared node ID, node private key, and other files, to the corresponding node directory, as shown in the figure
![](../../../../images/articles/third-party-CA_node_deployment/IMG_5549.PNG)
-- Step 7: Institution A collects all node certificates
+- Step 7: Organization A collects all node certificates
- In the meta directory of institution A, collect the corresponding node certificates for subsequent generation of genesis blocks.。As shown in the figure:
+ In the meta directory of organization A, collect the corresponding node certificates for the subsequent generation of the genesis block, as shown in the figure:
![](../../../../images/articles/third-party-CA_node_deployment/IMG_5550.PNG)
-- Step 8: Manually configure institution A to modify group _ genesis.ini in the conf folder to generate a genesis block
+- Step 8: Manually configure organization A: modify `group_genesis.ini` in the conf folder to generate the genesis block
-- Step 9: Modify node _ deployment.ini in the conf directory of organization A and organization B;Where p2p address is the external network address, rpc, channel address is the internal network address
+- Step 9: Modify `node_deployment.ini` in the conf directory of organization A and organization B; the p2p address is the external network address, while the rpc and channel addresses are internal network addresses
-- Step 10: Manually create the peers.txt file in the organization meta directory
+- Step 10: Manually create the peers.txt file in each organization's meta directory
Create peers.txt and peersB.txt in institution A, and create peers.txt and peersA.txt in institution B.
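For concreteness, step 10 could be sketched as shell commands using the addresses from this deployment. This is an illustrative sketch only: the directory layout and the assumed `IP:port` entry format should be checked against the generator documentation.

```shell
# Hypothetical sketch of step 10, run from organization A's generator directory.
# peers.txt lists organization A's own nodes; peersB.txt lists organization B's.
mkdir -p ./meta
cat > ./meta/peers.txt << 'EOF'
118.25.208.8:30300
118.25.208.8:30301
EOF
cat > ./meta/peersB.txt << 'EOF'
132.232.115.126:30300
132.232.115.126:30301
EOF
```

Organization B would mirror this, writing its own nodes to peers.txt and organization A's nodes to peersA.txt.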
Take institution A as an example, the content of peers.txt is as follows:
![](../../../../images/articles/third-party-CA_node_deployment/IMG_5551.PNG)
-- Step 11: Generate nodes in mechanism A and mechanism B, and execute commands in the generator of mechanism A. / generator--build _ install _ package. / meta / peersB.txt. / nodeA Generates the corresponding node of organization A;Execute the command in the generator of institution B. / generator--build _ install _ package. / meta / peersA.txt. / nodeB Generates the corresponding node of organization B
-- step 12: Nodes running two agencies: bash. / nodeA / start _ all.sh and bash. / nodeB / start _ all.sh;The consensus state is normal as shown in the figure:
+- Step 11: Generate the nodes of organization A and organization B. In organization A's generator directory, run `./generator --build_install_package ./meta/peersB.txt ./nodeA` to generate organization A's nodes; in organization B's generator directory, run `./generator --build_install_package ./meta/peersA.txt ./nodeB` to generate organization B's nodes
+- Step 12: Start the nodes of both organizations: `bash ./nodeA/start_all.sh` and `bash ./nodeB/start_all.sh`; the consensus state is normal, as shown in the figure:
![](../../../../images/articles/third-party-CA_node_deployment/IMG_5552.PNG)
-- Step 13: Console deployment and contract deployment testing
+- Step 13: Console deployment and contract deployment test
- Compare the console operation results corresponding to agency A and agency B, and the data is consistent to ensure that the consensus is normal.
+ Compare the console operation results of organization A and organization B; if the data is consistent, consensus is working normally
-- Step 14: Configure the whitelist in the config.ini of the corresponding node
+- Step 14: Configure the whitelist in the config.ini of the corresponding node
-So far, we have completed the transformation of the third-party CA certificate combined with the deployment of the underlying node.。From the process point of view, mainly in the chain certificate--> Agency Certificate--> The node certificate generation process has changed, and the peers.txt file and node directory need to be manually created in the meta directory.。
+So far, we have completed the deployment of underlying nodes with a third-party CA certificate. In terms of process, the main change lies in the chain certificate -> agency certificate -> node certificate generation flow, and the peers.txt file and node directories need to be created manually in the meta directory.
## Join the FISCO BCOS Open Source Community
-Speaking of my bond with the FISCO BCOS open source community also stems from the CA certificate, in a government-enterprise project docking, the owner requires the bottom of the blockchain to adapt to the national secret, and use the CA certificate specified by the owner.。
+My bond with the FISCO BCOS open source community also stems from CA certificates: in a government-enterprise project, the owner required the underlying blockchain to support the Chinese national cryptography (guomi) standards and to use the CA certificate designated by the owner.
-In the early days, our team used the bottom layer of other blockchains, which could not be directly adapted to the national secret, and the transformation was difficult, long and costly.;Considering that many subsequent domestic projects will involve the adaptation of national secrets and CA transformation, we urgently need a complete set of blockchain underlying to support the above needs.。Through the introduction of friends in the circle, I learned about FISCO BCOS, and finally chose FISCO BCOS after in-depth technical research and feasibility analysis.。
+In the early days, our team used other blockchain platforms that could not directly support the national cryptography standards, and retrofitting them was difficult, slow, and costly. Since many subsequent domestic projects would involve national cryptography adaptation and CA transformation, we urgently needed an underlying blockchain that supported these needs. Friends in the industry introduced me to FISCO BCOS, and after in-depth technical research and feasibility analysis we finally chose it.
FISCO BCOS open source community has created an atmosphere of open communication, welcome everyone in the community to discuss with me。
@@ -149,4 +149,4 @@ FISCO BCOS open source community has created an atmosphere of open communication
More details about the content shared in this issue can be found through [The Power of Magnetism](https://www.yc-l.com/article/49.html)Learn more。
-The Yuan Magnetic Power Forum is a user exchange platform contributed by Lin Xuanming and his team to the FISCO BCOS open source community, mainly for sharing and learning FISCO BCOS and related technical knowledge.。Thank you for all kinds of contributions to the community, each of your participation will become a driving force for the growth of the community!
\ No newline at end of file
+The Yuan Magnetic Power Forum is a user exchange platform contributed by Lin Xuanming and his team to the FISCO BCOS open source community, mainly for sharing and learning FISCO BCOS and related technical knowledge. Thank you for your contributions to the community; every bit of your participation drives the community's growth!
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/37_safety/tsl1.2_establish_process.md b/3.x/en/docs/articles/3_features/37_safety/tsl1.2_establish_process.md
index 8cfd43cc2..35c3ec4fe 100644
--- a/3.x/en/docs/articles/3_features/37_safety/tsl1.2_establish_process.md
+++ b/3.x/en/docs/articles/3_features/37_safety/tsl1.2_establish_process.md
@@ -8,21 +8,21 @@ As we said before, everything should be questioned on the Internet, so sending d
The SSL communication process is part of TLS 1.2**[1]**. The ultimate goal is to transmit packets securely. At the heart of SSL communication is authentication through certificates, followed by a "handshake" that interactively generates a symmetric session master key for this round of communication. From then on, packets are encrypted and decrypted with this session master key, and only ciphertext is transmitted on the network.
-This article will take FISCO BCOS node communication two-way authentication as an example to explain how both parties to the communication load, use, verify the certificate, and how to generate the session master key.。
+Taking the mutual authentication between FISCO BCOS nodes as an example, this article explains how the two communicating parties load, use, and verify certificates, and how the session master key is generated.
## Part 1 Master Key Settings
-Network compression is mainly implemented at the P2P network layer, and the system framework is as follows: the master key in TLS is a symmetric key, that is, the keys used by the client and server are the same, and the process of handshake is that the two sides interact with some random numbers to complete the setting of the master key。
+The master key in TLS is a symmetric key, that is, the client and the server use the same key; the handshake is the process in which the two sides exchange random numbers to complete the setting of the master key.
Let's take the DH key exchange algorithm used to negotiate the master key as an example. If the attacker is Eve, Eve can obtain the intermediate values of the transmission:
-1. Alice and Bob first negotiate fair a large prime p, and the generator g, Eve can get p and g.;
+1. Alice and Bob first publicly agree on a large prime p and a generator g; Eve can obtain p and g;
2. Alice chooses a random integer a ∈ Zp, calculates A = g^a mod p, and sends A to Bob; Eve can obtain A
3. Bob chooses a random integer b ∈ Zp, calculates B = g^b mod p, and sends B to Alice; Eve can obtain B
4. Alice calculates S = B^a = g^(ab) mod p
5. Bob calculates S = A^b = g^(ab) mod p
-Through the above process, Alice and Bob negotiate a key S. Although Eve obtains the intermediate values A, B, p, g, according to the discrete logarithm problem, Eve cannot obtain the specific value of S.。
+Through the above process, Alice and Bob negotiate a key S. Although Eve obtains the intermediate values A, B, p, and g, by the hardness of the discrete logarithm problem**[2]** Eve cannot recover the value of S.
## Part 2 Certificate Validation
@@ -32,11 +32,11 @@ In the previous article, we talked about [the issuance process of certificates](
▲ Thanks to Li Lianwen, the core developer of the community, for his contribution
-- When the program starts, the local ca.crt and node.crt are loaded first;
-- When a node verifies the certificate of the other node, it first uses the public key in the other node's node.crt to verify the attached signature. When the verification passes, it can confirm that the corresponding node has the node.key corresponding to the current node.crt.;
-- The node then uses the information in agency.crt to verify that node.crt is a legitimate agent.;
-- Finally, the node uses the information in locally loaded ca.crt to verify that agency.crt is issued by the Federation Chain Committee.;
-- When both are verified, it means that the node.crt received by the node is issued by the locally loaded ca.crt。
+- When the program starts, the local ca.crt and node.crt are loaded first;
+- When a node verifies the certificate of its peer, it first uses the public key in the peer's node.crt to verify the attached signature; when verification passes, it confirms that the peer holds the node.key corresponding to that node.crt;
+- The node then uses the information in agency.crt to verify that node.crt was legitimately issued by the agency;
+- Finally, the node uses the information in the locally loaded ca.crt to verify that agency.crt was issued by the consortium chain committee;
+- When all checks pass, it means that the node.crt received by the node chains back to the locally loaded ca.crt.
## Part 3 TLS handshake process
@@ -52,10 +52,10 @@ The following figure shows the TLS handshake process obtained by capturing packe
**In the process we see a total of 6 packets, namely:**
-- The client hello sent by the client to the server.
+- The client hello sent by the client to the server
- The server hello sent by the server to the client, sending the server certificate, and negotiating parameters
-- The client certificate sent by the client to the server for parameter negotiation.
-- The end flag sent by the server to the client. The handshake is complete.
+- The client certificate sent by the client to the server for parameter negotiation
+- The end flag sent by the server to the client; the handshake is complete
### (1) client hello
@@ -112,12 +112,12 @@ After receiving the certificate request from the server, the client sends its ow
![](../../../../images/articles/tsl1.2_establish_process/IMG_5538.PNG)
-In this step, the client uses the locally loaded ca.crt to verify the server certificate, and then performs parameter negotiation.
+In this step, the client uses the locally loaded ca.crt to verify the server certificate, and then performs parameter negotiation:
```
{
client key exchange parameters
-the client 's verification result of the server - side certificate.
+the client's verification result of the server-side certificate
Transmission of content using ciphertext of session master key
client's signature over the (1)(2)(3) process
}
@@ -127,13 +127,13 @@ client to(1)(2)(3)Signature of the process
![](../../../../images/articles/tsl1.2_establish_process/IMG_5539.PNG)
-After receiving the data packet, the server uses the session master key to encrypt and transmit the data packet.。
+After receiving the data packet, the server uses the session master key to encrypt and transmit subsequent data packets.
------
#### References
-[【1】TLS(Transport Layer Security)](https://baike.baidu.com/item/TLS/2979545?fr=aladdin)The secure transport layer protocol is used to provide confidentiality and data integrity between two communicating applications。The protocol consists of two layers: TLS Record and TLS Handshake.。
+[【1】TLS (Transport Layer Security)](https://baike.baidu.com/item/TLS/2979545?fr=aladdin): the transport layer security protocol provides confidentiality and data integrity between two communicating applications. It consists of two layers: TLS Record and TLS Handshake.
[【2】Discrete Logarithm Problem](https://www.doc.ic.ac.uk/~mrh/330tutor/ch06s02.html)
diff --git a/3.x/en/docs/articles/3_features/38_privacy/index.md b/3.x/en/docs/articles/3_features/38_privacy/index.md
index fe22f820d..f9a585b9b 100644
--- a/3.x/en/docs/articles/3_features/38_privacy/index.md
+++ b/3.x/en/docs/articles/3_features/38_privacy/index.md
@@ -4,6 +4,6 @@ Physical isolation: Data isolation between groups
Privacy protection protocol: support group signature, ring signature, homomorphic encryption
Scenario-based privacy protection mechanism: WeDPR supports hidden payment, anonymous voting, anonymous bidding, selective disclosure and other scenarios
-- [FISCO BCOS privacy features: group / ring signature technology implementation](./privacy_protection_group_and_ring_signature.md)
+- [FISCO BCOS Privacy Features: Group / Ring Signature Technology Implementation](./privacy_protection_group_and_ring_signature.md)
- [On-chain ciphertext participating in computation? Demystifying homomorphic encryption | FISCO BCOS privacy features](./privacy_protection_homomorphic_encryption.md)
-- [topic of privacy protection](http://mp.weixin.qq.com/mp/homepage?__biz=MzU0MDY4MDMzOA==&hid=5&sn=d9ae81771056e6fa4e196baefec33ada&scene=18#wechat_redirect)
\ No newline at end of file
+- [Privacy Protection Topics](http://mp.weixin.qq.com/mp/homepage?__biz=MzU0MDY4MDMzOA==&hid=5&sn=d9ae81771056e6fa4e196baefec33ada&scene=18#wechat_redirect)
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/38_privacy/privacy_protection_group_and_ring_signature.md b/3.x/en/docs/articles/3_features/38_privacy/privacy_protection_group_and_ring_signature.md
index f142357db..41ce0f64e 100644
--- a/3.x/en/docs/articles/3_features/38_privacy/privacy_protection_group_and_ring_signature.md
+++ b/3.x/en/docs/articles/3_features/38_privacy/privacy_protection_group_and_ring_signature.md
@@ -4,83 +4,83 @@ Author : He Shuanghong | FISCO BCOS Core Developer
## Foreword
-Security and privacy has always been a hot topic in the field of blockchain, and it is also a strategic high ground for major mainstream blockchain platforms.。FISCO BCOS has made great efforts to address security and privacy from the bottom layer to applications, architectures to protocols, storage to networks, and has now implemented functional modules such as account management, disk encryption, secure communication, and permission control.。This article will introduce a group of ring signatures of FISCO BCOS privacy features.。
+Security and privacy have always been hot topics in the blockchain field, and a strategic high ground for the major mainstream blockchain platforms. FISCO BCOS has invested heavily in security and privacy, from the bottom layer to applications, from architecture to protocols, and from storage to networking, and has implemented functional modules such as account management, disk encryption, secure communication, and permission control. This article introduces the group / ring signature privacy feature of FISCO BCOS.
-Group / ring signature is a special signature algorithm that was originally used in the blockchain field to implement hidden payments.。It can hide the identity of the signer well, allowing the node to verify the correctness of the transaction signature without exposing the public key information of the initiator of the transaction.。This feature has broad application prospects in the alliance chain.。
+Group / ring signatures are special signature algorithms first used in the blockchain field to implement hidden payment. They hide the signer's identity well, allowing a node to verify the correctness of a transaction signature without exposing the public key of the transaction's initiator. This feature has broad application prospects in consortium chains.
## What is a group / ring signature?
-To understand group / ring signatures, you have to start with anonymity.。
+To understand group / ring signatures, you have to start with anonymity.
-In the real world, anonymity means that the subject's behavior does not reveal the subject's identity, a need that has existed almost since the dawn of human civilization.。
+In the real world, anonymity means that a subject's behavior does not reveal the subject's identity, a need that has existed almost since the dawn of human civilization.
From a cryptographic point of view, anonymity has two meanings:
1. Given a ciphertext, its public key cannot be restored, mainly used for security analysis of cryptographic algorithms;
2. Users will not leak identity information in the process of using the cryptographic scheme, which is more in line with the semantics of the real world。
-The earliest implicit concept of identity in cryptography is electronic signature, where the signer signs the message with a private key, and the verifier can use the signer's public key to verify the legitimacy of the signature.。In practice, public keys are often bound to certificates (except for identity-based encryption, where there is no certificate because identity is the public key), and the attributes of the certificate naturally reveal the identity information of the owner, so traditional signature schemes lack anonymity.。
+The earliest implicit notion of identity in cryptography is the electronic signature: the signer signs a message with a private key, and the verifier uses the signer's public key to verify the legitimacy of the signature. In practice, public keys are usually bound to certificates (except in identity-based encryption, where the identity itself is the public key and no certificate is needed), and the attributes of a certificate naturally reveal its owner's identity, so traditional signature schemes lack anonymity.
-In the early 1990s, Chaum and van Heyst(EUROCRYPT)The concept of group signature is proposed, which effectively solves the identity privacy problem of electronic signature.。
+In the early 1990s, Chaum and van Heyst proposed the concept of group signatures (at EUROCRYPT), which effectively solves the identity privacy problem of electronic signatures.
-The "group" in a group signature can be understood as an organization.。There is a leader in the organization, the group owner, responsible for member management, and each member of the organization can**Anonymous**Signing on behalf of the organization。The main algorithms for group signature schemes include.
+The "group" in a group signature can be understood as an organization. The organization has a leader, the group owner, who is responsible for member management, and each member of the organization can sign on behalf of the organization **anonymously**. The main algorithms of a group signature scheme include:
- Create a group, which is executed by the group owner to generate the group owner's private key and group public key;
-- Add a group member, which is executed by the group owner to generate the private key and certificate of the group member. The certificate is used to prove the identity of the group member.;
-- Generate a group signature, where group members sign information with a private key;
-- Verify the group signature, the verifier can verify the legitimacy of the signature through the group public key, the verifier can determine that the signature does come from the group, but can not determine which group member's signature.;
-- Open the group signature, the group owner can obtain the signer certificate through the signature information, so as to track the identity of the signer.。
+- Add a group member, executed by the group owner, generating the member's private key and certificate; the certificate is used to prove group membership;
+- Generate a group signature: a group member signs information with its private key;
+- Verify a group signature: the verifier checks the legitimacy of the signature with the group public key; the verifier can determine that the signature indeed comes from the group, but cannot determine which group member produced it;
+- Open a group signature: the group owner can recover the signer's certificate from the signature information and thus trace the signer's identity.
![](../../../../images/articles/privacy_protection_group_and_ring_signature/IMG_4942.PNG)
-Since the group signature has a group master role with absolute permissions, the anonymity of the group signature is relative.。This feature applies to scenarios that require regulatory intervention.。
+Since group signatures have a group-owner role with absolute permissions, their anonymity is relative. This property suits scenarios that require regulatory intervention.
-In pursuit of complete anonymity, Rivest proposed in 2001 a group-owner-free scheme in which any member could join the organization spontaneously.。A parameter implied by the signature in this scheme forms a ring according to certain rules and is therefore named a ring signature。Essentially, both "group" and "ring" can be understood as an organization of multiple members, the difference being whether there is a leader that can open a signature.。
+In pursuit of complete anonymity, Rivest proposed in 2001 a scheme without a group owner, in which any member can join the organization spontaneously. A parameter implied by the signature in this scheme forms a ring according to certain rules, hence the name ring signature. Essentially, both "group" and "ring" can be understood as an organization of multiple members; the difference is whether there is a leader who can open a signature.
The process of the ring signature algorithm is as follows:
-- Initialize the ring, which is executed by the ring members to generate the ring parameters, which are like the password for WeChat face-to-face group building, and any member who knows the parameters can join the ring.;
-- Join the ring, performed by the ring members, and obtain the public-private key pair through the ring parameters;
+- Initialize the ring, executed by ring members to generate the ring parameters; the parameters are like the password of a WeChat face-to-face group, and any member who knows them can join the ring;
+- Join the ring, performed by ring members, obtaining a public-private key pair from the ring parameters;
- Generate ring signatures, where ring members sign information using a private key and any number of ring public keys;
- Verify the ring signature, the verifier can verify the validity of the signature through the ring parameters。
-The ring signature scheme hides the signer's public key in the list of public keys used in the signature. The larger the list of public keys, the higher the anonymity, which is suitable for multi-party collaboration scenarios with higher privacy requirements.。
+The ring signature scheme hides the signer's public key in the list of public keys used by the signature. The larger the public key list, the higher the anonymity, which suits multi-party collaboration scenarios with higher privacy requirements.
## Technical Selection of FISCO BCOS
-At present, group / ring signatures are mainly used in voting, bidding, auction and other scenarios to protect the identity privacy of participants.。For alliance chains, multiple agencies within the same alliance collaborate and play games, and in some scenarios, protecting user identities is necessary.。
+At present, group / ring signatures are mainly used in voting, bidding, auction, and similar scenarios to protect the identity privacy of participants. In consortium chains, multiple organizations within the same consortium both collaborate and compete, so protecting user identities is necessary in some scenarios.
-FISCO BCOS integrated group / ring signature scheme provides users with a tool that can guarantee identity anonymity。Based on the consideration of the complexity of the scheme and the computational cost on the chain, only the most necessary step, namely signature verification, is retained on the chain, while other algorithms are provided to the application layer in the form of independent functional components.。
+The group / ring signature schemes integrated into FISCO BCOS give users a tool that guarantees identity anonymity. Considering the complexity of the schemes and the computational cost on chain, only the most necessary step, signature verification, is kept on chain, while the other algorithms are provided to the application layer as independent functional components.
The group signature scheme applied to the blockchain needs to meet the following two requirements:
-1. In order to facilitate the management of members, it is necessary to support the withdrawal of group members.;
+1. In order to facilitate the management of members, it is necessary to support the withdrawal of group members;
2. Considering the limited storage resources of the blockchain, the signature data cannot be too large and can be aligned to the standard RSA signature。
-Therefore, we chose the first group signature scheme BBS04 Short Group Signatures, which was proposed by Boneh at CRYPTO in 2004.。
+Therefore, we chose BBS04 (Short Group Signatures), proposed by Boneh at CRYPTO 2004.
-In order to facilitate accountability and prevent the signer from being framed, a scheme with accusation correlation and defamation resistance is needed, i.e., two ring signatures generated based on the same public key list can determine whether they are from the same signer.。Based on this consideration, we chose the first linkable ring signature scheme LSAG (Linkable Spontaneous Anonymous Group Signature for Ad Hoc Groups) proposed by Joseph in 2004.。
+To facilitate accountability and prevent a signer from being framed, a scheme with linkability and non-slanderability is needed, i.e., given two ring signatures generated over the same public key list, one can determine whether they come from the same signer. Based on this consideration, we chose the first linkable ring signature scheme, LSAG (Linkable Spontaneous Anonymous Group Signature for Ad Hoc Groups), proposed by Joseph Liu in 2004.
-The BBS04 scheme is based on bilinear pair construction, and the group administrator can initialize the group according to different linear pairs.。The group signature storage and computation costs for different linear pair types are as follows:
+The BBS04 scheme is built on bilinear pairings, and the group administrator can initialize the group with different pairing types. The storage and computation costs of group signatures for different pairing types are as follows:
![](../../../../images/articles/privacy_protection_group_and_ring_signature/IMG_4943.PNG)
-where the group order of each linear pair is freely configurable, the default values are used in the above experiment。As you can see, the time overhead gap for on-chain verification is not large, and users can choose the appropriate linear pair type and group order according to their security and performance requirements.。
+The group order of each pairing type is freely configurable; the default values are used in the above experiment. As can be seen, the time overhead of on-chain verification differs little across types, and users can choose the appropriate pairing type and group order according to their security and performance requirements.
In the LSAG scheme, the storage and computation overhead of ring signatures for different ring sizes are as follows:
![](../../../../images/articles/privacy_protection_group_and_ring_signature/IMG_4944.PNG)
-Since the ring signature length, signature and verification time are linearly related to the number of ring members, it is recommended that the number of ring members does not exceed 32 to prevent over-gas.。
+Since the ring signature length and the signing and verification times grow linearly with the number of ring members, it is recommended to keep the ring size at 32 or fewer members to avoid exceeding the gas limit.
## How to use group / ring signatures in FISCO BCOS
-FISCO BCOS version 2.3 begins to integrate the signature verification algorithm of BBSO4 scheme and LSAG scheme in the form of precompiled contract.。Since these privacy features are not enabled by default, enabling these features requires turning on the CRYPTO _ EXTENSION compilation option and recompiling the source code。**2.5 and above versions are enabled by default, and users are no longer required to compile the source code**。
+Starting from version 2.3, FISCO BCOS integrates the signature verification algorithms of the BBS04 and LSAG schemes as precompiled contracts. These privacy features are not enabled by default; enabling them requires turning on the CRYPTO_EXTENSION compilation option and recompiling the source code. **In versions 2.5 and above they are enabled by default, and users no longer need to compile the source code.**
The group / ring signature precompiled contract address is assigned as follows:
![](../../../../images/articles/privacy_protection_group_and_ring_signature/IMG_4945.PNG)
-To complete the precompiled contract call, you first need to declare the contract interface as a solidity contract.。
+To call the precompiled contract, you first need to declare the contract interface in a Solidity contract.
- ### group signature
@@ -123,26 +123,26 @@ contract TestRingSig {
}
```
-In addition to the pre-compiled contract interface, FISCO BCOS provides two additional core modules for users to use, a complete group / ring signature library and a group / ring signature RPC server.。The signature library and server are independent of the blockchain platform, and users can also develop their own server based on the signature library.。The signature information can be stored on the chain, and then the validity of the signature can be verified by calling the verification interface in the contract.。
+In addition to the precompiled contract interface, FISCO BCOS provides two more core modules for users: a complete group / ring signature library and a group / ring signature RPC server. The signature library and the server are independent of the blockchain platform, and users can also develop their own server based on the signature library. Signature information can be stored on chain, and its validity can then be verified by calling the verification interface in the contract.
FISCO BCOS provides users with a development example of a group / ring signature, using the client as the entry point.
The sample architecture is shown in the following figure:
![](../../../../images/articles/privacy_protection_group_and_ring_signature/IMG_4946.PNG)
-The group / ring signing client calls the RPC interface of the server to complete the creation of the group / ring, the joining of members and the generation of the signature.;At the same time, the client interacts with the blockchain platform to put the signature information on the chain.;Finally, the client calls the signature on the precompiled contract verification chain.。For more operation steps and technical details, please refer to the Group / Ring Signing Client Guide。The reference link is as follows: https://github.com/FISCO-BCOS/group-signature-client/tree/master-2.0
+The group / ring signing client calls the RPC interface of the server to complete the creation of the group / ring, the joining of members and the generation of signatures; at the same time, the client interacts with the blockchain platform to put the signature information on the chain; finally, the client calls the precompiled contract to verify the signature on the chain。For more operation steps and technical details, please refer to the Group / Ring Signing Client Guide。The reference link is as follows: https://github.com/FISCO-BCOS/group-signature-client/tree/master-2.0
## Direction of improvement
-In academia, the development of group / ring signatures has matured, and many new schemes have been born based on different scenarios.。
+In academia, the development of group / ring signatures has matured, and many new schemes have been born for different scenarios。
-For example, programs that support group members' active participation can effectively resist the framing behavior of group owners.;The revocable anonymity ring signature scheme allows the signer to convert the ring signature into an ordinary signature on specific occasions to prove his or her identity as a signer.;The scheme that supports the security of the preceding item can ensure that the disclosure of the user's private key does not affect the anonymity of the previous signature.。
+For example, schemes that support the active participation of group members can effectively resist framing by group owners; revocable-anonymity ring signature schemes allow the signer, on specific occasions, to convert a ring signature into an ordinary signature to prove his or her identity as the signer; forward-secure schemes ensure that the disclosure of a user's private key does not affect the anonymity of previously generated signatures。
-At present, FISCO BCOS integrated group / ring signature schemes each have one. In the future, for more complex requirements, more support schemes will be added to provide users with more choices。At the same time, in view of the poor portability of existing client examples, plug-ins will be considered in the future to facilitate rapid business access.。
+At present, FISCO BCOS integrates one group signature scheme and one ring signature scheme. In the future, more schemes will be supported for more complex requirements, giving users more choices。At the same time, in view of the poor portability of the existing client examples, plug-ins will be considered in the future to facilitate rapid business access。
## Conclusion
 Security and privacy is a complex, vast and challenging field。
-The group / ring signature module only provides anonymity protection for user identities. 
How to build a more reliable and robust secure blockchain platform in combination with other cryptographic protocols? How to reduce user costs and overhead and provide multi-dimensional, highly available privacy protection services? These problems need our continuous research and exploration。
-Finally, people with lofty ideals are welcome to join the FISCO BCOS security building to build an unbreakable wall of privacy.。
\ No newline at end of file
+Finally, people with lofty ideals are welcome to join the security construction of FISCO BCOS and build an unbreakable wall of privacy。
\ No newline at end of file
diff --git a/3.x/en/docs/articles/3_features/38_privacy/privacy_protection_homomorphic_encryption.md b/3.x/en/docs/articles/3_features/38_privacy/privacy_protection_homomorphic_encryption.md
index 8d69d105d..1217a718a 100644
--- a/3.x/en/docs/articles/3_features/38_privacy/privacy_protection_homomorphic_encryption.md
+++ b/3.x/en/docs/articles/3_features/38_privacy/privacy_protection_homomorphic_encryption.md
@@ -4,45 +4,45 @@ Author : He Shuanghong | FISCO BCOS Core Developer
## Foreword
-As a distributed system, on the one hand, blockchain gives full play to the value of data through data sharing, co-governance and collaborative processing.。On the other hand, due to resource and privacy constraints, blockchain is only suitable for storing the lightest, most necessary, and non-privacy-risk data, such as hashes, metadata ciphertexts, etc.。The contradiction between data availability and privacy is becoming more and more obvious in the blockchain。
+As a distributed system, on the one hand, blockchain gives full play to the value of data through data sharing, co-governance and collaborative processing。On the other hand, due to resource and privacy constraints, blockchain is only suitable for storing the lightest, most necessary, privacy-risk-free data, such as hashes and metadata ciphertexts。The contradiction between data availability and privacy is becoming more 
and more obvious in the blockchain。
-As the saying goes, "You can't have both a fish and a bear's paw," but cryptographers who specialize in intractable diseases don't agree, and have proposed a solution where ciphertexts can also participate in computations-**homomorphic encryption**(HE,Homomorphic Encryption)。This article will explain the definition of homomorphic encryption and the technical implementation in FISCO BCOS.。
+As the saying goes, "you cannot have both the fish and the bear's paw," but cryptographers, who specialize in curing intractable problems, disagree: they have proposed a solution in which ciphertexts can also participate in computations - **homomorphic encryption** (HE, Homomorphic Encryption)。This article will explain the definition of homomorphic encryption and its technical implementation in FISCO BCOS。
## What is Homomorphic Encryption
 Homomorphic encryption is an open problem raised in the 1970s: how to process data without exposing it, focusing on **data processing security**。
-Imagine such a scene, as a full of ideals of the second generation of the building, you live a boring life of rent collection every day, hoping to get rid of the shackles of the world, copper stink to pursue poetry and distance。You need to hire an agent to take on the chore of rent collection, but don't want it to pry into your monthly income from lying down。So, you ask the expert to build a set of equipment that will ensure that the agent can successfully complete the rent collection without revealing income information.。The kit includes envelopes, glue, wallets and magic scissors, each of which has a unique function:
+Imagine such a scene: as an idealistic second-generation landlord, you live a boring life of collecting rent every day, hoping to cast off worldly shackles and the stench of money to pursue poetry and faraway places。You need to hire an agent to take over the chore of rent collection, but you don't want the agent to pry into the monthly income you earn while lying down。So, you ask an expert to build a set of equipment that ensures the agent can complete the rent collection without learning your income information。The kit includes envelopes, glue, a wallet and magic scissors, each of which has a unique function:
1. Once the envelope is sealed with glue, only the magic scissors can open it。
-2. No matter how much money is in the envelope, the size and weight of the envelope will not change.。
-3. After placing multiple envelopes in a wallet, the envelopes will be combined in pairs without opening them, and eventually become an envelope containing exactly the sum of the amounts of all the envelopes before the merger.。
+2. No matter how much money is in the envelope, the size and weight of the envelope will not change。
+3. After placing multiple envelopes in the wallet, the envelopes merge in pairs without being opened, eventually becoming a single envelope containing exactly the sum of the amounts of all the envelopes before the merger。
![](../../../../images/articles/privacy_protection_homomorphic_encryption/IMG_5563.PNG)
-You distribute the envelopes and glue to all the tenants and give the wallets to the agent。On the agreed day of paying the rent, the tenant puts the rent in an envelope, seals it and gives it to the agent.;The agent collects the envelope, puts it in his wallet, and finally gets an envelope full of all the rent, which he forwards to you;You use magic scissors to take it apart and get the rent。
+You distribute the envelopes and glue to all the tenants and give the wallet to the agent。On the agreed rent day, each tenant puts the rent in an envelope, seals it and gives it to the agent; the agent collects the envelopes, puts them in the wallet, and finally obtains one envelope holding all the rent, which he forwards to you; you cut it open with the magic scissors and get the rent。
-In this scenario, the two properties of the envelope, a and b, are actually the characteristics of 
public key encryption, i.e. the ciphertext obtained using public key encryption can only be decrypted by someone who has the private key, and the ciphertext does not reveal the semantic information of the plaintext.;While c represents the property of additive homomorphism, two ciphertexts can be calculated, and the result decrypted is exactly the sum of the two original plaintexts。At this point, the full picture of homomorphic encryption is already on the horizon:
+In this scenario, properties 1 and 2 of the envelope are precisely the characteristics of public key encryption, i.e. a ciphertext obtained using public key encryption can only be decrypted by someone who holds the private key, and the ciphertext reveals no semantic information about the plaintext; while property 3 represents additive homomorphism: two ciphertexts can be combined by computation, and the decrypted result is exactly the sum of the two original plaintexts。At this point, the full picture of homomorphic encryption is already on the horizon:
-- Homomorphic encryption is essentially a public key encryption scheme that uses the public key pk for encryption and the private key sk for decryption.;
+- Homomorphic encryption is essentially a public key encryption scheme, i.e. encryption uses the public key pk and decryption uses the private key sk;
- Homomorphic encryption supports ciphertext computation, i.e. a homomorphic operation corresponding to a function f() can be performed on ciphertexts encrypted under the same public key, generating a new ciphertext that, when decrypted, is exactly the result of applying f() to the two original plaintexts;
- The homomorphic encryption formula is described as follows:
![](../../../../images/articles/privacy_protection_homomorphic_encryption/IMG_5564.PNG)
-Homomorphic encryption can be divided into fully homomorphic encryption (FHE, Fully Homomorphic Encryption) and semi-homomorphic encryption (SWHE, Somewhat Homomorphic Encryption).。FHE, as the name implies, supports any given f()function, but due to the high computational overhead, there is currently no practical FHE solution in academia;SWHE only supports some specific f()Functions, such as addition or multiplication, have been used in industry, especially in cloud computing, due to their low overhead.。
+Homomorphic encryption can be divided into fully homomorphic encryption (FHE, Fully Homomorphic Encryption) and semi-homomorphic encryption (SWHE, Somewhat Homomorphic Encryption)。FHE, as the name implies, supports any given function f(), but due to its high computational overhead there is currently no practical FHE solution in academia; SWHE supports only specific functions f(), such as addition or multiplication, and has been used in industry, especially in cloud computing, due to its low overhead。
## FISCO BCOS Technology Selection
-In a consortium chain, given regulatory needs, an 
on-chain institution may need to upload some private data from its application, such as revenue figures and product traffic。To keep this information confidential, the institution can encrypt it with the regulator's public key, and the statistics can still be completed over the encrypted data。In this scenario, homomorphic encryption can be leveraged because of the need to compute on the ciphertext。
-FISCO BCOS provides users with a privacy protection tool that supports ciphertext processing by integrating homomorphic encryption。Encryption and decryption will expose plaintext data, based on security considerations, only suitable for completion under the chain, the chain only retains the homomorphic operation interface, encryption and decryption interface is provided to the application layer in the form of an independent algorithm library.。In the choice of homomorphic encryption scheme, for the consideration of computational overhead, the lightweight additive homomorphic scheme is preferred.;Given the limited storage resources of the blockchain, the ciphertext cannot be too large and can be aligned to the standard RSA encryption algorithm.。
+FISCO BCOS provides users with a privacy protection tool that supports ciphertext processing by integrating homomorphic encryption。Since encryption and decryption expose plaintext data, for security reasons they are only suitable for execution off-chain; the chain retains only the homomorphic operation interface, while the encryption and decryption interfaces are provided to the application layer as an independent algorithm library。In choosing a homomorphic encryption scheme, a lightweight additive homomorphic scheme is preferred in consideration of computational overhead; given the blockchain's limited storage resources, the ciphertext cannot be too large and should be comparable in size to standard RSA ciphertext。
-Combining the above two points, we have chosen an additive 
homomorphic scheme with the above characteristics.-Key Cryptosystems Based on Composite Degree Residency Classes, proposed by Paillier at EUROCRYPT in 1999。The experimental analysis of the Paillier scheme is as follows:
+Combining the above two points, we choose the additive homomorphic scheme of Paillier's "Public-Key Cryptosystems Based on Composite Degree Residuosity Classes," proposed at EUROCRYPT 1999。The experimental analysis of the Paillier scheme is as follows:
![](../../../../images/articles/privacy_protection_homomorphic_encryption/IMG_5565.PNG)
-The public-private key pair is obtained by the RSA key generation algorithm, as you can see from the above table, the overhead is positively related to the key length.。Currently, 1024-bit RSA keys are no longer secure. We recommend that you use keys with 2048 bits or more.。
+The public-private key pair is obtained by the RSA key generation algorithm; as the above table shows, the overhead is positively related to the key length。Currently, 1024-bit RSA keys are no longer secure. 
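The additive property behind this choice can be made concrete with a minimal, self-contained Python sketch of the textbook Paillier scheme. This is an illustration only: the function names `keygen`, `encrypt`, `decrypt` and `he_add` are ours, not the API of the FISCO BCOS paillier-lib, and the key sizes used for demonstration are far below production requirements。

```python
# Minimal textbook Paillier sketch: E(m1) * E(m2) mod n^2 decrypts to m1 + m2.
import math
import secrets

def _is_probable_prime(n, rounds=32):
    # Miller-Rabin primality test used for RSA-style prime generation.
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def _random_prime(bits):
    while True:
        cand = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if _is_probable_prime(cand):
            return cand

def keygen(bits=2048):
    # The key pair comes from an RSA-style modulus n = p * q, hence the
    # key-length/overhead relationship discussed in the table above.
    p = _random_prime(bits // 2)
    q = _random_prime(bits // 2)
    while q == p:
        q = _random_prime(bits // 2)
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1                                           # standard simple generator
    l = (pow(g, lam, n * n) - 1) // n                   # L(x) = (x - 1) / n
    mu = pow(l, -1, n)                                  # modular inverse (Python 3.8+)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = secrets.randbelow(n - 1) + 1    # random blinding factor, coprime to n w.h.p.
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

def he_add(pk, c1, c2):
    # The homomorphic-addition step: needs no private key and no plaintexts,
    # which is why only this operation is suitable for on-chain execution.
    n, _ = pk
    return (c1 * c2) % (n * n)
```

For example, with `pk, sk = keygen()`, the sum ciphertext `he_add(pk, encrypt(pk, 1200), encrypt(pk, 800))` decrypts to `2000`, even though the two plaintexts were never revealed to whoever performed the addition。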
We recommend that you use keys with 2048 bits or more。
## How to use homomorphic encryption in FISCO BCOS
@@ -50,7 +50,7 @@ FISCO BCOS version 2.3 integrates the ciphertext homomorphic addition interface
![](../../../../images/articles/privacy_protection_homomorphic_encryption/IMG_5566.PNG)
-To complete the precompiled contract call, you first need to declare the contract interface as a Solidity contract.。
+To complete the precompiled contract call, you first need to declare the contract interface as a Solidity contract。
``` // PaillierPrecompiled.sol @@ -60,7 +60,7 @@ contract PaillierPrecompiled{ } ```
-The precompiled contract object can then be instantiated by address in the business contract to complete the call of the homomorphic plus interface.。
+The precompiled contract object can then be instantiated by address in the business contract to complete the call of the homomorphic addition interface。
``` // CallPaillier.sol @@ -81,12 +81,12 @@ contract CallPaillier { } ```
-The public-private key generation, encryption and decryption interfaces in the Paillier scheme are provided to developers as independent password libraries.。The current password library contains a full version of the Java language Paillier scheme, as well as a C language version of the homomorphic plus interface for precompiled contract calls.。[The password library is as follows](https://github.com/FISCO-BCOS/paillier-lib)。
+The public-private key generation, encryption and decryption interfaces of the Paillier scheme are provided to developers as an independent cryptographic library。The current library contains a full Java implementation of the Paillier scheme, as well as a C implementation of the homomorphic addition interface for precompiled contract calls。[The library is available here](https://github.com/FISCO-BCOS/paillier-lib)。
## Future directions for improvement
-Currently, only the Java version of the homomorphic encryption algorithm library is accessible to developers. 
In the future, based on actual needs, we will align the SDK language type of FISCO BCOS and provide multi-language versions of the homomorphic encryption library.。Homomorphic encryption has always been a difficult problem in the cryptography community, and there are still great challenges in performance and usability in order to achieve fully homomorphic encryption calculations, FISCO BCOS will continue to pay attention to the technical progress in this field.。
+Currently, only the Java version of the homomorphic encryption algorithm library is accessible to developers. In the future, based on actual needs, we will align with the SDK languages of FISCO BCOS and provide multi-language versions of the homomorphic encryption library。Homomorphic encryption has always been a difficult problem in the cryptography community, and fully homomorphic computation still faces great challenges in performance and usability; FISCO BCOS will continue to follow the technical progress in this field。
## Conclusion
-Safe way Xiu Yuan Xi, FISCO BCOS will search up and down。Currently, FISCO BCOS has been configured with a variety of cryptographic tools, including group signatures, ring signatures, and homomorphic encryption. Next, we will implement and integrate customized privacy protection solutions for specific scenarios.。Welcome everyone to use and pay attention to the application of homomorphic encryption technology in privacy protection scenarios, discuss exchanges, actively feedback, and build a more secure and reliable FISCO BCOS platform.。
+"The road to security stretches long and far; FISCO BCOS will search high and low." Currently, FISCO BCOS is equipped with a variety of cryptographic tools, including group signatures, ring signatures, and homomorphic encryption. Next, we will implement and integrate customized privacy protection solutions for specific scenarios。Everyone is welcome to use and follow the application of homomorphic encryption in privacy protection scenarios, discuss and exchange ideas, actively give feedback, and help build a more secure and reliable FISCO BCOS platform。
\ No newline at end of file
diff --git a/3.x/en/docs/articles/4_tools/41_webase/walk_in_webase_zoo.md b/3.x/en/docs/articles/4_tools/41_webase/walk_in_webase_zoo.md
index b18cc7c1b..933e3e68b 100644
--- a/3.x/en/docs/articles/4_tools/41_webase/walk_in_webase_zoo.md
+++ b/3.x/en/docs/articles/4_tools/41_webase/walk_in_webase_zoo.md
@@ -2,30 +2,30 @@ Author : MAO Jiayu | FISCO BCOS Core Developer
-**Author language**: Open source software, the Garden of Eden beyond the tide of commercialization in the minds of masters;It's the cathedrals and bazaars where geeks gather.;It is a beloved poem of technology lovers.;But in my eyes, the world of open source software is a zoo。
+**Author's note**: Open source software is the Garden of Eden beyond the tide of commercialization in the minds of masters; it is the cathedral and the bazaar where geeks gather; it is a beloved poem of technology lovers; but in my eyes, the world of open source software is a zoo。
-Open source organizations love to take animals to name software or make LOGO, these names have long been as popular as spring rain。Such as a face of serious, agile, resembling a lion Tomcat。And Python, two fat and stupid python images have become popular all over the world.。Linux, on the other hand, uses a penguin named Tux as its mascot, and Tux has now started making video games, commercials, and even a girlfriend named Gown。Inspired by the splendor of scenery in the open source community zoo, we named two data export-related components in WeBASE, Monkey and Bee, respectively.。
+Open source organizations love to name software after animals or use them in logos, and these names have long been as popular as spring 
rain。Take Tomcat: serious-faced, agile, and resembling a lion。Or Python, whose two chubby snakes have become popular all over the world。Linux, on the other hand, uses a penguin named Tux as its mascot, and Tux has now starred in video games and commercials, and even has a girlfriend named Gown。Inspired by this splendid scenery in the open source community zoo, we named the two data-export-related components in WeBASE Monkey and Bee。
 This article will introduce you to the miraculous Monkey and the focused Bee。**WeBASE is a middleware platform built between blockchain applications and FISCO BCOS nodes**, abstracting commonalities in technology and business architecture into generic, experience-friendly components and simplifying the blockchain application development process。
-## WeBASE-Codegen-Monkey's Monkey King
+## WeBASE-Codegen-Monkey: The Monkey King
-WeBASE-Codegen-Monkey is the code generation component of the WeBASE data export tool.。Can help users automatically generate data export components, further improve the efficiency of research and development, help developers reduce the burden。We developed Monkey, whose name is inspired by Monkey King - Monkey King Monkey King。
+WeBASE-Codegen-Monkey (hereafter referred to as Monkey) is the code generation component of the WeBASE data export tool。It helps users automatically generate data export components, further improving R&D efficiency and easing developers' burden。We developed Monkey, whose name is inspired by the Monkey King, Sun Wukong。
-Monkey provides an executable shell script: generate _ bee.sh。Automatically generate WeBASE with simple configuration and smart contract files on request-Collect-Bee。After the script is executed, Monkey will automatically exit, as if it were a changing Monkey King, coming from the clouds and driving away in the fog.。The Monkey startup script automatically downloads code, loads contracts, builds and generates business code, compiles data export component code, and starts data export applications.。The main execution steps of generate _ bee.sh are as follows:
+Monkey provides an executable shell script: generate_bee.sh。With simple configuration and the smart contract files, it automatically generates WeBASE-Collect-Bee on demand。After the script is executed, Monkey automatically exits, like the shape-shifting Monkey King, arriving on a cloud and departing in the mist。The Monkey startup script automatically downloads code, loads contracts, builds and generates business code, compiles the data export component code, and starts the data export application。The main execution steps of generate_bee.sh are as follows:
![](../../../../images/articles/walk_in_webase_zoo/IMG_5611.PNG)
-In step 3, the script will automatically place the developer's configuration file to the specified configuration path.。 Step 5: After Monkey is started, it will generate logs, function codes, configuration and setup files according to the order, and place them on the path required by Bee。Step 8, Bee can load the required code and configuration files as you like at startup。Developers can also obtain the required files of various scripts or parameters at a pre-agreed path.。
+In step 3, the script automatically places the developer's configuration files on the specified configuration path。In step 5, after Monkey starts, it generates logs, function code, configuration and settings files in order and places them on the paths required by Bee。In step 8, Bee loads the required code and configuration files as needed at startup。Developers can also obtain the various script or parameter files they need at a pre-agreed path。
 The execution process of generate_bee.sh is just as described in Journey to the West: 『Seeing his ferocity (the customized data-export requirements are complex), he resorts to magic beyond his body (reads the environment and contract configuration), pulls out a handful of hairs (downloads the code, assembles the configuration and compiles the Monkey code), chews them up in his mouth (runs the Monkey code, automatically generating Bee's configuration and code), and spits them out with a cry of "Change!" (starts executing Bee's code), whereupon they turn into two or three hundred little monkeys clustered all around (the database and tables are built automatically and data export starts successfully)。』
 Monkey King has the power of seventy-two transformations; Monkey has the ability to automatically generate the required code, making the data export system usable out of the box。The code and other files Monkey generates can be divided into four categories:
-1. Code that parses specific logs generated based on contracts and configurations.;
-2. Code that is parsed based on specific functions generated by contracts and configurations.;
-3. The configuration file generated based on the configuration.;
-4. Settings files generated based on contracts and configurations.。
+1. Code that parses specific logs, generated based on contracts and configurations;
+2. Code that parses specific functions, generated based on contracts and configurations;
+3. The configuration files generated based on the configuration;
+4. 
Settings files generated based on contracts and configurations。
![](../../../../images/articles/walk_in_webase_zoo/IMG_5612.PNG)
@@ -33,62 +33,62 @@ In the figure above, different types of Paras generate different code files。Among them:
- EventGenerateParas: Contains the generation of log (Event) related code and scripts。
- For example, the definition of log entity and repository parsed in the contract file, as well as the code parsed by each different log, the BO class of the log, and the database table building statement of the log.。
+ For example, the definitions of the log entity and repository parsed from the contract file, the parsing code for each different log, the BO class of the log, and the database table-building statement for the log。
- MethodGenerateParas: contains the function (Method) related code and script generation。
- Such as the definition of the functions hibernate entity and repository parsed in the contract file, as well as the code parsed by each different function, the BO class of the function, and the database table statement of the function.。
+ Such as the definitions of the function's hibernate entity and repository parsed from the contract file, the parsing code for each different function, the BO class of the function, and the database table statement for the function。
-- ConfigGenerateParas: Contains the Bee project's database table creation script and database configuration file.。
-- SettingsParas: json import settings file containing grafana dashboard and table _ panel。
+- ConfigGenerateParas: Contains the Bee project's database table creation script and database configuration file。
+- SettingsParas: JSON import settings files containing the grafana dashboard and table_panel。
-As mentioned above, for the convenience of developers, one-time configuration。In addition to generating and executing code, the general configuration of the relevant blockchain software and data export components is also put into the configuration of the Monkey system and automatically passed to the configuration file of Bee.。Monkey system integrates Beetl as a template engine, no matter how complex the contract, how cumbersome configuration, can be simplified, quickly generated。Monkey loads the Java file corresponding to Solidy compiled by the developer and uses reflection technology to obtain the Class information of the contract file.。
+As mentioned above, for the convenience of developers, configuration is done only once。Beyond the code used for generation and execution, the general configuration of the relevant blockchain software and data export components is also put into the Monkey system's configuration and automatically passed to Bee's configuration file。The Monkey system integrates Beetl as its template engine: no matter how complex the contract or how cumbersome the configuration, everything can be simplified and generated quickly。Monkey loads the Java file compiled from the developer's Solidity contract and uses reflection to obtain the Class information of the contract file。
![](../../../../images/articles/walk_in_webase_zoo/IMG_5613.PNG)
-After Monkey is started, a template engine for code generation is created. The AtomicParas mentioned above is the necessary raw material for starting the engine.。As shown above, the call steps through these three cores:
+After Monkey is started, a template engine for code generation is created. The AtomicParas mentioned above are the necessary raw materials for starting the engine。As shown above, the call goes through these three core steps:
-1. The engine calls getTemplateFilePath.()method to get the file path of the render template;
-2. Through getBindingMap.()method to get all the parameters needed for rendering;
-3. Through getGeneratedFilePath.()Obtain the specific path of the generated file。
+1. The engine calls the getTemplateFilePath() method to get the file path of the render template;
+2. Through the getBindingMap() method, it gets all the parameters needed for rendering;
+3. 
Through getGeneratedFilePath()Obtain the specific path of the generated file。 The code generation engine can be quickly started to perform rendering, and finally complete the automatic generation of the required files。 -Of course, the Monkey King legend will not end, Monkey system provides a lot of powerful and flexible configuration, to meet the unique personalized needs of developers。Monkey King's journey will not be limited to data export, the future will also be involved in more areas, for everyone to bring more surprises...... +Of course, the Monkey King legend will not end, Monkey system provides a lot of powerful and flexible configuration, to meet the unique personalized needs of developers。Monkey King's journey will not be limited to data export, the future will also be involved in more areas, for everyone to bring more surprises..... ## WeBASE-Collect-Bee Bee -WeBASE-Collect-Bee (hereinafter referred to as Bee) is a data export component based on the FISCO BCOS platform in WeBASE, which supports exporting blockchain data to databases such as Mysql.。 +WeBASE-Collect-Bee (hereinafter referred to as Bee) is a data export component based on the FISCO BCOS platform in WeBASE. 
It supports exporting blockchain data to databases such as MySQL.
-Why is it called Bee??Throughout the ages, the literati have left many poems and essays that sing about bees: through the flowers, the willows fly like arrows, and the sticks look for fragrance like falling stars.。Small bodies can carry weight, and thin wings of instruments can ride the wind.。- Wu Chengen
+Why is it called Bee? Throughout the ages, the literati have left many poems and essays singing of bees: "Through flowers and willows it flies like an arrow, seeking fragrance among the catkins like a falling star. Its tiny body can carry great weight, and its thin wings can ride the wind." - Wu Cheng'en
-Bee system is just like a bee colony, focus as one, hard work。If the service does not stop, the data export task will not stop.。Bees walk back and forth among the flowers of the blockchain, looking for sweet, floating business data between blocks, and exporting massive amounts of blockchain data to storage in a stable and efficient manner for developers to perform analysis, calculations and queries.。The tiny tiny bee is light, which fits the "light" temperament of the Bee system design.。
+The Bee system is just like a bee colony: single-minded and hardworking. As long as the service does not stop, the data export task will not stop. Bees shuttle among the flowers of the blockchain, seeking out the sweet business data scattered between blocks, and export massive amounts of blockchain data to storage in a stable and efficient manner for developers to analyze, compute, and query. The tiny bee is light, which fits the "light" temperament of the Bee system's design.
-With the help of Monkey, following the principle of "contract over configuration," developers only need to modify a few configurations, supplemented by the certificate that comes with the link and the contract file after development and compilation, after executing the script, you can quickly obtain the packaged executable Jar package, and even start running directly.。The lightness of the bee is also reflected in the systematic extraction and abstraction of a large number of configuration items。According to the individual needs of developers, flexible configuration, on demand。
+With the help of Monkey, and following the principle of "convention over configuration," developers only need to modify a few configuration items and supply the chain's certificate and the compiled contract files; after executing the script, they can quickly obtain a packaged executable Jar package, or even start it running directly. The lightness of the bee is also reflected in the systematic extraction and abstraction of a large number of configuration items, which can be configured flexibly and on demand according to developers' individual needs.
-The system architecture diagram is shown below. In addition to the core modules, the modules of the Bee system are pluggable。For example, whether it is necessary to introduce enhanced functional modules such as visual data analysis, integrated test interface, monitoring and supervisor process management.。
+The system architecture diagram is shown below.
In addition to the core modules, the modules of the Bee system are pluggable; for example, developers can decide whether to introduce enhanced functional modules such as visual data analysis, an integrated test interface, monitoring, and supervisor process management.
![](../../../../images/articles/walk_in_webase_zoo/IMG_5614.PNG)
-In order to make the thin wings of the honeybee device dance lighter, we refactored it in the latest V1.1.0 to further split the original single project according to the functional granularity.。In this way, developers can choose to deploy an executable Jar package directly.。You can also introduce Jar packages on demand to embed specified function modules into your own projects。The above modules are based on Springboot2 development, support personalized configuration。
+In order to make the bee's thin wings dance even more lightly, we refactored it in the latest V1.1.0, further splitting the original monolithic project by functional granularity. In this way, developers can choose to deploy an executable Jar package directly, or introduce Jar packages on demand to embed specific functional modules into their own projects. All of these modules are developed on Spring Boot 2 and support personalized configuration.
![](../../../../images/articles/walk_in_webase_zoo/IMG_5615.PNG)
As shown in the figure above, each Jar functions as its name suggests:
-- core: Packaged into an executable BootJar package with a built-in Restful API out of the box, making it easier for developers to get data export services.。At the same time, core itself is also an example, showing developers how to combine the Jar package of each functional module to develop a fully functional data export system.。
-- parser: encapsulates the function of block parsing。The method signature of the entry can be passed into a block to obtain a fully parsed BlockInfoBO bean object, which contains the parsed data。
-- extractor: further encapsulates the logic of web3j's SDK, which allows you to easily call up-chain functions and obtain blockchain data.。
-- db: encapsulates the logic of data storage, theoretically supports all kinds of mainstream databases, the actual rigorous testing supports the storage of data to Mysql 5.6 and above community edition.。
-- common: encapsulates some common data structures, tool classes, and common parameters.。
+- core: packaged as an executable BootJar with a built-in RESTful API, ready to use out of the box, making it easy for developers to obtain data export services. At the same time, core itself is also an example, showing developers how to combine the Jar packages of the functional modules into a fully functional data export system.
+- parser: encapsulates block parsing. Its entry method takes a block and returns a fully parsed BlockInfoBO bean object containing the parsed data.
+- extractor: further encapsulates the web3j SDK, making it convenient to call on-chain functions and obtain blockchain data.
+- db: encapsulates the data storage logic. It theoretically supports all mainstream databases, and has been rigorously tested against MySQL Community Edition 5.6 and above.
+- common: encapsulates common data structures, utility classes, and shared parameters.
-Although the tiny body of the bee is light, the collaboration of the swarm can carry the weight.。
+Although the tiny body of a bee is light, the collaboration of the swarm can carry great weight.
-In order to cope with the storage of massive amounts of data, such as single table data more than 1KW, single library capacity of more than 1T scenarios, Bee integrated sharding-JDBC, which supports configurable multi-data source storage, read / write separation, and database and table splitting.。
+In order to cope with the storage of massive amounts of data, such as scenarios where a single table holds more than 10 million rows or a single database exceeds 1 TB, Bee integrates
Sharding-JDBC, which supports configurable multi-data-source storage, read/write splitting, and database and table sharding.
-In order to speed up data export, the Bee system has carried out multiple rounds of performance optimization, greatly improving the efficiency of data export under stand-alone deployment.。At the same time, relying on the integration of Elastic-After a job, you can obtain the ability to coordinate distributed tasks. Bee supports multi-active deployment of instances, scale-out, and flexible scaling.。
+In order to speed up data export, the Bee system has undergone multiple rounds of performance optimization, greatly improving export efficiency under stand-alone deployment. At the same time, by integrating Elastic-Job to obtain distributed task coordination, Bee supports multi-active deployment of instances, horizontal scale-out, and elastic scaling.
-The work of bees is divided into three types, and the task performers of the Bee system are also divided into three categories: task schedulers, block collectors, and block processors.。
+The work of bees is divided into three types, and the task performers of the Bee system likewise fall into three categories: task schedulers, block collectors, and block processors.
![](../../../../images/articles/walk_in_webase_zoo/IMG_5616.PNG)
@@ -96,37 +96,37 @@ As shown in the figure above, the specific three types of task performers are de
**The Queen Bee (Dispatcher)**:
-The distributed coordination service ensures that only one thread runs, which is responsible for detecting the current block height of the blockchain and the details of the pulled tasks, preparing the tasks, detecting the fork status of the blocks, detecting the timeout status and error status of the tasks, and performing re-pulling based on pre-defined policies.。The preparation task means that Dispatcher will maintain the database based on the dimensions of the block: create a status record for each block and insert it into the system table of block _ task _ pool。
+The distributed coordination service ensures that only one Dispatcher thread runs. It is responsible for detecting the blockchain's current block height and the details of tasks already pulled, preparing tasks, detecting block forks, detecting task timeouts and errors, and re-pulling according to predefined policies. Preparing tasks means that the Dispatcher maintains the database at block granularity: it creates a status record for each block and inserts it into the block_task_pool system table.
**The drone (Extractor)**:
-Share the same threads as Depot, the total number of threads can be specified by the configuration file, but the shard of task execution is automatically scheduled by the Distributed Coordination Service。The task performed by the Extractor is to pull the block task corresponding to the serial number of the shard from the block _ task _ pool system table maintained by the Dispatcher, then download the block data of the specified block height from the blockchain, and modify the task execution status of the block _ task _ pool system table.。
+It shares the same threads as Depot; the total number of threads can be specified in the configuration file, but the task shards are scheduled automatically by the distributed coordination service. The Extractor pulls the block tasks corresponding to its shard number from the block_task_pool system table maintained by the Dispatcher, downloads the block data at the specified block height from the blockchain, and updates the task execution status in the block_task_pool system table.
**Worker bee (Depot)**:
It shares the same threads as the Extractor; the total number of threads can be specified in the configuration file, but the task shards are scheduled automatically by the distributed coordination service. Depot can directly
obtain the block content from the Extractor and then perform the data export task: block parsing, content conversion, and database storage in sequence, finally updating the task execution status in the block_task_pool system table.
-The advantage of this design is that the data exchange overhead of switching between different blocks in different threads is isolated, which greatly improves the efficiency of processing.。At the same time, you can increase the speed of data export by increasing the number of deployed instances and increasing the number of processing thread collections.。If you compare content conversion and content storage to the brewing and storage of nectar, then block resolution is like the collection of pollen - a task that makes the worker bees toil day and night, and best interprets the phrase "flying willows like arrows through flowers, sticky catkins like falling stars."。
+The advantage of this design is that the data exchange overhead of switching between different blocks across threads is isolated, which greatly improves processing efficiency. At the same time, the speed of data export can be raised by deploying more instances and enlarging the processing thread collection. If content conversion and content storage are compared to the brewing and storing of nectar, then block parsing is like collecting pollen: the task that keeps the worker bees toiling day and night, and that best matches the line "through flowers and willows it flies like an arrow, seeking fragrance among the catkins like a falling star."
![](../../../../images/articles/walk_in_webase_zoo/IMG_5617.PNG)
-As shown in the figure above, the results of block parsing are divided into four categories: block basic information, account information, function information, and log information.。The following is a brief idea of the analysis:
+As shown in the figure above, the results of block parsing fall into four categories: basic block information, account information, function information, and log information. The parsing approach, briefly, is as follows:
-1. Contract loading: Before executing, the parser loads the BIN, ABI and configuration information of the contract and calculates the MethodId of all contract functions.。
+1. Contract loading: before execution, the parser loads the contract's BIN, ABI, and configuration information and calculates the MethodId of every contract function.
-2, account resolution: according to the Block can obtain the block contract address, according to the contract address to obtain runcode, and then through the pre-loaded contract BIN information, identify the type of account, and finally resolve the account.。
+2. Account resolution: the contract address can be obtained from the Block, and the runcode from the contract address; then, using the preloaded contract BIN information, the account type is identified and the account is resolved.
-3, function resolution: if the function is a constructor, as described in 1, you can resolve the value of the function.。
+3. Function resolution: if the function is a constructor, its value can be resolved as described in step 1.
-If the function is not a constructor, the to field in the transaction is the contract address。As described in 2, the contract to which the function belongs can be obtained based on the contract address。The function name can be obtained by comparing the input attached to the transaction with the preloaded methodId.。
+If the function is not a constructor, the to field of the transaction is the contract address. As described in step 2, the contract to which the function belongs can be obtained from that address, and the function name can be obtained by comparing the input attached to the transaction with the preloaded MethodIds.
Once the contract and the function name are accurately located, the transactionHandler is automatically triggered and the
corresponding parsing work is performed.
4. Event resolution: the contract name can be obtained from the mapping between the transactionHash resolved in step 2 and the contract name. Based on the specific contract name, the eventHandler under that contract is automatically triggered and the corresponding parsing work is performed.
-5. Block resolution:Obtain the summary information of the block based on the property resolution of the obtained Block object.。
+5. Block resolution: the block's summary information is obtained by parsing the properties of the obtained Block object.
-All of these parsing steps, the same block is done by a worker bee, each worker bee for a thread.。These threads are distributed and managed by distributed coordination services through a collection of threads, thus achieving the effects of "flying like an arrow" and "falling like a star."。Bee system strictly follows the principle of "skill specificity" and is committed to providing data export services for FISCO BCOS.。
+For all of these parsing steps, a given block is handled by a single worker bee, and each worker bee is a thread. These threads are distributed and managed by the distributed coordination service through a thread collection, thus achieving the effects of "flying like an arrow" and "falling like a star." The Bee system strictly follows the principle of specialization and is committed to providing data export services for FISCO BCOS.
## SUMMARY
diff --git a/3.x/en/docs/articles/4_tools/41_webase/webase-transaction.md b/3.x/en/docs/articles/4_tools/41_webase/webase-transaction.md
index 77730e732..c344f8c59 100644
--- a/3.x/en/docs/articles/4_tools/41_webase/webase-transaction.md
+++ b/3.x/en/docs/articles/4_tools/41_webase/webase-transaction.md
@@ -1,58 +1,58 @@
-# Talk about two or three things about the WeBASE deal.
+# Talking about two or three things about WeBASE transactions
Author : LIU Mingzhen | FISCO BCOS Core Developer
-On July 2, 2019, the blockchain middleware platform WeBASE is officially open source, and the first thing that comes to mind is: what is WeBASE and what is it used for?WeBASE, short for WeBank Blockchain Application Software Extension, is a set of common components built between blockchain applications and FISCO BCOS nodes.。The purpose of developing this set of common components is to shield the complexity of the underlying blockchain, reduce the threshold for developers, and improve the development efficiency of blockchain applications.。WeBASE mainly includes: node front, node management, transaction link, data export, Web management platform and other subsystems.。The full deployment architecture is shown below:
+On July 2, 2019, the blockchain middleware platform WeBASE was officially open sourced, and the first question that comes to mind is: what is WeBASE, and what is it used for? WeBASE, short for WeBank Blockchain Application Software Extension, is a set of common components built between blockchain applications and FISCO BCOS nodes. The purpose of these common components is to shield the complexity of the underlying blockchain, lower the threshold for developers, and improve the development efficiency of blockchain applications. WeBASE mainly includes the node front, node management, transaction services, data export, the web management platform, and other subsystems. The full deployment architecture is shown below:
![](../../../../images/articles/webase-transaction/IMG_5604.PNG)
-To learn more about WeBASE, please click to go to: "[FISCO BCOS welcomes the blockchain middleware platform WeBASE, application landing speed](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485307&idx=1&sn=40b0002d3d261be7c2daadd73697a131&chksm=9f2ef567a8597c719225f87a490ea3307a537518cbbcd04a1c5881828a3d4ba1ae7714609522&token=773706277&lang=zh_CN#rd)》。We plan to push
WeBASE series of articles, with you to experience WeBASE "simple but not simple"。This article is the first in a series of articles, "Talking about WeBASE Trading Two or Three Things," and intends to talk about some of WeBASE's work on trading-related aspects.。
+To learn more about WeBASE, please go to "[FISCO BCOS welcomes the blockchain middleware platform WeBASE, application landing speed](https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485307&idx=1&sn=40b0002d3d261be7c2daadd73697a131&chksm=9f2ef567a8597c719225f87a490ea3307a537518cbbcd04a1c5881828a3d4ba1ae7714609522&token=773706277&lang=zh_CN#rd)". We plan to publish a series of WeBASE articles so that you can experience how WeBASE is "simple but not simplistic". This article is the first in the series and talks about some of WeBASE's transaction-related work.
-## The concept, basic data structure and process of trading.
+## The concept, basic data structure, and process of a transaction
-A transaction can be thought of as a request data sent to the blockchain system for deploying a contract, calling the contract interface, in order to achieve the objectives of maintaining the life cycle of the contract, managing assets, and exchanging value.。The basic data structure of the transaction includes sender, receiver, transaction data, etc.。
+A transaction can be thought of as request data sent to the blockchain system to deploy a contract or call a contract interface, in order to maintain the contract's life cycle, manage assets, and exchange value. The basic data structure of a transaction includes the sender, the receiver, the transaction data, and so on.
A complete transaction process can be divided into three steps: **construction and on-chain execution, transaction display, and transaction audit**:
-- First, the user can construct a transaction, sign the transaction with his private key, and send it to the chain (through interfaces such as sendRawTransaction).;The chain then receives the transaction and hands it over to the multiple node mechanism to execute the relevant smart contract code and generate the status data specified by the transaction.;Finally, the transaction is packaged into a block and stored with the state data.。A closing transaction is recognized, and the recognized transaction is generally considered to be transactional and consistent。
-- With the confirmation of the transaction, the corresponding transaction receipt (receipt) will be generated, and the transaction one-to-one correspondence and stored in the block, used to save some of the information generated during the execution of the transaction, such as: result code, log, the amount of gas consumed, etc.。Users can use the transaction hash to query the transaction, and the transaction receipt is displayed to the user.。
-- Over time, there are more and more transactions on the chain, and it is necessary to analyze the transactions on the chain, assist in the supervision and audit of the behavior on the chain, and ensure the reasonable and compliant operation of the chain.。
+- First, the user constructs a transaction, signs it with his private key, and sends it to the chain (through interfaces such as sendRawTransaction); the chain then receives the transaction and hands it over to the nodes, which execute the relevant smart contract code and generate the state data specified by the transaction; finally, the transaction is packaged into a block and stored together with the state data. A transaction written into a block is confirmed, and a confirmed transaction is generally considered atomic and consistent.
+- As the transaction is confirmed, a corresponding transaction receipt is generated, stored in the block in one-to-one correspondence with the transaction, and used to save information generated during execution, such as result codes, logs, and the
amount of gas consumed. Users can query a transaction by its hash, and the transaction receipt is displayed to the user.
+- Over time, the number of transactions on the chain keeps growing, requiring analysis of on-chain transactions to assist in supervising and auditing on-chain behavior and to ensure that the chain operates reasonably and in compliance.
-Now that you have a preliminary understanding of the transaction, let's explore in detail the role and role of WeBASE in the three steps of the transaction.。
+Now that you have a preliminary understanding of transactions, let's explore in detail the role WeBASE plays in each of the three steps.
## Sending Transactions to the Chain
-There are many ways to send transactions to the chain through WeBASE, the more common are WeBASE management platform and transaction chain agent subsystem, the former by the WeBAS management platform to provide a contract IDE, you can deploy and call the contract interface, not to repeat here.。Today we focus on the following**Transaction on-chain agent subsystem ([WeBASE-Transaction](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Transaction/index.html))**。
+There are many ways to send transactions to the chain through WeBASE; the most common are the WeBASE management platform and the transaction on-chain agent subsystem. The former is a contract IDE provided by the WeBASE management platform, with which you can deploy contracts and call contract interfaces; it will not be repeated here. Today we focus on the **transaction on-chain agent subsystem ([WeBASE-Transaction](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Transaction/index.html))**.
At present, blockchain developers mainly face the following pain points:
-- High cost of blockchain learning--Do not want to pay attention to the details of the blockchain, want to use the traditional way to call the blockchain service。
+- The high cost of learning blockchain: developers do not want to deal with the details of the blockchain and prefer to call blockchain services in the traditional way.
-- The peak of real business may exceed the processing power of blockchain--Need a cache system to cut peaks and valleys。
+- The peaks of real business may exceed the blockchain's processing capacity: a cache system is needed to smooth out peaks and troughs.
-To address these pain points, WeBASE-Transaction came into being。WeBASE-Transaction is a service system summed up from many blockchain certificate storage projects to help you quickly build blockchain applications, its function is mainly to receive stateless transaction requests, cache to the database, and then asynchronously on the chain, the service supports distributed tasks, multi-live deployment, remote disaster recovery.。The deployment architecture is as follows:
+To address these pain points, WeBASE-Transaction came into being. WeBASE-Transaction is a service distilled from many blockchain evidence-storage projects to help you quickly build blockchain applications. Its main function is to receive stateless transaction requests, cache them in the database, and then send them to the chain asynchronously.
The service supports distributed tasks, multi-active deployment, and remote disaster recovery. The deployment architecture is as follows:
![](../../../../images/articles/webase-transaction/IMG_5605.PNG)
As you can see from the deployment diagram, WeBASE-Transaction has the following features:
-- **Asynchronous on-chain, overload protection**: The blockchain request first caches the database, cuts peaks and valleys, and the service uses a reasonable speed to asynchronously upload the chain.。
+- **Asynchronous on-chain, overload protection**: blockchain requests are first cached in the database to smooth out peaks and troughs, and the service then sends them to the chain asynchronously at a reasonable rate.
- **Multi-active deployment, distributed tasks**: ZooKeeper is used to coordinate distributed tasks; the on-chain work is split into multiple distributed tasks, deployed across multiple instances, with remote disaster recovery.
-- **Error retry, real-time alignment**: The service automatically checks the on-chain status and retries the error to achieve real-time alignment between the database and the on-chain status.。
+- **Error retry, real-time alignment**: the service automatically checks the on-chain status and retries failures, keeping the database aligned in real time with the on-chain state.
## Transaction Display
-The transaction chain represents that the data is finally written into the block chain.。Run excitedly to report to the boss: our data on the chain!The boss was pleased and curious.。The blockchain is so advanced, how to display the data written to the blockchain?What does a blockchain transaction look like??
+A transaction on the chain means the data has finally been written into the blockchain. You run excitedly to report to the boss: our data is on the chain! The boss is pleased and curious: blockchain is so advanced, so how do we display the data written to it? What does a blockchain transaction look like?
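Before looking at the screenshots, the basic shape of a transaction described earlier (sender, recipient, input data) and the selector-based "decode" idea can be sketched roughly as follows. This is a simplified illustration for orientation only, not WeBASE's or web3j's actual API; the class and method names are hypothetical. The one real value used is the well-known `transfer(address,uint256)` selector `0xa9059cbb`, which per the Solidity ABI rule is the first 4 bytes of the keccak256 hash of the function signature:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a minimal transaction shape and selector-based decoding.
// Real selectors are the first 4 bytes of keccak256(signature) per the Solidity ABI.
public class TxDecodeSketch {

    // The basic transaction fields mentioned in the article.
    static class RawTransaction {
        final String from;   // sender address
        final String to;     // recipient (contract) address
        final String input;  // hex call data: "0x" + 4-byte selector + encoded args
        RawTransaction(String from, String to, String input) {
            this.from = from;
            this.to = to;
            this.input = input;
        }
    }

    // Selector table; in practice this would be precomputed from the contract ABI.
    static final Map<String, String> SELECTORS = new HashMap<>();
    static {
        // keccak256("transfer(address,uint256)") begins with a9059cbb (the ERC20 selector).
        SELECTORS.put("0xa9059cbb", "transfer(address,uint256)");
    }

    // "Decode": map the input's leading selector back to a readable signature.
    static String decode(RawTransaction tx) {
        if (tx.input == null || tx.input.length() < 10) {
            return "unknown"; // too short to contain "0x" + 8 hex chars
        }
        String selector = tx.input.substring(0, 10).toLowerCase();
        return SELECTORS.getOrDefault(selector, "unknown");
    }

    public static void main(String[] args) {
        RawTransaction tx = new RawTransaction(
                "0x0000000000000000000000000000000000000001",
                "0x0000000000000000000000000000000000000002",
                "0xa9059cbbdeadbeef");
        System.out.println(decode(tx)); // prints "transfer(address,uint256)"
    }
}
```

WeBASE of course decodes with the real contract ABI rather than a hand-written table; the sketch only shows why the first 4 bytes of a transaction's input are enough to recover the function name.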
-At this point you can build a WeBASE management platform。The build method can be viewed in the [Installation and Deployment Document](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Install/index.html)》。You can view the [WeBASE Management Platform User Manual](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Console-Suit/index.html#webase)》。After the setup is complete, open the WeBASE management platform and you will find that the transaction is like this: sender, recipient, transaction input data, etc.。After the transaction is executed, a transaction receipt is generated。The transaction receipt contains an event, which records the event log during the execution of the transaction.。
+At this point you can build a WeBASE management platform. For the build method, see the [Installation and Deployment Document](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Install/index.html); for usage, see the [WeBASE Management Platform User Manual](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Console-Suit/index.html#webase). After the setup is complete, open the WeBASE management platform and you will find that a transaction looks like this: sender, recipient, transaction input data, and so on. After the transaction is executed, a transaction receipt is generated. The receipt contains events, which record the event logs produced during the execution of the transaction.
![](../../../../images/articles/webase-transaction/IMG_5606.PNG)
![](../../../../images/articles/webase-transaction/IMG_5607.PNG)
-After the boss saw the first feeling, it must be mixed。At this point you tap the "decode" button to translate the transaction from a string of "0x gobbledygook" into something humans can read。Which user is linked by which contract method, at a glance。The boss will cast a approving glance after seeing it.。At this time, you must have a feeling that the whole world, who give up me。
+The boss's first reaction on seeing this is bound to be mixed. At this point you tap the "decode" button to translate the transaction from a string of "0x gobbledygook" into something humans can read. Which user called which contract method becomes clear at a glance. The boss casts you an approving glance, and at this moment you cannot help feeling: if not me, then who?
![](../../../../images/articles/webase-transaction/IMG_5608.PNG)
@@ -60,7 +60,7 @@ After the boss saw the first feeling, it must be mixed。At this point you tap t
## Transaction Audit
-The deal is on the chain, and it's on display, but it's not enough.。Each institution in the alliance chain shares and transfers data on the chain in accordance with the regulations established by the Alliance Chain Committee.。These regulations are often literal, compliance, lack of regulation and auditing。Therefore, in order to regulate how everyone uses the chain's computing and storage resources and to avoid abuse of the chain's computing and storage resources by certain institutions, there is an urgent need for a set of services to assist in monitoring and auditing chain behavior.。The transaction audit provided by the WeBASE management platform is a set of services that assist in the regulation and audit of behavior on the chain.。It combines blockchain data, WeBASE management platform private key management and contract management data, blockchain data as raw materials, private key management and contract management as the basis for a comprehensive data analysis function.。
+The transaction is on the chain and on display, but that is still not enough. Each institution in the alliance chain shares and transfers data on the chain in accordance with the regulations established by the alliance chain committee. These regulations, however, often exist only on paper, and compliance lacks supervision and auditing. Therefore, in order to regulate how everyone uses the chain's computing and storage resources and to prevent certain institutions from abusing them, there is an
urgent need for a set of services to assist in monitoring and auditing on-chain behavior. The transaction audit provided by the WeBASE management platform is exactly such a set of services. It takes blockchain data as the raw material and the WeBASE platform's private key management and contract management data as the reference, and on that basis provides comprehensive data analysis.

Key indicators of transaction audit:

@@ -74,13 +74,13 @@ The transaction audit interface is as follows

![](../../../../images/articles/webase-transaction/IMG_5610.PNG)

-WeBASE management platform provides visual decentralized contract deployment, transaction monitoring, audit functions, easy to identify abnormal users, abnormal contracts, as well as abnormal trading volume, to provide a basis for alliance chain governance.。
+The WeBASE management platform provides visual decentralized contract deployment, transaction monitoring, and audit functions, making it easy to identify abnormal users, abnormal contracts, and abnormal transaction volumes, and providing a basis for alliance chain governance.

## Return to the original heart

-Here, we can see a transaction, from the assembly to the blockchain node, and then the node executes the drop into the transaction receipt, transaction and state, to the transaction display, transaction after-the-fact audit and supervision.。WeBASE is involved in the assembly of the chain, the transaction display, the transaction audit, basically can be said that WeBASE is involved in the whole process of the transaction.。
+Here we have followed a transaction from assembly, to the blockchain node, through execution into receipt, transaction, and state, and on to transaction display and after-the-fact audit and supervision. WeBASE takes part in transaction assembly, transaction display, and transaction audit, so it can basically be said that WeBASE is involved in the whole process of the transaction.

-In the process of assembling the chain, it provides a transaction chain agent subsystem, which effectively shields the complexity of the underlying blockchain, reduces the threshold for developers, and helps developers build alliance chain applications at high speed.。During the transaction display process, WeBASE has built a visual management platform to display transactions in three dimensions, making it easy for developers to monitor and view transaction data in real time.。In the process of post-transaction audit and supervision, it also provides comprehensive data analysis functions to assist in the behavior of the supervision and audit chain, and effectively govern the alliance chain.。
+For getting transactions on the chain, WeBASE provides a transaction proxy subsystem that effectively shields the complexity of the underlying blockchain, lowers the threshold for developers, and helps them build alliance chain applications quickly. For transaction display, WeBASE provides a visual management platform that presents transactions in multiple dimensions, making it easy for developers to monitor and view transaction data in real time. For after-the-fact audit and supervision, it provides comprehensive data analysis functions that help supervise and audit on-chain behavior and govern the alliance chain effectively.

------

diff --git a/3.x/en/docs/articles/4_tools/41_webase/webase_data_output.md b/3.x/en/docs/articles/4_tools/41_webase/webase_data_output.md
index e8b0318e2..6a6135be0 100644
--- a/3.x/en/docs/articles/4_tools/41_webase/webase_data_output.md
+++ b/3.x/en/docs/articles/4_tools/41_webase/webase_data_output.md
@@ -2,63 +2,63 @@

Author : ZHANG Long | FISCO BCOS Core Developer

-With the rapid development of blockchain technology, various applications have sprung up, the data on the chain is growing exponentially, blockchain-based big data 
scenarios have become a battleground for military strategists, and data analysis has also become a rigid demand.。
+With the rapid development of blockchain technology, applications have sprung up and on-chain data is growing exponentially; blockchain-based big data scenarios have become fiercely contested ground, and data analysis has become a hard requirement.

-However, due to the storage characteristics of the data on the chain, it can only be obtained directly from the chain through the interface, which requires a large amount of code to be written for the smart contract interface, which is costly.;At the same time read data from the chain, in addition to the network overhead, but also need to perform decoding operations, and even traverse the MPT tree, etc., read performance is poor;More importantly, global data processing cannot be performed directly on the chain, which cannot meet the needs of big data scenarios, such as complex queries, big data mining and analysis.。
+However, because of how on-chain data is stored, it can only be fetched directly from the chain through smart contract interfaces, which requires writing a large amount of interface code and is costly. Reading from the chain also incurs network overhead, decoding operations, and even MPT tree traversal, so read performance is poor. More importantly, global data processing cannot be performed directly on the chain, which cannot satisfy big data scenarios such as complex queries, data mining, and analysis.

-To meet the needs of users, we are committed to providing an automated and intelligent data export solution, and through continuous iteration and improvement, to meet the various demands of users based on data export, helping blockchain big data scenarios to land quickly.。This article will start from the user needs, layer by layer to uncover**WeBASE Data Export**The Mystery of Component Function, Feature and Architecture Evolution。
+To meet users' needs, we are committed to providing an automated, intelligent data export solution, continuously iterated and improved to satisfy the various demands built on data export and to help blockchain big data scenarios land quickly. Starting from user needs, this article unveils, layer by layer, the functions, features, and architecture evolution of the **WeBASE Data Export** components.

-WeBASE is a middleware platform built between blockchain applications and FISCO BCOS nodes, abstracting commonalities in technology and business architecture, forming universal and experience-friendly components, and simplifying the blockchain application development process.。
+WeBASE is a middleware platform built between blockchain applications and FISCO BCOS nodes. It abstracts the commonalities of technical and business architectures into universal, experience-friendly components, simplifying blockchain application development.

### The user said: the performance of obtaining data on the chain is poor, and it is not convenient for big data processing, so there is WeBASE-Collect-Bee

-For users, they want to have raw data to support big data processing, but due to the unique chain storage structure of the blockchain and the codec operations and state trees designed for security, the performance of reading data from the chain is poor, so we designed [WeBASE-Collect-Bee](https://github.com/WeBankFinTech/WeBASE-Collect-Bee/tree/master ) 。
+Users want raw data to support big data processing, but because of the blockchain's unique on-chain storage structure and the codec operations and state tree designed for security, reading data from the chain performs poorly. So we designed [WeBASE-Collect-Bee](https://github.com/WeBankFinTech/WeBASE-Collect-Bee/tree/master).

-WeBASE-Collect-Bee's initial architecture is 
shown in the figure below. Its purpose is to reduce the development threshold for obtaining block data, improve data acquisition efficiency, and support big data processing. Users only need to perform simple configuration to export block data to a specified storage medium, such as a database that supports complex relational queries, big data mining, or big data platforms.。
+The initial architecture of WeBASE-Collect-Bee is shown in the figure below. Its purpose is to lower the development threshold for obtaining block data, improve data acquisition efficiency, and support big data processing. Users only need simple configuration to export block data to a specified storage medium, such as a database that supports complex relational queries and big data mining, or a big data platform.

![](../../../../images/articles/webase_data_output/IMG_5618.PNG)

-WeBASE-Collect-Bee includes three modules: block acquisition module, block processing module and persistence module。
+WeBASE-Collect-Bee consists of three modules: a block acquisition module, a block processing module, and a persistence module.

-- Block acquisition module: Obtain the corresponding block according to the block ID;
+- Block acquisition module: obtains the corresponding block according to the block ID;
- Block data processing: parsing block data, block transaction data;
-- Account Data Processing: Parsing Blockchain Account Data。
+- Account data processing: parses blockchain account data.

-Users only need to provide the relevant configuration of the chain and database configuration, you can export the data on the chain with one click, and then you can use SQL to operate on the data in the database, while ensuring that WeBASE-Collect-The Bee service runs normally, and the database can synchronize the data on the chain in almost real time。
+Users only need to provide the chain's connection details and the database configuration to export on-chain data with one click, and then you 
can use SQL to work with the data in the database. Meanwhile, as long as the WeBASE-Collect-Bee service is running normally, the database stays synchronized with the on-chain data in near real time.

### Users say: business data acquisition workload, and not easy to maintain and reuse, there is WeBASE-Codegen-Monkey

-Only block data is not enough, users are more concerned about business data, that is, transaction data.。The transaction data is linked by calling the smart contract method, and in order to view the execution of the transaction, there is a large amount of log data in the transaction, that is, event data, which is very important for business analysis.。
+Block data alone is not enough; users care more about business data, that is, transaction data. Transactions reach the chain through calls to smart contract methods, and, so that their execution can be traced, a transaction carries a large amount of log data, that is, event data, which is very important for business analysis.

-To obtain transaction data and event data, each transaction and event on the blockchain must be parsed according to the smart contract, and the core modules include at least: transaction / event and data parsing, database access interface, POJO, SQL several modules.。
+To obtain transaction and event data, each transaction and event on the blockchain must be parsed according to its smart contract. The core modules include at least transaction / event data parsing, database access interfaces, POJOs, and SQL scripts.

-As shown in the figure below, assuming that our business contains 2 smart contracts, each smart contract contains 2 interfaces and 2 events, each module needs to write code independently, then at least 32 code files or scripts need to be written, the workload is quite large, the maintenance is complex, and can not be reused.。
+As shown in the figure below, assuming our business contains 2 smart contracts, each with 2 interfaces and 2 events, and each module's code is written independently, then at least 32 code files or scripts must be written. The workload is considerable, maintenance is complex, and nothing can be reused.

![](../../../../images/articles/webase_data_output/IMG_5619.PNG)

-Based on this, we designed [WeBASE-Codegen-Monkey](https://github.com/WeBankFinTech/WeBASE-Codegen-Monkey) 。WeBASE-Codegen-Monkey is used to generate all the core code for parsing and storing transaction / event data。Users do not need to write any code, only need to provide smart contract files, WeBASE-Codegen-Monkey automatically parses the contract, generates all code files for obtaining transaction / event data, and automatically interacts with WeBASE-Collect-Bee Assembled into an Independent Service。WeBASE-Codegen-The Monkey architecture is shown in the following figure。
+Based on this, we designed [WeBASE-Codegen-Monkey](https://github.com/WeBankFinTech/WeBASE-Codegen-Monkey). WeBASE-Codegen-Monkey generates all the core code for parsing and storing transaction / event data. Users do not need to write any code: given only the smart contract files, WeBASE-Codegen-Monkey automatically parses the contracts, generates all the code files for obtaining transaction / event data, and assembles them with WeBASE-Collect-Bee into a standalone service. The WeBASE-Codegen-Monkey architecture is shown in the following figure.

![](../../../../images/articles/webase_data_output/IMG_5620.JPG)

-WeBASE-Codegen-Monkey includes contract resolution module, code template module, code generation module, component assembly module。
+WeBASE-Codegen-Monkey includes a contract parsing module, a code template module, a code generation module, and a component assembly module.

- Contract parsing module: parse the smart contract file to obtain the transaction and event objects in the contract;
-- Code Template Module: Code Template for Generating Obtaining Transaction / Event Data;
-- Code generation 
module: populates the code template and generates the code file according to the obtained transaction and event objects.;
-- Component Assembly Module: Used to combine the generated code and WeBASE-Collect-Bee assembled into a separate service。
+- Code template module: provides the code templates for obtaining transaction / event data;
+- Code generation module: fills the code templates with the obtained transaction and event objects and generates the code files;
+- Component assembly module: assembles the generated code with WeBASE-Collect-Bee into a standalone service.

-Due to the new acquisition of transaction / event data, the corresponding WeBASE-Collect-Bee architecture evolves as follows, adding transaction data processing modules and event (event) data processing modules。Users only need to provide smart contract files to get almost all the data on the chain。
+With the added acquisition of transaction / event data, the WeBASE-Collect-Bee architecture evolves as follows, adding a transaction data processing module and an event data processing module. Users only need to provide the smart contract files to obtain almost all the data on the chain.

![](../../../../images/articles/webase_data_output/IMG_5621.PNG)

-From the user's point of view, you only need to place the certificate file and smart contract file of the chain in the specified directory, then configure the node and database, and set the package name of the smart contract.。
+From the user's point of view, you only need to place the chain's certificate files and smart contract files in the specified directory, configure the node and database, and set the package name of the smart contract.

```
#### Node IP and communication port, group number。NODE _ NAME can be any combination of characters and numbers
system.nodeStr=[NODE_NAME]@[IP]:[PORT]
system.groupId=[GROUP_ID]
-#### Database information. For the time being, only MySQL is supported.;serverTimezone is used to set the time zone
+#### Database information. Only MySQL is supported for now; serverTimezone sets the time zone
system.dbUrl=jdbc:mysql://[IP]:[PORT]/[database]?useSSL=false&serverTimezone=GMT%2b8&useUnicode=true&characterEncoding=UTF-8
system.dbUser=[user_name]
system.dbPassword=[password]
@@ -69,49 +69,49 @@ monitor.contractPackName = [package name specified when compiling Solidity contr

### The user said: with the data, to use also need to develop data access interface, inconvenient, there is a user interface

-For users, although we have exported all the data that users care about to the DB, and the name of each table corresponds to the transaction method / event name, the field name is intuitive and easy to understand, but if users want to use the data in their own system, they also need to write a large number of database access interfaces.。Based on this, WeBASE-Collect-Bee added**User Interface Module**, As shown in the figure below。
+Although we have exported all the data users care about into the DB, with each table named after its transaction method / event and with intuitive, readable field names, users who want to use the data in their own systems still need to write a large number of database access interfaces. For this reason, WeBASE-Collect-Bee added a **user interface module**, as shown in the figure below.

![](../../../../images/articles/webase_data_output/IMG_5622.JPG)

-The user interface module provides two data access methods, one is the API method, which supports local calls from the user system.;The other is REST, which can be accessed through http, reducing business coupling and supporting cross-platform calls.。
+The user interface module provides two data access methods: an API method, which supports local calls from the user's system, and a REST method, which can be accessed over HTTP, reducing 
business coupling and supporting cross-platform calls.

-The user interface is divided into four types of interfaces according to data type: block data interface, account data interface, transaction data interface and event data interface.。Each type of interface supports block height, hash, or account-based queries, as well as complex queries based on time and specific fields.。The user interface allows users to interface with their own systems at zero cost when using data export components.。
+The user interface is divided into four types by data type: block data, account data, transaction data, and event data interfaces. Each type supports queries by block height, hash, or account, as well as complex queries based on time and specific fields. With these interfaces, users of the data export components can connect their own systems at zero cost.

-In addition, in order to facilitate users to verify and view visual data, the data export component integrates the Swagger plug-in. After users complete the deployment of the data export service, they can enter http://your_ip:port/swagger-ui.html, view all user interfaces, and enter query criteria to perform visual queries, as shown in the following figure。
+In addition, to help users verify and view data visually, the data export component integrates the Swagger plug-in. After deploying the data export service, users can open http://your_ip:port/swagger-ui.html to view all user interfaces and enter query conditions for visual queries, as shown in the following figure.

![](../../../../images/articles/webase_data_output/IMG_5623.JPG)

### The user said: with the data and query interface, but not intuitive enough, the boss can not understand, there is Grafana integration

-In order to display blockchain data in a more real-time and visual way to meet the needs of non-technical personnel such as products and operations, based on lightweight considerations, we finally chose the visual data plug-in Grafana.。
+To display blockchain data in a more real-time, visual way for non-technical staff such as product and operations teams, and with lightweight deployment in mind, we chose the data visualization tool Grafana.

-However, Grafana display data needs to write a dashboard template for each table data, learning and writing templates is very expensive.。But don't worry, WeBASE-Code-Monkey automatically generates Grafana scripts。Users only need to install Grafana and configure the data source, and then import the generated Dashboard template script. The data visualization can be completed within 1 minute, as shown in the following figure。
+However, displaying data in Grafana requires writing a dashboard template for every table, and learning to write templates is costly. Don't worry: WeBASE-Codegen-Monkey automatically generates the Grafana scripts. Users only need to install Grafana, configure the data source, and import the generated dashboard template script; data visualization is done within a minute, as shown in the following figure.

![](../../../../images/articles/webase_data_output/IMG_5624.JPG)

-### The user says: a service exports data too slowly, what if the service hangs, there will be multi-threaded processing and distributed deployment. 
+### The user says: a single service exports data too slowly, and what if the service goes down? Hence multi-threaded processing and distributed deployment

-For the data export service, once the performance of the chain is very high, exceeding the TPS of the single-machine data export, then the latest data will never be obtained in the DB, and the data will become older and older, obviously unable to meet the business demand for data.。At the same time, the risk of stand-alone processing is that the system stability is very poor, once the stand-alone service is suspended, the latest data cannot be obtained, and the user interface cannot be used for interaction.。Therefore, we have introduced multi-threaded processing and distributed deployment. The architecture evolution is shown in the following figure。
+For the data export service, once the chain's throughput exceeds the TPS of single-machine export, the DB will never catch up with the latest data and its contents will grow ever staler, which clearly cannot meet the business demand for data. Stand-alone processing also makes stability poor: once the single service goes down, the latest data cannot be obtained and the user interface becomes unusable. Therefore, we introduced multi-threaded processing and distributed deployment. The architecture evolution is shown in the following figure.

![](../../../../images/articles/webase_data_output/IMG_5625.JPG)

#### Thread Management

-Thread management is relatively simple. You only need to turn off the multi-active switch, turn on the single-node task mode, and set the number of blocks processed by independent threads.。As follows, the system opens four threads by default for block capture and processing。
+Thread management is relatively simple. You only need to turn off the multi-active switch to enter single-node task mode and set the number of blocks processed per thread batch. As shown below, the system opens four threads by default for block capture and processing.

```
#### When this parameter is false, enter the single-node task mode
system.multiLiving=false
-#### The number of multithreaded download fragments. The download progress is updated only after all download tasks of the fragment are completed.。
+#### The number of blocks per multithreaded download batch. Progress is updated only after all download tasks of the batch are completed.
system.crawlBatchUnit=100
```

#### Multi-live management

-To further improve the efficiency of data export and ensure system stability and fault tolerance, we integrate Elastic-Job, which supports distributed deployment, task sharding, elastic scaling, parallel scheduling, and customized process tasks。In a distributed environment, the data export component first captures the block through a SimpleJob, and then processes the block through a DataflowJob.。
+To further improve data export efficiency and ensure stability and fault tolerance, we integrate Elastic-Job, which supports distributed deployment, task sharding, elastic scaling, parallel scheduling, and customized process tasks. In a distributed environment, the data export component first captures blocks through a SimpleJob and then processes them through a DataflowJob.

-Consider using Elastic-The cost of the job, the system will automatically generate all the configurations of the task shard and execution policy, except for a few necessary configurations, the user does not need to do anything to complete the multi-live configuration and deployment.。
+Considering the cost of using Elastic-Job, the system automatically generates all task sharding and execution policy configurations. 
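As an aside, the batch-then-parallelize idea behind crawlBatchUnit can be sketched roughly as follows. This is an illustrative Python sketch, not WeBASE code; the batch size, worker count, and `fetch` function are assumptions made for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def make_batches(start, end, batch_unit):
    """Split the inclusive block range [start, end] into contiguous
    batches of at most batch_unit heights (the crawlBatchUnit idea)."""
    return [list(range(h, min(h + batch_unit, end + 1)))
            for h in range(start, end + 1, batch_unit)]

def crawl(start, end, batch_unit=100, workers=4, fetch=lambda h: h):
    """Fetch every block exactly once; each batch finishes as a unit
    before its results are folded in, mirroring the per-batch progress
    update described in the configuration comment above."""
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for batch in pool.map(lambda b: [fetch(h) for h in b],
                              make_batches(start, end, batch_unit)):
            results.extend(batch)
    return results
```

With a real node client, `fetch` would issue the block query; here it is an identity stand-in so the partitioning logic can be seen on its own.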
Except for a few necessary settings, nothing more needs to be done to configure and deploy the multi-active setup. The necessary configuration is as follows.

```
#### Enter multi-node task mode when this parameter is true
@@ -123,15 +123,15 @@ regcenter.serverList=ip:port
regcenter.namespace=namespace
```

-### The user said: the amount of exported data is too large, query and storage performance can not keep up, easy to collapse, there is a sub-database sub-table.
+### The user said: the volume of exported data is too large, query and storage performance cannot keep up, hence database and table sharding

-When there is a large amount of data on the blockchain, exporting to a single database or a single business table will cause huge pressure on operation and maintenance, resulting in the degradation of database performance.。Generally speaking, the data threshold of a single database instance is within 1TB, and the data threshold of a single database table is within 10G, which is a reasonable range.。
+When there is a large amount of data on the blockchain, exporting to a single database or a single business table puts huge pressure on operation and maintenance and degrades database performance. Generally speaking, keeping a single database instance within 1TB and a single table within 10G is a reasonable range.

-If the amount of data exceeds this threshold, it is recommended to shard the data。Split the data in the same table into multiple tables or multiple tables in the same database.。Data export introduces the data management module, and the architecture evolution is shown in the following figure。
+If the data volume exceeds these thresholds, sharding is recommended: splitting the data of one table across multiple tables, or across multiple databases. Data export therefore introduces a data management module, and the architecture evolution is shown in the following figure.

![](../../../../images/articles/webase_data_output/IMG_5626.JPG)

-Data Management Module Integration Sharding-JDBC, supports database and table splitting and read / write splitting。You only need to set the number of shards, and the system automatically generates the shard policy configuration。If you need to support read / write splitting, you can use the-Collect-For configuration in Bee, refer to [Advanced Configuration of Data Export](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Collect-Bee/install.html)。At the user interface layer, users can use the same set of interfaces without feeling like they are operating in the same library or table.。
+The data management module integrates Sharding-JDBC and supports database / table sharding and read / write splitting. You only need to set the number of shards, and the system automatically generates the sharding policy configuration. To enable read / write splitting, configure it in the generated WeBASE-Collect-Bee; for details, see [Advanced Data Export Configuration](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Collect-Bee/install.html). At the user interface layer, the same set of interfaces keeps working, as if everything were still in a single database and table.

```
#### transaction and event data sharding configuration
@@ -140,26 +140,26 @@ system.contractName.[methodName or eventName].shardingNO=XXX
system.sys.[sysTableName].shardingNO
```

-### The user says that if a temporary fork or service exception occurs on the chain and the DB data is inconsistent, there will be exception handling and monitoring alarms. 
+### The user says that a temporary fork or service exception on the chain may leave the DB data inconsistent, hence exception handling and monitoring alarms

-Data Export Service is designed to export on-chain data。On the premise of ensuring performance, stability and scalability, if a non-highly consistent consensus mechanism is selected, there will be a certain probability of temporary forks on the chain, resulting in dirty data in the database.;Or the data export service cannot export the latest data on the chain due to network / service anomalies。
+The data export service is designed to export on-chain data. While ensuring performance, stability, and scalability, a consensus mechanism that is not strongly consistent leaves some probability of temporary forks on the chain, producing dirty data in the database; network or service anomalies can likewise prevent the service from exporting the latest on-chain data.

-In order to ensure data correctness and data consistency, the data export component adds an exception management module and monitoring scripts. So far, the data export component has become very powerful. The complete architecture is shown in the following figure.。
+To ensure data correctness and consistency, the data export component adds an exception management module and monitoring scripts. By now the data export component has become very powerful. 
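The fork check and rollback idea used by the exception management module (hash-check the most recent blocks, roll back from the first mismatch, re-import from the chain; see Exception Handling below) can be sketched roughly like this. The dict-based chain / DB stand-ins and the function itself are illustrative assumptions for the sketch, not the component's actual code.

```python
def verify_and_export(chain_blocks, db_blocks, check_depth=6):
    """chain_blocks / db_blocks map block height -> block hash.
    Re-check the last `check_depth` exported heights against the chain;
    on the first mismatch (a temporary fork), roll back every block at or
    above that height and re-import it from the chain."""
    if not db_blocks:                          # nothing exported yet
        db_blocks.update(chain_blocks)
        return "rolled_back"
    top = max(db_blocks)
    for h in range(max(0, top - check_depth + 1), top + 1):
        if db_blocks.get(h) != chain_blocks.get(h):
            for stale in [k for k in db_blocks if k >= h]:
                del db_blocks[stale]           # roll back >= mismatch height
            db_blocks.update({k: v for k, v in chain_blocks.items() if k >= h})
            return "rolled_back"
    return "consistent"
```

The default depth of 6 reflects the article's premise that a fork deeper than six blocks is vanishingly unlikely.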
The complete architecture is shown in the following figure.

![](../../../../images/articles/webase_data_output/IMG_5627.JPG)

#### Exception Handling

-Exception handling is mainly to verify the correctness of the imported DB data, if a non-highly consistent consensus mechanism is selected, there will be a certain probability of temporary forks on the chain, which may lead to inconsistencies between the data in the DB and the data on the chain.。
+Exception handling mainly verifies the correctness of the data imported into the DB: with a consensus mechanism that is not strongly consistent, temporary forks occur on the chain with some probability and may leave the DB inconsistent with the chain.

Based on the theory that the probability of bifurcation on the chain of six blocks is close to zero, the exception management module performs a hash check on the last six blocks before each data export, and if they are consistent, continue to export。If inconsistent, all chunk data greater than or equal to the height of the abnormal chunk is rolled back, and then re-pulled and imported。

#### Monitor alarm

-Even if a distributed deployment ensures the stability of the data export service, it cannot guarantee whether the data on the chain is actually exported.。The monitor script monitor.sh is used to monitor whether the data on the chain is actually exported.。Mainly based on two dimensions:
+Even if distributed deployment keeps the data export service stable, it cannot by itself guarantee that on-chain data is actually being exported. The monitor script monitor.sh checks whether on-chain data is actually exported, mainly along two dimensions:

-- Over a period of time, the data in the DB starts to lag behind the on-chain data until a certain threshold is reached;
-- The block height on the chain increases, and the block height in the DB remains the same over time。
+- Over a period of time, the data in the DB 
starts to lag behind the data on the chain until a certain threshold is reached;
+- The block height on the chain increases, while the block height in the DB stays unchanged for a period of time.

Users can configure the thresholds according to the actual situation, as shown below.

@@ -170,7 +170,7 @@
threshold=20
warn_number=1
```
-## Users, though.
+## Users, though

### Users say: historical data on the chain is worthless, capturing the full amount wastes resources, and only the most recent data is wanted.

@@ -188,7 +188,7 @@ system.startDate=XXXX
In actual scenarios, users may not need full data, but only the data of a specific transaction / event, or of a specific field within one. Data export supports such personalized export, which can be configured as follows.
```
-#### Set whether to export specific transaction or event data. All data is exported by default.
+#### Set whether to export specific transaction or event data. All data is exported by default
monitor.[contractName].[methodName/eventName].generated=on/off
#### Ignore specific fields of a specific event in a specific contract and do not crawl them
monitor.[contractName].[methodName/eventName].ignoreParam=XXX,XXX
@@ -199,13 +199,13 @@ monitor.[contractName].[methodName/eventName].ignoreParam=XXX,XXX
The data export component can flexibly configure the frequency of data capture tasks through the following configuration items.
```
-#### Grabbing frequency of all methods and events. By default, it is polled every 5 seconds.
By default, it is polled every 5 seconds
monitor.default.frequency=5
```
-### The user says that the individual fields of the transaction / event are of the string type, and the database defaults to 255, which will cause an error to be reported in the database.。
+### The user says: some transaction / event fields are of string type, and the database default of 255 characters causes database errors
-Data export is VARCHAR by default for bytes and string types in smart contracts(255)This design is designed to save database resources and meet most scenarios, but there are also individual fields that exceed 255 lengths, resulting in inconsistent data.。Therefore, the data export component provides configuration for ultra-long fields, which can effectively and reasonably utilize database storage space and avoid waste of resources.。
+Data export maps the bytes and string types in smart contracts to VARCHAR(255) by default. This design saves database resources and covers most scenarios, but individual fields may exceed 255 characters, resulting in inconsistent data. Therefore, the data export component provides configuration for over-length fields, which makes effective and reasonable use of database storage space and avoids wasting resources.

```
#### Configure the length of a specific field in the database for a specific transaction / event in a specific contract
@@ -214,17 +214,17 @@ length.[contractName].[methodName or eventName].[paraName]=1024

### The user says: exporting raw data is not what I want; for example, I want the latest balance of an account, not its full change history.
WeBASE-Collect-Bee service project, users can carry out secondary development based on the project source code, modify the strategy of importing data。
+The data export component exports the full amount of on-chain data, including all historical data. For users with specific data requirements, the component supports local compilation and copying the executable package to run on other servers: running generate_bee.sh generates a complete WeBASE-Collect-Bee service project, on which users can carry out secondary development and modify the data import strategy.

### The user says: the overall service is very powerful, but I only want to integrate some of its functions into my own system.

-The data export service takes into account module coupling and integrates the entire WeBASE-Collect-Bee is split into block acquisition, data parsing, database operations and public modules, and users can directly use independent modules according to their needs。
+The data export service takes module coupling into account and splits the entire WeBASE-Collect-Bee into block acquisition, data parsing, database operation, and public modules; users can directly use the independent modules they need.

## Users can not only say

-For any one program, there is no perfect, only the most suitable。The data export component is dedicated to solving blockchain big data scenarios, not only for specific business requirements, but also as a powerful tool throughout development, testing and operation for a blockchain project, improving the efficiency of research and development, testing and operation.。
+For any program there is no perfect design, only the most suitable one. The data export component is dedicated to blockchain big-data scenarios: it serves not only specific business requirements, but also works as a powerful tool throughout the development, testing and operation of a blockchain project, improving efficiency at every stage.

-For the needs
of users, our consistent attitude is: welcome to say, but not afraid to say, if necessary, you have to say。In the face of like-minded partners, we have already given the answer: chat, always waiting;Welcome with both hands.。
+As for user needs, our consistent attitude is: you are welcome to speak up, we are not afraid of what you say, and when necessary we want you to say it. To like-minded partners we have already given our answer: we are always ready to chat, and you are welcome with open arms.

Beyond actively participating in open-source blockchain technology, we are also committed to building an open-source ecosystem together with our users. Users can not only speak up but also join us: here there is not only cutting-edge technology, but also poetry and distant horizons!

diff --git a/3.x/en/docs/articles/4_tools/41_webase/webase_node_preposition.md b/3.x/en/docs/articles/4_tools/41_webase/webase_node_preposition.md
index c0ccf7512..a756d45b1 100644
--- a/3.x/en/docs/articles/4_tools/41_webase/webase_node_preposition.md
+++ b/3.x/en/docs/articles/4_tools/41_webase/webase_node_preposition.md
@@ -2,19 +2,19 @@
Author : He Shuoyan | FISCO BCOS Core Developer
-The FISCO BCOS tethering script has given developers the ultimate tethering experience. How can you quickly build a blockchain visual interface to interact with the blockchain?
WeBASE-Front is the component that can meet this expectation the fastest。WeBASE-Front provides developers with a subset of the minimum functions of blockchain interaction, which is lightweight and easy to install without installing any third-party components.。Build WeBASE after completing the node-Front, you can open the interface in the browser, quickly open the blockchain experience journey。
+The FISCO BCOS chain-building script already gives developers a smooth chain-building experience; how can you just as quickly build a visual interface to interact with the blockchain? WeBASE-Front is the component that meets this expectation fastest. WeBASE-Front provides developers with a minimal subset of blockchain interaction features; it is lightweight and easy to install, with no third-party components required. After building a node, build WeBASE-Front, then open the interface in a browser to quickly start your blockchain journey.

WeBASE-Front also offers a lot of friendly and useful features:

-- in WeBASE-On the front page, developers can view the block information, transaction information, group information, number of nodes, and node status of the blockchain.**The core information of the blockchain network is clear at a glance**。
-- WeBASE-Front provides a contract development IDE for developers to write and debug smart contracts.**Quickly develop your own blockchain applications**。
-- WeBASE-Front integrates the Web3SDK and encapsulates the Web3SDK interface.
Developers can call WeBASE through HTTP requests.-Front interface interacts with blockchain nodes。This approach shields the limitations of the SDK language.**Developers of any language can call WeBASE-The interface of Front interacts with the blockchain**。
+- On the WeBASE-Front homepage, developers can view block information, transaction information, group information, the number of nodes, and node status. **The core information of the blockchain network is clear at a glance**.
+- WeBASE-Front provides a contract development IDE for developers to write and debug smart contracts. **Quickly develop your own blockchain applications**.
+- WeBASE-Front integrates the Web3SDK and encapsulates its interface, so developers can call the WeBASE-Front interface via HTTP requests to interact with blockchain nodes. This approach shields the language limitations of the SDK. **Developers of any language can interact with the blockchain by calling the WeBASE-Front interface**.

-Of course, WeBASE-The function of Front is not limited to this。As a member of the WeBASE family, this component cooperates with WeBASE-Node-Manager and WeBASE-Web is used together as a node front to monitor the blockchain network in all directions and realize the enterprise-level blockchain monitoring function.。
+Of course, the functionality of WeBASE-Front does not stop there. As a member of the WeBASE family, it works together with WeBASE-Node-Manager and WeBASE-Web as a node front to monitor the blockchain network comprehensively, realizing enterprise-level blockchain monitoring.

## Function Introduction

-WeBASE-Front has the following five main functions:
+WeBASE-Front has five main features:

### I. Data Overview

@@ -28,7 +28,7 @@
Displays the**Number of nodes, node ID, block height, pbftview, and node running

### III.
Contract Management

-This is WeBASE-The core functionality of Front, on which developers can**Write, compile, debug contracts**, 以及**JAVA class for generating contracts with one click**The deployed contracts will be stored in the H2 embedded database, and historical contracts can be queried in the contract list.。
+This is the core feature of WeBASE-Front: developers can**write, compile and debug contracts**and**generate a contract's JAVA class with one click**. Deployed contracts are stored in the embedded H2 database, and historical contracts can be queried in the contract list.

![](../../../../images/articles/webase_node_preposition/IMG_5629.PNG)

@@ -50,33 +50,33 @@
Generate elliptic curve public-private key pairs, supporting**Import Export Priv

## technical analysis

-WeBASE-Front is based on FISCO BCOS**spring-boot-starter**(Please refer to the link at the end of the article) A development example of the project。**Web3SDK interface encapsulation, dynamic group switching, deployment call contract (without generating JAVA classes), public-private key pair generation,**Refer to WeBASE for these common features-Front code, developers can learn from and write their own springboot applications。
+WeBASE-Front is a development example based on the FISCO BCOS**spring-boot-starter**project (please refer to the link at the end of the article). For common features such as**Web3SDK interface encapsulation, dynamic group switching, deploying and calling contracts (without generating JAVA classes), and public-private key pair generation**, refer to the WeBASE-Front code; developers can learn from it to write their own springboot applications.

-For ease of installation and use, WeBASE-Front uses a lightweight**H2 Embedded Database**The backend uses the SSH framework and uses JPA to access the database.;Front End Adopt**VUE**Framework development, front-end resources are built into the back-end springboot service, no need to install and configure nginx and mysql these steps,
directly start the Java service to access the interface。
+For ease of installation and use, WeBASE-Front uses a lightweight**H2 embedded database**; the backend is built on the springboot framework and uses JPA to access the database, while the front end is developed with the**VUE**framework. Front-end resources are built into the backend springboot service, so there is no need to install or configure nginx or mysql: simply start the Java service and access the interface.

The generated public and private keys and deployed contracts are stored in the H2 database, making it easy to query history. The performance monitoring function uses the**sigar**data collection component; the collected data is also stored in the H2 database, but only the most recent week's monitoring data is kept.

## Deployment method

-As node front, WeBASE-Front needs to be deployed on the same machine as the node。When deploying multiple nodes on one machine, we recommend that you deploy only one WeBASE-Front Services。
+As a node front, WeBASE-Front needs to be deployed on the same machine as the node. When deploying multiple nodes on one machine, we recommend deploying only one WeBASE-Front service.

-WeBASE-There are three ways to deploy Front:
+There are three ways to deploy WeBASE-Front:

-1. Separate deployment is used as an independent console, and is equipped with an interface, deployment is simple and quick, just download WeBASE-Front application, replace node certificate to start。We recommend that beginners and developers use this deployment method to query information about the blockchain and develop and debug smart contracts.。(Please refer to the link at the end of the article for installation)
+1. Standalone deployment as an independent console, equipped with an interface. Deployment is simple and quick.
Just download the WeBASE-Front application and replace the node certificate to start. We recommend that beginners and developers use this deployment method to query blockchain information and to develop and debug smart contracts. (Please refer to the link at the end of the article for installation)

-2. In Mode 1, WeBASE-Front is used as a visual console. The private key is encrypted and stored in the H2 database by default. If you need a more secure private key protection scheme, you can combine WeBASE-The private key is stored in WeBASE for deployment with the Sign service.-Sign, WeBASE-Sign service is responsible for signing transaction data, providing a more secure private key protection scheme。
+2. Building on method 1, WeBASE-Front serves as a visual console; the private key is encrypted and stored in the H2 database by default. If a more secure private key protection scheme is required, deploy it together with the WeBASE-Sign service: private keys are stored in WeBASE-Sign, which is responsible for signing transaction data.

- This method deploys WeBASE on top of Method 1-Sign service. If you have high requirements for private key security, use this deployment method.。(WeBASE-Sign service please refer to the link at the end of the article)
+ This method adds the WeBASE-Sign service on top of method 1; use it if you have high security requirements for private keys. (For the WeBASE-Sign service, see the link at the end of the article)

-3.
Combined with WeBASE-Node-Manager and WeBASE-Web services are deployed together using WeBASE here-Front only as a node front, multiple node front unified by WeBASE-Node-Manager Management, WeBASE-Node-Manager has an authentication login system implemented by spring security, and pulls the block information and transaction information on the chain and stores it in the Mysql database.。It is recommended to use this method in the production environment. The architecture diagram is as follows。(Please refer to the link at the end of the article for WeBASE installation and deployment)
+3. Deploy together with the WeBASE-Node-Manager and WeBASE-Web services. Here WeBASE-Front acts only as a node front, and multiple node fronts are managed centrally by WeBASE-Node-Manager, which has an authentication and login system implemented with spring security and pulls on-chain block and transaction information into a Mysql database. This method is recommended for production environments; the architecture diagram is as follows. (Please refer to the link at the end of the article for WeBASE installation and deployment)

![](../../../../images/articles/webase_node_preposition/IMG_5631.PNG)

## SUMMARY

-WeBASE-As a convenient and powerful blockchain component, Front can be used independently as a visual console for developers to interact with the blockchain, and can also cooperate with WeBASE.-Node-Manager and WeBASE-Web is used together to realize the blockchain monitoring function of production environment。
+As a convenient and powerful blockchain component, WeBASE-Front can be used independently as a visual console for developers to interact with the blockchain.
It can also be used in conjunction with WeBASE-Node-Manager and WeBASE-Web to implement blockchain monitoring in production environments.

-WeBASE-Front is still in continuous optimization and development, and will add more and more features in the future, such as the system management function of adding and deleting nodes in the alliance chain, and the transaction resolution function.。Of course, continuous iterative upgrades will maintain its ease of use and convenience.。Welcome community friends to mention PR and ISSUE, participate in optimization together。
+WeBASE-Front is under continuous optimization and development, and will add more features in the future, such as system management for adding and deleting nodes in a consortium chain, and transaction parsing. Of course, continuous iterative upgrades will preserve its ease of use and convenience. Community friends are welcome to submit PRs and issues and join in the optimization.

------

@@ -90,5 +90,5 @@ WeBASE-Front is still in continuous optimization and development, and will add m
- [WeBASE-Sign Service](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Sign/index.html)
-- [WeBASE Installation Deployment](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Install/enterprise.html)
+- [WeBASE installation deployment](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Install/enterprise.html)
diff --git a/3.x/en/docs/articles/4_tools/41_webase/webase_release.md b/3.x/en/docs/articles/4_tools/41_webase/webase_release.md
index e7c996648..dd51c2e6e 100644
--- a/3.x/en/docs/articles/4_tools/41_webase/webase_release.md
+++ b/3.x/en/docs/articles/4_tools/41_webase/webase_release.md
@@ -1,26 +1,26 @@
# FISCO BCOS welcomes the blockchain middleware platform WeBASE, application landing speed
-With the development of blockchain technology, more and more developers are developing a variety of rich applications based on a stable and efficient blockchain
underlying platform, combined with smart contracts and on-chain interfaces, followed by more demand for the ease of use of blockchain systems, the speed of application development, and the richness of business components.。
+With the development of blockchain technology, more and more developers are building rich applications on a stable and efficient underlying blockchain platform, combining smart contracts and on-chain interfaces. This brings growing demand for ease of use of blockchain systems, speed of application development, and richness of business components.

-Facing the underlying platform, "naked writing" smart contracts and the underlying code is technically feasible, and further providing "what you see is what you get" and "out of the box" this threshold-free experience is motivated by feedback from community developers, such as.
+Writing smart contracts and underlying code "bare-handed" against the underlying platform is technically feasible, but going further to provide a "what you see is what you get", "out of the box", zero-threshold experience is motivated by feedback from community developers, such as:

-- Lack of easy-to-use smart contract development tools, contract development and debugging efficiency is not high, it is difficult to easily manage the configuration information of each node in the chain, observe its running state.。
+- A lack of easy-to-use smart contract development tools: contract development and debugging are inefficient, and it is hard to conveniently manage the configuration of each node in the chain and observe its running status.

-- The presentation of blocks, transactions, receipts and other data on the blockchain is not friendly enough, making it difficult to conduct flexible and multi-dimensional analysis of the massive amounts of data on the chain.。
+- The presentation of blocks, transactions, receipts and other data on the blockchain is not friendly enough, making it difficult to
conduct flexible and multi-dimensional analysis of the massive data on the chain.

-- A common audit tool is needed for the various accounts involved in the business and the transactions they conduct in order to detect and eliminate anomalies in a timely manner.。
+- A common audit tool is required for the various accounts involved in the business and the transactions they conduct, in order to detect and eliminate anomalies in a timely manner.

-In response to the continuing needs of the open source community and the open sharing of the results of its long-term exploration, the member of the FISCO BCOS Open Source Working Group, WeBank, contributes a high-speed channel from the bottom of the blockchain to the application landing.。
+In response to the continuing needs of the open source community, and to openly share the results of its long-term exploration, WeBank, a member of the FISCO BCOS Open Source Working Group, contributes a high-speed channel from the blockchain bottom layer to application landing.

-**On July 2, WeBASE, a blockchain middleware platform developed by WeBank, was officially launched. The platform supports the underlying FISCO BCOS platform.**For a variety of objects, such as developers, operators, and according to different scenarios, including development, debugging, deployment, auditing, etc., to create a wealth of functional components and utilities, providing a friendly, visual operating environment.。
+**On July 2, WeBASE, a blockchain middleware platform developed by WeBank, was officially launched.
The platform supports the underlying FISCO BCOS platform.**For roles such as developers and operators, and for scenarios including development, debugging, deployment and auditing, it builds rich functional components and utilities, providing a friendly, visual operating environment.

-Deploying WeBASE based on the underlying FISCO BCOS platform can simplify the blockchain application development process, greatly reduce the time and labor costs for enterprises to build blockchain applications and conduct operational analysis, so that developers can easily control the blockchain network and focus on application development and business landing.。
+Deploying WeBASE on the underlying FISCO BCOS platform simplifies the blockchain application development process and greatly reduces the time and labor costs for enterprises to build blockchain applications and conduct operational analysis, so developers can easily control the blockchain network and focus on application development and business landing.

**WeBASE code repository address**: https://github.com/WeBankFinTech/WeBASE

## Introduction to WeBASE

-**WeBASE (WeBank Blockchain Application Software Extension) is a middleware platform built between blockchain applications and FISCO BCOS nodes.**As shown in the following figure, developers can deploy WeBASE interactive modules such as browsers, management desks, and other tools on top of blockchain nodes, and can also develop applications based on WeBASE built-in components and APIs.。
+**WeBASE (WeBank Blockchain Application Software Extension) is a middleware platform built between blockchain applications and FISCO BCOS nodes.**As shown in the following figure, developers can deploy WeBASE interactive modules such as the browser and management console on top of blockchain nodes, and can also develop applications based on WeBASE's built-in components and APIs.

![](../../../../images/articles/webase_release/IMG_4950.PNG)
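Since applications can be developed against WeBASE's built-in components and APIs, any language with an HTTP client can query the chain without a blockchain SDK. A minimal sketch follows; the port and endpoint path are illustrative assumptions, so check the WeBASE interface documentation for the actual values:

```python
# Query a chain's block height over plain HTTP. The URL layout below is a
# hypothetical WeBASE-Front-style path -- confirm the real one in the docs.
import json
from urllib.request import urlopen

def block_number_url(host, port, group_id):
    """Build the (assumed) block-height endpoint URL for one group."""
    return f"http://{host}:{port}/WeBASE-Front/{group_id}/web3/blockNumber"

def fetch_block_number(url):
    """Plain HTTP GET; works the same from any language with an HTTP client."""
    with urlopen(url) as resp:
        return int(json.load(resp))
```

The point of the design is exactly this: because the interface is plain Restful HTTP, the SDK's language restrictions disappear.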
@@ -30,15 +30,15 @@
Provide a friendly smart contract development platform, support online compilati

![](../../../../images/articles/webase_release/IMG_4951.PNG)

-Second, on the basis of SDK packaging Restful style API interface, Restful interface is more intuitive, good scalability, can easily adapt to a variety of programming languages。The transaction data can be coded and decoded through the interface, and the details of the data on the chain can be displayed in an all-round and multi-dimensional manner on devices including web pages and mobile terminals.。
+Second, Restful-style API interfaces are wrapped on top of the SDK. A Restful interface is more intuitive, more extensible, and easily adapts to a variety of programming languages. Transaction data can be encoded and decoded through the interface, and on-chain data details can be displayed comprehensively and multi-dimensionally on devices including web pages and mobile terminals.

![](../../../../images/articles/webase_release/IMG_4952.PNG)

-Third, the blockchain management platform is the preferred workbench for operations administrators. It can view the data statistics on the chain, the details of each block, and the multi-dimensional statistical data of each node, and monitor the health of the nodes in an all-round way.。
+Third, the blockchain management platform is the preferred workbench for operations administrators.
It can view on-chain data statistics, the details of each block, and multi-dimensional statistics for each node, monitoring node health in an all-round way.

![](../../../../images/articles/webase_release/IMG_4953.PNG)

-Fourth, the data export component can be configured to export chain data to relational databases, big data processing and other systems, in order to diversify the chain data processing, such as data mining, building business models and so on.。
+Fourth, the data export component can be configured to export on-chain data to relational databases, big-data processing and other systems, enabling diversified processing of chain data, such as data mining and building business models.

## WeBASE's overall architecture and design principles

@@ -48,9 +48,9 @@
The complete architecture of WeBASE is shown in the following figure:

The design concept of WeBASE is to solve each problem with one subsystem and to run without deploying all subsystems, so the following principles were followed from the start of the design:

-**On-demand deployment**WeBASE abstraction abstracts the common features of application development to form various service components, such as business access, private key management, transaction queues, contract development, data export, and auditing. Developers deploy the required components as needed.。
+**On-demand deployment**: WeBASE abstracts the common features of application development into various service components, such as business access, private key management, transaction queues, contract development, data export, and auditing.
Developers deploy the required components as needed.

-**Microservices architecture**WeBASE uses a microservices architecture based on spring-boot framework, providing Restful style interface。
+**Microservices architecture**: WeBASE adopts a microservice architecture based on the spring-boot framework and provides Restful-style interfaces.

**zero coupling**: All WeBASE subsystems exist independently and can be deployed and run on their own, serving different scenarios and avoiding the redundant burden of a "whole family bucket".

@@ -58,45 +58,45 @@ The design concept of WeBASE is to solve a problem with one subsystem and run wi

## Application Development Process Based on WeBASE

-The WeBASE-based application development process has a new experience, and the following diagram visually compares the differences between the two development processes.
+The WeBASE-based application development process offers a new experience; the following diagram visually compares the two development processes.

![](../../../../images/articles/webase_release/640.jpeg)

-Obviously, based on WeBASE application development, the process is greatly simplified。Smart contract development tools, perfect data visualization platform, simple transaction chain way, reduce the development threshold, making the development efficiency greatly improved.。And for the application after the launch of the transaction audit, data export, three-dimensional monitoring and other aspects of the management, WeBASE provides a series of complete components, can effectively avoid developers and enterprises repeatedly break the road.。
+Obviously, WeBASE greatly simplifies the application development process. Smart contract development tools, a complete data visualization platform, and a simple way to put transactions on chain lower the development threshold and greatly improve development efficiency. And for post-launch management such as transaction auditing, data export,
three-dimensional monitoring and similar concerns, WeBASE provides a complete series of components that effectively keep developers and enterprises from repeatedly reinventing the wheel.

## WeBASE's Next Step

Today,**open-sourcing WeBASE is just a small step; going forward, WeBASE plans to open up more features**:

-- Provide more business-proven components that face the business field, facilitate integration into applications with common models, and establish blockchain application development best practices and standard architectures.;
+- Provide more business-proven, business-facing components that are easy to integrate into applications with common models, establishing best practices and standard architectures for blockchain application development;
- Provide various industry solutions and reference implementations;
-- More friendly access for cloud vendors。
+- Provide more friendly access for cloud vendors.

## Community Developer Experience

- **Network Technical Director Lin Dongyi** GUANGZHOU PINGO SOFTWARE CO., LTD

-> PAGO Software is a cloud computing company that is exploring the integration of cloud computing and blockchain innovation。WeBASE has the characteristics of friendly interface design, convenient management and good integration of cloud ecology, which makes the cloud on the blockchain easier, and we can focus more on the business scenarios of blockchain and cloud computing.。
+> PAGO Software is a cloud computing company exploring the integration of cloud computing and blockchain innovation. WeBASE features friendly interface design, convenient management and good integration with the cloud ecosystem, which makes putting blockchain on the cloud easier, so we can focus more on the business scenarios of blockchain and cloud computing.

-- **Architect Wei Wei** Baofu Network Technology (Shanghai) Co., Ltd.
+- **Architect Wei Wei** Baofu Network Technology (Shanghai) Co., Ltd

-> WeBASE is easy to operate, the community documentation is very complete, covering all kinds of components required by the enterprise, pluggable architecture design, combined according to the needs, help us solve the worries of blockchain technology, shorten the process of business data, and quickly realize business value。
+> WeBASE is easy to operate and the community documentation is very complete, covering all kinds of components an enterprise requires. The pluggable architecture design can be combined as needed, helping us resolve blockchain technology worries, shorten the business data pipeline, and quickly realize business value.

- **CTO Jin Zhaokang** Hangzhou Yibi Technology Co., Ltd

-> WeBASE's alliance governance and pre-management capabilities make managing the blockchain as easy as managing chat groups。The timely response of the community and the rapid iteration of functions reflect the strength of the top blockchain service providers in China.。
+> WeBASE's consortium governance and node-front management capabilities make managing a blockchain as easy as managing a chat group. The community's timely responses and rapid feature iteration reflect the strength of China's top blockchain service providers.

-- **Blockchain Architect Sun Yaopu** Full Chain Link Co., Ltd.
+- **Blockchain Architect Sun Yaopu** Full Chain Link Co., Ltd > Each WeBASE system module provides rich deployment documentation with clear deployment steps; following the documents, it can be deployed and used quickly, and the UI design and layout of the system are also very reasonable。 > -> Registration and login functions to meet customer requirements for blockchain information query permission management and reduce development workload;The system integrates the java interface of FISCO BCOS, provides a rich call interface, reduces the development workload, and reduces the difficulty of using the blockchain。 +> The registration and login function meets the customer's requirements for permission management of blockchain information queries and reduces the development workload;the system integrates the Java interface of FISCO BCOS and provides a rich set of call interfaces, reducing the development workload and the difficulty of using the blockchain。 > -> The system provides smart contract editing and compilation and deployment functions, making smart contract development more convenient。 +> The system provides smart contract editing, compilation and deployment functions, making smart contract development more convenient。 - **Founder Zhang Long** Judi Chenghai (Guangzhou) Information Technology Limited -> Judi Chenghai is a middleware provider that focuses on deep blockchain mining. We need to provide customers with packaged interfaces directly to reduce their development workload.。Due to the special nature of our products, we need to be compatible with multiple ecosystems and inclusive of multiple consensus mechanisms, so our development workload is very large.!
+> Judi Chenghai is a middleware provider that focuses on deep cultivation of blockchain, and we need to provide packaged interfaces directly to customers to reduce their development workload。Due to the special nature of our products, we need to be compatible with multiple ecosystems and inclusive of multiple consensus mechanisms, so our development workload is very large! > -> But WeBASE has packaged various interfaces very well, and it can be quickly connected with a little exposure!We hope that WeBASE can continue to support the open source community, and we are willing to continue to pay and contribute to it.。 +> However, WeBASE has encapsulated the various interfaces very well, and we could connect to it quickly after only a little familiarization!We hope that WeBASE will continue to support the open source community, and we are willing to keep giving back and contributing to it。 diff --git a/3.x/en/docs/articles/4_tools/42_buildchain/fast_build_chain.md b/3.x/en/docs/articles/4_tools/42_buildchain/fast_build_chain.md index 61f053fd9..418b9b753 100644 --- a/3.x/en/docs/articles/4_tools/42_buildchain/fast_build_chain.md +++ b/3.x/en/docs/articles/4_tools/42_buildchain/fast_build_chain.md @@ -2,67 +2,67 @@ Author : Bai Xingqiang | FISCO BCOS Core Developer -Like many developers, when the team first started to build the chain, they also went through the stage of confusion: which version to install, how to compile for so long is still prone to errors.?With several nodes, what IP ports are used?Where do certificates come from and where do they go??How do I verify that my chain is really up?... +Like many developers, when our team first started building a chain, we also went through a stage of confusion: which version to install?Why does compilation take so long and still fail?With several nodes, what IP ports are used?Where do certificates come from, and where do they go?How do I verify that my chain is really up?...
-I believe that the engineers who started from the FISCO BCOS1.X version have a small volcano in their hearts, facing super-long documents and super-many operation steps...... Every time a version is deployed, it takes a lot of time, and the engineers can almost spread an egg on their heads.。The data shows that if a software is not used for 15 minutes, users will be lost.。 +I believe that engineers who started with the FISCO BCOS 1.x versions have a small volcano in their hearts: facing super-long documents and a great many operation steps, every deployment of a version takes so much time that the engineers could almost fry an egg on their heads。Data shows that if a piece of software cannot be put to use within 15 minutes, its users will be lost。 In order to extinguish the small volcano in everyone's heart and happily play with blockchain together, ease-of-use optimization of FISCO BCOS was imperative。The team's first goal is to let developers **set up a development test chain in 5 minutes**, which requires a Harry Potter-style summoning command, which we call **build_chain**。 This article will talk about the birth of the build_chain script and how the current script can help。 ## Birth of the build_chain script -The first is to remove the compilation step, the source code compilation not only needs to install the download dependency, but also needs to configure the development environment, even if these two steps go well, the compilation process may also fail due to lack of memory, not to mention the download dependency is often affected by the network speed caused by the download failure.。So we provide a pre-compiled binary distribution package that lets users skip the lengthy compilation phase.。 -Immediately we found a new problem, even though binary distribution packages are available for different platforms, the user's environment is ever-changing, and the installation of the dynamic libraries on which the precompiled program depends becomes a problem.。 -So we thought
of providing a statically compiled binary distribution package that is compatible with a variety of Linux 64-bit operating systems and does not rely on any other libraries, saving time and effort.。In order to achieve static compilation, we do not hesitate to re-implement some functions to remove the dependence on external libraries that do not provide .a.。 +The first is to remove the compilation step. Source compilation not only requires installing and downloading dependencies, but also configuring the development environment; even if these two steps go well, compilation may still fail due to insufficient memory, not to mention that dependency downloads often fail because of network speed。So we provide a pre-compiled binary distribution package that lets users skip the lengthy compilation phase。 +Immediately we found a new problem: even though binary distribution packages are available for different platforms, users' environments vary widely, and installing the dynamic libraries that the precompiled program depends on becomes a problem。 +So we thought of providing a statically compiled binary distribution package that is compatible with a variety of Linux 64-bit operating systems and does not rely on any other libraries, saving time and effort。To achieve static compilation, we did not hesitate to re-implement some functions in order to remove dependence on external libraries that do not provide .a files。 Next we try to reduce the deployment steps and reduce the pressure on users。 There were too many configuration items and they were too flexible; we optimized the configuration, provided appropriate default values for all items, and removed configuration items that did not need flexible customization。 -The configuration file in json format is not intuitive enough to read, and manual modification is easy to cause errors due to format problems.
We replace it with a clearer ini file.。 -Manual deployment of system contracts is too cumbersome. We use precompiled contracts to implement built-in system contracts to manage on-chain configurations.。 -The node directory structure of manual construction and tool script construction is not unified. We organize documents, unify the directory structure created by tools, and provide auxiliary scripts.。 +The configuration file in JSON format is not intuitive to read, and manual modification easily causes errors due to format problems, so we replaced it with a clearer ini file。 +Manual deployment of system contracts is too cumbersome, so we use precompiled contracts to implement built-in system contracts that manage on-chain configuration。 +The node directory structures produced by manual construction and by tool scripts were not unified, so we organized the documents, unified the directory structure created by the tools, and provided auxiliary scripts。 After the above optimizations, we thought there could be an even more lightweight deployment method: try to complete everything in the deployment process with a single script。 -Scripts are lighter and faster than large, full deployment tools;Scripts can be simpler than manual deployment.。In this way, the build _ chain script was born。 +Scripts are lighter and faster than large, full-featured deployment tools;scripts can be simpler than manual deployment。In this way, the build_chain script was born。 ## Help provided by the build_chain script -This script can complete environment check, parameter analysis, FISCO BCOS binary distribution package download, public and private key certificate generation, configuration file generation and tool script generation, etc.
It supports MacOS, Linux 64bit, docker mode and national secret version construction.。 -However, after actual use, we found that under the network conditions at home, the script takes a long time to download the binary distribution package, resulting in a FISCO BCOS chain that cannot be completed within 5 minutes.。 +This script handles environment checking, parameter parsing, downloading the FISCO BCOS binary distribution package, generating public/private keys and certificates, and generating configuration files and tool scripts. It supports MacOS, Linux 64bit, docker mode, and building the national cryptography (guomi) version。 +However, in actual use we found that under domestic (China) network conditions, the script takes a long time to download the binary distribution package, so a FISCO BCOS chain could not be completed within 5 minutes。 In order to achieve the 5-minute chain-building goal, we added CDN support; even if network conditions are not good, you can still complete the chain smoothly within 5 minutes。The small volcano in everyone's heart is extinguished。 Specifically, the help the build_chain script provides includes the following: ### Environmental inspection -The build _ chain script needs to use OpenSSL to generate the relevant certificate files that the node needs to use, while FISCO BCOS 2.0 requires OpenSSL 1.0.2 or later.。The script can continue execution only if a version of the program that meets the requirements is found。 -Note that MacOS comes with LibreSSL, so you need to use brew install OpenSSL to install OpenSSL.。 +The build_chain script needs OpenSSL to generate the certificate files used by the node, and FISCO BCOS 2.0 requires OpenSSL 1.0.2 or later。The script continues execution only if a program version meeting this requirement is found。 +Note that MacOS ships with LibreSSL, so you need to install OpenSSL with brew install OpenSSL。 ### Parsing Parameters -The build _
chain script supports many custom parameters, such as-p specifies the range of ports used by the node,-f build a network with specified configuration,-G build Jianguo secret version,-v Specifies the FISCO BCOS program version number,-o Specify output path, etc. [Refer to details](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/build_chain.html#id4) +The build_chain script supports many custom parameters, such as -p to specify the port range used by nodes, -f to build a network from a specified configuration file, -g to build the national cryptography (guomi) version, -v to specify the FISCO BCOS program version number, and -o to specify the output path. [See details](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/build_chain.html#id4) ![](../../../../images/articles/fast_build_chain/IMG_4954.PNG) ### Get FISCO BCOS executable -FISCO BCOS provides standard and secret versions of precompiled executable programs that can run on most x64 Linux machines.。In addition, for the convenience of developers debugging, while providing a MacOS version of the executable program。 +FISCO BCOS provides precompiled executables in both the standard and the guomi version, which can run on most x64 Linux machines。In addition, for the convenience of developer debugging, a MacOS version of the executable is also provided。 -- The build _ chain script will download the corresponding executable program according to the operating system and whether the country secret。 -- When downloading executable programs from GitHub is slow, it will automatically switch to CDN download。As you can see in the figure below, the FICO-bcos.tar.gz only 7.72M。 -- When not using-When the v option specifies a version, the script automatically pulls the latest version released by FISCO BCOS on GitHub, using the-v option, the specified version of the executable program is downloaded。 +- The build_chain script downloads the corresponding executable according to the operating system and whether the guomi version is required。 +- When downloading executables from GitHub is slow, the script automatically switches to CDN download。As can be seen in the figure below, fisco-bcos.tar.gz is only 7.72M。 +- When no version is specified with the -v option, the script automatically pulls the latest version released by FISCO BCOS on GitHub; when the -v option is used, the executable of the specified version is downloaded。 -In addition to the official clear Ubuntu 16.04+and CentOS 7.2+For platforms other than, it is recommended to use the source code to compile the obtained executable program in production, and then use the-b Options and-f option to build blockchain network。 +For platforms other than the officially verified Ubuntu 16.04+ and CentOS 7.2+, it is recommended in production to use an executable compiled from source, and then use the -b and -f options to build the blockchain network。 ![](../../../../images/articles/fast_build_chain/IMG_4955.PNG) ### Generate private key certificate -FISCO BCOS supports certificate chains. By default, the three-level certificate chain structure is used. The self-signed CA certificate is used as the root certificate of the chain. The authority certificate issued by the CA is used to distinguish the authorities, and then the authority private key is used to issue the certificate used by the node.。 -The conf directory of the node contains three files: ca.crt, node.key, and node.crt. The node uses these three files to establish a two-way SSL link and uses node.key to sign the block in the consensus process.。 -If it is the state secret version, the script will download the TaSSL tool and generate the certificate file of the state secret version.。 +FISCO BCOS supports certificate chains. By default, the three-level certificate chain structure is used. The self-signed CA certificate is used as the root certificate of the chain.
The authority certificate issued by the CA is used to distinguish agencies, and the agency's private key is then used to issue the certificates used by its nodes。 +The conf directory of the node contains three files: ca.crt, node.key, and node.crt. The node uses these three files to establish a two-way SSL connection, and uses node.key to sign blocks during consensus。 +For the national cryptography (guomi) version, the script downloads the TaSSL tool and generates guomi-version certificate files。 ### Generate configuration files and tool scripts -The build _ chain script has a built-in configuration file template for FISCO BCOS nodes, which is modified according to the parameters specified by the user to generate the configuration file used by the node.([Can view the description of the configuration file](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/configuration.html))。 +The build_chain script has a built-in configuration file template for FISCO BCOS nodes, which is modified according to the parameters specified by the user to generate the configuration file used by the node ([see the description of the configuration file](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/configuration.html))。 At the same time, to make it easy for users to start and stop nodes, start.sh and stop.sh are also generated under the node directory ([see the node directory structure description](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/build_chain.html#id5))。 @@ -70,14 +70,14 @@ At the same time, in order to facilitate users to start and stop nodes, start.sh Here are a few tips from the FISCO BCOS team for improving deployment speed and achieving rapid chain building: -- 1. Provide statically compiled binary distribution packages, compatible with multiple operating systems, allowing users to skip the lengthy compilation phase.。 -- 2.
Simplify the configuration, greatly adopt the default parameters that can guarantee the maximum success rate, minimize the information that the user needs to pay attention to, and the user only needs to pay attention to a small amount of network configuration。 -- 3. Standardized directory structure, whether it is a key link, enterprise-level link, manual link... the goal is the same, reducing the complexity of differentiation。 -- 4. clever use of scripts, build chain scripts can automatically string up a series of routine operation steps from preparing the environment to starting all chain nodes, automatically handling various possible small exceptions, making the whole process seem flowing。 -- 5. Optimize the dependency library address, network speed, etc., greatly reduce the user's waiting consumption, talk and laugh, the chain has been set up。 +- 1. Provide statically compiled binary distribution packages, compatible with multiple operating systems, allowing users to skip the lengthy compilation phase。 +- 2. Simplify the configuration: make maximum use of default parameters that guarantee the highest success rate and minimize the information users need to care about; users only need to attend to a small amount of network configuration。 +- 3. Standardize the directory structure: whether it is a one-click chain, an enterprise-level chain, or a manually built chain, the goal is the same, reducing the complexity of differences。 +- 4. Use scripts cleverly: the build_chain script automatically strings together the routine steps from preparing the environment to starting all chain nodes, and automatically handles various possible small exceptions, making the whole process feel smooth。 +- 5. Optimize dependency library addresses, network speed, etc., greatly reducing the user's waiting time; amid talk and laughter, the chain is already set up。 -For students and production environments who want to learn more by building manually, we recommend that you use the enterprise deployment tool generator provided by us.。 -In terms of performance and ease of use, it is worthwhile to make more efforts, we will continue to work hard to optimize, and community participants are very welcome to make optimization suggestions and bugs.。 +For students who want to learn more by building manually, and for production environments, we recommend using the enterprise deployment tool generator we provide。 +In terms of performance and ease of use, more effort is worthwhile; we will continue to optimize, and community participants are very welcome to submit optimization suggestions and bug reports。 ------ diff --git a/3.x/en/docs/articles/4_tools/43_console/console_details.md b/3.x/en/docs/articles/4_tools/43_console/console_details.md index a1b5a7835..931fcad83 100644 --- a/3.x/en/docs/articles/4_tools/43_console/console_details.md +++ b/3.x/en/docs/articles/4_tools/43_console/console_details.md @@ -2,45 +2,45 @@ Author : Liao Feiqiang | FISCO BCOS Core Developer -We are already familiar with the command line terminal in the Linux system, and we can use Linux smoothly through the shell terminal.。Similarly, in the FISCO BCOS consortium chain, there are such command-line terminals, called consoles。The console is a booster for developers to explore the blockchain world, providing a variety of features that help cross the mountains between blockchain entry and mastery, bringing an "out-of-the-box" smooth experience.。 +We are already familiar with the command-line terminal on Linux and can use Linux smoothly through a shell terminal。Similarly, the FISCO BCOS consortium chain has such a command-line terminal, called the console。The console is
a booster for developers to explore the blockchain world, providing a variety of features that help them cross the mountains between blockchain entry and mastery and bringing an "out-of-the-box" smooth experience。 ## Why: Why a console? ### The experience environment determines whether users go from "getting started" to giving up or to proficiency -When learning a new technology or product, in addition to reading documents, it is also important to try to get started and get a first-hand experience.。If the experience environment is complex to configure and cumbersome to operate, it is likely to cause users to move from "getting started to giving up.";The experience environment is simple, easy to configure, and feature-rich, which will quickly open the door to a new world for users, accelerating users from entry to mastery.。 +When learning a new technology or product, in addition to reading documents, it is also important to get hands-on and gain first-hand experience。If the experience environment is complex to configure and cumbersome to operate, users are likely to go from "getting started" to giving up;if it is simple, easy to configure, and feature-rich, it will quickly open the door to a new world and accelerate users from entry to mastery。 What kind of form should carry this fast and friendly experience?If we have a console where you just enter a single command or line of code and press Enter, the result is displayed right in front of you。This "out-of-the-box" effect is exactly the experience we expect。 ### Can the console give FISCO BCOS a speedy experience?
-FISCO BCOS version 1.3 actually has the function of fast experience, which consists of two parts, namely ethconsole and Node.js tool。Among them, ethconsole can query information on the chain, including node, block, and transaction information.;The Node.js tool provides template js files for deploying and invoking contracts to assist users in deploying and invoking contracts.。 +FISCO BCOS version 1.3 actually already had quick-experience tooling, consisting of two parts: ethconsole and the Node.js tool。Among them, ethconsole can query on-chain information, including node, block and transaction information;the Node.js tool provides template js files for deploying and invoking contracts, assisting users in deploying and invoking contracts。 -But this experience is not friendly enough, that version of ethconsole can only query very limited information on the chain, and can not send transactions and manage the blockchain, the function is relatively single.;Through the Node.js tool, you need to manually write the template js file for deploying and calling the contract, the operation is more cumbersome, the experience is separated from the ethconsole, not so smooth.。 +But this experience was not friendly enough: that version of ethconsole could only query very limited on-chain information and could not send transactions or manage the blockchain, so its functionality was rather limited;with the Node.js tool, you had to manually write the template js files for deploying and calling contracts, which was cumbersome, and the experience was separate from ethconsole rather than seamless。 Therefore, when planning FISCO BCOS 2.0, we focused on the design of the FISCO BCOS 2.0 console; the goal was to build an easy-to-use, friendly, and powerful new console that gives FISCO BCOS a speedy experience。 ## What: What functions are implemented in the console?
-The realization of each function of the console comes from a simple operation, all according to the actual needs, the user's valuable functions, one by one.。 +Every console function originates from a simple principle: follow actual needs, and implement the functions valuable to users one by one。 ### Requirement 1: What is the blockchain, and where is it?Can you see it? -Realize a series of commands related to querying the blockchain, making the blockchain visible and tangible!For example, query block height, blocks, transactions, nodes, etc., and according to different parameters, provide different query methods to meet the query requirements under different conditions.。 +The console implements a series of commands for querying the blockchain, making the blockchain visible and tangible!For example, you can query the block height, blocks, transactions, nodes, etc., and different parameters provide different query methods to meet query needs under different conditions。 -It is worth noting that for transaction and transaction receipt information query commands (getTransactionByHash, getTransactionReceipt, etc.), the function of parsing detailed data by ABI definition is provided, so that the input, output and event log information of the transaction is presented in a decoded way instead of full-screen hexadecimal astronomical numbers.。 +It is worth noting that for transaction and transaction receipt query commands (getTransactionByHash, getTransactionReceipt, etc.), the console can parse the detailed data according to the ABI definition, so that a transaction's input, output and event log information is presented decoded instead of as a full screen of hexadecimal "astronomical numbers"。 ### Requirement 2: Deploying and invoking contracts is the core requirement for using a blockchain. Can the console deploy and invoke contracts directly?
-must be able to。Before the console was launched, there were two options for deploying and invoking contracts: one was to use the Node.js tool, that is, to write Node.js client deployment and invocation contracts;One is to use the Java SDK to write Java client deployment and call contracts.。Both of these are powerful, but they are not designed for the speed experience, and users need to write deployment and calling code outside of the contract.。 +Of course it can。Before the console was launched, there were two options for deploying and invoking contracts: one was the Node.js tool, that is, writing a Node.js client to deploy and call contracts;the other was using the Java SDK to write a Java client。Both are powerful, but neither is designed for a quick experience, and users have to write deployment and invocation code in addition to the contract itself。 Therefore, the effect the console achieves is: the user writes the contract, puts it in the specified path, enters a single command (deploy) in the console to complete the deployment, and then uses the call command to call the contract interface, without any additional work (such as converting the Solidity contract into Java code or writing client code for deploying and calling the contract)。 -After the deploy command deploys the contract, a contract address will be displayed. Considering that the contract address will be used in subsequent contract calls, the console will record the deployed contract address locally and provide the getDeployLog command to view the list of deployed contract addresses.。 +After the deploy command deploys the contract, a contract address will be displayed.
Considering that the contract address will be used in subsequent contract calls, the console records the deployed contract address locally and provides the getDeployLog command to view the list of deployed contract addresses。 -In addition, the FISCO BCOS blockchain provides the CNS function, the contract command service function。The contract name, version number and corresponding contract deployment address of the deployment can be recorded on the chain.;When deploying a contract, specify the contract name and version number;When calling a contract, specify the contract name and version number (if not, use the most recently deployed contract version number)。 +In addition, the FISCO BCOS blockchain provides CNS, the contract name service。The contract name, version number and corresponding deployment address can be recorded on the chain;when deploying a contract, you specify the contract name and version number;when calling a contract, you specify the contract name and version number (if the version number is omitted, the most recently deployed version is used)。 -This is a more advanced way to deploy and invoke contracts, and is the recommended way to deploy and invoke contracts.。So the console implements the deployment contract command deployByCNS with CNS and the call contract command callByCNS with CNS.。 -It is worth noting that for developers to view processing information and debug contracts, the console automatically parses contract output and event log information.。 +This is a more advanced way to deploy and invoke contracts, and it is the recommended one。So the console implements the deployByCNS command for deploying contracts with CNS and the callByCNS command for calling contracts with CNS。 +It is worth noting that, to help developers view processing information and debug contracts, the console automatically parses contract output and event log information。 ### Requirement 3: FISCO BCOS 2.0 supports multiple
groups. Can the console switch groups online? @@ -48,24 +48,24 @@ After console login, the current group number is displayed in front of its comma ### Requirement 4: Can the console manage the blockchain? -FISCO BCOS 2.0 provides node management, system parameter management, and permission management. The console provides corresponding commands for operation, making it easy for users to manage the blockchain through simple commands.。 -where the command for node management is addSealer(Add consensus node)、addObserver(Add Observation Node)、removeNode(Remove a node from a group);The command for system parameter management is setSystemConfigByKey(Setting System Parameters);Permission management has a series of commands to manage the operation permissions of related functions of the blockchain system, including the grant command beginning with grant, the revoke command beginning with revoke, and the query permission command beginning with list.。 +FISCO BCOS 2.0 provides node management, system parameter management, and permission management. The console provides corresponding commands, making it easy for users to manage the blockchain with simple commands。 +The commands for node management are addSealer (add a consensus node), addObserver (add an observer node), and removeNode (remove a node from a group);the command for system parameter management is setSystemConfigByKey (set system parameters);permission management offers a series of commands to manage operation permissions for related blockchain system functions, including grant commands beginning with grant, revoke commands beginning with revoke, and permission query commands beginning with list。 [Specific usage reference here](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/console/console.html#grantpermissionmanager) ### Requirement 5: Can I operate user tables in the blockchain without writing a CRUD contract?
-FISCO BCOS 2.0 provides distributed storage, the core of which lies in the table storage design。CRUD contract is a kind of contract writing method for table storage, the contract data is stored in the user table, the contract interface for table addition, deletion, modification and query operations.。 -In order to allow users to experience distributed storage without writing CRUD contracts, the console provides a form similar to the mysql statement, providing the create table (create), view table (desc) and table add and delete (insert, delete, update, select) commands.。 -Both table creation and addition / deletion commands send a transaction that requires the consensus of blockchain nodes, which is equivalent to writing a CRUD contract operation table.。 +FISCO BCOS 2.0 provides distributed storage, the core of which lies in the table storage design。A CRUD contract is a style of contract written on top of table storage: contract data is stored in user tables, and the contract exposes interfaces to insert, delete, update and query table records。 +To let users experience distributed storage without writing a CRUD contract, the console provides MySQL-like statements: create table (create), view table (desc), and table manipulation commands (insert, delete, update, select)。 +Both table creation and the insert / delete / update commands send a transaction requiring consensus among blockchain nodes, which is equivalent to operating the table through a CRUD contract。 ### Requirement 6: Does the console support national cryptography (State Secret, SM) transactions? 
-The console provides the State Secret switch to modify the configuration file and download the State Secret contract compiler and replacement, which becomes the State Secret console.。Therefore, when the blockchain node is in the state secret version, the console can connect to the state secret node and support the deployment and invocation of the state secret contract to send the state secret transaction.。 +The console provides a national cryptography (SM, also translated "State Secret") switch: after modifying the configuration file and replacing the contract compiler with the downloaded SM version, it becomes an SM-enabled console。Therefore, when the blockchain nodes run the SM version, the console can connect to them and supports deploying and invoking SM contracts and sending SM transactions。 ### Requirement 7: ...waiting for you to propose? -Community users are welcome to actively provide comments, suggestions and requirements (in the form of issue or WeChat community). At the same time, you can directly pull requests to the official console repository to modify and add the functions you need, and you can even fork the source code and then customize the individual or organization's console separately. The co-construction and sharing of open source communities rely on the extensive and active participation of community users.。 +Community users are welcome to actively provide comments, suggestions and requirements (as issues or via the WeChat community). You can also submit pull requests directly to the official console repository to modify it or add the functions you need, and you can even fork the source code and customize a console for your own organization. 
The co-construction and sharing of open source communities rely on the broad and active participation of community users。 ## Where: Where is the value of the console @@ -75,17 +75,17 @@ Now, as long as users start the console, they can query rich information on the [To configure and start the console, refer here](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/console/console.html#id8) -- **For developers**: Developers can use the console to deploy and debug contracts. After writing a contract, they can deploy the contract in the console and use the call command to call the verification contract logic to observe the contract running process and results.。If the business side uses Java to develop applications, you can use the contract compilation tool of the console to compile the Solidity contract into a Java client code file for the client's Java project call.。 +- **For developers**: Developers can use the console to deploy and debug contracts. After writing a contract, they can deploy it in the console and use the call command to invoke and verify the contract logic, observing the contract's execution process and results。If the business side develops applications in Java, the console's contract compilation tool can compile a Solidity contract into a Java client code file for the Java project to call。 -- **For test and operation personnel**After building a blockchain environment, you can use the console to view the status of the blockchain, operate the blockchain configuration, and test or check related blockchain functions.。 +- **For test and operation personnel**: After building a blockchain environment, you can use the console to view the blockchain's status, operate on its configuration, and test or check related blockchain functions。 -**In short, the console has become a window of FISCO BCOS's extreme experience, a powerful weapon that continues to bring real value to users.**。 +**In short, 
the console has become a window into FISCO BCOS's pursuit of the ultimate experience, a powerful weapon that keeps bringing real value to users**。 ------ #### **Note**: -1. The console is already a standard feature of the FISCO BCOS SDK. Currently, the Java version of the console (a separate console repository has been established), the Python version of the console and the Node.js console are available.。The above is mainly for the Java version of the console, other console-related functions are roughly the same.。 +1. The console is already a standard feature of the FISCO BCOS SDK. Currently, the Java console (with its own separate repository), the Python console and the Node.js console are available。The above mainly describes the Java console; the others offer roughly the same functions。 **2. Console Document List**: diff --git a/3.x/en/docs/articles/4_tools/44_sdk/javasdk_performance_improvement_8000-30000.md b/3.x/en/docs/articles/4_tools/44_sdk/javasdk_performance_improvement_8000-30000.md index 0ee9d83c3..df52f095b 100644 --- a/3.x/en/docs/articles/4_tools/44_sdk/javasdk_performance_improvement_8000-30000.md +++ b/3.x/en/docs/articles/4_tools/44_sdk/javasdk_performance_improvement_8000-30000.md @@ -4,32 +4,32 @@ Author : LI Hui-zhong | Senior Architect, FISCO BCOS ## Origin -FISCO BCOS Reaches 20,000 in China ICT Institute Trusted Blockchain Evaluation+ TPS's transaction processing capabilities, leading in similar products。The test target is the underlying platform, the purpose is to stress test the performance of the underlying platform, the main evaluation target is the underlying platform's transaction processing capacity.。 +FISCO BCOS reached a transaction processing capability of 20,000+ TPS in the CAICT (China Academy of Information and Communications Technology) Trusted Blockchain evaluation, leading among similar products。The test target was the underlying platform: the purpose was to stress-test its performance, and the main 
evaluation target is the underlying platform's transaction processing capacity。 -The transaction construction is done by the client (integrated with the SDK), which can usually be easily extended in parallel.。SDK to complete the process of transaction construction, to achieve the transaction group package, encoding, signing, sending and a series of operations, these processes are stateless, the client can be extended through multi-threaded way, a client performance bottleneck, you can add more clients to expand。 +Transaction construction is done by the client (with the SDK integrated) and can usually be scaled out easily。The SDK completes the whole transaction construction process: assembling, encoding, signing and sending. These steps are stateless, so a client can scale via multiple threads, and when one client hits a performance bottleneck, more clients can be added。 -Although the parallel expansion of the "heap machine" can solve the performance problem of the sender, the machine itself is a precious resource, further optimization of algorithm efficiency, improve resource utilization, will be of great benefit.。So, we plan to test the current performance of the JavaSDK first, mainly to evaluate the performance of the generated transaction。Generating transactions includes the process of transaction group packages, parameter codes, transaction codes, transaction signatures, etc., of which transaction signatures are the most important part, with the following test data. 
+Although scaling out by "piling on machines" can solve the sender's performance problem, machines are a precious resource: further optimizing algorithm efficiency and improving resource utilization is of great benefit。So we planned to first test the current performance of the JavaSDK, mainly evaluating transaction generation。Generating a transaction includes assembling it, encoding parameters, encoding the transaction and signing it, of which the signature is the dominant part. The test data follows -- 8 nuclear machines to test the time-consuming nature of locally generated 50W transactions -- Fully parallel: Generates per second**8498 pens**transactions, taking an average of 0.12 ms per transaction -- Fully serial: 1504 transactions per second, taking an average of 0.66 milliseconds per transaction +- An 8-core machine measures the time to generate 500,000 (50W) transactions locally +- Fully parallel: generates **8,498** transactions per second, averaging 0.12 ms per transaction +- Fully serial: generates 1,504 transactions per second, averaging 0.66 ms per transaction -Compared to C++Implementation of transaction signature, this performance is not high。Judging from past experience, there is a lot of room for optimization.!So the team began a road of performance optimization of JavaSDK.。 +Compared with the C++ implementation of transaction signing, this performance is low。Judging from past experience, there was plenty of room for optimization!So the team set out on the road of JavaSDK performance optimization。 ## PROCESS On performance optimization, the community has shared many times; you can read the following articles: - [FISCO BCOS Consensus Optimization 
Path](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485295&idx=2&sn=46cff7fcdf2e807325532941fcbc98fe&chksm=9f2ef573a8597c65d159c17298ecec02097aafeedfd0192154d9d530c9a0d0a79a22c33894a0&scene=21#wechat_redirect) -- [Synchronization of blockchain and its performance optimization method](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485283&idx=1&sn=c2028923dc7ec7d8bfa808febc57e596&chksm=9f2ef57fa8597c6911f629b324e466f7058e4ae5da06aab8484c1d7db3203496ffebd9562ecb&scene=21#wechat_redirect) -- [FISCO BCOS Trading Pool and Its Optimization Strategy](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485255&idx=1&sn=3947f289f75813c13a2f58fb00d2018e&chksm=9f2ef55ba8597c4de0a3e427f03af7b7b327a54cf440a36d38b62f520591f81463e6aca772fd&scene=21#wechat_redirect) +- [Blockchain synchronization and its performance optimization method](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485283&idx=1&sn=c2028923dc7ec7d8bfa808febc57e596&chksm=9f2ef57fa8597c6911f629b324e466f7058e4ae5da06aab8484c1d7db3203496ffebd9562ecb&scene=21#wechat_redirect) +- [FISCO BCOS trading pool and its optimization strategy](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485255&idx=1&sn=3947f289f75813c13a2f58fb00d2018e&chksm=9f2ef55ba8597c4de0a3e427f03af7b7b327a54cf440a36d38b62f520591f81463e6aca772fd&scene=21#wechat_redirect) - [FISCO BCOS Performance Optimization - Tools](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485323&idx=1&sn=d63421fa2353d0e9a1506f01516f3416&chksm=9f2ef597a8597c8134f17053236863c501fb6e4f7480cca1a37926e7cfc309f8faacadf1786a&scene=21#wechat_redirect) - [FISCO BCOS FAST AND PASSION:](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485329&idx=1&sn=4bb6c31ff10ae1ae03cd0dbeaf023a4c&chksm=9f2ef58da8597c9bcda115382624012c240ce56a5f84450d9808281b8736bd2e9b8833eab5f2&scene=21#wechat_redirect)[Performance Optimization Scheme Most Full 
Decryption](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485329&idx=1&sn=4bb6c31ff10ae1ae03cd0dbeaf023a4c&chksm=9f2ef58da8597c9bcda115382624012c240ce56a5f84450d9808281b8736bd2e9b8833eab5f2&scene=21#wechat_redirect) -- [Let the barrel have no short board, FISCO BCOS comprehensive promotion of parallel transformation](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485324&idx=1&sn=94cdd4e7944f1058ee01eadbb7b3ec98&chksm=9f2ef590a8597c86af366b6d3d69407d3be0d3d7e50455d2b229c1d69b1fdc6748999601cd05&scene=21#wechat_redirect) +- [Let the barrel have no short board, FISCO BCOS comprehensively promote the parallel transformation](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485324&idx=1&sn=94cdd4e7944f1058ee01eadbb7b3ec98&chksm=9f2ef590a8597c86af366b6d3d69407d3be0d3d7e50455d2b229c1d69b1fdc6748999601cd05&scene=21#wechat_redirect) In the process of these performance optimizations, there are two sentences that are most deeply touched: "Premature optimization is the root of all evil" and "Optimization without any evidence is the root of all evil." 
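In that data-driven spirit, the signing benchmark quoted earlier can be sanity-checked with simple arithmetic. A minimal sketch, where only the latency averages come from the article and the helper name is illustrative:

```python
# Back-of-the-envelope check of the JavaSDK signing benchmark quoted above:
# fully serial signing averaged 0.66 ms per transaction, fully parallel
# 0.12 ms per transaction on an 8-core machine.

def implied_tps(avg_latency_ms: float) -> float:
    """Transactions per second implied by an average per-transaction latency."""
    return 1000.0 / avg_latency_ms

serial_tps = implied_tps(0.66)    # ~1515 tx/s, close to the measured 1504
parallel_tps = implied_tps(0.12)  # ~8333 tx/s, close to the measured 8498

# Effective speed-up on 8 cores: well short of the ideal 8x, which already
# hints at contention somewhere in the signing path.
speedup = parallel_tps / serial_tps

print(round(serial_tps), round(parallel_tps), round(speedup, 1))
```

The latency and throughput figures agree with each other, and the sub-linear speed-up is the first quantitative hint that the parallel path was losing time to contention.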
-Optimization depends on data, and obtaining data requires effective analysis tools, so the first task of this JavaSDK performance optimization is to determine what performance analysis tools to use.。 +Optimization depends on data, and obtaining data requires effective analysis tools, so the first task of this round of JavaSDK performance optimization was to decide which profiling tools to use。 ### Tools: Java's built-in analysis tools turn out to be quite useful @@ -41,11 +41,11 @@ Running HPROF once produced the following data report: ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5723.JPG) -The data shows that the hot spot is at the very bottom of the library function.。This conclusion is not in line with expectations, indicating that the SDK itself code is not a hot spot, performance optimization is more difficult to start.。 +The data shows that the hot spots are at the very bottom, inside library functions。This did not match expectations: it indicated that the SDK's own code was not the hot spot, making the optimization harder to approach。 ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5724.JPG) -Fortunately, the good news came soon: using jvisualvm, another tool that comes with java, to better visualize the output of performance analysis data, but also intuitively show that there are hot spots in the JavaSDK itself.。 +Fortunately, good news came soon: jvisualvm, another tool bundled with Java, visualizes the profiling output better and showed intuitively that there are hot spots in the JavaSDK itself。 ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5725.JPG) @@ -54,7 +54,7 @@ Fortunately, the good news came soon: using jvisualvm, another tool that comes w ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5726.JPG) -Through the analysis of the jvisualvm tool, the biggest hot spot is in 
the generation of random numbers, which surprised me a bit.。 +Through the analysis of the jvisualvm tool, the biggest hot spot is in the generation of random numbers, which surprised me a bit。 ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5727.JPG) @@ -71,7 +71,7 @@ Group chats have been silent for a long time... then study SecureRandom to see w ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5730.JPG) -My phrase "replace SecureRandom with ThreadLocalRandom to calculate Nonce" hasn't been typed yet, and a new round of face-bashing has begun.。 +Before I could even finish typing "replace SecureRandom with ThreadLocalRandom to calculate the Nonce", a new round of face-slapping had begun。 ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5731.PNG) @@ -80,16 +80,16 @@ The discussion in this direction has opened another door for thinking about this ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5732.JPG) -The discussion makes sense, and then a closer look at the data confirms the idea at the data level.。 +The discussion made sense, and a closer look at the data then confirmed the idea at the data level。 ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5733.JPG) -Experimental tests prove that this is the right direction, the number of threads down, the hot spot disappeared。There are hot spots in random numbers because the number of concurrent threads in stress testing is too large, and too many threads preempt resources, resulting in slow random number acquisition.。 +Experiments proved this was the right direction: with the thread count reduced, the hot spot disappeared。Random number generation showed up as a hot spot because the stress test ran too many concurrent threads, and their contention for resources made acquiring random numbers slow。 ### Re-analysis: Finding shocking hot spots -After the 
first round of analysis, a "pseudo hot spot" was found, but the performance improvement was still ineffective.。The revolution has not yet succeeded, comrades still need to work hard!Reduce the number of threads and run the performance analysis again. The performance data is as follows: +After the first round of analysis, a "pseudo hot spot" had been found, but performance had still not improved。The revolution had not yet succeeded; comrades still needed to work hard!We reduced the thread count and ran the profiling again. The performance data is as follows: ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5734.JPG) @@ -101,7 +101,7 @@ Octopus (Wang Zhang) replied in seconds, and the team's passion awakened ~ for him this re ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5736.JPG) -The implementation of the signature algorithm is actually to sign first and then do the verification, and the verification is to get a value called recoveryID (the recovery principle of ECDSA will be expanded in detail in another article)。Here's why we're excited。The purpose of the recoveryID setting is to allow future users to quickly recover the public key from the signature. Without this recoveryID, four possibilities are required to recover the public key.。This approach does not reduce the actual cost at all, but only uses the "dry kun move" to transfer the cost to the signature link.。In fact, recoveryIDs are available in a faster way, as seen in the next section。[This part of the code is inherited from web3j, before no in-depth study of its implementation, the current web3j is still this implementation]。Out of the instinct of the old code farmer, he wanted to find out the boundary of the problem in the first place when he found the symptoms, so there was such an analysis and attempt. 
+The signature implementation actually signs first and then verifies, and the verification derives a value called the recoveryID (the recovery principle of ECDSA will be expanded in detail in another article)。Here is why we got excited。The recoveryID exists so that users can later recover the public key from the signature quickly; without it, four candidates must be tried to recover the public key。This approach does not reduce the actual cost at all: it merely shifts the cost over to the signing step。In fact, the recoveryID can be obtained in a faster way, as the next section shows。[This part of the code was inherited from web3j; its implementation had not been studied in depth before, and current web3j still implements it this way]。Out of a veteran programmer's instinct to pin down the boundary of a problem as soon as symptoms appear, this analysis and attempt followed ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5737.JPG) @@ -109,16 +109,16 @@ The same veteran driver, brother Octopus, quickly gave a gratifying conclusion: ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5738.JPG) -Here, the bottom of the heart!At least killing recoverFromSignature can double the performance, as for how to kill, uh...。Out of curiosity, I want to see what the performance data will be like at this time (thanks to curiosity)。 +Now we felt reassured!At least eliminating recoverFromSignature could double the performance; as for how to eliminate it, uh..。Out of curiosity, I wanted to see what the performance data would look like at this point (thanks to curiosity)。 ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5739.JPG) -Re-adjust the pose (replace it with a version that uses ThreadLocalRandom to generate nonce and reduces the number of concurrent threads to 10), and get another round to 
get the surprise data again.。 +Adjusting the approach again (switching to the version that uses ThreadLocalRandom to generate the nonce, with concurrency reduced to 10 threads) and running another round produced the surprising data again。 ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5740.JPG) -In this case, the performance distribution is more uniform, and there are no obvious hot spots.。 +In this case, the performance distribution was more uniform, with no obvious hot spots remaining。 ![](../../../../images/articles/javasdk_performance_improvement_8000-30000/IMG_5741.JPG) @@ -129,7 +129,7 @@ At one o'clock in the morning, I finally got a satisfactory result, and the mood ### Filling the pit: how the recoveryID is calculated -Due to the Java Cryptographic Algorithm Library (bc-Java) is not familiar enough, here also encountered a lot of pits, and finally through the inheritance of cryptographic algorithm library, more required parameters will be exposed back to the upper layer, finally realized the Java version of the recoveryID calculation.。 +Not being familiar enough with the Java cryptographic library (bc-java), we hit many pits here too. In the end, by subclassing the library to expose the required parameters back to the upper layer, the Java version of the recoveryID calculation was implemented。 ``` // Now we have to work backwards to figure out the recId needed to recover the signature. 
@@ -145,8 +145,8 @@ Due to the Java Cryptographic Algorithm Library (bc-Java) is not familiar enough } ``` -The recovery mechanism and recoveryID generation principle of the ECDSA algorithm will be expanded in detail in the subsequent push.。 +The recovery mechanism of ECDSA and the principle behind recoveryID generation will be covered in detail in a subsequent post。 ## Afterword -Every time to do performance optimization, is a very cool experience, not to stay up late, but never lack of passion。The code is stripped of its thread, going through the process of repeatedly discovering bottlenecks, mercilessly beating the face, regaining confidence, and finally reaching the willow.。 End with those two sentences again: premature optimization is the root of all evil, and optimization without any data support is the root of all evil.。Mutual encouragement! \ No newline at end of file +Every round of performance optimization is an exhilarating experience: not because of the late nights, but because passion is never lacking。Stripping the code down thread by thread, repeatedly discovering bottlenecks, getting slapped in the face, regaining confidence, and finally seeing the light at the end。 Let us end with those two sentences again: premature optimization is the root of all evil, and optimization without any data support is the root of all evil。Mutual encouragement! 
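As a footnote to the nonce fix described above, the core idea of replacing a shared SecureRandom with ThreadLocalRandom carries over to any language: give each thread its own generator instead of contending on one shared instance. A minimal Python analogue, purely illustrative and not FISCO BCOS SDK code:

```python
import random
import threading

# Illustrative analogue of the ThreadLocalRandom fix discussed above (not
# SDK code): each thread keeps its own RNG instance for nonce generation,
# so threads never contend on a single shared generator.
_tls = threading.local()

def thread_local_nonce(bits: int = 250) -> int:
    rng = getattr(_tls, "rng", None)
    if rng is None:
        # One independent instance per thread; no cross-thread locking.
        rng = _tls.rng = random.SystemRandom()
    return rng.getrandbits(bits)

nonces = []
collect_lock = threading.Lock()

def worker(count: int) -> None:
    vals = [thread_local_nonce() for _ in range(count)]
    with collect_lock:
        nonces.extend(vals)

threads = [threading.Thread(target=worker, args=(100,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# 250-bit nonces collide with negligible probability, so all should be unique.
assert len(nonces) == 800 and len(set(nonces)) == 800
```

The lock here only guards the shared result list; the hot path (nonce generation) takes no lock at all, which is exactly what removed the "pseudo hot spot" in the stress test.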
\ No newline at end of file diff --git a/3.x/en/docs/articles/4_tools/44_sdk/multilingual_sdk.md b/3.x/en/docs/articles/4_tools/44_sdk/multilingual_sdk.md index 8379e8bfc..e89e39df0 100644 --- a/3.x/en/docs/articles/4_tools/44_sdk/multilingual_sdk.md +++ b/3.x/en/docs/articles/4_tools/44_sdk/multilingual_sdk.md @@ -1,20 +1,20 @@ -# Although the sparrow is small, it has all five internal organs.| From Python-SDK Talk about FISCO BCOS Multilingual SDK +# Small as the sparrow is, it has all the vital organs | Talking about the FISCO BCOS multilingual SDKs through the Python-SDK Author: Zhang Kaixiang | Chief Architect, FISCO BCOS FISCO BCOS 2.0 has had its own official console since its release. After continuous use and polishing by the community, it is strong, complete and friendly。 -The community also has blockchain applications developed in various development languages. In order to meet the needs of developers to facilitate the management of blockchain nodes, Python is currently-SDK and Nodejs-The SDK is already on the shelves, and the go language version is already on the way.。 +The community also has blockchain applications developed in various development languages. 
To make it easier for developers to manage blockchain nodes, the Python-SDK and Nodejs-SDK are already available, and the Go version is on the way。 ---- -In this article, the author is most familiar with Python-Take SDK as an example, and share some details about SDK development, including the application development process, protocol coding and decoding, network communication, and security issues.。 +This article takes the Python-SDK, which I am most familiar with, as an example to share some details of SDK development, covering the application development process, protocol encoding and decoding, network communication, and security issues。 FISCO BCOS comes with a quick chain-building feature: after a five-minute one-click setup, developers only need to connect to a blockchain node, write contracts and issue transactions。 -The console and SDK are positioned to help users quickly access the blockchain, develop and test smart contracts, and implement business logic.。According to "**Occam Razor**"The design philosophy should be as light, modular, and shallow as possible, without introducing redundant features, and without creating an additional burden on users and secondary developers."。 +The console and SDK are positioned to help users quickly access the blockchain, develop and test smart contracts, and implement business logic。Following the "**Occam's Razor**" design philosophy, they should be as light, modular and shallow as possible, introducing no redundant features and placing no extra burden on users or secondary developers。 -The client console and SDK are like a well-controlled and well-configured express for developers and users to drive, relax and enjoy the road of blockchain applications.。 +The client console and SDK are like a well-tuned, well-equipped car that developers and users can drive while relaxing and enjoying the road of blockchain applications。 ## Console Experience @@ -24,13 +24,13 @@ First, combine the whole 
process from preparing the environment to calling the c #### 1. Prepare the environment -Before you begin, please**Read through user manuals and development documentation (very important!link at the end of the article)**According to the documentation, step by step initializes the environment, installs the dependency library, and currently Python-SDK supports Linux / Mac / Windows operating systems。 +Before you begin, please **read through the user manual and development documentation (very important! link at the end of the article)**. Following the documentation, initialize the environment and install the dependency libraries step by step. Currently, the Python-SDK supports Linux / macOS / Windows。 To connect to a blockchain node, you need to modify the local configuration file and enter the corresponding network port of the blockchain node. If you select the Channel protocol, you need to configure the corresponding client certificate。 #### 2. Online Experience -After configuring the network, you can run the get series of commands in the console。Try the feel and get in touch with FISCO BCOS。Confirm that the chain is working properly. Common commands include getNodeVersion, getBlockNumber, getPeers, and getSyncStatus. You can use the console usage or help command to learn about all supported commands.。 +After configuring the network, you can run the get series of commands in the console to get a feel for FISCO BCOS and confirm that the chain is working properly。Common commands include getNodeVersion, getBlockNumber, getPeers and getSyncStatus; use the console's usage or help command to see all supported commands。 #### 3. 
Create an account @@ -38,27 +38,27 @@ Create a new account, a public-private key pair that represents your identity, a The account-related commands provided by the console are **newaccount, showaccount** (parameters are account name and password)。If you want to use the new account you just created to sign the transaction, remember to configure it to the appropriate location in the client_config.py file。 -In addition, if the account information requires a high level of protection, secondary development is possible。Put it into secure areas such as encryption machines and TEEs, and develop solutions such as key fragmentation and mnemonics.。 +In addition, if the account information requires a high level of protection, secondary development is possible: put it into secure areas such as hardware encryption devices and TEEs, or develop solutions such as key sharding and mnemonics。 #### 4. Preparation of contracts -Write a smart contract, or modify and customize it according to the smart contract example in the SDK to implement your own business logic.。This article focuses on the Solidity smart contract, FISCO BCOS also has a "pre-compiled contract," using C.++Development, need to compile jointly with FISCO BCOS underlying code。 +Write a smart contract, or modify and customize the smart contract examples in the SDK to implement your own business logic。This article focuses on Solidity smart contracts; FISCO BCOS also has "precompiled contracts," which are developed in C++ and must be compiled together with the FISCO BCOS underlying code。 #### 5. Compile deployment -Compile the contract to obtain the ABI interface file and BIN binary file of the contract.。Python-The bcos _ solc.py file in the SDK can help developers simplify compiler configuration and invocation. 
At the same time, as long as the contract path and compiler path information are correctly configured, run the console deployment or call contract interface instructions directly, and try to compile the contract automatically.。 +Compile the contract to obtain its ABI interface file and BIN binary file。The bcos_solc.py file in the Python-SDK helps developers simplify compiler configuration and invocation. Moreover, as long as the contract path and compiler path are correctly configured, running a console deployment or contract-call instruction directly will attempt to compile the contract automatically。 -If the contract is deployed independently, you can use the deploy command of the console, and the new contract address will be obtained after the deployment command is successful.。The reference command is. / console.py deploy SimpleInfo save, where SimpleInfo is the contract name (no suffix required), and the final"**save**"is optional, if you specify"**save**"the new contract address is recorded in a local file for subsequent use.。 +To deploy a contract on its own, use the console's deploy command; the new contract address is returned once the command succeeds。A reference command is ./console.py deploy SimpleInfo save, where SimpleInfo is the contract name (no suffix required) and the final "**save**" is optional: if specified, the new contract address is recorded in a local file for later use。 #### 6. Call the contract: -Use the call or sendtx command to specify the contract name, contract address, method name, and corresponding parameters to call the on-chain contract.。 +Use the call or sendtx command, specifying the contract name, contract address, method name and corresponding parameters, to invoke the on-chain contract。 Refer to the command. 
./console.py sendtx SimpleInfo last setbalance 100, that is, select the SimpleInfo contract, point to the address of its most recent successful deployment (referring to it with "**last**" saves the tedious work of copying and pasting the contract address), call the setbalance interface, and pass in the parameter 100。 -After the transaction consensus is completed, the console will automatically print the method return code and transaction Event log information list in the transaction receipt for the user to view.。 +After the transaction consensus is completed, the console automatically prints the method return code and the list of transaction Event logs from the transaction receipt for the user to view。 -If everything is normal, you can basically follow the path of blockchain application here.。 +If everything works, you have by this point basically walked the whole path of a blockchain application。 It is worth mentioning that all the language versions of the FISCO BCOS console support Tab-key command hints and auto-completion, helping users operate smoothly and without errors and improving the user experience。 @@ -66,7 +66,7 @@ Still further, if you want a rich, visually interactive page experience, use the ## Learn More (Dive Deeper) -The module combination of the whole SDK is as follows, it can be said that although the sparrow is small, the five internal organs are complete.。 +The modules of the whole SDK are combined as follows; small as it is, it has all the vital parts。 ![](../../../../images/articles/multilingual_sdk/IMG_4957.PNG) @@ -76,60 +76,60 @@ Supporting interactive modules such as the console is a fully encapsulated, out- #### 1. 
get series -Many "get" beginning of the interface, used to obtain a variety of information on the chain, including blocks, transactions, receipts, status, system information, and so on.。Although dozens of get interfaces, but its implementation logic is basically the same, are specified command words and parameter lists, requests and processing responses, the implementation is also very fast.。 +The many interfaces whose names begin with "get" are used to obtain all kinds of on-chain information, including blocks, transactions, receipts, status, system information, and so on。Although there are dozens of get interfaces, their implementation logic is basically the same: specify the command word and parameter list, send the request, and process the response, so they are also very quick to implement。 #### 2. call -Constant method corresponding to contract。The so-called constant method means that the corresponding code in the contract does not modify the state, the request will not be broadcast across the network, only run on the specified node.。 +Corresponds to the constant methods of a contract。A so-called constant method is one whose contract code does not modify state; the request is not broadcast across the network and runs only on the specified node。 #### 3. 
sendRawTransaction -Build a transaction, sign it with the account's private key, send it to the chain, this transaction is broadcast, consensus processing is performed, and the generated status data is confirmed by the entire network。Deploying a new contract is actually a transaction, but you don't need to specify the target contract address.。 +Builds a transaction, signs it with the account's private key and sends it to the chain; the transaction is broadcast, goes through consensus processing, and the state data it generates is confirmed by the entire network。Deploying a new contract is actually also a transaction, except that you do not need to specify a target contract address。 A related interface is **sendRawTransactionGetReceipt**; the name is long, but it simply adds, on the basis of **sendRawTransaction**, the step of fetching the receipt, simplifying the closed loop from sending a transaction to obtaining its receipt。 #### 4. More -The API for FISCO BCOS's global system configuration, node management, CNS, permissions and other system-level functions, which is based on the system contract on the read-write chain, see the end of the article for a detailed list of instructions.。 -Developers can refer to the console and client / bcosclient.py and other code for secondary development to achieve more cool features.。In addition, a series of development libraries and gadgets are built into the SDK to help manage accounts, output logs, unified exception handling, and simple performance and time-consuming statistics.。 +APIs for FISCO BCOS system-level functions such as global system configuration, node management, CNS and permissions, based on reading and writing the system contracts on the chain; see the end of the article for a detailed list of commands。 +Developers can refer to the console, client/bcosclient.py and other code for secondary development to achieve more cool features。In addition, a series of development libraries and gadgets are built into the SDK to help manage 
accounts, output logs, handle exceptions uniformly, and gather simple performance and timing statistics。 ## Contract development related -Around contract development, Python-The SDK implements contract compilation and deployment, contract address localization management, ABI interface file management, and supports automatic code generation (see codegen.py). A command line can generate code for direct use by the business side, such as +Around contract development, the Python-SDK implements contract compilation and deployment, local management of contract addresses, and ABI interface file management, and supports automatic code generation (see codegen.py): a single command line can generate code for direct use by the business side, such as python codegen.py contracts/SimpleInfo.abi。 -The ABI file compiled by the solidity contract is a good thing.。The full name of ABI is**Application Binary Interface**(application binary interface), which details the contract interface information, including method name, parameter list and type, method type (constant method, or transaction method), and Event log format definition, etc.。 +The ABI file produced by compiling a Solidity contract is a fine thing。ABI stands for **Application Binary Interface**; the file details the contract's interface information, including method names, parameter lists and types, method kinds (constant method or transaction method), Event log format definitions, and so on。 -For ABI management, see client / datatype _ parser.py, load and parse ABI files (default is JSON format), according to the method name, method 4-byte signature, method type and other dimensions, flexible query method list and method definition, and for method definition, input data encoding and decoding, parsing transaction return values, Event logs, etc.。 +For ABI management, see client/datatype_parser.py: it loads and parses ABI files (JSON format by default); by method name, 4-byte method signature, method type and 
other dimensions, it can flexibly query the method list and method definitions, and, for a given method definition, encode and decode input data and parse transaction return values, Event logs and so on。 -With the ABI definition in hand, the manipulation of the contract is simply arbitrary, developers read the ABI description, basically can fully understand the input and output of a contract, and the contract without obstacles to dialogue, this "**Programming for Remote Interface**"The idea is very similar to classic software design such as WSDL, IDL, ACE, ProtoBuffer, and gRPC.。 +With the ABI definition in hand, working with a contract becomes effortless: by reading the ABI description, developers can fully understand a contract's inputs and outputs and talk to it without obstacles。This idea of "**Programming for Remote Interface**" is very similar to classic software designs such as WSDL, IDL, ACE, ProtoBuffer, and gRPC。 -In fact, the most cumbersome part of the entire SDK is the ABI codec. In order to be compatible with EVM, FISCO BCOS uses ABI encoding for transaction processing and is compatible with the RLP protocol.。 +In fact, the most cumbersome part of the entire SDK is the ABI codec: to stay compatible with the EVM, FISCO BCOS uses ABI encoding for transaction processing and is compatible with the RLP protocol。 -ABI, RLP has established strict specifications, the basic data types, arrays and variable-length data, function methods, parameter lists, etc. 
have specific encoding and decoding methods; otherwise components cannot communicate with each other, data cannot be parsed, and the virtual machine "does not know" the incoming transactions and cannot execute the contract。 -If you write the codec here by yourself, it will take a lot of time, even if you are proficient, to ensure that the test passes and the version is compatible, fortunately, there is already eth on github.-abi、eth-utils, rlp and a series of open source projects (mostly MIT loose license agreement), can be introduced into these projects and revised according to specific needs (retain the original author's statement and copyright open source license), can save a lot of work, to thank the authors of these projects, open source is cool.! +If you hand-write the codec here, it will take a lot of time even if you are proficient, and you still have to make sure the tests pass and versions stay compatible。Fortunately, GitHub already has eth-abi, eth-utils, rlp and a series of open-source projects (mostly under the permissive MIT license); you can introduce them and revise them according to your specific needs (retaining the original authors' statements and open-source copyright licenses), which saves a lot of work。Thanks to the authors of these projects: open source is cool!
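The strict word layout that the ABI specification prescribes for static types can be sketched in a few lines of plain Python. This is only an illustrative sketch: the helper names are hypothetical, dynamic types, arrays and the Keccak-256 method selector are omitted, and a real SDK delegates all of this to the eth-abi style libraries mentioned above.

```python
# Illustrative sketch of ABI static-type encoding (not the SDK's real codec).
# Helper names are hypothetical; real SDKs use eth-abi / eth-utils, which also
# handle dynamic types, arrays and the Keccak-256 method selector.

def encode_uint256(value: int) -> bytes:
    # uint256 is encoded big-endian, left-padded to one 32-byte word
    return value.to_bytes(32, "big")

def encode_address(addr: str) -> bytes:
    # a 20-byte address is left-padded with 12 zero bytes to fill a word
    raw = bytes.fromhex(addr[2:] if addr.startswith("0x") else addr)
    assert len(raw) == 20
    return raw.rjust(32, b"\x00")

# Static arguments are simply concatenated, one 32-byte word each, in order:
args = encode_uint256(100) + encode_address(
    "0x5fc8d32690cc91d4c39d9d3abcbd16989f875707"
)
print(len(args))  # two 32-byte words -> 64
```

The point of the fixed word size is exactly what the paragraph above describes: both sides can decode without any out-of-band framing, so a component that deviates from the layout simply cannot talk to the others.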
## Transaction data structure related -In addition to the basic codec, you also need to implement the FISCO BCOS transaction structure, focusing on the randomid and blocklimit fields added to support parallel processing transactions, the fiscoChainId and groupId fields added to support group features, and the transaction output added to the transaction receipt.。 +In addition to the basic codec, you also need to implement the FISCO BCOS transaction structure, focusing on the randomid and blocklimit fields added to support parallel transaction processing, the fiscoChainId and groupId fields added to support the group feature, and the transaction output added to the transaction receipt。 -The blocklimit of a transaction is defined as "the transaction lifecycle, the block height of the transaction to be processed at the latest," and the SDK needs to periodically query the current block height on the chain to determine the lifecycle of the current transaction (for example, this transaction is allowed to be processed within the next 100 blocks).。 +The blocklimit of a transaction is defined as "the transaction's lifecycle: the latest block height at which the transaction may still be processed"; the SDK periodically queries the current block height on the chain to determine the lifecycle of the current transaction (for example, that it is allowed to be processed within the next 100 blocks)。 For developers, a clear understanding of the transaction input (tx.input), transaction receipt (tx.receipt) and transaction output (tx.output) is very important。 ![](../../../../images/articles/multilingual_sdk/IMG_4958.JPG) -When a transaction calls a method in a contract, it first combines the method name with a list of parameter types, such as**'set(string,uint256,address)'**Keccak this paragraph of text-256 (SHA-3)Compute and intercept the first 4 bytes as the "method signature" (signature), and then perform ABI encoding on the incoming parameters in turn according to the type 
definition, and"Method signature"Splicing a string of binary data as input data for the transaction。 +When a transaction calls a method in a contract, it first combines the method name with the list of parameter types, for example **'set(string,uint256,address)'**, computes the Keccak-256 (SHA-3) hash of this text and takes the first 4 bytes as the "method signature"; it then ABI-encodes the incoming parameters in turn according to their type definitions and splices them after the method signature, producing a string of binary data that serves as the transaction's input data。 -And other fields of the transaction structure (from, to, groupid, randomid, etc.) together with RLP encoding, and signed with the account private key, to obtain a piece of binary request data, sent by sendRawTransaction to the node, after the node receives, immediately return the transaction Hash to the client.。 +This input data, together with the other fields of the transaction structure (from, to, groupid, randomid, etc.), is RLP-encoded and signed with the account's private key to obtain a piece of binary request data, which is sent to the node via sendRawTransaction; as soon as the node receives it, it returns the transaction hash to the client。 -The transaction is confirmed by the network consensus on the chain, and after the processing is completed, the detailed results of the transaction processing can be obtained through the getTransactionReceipt interface (the transaction Hash obtained before the incoming).。 +The transaction is confirmed by network consensus on the chain; after processing completes, the detailed results can be obtained through the getTransactionReceipt interface (passing in the transaction hash obtained earlier)。 ![](../../../../images/articles/multilingual_sdk/IMG_4959.JPG) -The following fields are particularly critical in transaction receipts. +The following fields are particularly critical in transaction receipts: #### 1. 
contractAddress @@ -137,25 +137,25 @@ Valid only when contract transactions are deployed, indicating the address of th #### 2. output -The return value of the corresponding method, which can be used to determine the final result of the business logic (depending on how the contract is written).。 +The return value of the corresponding method, which can be used to determine the final result of the business logic (depending on how the contract is written)。 #### 3. Logs -If you write some Event logs in the contract code, you can decode the detailed information in the logs field of receipt.。Event logs can be used to help clients monitor and track transaction processing results, and can even help developers debug the contract execution process, which is equivalent to typing debug logs in the contract.。Of course, when the contract is officially released, the debugged Event log should be cleared and only the necessary logs should be retained to avoid redundant information being stored on the chain.。 +If you write some Event logs in the contract code, you can decode their details from the logs field of the receipt。Event logs help clients monitor and track transaction processing results, and can even help developers debug the contract execution process, much like printing debug logs inside the contract。Of course, when the contract is officially released, debugging Event logs should be removed and only the necessary logs retained, to avoid storing redundant information on the chain。 -Python-The SDK client has built-in methods for parsing fields such as "method signature" (find the corresponding method definition based on the 4-byte signature), transaction input / output, and receipt.logs.。 +The Python-SDK client has built-in methods for parsing fields such as the "method signature" (finding the corresponding method definition from the 4-byte signature), transaction input/output, and receipt.logs。 When using the console command 
line, as long as the contract name is attached to the command line (provided the user knows which contract the transaction calls), the relevant data can also be parsed automatically, for example: ./console.py getTransactionReceipt 0x79b98dbb56d2eea289f756e212d5b6e5c08960beaaaempa33c。 -This thoughtful little design can help developers intuitively explore the context of blockchain transactions, be clear about all kinds of information at a glance, and not get lost in the sea of hexadecimal characters like gobbledygook.。 +This thoughtful little design helps developers intuitively explore the context of blockchain transactions, take in all kinds of information at a glance, and avoid getting lost in a sea of hexadecimal gobbledygook。 ## Network protocols -Finally, let's talk about the two network protocols of FISCO BCOS: JSON RPC and Channel Long Connect.。 +Finally, let's talk about the two network protocols of FISCO BCOS: JSON RPC and the Channel long connection。 ![](../../../../images/articles/multilingual_sdk/IMG_4960.PNG) -JSON RPC connections do not have certificate verification and communication encryption. It is recommended to use them in a secure and trusted environment, such as a local machine or an intranet. It is generally used for O & M management and statistical analysis.。 +JSON RPC connections have no certificate verification or communication encryption, so they are recommended only in secure, trusted environments such as the local machine or an intranet; they are generally used for O&M management and statistical analysis。 The format of JSON RPC is quite simple and universal. 
Various language libraries have built-in JSON codecs and HTTP request implementations, which generally spare you from developing them yourself; you can even use curl, telnet and other tools to send and receive requests, for example: @@ -166,9 +166,9 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"getBlockNumber","params":[1],"id { "id": 1, "jsonrpc": "2.0", "result": "0x1" } ``` -Channel protocol is a unique protocol of FISCO BCOS. Channel protocol is characterized by security and efficiency, supports two-way real-time communication, and can be used for remote calls and even public network communication.。 +The Channel protocol is unique to FISCO BCOS. It is secure and efficient, supports two-way real-time communication, and can be used for remote calls and even public-network communication。 -If you use the channel long connection method, you need to obtain the SDK certificate from the blockchain node and place it in the corresponding path of the SDK project.。 +If you use the Channel long-connection method, you need to obtain the SDK certificate from the blockchain node and place it in the corresponding path of the SDK project。 The packet format of this protocol is shown below; it is an extended implementation in TLV (Tag/Length/Value) style: @@ -187,24 +187,24 @@ The packet format of this protocol is shown below and is a TLV(Tag/Length/Value) - 2. Long connections are maintained with heartbeat packets, which need to be initiated regularly。 -- 3. Data is encoded into streaming data transmission by packet, so when sending and receiving data, it is necessary to continuously obtain data from the socket stream, and judge whether the length is legal, whether the data is received, and whether it can be correctly parsed according to the format of the data packet.。 +- 3. 
Data is encoded into packets and transmitted as a stream, so when sending and receiving, you need to keep reading data from the socket stream and, according to the packet format, determine whether the length is legal, whether the data has been fully received, and whether it can be parsed correctly; "partially received" data must be retained in the receive buffer and parsed once it is complete, not discarded, otherwise it may lead to data loss or parse failures。 ![](../../../../images/articles/multilingual_sdk/IMG_4962.JPG) -- 4. The Channel protocol supports two-way communication. The SDK can actively request nodes, and nodes may also push messages to the SDK, such as blockchain system notifications, data update notifications, and AMOP cross-agency messages.。 +- 4. The Channel protocol supports two-way communication: the SDK can actively send requests to nodes, and nodes may also push messages to the SDK, such as blockchain system notifications, data update notifications, and AMOP cross-agency messages。 -- 5. 
Design asynchronous, queued, callback message processing mechanism, according to the message sequence number, instruction type, status code and other dimensions, correctly handle the message。Python-The SDK uses multithreading and the Promise library to process messages as quickly and elegantly as possible.。 +- 5. An asynchronous, queued, callback-based message processing mechanism handles messages correctly according to message sequence number, instruction type, status code and other dimensions。The Python-SDK uses multithreading and a Promise library to process messages as quickly and elegantly as possible。 -Developers with some experience in socke streaming data programming will not be difficult to understand this protocol and implement it。For channel protocol implementation, see client / channelpack.py for packet parsing and client / channelhandler.py for communication and data sending and receiving.。 +Developers with some experience in socket stream programming will find this protocol easy to understand and implement。For the Channel protocol implementation, see client/channelpack.py for packet parsing and client/channelhandler.py for communication and data transfer。 ## SUMMARY -Python-The development of the SDK began in mid-June this year, and it took only a week to write the first available version, and then to carve out the details of user interaction, as well as code optimization, documentation improvement, and multiple rounds of testing to ensure quality, the rest of the team to implement the Nodejs version of the SDK in about the same time.。 -In general, with some basic code references, developing a FISCO BCOS language-specific SDK is still quite agile and freehand, and it's not difficult at all, Just for fun.。 -During the development and iteration of the SDK in various languages, the FISCO BCOS team and community developers have been communicating, incorporating high-quality pull requests and continuously optimizing the experience.。 +Python-SDK development began in mid-June this year: it took only a week to write the first usable version, followed by polishing the user-interaction details, optimizing the code, improving the documentation and running multiple rounds of testing to ensure quality; the rest of the team went on to 
implement the Node.js version of the SDK in about the same time。 +In general, with some basic code to refer to, developing a FISCO BCOS SDK for a particular language is an agile, almost freehand exercise, and not difficult at all. Just for fun。 +During the development and iteration of the SDKs in various languages, the FISCO BCOS team and community developers have kept communicating, incorporating high-quality pull requests and continuously optimizing the experience。 -Community developers are welcome to continue to improve the existing SDK or contribute FISCO BCOS SDK in more languages according to the actual situation of their usage scenarios, so as to help more developers walk on the blockchain smoothly.。 -Finally, I would like to thank Jago, Mr. An, Xiaobai, wheat and other students, as well as many community developers for their interest in Python.-Important contribution of SDK。 +Community developers are welcome to keep improving the existing SDKs, or to contribute FISCO BCOS SDKs in more languages based on their own usage scenarios, helping more developers walk the blockchain road smoothly。 +Finally, I would like to thank Jago, Mr. 
An, Xiaobai, wheat and others, as well as many community developers, for their important contributions to the Python-SDK。 ------ diff --git a/3.x/en/docs/articles/4_tools/44_sdk/node.js_sdk_quick_start.md b/3.x/en/docs/articles/4_tools/44_sdk/node.js_sdk_quick_start.md index 8a57e6434..4111f4822 100644 --- a/3.x/en/docs/articles/4_tools/44_sdk/node.js_sdk_quick_start.md +++ b/3.x/en/docs/articles/4_tools/44_sdk/node.js_sdk_quick_start.md @@ -2,7 +2,7 @@ Author: Li Chen Xi | FISCO BCOS Core Developer -SDK generally refers to a collection of APIs, development aids and documents provided to facilitate developers to develop applications for the system.。In the face of different developer groups, FISCO BCOS has successively launched a Java SDK for enterprise application developers (rich and stable), and a Python SDK for individual developers (quick and lightweight).。In the actual promotion process, we found that some developers are accustomed to using JavaScript to build the front end of the application, if there is an SDK that supports the use of JavaScript for back-end development, then the language barrier between the front and back end can be further broken - developers only need to understand JavaScript, you can complete the entire FISCO BCOS application front and back end development.。In this context, the FISCO BCOS Node.js SDK was born。This article will introduce the design and use of Node.js SDK。 +An SDK generally refers to the collection of APIs, development aids and documentation provided to help developers build applications on a system。For different developer groups, FISCO BCOS has successively launched a Java SDK for enterprise application developers (rich and stable) and a Python SDK for individual developers (quick and lightweight)。During promotion we found that some developers are used to building application front ends in JavaScript; if an SDK supported JavaScript for back-end 
development, the language barrier between front end and back end could be broken down further: developers would only need to know JavaScript to complete both the front-end and back-end development of an entire FISCO BCOS application。In this context, the FISCO BCOS Node.js SDK was born。This article introduces the design and use of the Node.js SDK。 ## I. Design of SDK @@ -12,9 +12,9 @@ The Node.js SDK follows the hierarchical design principle, with clear boundaries Bottom-up in the figure: -- **Base Layer**: Provide basic functions such as network communication, transaction construction and signature, contract compilation, etc.; -- **API layer**: Based on the functions provided by the base layer, the common functions of FISCO BCOS are further encapsulated, exposing the API used to call the upper application layer.。These APIs cover basic JSON RPC functionality (query chain status, deployment contracts, etc.) and precompiled contract functionality (rights management, CNS services, etc.); -- **application layer**Applications that use the Node.js SDK for secondary development belong to this layer, such as the CLI (Command-Line Interface) chain management tools。 +- **Base Layer**: Provides basic functions such as network communication, transaction construction and signing, and contract compilation; +- **API layer**: Based on the functions provided by the base layer, the common functions of FISCO BCOS are further encapsulated, exposing APIs for the upper application layer to call。These APIs cover basic JSON RPC functionality (querying chain status, deploying contracts, etc.) 
and precompiled contract functionality (rights management, CNS services, etc.); +- **application layer**: Applications that use the Node.js SDK for secondary development belong to this layer, such as the CLI (Command-Line Interface) chain management tool that comes with the Node.js SDK。 ## II. Installation of the SDK @@ -23,15 +23,15 @@ Bottom-up in the figure: The Node.js SDK depends on the following software: - Node.js version 8.10.0 or above, NPM version 5.6.0 or above; -- Python2、g++and make。Solidity compiler solc needs to be compiled before it can be used, and the basic software required for compilation needs to be provided by the user, where Python2 is used to run the build tool node-gyp,g++and make is used to compile solc。for no g++and make windows users, can install windows-build-Tools to build again。 +- Python2, g++ and make。The Solidity compiler solc must be compiled before it can be used, and the user must provide the basic software required for this compilation: Python2 runs the build tool node-gyp, while g++ and make compile solc。Windows users without g++ and make can install windows-build-tools and then build。 ### 2.2 Installing the SDK -- From https://github.com/FISCO-BCOS/nodejs-sdk download sdk; +- Download the SDK from https://github.com/FISCO-BCOS/nodejs-sdk; -- Enter nodejs-sdk directory; +- Enter the nodejs-sdk directory; -- Execute npm i。The Node.js SDK uses lerna to manage dependencies. 
This step is used to install lerna; - Execute npm repoclean。You do not need to execute this command during the initial installation, but if the installation is interrupted halfway, it is recommended to execute this command first to clear all dependencies; @@ -42,21 +42,21 @@ After all the above commands are executed, the directory structure of the API la ![](../../../../images/articles/node.js_sdk_quick_start/IMG_5710.PNG) -For application developers, you can focus on nodejs-sdk / packages / api / web3j and nodejs-sdk / packages / api / precompiled directory. All APIs provided by the SDK are located in the modules in these two directories. You can import the corresponding modules through the require statement during application development.。 +Application developers can focus on the nodejs-sdk/packages/api/web3j and nodejs-sdk/packages/api/precompiled directories: all APIs provided by the SDK live in the modules under these two directories, and you can import the corresponding modules with require statements during application development。 ## III. Using the SDK for Development ### 3.1 Initialization -Before using all the features of the Node.js SDK, you must configure the SDK. The configuration is provided as a .json configuration file.。It mainly includes the following configuration items: +Before using any feature of the Node.js SDK, you must configure it; the configuration is provided as a .json file。It mainly includes the following configuration items: -- **privateKey**FISCO BCOS is based on a public-private key system. Each account (public key) has a corresponding private key. The SDK needs to use this private key to sign transactions.; +- **privateKey**: FISCO BCOS is based on a public-private key system; each account (public key) has a corresponding private key, which the SDK uses to sign transactions; - **nodes**: There can be multiple nodes connected to the SDK. 
When the number of nodes is greater than 1, each SDK request randomly picks one node from nodes to send to; -- **authentication**: The SDK uses the Channel protocol to communicate with the node, and the Channel uses the SSL secure transmission protocol to transmit data. Before the two parties establish communication, necessary authentication is required。Therefore, this configuration item needs to indicate the path of the SDK private key file, certificate file, and CA root certificate file, which are usually automatically generated at the stage of the generation chain.; -- **groupID**FISCO BCOS uses a multi-group architecture. The same node can belong to multiple groups. Therefore, you need to specify the group ID of the chain that the SDK needs to connect to.; -- **timeout**Due to the network environment, the SDK request may time out, which may cause the program calling the SDK interface to fall into endless waiting. Therefore, you need to specify a timeout period.。 +- **authentication**: The SDK communicates with nodes over the Channel protocol, and the Channel transmits data over the SSL secure transport protocol; before the two parties establish communication, the necessary authentication is required。This configuration item therefore indicates the paths of the SDK private key file, certificate file, and CA root certificate file, which are usually generated automatically when the chain is generated; +- **groupID**: FISCO BCOS uses a multi-group architecture and the same node can belong to multiple groups, so you need to specify the group ID of the chain the SDK should connect to; +- **timeout**: Due to the network environment, an SDK request may time out, which could leave the program calling the SDK interface waiting forever; therefore you need to specify a timeout period。 -Developers need to load the configuration file into the Configuration object during the initialization phase. 
The Configuration object is globally unique and shared by all modules.。
+Developers need to load the configuration file into the Configuration object during the initialization phase. The Configuration object is globally unique and shared by all modules.
### 3.2 Call example
@@ -84,13 +84,13 @@ web3jService.getBlockNumber().then(blockNumber => {
});
```
-Note that the above code uses the Promise.protoype.then method。Promise, as its name suggests, encapsulates an asynchronous operation, and "promises" that after the operation is over, the callback function specified by the user in then or catch will be called.。
+Note that the above code uses the Promise.prototype.then method. A Promise, as its name suggests, encapsulates an asynchronous operation and "promises" that after the operation finishes, the callback function the user specified in then or catch will be called.
-Because Node.js naturally supports asynchronous features, the concept of Promise exists everywhere in the Node.js SDK (careful readers may have noticed that the module responsible for channel communication in the Node.js SDK base layer is called channelPromise)。The API calling convention of the Node.js SDK is that all API calls return the Promise object, and developers need to use the await or then... catch... method to get the call result.。Therefore, developers need to be careful when calling the API. If the return value of the API is directly used, it will easily lead to bugs.。
+Because Node.js naturally supports asynchronous features, the concept of Promise is everywhere in the Node.js SDK (careful readers may have noticed that the module responsible for channel communication in the Node.js SDK base layer is called channelPromise). The API calling convention of the Node.js SDK is that every API call returns a Promise object, and developers need to use await or then...catch... to get the call result. Therefore, developers need to be careful when calling the API.
If the return value of the API is used directly, it can easily lead to bugs.
## IV. Using CLI Tools
-In addition to the API, the Node.js SDK also provides a small CLI tool for users to operate the chain directly from the command line. The CLI tool is also an example that shows how to use the Node.js SDK for secondary development.。The CLI tool is located in the packages / cli directory. If you need to use the CLI tool, you need to enter this directory and execute the. / cli.js script. You also need to configure the CLI tool before using it. The configuration file is located in the packages / cli / conf / config.json file.。Several examples of use are given below:
+In addition to the API, the Node.js SDK also provides a small CLI tool that lets users operate the chain directly from the command line. The CLI tool is also an example of how to use the Node.js SDK for secondary development. The CLI tool is located in the packages/cli directory; to use it, enter this directory and execute the ./cli.js script. You also need to configure the CLI tool before using it; the configuration file is packages/cli/conf/config.json. Several examples of use are given below:
1. View the version of the connected node:
@@ -100,7 +100,7 @@ In addition to the API, the Node.js SDK also provides a small CLI tool for users
![](../../../../images/articles/node.js_sdk_quick_start/IMG_5712.PNG)
-3. Deploy the contract. The contract must be placed in the packages / cli / contracts directory before deployment.。
+3. Deploy the contract.
The contract must be placed in the packages/cli/contracts directory before deployment.
![](../../../../images/articles/node.js_sdk_quick_start/IMG_5713.PNG)
@@ -110,4 +110,4 @@ In addition to the API, the Node.js SDK also provides a small CLI tool for users
## The future of Node.js SDK needs you
-Currently, the Node.js SDK is still growing, and in some places it still needs to be further polished, such as the need for CLI tools to parse SQL statements, or the need to optimize the performance of the SDK...... Adhering to the spirit of open source, we believe that the energy of the community can make the Node.js SDK more convenient and easy to use.! \ No newline at end of file
+Currently, the Node.js SDK is still growing, and some areas still need further polish, such as enabling the CLI tool to parse SQL statements or optimizing the performance of the SDK... Adhering to the spirit of open source, we believe the energy of the community can make the Node.js SDK more convenient and easy to use!
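The Promise-based calling convention described in section 3.2 above can be illustrated with a short, self-contained sketch. Note that `mockGetBlockNumber` below is a hypothetical stand-in for an SDK call such as `web3jService.getBlockNumber()`; it is not part of the Node.js SDK itself, and only demonstrates why using an API's return value directly (instead of unwrapping the Promise) leads to bugs:

```javascript
// mockGetBlockNumber is a hypothetical stand-in for an SDK call like
// web3jService.getBlockNumber(); it is NOT part of the Node.js SDK.
function mockGetBlockNumber() {
  return new Promise(resolve => setTimeout(() => resolve(42), 10));
}

// Bug pattern: the raw return value is a pending Promise, not a number.
const direct = mockGetBlockNumber();
console.log(direct instanceof Promise); // true: not a number yet

// Correct: unwrap the result with then/catch ...
mockGetBlockNumber()
  .then(blockNumber => console.log("then style:", blockNumber))
  .catch(err => console.error("request failed:", err));

// ... or with await inside an async function.
(async () => {
  try {
    const blockNumber = await mockGetBlockNumber();
    console.log("await style:", blockNumber);
  } catch (err) {
    console.error("request failed:", err);
  }
})();
```

Both styles are equivalent; `await` usually reads more naturally when several SDK calls depend on each other's results.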
\ No newline at end of file
diff --git a/3.x/en/docs/articles/4_tools/44_sdk/python-sdk_origin_function_and_realization.md b/3.x/en/docs/articles/4_tools/44_sdk/python-sdk_origin_function_and_realization.md
index f1188d590..1b0ffa492 100644
--- a/3.x/en/docs/articles/4_tools/44_sdk/python-sdk_origin_function_and_realization.md
+++ b/3.x/en/docs/articles/4_tools/44_sdk/python-sdk_origin_function_and_realization.md
@@ -2,23 +2,23 @@ Author : Chen Yujie | FISCO BCOS Core Developer
-In June this year, on the basis of the existing Java SDK, FISCO BCOS has launched the Python SDK and Go SDK, many developers said, these two components greatly reduce the difficulty of developing blockchain applications, so that more developers can participate in the open source construction of FISCO BCOS.。Today, this article will take a look at the past lives of the Python SDK to see how this simple and easy-to-use component that plays a great role is implemented.。
+In June this year, building on the existing Java SDK, FISCO BCOS launched the Python SDK and Go SDK. Many developers said that these two components greatly reduce the difficulty of developing blockchain applications, allowing more developers to take part in the open source construction of FISCO BCOS. Today, this article looks back at the origin of the Python SDK to see how this simple, easy-to-use, and highly useful component is implemented.
## Why: Why the Python SDK?
-As we all know, Python language is simple and clear, almost close to natural language, you can quickly get started;Although the Java language is powerful, the syntax and components are relatively complex and are generally used in enterprise applications.。In order to combine the advantages of the two, FISCO BCOS Chief Architect Zhang Kaixiang achieved the first version of the Python SDK after two weeks of intensive development.。This version played a huge role in the community's hackathon competition in July this year, helping participants better focus on the development project itself.。In the high-intensity hackathon competition, participants have only 36 hours to complete blockchain-related projects。In the exchange with the contestants, we found that most geek developers started from public chains such as Bitcoin and Ethereum. They are not very familiar with the Java language, but are proficient in Python, Go, and Node.JS.。At that time, the team mainly promoted Java SDK, console, WeBASE and other Java-based blockchain development tools, but in just 36 hours, it was really difficult for geek developers who were not familiar with Java to complete development based on these tools.。In this case, the first Python SDK has become a life-saving medicine for developers.。In the end, many of the entries were based on the Python SDK.。
+As we all know, the Python language is simple and clear, close to natural language, and quick to get started with; the Java language, while powerful, has relatively complex syntax and components and is generally used in enterprise applications. To combine the advantages of the two, FISCO BCOS Chief Architect Zhang Kaixiang completed the first version of the Python SDK after two weeks of intensive development. This version played a huge role in the community's hackathon in July this year, helping participants focus on their projects. In that high-intensity competition, participants had only 36 hours to complete a blockchain-related project. In exchanges with the contestants, we found that most geek developers had started from public chains such as Bitcoin and Ethereum; they were not very familiar with Java but were proficient in Python, Go, and Node.js. At that time, the team mainly promoted Java-based blockchain development tools such as the Java SDK, the console, and WeBASE, but in just 36 hours it was genuinely difficult for developers unfamiliar with Java to build on these tools. In this situation, the first Python SDK became a lifesaver for developers. In the end, many of the entries were based on the Python SDK.
## What: What is the Python SDK?
-The Python SDK has been evolved and optimized in different versions to gradually realize rich functions.。
+The Python SDK has evolved and been optimized across versions, gradually gaining rich functionality.
### Basic Python SDK
-The first version of the Python SDK is mainly developed on the Windows platform and implements an interface to access the RPC service of the FISCO BCOS chain. Through the Python SDK interface, users can obtain basic information about the blockchain.(Such as block height, block, transaction, transaction receipt, etc.)You can also deploy and invoke contracts。However, this version of the SDK does not support the channel protocol, so the communication security between the SDK and the node cannot be guaranteed.;In addition, the Python SDK cannot receive the receipt and block height information pushed by the FISCO BCOS blockchain node. Therefore, after the transaction is sent, the node must be polled to obtain the latest information.。
+The first version of the Python SDK was developed mainly on Windows and implements an interface to access the RPC service of the FISCO BCOS chain.
Through the Python SDK interface, users can obtain basic information about the blockchain (such as block height, blocks, transactions, and transaction receipts) and can also deploy and invoke contracts. However, this version of the SDK does not support the Channel protocol, so the security of communication between the SDK and the node cannot be guaranteed; in addition, the Python SDK cannot receive the receipt and block height notifications pushed by FISCO BCOS nodes, so after a transaction is sent, the node must be polled to obtain the latest information.
### Python SDK supporting the Channel protocol
-To improve the security of communication between the SDK and nodes, the Python SDK implements the Channel protocol in the basic version.。After the SDK that supports the Channel protocol sends the transaction, it does not need to poll the node to obtain the transaction execution result, but can directly receive the push of the execution result from the node, so that after the SDK sends the transaction, it will not block the polling in the state, realizing the function of sending the transaction asynchronously.。
+To improve the security of communication between the SDK and nodes, the Python SDK implements the Channel protocol on top of the basic version. An SDK that supports the Channel protocol does not need to poll the node for the transaction execution result after sending a transaction; instead, it receives the execution result pushed by the node. The SDK therefore does not block in a polling state after sending a transaction, which enables asynchronous transaction sending.
### Support for Precompile contract calls
@@ -26,32 +26,32 @@ In order to break through the performance bottleneck of blockchain virtual machi
### Python SDK Console
-After the above functions are ready, the Python SDK can already implement all interactions with the FISCO BCOS chain, but users still cannot intuitively experience the FISCO BCOS chain through the Python SDK. Therefore, on the basis of implementing all the above interfaces, the Python SDK integrates the console.。
+After the above functions are ready, the Python SDK can already implement all interactions with the FISCO BCOS chain, but users still cannot experience the FISCO BCOS chain intuitively through it. Therefore, on top of all the above interfaces, the Python SDK integrates a console.
### Python SDK multi-platform support
-As we all know, Python is a cross-platform language that supports various development platforms such as Windows, MacOS, CentOS, and Ubuntu.。The Python SDK is based on the basic version of Windows system development, so there are still some problems when deployed in MacOS and CentOS systems, such as inconsistent dependency package requirements for each platform, different installation methods for Python that meet the Python SDK version, and different installation methods for contract compilers.。
+As we all know, Python is a cross-platform language that supports development platforms such as Windows, MacOS, CentOS, and Ubuntu. However, the basic version of the Python SDK was developed on Windows, so some problems remain when it is deployed on MacOS and CentOS, such as inconsistent dependency package requirements across platforms, different ways of installing a Python version that meets the SDK's requirements, and different ways of installing the contract compiler.
-The above problems will cause some developers to run out of energy on installing the basic environment. To solve this problem, the Python SDK provides a deployment script 'init _ env.sh', which can be used by users to initialize the Python SDK environment with one click.。
+The above problems can leave some developers exhausted just installing the basic environment.
To solve this problem, the Python SDK provides the deployment script `init_env.sh`, which users can run to initialize the Python SDK environment with one click.
The main features of the `init_env.sh` script include:
-- If the Python version of the system running the Python SDK is less than 3.6.0, install the 3.7.3 Python virtual environment python-sdk;
+- If the Python version of the system running the Python SDK is lower than 3.6.0, install a Python 3.7.3 virtual environment named python-sdk;
- Install version v0.4.25 of the Solidity compiler.
## How: How to implement Python SDK
-The previous section introduced the main features of the Python SDK, and this section talks about the implementation of the Python SDK.。
+The previous section introduced the main features of the Python SDK; this section discusses how it is implemented.
-Rome was not built in a day, and the Python SDK referred to [Ethereum Client] during development.(https://github.com/ethereum/web3.py)Some modules, including:
+Rome was not built in a day: during development, the Python SDK drew on some modules of the [Ethereum client web3.py](https://github.com/ethereum/web3.py), including:
- ABI codec module
-- RLP Codec Module
-- Account and Key Generation Module
+- RLP codec module
+- Account and key generation module
### Implementing the Channel Protocol
-On the basis of the above modules, the Python SDK implements the RPC interface and the sending transaction interface that support the FISCO BCOS Channel protocol.。The Channel message package type definition is implemented in the module 'client.channelpack.ChannelPack'. The main message package types include:
+On the basis of the above modules, the Python SDK implements the RPC interface and the transaction sending interface that support the FISCO BCOS Channel protocol. The Channel message packet types are defined in the `client.channelpack.ChannelPack` module. The main message packet types include:
- 0x12: RPC type messages
- 0x13: node heartbeat packet
@@ -65,21 +65,21 @@ On the basis of the above modules, the Python SDK implements the RPC interface a
The implementation of the Channel protocol is located in the `client.channel.handler.ChannelHandler` module. To support sending transactions asynchronously, the Python SDK introduces the `pymitter` and `promise` components to manage asynchronous events. The main workflow of the module for sending a transaction and obtaining its receipt is as follows:
- 1. The Python SDK calls the `sendRawTransactionAndGetReceipt` interface to encode and sign the transaction, and passes the encoded data to the `ChannelHandler` module to send the socket request packet;
-- 2. After ChannelHandler receives the data of 1, register the uuid of the data packet to the asynchronous event queue, put the message packet into the send buffer and send it;
-- 3. After the node has processed the transaction, it sends a transaction chain notification of type 0x1001 to the SDK.;
-- 4. After receiving the return packet with message type 0x1001, SDK-side ChannelHandler takes out the uuid of the message packet, triggers the asynchronous event corresponding to the uuid, and returns the transaction execution result to the upper-level application。
+- 2. After `ChannelHandler` receives the data from step 1, it registers the packet's uuid in the asynchronous event queue, puts the message packet into the send buffer, and sends it;
+- 3. After the node processes the transaction, it sends an on-chain transaction notification of type 0x1001 to the SDK;
+- 4.
After receiving the return packet with message type 0x1001, the SDK-side `ChannelHandler` takes out the uuid of the message packet, triggers the asynchronous event corresponding to that uuid, and returns the transaction execution result to the upper-level application.
### Implement Console
The Python SDK console is implemented in the `console` module, which mainly calls the following module interfaces to interact with the blockchain:
-- 'client.bcosclient ': Python SDK basic interface class, which provides the interface for accessing FISCO BCOS RPC services, deploying and invoking contracts;
-- 'console _ utils.precompile ': Provides access to Precompile precompiled contracts;
+- `client.bcosclient`: the Python SDK basic interface class, which provides interfaces for accessing FISCO BCOS RPC services and for deploying and invoking contracts;
+- `console_utils.precompile`: provides access to Precompile precompiled contracts;
- `console_utils.rpc_console`: encapsulates `client.bcosclient` to call RPC services from the command line.
## Python SDK usage demonstration
-Learn about the origin, features, and implementation of the Python SDK. Here's a look at the Python SDK from the console perspective.:
+Having covered the origin, features, and implementation of the Python SDK, let's now look at it from the console's perspective:
### Get Node Version
@@ -90,14 +90,14 @@ Enter '. / console.py getNodeVersion' in the command line terminal to obtain the
### Deploying HelloWorld Contracts
-The Python SDK has a built-in HelloWorld contract. Enter '. / console.py deploy HelloWorld' in the command line terminal to deploy the 'HelloWorld' contract. If you need to deploy a custom contract, you need to place the contract in the 'contracts' subdirectory and use the command '. / console.py deploy [contract name]' to deploy the contract.。The following figure is the HelloWorld contract deployment output, from which you can see that the contract address is: `0xbbe16a7054c0f1d3b71f4efdb51b9e40974ad651`
+The Python SDK has a built-in HelloWorld contract. Enter `./console.py deploy HelloWorld` in the command line terminal to deploy the HelloWorld contract. To deploy a custom contract, place it in the `contracts` subdirectory and run `./console.py deploy [contract name]`. The following figure shows the HelloWorld contract deployment output, from which you can see that the contract address is: `0xbbe16a7054c0f1d3b71f4efdb51b9e40974ad651`
![](../../../../images/articles/python-sdk_origin_function_and_realization/IMG_5716.JPG)
### Invoking the HelloWorld contract
-The Python SDK console uses the 'sendtx' subcommand to send transactions and the call subcommand to call the contract constant interface.。HelloWorld contract code is as follows, mainly including 'set' and get two interfaces, the former is used to set the contract local variable 'name', the latter is a constant interface, get the current value of the local variable name。
+The Python SDK console uses the `sendtx` subcommand to send transactions and the `call` subcommand to call a contract's constant interfaces. The HelloWorld contract code is as follows; it mainly includes two interfaces, `set` and `get`: the former sets the contract's state variable `name`, and the latter is a constant interface that gets the current value of `name`.
```
pragma solidity^0.4.24;
@@ -127,10 +127,10 @@ Because the 'set' interface changes the contract status, it is called using the
Parameters include:
-- contract _ name: contract name
-- contract _ address: contract address
-- function: function interface
-- args: parameter list
+- contract_name: contract name
+- contract_address: contract address
+- function: function interface
+- args: parameter list
Use the `./console.py sendtx HelloWorld 0xbbe16a7054c0f1d3b71f4efdb51b9e40974ad651 set "Hello,Fisco"` command to set the `name` member variable of the HelloWorld contract to "Hello,Fisco"; the output is as follows:
@@ -146,10 +146,10 @@ The get of 'HelloWorld' is a constant interface, so it is called using the 'call
Parameters include:
-- contract _ name: contract name
-- contract _ address: the address of the contract called
-- function: the contract interface called
-- args: call parameter
+- contract_name: contract name
+- contract_address: the address of the called contract
+- function: the called contract interface
+- args: call parameters
Use `./console.py call HelloWorld 0xbbe16a7054c0f1d3b71f4efdb51b9e40974ad651 get` to get the latest value of the HelloWorld contract's `name` member variable:
@@ -158,5 +158,5 @@ Use. / console.py call HelloWorld 0xbbe16a7054c0f1d3b71f4efdb51b9e40974ad651 get
## Summary
-This article describes the past and present life of the Python SDK, including the origin, function and implementation of the Python SDK.(https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/python_sdk/index.html)。Of course, the Python SDK is still in the process of continuous improvement, welcome community enthusiasts to pay attention to [related issue](https://github.com/FISCO-BCOS/python-sdk/issues)and contribute valuable PR to the optimization of Python SDK。
+This article has described the origin, features, and implementation of the Python SDK; for more details, see the [Python SDK documentation](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/python_sdk/index.html). Of course, the Python SDK is still being improved continuously; community enthusiasts are welcome to follow the [related issues](https://github.com/FISCO-BCOS/python-sdk/issues) and contribute valuable PRs to its optimization.
diff --git a/3.x/en/docs/articles/4_tools/44_sdk/python_blockchain_box.md
b/3.x/en/docs/articles/4_tools/44_sdk/python_blockchain_box.md
index 9e0526e23..7c4af6a4c 100644
--- a/3.x/en/docs/articles/4_tools/44_sdk/python_blockchain_box.md
+++ b/3.x/en/docs/articles/4_tools/44_sdk/python_blockchain_box.md
@@ -6,35 +6,35 @@ The author is a Python developer who encapsulates the FISCO BCOS Python SDK as a
## Foreword
-As a Python developer, I've always wanted to learn about blockchain through Python.。By coincidence, at an open source annual meeting in 2019, I contacted and joined the FISCO BCOS open source community, and since then, I have been spending my spare time pondering the Python of FISCO BCOS.-SDK。
+As a Python developer, I've always wanted to learn about blockchain through Python. By coincidence, at an open source annual meeting in 2019, I encountered and joined the FISCO BCOS open source community, and since then I have spent my spare time exploring the Python-SDK of FISCO BCOS.
-When configuring the environment, I spent some time, so I also came up with the idea of encapsulating the entire framework into a docker image, I named it "Python blockchain box," just like Minecraft's "workbench," which can improve the speed of configuring the environment and improve ease of use.。With that in mind, I started using my spare time to write Dockerfiles.。
+Since configuring the environment took me some time, I came up with the idea of encapsulating the entire framework into a Docker image, which I named the "Python blockchain box." Like Minecraft's "workbench," it speeds up environment configuration and improves ease of use. With that in mind, I started writing Dockerfiles in my spare time.
![](../../../../images/articles/python_blockchain_box/IMG_5746.PNG)
-I will build a good docker image to share with the students around the experience, "ready to use" feature response is very good, we will not be afraid because of the environment configuration is difficult, like Steve in Minecraft, put down the "workbench" can create a bunch of useful tools out.。
+I shared the built Docker image with classmates around me to try out, and its "ready to use" nature was very well received: no one has to fear a difficult environment configuration any more. Like Steve in Minecraft, you put down the "workbench" and can create a bunch of useful tools.
## What is a "Python Blockchain Box"?
-Before answering this question, let's take a look at Python.-SDK。This is open source by FISCO BCOS and helps developers develop components for blockchain applications using the Python language.。Since it is developed through the Python language, I believe it will have a continuous vitality。
+Before answering this question, let's take a look at the Python-SDK. It is open-sourced by FISCO BCOS and helps developers build blockchain application components in the Python language. Since it is developed in Python, I believe it will stay vital for a long time.
![](../../../../images/articles/python_blockchain_box/IMG_5747.PNG)
-Python's reading difficulty is relatively low, especially for students and beginners through Python.-SDK to understand and learn blockchain。You can try to install the building according to the following environmental requirements。
+Python is relatively easy to read, which makes the Python-SDK especially suitable for students and beginners to understand and learn blockchain. You can try to set it up according to the following environment requirements.
-- Python environment: Python 3.6.3, 3.7.x
+- Python environment: Python 3.6.3 or 3.7.x
- FISCO BCOS node: Please refer to [FISCO BCOS Installation](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/installation.html)
-About Python-SDK, click to refer to the development tutorial launched by the FISCO BCOS team。
+For the Python-SDK, please refer to the development tutorials published by the FISCO BCOS team:
-- [The sparrow is small and has all five internal organs| From Python-SDK Talk about FISCO BCOS Multilingual SDK](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485291&idx=1&sn=c359380a89621d1a64856183568825ee&chksm=9f2ef577a8597c61e5dd5e458d489926138a42808a06517f4d6515d4666dc11a08646ccebea2&scene=21#wechat_redirect)
-- [《Python-SDK's Past Lives](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485256&idx=1&sn=f1e70be6c53ea7e690392ce1ac8b5f5e&chksm=9f2ef554a8597c4278c630f60923a683b9e47499319e31aab41ccfd98715a9db2300e7d782e3&scene=21#wechat_redirect)
+- [Small but complete: the FISCO BCOS multi-language SDKs, seen from the Python-SDK](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485291&idx=1&sn=c359380a89621d1a64856183568825ee&chksm=9f2ef577a8597c61e5dd5e458d489926138a42808a06517f4d6515d4666dc11a08646ccebea2&scene=21#wechat_redirect)
+- [The origin and evolution of the Python-SDK](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485256&idx=1&sn=f1e70be6c53ea7e690392ce1ac8b5f5e&chksm=9f2ef554a8597c4278c630f60923a683b9e47499319e31aab41ccfd98715a9db2300e7d782e3&scene=21#wechat_redirect)
-"Python Blockchain Box" is equivalent to Python that will be configured-The SDK and the deployed blockchain are packaged into a package, similar to organizing a large house into an RV。In this way, users do not need to pay attention to environment configuration issues, which can reduce deployment time - images can be obtained in less than a minute, which is convenient for developers to quickly get started and facilitate automated operation and maintenance.。You can also try to combine this service with JenKins to further optimize the operation and maintenance process.。The "Python blockchain box" can be used as a workbench for Python blockchain development, a ready-to-use toolbox, and can ensure a clean development environment, and most importantly, it is very light, just like the workbench in Minecraft.。
+The "Python blockchain box" packages a configured Python-SDK and a deployed blockchain together, like fitting a large house into an RV. Users no longer need to worry about environment configuration, which cuts deployment time: the image can be obtained in less than a minute, making it easy for developers to get started quickly and convenient for automated operations. You can also try combining this service with Jenkins to further streamline operations and maintenance. The "Python blockchain box" can serve as a workbench for Python blockchain development, a ready-to-use toolbox that guarantees a clean development environment; most importantly, it is very light, just like the workbench in Minecraft.
-As long as you have a computer, you can open this toolbox anytime, anywhere。The process of installing the toolbox has also become enjoyable: you can get started with direct research and development with just one line of code, without paying too much attention to the complexity of the environment configuration, saving a lot of time and freeing your hands.。
+As long as you have a computer, you can open this toolbox anytime, anywhere. Installing the toolbox even becomes enjoyable: with just one line of code you can start development directly, without paying much attention to the complexity of environment configuration, saving a lot of time and freeing your hands.
## Get and Run the "Python Blockchain Box"
@@ -68,25 +68,25 @@ INFO >> user input : ['getNodeVersion']
INFO >> getNodeVersion >> { "Build Time": "20190923 13:22:09", "Build Type": "Linux/clang/Release", "Chain Id": "1", "FISCO-BCOS Version": "2.1.0", "Git Branch": "HEAD", "Git Commit Hash": "cb68124d4fbf3df563a57dfff5f0c6eedc1419cc", "Supported Version": "2.1.0" }
```
-After completing these, it was successful, back and forth, equivalent to building an open source blockchain framework in a few seconds.。You can put your contract in / python-For more information about sdk / contracts, see [Python-How to
use SDK](https://github.com/FISCO-BCOS/python-sdk)。This ready-to-use "blockchain box" is very helpful for developers who want to use Python to develop blockchain applications or learn blockchain.。Developers can do this by calling / python.-Functions in the sdk / client use the. / console.py command line and interact with the blockchain running in the box。 +After completing these, it was successful, back and forth, equivalent to building an open source blockchain framework in a few seconds。You can put your own contract in / python-sdk / contracts, more can refer to [Python-SDK usage](https://github.com/FISCO-BCOS/python-sdk)。This ready-to-use "blockchain box" is very helpful for developers who want to use Python to develop blockchain applications or learn blockchain。Developers can use the. / console.py command line and interact with the blockchain running in the box by calling the function in / python-sdk / client。 -The following will be in Python-Flask development as an example to implement the function of calling the HelloWorld contract。 +The following will take Python-Flask development as an example to implement the function of calling the HelloWorld contract。 -- Step1 into container +-step1 into container ``` docker run -it -p 20200:20200 -p 80:80 --name flask_web fiscoorg/playground:python_sdk ``` -- step2 Start Node +-step2 Start Node ``` bash /root/fisco/nodes/127.0.0.1/start_all.sh ``` -- step3 Deploying HelloWorld Contracts +-step3 Deploy HelloWorld contract -/python-HelloWorld.sol is stored in the sdk / contract. You can directly use this contract for testing.。First, check the contents of the HelloWorld.sol contract。 +HelloWorld.sol is stored in / python-sdk / contract. 
You can directly use this contract for testing. First, check the contents of the HelloWorld.sol contract. ``` pragma solidity ^0.4.24; @@ -138,9 +138,9 @@ on block : 1,address: 0x2d1c577e41809453c50e7e5c3f57d06f3cdd90ce address save to file: bin/contract.ini ``` -After completion, you can get the address of the HelloWord contract deployment, and call the function interface through this address.。 +After completion, you get the address at which the HelloWorld contract was deployed, and can call its function interfaces through this address. -- step4 in / python-Edit app.py under the sdk folder +- step4 Edit app.py under the /python-sdk folder ``` $ vi app.py @@ -171,21 +171,21 @@ if __name__ == '__main__': app.run(host="0.0.0.0", port=80) ``` -- step5 Install app.py Dependency / Runner +- step5 Install the app.py dependencies and run it ```
pip install flask
python app.py
``` -This is done through Python.-The Flask framework implements calling the HelloWorld contract, calling the get interface to view the string, and calling the set interface to update the string.。Python for FISCO BCOS-SDK is very suitable for students like me or beginners to study and understand blockchain technology。I'm looking forward to more developers to participate in it and use it to build more interesting and fun open source projects.。Dockerfile address please refer to the end of the article, recently I will also make some updates to it to improve its ease of operation, the latest operation manual and news will be released on GitHub, welcome everyone to pay attention。Click for reference [More Python Demo](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/python_sdk/demo.html)。 +In this way, the Python-Flask framework is used to call the HelloWorld contract: the get interface reads the string and the set interface updates it. The FISCO BCOS Python-SDK is very suitable for students like me, or beginners, to study and understand blockchain technology. I'm looking forward to more 
developers participating in it and using it to build more interesting and fun open-source projects. For the Dockerfile address, please see the end of the article; I will also be making updates to improve its ease of use, and the latest manual and news will be released on GitHub, so stay tuned. Click [More Python Demo](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/python_sdk/demo.html) for reference. ## 3. Submit pr experience On how to submit a pr in FISCO BCOS, you can refer to the content compiled by the open source community, so not much explanation is needed here. For details, please refer to [Uncovering FISCO BCOS Open Source Project Development Collaboration](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485380&idx=1&sn=1f32ddad49b542206d24739f3de98b95&chksm=9f2ef5d8a8597cce973c9321543174de0e9a0bfebf1750cef4f6ae1641f4d189ea22d616cf98&scene=21#wechat_redirect) -I would like to share the experience of an individual submitting a pr, the whole process is both novel and interesting。After initiating the idea of "blockchain box," I quickly wrote the first Dockerfile, and then submitted the pr。Soon the community's little brother Shi Xiang replied to me, and at first I thought it was a foreign friend, so I kept using it." 
Poor English "communicate with him。He is very welcome to my PR, but also continue to give me praise and support, he not only solved many problems in the configuration process, but also from time to time to share some small stories to encourage me.。In the process of merging my pr, I am very grateful to the team for their warm help, timely review comments to me, and carefully introduce me to the content functions that need to be added, which makes me fully feel the atmosphere of harmonious coexistence and mutual help in the FISCO BCOS community.。 +I would like to share my personal experience of submitting a pr; the whole process was both novel and interesting. After coming up with the idea of the "blockchain box," I quickly wrote the first Dockerfile and then submitted the pr. Soon Shi Xiang from the community replied to me; at first I thought he was a foreign friend, so I kept communicating with him in my "poor English". He was very welcoming of my PR and kept giving me praise and support; he not only solved many problems in the configuration process, but also shared small stories from time to time to encourage me. While my pr was being merged, I was very grateful for the team's warm help: they gave me timely review comments and carefully walked me through the features that needed to be added, which made me fully feel the harmonious, mutually supportive atmosphere of the FISCO BCOS community. -At present, the "Python blockchain box" still has some areas to be optimized. 
For example, every time a container is started, manual operations are required.--> The machine exposes the required ports, the default is 20200,8045,30300。Later, you may consider optimizing the functionality of the default startup node, adding data volumes, and optimizing the container size so that fiscoorg / playground:python _ sdk more refined。If you have good optimization ideas, welcome to submit pr, to a fun novel pr experience +At present, the "Python blockchain box" still has some areas to be optimized. For example, every time a container is started, manual operations are required: to start a node, the machine must expose the required ports, which default to 20200, 8045, 30300. Later, we may consider optimizing the default node-startup functionality, adding data volumes, and trimming the container size to make fiscoorg/playground:python_sdk more refined. If you have good optimization ideas, you are welcome to submit a pr and enjoy a fun, novel pr experience. ------ diff --git a/3.x/en/docs/articles/4_tools/44_sdk/talking_about_java-contract-code.md b/3.x/en/docs/articles/4_tools/44_sdk/talking_about_java-contract-code.md index 276cbfa0e..e31be98c8 100644 --- a/3.x/en/docs/articles/4_tools/44_sdk/talking_about_java-contract-code.md +++ b/3.x/en/docs/articles/4_tools/44_sdk/talking_about_java-contract-code.md @@ -2,7 +2,7 @@ Author : WANG Zhang | FISCO BCOS Core Developer -FISCO BCOS provides SDKs in multiple languages, including Go, NodeJS, Python, and Java。The Java SDK is different from other language SDKs. 
When calling a contract, you need to use the contract compilation tool to generate the corresponding Java code from the source code of the Solidity contract.。This Java code generated by the contract compilation tool with the same name as the Solidity contract, commonly known as the Java contract code, this article describes how to generate and use this code.。 +FISCO BCOS provides SDKs in multiple languages, including Go, NodeJS, Python, and Java. The Java SDK differs from the other language SDKs: when calling a contract, you need to use the contract compilation tool to generate the corresponding Java code from the Solidity contract's source code. The Java code generated by the tool shares the same name as the Solidity contract and is commonly known as the Java contract code; this article describes how to generate and use this code. ## How to Generate Java Contract Code @@ -12,7 +12,7 @@ The contract compilation tool can generate the corresponding Java code from the The Contract Application Binary Interface (ABI) is the standard way to interact with contracts in the Ethereum ecosystem, both from outside the blockchain and for contract-to-contract interaction. 
``` -ABI is the standard way to interact with contracts in the Ethereum ecosystem, including the interaction between external clients and contracts, and the interaction between contracts.。More generally, ABI is a specific description of a contract interface, including a list of contract interfaces, interface names, parameter names, parameter types, return types, and so on.。This description is usually in JSON format, see [ABI format details](https://solidity.readthedocs.io/en/develop/abi-spec.html#json)。In the EVM ecosystem, the Solidity compiler can generate contract ABI information。When the contract compilation tool generates Java code, compile the Solidity contract to generate ABI information, parse the ABI file, and determine the list of interfaces contained in the contract, the list of input parameter names / types of each interface, and the return type according to the description of the ABI file.。Based on this information, the contract compilation tool generates an interface for the generated Java contract contract class。Specific can refer to the following examples。 +ABI is the standard way to interact with contracts in the Ethereum ecosystem, covering both the interaction between external clients and contracts and the interaction between contracts. More generally, an ABI is a concrete description of a contract interface: the list of contract interfaces, interface names, parameter names, parameter types, return types, and so on. This description is usually in JSON format; see [ABI format details](https://solidity.readthedocs.io/en/develop/abi-spec.html#json). In the EVM ecosystem, the Solidity compiler can generate the contract ABI information. When generating Java code, the contract compilation tool compiles the Solidity contract to produce the ABI information, parses the ABI file, and, according to that description, determines the list of interfaces contained in the contract, each interface's input parameter names and types, and the return types. Based on this information, the tool generates the interfaces of the resulting Java contract class. See the following examples for details. ``` // Sample contract HelloWorld.sol @@ -39,7 +39,7 @@ HelloWorld Contract ABI: [{"constant":false,"inputs":[{"name":"n","type":"string"}],"name":"set","outputs":[],"payable":false,"type":"function","stateMutability":"nonpayable"},{"constant":true,"inputs":[],"name":"get","outputs":[{"name":"","type":"string"}],"payable":false,"type":"function","stateMutability":"view"},{"inputs":[],"type":"constructor","payable":true,"stateMutability":"payable"}] ``` -The above code contains descriptions of three interfaces: set, get, and constructor.(Default constructor, no parameters are handled)。For the generation of set and get interfaces, please refer to the following figure.。 +The above code contains descriptions of three interfaces: set, get, and constructor (the default constructor, which takes no parameters). For the generation of the set and get interfaces, please refer to the following figure. ### set interface @@ -61,22 +61,22 @@ public class HelloWorld { } ``` -**Set and get in HelloWorld.java class are encapsulations of HelloWorld contract get and set calls, respectively。**As can be seen from the above introduction, the contract compilation tool obtains the contract ABI information through compilation, obtains the contract interface description information by parsing the ABI content, and generates the corresponding interface for the Java class.。 +**The set and get methods in the HelloWorld.java class are encapsulations of the HelloWorld contract's set and get calls, respectively.** As can be seen from the above, the contract compilation tool obtains the contract ABI information through compilation, obtains the contract interface descriptions by parsing the ABI content, and generates the corresponding interfaces for the Java class. ## Java Object Oriented -Having learned how to generate Java contract 
code, the next step is to explain how to invoke the contract through the generated interface.。The HelloWorld contract is still used here to illustrate how to call the interface.: +Having learned how to generate Java contract code, the next step is to explain how to invoke the contract through the generated interfaces. The HelloWorld contract is again used here to illustrate: ``` HelloWorld helloWorld; // Initialize HelloWorld object, omitted TransactionReceipt receipt = helloWorld.set("HelloWorld").send(); // call the set interface ``` -This gesture of invoking contracts in the Java SDK can be summarized as: operating contracts for Java objects.。In this way, the user only needs to use the contract compilation tool to generate the Java contract class, all operations on the contract are based on the constructed Java contract object, no longer need to pay attention to the contract ABI, send the details of acceptance, transaction packaging encoding, decoding of the results returned and other masked details.。Please refer to [Java SDK Tutorial] for specific ways to call the contract.(https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/java_sdk.html)。It is worth mentioning that in certain scenarios, the object-oriented Java calling method obviously cannot meet the requirements, for example: the Java contract code cannot be generated in advance, or the transaction signing and transaction construction services need to be separated.。In these scenarios, using gestures like nodejs / python sdk is more flexible。But the most flexible is that users themselves care about the overall process of transaction coding and decoding, packaging, signing, sending, retrieving, and decoding.。 +This approach to invoking contracts in the Java SDK can be summarized as: operating on contracts as Java objects. In this way, the user only needs the contract compilation tool to generate the Java contract class; all operations on the contract are based on 
the constructed Java contract object; the user no longer needs to care about the contract ABI, the details of sending and receiving, transaction packing and encoding, decoding of returned results, and other hidden details. Please refer to the [Java SDK Tutorial](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/java_sdk.html) for specific ways to call a contract. It is worth mentioning that in certain scenarios the object-oriented Java calling style cannot meet the requirements, for example when the Java contract code cannot be generated in advance, or when the transaction signing and transaction construction services need to be separated. In these scenarios, an approach like that of the nodejs/python sdk is more flexible. The most flexible option of all is for users themselves to handle the whole process of transaction encoding and decoding, packing, signing, sending, and receiving. ![](../../../../images/articles/talking_about_java-contract-code/IMG_5745.PNG) ## SUMMARY -The use of the Java SDK helps users mask the details of the encoding, signing, sending, receiving, decoding and other processes, and generate Java contract code through the previous contract ABI conversion, which can be used in multiple places at once.。However, there are also users who want to be able to grasp the whole process of transaction sending, or to decouple the various processes of transaction sending in specific scenarios.。In this case, the NodeJS, Python, and Go versions of the client support these details more fully, and the Java SDK will gradually open the interfaces of each module.。 +The Java SDK shields users from the details of encoding, signing, sending, receiving, and decoding; the Java contract code, generated beforehand from the contract ABI, can be generated once and used in many places. However, some users want to grasp the whole transaction-sending process, or to decouple the 
various processes of transaction sending in specific scenarios. In this case, the NodeJS, Python, and Go versions of the client expose these details more fully, and the Java SDK will gradually open up the interfaces of each module. \ No newline at end of file diff --git a/3.x/en/docs/articles/4_tools/44_sdk/use_javasdk_in_eclipse.md b/3.x/en/docs/articles/4_tools/44_sdk/use_javasdk_in_eclipse.md index 4e08329ad..229d617f2 100644 --- a/3.x/en/docs/articles/4_tools/44_sdk/use_javasdk_in_eclipse.md +++ b/3.x/en/docs/articles/4_tools/44_sdk/use_javasdk_in_eclipse.md @@ -4,7 +4,7 @@ Author : WANG Zhang | FISCO BCOS Core Developer Eclipse is one of the current mainstream Java IDEs; this article is hands-on and guides you through creating FISCO BCOS JavaSDK applications in Eclipse。 -This article first describes how to create a new project in Eclipse, introduce JavaSDK dependencies into the project, interact with the blockchain through configuration, and finally get the blockchain's block high validation to create the project。At the same time, this article will also introduce a more convenient way to use it in actual development, and import the sample projects already provided into Eclipse.。 +This article first describes how to create a new project in Eclipse, introduce the JavaSDK dependencies into the project, interact with the blockchain through configuration, and finally fetch the blockchain's block height to validate the newly created project. It also introduces a more convenient way used in actual development: importing the already-provided sample project into Eclipse. **Note:** @@ -23,7 +23,7 @@ Open Eclipse, right-click and select the Project option under New, as shown in t ![](../../../../images/articles/use_javasdk_in_eclipse/IMG_5632.PNG) -In the New Project dialog box that appears, select Gradle = > Gradle Project and click Next: +In the New Project dialog box that appears, select Gradle => Gradle Project and click Next: 
![](../../../../images/articles/use_javasdk_in_eclipse/IMG_5633.PNG) @@ -71,12 +71,12 @@ dependencies { } ``` -Then right click on the project name: Gradle = > Refresh Gradle Project Refresh Project。 +Then right-click the project name: Gradle => Refresh Gradle Project. ![](../../../../images/articles/use_javasdk_in_eclipse/IMG_5636.PNG) -**注意**: To refresh the project, you may need to download the JAR-dependent package from the remote Maven library. Please ensure that the network is unblocked. The download process will take some time.。 +**Note**: To refresh the project, the JAR dependencies may need to be downloaded from the remote Maven repository; please ensure the network is accessible, as the download will take some time. ### Certificates and Profiles @@ -90,13 +90,13 @@ At this point, we have completed the creation of the new project, introduced and ### Create a class package -Right-click the project name, select New = > Package, and enter the package name. Here, use org.fisco.bcos.test。 +Right-click the project name, select New => Package, and enter the package name; here we use org.fisco.bcos.test. ![](../../../../images/articles/use_javasdk_in_eclipse/IMG_5638.JPG) ### Create a test class -Right click on the package name and select 'New = > Class "。 +Right-click the package name and select New => Class. ![](../../../../images/articles/use_javasdk_in_eclipse/IMG_5639.PNG) @@ -136,7 +136,7 @@ public class NodeVersionTest { } ``` -Right-click the NodeVersionTest.java file and select Run As = > Java Application Run Test Class。 +Right-click the NodeVersionTest.java file and select Run As => Java Application to run the test class. ![](../../../../images/articles/use_javasdk_in_eclipse/IMG_5640.JPG) @@ -146,7 +146,7 @@ Run results: ## Eclipse Import Project -As can be seen from the above process, the process of creating a new project requires more configuration processes, in order to facilitate the use of users, we provide the existing sample project 
asset-App, users can quickly import it into Eclipse, and quickly modify and develop their own applications based on the sample.。Please refer to [asset-App Project Details](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/tutorial/sdk_application.html)。 +As can be seen from the above, creating a new project involves quite a few configuration steps. To make things easier for users, we provide the existing sample project asset-app, which users can quickly import into Eclipse and then quickly modify and develop into their own applications. Please refer to [asset-app project details](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/tutorial/sdk_application.html). ### Download the asset-app project @@ -162,13 +162,13 @@ Open Eclipse Select: File => Import => Gradle => Existing Gradle Project。 ![](../../../../images/articles/use_javasdk_in_eclipse/IMG_5643.JPG) -Click Next and select Asset-app path, click Finish and wait for the project to load。 +Click Next, select the asset-app path, then click Finish and wait for the project to load. ![](../../../../images/articles/use_javasdk_in_eclipse/IMG_5644.JPG) -After the project is loaded, right-click the project name: Gradle = > Refresh Gradle Project Refresh Project。 +After the project is loaded, right-click the project name: Gradle => Refresh Gradle Project. ![](../../../../images/articles/use_javasdk_in_eclipse/IMG_5645.JPG) -ok! asset-The app project has been loaded normally。 +ok! 
the asset-app project has been loaded normally. diff --git a/3.x/en/docs/articles/4_tools/45_othertools/contract_analysis_tool_guide.md b/3.x/en/docs/articles/4_tools/45_othertools/contract_analysis_tool_guide.md index 53c364cf3..1363e4fb7 100644 --- a/3.x/en/docs/articles/4_tools/45_othertools/contract_analysis_tool_guide.md +++ b/3.x/en/docs/articles/4_tools/45_othertools/contract_analysis_tool_guide.md @@ -2,7 +2,7 @@ Author : Liao Feiqiang | FISCO BCOS Core Developer -> This article will introduce FISCO BCOS's transaction parsing tool, which helps developers easily and quickly parse the input, output, and logs fields in transactions and transaction receipts to help blockchain application development。 +> This article introduces FISCO BCOS's transaction parsing tool, which helps developers easily and quickly parse the input, output, and logs fields in transactions and transaction receipts, aiding blockchain application development. Community users often ask: does FISCO BCOS's smart contract support getting the return value directly after sending a transaction? What is stored in the input, output, and logs fields of a transaction and transaction receipt, and can it be understood? How? @@ -10,21 +10,21 @@ Now, let the FISCO BCOS transaction parsing tool unravel this mystery! ## What: Parse what? 
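The fields discussed in this section are ABI-encoded hexadecimal strings. As a rough illustration of what "parsing" one of them involves, here is a toy Python decoder for the simplest case only: a single dynamic `string` return value (like HelloWorld's get), which the linked ABI specification lays out as a 32-byte offset, a 32-byte length, and the padded UTF-8 data. This is a hand-rolled sketch, not the parsing tool's actual API; the real tool handles arbitrary types via the contract abi.

```
def decode_string_output(output_hex: str) -> str:
    """Decode an ABI-encoded single `string` return value from a hex output field."""
    raw = bytes.fromhex(output_hex[2:] if output_hex.startswith("0x") else output_hex)
    offset = int.from_bytes(raw[0:32], "big")                 # where the dynamic data starts
    length = int.from_bytes(raw[offset:offset + 32], "big")   # byte length of the string
    return raw[offset + 32:offset + 32 + length].decode("utf-8")


# Example: the ABI encoding of the string "Hello, World!"
# (offset 0x20, length 13, data right-padded to a 32-byte word)
encoded = (
    "0x"
    + "20".rjust(64, "0")
    + "0d".rjust(64, "0")
    + "Hello, World!".encode().hex().ljust(64, "0")
)
print(decode_string_output(encoded))  # -> Hello, World!
```

This is only the single-`string` case; the point is that each field is a deterministic encoding that the abi fully describes, which is why the tool needs the contract abi to do its job.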
-The parsing tool parses three important fields in the transaction and transaction receipt, namely input, output, and logs.。What do these three fields represent and what does it have to do with smart contracts??Please give a picture below for analysis.。 +The parsing tool parses three important fields in the transaction and transaction receipt: input, output, and logs. What do these three fields represent, and what do they have to do with smart contracts? The figure below illustrates. ![](../../../../images/articles/contract_analysis_tool_guide/IMG_4963.PNG) In order to highlight the key points in the figure, only the key code related to the transaction parsing fields in the TableTest.sol contract is shown (the TableTest.sol contract is a sample contract provided by the console, which is used to create the sample table t_test and provides methods for adding, deleting, and modifying。The complete contract code can be found in the console directory contracts / consolidation / or directly through the document, please refer to: https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/smart_contract.html#crud)。 -The transaction and transaction receipt fields, again highlighting only the input, output, and logs fields to be parsed, and omitting the other fields。The transaction information contains the input field, and the transaction receipt information contains the input, output, and logs fields.**(Note:**Transaction receipts returned by FISCO BCOS 2.0.0 and above contain the input field**)。 +The figure likewise shows only the input, output, and logs fields to be parsed in the transaction and transaction receipt, omitting the other fields. The transaction information contains the input field, and the transaction receipt contains the input, output, and logs fields (**Note:** transaction receipts returned by FISCO BCOS 2.0.0 and above contain the input field). As you can see from the figure, the blue part is the signature 
of the insert method, the method signature part and the parameters passed in to call the method, which will be encoded into the input field (hexadecimal string) of the transaction and transaction receipt。 -The green part is the return value of the method, which will be encoded into the output field of the transaction receipt (hexadecimal string)。Here can answer a user's question,**That is, FISCO BCOS smart contracts support the return value after sending a transaction, which will be encoded and saved in the output field of the transaction receipt, and the return value can be parsed using the transaction parsing tool.**。 +The green part is the return value of the method, which is encoded into the output field of the transaction receipt (hexadecimal string). This answers one user question: **FISCO BCOS smart contracts do support return values after sending a transaction; the return value is encoded and saved in the output field of the transaction receipt and can be parsed with the transaction parsing tool**. The orange part is the event of the method call, which can record the event log information, which will be encoded into the logs field of the transaction receipt (where address is the call contract address, data is the hexadecimal encoding of the event log data, and topic is the hexadecimal encoding of the event signature)。 -It follows that the input, output, and event log of the contract method are encoded in the input, output, and logs fields corresponding to the transaction and transaction receipt.。To know which method of a contract is called for a transaction or transaction receipt, and what data the input, output, and event log are, all you have to do is parse these three fields, which is exactly what the transaction parsing tool is trying to solve.! 
+It follows that a contract method's input, return value, and event log are encoded into the input, output, and logs fields of the corresponding transaction and transaction receipt. To know which contract method a transaction or transaction receipt corresponds to, and what its input, output, and event log data are, all you have to do is parse these three fields: exactly the problem the transaction parsing tool solves! ## How: How to use? @@ -53,15 +53,15 @@ compile ('org.fisco-bcos:web3sdk:2.0.5') Use the TransactionDecoderFactory factory class to create a transaction parsing object, TransactionDecoder, in two ways: 1. TransactionDecoder buildTransactionDecoder(String abi, String bin): the input parameters are the abi and bin strings of the contract (the bin string is not used for now; you can pass an empty string)。 -2. TransactionDecoder buildTransactionDecoder(String contractName), the incoming contract name。You need to create the Solidity directory in the root directory of the application, place the contract related to the transaction in the Solidity directory, and obtain the transaction resolution object by specifying the contract name.。 +2. 
TransactionDecoder buildTransactionDecoder(String contractName): takes the contract name. You need to create the Solidity directory in the application's root directory, place the contract related to the transaction in that directory, and obtain the transaction parsing object by specifying the contract name. -**注意**: Before creating a transaction resolution object, make sure to resolve the contract corresponding to the transaction (that is, the transaction is generated by calling the contract), you can directly provide the solidity contract or the user compiles it, and then passes it into the abi of the contract, both methods can create a transaction resolution object.。 +**Note**: Before creating a transaction parsing object, make sure it is built from the contract corresponding to the transaction (that is, the transaction was generated by calling that contract). You can either provide the solidity contract directly, or compile it yourself and pass in the contract's abi; either way a transaction parsing object can be created. -### step 3: Call the transaction resolution object for the resolution task. 
+### step 3: Call the transaction parsing object to perform the parsing task -TransactionDecoder provides methods to return java objects and json strings (the json string form of java objects), respectively, for input, output, and logs.。For detailed design documents, please refer to: https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/sdk.html#id11。 +TransactionDecoder provides methods that return java objects and json strings (the json form of the java objects) for input, output, and logs respectively. For the detailed design documents, please refer to: https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/sdk.html#id11. -Java objects are convenient for the server to process data, and json strings are convenient for the client to process data.。 +Java objects are convenient for the server to process, and json strings are convenient for the client to process. The list of methods for transaction parsing objects is as follows: @@ -87,11 +87,11 @@ public class ResultEntity { private Object data; // field value } public class EventResultEntity extends ResultEntity { - private boolean indexed; / / The indexed flag. True indicates that the event field is decorated with the indexed keyword. + private boolean indexed; // The indexed flag. 
True indicates that the event field is decorated with the indexed keyword } ``` -Depending on the transaction object, you can get its input field;Depending on the transaction receipt object, you can get its input, output, and logs fields。Call the method corresponding to the transaction resolution object to resolve the relevant fields.。 +From a transaction object you can get its input field; from a transaction receipt object you can get its input, output, and logs fields. Call the corresponding method of the transaction parsing object to parse the relevant fields. **Note**: When parsing the output field of transaction receipts returned by FISCO BCOS versions prior to 2.0.0 (namely rc1, rc2, rc3), note that parsing the output field requires the input field, which is missing from those receipts; you can call the getTransactionByHash method of the web3j object with the hash field from the receipt to obtain the transaction object, and then take the input field from the transaction object to parse the output field. @@ -107,14 +107,14 @@ The following example parses the insert method that calls the TableTest contract | logs | Java objects:{InsertResult(int256)=[[EventResultEntity [name=count, type=int256, data=1, indexed=false]]]} | | | json string:{"InsertResult(int256)":[[{"name":"count","type":"int256","data":1,"indexed":false}]]} | -According to the analysis result, according to the input, output, and logs fields in the abi and transaction receipt of the TableTest.sol contract, the transaction parsing tool can parse the contract method name, parameter type, parameter value, return type, return value, and event log data of the call.。This is what we expect from transaction resolution.! 
+According to the parsing results, given the abi of the TableTest.sol contract and the input, output, and logs fields in the transaction receipt, the transaction parsing tool can recover the called contract method's name, parameter types, parameter values, return types, return values, and event log data. This is exactly what we expect from transaction parsing! ## Where: Where to use the scene? -Is a hero, must be useful!The places where the trade resolution tool enters include the following scenarios. +A hero must have a place to be useful! The transaction parsing tool comes into play in the following scenarios: -- **Console**: The console version 1.0.4 has used the transaction resolution tool to resolve the transaction of the query, the transaction receipt, and the relevant fields when the contract is called.。[specific usage](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/console/console.html#call) +- **Console**: Since version 1.0.4, the console has used the transaction parsing tool to parse queried transactions, transaction receipts, and the relevant fields when contracts are called. [Specific usage](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/console/console.html#call) - **Blockchain Browser, WeBASE Management Platform**: used to decode fields in transactions and transaction receipts for easy display of transaction details. -- **blockchain application based on web3sdk**: The more important significance is to obtain the return value of the contract method, in the past, for the method of sending transactions, it is customary to 
use event to record data, and the return value of the method is idle。You can now use the return value and use the transaction parsing tool to parse the return value to help business development。 In short, where transaction field parsing is required, the transaction parsing tool can be called! \ No newline at end of file diff --git a/3.x/en/docs/articles/4_tools/46_stresstest/caliper_stress_test_practice.md b/3.x/en/docs/articles/4_tools/46_stresstest/caliper_stress_test_practice.md index 8dfbb689c..5e7a1bc5b 100644 --- a/3.x/en/docs/articles/4_tools/46_stresstest/caliper_stress_test_practice.md +++ b/3.x/en/docs/articles/4_tools/46_stresstest/caliper_stress_test_practice.md @@ -6,32 +6,32 @@ Author: Li Chen Xi | FISCO BCOS Core Developer Regarding how to use Caliper to perform pressure testing on FISCO BCOS, the developer has made painstaking efforts to make a comprehensive summary of how to deploy Caliper and how to customize test cases. Welcome to [FISCO BCOS official document](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/tutorial/stress_testing.html)Learn how to use: -This article will analyze Caliper in depth so that you can better use Caliper pressure measurement FISCO BCOS.。 +This article will analyze Caliper in depth so that you can better use Caliper pressure measurement FISCO BCOS。 ## Why Adapt to Caliper? 
-For blockchain developers and users, performance is an important consideration in evaluating a blockchain platform.。FISCO BCOS has always been a continuous performance tracking of FISCO BCOS through the stress test program and stress test script included in the Java SDK.。
+For blockchain developers and users, performance is an important consideration when evaluating a blockchain platform. The FISCO BCOS team continuously tracks FISCO BCOS performance through the stress test program and stress test scripts included in the Java SDK.

Although these stress testing methods are fully capable of meeting the performance evaluation needs of FISCO BCOS core developers, when stress testing requirements extend from core developers to external users who need to test custom contracts and scenarios, the traditional methods fall somewhat short. The areas needing improvement are mainly the following.

### Ways to extend test scenarios need to be streamlined

-At present, the test scenarios directly supported by the pressure test program are representative transfer scenarios and simulations of adding, deleting, modifying and checking data tables. If you want to test custom contracts and scenarios, you need to write your own test program according to the template.。
-The SDK itself already provides a wealth of APIs to help users write such programs, but users still need to handle details such as contract compilation and API conversion, pressure thread pools, etc.。
-Although a user may only have one test requirement point, there are thousands of people, and when the needs of different users are accumulated, the repetitive workload becomes considerable.。
+At present, the test scenarios directly supported by the stress test program are representative transfer scenarios and simulated create/read/update/delete operations on data tables. 
If you want to test custom contracts and scenarios, you need to write your own test program based on the template.
+The SDK already provides a rich set of APIs to help users write such programs, but users still have to handle details such as contract compilation, API conversion, and load-generating thread pools.
+A single user may have only one test requirement, but there are thousands of users, and their accumulated needs add up to considerable repetitive work.

-**Therefore, we hope to have a test framework to help us deal with these trivial details, so that we can focus more on constructing the pressure test scene itself.**。
+**Therefore, we want a test framework that handles these trivial details for us, so that we can focus on constructing the stress test scenario itself.**

-### The definition of performance indicators needs to be unified.
+### The definition of performance indicators needs to be unified

-For the problem of how to calculate the performance indicators, FISCO BCOS has formed a set of standardized calculation methods, but when the user writes the test program, due to the flexibility of the test program itself, the user can define how to calculate the performance indicators.。
-Taking the transaction processing capacity indicator TPS as an example, some users may feel that the local signature time of the transaction should not be attributed to the time taken by the remote node to process the transaction, or they may think that the calculation method of total transaction volume / total time is not accurate enough, and prefer to use the method of calculating the average of multiple samples in the pressure measurement process.。
-It can be seen that the negative effect of flexibility is the lack of a uniform measure of performance on all sides.。
-**Therefore, we hope to have a test system with its own standard performance indicator definition, and the best user can not directly intervene in the 
calculation process of performance indicators.**。
+For calculating performance indicators, FISCO BCOS has formed a set of standardized methods. But a user-written test program is flexible by nature, so users can define their own ways of computing the indicators.
+Take the throughput indicator TPS as an example: some users may feel that the time spent signing a transaction locally should not count toward the time the remote node takes to process it, or may consider total transactions / total time too coarse and prefer averaging multiple samples taken during the test.
+As you can see, the downside of this flexibility is the lack of a uniform measure of performance across parties.
+**Therefore, we want a test system with its own standard definitions of performance indicators, ideally one whose calculations users cannot directly interfere with.**

### The way the results are presented needs to be optimized

-Command-line-based stress testing programs are written more for the low-level developers of FISCO BCOS, so they are used in a more "low-level" way.。When external users want to use the stress test program included in the Java SDK for testing, they may see the following test results:
+Command-line stress test programs are written mainly for FISCO BCOS core developers, so they are used in a rather "low-level" way. When external users run the stress test program included in the Java SDK, they may see test results like the following:

![](../../../../images/articles/caliper_stress_test_practice/IMG_4964.JPG)

@@ -39,25 +39,25 @@ Although it is not difficult to understand the meaning of each statistic, it can

**Therefore, we hope to have a test tool that can output an intuitive test report 
after the test is completed to facilitate user understanding and dissemination**.

In the spirit of "not reinventing the wheel," we looked to the open source community for ready-made tools to address the pain points of FISCO BCOS's testing tools. After thorough research, we found the Hyperledger Caliper project.

-Caliper is a general-purpose blockchain performance testing tool。"Caliper"The original meaning of the word is the ruler, Caliper aims to provide a common baseline for the testing of blockchain platforms.。Caliper is completely open source, so users don't need to worry about the problem of not being able to verify the pressure test results because the test tool is not open source。
+Caliper is a general-purpose blockchain performance testing tool. The word "caliper" originally denotes a measuring instrument, and Caliper aims to provide a common baseline for testing blockchain platforms. Caliper is completely open source, so users need not worry about being unable to verify stress test results because the tool is closed source.

At the same time, the Hyperledger project has set up a Performance and Scale Working Group (PSWG), responsible for the formal, standardized definition of the various performance indicators (TPS, latency, resource utilization, etc.).

-Caliper can easily connect to a variety of blockchain platforms and shield the underlying details, users only need to be responsible for designing the specific test process, you can obtain the visual performance test report of Caliper output.。It can be seen that the Caliper with these characteristics 
can meet the needs of FISCO BCOS for pressure measurement tools.。
+Caliper can easily connect to a variety of blockchain platforms while shielding the underlying details; users only need to design the specific test process, and they then obtain Caliper's visual performance test report. Clearly, a tool with these characteristics can meet FISCO BCOS's needs for a stress testing tool.

-FISCO BCOS's adaptation to the Caliper framework began immediately, and in retrospect, the overall workload is not heavy, but the main time overhead is spent on the development brother because he is not familiar with Node.js (Caliper mainly uses Node.js for development) and learned for a period of time, which also confirms the easy integration of Caliper from the side.。
+Work on adapting FISCO BCOS to the Caliper framework began immediately. In retrospect the overall workload was not heavy; most of the time went into our developer learning Node.js (in which Caliper is mainly developed), which in itself confirms how easy Caliper is to integrate.

## What does Caliper look like?

![](../../../../images/articles/caliper_stress_test_practice/IMG_4965.PNG)

-The architecture diagram of Caliper is shown in the figure above。In Caliper, the Caliper CLI is responsible for providing easy-to-use command-line tools for the internal Caliper Core (interface and core layer)。The interface and core layer include blockchain adaptation interface, resource monitoring module, performance analysis module and report generation module.。
+The architecture of Caliper is shown in the figure above. The Caliper CLI provides easy-to-use command-line tools on top of the internal Caliper Core (interface and core layer), which comprises the blockchain adaptation interface and the resource monitoring, performance analysis, and report generation modules.

### Blockchain Adaptation API

-Contains interfaces for operations such as deploying smart contracts on the back-end blockchain, invoking contracts, and querying status from the ledger, which are primarily provided by the blockchain dominator。Each blockchain adapter uses the corresponding blockchain SDK or RESTful API to implement these 
interfaces. Caliper integrates the blockchain system into the Caliper framework through the interfaces provided by these adapters. Currently, in addition to FISCO BCOS, Caliper also supports blockchain systems such as Fabric and Iroha.。
+Contains interfaces for operations such as deploying smart contracts on the back-end blockchain, invoking contracts, and querying state from the ledger; these are provided mainly by the blockchain adapters. Each adapter implements the interfaces using the corresponding blockchain SDK or RESTful API, and through them Caliper integrates the blockchain system into its framework. Besides FISCO BCOS, Caliper currently also supports blockchain systems such as Fabric and Iroha.

### Resource Monitoring Module

-Provides support for starting / stopping the monitor and obtaining the resource consumption status of the backend blockchain system. The scope of resource monitoring includes CPU, memory, and network IO.。Caliper currently provides two monitors, one is to monitor local / remote docker containers, and the other is to monitor local processes。
+Provides support for starting / stopping the monitor and obtaining the resource consumption status of the back-end blockchain system. 
The scope of resource monitoring covers CPU, memory, and network I/O. Caliper currently provides two monitors: one watches local / remote Docker containers, the other watches local processes.

### Performance Analysis Module

@@ -65,9 +65,9 @@ Provides support for reading predefined performance statistics (including TPS, l

### Report Generation Module

-Mainly responsible for beautifying the performance data obtained from the performance analysis module and generating HTML format test reports.。
-The upper layer of Caliper is the application layer, which is responsible for testing the blockchain system.。Each test needs to set the corresponding test configuration file and define the test parameters of the back-end blockchain network information.。Based on these configurations, Caliper can complete the performance test of the blockchain system.。
-Caliper comes pre-built with a default benchmark engine to help testers quickly understand the framework and implement their own tests, and the next section describes how to use the benchmark engine。Of course, testers can also use the blockchain adaptation API to test their own blockchain systems without using the test framework.。
+Mainly responsible for formatting the performance data from the performance analysis module into HTML test reports.
+The top layer of Caliper is the application layer, responsible for testing the blockchain system. Each test needs a corresponding test configuration file that defines the test parameters and the back-end blockchain network information; based on these configurations, Caliper carries out the performance test.
+Caliper ships with a default benchmark engine to help testers quickly understand the framework and implement their own tests; the next section describes how to use it. Of course, testers can also use the blockchain adaptation API to test their own blockchain systems 
without using the test framework.

## Test process

@@ -79,26 +79,26 @@ The entire testing process is driven by the Master process and consists of the f

- **Preparation phase**: At this stage, the Master process uses the blockchain configuration file to create and initialize internal blockchain objects, deploys smart contracts according to the parameters specified in the configuration, and starts monitoring objects to track the resource consumption of the back-end blockchain system;

-- **Testing Phase**: At this stage, the Master process performs tests according to the configuration file, and will generate tasks according to the defined load and assign them to the client child process。Finally, the performance statistics returned by each client are stored for subsequent analysis.。
+- **Testing phase**: At this stage, the Master process runs tests according to the configuration file, generating tasks from the defined workload and assigning them to the Client child processes. Finally, the performance statistics returned by each Client are stored for later analysis.

- **Reporting phase**: Analyzes the statistics of all Client processes for each test round and automatically generates HTML reports.

-The Client process is mainly responsible for specific interaction with the back-end blockchain system.。In Local mode, the Master process uses the Node.js cluster module to launch multiple local clients (child processes) to perform actual test work。
+The Client process handles the actual interaction with the back-end blockchain system. In Local mode, the Master process uses the Node.js cluster module to launch multiple local Clients (child processes) to perform the actual test work.

-Because Node.js is inherently single-threaded, local Client child process clusters are used to improve Client performance on multi-core machines.。In actual use, the greater the number of Client child processes (if the number of CPU cores can support it), the higher the 
transaction sending and processing power of Caliper.。
+Because Node.js is inherently single-threaded, a cluster of local Client child processes is used to improve performance on multi-core machines. In practice, the more Client child processes there are (as far as the number of CPU cores allows), the higher Caliper's transaction sending and processing capacity.

-In this mode, the total workload is evenly distributed to the sub-processes, each of which is equivalent to a blockchain client, and the sub-processes have temporarily generated contexts that can interact independently with the back-end blockchain system.。The child process context usually contains the client's identification and encryption information, and the context will be automatically released after the test, these details do not need to be concerned by the user.。
+In this mode, the total workload is evenly distributed across the child processes, each of which acts as a blockchain client with a temporarily generated context through which it interacts independently with the back-end blockchain system. The context usually contains the client's identity and cryptographic material and is released automatically after the test; users need not concern themselves with these details.

The Client process starts during the first round of testing and is destroyed after all tests are completed.

-The user-defined test module on the far right of the figure is used to realize the functions of transaction generation and up-chaining.。In this way, testers can implement their own test logic and integrate it with the benchmark engine。The test module mainly implements 3 functions, all of which should return a Promise object:
+The user-defined test module, on the far right of the figure, generates transactions and submits them to the chain. In this way, testers can implement their own test logic and integrate it with the benchmark engine. The 
test module mainly implements three functions, all of which should return a Promise object:

-- **init**will be called by the Client at the beginning of each test run。Required parameters include the current blockchain object, context, and user-defined parameters read from the benchmark configuration file.。Within this function, the blockchain object and context can be saved for later use, and other initialization work can also be implemented here.;
+- **init**: called by the Client at the beginning of each test round. Its parameters include the current blockchain object, the context, and user-defined parameters read from the benchmark configuration file. Inside this function, the blockchain object and context can be saved for later use, and any other initialization work can be done here;

-- **run**Use Caliper's blockchain adaptation API to generate transactions and put them on the chain.。Client will call this function repeatedly based on workload;
+- **run**: uses Caliper's blockchain adaptation API to generate transactions and submit them to the chain. The Client calls this function repeatedly according to the workload;

- **end**: called at the end of each test round; any cleanup work is performed here.

Caliper makes stress testing FISCO BCOS elegant, and in the process of adapting to Caliper, FISCO BCOS also fixed and improved some of Caliper's bugs and performance issues.

-Caliper is still evolving, and features such as a friendly GUI interface, distributed test framework, and Prometheus monitoring system will be added in the future, and FISCO BCOS will continue to iteratively optimize test tools to meet user performance testing needs.。
+Caliper is still evolving: features such as a friendly GUI, a distributed test framework, and Prometheus monitoring will be added in the future, and FISCO BCOS will continue iterating on its test tools to meet user performance 
testing needs.
\ No newline at end of file
diff --git a/3.x/en/docs/articles/4_tools/47_maintenance/access_control_glance.md b/3.x/en/docs/articles/4_tools/47_maintenance/access_control_glance.md
index 1e07b0ed5..ffa209cb7 100644
--- a/3.x/en/docs/articles/4_tools/47_maintenance/access_control_glance.md
+++ b/3.x/en/docs/articles/4_tools/47_maintenance/access_control_glance.md
@@ -4,43 +4,43 @@ Author: Zhang Kaixiang | Chief Architect, FISCO BCOS

**Author's note**

-In the multi-party alliance chain, the division of labor and cooperation between the parties should also be done.**Clear responsibilities, each perform their own duties**。There is no need for chain managers to "be both referees and athletes" to participate in business transactions, and users who only participate in transactions do not have to worry about the development and deployment of smart contracts.。At the same time,"DO separation"(Separation of development and operation and maintenance) is a mature practice in the industry, and overstepping your authority poses risks that could ultimately undermine your reputation and cause loss of assets.。
+In a multi-party consortium chain, the division of labor among the parties should follow the principle of **clear responsibilities, with each performing its own duties**. Chain administrators need not "act as both referee and athlete" by taking part in business transactions, and users who only transact need not concern themselves with developing and deploying smart contracts. Meanwhile, "DO separation" (separating development from operations) is a mature industry practice; overstepping one's authority poses risks that could ultimately damage one's reputation and cause loss of assets.

Clear, easy-to-use, and comprehensive **permission control capabilities** are essential, both for information security and for better consortium governance.

-This article is about FISCO BCOS permission control this 
matter, the author from the FISCO BCOS permission classification, typical alliance chain role design, permission control operation basic steps and so on.。
+On the matter of FISCO BCOS permission control, this article covers FISCO BCOS's permission classification, typical consortium chain role design, and the basic steps of permission control operations.

## Permission classification of FISCO BCOS

-FISCO BCOS in the chain just set up, in order to facilitate rapid development and experience, the default does not do any permission control。However, if this chain is used to provide enterprise-level services, it is important to design and implement a permissions control strategy from the outset.。 Permission classification of FISCO BCOS:
+When a FISCO BCOS chain has just been set up, it applies no permission control by default, to make rapid development and experimentation easy. However, if the chain is to provide enterprise-grade services, it is important to design and implement a permission control strategy from the outset. FISCO BCOS permissions are classified as follows:

![](../../../../images/articles/access_control_glance/IMG_4967.PNG)

### 1. Chain administrator permissions

-即**Permissions to assign permissions**If you define account A as the chain administrator, A can assign permissions to accounts B, C, and D.;You can set up multiple administrators. If you do not set up an administrator, any account can modify various permissions indiscriminately.。
+That is, the **permission to assign permissions**: if account A is defined as the chain administrator, A can assign permissions to accounts B, C, and D. Multiple administrators can be set up; if no administrator is set, any account can modify the various permissions at will.

### 2. 
System administration permissions

Currently there are four:

-- Node management permissions (adding or deleting consensus nodes or observing nodes)
+- Node management permission (adding or removing consensus nodes or observer nodes)
- Permission to modify system parameters
- Permission to modify CNS contract names
-- Can contracts be deployed and table creation permissions
+- Permission to deploy contracts and create tables

-The deployment contract and table creation are "two-in-one" controls, when using the CRUD contract, we recommend that the deployment contract together with the table used in the contract built (written in the contract constructor), otherwise the next read and write table transactions may encounter "missing table" error.。If the business process requires dynamic table creation, the permissions for dynamic table creation should also be assigned to only a few accounts, otherwise various obsolete tables may appear on the chain。
+Contract deployment and table creation are controlled "two-in-one". When using CRUD contracts, we recommend creating the tables a contract uses at deployment time (in the contract constructor); otherwise, subsequent table read/write transactions may hit a "missing table" error. If the business genuinely requires dynamic table creation, that permission should likewise be assigned to only a few accounts, or all manner of stale tables may accumulate on the chain.

### 3. 
User Table Permissions

-At the granularity of the user table, control whether certain accounts can**Overwrite a user table**to prevent the user table from being accidentally modified by others, this permission depends on the FISCO BCOS CRUD contract writing。In addition,**Read User Table**Not controlled by permissions;If you want to control the privacy of data, you need to introduce technologies such as data encryption and zero knowledge.。
+At the granularity of a user table, this controls whether given accounts can **write to the table**, preventing it from being modified unexpectedly by others; it applies to tables written through FISCO BCOS CRUD contracts. Note that **reading a user table** is not permission-controlled; to protect data privacy, you need to introduce techniques such as data encryption or zero-knowledge proofs.

### 4. Contract Interface Permissions

-A contract can include multiple interfaces, because the logic in the contract is closely related to the business, the interface granularity of the permission control is implemented by the developer, the developer can judge the msg.sender or tx.organ, decide whether to allow this call to continue processing.。
+A contract can expose multiple interfaces. Because contract logic is closely tied to the business, interface-level permission control is implemented by the developer, who can inspect msg.sender or tx.origin and decide whether to let the call proceed.

-The FISCO BCOS console provides a series of commands to control permissions, which can be easily used by users.**Grant, Cancel(revoke), View(list)**For various permissions, see the documentation on the console。
+The FISCO BCOS console provides a series of commands for permission control, letting users easily **grant**, **revoke**, and **list** the various permissions; see the console documentation for details.

## Typical Permission-Management Role Design in a Consortium Chain

@@ -48,21 +48,21 @@ In the alliance chain, different roles perform their duties, division of labor a

### 1. Chain Manager

-A committee is usually selected by multiple parties involved in the chain, and one or more agencies can be granted administrator privileges for personnel management and authority allocation。The chain administrator is not responsible for node management, modifying system parameters, deploying contracts and other system management operations.。
+A committee is usually chosen by the parties on the chain, and one or more institutions can be granted administrator permissions for personnel management and permission assignment. The chain administrator is not responsible for node management, modifying system parameters, deploying contracts, or other system administration operations.

### 2. System Administrator

-Designated business operators or system operation and maintenance personnel, assign various permissions as needed, responsible for daily on-chain management, including node addition and deletion, system parameter modification, etc.。The chain administrator assigns permissions according to the governance rules agreed upon by everyone, for example, only the specified accounts are allowed to deploy contracts, and they are given contract deployment permissions so that other accounts cannot deploy contracts at will.。
+Designated business operators or system O&M staff, assigned the various permissions as needed and responsible for day-to-day on-chain administration, including adding and removing nodes and modifying system parameters. The chain administrator assigns permissions according to the governance rules agreed by all parties; for example, if only specified accounts may deploy contracts, those accounts alone are given deployment permission so that others cannot deploy contracts at will.

### 3. Transaction Users

-Users send business transaction requests to the blockchain. 
Business transactions mainly call contracts and read and write user tables, which can be flexibly controlled according to business logic, combined with user table permissions and contract interface permissions.。
+Users send business transaction requests to the blockchain. Business transactions mainly call contracts and read and write user tables, which can be controlled flexibly according to business logic by combining user table permissions with contract interface permissions.

### 4. Regulators

Which system and user-table permissions to assign to the regulator depends on the specific regulatory rules; for example, if the regulator only reads all data, no special permissions are needed.

-Managing accounts with different roles is another complex issue, one that needs to be clearly differentiated, easy to use, and secure;In case the account is lost, you need to support recovery. If the account is leaked, reset it. We will introduce it in another article later.。
+Managing accounts for the different roles is itself a complex issue: roles must be clearly differentiated, easy to use, and secure. A lost account must be recoverable, and a leaked account resettable. 
We will cover these topics in a later article.

## Basic steps for privilege control operations

@@ -86,13 +86,13 @@ The command line to assign administrator privileges is:

grantPermissionManager 0xf1585b8d0e08a0a00fff662e24d67ba95a438256
```

-When this account gets the chain administrator permissions, exit the current console or switch to another terminal window, log in once with the private key of this account, and you can perform subsequent operations as a chain administrator.。
+Once this account has chain administrator permissions, exit the current console (or switch to another terminal window) and log in with this account's private key; you can then perform subsequent operations as the chain administrator.

-**Tips**: Be sure to remember the correspondence between the administrator address and the private key, otherwise once the administrator permissions are set, only the administrator can assign permissions to other accounts, and the settings of other accounts will report no permissions.。
+**Tips**: Be sure to keep track of which private key corresponds to the administrator address. Once administrator permissions are set, only the administrator can assign permissions to other accounts, and permission-setting attempts from other accounts will fail with a no-permission error.

### step2

-Log in to the console with the chain administrator account, and assign node management permissions, system parameter modification permissions, CNS permissions, deployment contract and table 
creation permissions to other system administrator accounts in turn according to the management policy。Then log on to the console with the private key of a system administrator account with the appropriate permissions, such as an account with deployment and table creation permissions, for the next step。 ### step3 @@ -106,13 +106,13 @@ Authorize 0xf1585b8d0e08a0a00fff662e24d67ba95a438256 to operate this account**t_ ### step4 -For an interface in the Solidity contract, you can refer to this code for control. +For an interface in the Solidity contract, you can refer to this code for control. ``` function testFunction() public returns(int256) { - require(msg.sender == tx.origin); / / The effect of this line is to prohibit contract adjustment. - if(msg.sender != address(0x156dff526b422b17c4f576e6c0b243179eaa8407) ) / / Here is an example, the account address is written directly in clear text, which can actually be handled flexibly during development.。 + require(msg.sender == tx.origin); // The effect of this line is to prohibit calls routed through another contract + if(msg.sender != address(0x156dff526b422b17c4f576e6c0b243179eaa8407) ) // Here is an example: the account address is written directly in plain text, and can be handled flexibly during development。 { return -1; } / / If the caller and the preset authorized caller are different, return } ``` @@ -121,7 +121,7 @@ msg.sender is the address of the caller of the current contract, either the user ## Summary and references -This article describes some of the interfaces and capabilities that FISCOBCOS provides at the basic level, and the reasonableness and sophistication of permission control will ultimately depend on the user, and you can continue to explore the scenario governance and security control of different chains in depth to arrive at best practices.。 +This article describes some of the interfaces and capabilities that FISCO BCOS provides at the basic level, and the reasonableness and sophistication of permission
control will ultimately depend on the user, and you can continue to explore the scenario governance and security control of different chains in depth to arrive at best practices。 #### References @@ -149,7 +149,7 @@ This article describes some of the interfaces and capabilities that FISCOBCOS pr If the answer to the above two questions is yes, is it possible to modify the data of the entire network as long as you have the super permissions of one node?? -**@ Light Path**Yes. Before establishing the chain, you must first negotiate which account or accounts will assume the role of the chain administrator. The roles will be assigned as soon as the chain is established. For details, see FISCO BCOS permission control related documents.。 +**@ Light Path**: Yes. Before establishing the chain, you must first negotiate which account or accounts will assume the role of the chain administrator. The roles will be assigned as soon as the chain is established. For details, see the FISCO BCOS permission control documentation。 Thanks to all the community members who took part in this topic discussion!The open source community is better because of you!
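The inline checks in the step4 snippet can also be factored into a reusable Solidity modifier. This is a minimal sketch, not taken from the patch itself, assuming the same hard-coded example address as the article and the Solidity 0.4.x line used with FISCO BCOS 2.0; in practice the authorized address would be set in the constructor or managed by a governance contract:

```solidity
pragma solidity ^0.4.25;

contract AccessControlled {
    // Example authorized account, reusing the address from the article's snippet;
    // in a real deployment it would be configurable, not a hard-coded literal.
    address constant AUTHORIZED = 0x156dff526b422b17c4f576e6c0b243179eaa8407;

    modifier onlyAuthorized() {
        require(msg.sender == tx.origin);  // reject calls routed through another contract
        require(msg.sender == AUTHORIZED); // only the preset account may call
        _;
    }

    function testFunction() public onlyAuthorized returns (int256) {
        return 1;
    }
}
```

Unlike the inline `if`/`return -1` pattern, a failed `require` reverts the transaction, so callers cannot mistake an error code for a legitimate return value.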
diff --git a/3.x/en/docs/articles/4_tools/five_step_to_develop_application.md b/3.x/en/docs/articles/4_tools/five_step_to_develop_application.md index 947ab1031..dcb593649 100644 --- a/3.x/en/docs/articles/4_tools/five_step_to_develop_application.md +++ b/3.x/en/docs/articles/4_tools/five_step_to_develop_application.md @@ -2,10 +2,10 @@ Author : LI Hui-zhong | Senior Architect, FISCO BCOS -This article is for developers of FISCO BCOS, which is a high-purity, ultra-concentrated and minimalist way to share how to quickly build your first DAPP application based on FISCO BCOS.。 +This article is for FISCO BCOS developers, sharing in a high-purity, ultra-concentrated and minimalist way how to quickly build your first DAPP application based on FISCO BCOS。 The community often asks: The FISCO BCOS project has 10W+Line source code, 10W+Word description document, dozens of subprojects, how should I start, how to get started? -Don't panic, the five-step introductory book is ready.!!! +Don't panic, the five-step introductory guide is ready!!! ## Step 1: Build a chain of FISCO BCOS @@ -14,12 +14,12 @@ The installation documentation gives a basket of details, but this article is to (Please create the fisco directory in the home directory first, and then operate in this directory) ```bash -$ curl -LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v2.9.1/build_chain.sh && chmod u+x build_chain.sh +$ curl -LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v2.11.0/build_chain.sh && chmod u+x build_chain.sh ``` ```eval_rst ..
note:: - - If the build _ chain.sh script cannot be downloaded for a long time due to network problems, try 'curl-#LO https://gitee.com/FISCO-BCOS/FISCO-BCOS/raw/master-2.0/tools/build_chain.sh && chmod u+x build_chain.sh` + - If the build_chain.sh script cannot be downloaded for a long time due to network problems, please try `curl -#LO https://gitee.com/FISCO-BCOS/FISCO-BCOS/raw/master-2.0/tools/build_chain.sh && chmod u+x build_chain.sh` ``` Run the 'build _ chain.sh' script to start the four nodes: @@ -37,7 +37,7 @@ Of course, if you still need to read the detailed documentation, please refer to ## Step 2: Install an interactive console -The console is a tool that can interactively access the blockchain and make blockchain data read and write requests.。Without much explanation,**four steps to complete the console installation** +The console is a tool that can interactively access the blockchain and make blockchain data read and write requests。Without much explanation, **four steps to complete the console installation** To download the console: @@ -47,7 +47,7 @@ $ curl -#LO https://github.com/FISCO-BCOS/console/releases/download/v2.9.2/downl
note:: - - If you cannot download the console for a long time due to network problems, try the command 'curl-#LO https://gitee.com/FISCO-BCOS/console/raw/master-2.0/tools/download_console.sh && bash download_console.sh -c 1.2.0` + - If you cannot download the console for a long time due to network problems, please try the command `curl -#LO https://gitee.com/FISCO-BCOS/console/raw/master-2.0/tools/download_console.sh && bash download_console.sh -c 1.2.0` ``` Configure and start the console: @@ -58,7 +58,7 @@ $ cp nodes/127.0.0.1/sdk/* console/conf/ $ cd console && ./start.sh ``` -At this point, you have entered the console interface, you can view the command list through help, get the node connection list through getPeers, and exit the console through the exit or quit command.。 +At this point, you have entered the console interface; you can view the command list through help, get the node connection list through getPeers, and exit the console through the exit or quit command。 At the same time, the console has a built-in HelloWorld contract, you can directly call deploy HelloWorld to deploy, and then call HelloWorld to access。 @@ -66,11 +66,11 @@ At the same time, the console has a built-in HelloWorld contract, you can direct ## Step 3: Write a Solidity contract -Tutorial documentation is still a step-by-step guide, but in fact!Follow business contract writing**Trilogy:****Storage Design-> Interface Design-> Logic implementation,**It is enough to successfully complete the business contract.。 +Tutorial documentation is still a step-by-step guide, but in fact, following the business contract writing **trilogy** of **storage design -> interface design -> logic implementation** is enough to successfully complete the business contract。 If you are still used to reading detailed documents, please refer to the "tutorial": https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/tutorial/sdk_application.html -Take the asset transfer application in the document as an example to
support the asset registration, query, and transfer functions of users on the chain. +Take the asset transfer application in the document as an example to support the asset registration, query, and transfer functions of users on the chain。 - **Storage design: Design storage table structure based on distributed storage** @@ -102,21 +102,21 @@ asset.sol contract: https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest Solidity contracts need to be converted by a compiler into machine (virtual machine) executable binaries, which are combinations of a series of OpCodes that the virtual machine will parse and execute to implement the contract business logic。 -The compiled contract needs to be deployed to the blockchain through a tool (written to the blockchain ledger) before it can be accessed according to the contract interface description file (ABI).。 +The compiled contract needs to be deployed to the blockchain through a tool (written to the blockchain ledger) before it can be accessed according to the contract interface description file (ABI)。 -Well, the old problem is committed, and nagging about the principle, or talk about it.**How to complete contract compilation and deployment with one click without brain:** +Well, old habits die hard: enough nagging about principles. Let's talk about **how to complete contract compilation and deployment with one click:** Refer to the deployment command of the description document [Console]: https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/console/console.html -Place Assert.sol in the console / solidity / contract directory and run deploy Assert.sol on the console to compile and deploy the contract.。 +Place Asset.sol in the console/solidity/contract directory and run deploy Asset.sol on the console to compile and deploy the contract。 ![](../../../images/articles/five_step_to_develop_application/IMG_4949.PNG) ## Step 5: Develop the business -Continue to assume that you are using
Java to develop your business, and of course assume that you are familiar with common tools such as eclipse, gradle, and spring.。 +Continue to assume that you are using Java to develop your business, and of course assume that you are familiar with common tools such as eclipse, gradle, and spring。 -1. Create a Gradle Java project asset-client, via IntelliJ IDEA or Eclipse; +1. Create a Gradle Java project asset-client through IntelliJ IDEA or Eclipse; 2. Compile build.gradle and add the maven library dependency; @@ -124,14 +124,14 @@ Continue to assume that you are using Java to develop your business, and of cour **Dependencies increase**:compile ('org.fisco-bcos:web3sdk:2.0.4'){exclude group: 'org.ethereum'} -3. Copy the relevant configuration files (applicationContext.xml, log.properties, ca.crt, node.crt, node.key) of the console configuration directory (console / conf /) in the second step to asset-main / resource directory of the client project; +3. Copy the relevant configuration files (applicationContext.xml, log.properties, ca.crt, node.crt, node.key) of the console configuration directory (console / conf /) in the second step to the main / resource directory of the asset-client project; -4. Compile the generated java file (console / consolidation / java /*)Copy to asset-main / java directory of the client project; +4. Copy the java files generated by compilation (console / consolidation / java /*) to the main / java directory of the asset-client project; -5. Create a new AssetClient class in the main / java directory. 
Asset.java has implemented the deploy, load, select, register, and transfer interfaces。 Specific code can refer to the sample project: https://github.com/FISCO-BCOS/LargeFiles/raw/master/tools/asset-app.tar.gz [sample project gitee download address] https://gitee.com/FISCO-BCOS/LargeFiles/raw/master/tools/asset-app.tar.gz -Here, you have completed the first FISCO BCOS-based application development!If you have questions about the development process or optimization suggestions, you can enter the technical exchange group through the public number to discuss with us.。 +Here, you have completed your first FISCO BCOS-based application development!If you have questions about the development process or optimization suggestions, you can enter the technical exchange group through the public account to discuss with us。 diff --git a/3.x/en/docs/articles/5_corporation/how_to_submit_pr.md b/3.x/en/docs/articles/5_corporation/how_to_submit_pr.md index 9fa449c9b..56b33e784 100644 --- a/3.x/en/docs/articles/5_corporation/how_to_submit_pr.md +++ b/3.x/en/docs/articles/5_corporation/how_to_submit_pr.md @@ -4,29 +4,29 @@ Author : SHI Xiang | FISCO BCOS Core Developer **Author language** -Since the development of the first version of FISCO BCOS 2.0, the node code of FISCO BCOS has reached more than 110,000 lines, and the code is still rapidly iterating.。 +Since the development of the first version of FISCO BCOS 2.0, the node code of FISCO BCOS has reached more than 110,000 lines, and the code is still rapidly iterating。 Such a large amount of code input, for the development process, is a huge test。How to ensure code quality?How to get in order?? -This article will reveal the process of submitting PR, and see how Cheng Xuyuan (Yuan) collaborates on development during this process.。 +This article will reveal the process of submitting a PR, and show how the programmers collaborate on development during this process。 ## What is PR?? 
Everyone develops their own code locally, and when the code is ready, they submit a "request form" to the FISCO BCOS main warehouse, requesting the main warehouse to pull the developed code for merging, this "request form" is PR (Pull Request, pull code request)。 -In PR, other developers will review the code, and CI (continuous integration tool) will conduct a preliminary check on the specification and correctness of the code.。When the PR meets the requirements, it can be combined.。 +In a PR, other developers will review the code, and CI (continuous integration tool) will conduct a preliminary check on the specification and correctness of the code。When the PR meets the requirements, it can be merged。 -When we open the PR list of FISCO BCOS, we can see many PR records of everyone。Experienced veteran driver (wheeli) in control of the overall situation, there used to want to give birth to monkeys for him Liao teacher (fqliao) in excellence, there is the program yuan little sister (cyjseagull) in the rescue, there are new cat lovers (vita)-dounai) in a small trial bull knife。 +When we open the PR list of FISCO BCOS, we can see many PR records from everyone。There is the experienced veteran (wheeli) holding down the fort, the much-admired teacher Liao (fqliao) striving for excellence, the programmer little sister (cyjseagull) coming to the rescue, and the newcomer cat lover (vita-dounai) trying their hand。 ![](../../../images/articles/how_to_submit_pr/IMG_4968.JPG) -Let's take a look at a PR mentioned by her little sister cyjseagull.。This PR is in Open status, indicating that it is under review。She requested that the developed code be merged into the feature.-2.3.0 Branch。JimmyShi22, wheeli and other younger brothers are reviewing and agreeing, and will be ticked later.。 +Let's take a look at a PR submitted by little sister cyjseagull。This PR is in Open status, indicating that it is under review。She requested that the developed code
be merged into the feature-2.3.0 branch。JimmyShi22, wheeli and other developers are reviewing and approving, and will tick their approval later。 ![](../../../images/articles/how_to_submit_pr/IMG_4969.PNG) -We continue to look down at this PR, cyjseagull's little sister's code has been challenged by the little brothers, giving some Review comments。The consensus module and the synchronization module are independent of each other and have no dependencies。The node tree topology logic TreeTopology.h she developed should be put into a lower-level module.。The little sister happily agreed。 +Reading further down this PR, cyjseagull's code was challenged by the other developers, who left some review comments。The consensus module and the synchronization module are independent of each other and have no dependencies, so the node tree topology logic TreeTopology.h she developed should be put into a lower-level module。She happily agreed。 -Usually, a PR needs to be modified repeatedly based on Review before it can be merged.。 +Usually, a PR needs to be modified repeatedly based on review comments before it can be merged。 ![](../../../images/articles/how_to_submit_pr/IMG_4970.PNG) @@ -41,26 +41,26 @@ This PR also lacks a person's consent, the button is gray, can not be clicked。 After understanding the concept of PR, we have two problems to solve: -- How to integrate code without affecting each other when multiple people are developing at the same time?? -- The combined code requires further manual testing before it can be released, and at what stage does the testing intervene to be able to test more effectively without affecting the development of others?? +- How can multiple people develop at the same time and integrate their code without affecting each other? +- The merged code needs further manual testing before it can be released; at what stage should testing step in so that it can test effectively without affecting the development of others? 
FISCO BCOS uses the classic branching strategy Gitflow to manage the entire development, testing and release process, let's take a look at Gitflow。 ![](../../../images/articles/how_to_submit_pr/IMG_4972.JPG) -In Gitflow, there are five types of branches: master, develop, feature, release, and hot fix.。Different branches have different functions。 +In Gitflow, there are five types of branches: master, develop, feature, release, and hot fix。Different branches have different functions。 The development, testing, and release phases of FISCO BCOS code correspond to the above branches。 ### feature Branch -FISCO BCOS code development, is based on a feature (feature).。 +FISCO BCOS code development is organized around features。 -Multiple people develop at the same time, based on their own features-xxx branch in progress。On the main repository of FISCO BCOS, we can see that there are many of these feature branches, each of which belongs to one (or more) Cheng Xuyuan (Yuan)。 +Multiple people developing at the same time each work on their own feature-xxx branch。On the main repository of FISCO BCOS, we can see that there are many of these feature branches, each of which belongs to one (or more) programmers。 -They usually write code in their local warehouse and submit it to their respective feature in the form of PR.-On xxx branch。The PR submitted by cyjseagull's little sister in the previous section is in this state.。 +They usually write code in their local repository and submit it in the form of PR to their respective feature-xxx branches。The PR submitted by little sister cyjseagull in the previous section is in this state。 -When feature-After the development of the xxx branch, the test is involved and the "feature test" is carried out.。Bugs fixed during testing are also submitted to this feature branch in PR mode.。The purpose of feature testing is to ensure that this feature functions correctly。 +When development on the feature-xxx branch is finished, testing is involved and
"feature testing" is performed。Bugs fixed during testing are also submitted to this feature branch in PR mode。The purpose of feature testing is to ensure that this feature functions correctly。 ![](../../../images/articles/how_to_submit_pr/IMG_4973.PNG) @@ -68,17 +68,17 @@ When feature-After the development of the xxx branch, the test is involved and t The develop branch (dev in FISCO BCOS), which is used to merge multiple feature branches。 -When the Feature Test passes, the feature-The xxx branch can be merged into the dev branch。 +When the feature test passes, the feature-xxx branch can be merged into the dev branch。 -The merger process is also done in a PR manner.。When multiple features are merged into the dev branch at the same time, they must be merged in order.。Joining dev's feature branch first will bring conflict to the later feature branch。The feature branch that is merged later needs to resolve the conflict before merging into dev。 +The merge process is also done via PR。When multiple features are merged into the dev branch at the same time, they must be merged in order。The feature branch that joins dev first may create conflicts for later feature branches, and a feature branch merged later must resolve those conflicts before merging into dev。 ### release Branch -When we have accumulated some developed features, we need to release the code.。At this time, pull the release from the dev branch-xxx branch for "release testing"。 +When we have accumulated some developed features, we need to release the code。At this point, pull the release-xxx branch from the dev branch and perform the release test。 -When the feature branch is merged into the dev branch, only the integrity of the feature can be guaranteed, but the influence between features cannot be guaranteed.。When multiple features are merged into the dev branch, the final overall test needs to be done before the release.。The bug found at this time is directly merged into the release branch in
the form of PR.。 +When the feature branch is merged into the dev branch, only the integrity of that feature can be guaranteed, not the interactions between features。When multiple features are merged into the dev branch, a final overall test needs to be done before the release。Bugs found at this stage are merged directly into the release branch in the form of PR。 -In this way, the release branch does not affect the development of other developers on the feature branch during testing, nor does it affect the integration of features into the dev branch.。 +In this way, the release branch does not affect the development of other developers on the feature branch during testing, nor does it affect the integration of features into the dev branch。 ### master branch @@ -86,22 +86,22 @@ Master is the main branch, providing available code to the outside。 When the Release Test is complete, you can merge the releases branch into the master branch。 -The release branch is also merged into the master branch in the form of PR.。At the same time, the release branch also joins the dev branch。After joining the master branch, tag the new master branch。Publish Complete!The final version is based on the tag, and the code can be downloaded directly from the tag.。 +The release branch is also merged into the master branch in the form of PR。At the same time, the release branch is also merged back into the dev branch。After merging into master, tag the new master branch。The release is complete!The final version is based on the tag, and the code can be downloaded directly from the tag。 ![](../../../images/articles/how_to_submit_pr/IMG_4974.JPG) ### hot fix branch -After the code is released, if there are minor bugs or minor optimizations, pull a hot fix from the master branch.-xxx branch, quickly fix on it。 +After the code is released, if there are minor bugs or minor optimizations, pull a hot fix-xxx branch from the master branch and fix it quickly。 -After the repair and test are completed, 
merge the master and dev branches at the same time。Master plays a small version of tag。If a bug with a wide range of changes occurs, fix it on the feature or release branch according to the current release status of the project.。 +After the fix is developed and tested, merge it into both the master and dev branches at the same time。Tag a new minor version on master。If a bug with a wide range of changes occurs, fix it on the feature or release branch according to the current release status of the project。 ## FISCO BCOS branch strategy Understand the PR and branch strategy, then to the stage of PR。 -- If you just want to modify small bugs and make small optimizations, you can directly PR to the master branch。 -- If you want to develop for a feature, you can communicate with the community about the solution and pull your own feature from the dev.-Xxx branch, you can start rolling!Then use PR to submit the code。In order to avoid major changes in the Review, you need to mention PR as much as possible to show your thinking.。PR does not require the function to be fully available, only the feature branch is available when the final development is completed.。 +- If you just want to fix small bugs or make small optimizations, you can PR directly to the master branch。 +- If you want to develop a feature, communicate the solution with the community, pull your own feature-xxx branch from dev, and start coding!Then submit the code via PR。To avoid large rounds of changes during review, submit PRs as often as possible to show your thinking。A PR does not require the function to be fully available; the feature branch only needs to work when development is finally completed。 The specific steps of PR can also be referred to [How to Contribute to FISCO BCOS](https://mp.weixin.qq.com/s/_w_auH8X4SQQWO3lhfNrbQ) diff --git a/3.x/en/docs/articles/6_application/application_bsn_officially_designated.md 
b/3.x/en/docs/articles/6_application/application_bsn_officially_designated.md index bf0d53624..33bebdf2f 100644 --- a/3.x/en/docs/articles/6_application/application_bsn_officially_designated.md +++ b/3.x/en/docs/articles/6_application/application_bsn_officially_designated.md @@ -1,39 +1,39 @@ # The first batch of "officially designated blockchain applications" of BSN was released, and four applications of FISCO BCOS community were selected -After more than a month of solicitation, BSN recently announced the list of the first batch of "officially designated blockchain applications," four of which are based on the underlying research and development of FISCO BCOS blockchain, covering areas such as certificate storage, anti-counterfeiting traceability, supply chain management, etc.。In line with the purpose of "showing the most suitable blockchain applications to the users who need them most," the BSN Development Alliance Developer Committee reviewed and comprehensively considered the submitted works according to the application access mechanism, and this time prioritized 12 blockchain applications of 9 categories as the first batch of designated applications selected for BSN.。 +After more than a month of solicitation, BSN recently announced the list of the first batch of "officially designated blockchain applications," four of which are based on the underlying research and development of FISCO BCOS blockchain, covering areas such as certificate storage, anti-counterfeiting traceability, supply chain management, etc。In line with the purpose of "showing the most suitable blockchain applications to the users who need them most," the BSN Development Alliance Developer Committee reviewed and comprehensively considered the submitted works according to the application access mechanism, and this time prioritized 12 blockchain applications of 9 categories as the first batch of designated applications selected for BSN。 -The four applications based on the underlying research and 
development of FISCO BCOS blockchain are: blockchain depository service system in the era of chain movement, blockchain application of the whole process traceability cloud platform for the agricultural industry, Huiyun chain, and iOS transparent construction platform.。 +The four applications based on the underlying research and development of FISCO BCOS blockchain are: blockchain depository service system in the era of chain movement, blockchain application of the whole process traceability cloud platform for the agricultural industry, Huiyun chain, and iOS transparent construction platform。 ![](../../../images/articles/application_bsn_officially_designated/IMG_5268.JPG) ## Chain moving era: blockchain certificate of deposit service system -The blockchain certificate depository service system (hereinafter referred to as the "inBC certificate depository system") is based on the FISCO BCOS alliance chain on BSN.。The inBC certificate storage system helps users to expand existing business systems based on API interfaces to achieve the preservation of electronic evidence and call verification.。Can be widely used in electronic contracts, copyright protection, certificates, anti-counterfeiting traceability, public welfare donations and other scenarios and fields.。 +The blockchain certificate depository service system (hereinafter referred to as the "inBC certificate depository system") is based on the FISCO BCOS alliance chain on BSN。The inBC certificate storage system helps users to expand existing business systems based on API interfaces to achieve the preservation of electronic evidence and call verification。Can be widely used in electronic contracts, copyright protection, certificates, anti-counterfeiting traceability, public welfare donations and other scenarios and fields。 ![](../../../images/articles/application_bsn_officially_designated/IMG_5269.PNG) ## Tian Yan Wei Zhen: the whole process of agricultural industry traceability cloud platform blockchain application 
-The platform fully combines technologies such as Internet of Things, blockchain, cloud computing, big data and geographic information, and realizes the "integration, visualization, networking and desktop" of information collection, audit processing, control execution and scientific decision-making in the software environment of graphical interface.。By connecting all aspects of production, processing, warehousing, logistics and consumption, the platform sorts out unified product standards and control processes, standardizes the production and operation behavior of enterprises, improves the quality control ability of enterprises, and effectively guarantees the quality of products.。At the same time, this information will be opened to consumers simultaneously to enhance consumer awareness and build consumer trust.。 +The platform fully combines technologies such as Internet of Things, blockchain, cloud computing, big data and geographic information, and realizes the "integration, visualization, networking and desktop" of information collection, audit processing, control execution and scientific decision-making in the software environment of graphical interface。By connecting all aspects of production, processing, warehousing, logistics and consumption, the platform sorts out unified product standards and control processes, standardizes the production and operation behavior of enterprises, improves the quality control ability of enterprises, and effectively guarantees the quality of products。At the same time, this information will be opened to consumers simultaneously to enhance consumer awareness and build consumer trust。 ![](../../../images/articles/application_bsn_officially_designated/IMG_5270.PNG) -At present, Suzhou Yangcheng Lake hairy crab industry association collective trademark anti-counterfeiting traceability system, Gannan navel orange quality and safety traceability demonstration project and other applications are using the platform.。 +At present, Suzhou 
Yangcheng Lake hairy crab industry association collective trademark anti-counterfeiting traceability system, Gannan navel orange quality and safety traceability demonstration project and other applications are using the platform。 ## Safety Chain Data: Benefit Chain -Huiyun Chain is a financial solution for logistics insurance supply chain provided by Anchain Technology for logistics car-free carrier platforms, insurance, banks and other enterprises.。In the business scenario of freight transaction and transportation logistics management, it refines the documents and information of multi-party collaboration, connects logistics companies, insurance institutions, financial institutions and other ecological chain nodes into the alliance chain through the application of blockchain technology, optimizes resource utilization, improves the overall collaboration efficiency of the logistics industry, and uses trusted data to promote the integration of insurance and financial institutions with the logistics industry.。 +Huiyun Chain is a financial solution for logistics insurance supply chain provided by Anchain Technology for logistics car-free carrier platforms, insurance, banks and other enterprises。In the business scenario of freight transaction and transportation logistics management, it refines the documents and information of multi-party collaboration, connects logistics companies, insurance institutions, financial institutions and other ecological chain nodes into the alliance chain through the application of blockchain technology, optimizes resource utilization, improves the overall collaboration efficiency of the logistics industry, and uses trusted data to promote the integration of insurance and financial institutions with the logistics industry。 ![](../../../images/articles/application_bsn_officially_designated/IMG_5271.PNG) -At present, the cooperative users of Huiyun Chain include Nanjing Rongmaotong Smart Logistics Technology Co., Ltd., Jiangsu Xinning Modern 
Logistics Co., Ltd., Pacific Insurance, China Merchants Bank, etc.。 +At present, the cooperative users of Huiyun Chain include Nanjing Rongmaotong Smart Logistics Technology Co., Ltd., Jiangsu Xinning Modern Logistics Co., Ltd., Pacific Insurance, China Merchants Bank, etc。 ## Jianxin Zhuhe: IOS Transparent Construction Platform -IOS Transparent Construction Platform is a life cycle management system for the construction industry based on the application of blockchain technology developed by Shenzhen Jianxin Zhuhe Technology Co., Ltd.。The platform focuses on building a complete credit ecosystem for engineering projects, using blockchain, big data and other cutting-edge technology to assist Party A in the implementation of the project life cycle management, so that project responsibilities can be traced, project management transparency, so that the process becomes fair and just.。At present, the platform has provided services for the future building projects of China Xiong'an Group and Shenzhen Academy of Construction Sciences.。 +IOS Transparent Construction Platform is a life cycle management system for the construction industry based on the application of blockchain technology developed by Shenzhen Jianxin Zhuhe Technology Co., Ltd。The platform focuses on building a complete credit ecosystem for engineering projects, using blockchain, big data and other cutting-edge technology to assist Party A in the implementation of the project life cycle management, so that project responsibilities can be traced, project management transparency, so that the process becomes fair and just。At present, the platform has provided services for the future building projects of China Xiong'an Group and Shenzhen Academy of Construction Sciences。 ![](../../../images/articles/application_bsn_officially_designated/IMG_5272.PNG) ## What is the "BSN official designated application"? 
-The BSN Development Alliance Developer Committee has divided 14 relatively common blockchain application classifications and "other" classifications based on customer needs and the distribution of industry products, totaling 15 application classifications.。Classification covers supply chain management, supply chain finance, judicial deposit, electronic contracts, anti-counterfeiting traceability and other aspects.。Only 3 representative product solutions are introduced under each blockchain application classification。After the proposal is approved by the Developer Committee, it will be used as a blockchain application officially designated and recommended by BSN and widely recommended in various channels of BSN.。The second batch of designated applications are also in preparation for the launch. If you want to join the BSN official designated application, please contact the community assistant.。 \ No newline at end of file +The BSN Development Alliance Developer Committee has divided 14 relatively common blockchain application classifications and "other" classifications based on customer needs and the distribution of industry products, totaling 15 application classifications。Classification covers supply chain management, supply chain finance, judicial deposit, electronic contracts, anti-counterfeiting traceability and other aspects。Only 3 representative product solutions are introduced under each blockchain application classification。After the proposal is approved by the Developer Committee, it will be used as a blockchain application officially designated and recommended by BSN and widely recommended in various channels of BSN。The second batch of designated applications are also in preparation for the launch. 
If you want to join the BSN official designated application, please contact the community assistant。 \ No newline at end of file diff --git a/3.x/en/docs/articles/6_application/application_construction_industry_digitalization_jianxinzhuhe.md b/3.x/en/docs/articles/6_application/application_construction_industry_digitalization_jianxinzhuhe.md index d707f4e81..5325747a4 100644 --- a/3.x/en/docs/articles/6_application/application_construction_industry_digitalization_jianxinzhuhe.md +++ b/3.x/en/docs/articles/6_application/application_construction_industry_digitalization_jianxinzhuhe.md @@ -1,49 +1,49 @@ -# Create full-scene transparent management, and join hands with FISCO BCOS to help digitize the construction industry. +# Create full-scene transparent management, and join hands with FISCO BCOS to help digitize the construction industry Author : Fang Shaojun | Jianxinzhu and CTO ![](../../../images/articles/application_construction_industry_digitalization_jianxinzhuhe/IMG_5276.PNG) -The public number dialog box replies to [Jianxin Zhuhe] to obtain the PDF of the scheme. 
+Reply [Jianxin Zhuhe] in the official account dialog box to obtain the PDF of the solution -As the era of Industry 4.0 continues to advance, blockchain is gradually moving away from pure technological self-appreciation and financial labels, penetrating into various physical industries.。 +As the era of Industry 4.0 continues to advance, blockchain is gradually moving away from pure technological self-appreciation and financial labels, penetrating into various physical industries。 -In the traditional construction scenario, the industry information is not transparent, poor management coordination, information level is not high "chronic disease," making the construction project to promote the process of management difficulties, accountability difficulties, supervision difficulties, trust difficulties.。"Blockchain+The combination of "build," using blockchain technology, targets these industry pain points and proposes practical solutions at the application level。 +In the traditional construction scenario, "chronic diseases" such as opaque industry information, poor management coordination and a low level of informatization make construction projects, as they advance, difficult to manage, hold accountable, supervise and trust。The combination of "blockchain + construction" uses blockchain technology to target these industry pain points and propose practical solutions at the application level。 -The "IOS Transparent Construction Solution" developed by Shenzhen Jianxin Zhuhe Technology Co., Ltd.
(hereinafter referred to as "Jianxin Zhuhe") based on the FISCO BCOS blockchain platform is a typical representative of the scenario-based application of blockchain technology in the construction industry。 -With the leading advantage of the full-scene management model, the solution was selected as one of the first officially designated applications of the blockchain service network BSN, and won the crown at the 4th China Blockchain Development Competition in 2020 hosted by the China Electronics Standardization Institute of the Ministry of Industry and Information Technology.。 +With the leading advantage of the full-scene management model, the solution was selected as one of the first officially designated applications of the blockchain service network BSN, and won the crown at the 4th China Blockchain Development Competition in 2020 hosted by the China Electronics Standardization Institute of the Ministry of Industry and Information Technology。 ![](../../../images/articles/application_construction_industry_digitalization_jianxinzhuhe/IMG_5277.JPG) ## Information asymmetry and difficult management coordination: construction industry pain points to be broken -In the traditional construction industry, "information asymmetry+Difficult management coordination "is an important factor restricting the development of the industry.。The upstream and downstream chain of the construction industry is very long, and if the information between the participants in each link of the industrial chain is not transparent, the communication and management costs of work collaboration are invisibly improved.。Such as the construction side and the designer for the design of the wrangling, the main body between the payment account period of the node control fuzzy and other issues emerge in an endless stream, slowing down the overall progress of the project.。At the same time, opaque information also makes it difficult for Party A to achieve full-process management penetration, management is not
timely, not in place for the project buried the risk of violations, further pushing up the cost of trust, which in turn increases the difficulty of work collaboration.。 +In the traditional construction industry, "information asymmetry plus difficult management coordination" is an important factor restricting the development of the industry。The upstream and downstream chain of the construction industry is very long, and if information between the participants in each link of the industrial chain is not transparent, the communication and management costs of work collaboration rise invisibly。Issues such as disputes between the construction side and designers over designs, or fuzzy control of payment-term nodes between the parties, emerge in an endless stream, slowing down the overall progress of the project。At the same time, opaque information also makes it difficult for Party A to achieve full-process management penetration; management that is not timely or not in place buries the risk of violations in the project, further pushing up the cost of trust, which in turn increases the difficulty of work collaboration。 -In the traditional industry, the degree of informatization of the construction industry has been at a low level, the construction process involves owners and investors, regulators, agents, consultants, designers, constructors, supervisors, operators and many other units, at the same time, with the current expansion of the scale of investment in various types of projects, project management and capital supervision pressure, these problems restrict the healthy growth of the construction industry ecology。Therefore, one of the ways to break the traditional construction industry is to break the status quo as soon as possible and use information technology to promote the construction industry to achieve project management "full scene, full subject, full process transparent collaboration," while blockchain is a unique means to
accelerate the process of technology.。 +Traditionally, the degree of informatization in the construction industry has been low。The construction process involves owners and investors, regulators, agents, consultants, designers, constructors, supervisors, operators and many other units; at the same time, as the scale of investment in various types of projects expands, the pressure of project management and capital supervision grows, and these problems restrict the healthy growth of the construction industry ecology。Therefore, one way forward for the traditional construction industry is to break the status quo as soon as possible and use information technology to help the industry achieve "full-scene, full-subject, full-process transparent collaboration" in project management, and blockchain is a unique technological means to accelerate this process。 -In this context, Jianxin Zhuhe joined the FISCO BCOS blockchain open source ecology, based on FISCO BCOS blockchain technology to create a full life cycle management system for the construction industry - "IOS Transparent
Construction Solution" to promote the construction industry to achieve "full scene transparent management"。The solution uses blockchain distributed storage / sharing, smart contracts and other technologies to integrate with existing mature IT tools in the construction industry, such as design software, cost systems, smart construction sites, BIM systems, etc., and combines advanced technologies such as big data and artificial intelligence to effectively eliminate industry pain points and build a transparent construction platform with management penetration, openness and transparency, information sharing, and credit evaluation。 -## With the help of blockchain technology, the construction achieves full-scene penetration management. +## With the help of blockchain technology, the construction achieves full-scene penetration management -"IOS Transparent Construction Solution" adopts blockchain technologies such as scenario-based certificate storage, digital signatures, encryption algorithms, and smart contracts, breaking the original single-line task framework and realizing decentralized task collaboration and data flow.。The participants in the original project chain (industrial chain), such as project side, design, construction, survey, general contracting, subcontracting, supervision, team, etc., placed in the same task scenario, no longer has a strict upstream and downstream process relationship, the original easy to form each other constraints of "logistics, capital flow, information flow" can also be organically coordinated.。 +"IOS Transparent Construction Solution" adopts blockchain technologies such as scenario-based certificate storage, digital signatures, encryption algorithms, and smart contracts, breaking the original single-line task framework and realizing decentralized task collaboration and data flow。The participants in the original project chain (industrial chain), such as project side, design, construction, survey, general contracting, subcontracting, 
supervision, team, etc., placed in the same task scenario, no longer has a strict upstream and downstream process relationship, the original easy to form each other constraints of "logistics, capital flow, information flow" can also be organically coordinated。 ![](../../../images/articles/application_construction_industry_digitalization_jianxinzhuhe/IMG_5278.PNG) -At the same time, due to the existence of the smart contract trigger mechanism, the flow of engineering funds has a clear regulatory direction, and every step of the flow of funds can be transparent, which will further ensure that the use of funds is reasonable, compliant and legal.。At the same time, the transparency of engineering funds also makes the quality and safety of the project more secure, forming a virtuous circle of the construction industry.。 +At the same time, due to the existence of the smart contract trigger mechanism, the flow of engineering funds has a clear regulatory direction, and every step of the flow of funds can be transparent, which will further ensure that the use of funds is reasonable, compliant and legal。At the same time, the transparency of engineering funds also makes the quality and safety of the project more secure, forming a virtuous circle of the construction industry。 -"IOS Transparent Construction Solution" can not only realize the management of multiple bid sections for a single project, but also realize the management of multiple projects, which has been widely used in the construction industry to effectively improve the efficiency of project operation.。By the end of May 2020, more than 300 projects were running on the IOS platform, with more than 400 participating units.。100 under construction in a group+In a pipe network project, the IOS system assists the project leader in comprehensively managing the process, progress, quality and safety of on-site construction, and successfully implements the project problem solving rate from 60% to 80% through the reward and 
punishment mechanism.。 +"IOS Transparent Construction Solution" can not only realize the management of multiple bid sections for a single project, but also realize the management of multiple projects, and has been widely used in the construction industry to effectively improve the efficiency of project operation。By the end of May 2020, more than 300 projects were running on the IOS platform, with more than 400 participating units。In a group's 100+ pipe network projects under construction, the IOS system assists the project leader in comprehensively managing the process, progress, quality and safety of on-site construction, and has successfully raised the project problem-solving rate from 60% to 80% through the reward and punishment mechanism。 -Jianxinzhu and CTO Fang Shaojun said that when blockchain technology meets the construction industry, it can not only "overlook" the whole process of the project, such as file sharing, fund supervision, project quantity declaration, quality and safety, performance appraisal, etc., but also effectively light up the "blind spot" in the original project management, project settlement and fund supervision, "pre-task distribution."-Real-time supervision-Post-responsibility traceability "has since formed an organic closed loop of efficient collaboration.。 +Jianxin Zhuhe CTO Fang Shaojun said that when blockchain technology meets the construction industry, it can not only "overlook" the whole process of the project, such as file sharing, fund supervision, project quantity declaration, quality and safety, performance appraisal, etc., but also effectively light up the "blind spots" in the original project management, project settlement and fund supervision; "pre-task distribution - real-time supervision - post-responsibility traceability" has since formed an organic closed loop of efficient collaboration。 ## FISCO BCOS Selected in 10,000-Word Report to Work Together to Reshape the Construction Credit Ecology -In order to effectively create the "IOS Transparent
Construction Solution," CCB has selected FISCO BCOS.。"In order to select the underlying platform of the blockchain, our team made a research report of more than ten thousand words, and finally chose FISCO BCOS, mainly because it has a rare language advantage, ecological components, node deployment and technical support are very rich, easy to use and efficient."。"Fang Shaojun introduced that on the bottom of FISCO BCOS technology, Jianxin Zhuhe effectively completed the systematic construction of five business scenarios: scenario-based deposit, project volume declaration, capital supervision, data transaction confirmation and bill flow.。 +In order to effectively create the "IOS Transparent Construction Solution," Jianxin Zhuhe selected FISCO BCOS。"To select the underlying blockchain platform, our team produced a research report of more than ten thousand words and finally chose FISCO BCOS, mainly because it has a rare language advantage, and its ecological components, node deployment and technical support are very rich, easy to use and efficient," Fang Shaojun explained。On the basis of FISCO BCOS technology, Jianxin Zhuhe effectively completed the systematic construction of five business scenarios: scenario-based deposit, project volume declaration, capital supervision, data transaction confirmation and bill flow。 ![](../../../images/articles/application_construction_industry_digitalization_jianxinzhuhe/IMG_5279.PNG) -The "IOS Transparent Construction Solution" based
on FISCO BCOS blockchain technology can ensure one-to-one correspondence between logistics flow and information flow, transparent supervision of capital flow, traceability of logistics and high matching of capital to account during project implementation, which is not only the project management process reengineering of blockchain in the construction industry, but also the credit ecological remodeling under the new "full scene" management mode。 ------ -**The community has long solicited blockchain applications based on FISCO BCOS. If you have an application that is being developed or has been launched, please click "Read the original" to tell us that your application deserves to be seen by more people.。** \ No newline at end of file +**The community has long solicited blockchain applications based on FISCO BCOS. If you have an application that is being developed or has been launched, please click "Read the original" to tell us that your application deserves to be seen by more people。** \ No newline at end of file diff --git a/3.x/en/docs/articles/6_application/application_manufacturing_changhong.md b/3.x/en/docs/articles/6_application/application_manufacturing_changhong.md index 014d6a228..581b5be2b 100644 --- a/3.x/en/docs/articles/6_application/application_manufacturing_changhong.md +++ b/3.x/en/docs/articles/6_application/application_manufacturing_changhong.md @@ -4,9 +4,9 @@ Author: Enlightenment Laboratory ## Why Choose Production Collaboration and Quality Traceability Scenarios? 
-Our work (hereinafter "we" refers to the Qisi Laboratory team) is to establish a trusted and contractual information flow within the enterprise through blockchain technology on the basis of manufacturing data informationization, and then use multi-level chains to open up the manufacturing enterprise supply chain and product downstream data information management, the establishment of multiple nodes, to achieve the manufacturing enterprise rules contract execution and data trust management, so as to achieve the effect of reducing costs and improving efficiency.。 +Our work (hereinafter "we" refers to the Qisi Laboratory team) is to establish a trusted and contractual information flow within the enterprise through blockchain technology on the basis of manufacturing data informationization, and then use multi-level chains to open up the manufacturing enterprise supply chain and product downstream data information management, the establishment of multiple nodes, to achieve the manufacturing enterprise rules contract execution and data trust management, so as to achieve the effect of reducing costs and improving efficiency。 -Speaking of which, many friends may wonder why we chose this scene to cut in?First of all, Qisi Laboratory (composed of members of the blockchain research team of the Information Security Laboratory of Sichuan Changhong Electric Appliance Co., Ltd.) 
itself focuses on the application research of blockchain technology, and cooperates with mainstream blockchain infrastructure technology providers to provide blockchain-based smart home and industrial Internet solutions.。At the same time, driven by the development of the entire industrial Internet and 5G, the traditional manufacturing capacity is facing a large degree of iterative upgrade, the team has been focused on manufacturing, and strive to find a breakthrough point, the use of blockchain technology to help this wave of manufacturing upgrades.。Secondly, in the traditional manufacturing industry, when the product quality problems, often face the quality of material parts can not be traced, difficult to pursue responsibility for the difficulties。The specific pain points are shown in the following figure: Company A in the figure is the product company and the sales company.;Company B is the principal producer of Company A's products.;Supplier C, Supplier D and Supplier E are designated material suppliers of Company A。 +Speaking of which, many friends may wonder why we chose this scenario as our entry point? First of all, Qisi Laboratory (composed of members of the blockchain research team of the Information Security Laboratory of Sichuan Changhong Electric Appliance Co., Ltd.)
itself focuses on the application research of blockchain technology, and cooperates with mainstream blockchain infrastructure technology providers to provide blockchain-based smart home and industrial Internet solutions。At the same time, driven by the development of the industrial Internet and 5G, traditional manufacturing capacity is facing a large degree of iterative upgrading; the team has long focused on manufacturing and strives to find a breakthrough point where blockchain technology can help this wave of manufacturing upgrades。Secondly, in the traditional manufacturing industry, when product quality problems occur, the quality of material parts often cannot be traced and responsibility is difficult to pursue。The specific pain points are shown in the following figure: Company A in the figure is the product company and the sales company; Company B is the principal producer of Company A's products; Supplier C, Supplier D and Supplier E are designated material suppliers of Company A。 ![](../../../images/articles/application_manufacturing_changhong/IMG_5273.PNG) @@ -14,56 +14,56 @@ After analysis and research, we have sorted out several major needs in the tradi **Requirement 1: Production planning and material matching automation** -Company B makes a production plan based on the difference between Company A's order and the existing finished product inventory, and then analyzes the difference.。Calculate the required materials according to the production quantity of the product。Production materials are removed from the inventory
of the material library and the quantity of materials required is fed back directly to the supplier and prepaid.。 +Company B makes a production plan based on the difference between Company A's order and the existing finished product inventory。The required materials are calculated according to the product's production quantity。Production materials are drawn from the material library inventory, and the quantity of materials still required is fed back directly to the supplier with prepayment。 -**Requirement 2: Material suppliers respond to material identification in a timely manner.** +**Requirement 2: Material suppliers respond to material identification in a timely manner** -Supplier C, Supplier D, and Supplier E receive material demand orders for shipment and warehousing。At the same time, supplier C, supplier D, supplier E material matching unique identification, information on the chain.。 +Supplier C, Supplier D, and Supplier E receive material demand orders for shipment and warehousing。At the same time, Supplier C, Supplier D and Supplier E match each material with a unique identification and put the information on the chain。 **Requirement 3: Financial Credible Liquidation** -Establish a financial automatic clearing mechanism according to the agreement in the links of order issuance, material procurement, logistics transportation, order delivery, etc.。 +Establish a financial automatic clearing mechanism according to the agreement in the links of order issuance, material procurement, logistics transportation, order delivery, etc。 **Requirement 4: After-sales quality traceability** -After-sales based on the unique identification of the material's trusted responsibility and quality traceability, to achieve timely response.。 +After-sales service uses the material's unique identification for trusted accountability and quality traceability, achieving timely response。 ## Some Considerations on Selection of Bottom Layer After understanding the requirements, the next step was technical implementation. In selecting the underlying platform, team architect Kang Hongjuan mainly considered the following points: -- Good practical operability。The blockchain application layer is closely aligned with the actual business, especially at the contract layer, and redeveloping and deploying business contracts will inevitably lead to repeated debugging.。Therefore, the underlying complete
support is very important for the success of the project.。 -- Completed service layer functional components。The interaction between the user layer and the chain layer must go through the "grafting" of the intermediate service layer, and these "grafting modules" are universal, in addition to the basic chain layer functions, the completeness of the common components of the service layer is also crucial.。 +- Good practical operability。The blockchain application layer is closely aligned with the actual business, especially at the contract layer, and redeveloping and deploying business contracts will inevitably lead to repeated debugging。Therefore, the underlying complete support is very important for the success of the project。 +- Complete service layer functional components。The interaction between the user layer and the chain layer must go through the "grafting" of the intermediate service layer, and these "grafting modules" are universal, in addition to the basic chain layer functions, the completeness of the common components of the service layer is also crucial。 - Friendly open source atmosphere。The greatest joy for tech geeks is open source。 After understanding the evaluation, the team finally chose FISCO BCOS as the bottom layer, mainly for two reasons: -- FISCO BCOS is a safe and controllable domestic open source alliance chain, which can meet and fit the needs of domestic enterprises.; +- FISCO BCOS is a safe and controllable domestic open source alliance chain, which can meet and fit the needs of domestic enterprises; - High community activity, rich application scenarios, and timely response to developers' technical support。 -In addition, FISCO BCOS has the advantages of timely version iteration, strong performance, and rich service middleware.。 +In addition, FISCO BCOS has the advantages of timely version iteration, strong performance, and rich service middleware。 ## Smart Contract Solutions -In summary, based on the actual business needs and combined with
the advantages of blockchain technology, we build four key levels, including "basic layer, core layer, service layer and user layer," covering core database, business contract, data analysis, message analysis, user management, business management and other functions, and build an order-based production collaboration vertical solution in the field of industrial Internet.。 +In summary, based on the actual business needs and combined with the advantages of blockchain technology, we build four key levels, including "basic layer, core layer, service layer and user layer," covering core database, business contract, data analysis, message analysis, user management, business management and other functions, and build an order-based production collaboration vertical solution in the field of industrial Internet。 -In this article, we focus on sharing the smart contract solutions, which are shown in the following figure.。This program includes product contracts, settlement contracts, production contracts, stocking contracts and licensing contracts, each of which will be described below.。 +In this article, we focus on sharing the smart contract solutions, which are shown in the following figure。This program includes product contracts, settlement contracts, production contracts, stocking contracts and licensing contracts, each of which will be described below。 ![](../../../images/articles/application_manufacturing_changhong/IMG_5274.JPG) ### Product Contract -This contract is used to complete product registration, product ownership change and product traceability during product production.。 +This contract is used to complete product registration, product ownership change and product traceability during product production。 Steps are as follows: -- Complete the relationship setup for the relevant contract; -- The admin of the contract authorizes certain addresses to become a product producer.; -- The product manufacturer calls updateProductPrice to set the product price.; -- 
Customer needs to getProductPrice to compare the price before placing a production order with the manufacturer, and then place a production order with the manufacturer through the payment contract.; -- After the manufacturer obtains the order through the payment contract, the product is produced.; -- Before the production of the product, the manufacturer will check the adequacy of the raw material reserves of his product through the Material contract, and if sufficient, the production of the product will be registered registerProduct on the chain.; -- The product manufacturer delivers, customer confirms receipt of the goods, confirms the order, and completes the change of funds and product ownership.。 +- Complete relationship setup for related contracts; +- the admin of the contract authorizes certain addresses to become a product producer; +-product manufacturer calls updateProductPrice to set the product price; +-customer needs to compare the price with getProductPrice before placing a production order with the manufacturer, and then place a production order with the manufacturer through the payment contract; +-After the manufacturer obtains the order through the payment contract, the product is produced; +-Before producing a product, the manufacturer will check whether the raw material reserves of his product are sufficient through the Material contract, and if sufficient, the production of the product will be registered registerProduct on the chain; +- The product manufacturer delivers, customer confirms receipt of the goods, confirms the order, completes the change of funds and product ownership。 ``` / / The manufacturer msg.sender updates the price of its productType product @@ -100,29 +100,29 @@ Responsible for user assets, the specific steps are as follows: - Balance and Prepayment Queue; - Product ownership change(Manual confirmation)Automatic settlement in process; -- If funds are insufficient, operations such as stocking, warehousing, etc. 
cannot be performed.; -- Recharge / direct consumption / advance payment, balance inquiry and other related functions.。 +-If funds are insufficient, operations such as stocking, warehousing, etc. cannot be performed; +-Recharge / direct consumption / advance payment, balance inquiry and other related functions。 ### production contract Have inventory production number and owned product queue, and raw material batch queue: -- Obtain orders to be produced, determine whether the inventory of raw materials meets the demand, and if not, place production orders through the settlement contract to advance the stocking contract.; -- By calculating the raw material queue, the external unique identification is generated, and the batch information of the raw material is written into the product for production.(External)Warehousing(Invoke the generation entry of the product contract.)。Maintain inventory production numbers and owned product queues; -- Issue(Product ownership change entry for external call product contracts.)The number of products produced under this contract and the owned product queue of the owned product queue inventory contract are maintained and settled through the settlement contract.。 +-Obtain orders to be produced, determine whether the inventory of raw materials meets the demand, and if not, place production orders through settlement contracts and advance payments to stocking contracts; +-By calculating the raw material queue, the external unique identification is generated, and the batch information of the raw material is written into the product for production(External)Warehousing(Invoke the generation entry of the product contract)。Maintain inventory production numbers and owned product queues; +- outbound(Product ownership change entry for external call product contracts)The number of products produced under this contract and the owned product queue of the owned product queue inventory contract are maintained and settled through the settlement 
contract。 ### stocking contract Responsible for the upstream manufacturers for stocking: -- By calling the production stock number to make stock of manufacturer B, calculate the number of existing parts, external call parts into the warehouse.(Fill in the batch, quantity and other information); -- Issue(External call)The batch information of the part information is written to manufacturer B, and its own data queue is maintained, and the settlement contract is automatically called for settlement after completion.。 +-By calling the production stocking number to stock up on manufacturer B, calculate the number of existing parts, and call the parts from outside(Fill in the batch, quantity and other information); +- outbound(External call)The batch information of the part information is written to manufacturer B, and its own data queue is maintained, and the settlement contract is automatically called for settlement after completion。 Supplement: Because parts are not like products with one thing and one code, multiple parts only correspond to a certain batch, and there is no need for a separate parts contract to maintain the parts。 ### Authorized contract -Through different permission levels, access restrictions are imposed on functions in each contract, and role conventions and call permissions between various contracts.。For example, ordinary users do not have the right to place orders with suppliers, and suppliers cannot provide ordinary users with finished goods.。 +Through different permission levels, access restrictions are imposed on functions in each contract, and role conventions and call permissions between various contracts。For example, ordinary users do not have the right to place orders with suppliers, and suppliers cannot provide ordinary users with finished goods。 ``` / / Does address have role permission @@ -149,4 +149,4 @@ function revokeRole(bytes32 role, address account) public virtual ## Smart Contract Solutions -Throughout the development of smart 
contracts, the main thing is actually the combing of the entire production process.。First of all, we first combed the requirements, and then developed a usable version for the solidity language, on this basis, the relevant calls to establish a simple restful server, complete the testing of each interface, there is a complete set of demonstrable process, and finally the relevant development to complete the reproduction of the contract.。 \ No newline at end of file +Throughout the development of smart contracts, the main thing is actually the combing of the entire production process。First of all, we first combed the requirements, and then developed a usable version for the solidity language, on this basis, the relevant calls to establish a simple restful server, complete the testing of each interface, there is a complete set of demonstrable process, and finally the relevant development to complete the reproduction of the contract。 \ No newline at end of file diff --git a/3.x/en/docs/articles/6_application/application_on-chain_collaboration_multiple_enterprises_jianxinzhuhe.md b/3.x/en/docs/articles/6_application/application_on-chain_collaboration_multiple_enterprises_jianxinzhuhe.md index 7934aeb1b..823071267 100644 --- a/3.x/en/docs/articles/6_application/application_on-chain_collaboration_multiple_enterprises_jianxinzhuhe.md +++ b/3.x/en/docs/articles/6_application/application_on-chain_collaboration_multiple_enterprises_jianxinzhuhe.md @@ -4,21 +4,21 @@ Author: Zhu Lipai | Senior Blockchain Developer (Shenzhen Jianxin Zhuhe Techno ## Why choose a multi-party collaborative governance scenario between enterprises? 
-Many of you may wonder why there is such a seemingly wild idea as a "joint company on the blockchain."?First of all, blockchain technology is very suitable for "multi-party collaborative governance" scenarios.。The open governance capacity of a polycentric autonomous organization is reflected in the fact that anyone with the appropriate credentials can openly exercise governance。Inspired by this, I would like to achieve a similar function based on the alliance chain: all parties to the alliance have pre-agreed governance capabilities to ensure that the governance process is fair, just, open, traceable and non-repudiation through blockchain technology.。Secondly, there is a real scenario demand for "multi-party collaborative governance" between the company's actual business dealings.。
+Many of you may wonder why there is such a seemingly wild idea as a "joint company on the blockchain." First, blockchain technology is well suited to "multi-party collaborative governance" scenarios. The open governance capacity of a polycentric autonomous organization lies in the fact that anyone with the appropriate credentials can openly exercise governance. Inspired by this, I wanted to achieve a similar function on a consortium chain: every party in the consortium holds pre-agreed governance capabilities, and blockchain technology ensures that the governance process is fair, just, open, traceable and non-repudiable. Second, there is real demand for "multi-party collaborative governance" in companies' actual business dealings.

-For example, the general business transactions between companies often involve projects, funds two categories, if more than one company needs to jointly manage a project, and there are financial transactions, you can consider using blockchain technology to achieve "on-chain collaboration and governance."。You can maintain a consistent global view of the progress of the project, and at the same time, any signature confirmation process is triggered by the corresponding private key signature, making it easier to achieve responsibility to the person.。
+For example, business transactions between companies generally fall into two categories, projects and funds. If several companies need to jointly manage a project that involves financial transactions, blockchain technology can be used to achieve "on-chain collaboration and governance." A consistent global view of the project's progress can be maintained, and every confirmation step is triggered by the corresponding private key's signature, making it easier to trace responsibility to an individual.

## On-chain Collaboration and Governance Realization Ideas

-Each company exists as a separate "company contract" on the blockchain, and as long as the "company contract interface" is implemented, the company's internal business logic and internal organizational relationships can be customized.。When a company wants to join a joint venture, it first applies and deploys its own "corporate contract.";A proposal is then initiated by a member already in the consortium with the newly deployed "Company Contract Address" as a parameter;Upon approval by a majority of the members of the syndicate, you can formally become a member of the syndicate.。The projects in which each company participates will exist separately in the form of a "joint project contract" in which any member company of the joint company can initiate joint projects.。
+Each company exists on the blockchain as a separate "company contract"; as long as the "company contract interface" is implemented, the company's internal business logic and organizational relationships can be customized. When a company wants to join the joint venture, it first applies and deploys its own "company contract"; a proposal is then initiated by an existing consortium member with the newly deployed company contract address as a parameter; upon approval by a majority of the consortium members, it formally becomes a member. The projects each company participates in exist separately as "joint project contracts," and any member company of the joint company can initiate a joint project.

-First, develop a "joint project contract" based on the "project contract interface" and deploy it to the blockchain.;and initiate the proposal with the "address of the joint project contract" as a parameter in the proposal;Each of the joint companies can view the contract based on the contract address in the proposal and decide whether to vote for the proposal;When approved by a majority of the member companies of the joint company, it becomes a "joint project contract."。
+First, develop a "joint project contract" based on the "project contract interface" and deploy it to the blockchain; then initiate a proposal with the joint project contract's address as a parameter; each member company can inspect the contract at the address given in the proposal and decide whether to vote for it; once approved by a majority of the member companies, it becomes a recognized "joint project contract."

## Design Ideas and Key Logic of Blockchain Smart Contract

### Contract Design Ideas

-In the contract design, refer to the FISCO BCOS open source community "[Solidity programming strategy for smart contract writing](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485719&idx=1&sn=2466598f695c56d2865388b7db423196&chksm=9f2efb0ba859721d757cd12f9ff19b3f2af21c00781f31970b1fa156de73d72ca49b12fc0200&scene=21#wechat_redirect)The idea in the article, using the "data, management, control" hierarchical design method.。This smart contract solution mainly has three modules: joint governance module, company module, project module, contract interaction mainly occurs between the contracts of these three modules.。
+The contract design follows the ideas in the FISCO BCOS open source community article "[Solidity programming strategy for smart contract writing](http://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247485719&idx=1&sn=2466598f695c56d2865388b7db423196&chksm=9f2efb0ba859721d757cd12f9ff19b3f2af21c00781f31970b1fa156de73d72ca49b12fc0200&scene=21#wechat_redirect)," adopting a layered "data, management, control" design. The solution has three main modules, the joint governance module, the company module and the project module, and contract interaction mainly occurs between the contracts of these three modules.

![](../../../images/articles/application_on-chain_collaboration_multiple_enterprises_jianxinzhuhe/IMG_5275.PNG)

@@ -26,19 +26,19 @@ In the contract design, refer to the FISCO BCOS open source community "[Solidity

- **Company Module**: Single company management system, single company internal capital flow system;
- **Project Module**: Joint project management of multiple companies, internal project management of a single company。

-Among them, the "alliance management module" centralizes the management of "company module" contracts and "project module" contracts, and the management mechanism is mainly "voting."-Register ";Company contracts and project contracts customize business logic based on the implementation of the corresponding interface contract method and are chained in the form of separate contracts.。
+Among them, the "alliance management module" centrally manages the "company module" and "project module" contracts, and its management mechanism is mainly "vote-and-register"; company contracts and project contracts customize their business logic by implementing the corresponding interface contract methods, and go on chain as separate contracts.

In terms of contract function, there are mainly the following points:

-- Vote registration function, only if the number of votes exceeds a certain rate, the new company can become a member of the joint company, the new project can be recognized as a joint project.;
- Project management features, such as project administrator settings;
+- Vote registration: only when the number of votes exceeds a certain threshold can a new company become a member of the joint company or a new project be recognized as a joint project;
+- Project management features such as project administrator settings;
- Role-based permission control, custom roles and permissions;
-- Capital flows, including inter-firm capital flows (involving cross-contract calls) and intra-firm capital flows;
+- Fund transfers, including inter-company transfers (involving cross-contract calls) and intra-company transfers;
- Fund issuance function, based on voting to decide whether to issue funds。

### Contract code implementation of key logic

-Here is the contract code implementation of some key logic in the project, taking the ownership transfer of the "storage smart contract" as an example.。This project adopts the idea of "storage, logic, control" hierarchical design, the deployer must transfer the contract ownership relationship to the controller smart contract after deploying the "storage smart contract," the storage contract method is as follows.
+Here is the contract code for some of the project's key logic, taking the ownership transfer of the "storage smart contract" as an example. The project adopts a layered "storage, logic, control" design: after deploying the "storage smart contract," the deployer must transfer contract ownership to the controller smart contract. The storage contract method is as follows:

```
function transferOwnership(address newOwner) public onlyOwner {
diff --git a/3.x/en/docs/articles/6_application/application_online_lending_platforms.md b/3.x/en/docs/articles/6_application/application_online_lending_platforms.md
index 93056ce85..41c0235a6 100644
--- a/3.x/en/docs/articles/6_application/application_online_lending_platforms.md
+++ b/3.x/en/docs/articles/6_application/application_online_lending_platforms.md
@@ -1,8 +1,8 @@
-# FISCO BCOS helps Shenzhen crack the problem of benign exit of online loans.
+# FISCO BCOS helps Shenzhen crack the problem of benign exit of online loans

Special reporter Shen Yong

-The underlying open source platform of FISCO BCOS blockchain adds another blockbuster application in the financial field!Landing online lending institution voting system, effectively solve the problem of difficult decision-making in the process of benign exit of online lending platform.。The system took the lead in Shenzhen in the first half of this year and is expected to be extended to the whole country.。
+The FISCO BCOS blockchain open-source underlying platform has landed another blockbuster application in the financial field! The newly launched online lending institution voting system effectively solves the problem of difficult decision-making during the benign exit of online lending platforms. The system was first rolled out in Shenzhen in the first half of this year and is expected to be extended nationwide.

The following is a report on the successful application of Shenzhen, the content of the self-reading special news。Original title: 
"Shenzhen successfully applied blockchain technology to crack the benign exit problem of online lending"

@@ -12,28 +12,28 @@ Although blockchain technology is booming, Shenzhen has successfully applied it

## Voting system gives thousands of lenders "voting rights"

-How do ordinary lenders express their opinions and demands in the face of the withdrawal of online lenders?In the first half of this year, under the guidance and promotion of local financial regulatory authorities at the Shenzhen and district levels, Shenzhen took the lead in launching a unified voting system for the benign exit of P2P lending institutions (hereinafter referred to as the "voting system").。
+How do ordinary lenders express their opinions and demands when an online lending institution withdraws? In the first half of this year, under the guidance and promotion of local financial regulators at the Shenzhen municipal and district levels, Shenzhen took the lead in launching a unified voting system for the benign exit of P2P lending institutions (hereinafter the "voting system").

-The system is a supporting technical support platform for the "Guidelines for the Benign Exit of Online Lending Information Intermediaries in Shenzhen" (hereinafter referred to as the "Benign Exit Guidelines"), which aims to effectively solve the problem of difficult decision-making by the public in the process of benign exit of the platform, and has started a "cannon" for the benign exit of online lending.。
+The system is the supporting technical platform for the "Guidelines for the Benign Exit of Online Lending Information Intermediaries in Shenzhen" (hereinafter the "Benign Exit Guidelines"). It aims to solve the difficulty of collective decision-making during a platform's benign exit, firing the first shot for the benign exit of online lending.

-"The primary problem facing the benign exit of P2P lending institutions is the difficulty of decision-making by stakeholders.。"Shenzhen Internet Finance Association Secretary-General Zeng Guang introduced that the ownership of assets managed by online lending institutions belongs to the lender, but the number of lenders is huge, distributed throughout the country, the structure is complex and the demands are different, the platform and the lender communication costs are high, the lender does not trust the platform.。
+"The primary problem facing the benign exit of P2P lending institutions is the difficulty of stakeholder decision-making," explained Zeng Guang, Secretary-General of the Shenzhen Internet Finance Association. The assets managed by online lending institutions belong to the lenders, but the lenders are numerous and distributed across the country, with complex structures and differing demands; communication between platform and lenders is costly, and lenders do not trust the platform.

-In order to provide unified rules for stakeholder decision-making, the Shenzhen Internet Finance Association took the lead in launching the "Benign Exit Guidelines," setting up an innovative lender supervision committee, "two-thirds plus more than half" voting rules and a series of benign exit rules requirements.。
+To provide unified rules for stakeholder decision-making, the Shenzhen Internet Finance Association took the lead in issuing the "Benign Exit Guidelines," establishing an innovative lender supervision committee, the "two-thirds plus more than half" voting rule, and a series of benign exit requirements.

-On the basis of the Benign Exit Guidelines, the "Voting System for Online Lending Institutions" has been re-introduced to make it possible to vote on matters of significant interest to lenders.。"In the actual operation of previous lender voting, voting was often conducted through the voting software of WeChat groups, QQ groups, social networking sites, and on-site meetings of a few representatives, whose legitimacy and impartiality were questioned, and the implementation process was often repeated.。In this case, the new voting system provides voting services for lenders and is committed to achieving a fair, open and transparent voting environment.。The Lender Oversight Committee is responsible for auditing and supervising related matters, and online lending institutions authorize associations to provide information disclosure and voting services.。
+On the basis of the Benign Exit Guidelines, the "Voting System for Online Lending Institutions" was introduced to make voting on matters of significant interest to lenders possible. In past lender votes, voting was often conducted through the polling tools of WeChat groups, QQ groups and social networking sites, or at on-site meetings of a few representatives, whose legitimacy and impartiality were questioned, and the process often had to be repeated. Against this background, the new voting system provides voting services for lenders and is committed to a fair, open and transparent voting environment. The lender oversight committee audits and supervises related matters, and online lending institutions authorize the association to provide information disclosure and voting services.

-The reporter learned that there are already 20 online lending institutions in Shenzhen using the voting system.。Zeng Guang introduced, "Most of the online lending institutions using the voting system have actively promoted the first voting on major matters in accordance with the requirements of the Exit Guidelines, confirmed the exit process, elected the members of the Supervisory Committee and authorized the liquidation group and the Supervisory Committee.。In addition, some platforms have completed the second round of voting through the voting system, and the benign exit work of each platform is progressing in an orderly manner.。
+The reporter learned that 20 online lending institutions in Shenzhen are already using the voting system. Zeng Guang noted, "Most of the institutions using the system have actively carried out a first vote on major matters as required by the Exit Guidelines, confirming the exit process, electing the members of the supervisory committee, and authorizing the liquidation group and the supervisory committee." In addition, some platforms have completed a second round of voting through the system, and each platform's benign exit is progressing in an orderly manner.

## **Block chain technology good steel used on the blade**

-The application of blockchain technology is the "online lending institution voting system" to crack the difficult decision-making of stakeholders.。It is reported that the voting system is based on the FISCO BCOS blockchain open source underlying platform, while the introduction of artificial intelligence, biometrics, digital authentication and other leading domestic technologies, with completely independent intellectual property rights.。
+Blockchain technology is what enables the "online lending institution voting system" to crack the stakeholder decision-making problem. The voting system is built on the FISCO BCOS blockchain open-source underlying platform and also incorporates leading domestic technologies such as artificial intelligence, biometrics and digital authentication, with fully independent intellectual property rights.

-FISCO BCOS blockchain open source underlying platform led by the financial blockchain cooperation alliance (Shenzhen) (referred to as the "gold chain alliance") open source working group launched。FISCO BCOS code has been fully open source in 2017 and continues to be iteratively updated, and currently its open source community has tens of thousands of community members, more than 500 companies have participated, successfully landing more than 60 production environment application cases, and has developed into the largest and most active domestic alliance chain open source ecosystem.。
+The FISCO BCOS blockchain open-source underlying platform was launched by the open source working group of the Financial Blockchain Cooperation Alliance (Shenzhen) (the "Gold Chain Alliance"). Its code was fully open-sourced in 2017 and continues to be iterated; the open source community now has tens of thousands of members and more than 500 participating companies, with more than 60 production-environment applications landed, and has grown into the largest and most active domestic consortium chain open source ecosystem.

-The voting system of online lending institutions has successfully used the technical achievements of the platform in its design, and integrates a number of other cutting-edge technologies to create a multifunctional voting decision-making system that is pure online, intelligent, with the participation of the government and financial institutions, and has the effect of judicial arbitration.。
+In its design, the voting system successfully draws on the platform's technical achievements and integrates a number of other cutting-edge technologies, creating a multifunctional voting and decision-making system that is fully online and intelligent, involves government and financial institutions, and carries the force of judicial arbitration.

-First, the voting system introduces intelligent voice interactive robots and online face recognition technology to build a voter identification chain and record voter identification。On the one hand, by using the robot to interact with the lender voice, confirm the debt information and keep the voice record.;On the other hand, the use of face recognition and digital certificate authentication to confirm the identity of the lender, and the voter's identity record on the blockchain, not only to prevent the voting process there are others to fake the situation, but also to avoid the voter's all sensitive information on the chain, while protecting user privacy, to ensure the credibility of the voting results.。
+First, the voting system introduces intelligent voice-interactive robots and online face recognition to build a voter-identity chain and record voter identities. On one hand, a robot interacts with the lender by voice to confirm the debt information and keep a voice record; on the other hand, face recognition and digital certificate authentication confirm the lender's identity, and the voter's identity record is put on the blockchain. This not only prevents impersonation during voting but also avoids putting all of a voter's sensitive information on chain, protecting user privacy while ensuring the credibility of the voting results.

-Secondly, the voting system makes full use of the technical advantages of blockchain technology to prevent tampering and decentralization to build a judicial depository arbitration chain.。By summarizing all voting records and results on the chain, the results of each vote are recorded accurately and cannot be tampered with, also while protecting user privacy.;At the same time, each deposit has an authoritative arbitration institution to participate in the chain of judicial deposit, from a legal point of view to ensure the preservation of the evidence of the voting results, and ultimately to ensure the judicial validity and authenticity of the whole process of voting data.。
+Second, the voting system makes full use of blockchain's tamper resistance and decentralization to build a judicial evidence-storage and arbitration chain. All voting records and results are aggregated on chain, so each vote is recorded accurately and cannot be tampered with, again while protecting user privacy. At the same time, an authoritative arbitration institution participates in each judicial evidence deposit on the chain, legally securing the evidence of the voting results and ultimately ensuring the judicial validity and authenticity of the entire voting process.

-Finally, the voting system has also set up an independent voting verification service, through which anyone can verify whether the data of the voting system and the data stored on the blockchain are consistent, so as to supervise the entire voting process from everyone's point of view and enhance the credibility of the voting results.。
\ No newline at end of file
+Finally, the voting system also provides an independent vote verification service through which anyone can verify whether the voting system's data matches the data stored on the blockchain, allowing everyone to supervise the entire voting process and enhancing the credibility of the results.
\ No newline at end of file
diff --git a/3.x/en/docs/articles/6_application/application_people_copyright.md b/3.x/en/docs/articles/6_application/application_people_copyright.md
index 6e341902a..f76d02576 100644
--- a/3.x/en/docs/articles/6_application/application_people_copyright.md
+++ b/3.x/en/docs/articles/6_application/application_people_copyright.md
@@ -4,37 +4,37 @@

In July 2019, the one-stop copyright management platform "People's Copyright" based on the underlying technology of FISCO BCOS blockchain was officially launched。By the first quarter of 2020, the platform has achieved a series of results in copyright original preservation, infringement monitoring, and judicial rights protection:

-**"People's Copyright" has filed copyright certificates for more than 2 million news articles.;The number of automatically identifiable news items exceeds 100 million, which is equivalent to the total number of news items in three years.;The average daily monitoring data of the
whole network is nearly 3 million, and the director measured more than 1 billion in the whole year.。** +**"People's Copyright" has filed copyright certificates for more than 2 million news articles; the number of automatically identifiable news items exceeds 100 million, equivalent to the total volume of news over three years; nearly 3 million items of data are monitored across the whole network on an average day, and the total monitored over the whole year exceeded 1 billion.** -At present, the "people's copyright" has been filed through the domestic blockchain information service of the State Internet Information Office.。In the future, the platform will continue to be deeply integrated with intellectual property protection through innovative technologies such as blockchain and artificial intelligence to combat infringement and help build a new ecology of copyright protection.。 +At present, "People's Copyright" has completed its domestic blockchain information service filing with the State Internet Information Office. In the future, the platform will continue to integrate deeply with intellectual property protection through innovative technologies such as blockchain and artificial intelligence, combating infringement and helping build a new ecosystem of copyright protection. ### Block Chain Helps Copyright Protection and Certificates 2 Million News Reports -In recent years, nearly 60% of original media authors have encountered content infringement, and the frequency of infringement of original manuscripts is as high as 3.64 times per work.。The virtual nature of the network makes it difficult to detect and identify the infringement in time.;The immediacy and fission of the network make the development of infringement more rapid, and it also brings difficulties for copyright traceability, evidence collection and rights protection.。 +In recent years, nearly 60% of original media authors have encountered content infringement, and the frequency of infringement of
original manuscripts is as high as 3.64 times per work. The virtual nature of the network makes it difficult to detect and identify infringement in time; the immediacy and viral spread of the network make infringement develop even faster, and also create difficulties for copyright tracing, evidence collection and rights protection. -"People's Copyright" takes advantage of the integrity, traceability and immutability of blockchain technology, and comprehensively applies WeIdentity, a distributed identity solution based on blockchain, to realize the whole process management of copyright protection, including information traceability on the digital work chain and data monitoring on the whole network.。 +"People's Copyright" takes advantage of the integrity, traceability and immutability of blockchain technology and comprehensively applies WeIdentity, a blockchain-based distributed identity solution, to realize whole-process copyright protection management, including on-chain traceability of digital works and data monitoring across the whole network. -Since its launch, "People's Copyright" has provided copyright preservation protection for 2,030,890 original news reports.。The number of real-time comparative monitoring articles reached an average of 2,871,903 articles per day, and the annual director measured more than 1 billion articles.。The total number of identification and collection media reached 9,988,781, covering nearly all electronic newspapers, online media and mainstream clients.。Real-time monitoring of massive amounts of information provides a prerequisite for quickly identifying infringing reprints and obtaining evidence on the chain.。 +Since its launch, "People's Copyright" has provided copyright preservation protection for 2,030,890 original news reports. Real-time comparative monitoring covered an average of 2,871,903 articles per day, with more than 1 billion articles monitored over the year. The
total number of identified and collected media sources reached 9,988,781, covering nearly all electronic newspapers, online media and mainstream clients. Real-time monitoring of massive amounts of information is the prerequisite for quickly identifying infringing reprints and obtaining evidence on the chain. ### First gradient judicial service to complete infringement litigation at low cost -"People's Copyright" Initiates "Gradient Judicial Comprehensive Service"。Facing the demand for digital copyright protection, "People's Copyright" provides a set of innovative judicial services, forming an authoritative judicial gradient service system including Internet courts, notarization, judicial authentication, lawyers and arbitration, laying a solid foundation for the judicial protection of copyright protection in the era of digital rights.。 +"People's Copyright" pioneered a comprehensive "gradient judicial service". Facing the demand for digital copyright protection, "People's Copyright" provides a set of innovative judicial services, forming an authoritative gradient judicial service system covering Internet courts, notarization, judicial authentication, lawyers and arbitration, and laying a solid foundation for the judicial protection of copyright in the era of digital rights. -In addition, compared with the traditional rights protection model, "People's Copyright" adopts new models such as
electronic evidence management, online mediation, Internet court litigation, etc., replacing the traditional manual model with intelligent electronic mode to reduce costs, and using 1 / 2 of the price of traditional copyright services can complete the whole process of rights confirmation and rights protection, helping users to protect their rights with the least cost and the highest efficiency, and improving the judicial efficiency of copyright rights protection.。 +In addition, compared with the traditional rights protection model, "People's Copyright" adopts new models such as electronic evidence management, online mediation and Internet court litigation, replacing the traditional manual model with an intelligent electronic one to reduce costs: the whole process of rights confirmation and rights protection can be completed at half the price of traditional copyright services, helping users protect their rights at the lowest cost and highest efficiency and improving the judicial efficiency of copyright rights protection. -At the beginning of this year, "People's Copyright" also officially connected to the Beijing Internet Court's "Balance Chain" electronic evidence platform, becoming the first media copyright platform to achieve the full chain of copyright deposit, infringement monitoring, online copyright trading and judicial rights protection.。 +At the beginning of this year, "People's Copyright" was also officially connected to the Beijing Internet Court's "Balance Chain" electronic evidence platform, becoming the first media copyright platform to achieve a full chain of copyright deposit, infringement monitoring, online copyright trading and judicial rights protection. ### Media annual turnover can reach millions to build the largest online copyright trading platform -"People's Copyright" Platform Brings Copyright Trading Links Online。The copyright trading center uses blockchain technology to record the whole process of content production and dissemination, which can further help the media realize the redistribution of value in the information distribution chain.。It is reported that the introduction of online trading links, media units copyright trading average annual revenue is estimated to reach 7 million yuan, market benefits are considerable.。 +The "People's Copyright" platform brings copyright trading online. The copyright trading center uses blockchain technology to record the whole process of content production and dissemination, which can further help the media realize the redistribution of value in the
information distribution chain. It is reported that with the introduction of online trading, a media unit's average annual copyright trading revenue is estimated to reach 7 million yuan, a considerable market benefit. -At present, the "People's Copyright" platform has completed access to five primary nodes, including Beijing Gehua Cable Television Network Co., Ltd., Shandong Digital Publishing Media Co., Ltd., and Weizhong Bank.。In October 2019, "Beijing Cloud-Rong Media" became the first provincial-level financial media platform to access "People's Copyright."。At the same time, the "People's Copyright Alliance" launched by the People's Online / People's Network Public Opinion Data Center has more than 100 party media in the process of docking.。 +At present, the "People's Copyright" platform has completed access for five primary nodes, including Beijing Gehua Cable Television Network Co., Ltd., Shandong Digital Publishing Media Co., Ltd. and Weizhong Bank. In October 2019, "Beijing Cloud-Rong Media" became the first provincial-level converged media platform to access "People's Copyright". At the same time, more than 100 party media are in the process of joining the "People's Copyright Alliance" launched by the People's Online / People's Network Public Opinion Data Center. -"People's Copyright" is about to realize video copyright protection using innovative technologies such as blockchain, big data and AI recognition.。At present, "People's Copyright" is cooperating with the Western National Copyright Trading Center, the Northern National Copyright Trading Center, China Radio and Television Media Co.。 +"People's Copyright" is about to realize video copyright protection using innovative technologies such as blockchain, big data and AI recognition. At present, "People's Copyright" is cooperating with the Western National Copyright Trading Center, the Northern National Copyright Trading Center and China Radio and Television Media Co. Article Source: People's Network --
[FISCO BCOS code repository](https://github.com/FISCO-BCOS/FISCO-BCOS/tree/master-2.0) +- [FISCO BCOS Code Repository](https://github.com/FISCO-BCOS/FISCO-BCOS/tree/master-2.0) -- [WeIdentity Code Repository](https://github.com/WeBankFinTech/WeIdentity) +- [WeIdentity Code Repository](https://github.com/WeBankFinTech/WeIdentity) diff --git a/3.x/en/docs/articles/6_application/application_westlake_longjingtea_yifei.md b/3.x/en/docs/articles/6_application/application_westlake_longjingtea_yifei.md index 13dc9062f..a31f77361 100644 --- a/3.x/en/docs/articles/6_application/application_westlake_longjingtea_yifei.md +++ b/3.x/en/docs/articles/6_application/application_westlake_longjingtea_yifei.md @@ -6,16 +6,16 @@ Author : Yi Fei | Tian Yan Wei Zhen CTO ##
***01 Project Background*** #### 1.1 Business requirements -West Lake Longjing is the first of the top ten famous teas and one of Hangzhou's unique business cards.。Located on Longjing Road at the foot of Shifeng Mountain in Longjing Village, Hangzhou West Lake Longjing Tea Leaf Co.。The company created the "Gong" brand West Lake Longjing, as the country's gift tea, enjoys a high reputation at home and abroad.。It is hoped that through digital transformation, the use of cutting-edge technology will break the traditional tea industry chain pattern and gradually move towards digital and intelligent development.。 +West Lake Longjing ranks first among the top ten famous teas and is one of Hangzhou's signature calling cards. Hangzhou West Lake Longjing Tea Leaf Co. is located on Longjing Road at the foot of Shifeng Mountain in Longjing Village. The company created the "Gong" brand of West Lake Longjing, which, as a national gift tea, enjoys a high reputation at home and abroad. It hopes that through digital transformation and the use of cutting-edge technology, the traditional tea industry chain pattern can be broken, moving step by step towards digital and intelligent development. ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5647.PNG) ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5648.PNG) #### 1.2 Solutions -- To build a "digital tribute card" as the goal, around the variety protection, ecological tea garden, production management, tea tourism and other aspects, to build a "digital tribute card" industry digital foundation, the use of Internet of Things, blockchain, 5G and other technologies as the "digital tribute card" construction of technical support.。Empowering industries with numbers to enhance industrial value。 +- With building a "digital tribute card" as the goal, and centering on variety protection, ecological tea gardens, production management and tea tourism, build the digital foundation of the "digital tribute card" industry, with the use
of the Internet of Things, blockchain, 5G and other technologies as technical support for building the "digital tribute card", empowering the industry with data to enhance industrial value. -- The establishment of "digital tea garden," "digital production," "digital display" and other three digital systems, can provide multi-dimensional, multi-scenario brand building services for production and operation, regulatory services, decision-making analysis, to achieve "data to speak, data decision-making, data management, data innovation," with data to empower "digital tribute brand" brand building.。 +- Establish three digital systems, "digital tea garden," "digital production" and "digital display," to provide multi-dimensional, multi-scenario brand building services for production and operation, regulatory services and decision analysis, achieving "data speaks, data decides, data manages, data innovates" and empowering the "digital tribute brand" brand building with data. ##
***02 Technical Scheme*** @@ -23,26 +23,26 @@ West Lake Longjing is the first of the top ten famous teas and one of Hangzhou's #### 2.1 Digital Tea Garden -- meteorological environment monitoring -- soil moisture monitoring -- visual video surveillance +- Meteorological environment monitoring +- Soil moisture monitoring +- Visual video surveillance - pest monitoring and early warning ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5650.PNG) ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5651.PNG) -#### 2.2 Digital production-production plan management -The production plan management system is the management of the enterprise plan, including the development of plans, the implementation of plans, the completion of plans and other three aspects of the work.。The production plan of the enterprise is mainly divided into field management plan, irrigation plan, plant protection plan, fertilization plan, harvest plan and so on.。Only by making a production plan can an enterprise reasonably arrange the operation and management of the production process.。 +#### 2.2 Digital Production - Production Planning Management +The production plan management system manages the enterprise's plans, covering three aspects: plan formulation, plan implementation and plan completion. An enterprise's production plans are mainly divided into field management, irrigation, plant protection, fertilization and harvest plans, among others. Only by making a production plan can an enterprise reasonably arrange the operation and management of the production process. ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5652.PNG) -#### 2.3 Digital production-planting management -The system provides professional and standard production file information forms for the planting process of planting products, the production files are customized according to the characteristics
of tea trees and production standards and enterprise standards, and the production files are collected according to the production batches of the enterprise to realize the integrity of the production file data and ensure that every production batch of the enterprise is documented.。 +#### 2.3 Digital Production - Planting Management +The system provides professional, standardized production file forms for the planting process. The production files are customized according to the characteristics of the tea trees, production standards and enterprise standards, and are collected by the enterprise's production batches, ensuring the integrity of the production file data and that every production batch of the enterprise is documented. ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5653.PNG) -#### 2.4 Digital Presentation-Big Data Platform +#### 2.4 Digital Display - Big Data Platform ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5654.PNG) -#### 2.5 Digital Presentation-Traceability +#### 2.5 Digital Display - Traceability ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5655.PNG) @@ -52,21 +52,21 @@ The system provides professional and standard production file information forms ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5656.PNG) #### 3.2 How to trace -After consumers buy the product, they only need to scan the two-dimensional code ID card on the product to understand the product origin, producer, breeding information, pesticide fertilization information, various detection and circulation information, manufacturers, product brand stories, etc.。 +After buying the product, consumers only need to scan the QR code "ID card" on the product to learn the product's origin, producer, cultivation information, pesticide and fertilizer information, various inspection and circulation information,
manufacturers, product brand stories, etc. ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5657.PNG) #### 3.3 Business Pain Points -With the traceability system, each local characteristic product from where to where to go, the middle through which circulation links, can be traced back to the source. +With the traceability system, where each local specialty product comes from, where it goes, and which circulation links it passes through can all be traced back to the source. ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5658.PNG) #### 3.4 Blockchain+ Traceability ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5659.PNG) -Take advantage of blockchain's decentralized and tamper-resistant features。Chain process data for agricultural product traceability。When the traceability process data is linked, the data will be traceable and cannot be tampered with, which can effectively prove the authenticity and validity of the traceability data.。 +Taking advantage of blockchain's decentralized and tamper-resistant features, the process data of agricultural product traceability is put on the chain. Once on the chain, the traceability data becomes traceable and immutable, which effectively proves its authenticity and validity. ##
***04 Technical Discussion*** #### 4.1 Blockchain+IoT -Compared with the traditional manual filling method, the automatic collection of agricultural situation data through IoT devices ensures the authenticity of the process data source from the source. Based on the BSN alliance chain, we use the smart contract mechanism to chain the data, ensuring the transparency of the chain process, and at the same time, based on the Byzantine fault-tolerant consensus mechanism to achieve data tampering prevention.。In the consumer-oriented traceability code, "Xingnong Code" is based on the one-product-one-code model, providing each tea product with a globally unique blockchain authentication certificate, the electronic certificate records in detail the tea planting process of each link data hash, on-chain timestamp and other information, the certificate of the whole process of traceability data are stored in the BSN alliance chain of city nodes, through the "chain network" solution blockchain.+The landing of traceability applications has been greatly improved。 +Compared with the traditional manual filling method, the automatic collection of agricultural situation data through IoT devices ensures the authenticity of the process data source from the source. 
Based on the BSN consortium chain, we use the smart contract mechanism to put the data on the chain, ensuring the transparency of the on-chain process, while a Byzantine fault-tolerant consensus mechanism prevents data tampering. As for the consumer-facing traceability code, "Xingnong Code" is based on a one-product-one-code model, providing each tea product with a globally unique blockchain authentication certificate. The electronic certificate records in detail the data hash and on-chain timestamp of each link of the tea planting process, and the whole-process traceability data behind the certificate is stored on the city nodes of the BSN consortium chain; through the "chain network" solution, the landing of blockchain + traceability applications has been greatly improved. ![](../../../images/articles/application_westlake_longjingtea_yifei/IMG_5660.PNG) #### 4.2 5G, AI technology integration diff --git a/3.x/en/docs/articles/6_application/industry_application_case.md b/3.x/en/docs/articles/6_application/industry_application_case.md index 11639d2af..4fb32028f 100644 --- a/3.x/en/docs/articles/6_application/industry_application_case.md +++ b/3.x/en/docs/articles/6_application/industry_application_case.md @@ -1,12 +1,12 @@ # What industries has blockchain revolutionized??Attached application case download -According to Xinhua News Agency on the evening of October 25, the Political Bureau of the CPC Central Committee conducted the 18th collective study on the current situation and trend of blockchain technology development on the afternoon of October 24.。While presiding over the study, Xi Jinping, general secretary of the CPC Central Committee, stressed that the integrated application of blockchain technology plays an important role in new technological innovation and industrial transformation.。**We should take blockchain as an important breakthrough in independent innovation of core technologies, clarify the main direction of attack, increase
investment, focus on conquering a number of key core technologies, and accelerate the development of blockchain technology and industrial innovation.。** +According to Xinhua News Agency on the evening of October 25, the Political Bureau of the CPC Central Committee conducted its 18th collective study, on the current situation and trend of blockchain technology development, on the afternoon of October 24. While presiding over the study, Xi Jinping, general secretary of the CPC Central Committee, stressed that the integrated application of blockchain technology plays an important role in new technological innovation and industrial transformation. **We should take blockchain as an important breakthrough for independent innovation in core technologies, clarify the main directions of attack, increase investment, focus on conquering a number of key core technologies, and accelerate the development of blockchain technology and industrial innovation.** -**The report pointed out that the application of blockchain technology has extended to digital finance, Internet of Things, intelligent manufacturing, supply chain management, digital asset trading and other fields.。** +**The report pointed out that the application of blockchain technology has extended to digital finance, the Internet of Things, intelligent manufacturing, supply chain management, digital asset trading and other fields.** -As an open-source bottom-level platform that integrates practical achievements, FISCO BCOS has brought together tens of thousands of community members, over 1,000 enterprises and institutions to participate in the ecological construction of the blockchain industry since it was opened in 2017, and has extensively landed mature application cases in various industries, including government affairs, finance, public welfare, medical care, education, transportation, copyright, commodity traceability, supply chain, recruitment, agriculture, social networking, games, etc.。 +As an open-source bottom-level
platform that integrates practical achievements, FISCO BCOS has brought together tens of thousands of community members and over 1,000 enterprises and institutions to participate in building the blockchain industry ecosystem since it was open-sourced in 2017, and has landed mature application cases extensively across industries, including government affairs, finance, public welfare, medical care, education, transportation, copyright, commodity traceability, supply chain, recruitment, agriculture, social networking and games. -We have selected typical application scenarios and compiled blockchain application cases to quickly understand the current status and prospects of blockchain applications in the industry.。 +We have selected typical application scenarios and compiled blockchain application cases to help readers quickly understand the current status and prospects of blockchain applications in the industry. **[FISCO BCOS open source community] public number background reply "case," you can download the full HD。** @@ -133,6 +133,6 @@ If you have an application case or application plan, please contact us through t ## About Us -FISCO BCOS is the first enterprise-level financial alliance chain underlying platform led by domestic enterprises, open source, secure and controllable, providing reliable and free infrastructure for all walks of life to carry out blockchain applications.://github.com/fisco-bcos, welcome to download experience。 +FISCO BCOS is the first enterprise-level financial consortium chain underlying platform led by domestic enterprises that is open source, secure and controllable, providing reliable and free infrastructure for all walks of life to carry out blockchain applications: https://github.com/fisco-bcos. Welcome to download and experience it. -The platform was created by the open source working group established by the Financial Blockchain Cooperation Alliance (Shenzhen) (referred to as: Golden Chain Alliance), which was officially opened to the outside world in December 2017,
with members including Boyan Technology, Huawei, SZSE, Digital China, Sifang Jingchuang, Tencent, WeBank, Yepi Technology and Yuexiu Jinke.。 \ No newline at end of file +The platform was created by the open source working group established by the Financial Blockchain Cooperation Alliance (Shenzhen) (referred to as the Golden Chain Alliance) and was officially opened to the outside world in December 2017, with members including Boyan Technology, Huawei, SZSE, Digital China, Sifang Jingchuang, Tencent, WeBank, Yepi Technology and Yuexiu Jinke. \ No newline at end of file diff --git a/3.x/en/docs/articles/7_community/group_deploy_case.md b/3.x/en/docs/articles/7_community/group_deploy_case.md index eb4d17bef..9ea9b6937 100644 --- a/3.x/en/docs/articles/7_community/group_deploy_case.md +++ b/3.x/en/docs/articles/7_community/group_deploy_case.md @@ -1,14 +1,14 @@ -# Multi-machine deployment-Single Group Dual Mechanism Dual Node Networking Mode Actual Combat +# Multi-Machine Deployment: Single-Group, Dual-Organization, Dual-Node Networking in Practice Author : Pu Canglong(Xiao Yue)Member of the Center for Blockchain and Applied Research, Shanghai University of International Business and Economics ## 0. Needs Analysis -There are two servers, then the next organization of each machine generates a node, two connected to one, that is: dual-organization dual-node single group.。 +There are two servers; each machine generates one node under its own organization, and the two nodes connect to each other, that is: a dual-organization, dual-node, single-group network. ## 1.
Download and install the operation and maintenance deployment tool -> *It is assumed that there is nothing on the machine, because the user who compiles the client using the source code does not have to take the last step.* +> *It is assumed that nothing is installed on the machine; users who compile the client from source code can skip the last step* Download @@ -28,7 +28,7 @@ Check whether the installation is successful. If the installation is successful, ``` Get Node Binary -pull the latest fisco-bcos binary to meta +pull the latest fisco-bcos binary to meta ```bash ./generator --download_fisco ./meta @@ -36,7 +36,7 @@ pull the latest fisco-bcos binary to meta Check Binary Version -If successful, output FISCO-BCOS Version : x.x.x-x +If successful, output FISCO-BCOS Version: x.x.x-x ```bash ./meta/fisco-bcos -v @@ -47,7 +47,7 @@ Then I cloned the generator locally and found it was: ![](../../../images/articles/group_deploy_case/3.png) -The download _ fisco function of the tool class is the main card here.。No cdn friend can vim modify this url as follows: +The download_fisco function of this tool class is the main sticking point here. Users without CDN access can modify this url with vim as follows: ```bash fisco official cdn @@ -58,7 +58,7 @@ https://xiaoyue-blog.oss-cn-hangzhou.aliyuncs.com/fisco-bcos.tar.gz This is my OSS, open to use the master tap ah。 -It will be over in less than a second.。Then this is installed: +It finishes in less than a second. Then the installation is done: ![](../../../images/articles/group_deploy_case/4.png) @@ -70,7 +70,7 @@ come to kangkang topology ![](../../../images/articles/group_deploy_case/1.png) -Because the official tutorial is on a machine with nodes 1,2。If it is divided, there is actually no difference between 1,2。Because it is on two machines, there will be no port conflict。If the port is not opened, an error may be reported.
We recommend that you whitelist the two computers.。For more information, please refer to: [Port opening for FSICO BCOS multi-machine deployment](https://blog.csdn.net/xiaoyue2019/article/details/107401334) +Because the official tutorial puts nodes 1 and 2 on one machine. If they are split, there is actually no difference between nodes 1 and 2; since they are on two machines, there will be no port conflict. If the ports are not opened, an error may be reported; we recommend that the two machines whitelist each other. For more information, please refer to: [Port opening for FISCO BCOS multi-machine deployment](https://blog.csdn.net/xiaoyue2019/article/details/107401334) |机构|Node|rpc port|channel port|p2p port| |---|---|---|---|---| @@ -109,7 +109,7 @@ Generate Certificate for Authority A ./generator --generate_agency_certificate ./dir_agency_ca ./dir_chain_ca agencyA ``` -The certificate authority sends the certificate to the institution, which is placed in the meta directory. +The certificate authority sends the certificate to the institution, and it is placed in the meta directory. ```bash cp ./dir_agency_ca/agencyA/* ~/generator-A/meta/ @@ -123,7 +123,7 @@ Generate Certificate for Authority B ./generator --generate_agency_certificate ./dir_agency_ca ./dir_chain_ca agencyB ``` -The certificate authority sends the certificate to the institution, which is placed in the meta directory. +The certificate authority sends the certificate to the institution, and it is placed in the meta directory. ```bash cp ./dir_agency_ca/agencyB/* ~/generator-B/meta/ @@ -185,14 +185,14 @@ EOF ### 3.5 Organization A generates and sends node information -Generate the certificate and P2P connection address file of the institution node A, and generate the certificate based on the modified node _ depostion.ini.
+Generate the certificate and P2P connection address file of institution A's node, generating the certificate based on the modified node_deployment.ini ```bash cd ~/generator-A ./generator --generate_all_certificates ./agencyA_node_info ``` -When the organization generates a node, it needs to specify the P2P connection address of other nodes, where Organization A sends the P2P connection organization to Organization B. +When an organization generates a node, it needs to specify the P2P connection addresses of the other nodes; here Organization A sends its P2P connection information to Organization B ```bash cp ./agencyA_node_info/peers.txt ~/generator-B/meta/peersA.txt @@ -200,14 +200,14 @@ cp ./agencyA_node_info/peers.txt ~/generator-B/meta/peersA.txt ### 3.6 Organization B generates and sends node information -Generate the certificate and P2P connection address file of the institution node A, and generate the certificate based on the modified node _ depostion.ini. +Generate the certificate and P2P connection address file of institution B's node, generating the certificate based on the modified node_deployment.ini ```bash cd ~/generator-B ./generator --generate_all_certificates ./agencyB_node_info ``` -Because the creation block needs to be generated, this institution must require a node certificate.。In addition to sending the P2P connection address, the B organization also sends the node certificate.。 +Because the genesis block needs to be generated, a node certificate is required here. In addition to the P2P connection address, organization B also sends its node certificate. ```bash cp ./agencyB_node_info/cert*.crt ~/generator-A/meta/ @@ -216,7 +216,7 @@ cp ./agencyB_node_info/peers.txt ~/generator-A/meta/peersB.txt ### 3.7 Institution A Generates Group 1 Genesis Block -to generate the Genesis block。Here can actually be generated by that agency through negotiation, not necessarily A.。 +to generate the Genesis block. This can
actually be generated by that agency through negotiation, not necessarily A。 ```bash cd ~/generator-A @@ -239,7 +239,7 @@ Send the creation block of group1 to institution b cp ./group/group.1.genesis ~/generator-B/meta ``` -## 3.8 Organization A generates the node to which it belongs. +## 3.8 Organization A generates the node to which it belongs Generate Node for Agency A @@ -255,18 +255,18 @@ bash ./nodeA/start_all.sh ``` There are two points to note: -1. It is no problem that the ports are consistent between the production node configuration file and the Genesis block configuration file, because I do not test on one machine and there will be no port conflicts。However, it is embarrassing that when copying mechanism B to machine B, it cannot run.。 +1. It is no problem that the ports are consistent between the production node configuration file and the Genesis block configuration file, because I do not test on one machine and there will be no port conflicts。However, it is embarrassing that when copying mechanism B to machine B, it cannot run。 2. The default IP address of the rpc is 127.0.0.1. If the rpc is turned on, a warning will be issued: ![](../../../images/articles/group_deploy_case/6.png) -If you must enable the rpc test, you can also refer to the preceding statement to enable the firewall ip whitelist.。 +If you must enable the rpc test, you can also refer to the preceding statement to enable the firewall ip whitelist。 --- -## 4. Mechanism B transfers and generates nodes. +## 4. 
Mechanism B transfers and generates nodes -Compression: 'tar cvf B.tar generator-B` +Compression: 'tar cvf B.tar generator-B' Unzip: 'tar xvf B.tar' Then upload download operation @@ -294,7 +294,7 @@ The correct echo is as follows: ![](../../../images/articles/group_deploy_case/5.png) -Here's another question。It is the above-mentioned self-confidence is not tested, resulting in the wrong ip loss leading to consensus failure, this time is not echoed.。Just delete the following regular。Can see the log error, through the error to find the reason can not be consensus。 +Here's another question。It is the above-mentioned self-confidence is not tested, resulting in the wrong ip loss leading to consensus failure, this time is not echoed。Just delete the following regular。Can see the log error, through the error to find the reason can not be consensus。 --- @@ -310,4 +310,4 @@ welcome to our community to blow water duck ![](../../../images/articles/group_deploy_case/7.bmp) -*Reference: < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/enterprise_tools/tutorial_detail_operation.html>* +*Reference:* diff --git a/3.x/en/docs/articles/7_community/suibe_blockchain_center_toolbox.md b/3.x/en/docs/articles/7_community/suibe_blockchain_center_toolbox.md index 9b47d7c23..2d1a2e2a9 100644 --- a/3.x/en/docs/articles/7_community/suibe_blockchain_center_toolbox.md +++ b/3.x/en/docs/articles/7_community/suibe_blockchain_center_toolbox.md @@ -4,12 +4,12 @@ Author : Research Center of Blockchain Technology and Application, Shanghai Un ## Why do block chain development toolbox? -We (hereinafter "we" all refer to the Center for Blockchain Technology and Application Research of Shanghai University of International Business and Economics) have noted that blockchain developers often face the following four pain points in the process of blockchain learning and development. 
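The compress / upload / extract step from section 4 above can be sketched end-to-end as follows (the scp host is an illustrative placeholder, and a throwaway generator-B directory stands in for the real one so the sketch is self-contained):

```bash
# End-to-end sketch of "compress -> transfer -> extract" for organization B's
# directory. A dummy generator-B is created here so the sketch runs anywhere;
# in the real deployment it is the directory produced by the generator steps.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p generator-B/meta
echo "demo" > generator-B/meta/peersA.txt

# On machine A: pack the directory.
tar cf B.tar generator-B

# Transfer B.tar to machine B, e.g. with scp (placeholder host/user):
#   scp B.tar user@machine-b:~/

# On machine B: unpack and continue with node generation.
mkdir machine-B
tar xf B.tar -C machine-B
ls machine-B/generator-B/meta   # -> peersA.txt
```

Any file-transfer method works in place of scp; the only requirement is that generator-B arrives on machine B intact.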
+We (hereinafter "we" refers to the Center for Blockchain Technology and Application Research of Shanghai University of International Business and Economics) have noted that blockchain developers often face the following four pain points in blockchain learning and development

-- Complicated development tools: At present, various development tools related to blockchain are complicated, which requires developers to spend more time familiarizing and learning various tools, affecting development efficiency.。
-- IDE function is simple: smart contract development / blockchain development IDE is still in the early stage, the function is relatively simple, not friendly to developers。
-- Lack of systematic learning materials: The various materials currently available for developers to learn are mixed, fragmented and lack systematic.。
-- High learning cost: At this stage, smart contract development / blockchain development IDE is more for professionals with a certain foundation, and has a certain learning cost for beginners.。
+- Complicated development tools: the various blockchain development tools are numerous and miscellaneous, so developers must spend extra time getting familiar with them, which hurts development efficiency.
+- Simplistic IDE functions: IDEs for smart contract / blockchain development are still at an early stage; their functions are relatively basic and not developer-friendly.
+- Lack of systematic learning materials: the materials currently available to developers are mixed, fragmented and unsystematic.
+- High learning cost: at this stage, smart contract / blockchain development IDEs target professionals with some background, so there is a real learning cost for beginners.

@@ -19,19 +19,19 @@ So we have this idea: can we make an integrated, convenient and fast blockchain

## Implementation Ideas of Blockchain Development Toolbox

-Compared with the development
tools in the traditional computer field, blockchain development has many and miscellaneous development tools, many functions need to use specific tools or need their own manual development tools and other issues.。Therefore, the blockchain development toolbox we developed consists of 4 parts - 1 toolbox+2 Subplatforms+A CA system, through the toolbox to integrate a variety of functions, to solve the problems often encountered in the development process, the ultimate goal is to achieve the use of a mainstream compiler (such as Remix)+One development toolbox can complete the entire blockchain application development process。
+Compared with development tools in the traditional computer field, blockchain development tools are numerous and miscellaneous, and many functions require specific tools or even tools developers must build by hand. The blockchain development toolbox we developed therefore consists of four parts - 1 toolbox + 2 sub-platforms + 1 CA system. The toolbox integrates a variety of functions to solve problems often encountered during development; the ultimate goal is that one mainstream compiler (such as Remix) plus one development toolbox can complete the entire blockchain application development process.

-- A toolbox: refers to the blockchain development toolbox, which can be used with mainstream IDEs to provide developers with services that are often used in the development process, such as simulated address generation, data conversion, and intelligent generation of blockchain configuration files.。
-- 2 sub-platforms: multi-user runnable library, multi-chain virtual console。The multi-user-runnable library can provide developers with online learning smart contracts, alliance chains, consensus algorithms and other blockchain articles and books, as well as support developers to produce their own content.;The multi-chain virtual console can help developers access the virtual consoles of major
blockchains through the Web.。
-- 1 CA system: CA system based on WeIdentity, responsible for user account management, user behavior records and rewards, etc.。
+- 1 toolbox: the blockchain development toolbox itself, which can be used with mainstream IDEs and provides developers with services that are often needed during development, such as simulated address generation, data conversion, and intelligent generation of blockchain configuration files.
+- 2 sub-platforms: a multi-user runnable library and a multi-chain virtual console. The multi-user runnable library provides developers with blockchain articles and books for online study, covering smart contracts, consortium chains, consensus algorithms and more, and also supports developers in producing their own content; the multi-chain virtual console helps developers access the virtual consoles of major blockchains through the Web.
+- 1 CA system: a CA system based on WeIdentity, responsible for user account management, user behavior records and rewards, etc.

-*WeIdentity is a blockchain-based distributed multi-center technology solution that provides a series of basic layers and application interfaces such as distributed entity identity identification and management, trusted data exchange protocols, etc. It can realize the secure authorization and exchange of entity object (person or thing) data, and is independently developed and fully open source by WeBank.。
+*WeIdentity is a blockchain-based distributed multi-center technology solution that provides a series of base layers and application interfaces, such as distributed entity identity identification and management and trusted data exchange protocols. It enables the secure authorization and exchange of data about entities (people or things), and it is independently developed and fully open-sourced by WeBank.

-github address: < https://github.com/WeBankFinTech/WeIdentity>
+github address: <https://github.com/WeBankFinTech/WeIdentity>

-Technical documentation: < https://fintech.webank.com/developer/docs/weidentity/>
+Technical documentation: <https://fintech.webank.com/developer/docs/weidentity/>

@@ -43,47 +43,47 @@ Technical documentation: < https://fintech.webank.com/developer/docs/weidentity/

## Main functions of blockchain development toolbox

-Let's take a look at the main functions of each component of the blockchain development toolbox.。
+Let's take a look at the main functions of each component of the blockchain development toolbox.

- **Developer Toolbox**

-The toolbox strives to cover the entire blockchain development process, shorten the development cycle, and improve developer efficiency and comfort.。
+The toolbox strives to cover the entire blockchain development process, shorten the development cycle, and improve developer efficiency and comfort.

Take the whole process of FISCO BCOS development as an example:

-**-Start chain stage**
+**- Start Chain Stage**

-When developing a blockchain application, you usually need to start a test chain first. At this time, developers can use Toolbox > Configuration File Intelligent Generation Tool to quickly generate the configuration file of the blockchain network.。
+When developing a blockchain application, you usually need to start a test chain first.
At this time, developers can use Toolbox > Configuration File Intelligent Generation Tool to quickly generate the configuration file of the blockchain network.

-**-Blockchain network monitoring**
+**- Blockchain network monitoring**

After starting the test chain, you can monitor the normal operation of the blockchain network with one click through Toolbox > Blockchain Network Monitoring.

-**-Contract Case Library**
+**- Contract Case Library**

-Before writing a smart contract, you can use the Toolbox > Contract Case Library to find out if there are other contracts that have achieved the same or similar functions to avoid duplication of wheels.。
+Before writing a smart contract, you can use Toolbox > Contract Case Library to check whether other contracts already implement the same or similar functions, to avoid reinventing the wheel.

-**-Address generation, simulation data generation, data conversion, signature verification**
+**- Address generation, simulation data generation, data conversion, signature verification**

-During the contract development process, development and debugging are carried out through the various generation, conversion, and verification functions provided by the toolbox.。
+During contract development, development and debugging are carried out through the various generation, conversion, and verification functions provided by the toolbox.

-**-Performance Testing Tools**
+**- Performance testing tools**

After the development is complete, test the code performance and tune it through the Toolbox > Performance Test Tool.

![](../../../images/articles/suibe_blockchain_center_toolbox/1.png)

-At present, the toolbox has implemented or plans to implement the following functions: address generation, simulation data generation, data conversion, signature verification, intelligent generation of configuration files, blockchain network monitoring, contract case library, blockchain network management, performance testing tools, etc.。
+At present, the toolbox has implemented or plans to implement the following functions: address generation, simulation data generation, data conversion, signature verification, intelligent generation of configuration files, blockchain network monitoring, contract case library, blockchain network management, performance testing tools, etc.

- **Multi-user runnable library**

-Based on JupyterHub, the multi-user runnable library integrates multiple language kernels, such as Python and Java, and supports uploading, reading or running the code in the Ipynb format.。
+Based on JupyterHub, the multi-user runnable library integrates multiple language kernels, such as Python and Java, and supports uploading, reading and running code in the .ipynb format.

-It can help users quickly get started with blockchain development.。For example, if a user wants to study in the library, he can find something he is interested in in the library's public knowledge base, such as an introductory guide to consortium chain technology, and use the text+Runnable code way to learn and operate;You can also submit your study notes to the public knowledge base to feed the community.。
+It helps users get started with blockchain development quickly. For example, a user who wants to study in the library can find something of interest in its public knowledge base, such as an introductory guide to consortium chain technology, and learn hands-on through text combined with runnable code; users can also submit their study notes to the public knowledge base to give back to the community.

-For the team, the team can jointly operate and maintain an internal shared library, where all members can share books, articles, etc., activate group learning efficiency, and solve the problems of low efficiency of isolated learning and high training costs for newcomers.。
+A team can jointly operate and maintain an internal shared library in which all members share books, articles and more, energizing group learning and solving the problems of inefficient isolated study and high training costs for newcomers.

![](../../../images/articles/suibe_blockchain_center_toolbox/2.png)

@@ -91,9 +91,9 @@ For the team, the team can jointly operate and maintain an internal shared libra

- **Multi-chain Virtual Console**

-Blockchain developers do not necessarily need to build a chain themselves at the beginning stage, because there are not many underlying blockchains that can support rapid chain building, and most of the underlying platforms are still a tedious operation.。
+Blockchain developers do not necessarily need to build a chain themselves at the beginning, because few underlying blockchains support rapid chain building, and on most underlying platforms it remains a tedious operation.

-The multi-chain virtual console can provide such a function: developers can use a test chain with others, access the virtual console through the web and develop, when there are multiple developers need to develop in the same test chain through the console, just access the online multi-chain virtual console.。
+The multi-chain virtual console provides exactly this: developers can share a test chain with others and develop against it via a Web-based virtual console; when several developers need to work on the same test chain through a console, they simply access the online multi-chain virtual console.

Web access to the FISCO BCOS console is now supported.

@@ -101,13 +101,13 @@ Web access to the FISCO BCOS console is now supported。

- **CA System**

-The blockchain development toolbox CA system is based on blockchain technology to record and manage users' learning behavior, credit awards, electronic certificates, etc.
in a multi-user operational library.。Through the system, the user's learning data at a glance, combined with e-government and credit reward system, can motivate users to learn more actively, feeding the community.。
+The blockchain development toolbox's CA system uses blockchain technology to record and manage users' learning behavior, credit awards, electronic certificates and so on in the multi-user runnable library. Through the system, a user's learning data is visible at a glance; combined with the electronic certificates and the credit reward system, this motivates users to learn more actively and give back to the community.

-The following features have been implemented or are planned to be implemented: WeIdentity-based DiD digital identity, user learning behavior record, credit reward system, blockchain-based e-certificate, etc.。
+The following features have been implemented or are planned: WeIdentity-based DID digital identity, user learning behavior records, a credit reward system, blockchain-based e-certificates, etc.

-This blockchain development toolbox has been fully open-sourced and contributed to the FISCO BCOS open-source community, and the project is currently being continuously improved. We are also looking forward to all development friends in the community embracing open source and building the project together.://github.com/SUIBE-Blockchain/FISCO_BCOS_Toolbox/>
+This blockchain development toolbox has been fully open-sourced and contributed to the FISCO BCOS open-source community, and the project is being continuously improved. We look forward to developers in the community embracing open source and building the project together: <https://github.com/SUIBE-Blockchain/FISCO_BCOS_Toolbox/>

@@ -115,16 +115,16 @@ This blockchain development toolbox has been fully open-sourced and contributed

**Q: What are the main factors you consider when selecting the underlying technology?**

-**A:** On the one hand, I feel the need to consider the sense of boundaries that the underlying technology dominates the company。There is a big difference between blockchain technology and other traditional technologies: blockchain ecological construction not only depends on the efforts of the leading company, but also on the participation of all parties, if the underlying technology leading company does not set a good boundary, everything is done, others can only assume the role of users, in fact, is contrary to the spirit of the blockchain.。
+**A:** On the one hand, I feel it is necessary to consider the sense of boundaries of the company that leads the underlying technology. Blockchain differs greatly from traditional technologies: building a blockchain ecosystem depends not only on the efforts of the leading company but on the participation of all parties. If the company leading the underlying technology sets no proper boundaries and does everything itself, others can only play the role of users, which actually runs contrary to the spirit of blockchain.

-On the other hand, the underlying technology needs to have sufficient compatibility, because people will not tie themselves to a certain underlying platform, so the technology used by the underlying platform is best not to be exclusive to the platform, for example, Solidity smart contract is currently used in many chains, choose the underlying technology framework of the blockchain, whether to support Solidity is a very important reference indicator.。
+On the other hand, the underlying technology needs sufficient compatibility, because people will not tie themselves to one underlying platform; ideally the technology a platform uses is not exclusive to it. For example, Solidity smart contracts are currently used on many chains, so when choosing an underlying blockchain framework, Solidity support is a very important reference indicator.

-Putting aside the above-mentioned two points to consider in selection, the biggest advantage of FISCO BCOS is the right strategic direction.。Many details can be continuously optimized and improved, but whether the strategic direction is correct is the focus of whether the underlying platform of this technology has development prospects.。
+Putting aside the two selection considerations above, the biggest advantage of FISCO BCOS is its correct strategic direction. Many details can be continuously optimized and improved, but whether the strategic direction is right determines whether an underlying technology platform has development prospects.

**Q: What do you think about the development of domestic open source?**

-**A:** In the process of participating in the 4th China Blockchain Development Competition, we received good support and help from the FISCO BCOS open source community in the preparation of our works, which is actually a manifestation of the open source spirit.。At present, the development of open source in China is still in its infancy, which brings opportunities and dividends to developers, but there are still many interesting open source games to be tried.
I hope that open source enthusiasts can cheer together to promote the development of domestic open source ecology and open source spirit.。
+**A:** While participating in the 4th China Blockchain Development Competition, we received solid support and help from the FISCO BCOS open-source community in preparing our entry, which is itself a manifestation of the open-source spirit. At present, open source in China is still in its infancy, which brings developers opportunities and dividends, and there are still many interesting open-source approaches to try. I hope open-source enthusiasts will work together to promote the domestic open-source ecosystem and the open-source spirit.

diff --git "a/3.x/en/docs/articles/7_community/\346\231\272\346\205\247\345\233\255\345\214\272\345\214\272\345\235\227\351\223\276\345\273\272\350\256\276/\346\226\260\350\207\264\345\214\272\345\235\227\351\223\276.md" "b/3.x/en/docs/articles/7_community/\346\231\272\346\205\247\345\233\255\345\214\272\345\214\272\345\235\227\351\223\276\345\273\272\350\256\276/\346\226\260\350\207\264\345\214\272\345\235\227\351\223\276.md"
index 70997592c..bad7a48ec 100644
--- "a/3.x/en/docs/articles/7_community/\346\231\272\346\205\247\345\233\255\345\214\272\345\214\272\345\235\227\351\223\276\345\273\272\350\256\276/\346\226\260\350\207\264\345\214\272\345\235\227\351\223\276.md"
+++ "b/3.x/en/docs/articles/7_community/\346\231\272\346\205\247\345\233\255\345\214\272\345\214\272\345\235\227\351\223\276\345\273\272\350\256\276/\346\226\260\350\207\264\345\214\272\345\235\227\351\223\276.md"
@@ -1,13 +1,13 @@
-## Xinzhi Blockchain- Trusted Cornerstone of Digital Smart Park
-Author : Liu Jingyi | Shanghai Xinzhi Software-- Project Director
+## Xinzhi Blockchain - Trusted Cornerstone of Digital Smart Park
+Author : Liu Jingyi | Project Director, Shanghai Xinzhi Software

### Analysis of the Current Situation of the Development of Smart Park

-  The
information construction of the smart park is a systematic project, blockchain technology needs to consider the application scheme from the actual needs, the use of innovative ways to complete the advanced technology.。
-Therefore, it is very important to tap the advantages and characteristics of blockchain and apply blockchain technology to the construction of smart parks.。
-  Digital wisdom park development for many years, a variety of parks more and more。The rapid development of digital parks is mainly due to the following factors: from the economic factors, the rapid development of downstream industries, promoting the rapid development of the park.;Political factors, the "13th Five-Year Plan" in the wisdom of the park was proposed, in recent years to obtain policy support, so to rapid development。From a technical point of view, cutting-edge technologies such as big data, AI and 5G have slowly penetrated from first-tier cities into the construction of parks in second-, third- and even fourth-tier cities, so the overall development momentum of digital smart park construction is still relatively rapid.。
+  Building the information infrastructure of a smart park is a systematic project; blockchain applications must be designed from actual needs, using innovative approaches to put the advanced technology into practice.
+Therefore, it is very important to tap the advantages and characteristics of blockchain and apply the technology to smart park construction.
+  Digital smart parks have been developing for many years, and parks of all kinds keep multiplying. Their rapid development is mainly driven by the following factors. Economically, the rapid growth of downstream industries has propelled the growth of the parks; politically, smart parks were named in the 13th Five-Year Plan and have enjoyed policy support in recent years; technically, cutting-edge technologies such as big data, AI and 5G have gradually spread from first-tier cities into park construction in second-, third- and even fourth-tier cities. Overall, digital smart park construction is still developing quite rapidly.

  In the face of such rapid development, there are still some pain points in the development of digital wisdom parks in China, which are mainly reflected in:

-- First of all, the major parks are geographically scattered and the problem of information islands is more serious.。In addition, a large number of application systems, including hardware systems, Internet of Things software, etc., have been piled up in the past, both traditional and new, but the efficiency of operation and maintenance is not necessarily high under various systems.。
-- Secondly, the homogenization of park construction。The so-called homogenization is that the first batch of smart parks in the development, as a local benchmark, other parks began to imitate the benchmark construction, so the construction of the park is very similar, not much innovation.。
+- First, the major parks are geographically scattered, and the problem of information islands is serious. In addition, a large number of application systems, both traditional and new, including hardware systems and IoT software, have piled up over time, and operating and maintaining this patchwork is not necessarily efficient.
+- Second, park construction has become homogeneous: once the first batch of smart parks became local benchmarks, other parks imitated them, so the parks are very similar and show little innovation.
- Again, without the construction of the park ecology, it is difficult to form a closed loop。The earliest wisdom park construction did not mention ecology, but the overall design is more intelligent。
- Finally, the need for more
forward-looking planning, low external scalability.

@@ -15,23 +15,23 @@ Therefore, it is very important to tap the advantages and characteristics of blo

  Generating value from data is the core of smart park construction; in this process, blockchain's traceability, tamper-resistance and other characteristics can help ensure the credibility and authenticity of data.

![](Data schema.png)

-  The overall data architecture diagram of the smart park construction, the leftmost is familiar data sources, basic data, platform data, the park data from the perception layer, the basic data collected, including government data and enterprise-owned data, uploaded to the new chain, through the verification of the data is true, upload and save the certificate can ensure that the data can not be tampered with.。In this context, we then perform data mining, which in turn generates valuable data。On the far right we see asset portraits, park portraits, corporate portraits, and personnel portraits derived from data mining so that the relevant agencies can provide better services to users.。
+  In the overall data architecture of the smart park, the leftmost layer holds the familiar data sources: basic data and platform data. Park data from the perception layer and the collected basic data, including government data and enterprise-owned data, are uploaded to the Xinzhi chain; verifying the data's authenticity and storing it on chain as evidence ensures it cannot be tampered with. On this basis we then perform data mining, which in turn generates valuable data. On the far right are the asset, park, corporate and personnel portraits derived from data mining, which let the relevant agencies provide better services to users.

![](Technical Architecture.png)

-  The overall system architecture of the park construction is shown in the figure above.
The whole architecture can be roughly divided into three parts: external coordination of the park, internal operation coordination of the park, and coordination of the underlying perception technology / cloud chain technology.。The next level is the operational collaboration within the park, which connects the business content, including online and offline business data, and the most external is the platform's user system.。
+  The overall system architecture of the park construction is shown in the figure above. It can be roughly divided into three parts: external collaboration of the park, internal operational collaboration of the park, and the underlying perception / cloud-chain technology. The next level up is the operational collaboration within the park, which connects the business content, including online and offline business data; the outermost layer is the platform's user system.

### Application of blockchain in the implementation process

-  First of all, the entire underlying layer is actually the blockchain BaaS platform, including network nodes and chain management.。The most important thing is the alliance chain of the park, which not only regards the park as a certification node, but also includes enterprises and even some government agencies into the certification node and into the entire blockchain ecology, so as to ensure that a complete blockchain park ecosystem is truly established.。
+  First of all, the entire underlying layer is the blockchain BaaS platform, covering network nodes and chain management. Most important is the park's consortium chain, which not only treats the park itself as a certification node but also brings enterprises and even some government agencies in as certification nodes and into the blockchain ecology, ensuring that a complete blockchain park ecosystem is truly established.

  At the bottom, we chose FISCO BCOS, which is mainly based on the
business scenarios of the alliance chain for selection and comparison. After comparison, we found that FISCO BCOS has the following advantages:

- High node scalability, you can easily add and delete nodes。
- Smart contracts support EVM, you can use the current popular language Solidity to write smart contracts, easy to use。
-- Support distributed database, support KV database, can be convenient to query data, retain historical traceability。
-- Support for national secrets, friendly to domestic regulatory needs。
-- Support node access control, flexible access control, to achieve comprehensive security。
+- Supports distributed databases and KV databases, so data can be queried conveniently and history kept traceable.
+- Supports Chinese national cryptography (guomi) algorithms, friendly to domestic regulatory needs.
+- Supports node admission control with flexible access control, achieving comprehensive security.
- Support regulators and auditors to join the alliance chain as observation nodes to obtain real-time data for regulatory audits。
-- The community is mature, the project iteration speed is fast, the corresponding ecological tools are many, the community has more than 40,000 members and more than 2,000 enterprises to participate.。
+- The community is mature, the project iterates quickly, and there are many ecosystem tools; the community has more than 40,000 members and more than 2,000 participating enterprises.

  On-chain Park Effect and Prospect

-  After several years of development, the digital smart park 1.0 stage has been basically completed, this stage mainly carried out digital construction, adding 5G, Internet of Things technology, the future, blockchain will penetrate into the park construction, so as to achieve from 1.0 to 2.0 construction direction.。
-  As an infrastructure, blockchain must be combined with practical application scenarios and integrated with various cutting-edge technologies to truly solve the pain points in the construction of the park.。Now that the cloud
construction of most parks is relatively mature, how to combine blockchain with existing cloud technology to play a greater value in this process will be an important proposition for the 2.0 phase of exploration.
-  As far as blockchain technology is concerned, the future imagination is still relatively large, according to different types of parks, collect different types of data, mine different potential values, and provide more reference value for the demand side.。
+  After several years of development, the digital smart park 1.0 stage has been basically completed. This stage focused on digital construction, adding 5G and Internet of Things technology; in the future, blockchain will penetrate into park construction, driving the move from the 1.0 to the 2.0 stage.
+  As an infrastructure, blockchain must be combined with practical application scenarios and integrated with various cutting-edge technologies to truly solve the pain points in park construction. Now that the cloud construction of most parks is relatively mature, how to combine blockchain with existing cloud technology to deliver greater value will be an important proposition for the 2.0 phase to explore.
+  As far as blockchain technology is concerned, there is still considerable room for imagination: different types of parks can collect different types of data, mine different potential value, and provide more reference value for the demand side.
diff --git a/3.x/en/docs/articles/7_practice/ansible_FISCO-BCOS_Webase-deploy.md b/3.x/en/docs/articles/7_practice/ansible_FISCO-BCOS_Webase-deploy.md
index 3fd5ecc55..869f3f533 100644
--- a/3.x/en/docs/articles/7_practice/ansible_FISCO-BCOS_Webase-deploy.md
+++ b/3.x/en/docs/articles/7_practice/ansible_FISCO-BCOS_Webase-deploy.md
@@ -1,11 +1,11 @@
-# Ansible for FISCO BCOS + Webase-Deploy efficiently builds enterprise-level production environment alliance chain
-Author : Wuque | Xi'an R & D Center of
Shenzhen Yingxing Chain Alliance Software Engineering Co., Ltd.
+# Ansible for FISCO BCOS + Webase-deploy efficiently builds an enterprise-level production consortium chain
+Author : Wuque | Xi'an R & D Center of Shenzhen Yingxing Chain Alliance Software Engineering Co., Ltd.
## 1 Background Introduction
-If a worker wants to do a good job, he must first sharpen his tools. I have the artifact in my hand.!
+If a worker wants to do a good job, he must first sharpen his tools. I have the artifact in my hand!
### 1.1 Ansible for FISCO BCOS
-Ansible for FISCO BCOS provides ansible that automates the generation of enterprise profile-playbook。The environment of 2 groups, 3 institutions and 6 nodes can generate configurations within 30 seconds (except the download time), which greatly simplifies the difficulty of deployment and avoids errors that are prone to manual configuration。
+Ansible for FISCO BCOS provides an ansible-playbook that automates the generation of enterprise configuration files. For an environment of 2 groups, 3 organizations and 6 nodes, the configuration can be generated within 30 seconds (excluding download time), which greatly simplifies deployment and avoids the errors that manual configuration is prone to.
[Github Access Address](https://github.com/newtouch-cloud/ansible-for-fisco-bcos)
@@ -20,7 +20,7 @@ git clone https://gitee.com/hailong99/ansible-for-fisco-bcos.git
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy728.png)
### 1.2 Webase-deploy
-Deploying WeBASE with one click allows you to quickly build a WeBASE console environment on the same machine, facilitating users to quickly experience the WeBASE management platform。One-click deployment build: node (FISCO-BCOS 2.0+), management platform (WeBASE-Web), Node Management Subsystem (WeBASE-Node-Manager), Node Front Subsystem (WeBASE-Front), signing service (WeBASE-Sign)。Among them, the construction of the node is optional, you can choose to use the
existing chain or build a new chain through the configuration.。
+Deploying WeBASE with one click lets you quickly build a WeBASE console environment on a single machine, making it easy for users to experience the WeBASE management platform. One-click deployment builds: node (FISCO-BCOS 2.0+), management platform (WeBASE-Web), node management subsystem (WeBASE-Node-Manager), node front subsystem (WeBASE-Front), and signature service (WeBASE-Sign). Building the node is optional; you can choose to use an existing chain or build a new chain through configuration.
[Github Access Address](https://github.com/WeBankFinTech/WeBASE)
@@ -57,7 +57,7 @@ System Centos7.6
Two servers in the same LAN, with normal network access to each other.
### 3.2 Software preparation
-Both servers have basic components installed such as: OpenSSL, Java8, Python3, Git, Vim, etc.。
+Both servers have basic components installed, such as OpenSSL, Java 8, Python 3, Git and Vim.
Server A: Database MySQL
@@ -119,11 +119,11 @@ After the field experiment path is: inventories / my _ inventory / group _ vars
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy2293.png)
-Edit chain attributes according to actual business requirements, such as binary file version, whether to generate console, and whether to generate SDK。The notes in the document are clearly written, combined with business understanding.。
+Edit the chain attributes according to actual business requirements, such as the binary file version, whether to generate the console, and whether to generate the SDK. The comments in the file are clearly written; read them in combination with your business understanding.
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy2357.png)
-Edit the chain attributes according to the actual needs of the business, such as: organization, node, group.。The notes in the document are clearly written, combined with business understanding.。
+Edit the chain attributes according to the actual needs of the business, such as organizations, nodes and groups. The comments in the file are clearly written; read them in combination with your business understanding.
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy2417.png)
@@ -158,13 +158,13 @@ ansible-playbook -i inventories/my_inventory/hosts.ini fisco_bcos.yml
Generated Configuration Information
-Note that after the command is executed, the node _ list.yml file will show that the organization and group have not been initialized. If you need to execute the command again, the group and organization have been initialized.。
+Note that after the command is executed, the node _ list.yml file will show that the organization and group have not been initialized; if you execute the command again, it will show that they have already been initialized.
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy2844.png)
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy2846.png)
-According to the configuration information, the underlying file of the alliance chain has been generated.
+According to the configuration information, the underlying files of the consortium chain have been generated.
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy2869.png)
@@ -188,7 +188,7 @@ cp -r agency_iMeshx.tar.gz /home/
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy3179.png)
-192.168.9.207 The server is uploaded directly with the ssh terminal tool and moved to the planned path.。
+On the 192.168.9.207 server, upload the file directly with an SSH terminal tool and move it to the planned path.
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy3236.png)
@@ -223,8 +223,8 @@ tail -f node*/log/log* |grep ++++
So far, our two servers have completed the construction of the chain using the Ansible for FISCO BCOS artifact; the tool automatically completes the commands for generating and copying many files, which is very simple and efficient ^ _ ^!
-## 5 Using Webase-Deploy tool to build Webase
-The underlying service of the alliance chain already exists and needs to be managed by Webase.
+## 5 Use the Webase-deploy tool to build Webase
+The underlying consortium chain service now exists and needs to be managed by WeBASE.
### 5.1 Install webase-deploy
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy3768.png)
@@ -246,8 +246,8 @@ unzip webase-deploy.zip
### 5.2 Configure webase
#### 5.2.1 Configure each subsystem version information and database information
-Edit the configuration file: / home / webase-deploy/common.properties
-Follow the official tutorial and configuration file prompts to configure the subsystem version information and database information respectively.。
+Edit the configuration file: /home/webase-deploy/common.properties
+Follow the official tutorial and the prompts in the configuration file to configure the subsystem version information and database information respectively.
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy4090.png)
@@ -260,15 +260,15 @@ Follow the official tutorial and configuration file prompts to configure the sub
(2) Copy / home / agency _ iMeshx / meta / sdk to the node directory / home / agency _ iMeshx / fisco _ deploy _ agency _ iMeshx
-(3) Copy the three certificates under meta / sdk / to webase-under front / conf(You need to execute the installation command to download webase first.-The front file can only be copied successfully later.)
+(3) Copy the three certificates under meta / sdk / to webase-front / conf (you need to run the installation command to download webase-front before the copy can succeed)
#### 5.2.4 Configuring Nginx
-configure the proxy ip address and port number of nginx according to the plan.
+Configure the proxy IP address and port number of Nginx according to the plan.
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy4419.png)
-#### 5.2.5 Configure webase-front (you need to execute the installation command to download webase first-front file can be configured later)
-The default IP address of SDK is 127.0.0.1, which needs to be changed to 192.168.9.11 and then saved.
+#### 5.2.5 Configure webase-front (you need to run the installation command to download the webase-front file before configuring)
+The default IP address of the SDK is 127.0.0.1, which needs to be changed to 192.168.9.11 and then saved.
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy4534.png)
@@ -284,7 +284,7 @@ python3 deploy.py installAll
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy4640.png)
-Note: webase-node-MGR database initialization is very important, the first run must choose y
+Note: the database initialization of webase-node-mgr is very important; on the first run you must choose y
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy4685.png)
@@ -366,22 +366,22 @@ Administrator password update required for initial login
![](../../../images/articles/ansible_FISCO-BCOS_Webase-deploy/ansible_FISCO-BCOS_Webase-deploy5106.png)
-So far we have finished using webase-Deploy's management and functional testing of the alliance chain is complete.!^_^ 。
+So far, we have completed the management and functional testing of the consortium chain using webase-deploy!^_^
## 7 Development Perception
-#### The way of heaven, the damage is more than enough to make up for the deficiency, is the false victory, the deficiency is more than enough to win.!
+#### The way of heaven takes from what has excess to supply what is lacking; thus the empty prevails over the full, and the insufficient prevails over the surplus!
#### Without accumulating small steps, one cannot reach a thousand miles;
#### Do not take small steps, not even a thousand miles, -#### If you don't accumulate small streams, you can't become a river or a sea.! +#### If you don't accumulate small streams, you can't become a river or a sea! #### False public opinion horse, non-profit also, but to thousands of miles, -#### False boat, non-energy water also, and the river.! +#### False boat, non-energy water also, and the river! -#### A gentleman is born different, good and false in things.! +#### A gentleman is born different, good and false in things! diff --git a/3.x/en/docs/articles/7_practice/build_chain_with_wsl_on_windows.md b/3.x/en/docs/articles/7_practice/build_chain_with_wsl_on_windows.md index 22788c6e1..dd5a13075 100644 --- a/3.x/en/docs/articles/7_practice/build_chain_with_wsl_on_windows.md +++ b/3.x/en/docs/articles/7_practice/build_chain_with_wsl_on_windows.md @@ -1,18 +1,18 @@ -# Windows based on wsl / wsl2-10 Building the Fisco-Bcos blockchain tips +# Experience of Building Fisco-Bcos Block Chain on Windows-10 Based on wsl / wsl2 Author : Huang Yi ( Sichuan Everything Digital Technology Co., Ltd. 
) | FISCO BCOS Developer
## I: Overview
-Recently, some friends in the FISCO community mentioned that because of certain restrictions, can only use the Windows platform for development, hope to have a Windows-based Fisco-Bcos Deployment Tutorial。Just @ power Lin Xuanming's teacher C# The SDK is also maturing, so I wrote this article in the hope of making it easier to deploy the Fisco development environment on Windows。
+Recently, some friends in the FISCO community mentioned that, due to certain restrictions, they can only use the Windows platform for development, and hoped for a Windows-based Fisco-Bcos deployment tutorial. Meanwhile, teacher @power (Lin Xuanming)'s C# SDK is also maturing, so I wrote this article in the hope of making it easier to deploy the Fisco development environment on Windows.
-This paper describes the adoption of**Linux Subsystem for Windows(wsl/wsl2)**, in Windows-10 No dual system / virtual machine burden on platform to build Fisco-Bcos process and experience, then you can combine the development of Visual Studio and Fisco to build a more comfortable Windows development environment.。
+This article describes using the **Windows Subsystem for Linux (wsl/wsl2)** to build Fisco-Bcos on the Windows 10 platform without the burden of a dual system or virtual machine; you can then combine Visual Studio with Fisco for a more comfortable Windows development environment.
wsl and wsl2 are completely different in their underlying implementation; for the differences, see https://docs.microsoft.com/zh-cn/windows/wsl/compare-versions.
-Based on the performance and compatibility of the first generation of WSL using Linux middleware translation, we recommend that you use lightweight Hyper-based-v wsl2, for not wanting to use hyper-v's friends, can only use wsl, at least at this stage has not been found to have compatibility problems。
+Based on the performance and compatibility of the first generation of wsl using linux
middleware translation, we recommend using wsl2, which is based on lightweight hyper-v; friends who do not want to use hyper-v can stick with wsl, and at this stage no compatibility problems have been found.
-Because the steps for building a fisco are identical, this article will first build a fisco stand-alone 4-node blockchain on the wsl, and then switch to the wsl2 installation console to show that the two can be switched at any time.。
+Because the steps for building fisco are identical, this article will first build a stand-alone 4-node fisco blockchain on wsl, and then switch to wsl2 to install the console, showing that the two can be switched at any time.
## II: Configuration requirements
@@ -45,11 +45,11 @@ dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /nores
After successful enablement, restart the computer, download and install the wsl2 kernel update package
-Link: < https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi>
+Link: https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi
Open the Microsoft Store, download and install the Ubuntu 20.04 LTS subsystem
-Link: < https://www.microsoft.com/store/apps/9n6svws3rx71>
+Link: https://www.microsoft.com/store/apps/9n6svws3rx71
![](../../../images/articles/build_chain_with_wsl_on_windows/install_ubuntu20.png)
@@ -69,7 +69,7 @@ You can see that Ubuntu is now using wsl version 1. Next, you will install and c
## Four: Based on wsl in the ubuntu20.04 subsystem to build a single 4 node
-It is officially recommended to use Windows Termintal for operation. You can also use the shell or powershell that comes with Windows. Download link: < https://aka.ms/terminal>
+It is officially recommended to use Windows Terminal for operation. You can also use the shell or powershell that comes with Windows.
Download link: https://aka.ms/terminal
Open the terminal and type directly
@@ -81,9 +81,9 @@
Enter the ubuntu subsystem, as shown in the figure below
![](../../../images/articles/build_chain_with_wsl_on_windows/windows_terminal_wsl.png)
-Here you can see one of the features of wsl: you can directly access files in the NTFS file system.(The starting position is located at c:\Users)and can call windows applications with the .exe suffix。
+Here you can see one of the features of wsl: you can directly access files in the NTFS file system (the starting position is c:\Users) and can call Windows applications with the .exe suffix.
-In the development period, for ease of management, you can put the FICO under the NTFS file system, such as "My Documents," but do not configure it in a production environment, the Linux subsystem access across file systems will reduce performance.。
+During development, for ease of management, you can put fisco under the NTFS file system, such as "My Documents", but do not do this in a production environment: Linux subsystem access across file systems reduces performance.
Because it is a local subsystem, no network configuration is required. Refer to [Building the First Blockchain Network](../../installation.md) to quickly deploy a single-machine 4-node fisco blockchain
```bash
cd ~
sudo apt install -y openssl curl
cd ~ && mkdir -p fisco && cd fisco
-curl -#LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v2.9.1/build_chain.sh && chmod u+x build_chain.sh
+curl -#LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v2.11.0/build_chain.sh && chmod u+x build_chain.sh
bash build_chain.sh -l 127.0.0.1:4 -p 30300,20200,8545
```
@@ -108,11 +108,11 @@ tail -f nodes/127.0.0.1/node0/log/log* | grep +++
![](../../../images/articles/build_chain_with_wsl_on_windows/start_node_wsl.png)
-At the same time, since we are using wsl-1.
You can view the 4-node linux process fisco in the task manager-bcos and its resource usage
+At the same time, since we are using wsl-1, we can view the four fisco-bcos linux processes and their resource usage in the task manager.
![](../../../images/articles/build_chain_with_wsl_on_windows/taskmgr_fisco.png)
-By right-clicking the process, you can quickly locate the home directory of the Ubuntu subsystem.**Do not modify any files in this directory in Windows**To access the home of wsl, enter:\ wsl $in the address bar of the explorer
+By right-clicking the process, you can quickly locate the home directory of the Ubuntu subsystem. **Do not modify any files in this directory from Windows.** To access the home of wsl, enter \\wsl$ in the address bar of Explorer.
## Five: switch between wsl and wsl2
@@ -153,9 +153,9 @@ wsl --set-version Ubuntu-20.04 1
## Six: Install the fisco console
-Please refer to [Building the First Blockchain Network] for installation tutorial.(../../installation.md)In the installation console section, this article tries to make a brief integration introduction.
+For the installation tutorial, please refer to the "install the console" section of [Building the First Blockchain Network](../../installation.md); this article gives a brief integrated introduction.
-Since the source of the Ubuntu subsystem is abroad, the download speed of jdk will be slow for domestic users.
+Since the source of the Ubuntu subsystem is abroad, the download speed of the jdk will be slow for domestic users.
```bash
sudo chmod 777 /etc/apt/sources.list
@@ -197,13 +197,13 @@ After startup, you will see the following screen and the console deployment is s
![](../../../images/articles/build_chain_with_wsl_on_windows/console_start.png)
-## 7: Using csharp-sdk Visual Studio development example
+## Seven: Visual Studio development examples using csharp-sdk
-This article uses @ power teacher's csharp-sdk and its tutorials, thanks to the hard work of power teacher。
+This article uses @ power teacher's csharp-sdk and its tutorials; thanks to teacher power's hard work.
-git address: < https://github.com/FISCO-BCOS/csharp-sdk>
+git address: https://github.com/FISCO-BCOS/csharp-sdk
-Tutorial address: < https://www.bilibili.com/video/BV1av41147Lo>
+Tutorial address: https://www.bilibili.com/video/BV1av41147Lo
### 1. Establish a new project and introduce the C#-SDK
@@ -223,7 +223,7 @@ Compile the project and create a contracts folder in the project output director
![](../../../images/articles/build_chain_with_wsl_on_windows/mkdir_contracts.png)
-Switch to Terminal and copy the HellowWorld.sol contract in the console you just downloaded to the contracts folder you just created.
+Switch to Terminal and copy the HelloWorld.sol contract from the console you just downloaded to the contracts folder you just created.
```bash
cp ~/fisco/console/contracts/solidity/HelloWorld.sol [your contracts directory]
```
@@ -235,7 +235,7 @@ Open HelloWorld.sol with vscode, install the solidity plugin and switch to versi
![](../../../images/articles/build_chain_with_wsl_on_windows/change_solidity_version.png)
-Press F5 to compile the contract. The bin folder will be generated under contracts, and the compiled HelloWorld.bin and HelloWorld.abi will be generated.
+Press F5 to compile the contract. A bin folder will be generated under contracts, containing the compiled HelloWorld.bin and HelloWorld.abi.
### 3.
Interaction with fisco
@@ -359,5 +359,5 @@ The results are as follows
![](../../../images/articles/build_chain_with_wsl_on_windows/run_deployed_contract.png)
-At this point, using Visual Studio to federate csharp on Windows-The local development of FICO by SDK has come to an end. About charp-For more information about the SDK, see the links at the beginning of this section.
+At this point, local development of fisco on Windows using Visual Studio in conjunction with csharp-sdk comes to an end. For other features of csharp-sdk, see the links at the beginning of this section.
diff --git a/3.x/en/docs/articles/7_practice/deploy_webase_management_platform.md b/3.x/en/docs/articles/7_practice/deploy_webase_management_platform.md
new file mode 100644
index 000000000..8e7f758fb
--- /dev/null
+++ b/3.x/en/docs/articles/7_practice/deploy_webase_management_platform.md
@@ -0,0 +1,188 @@
+# Deploy WeBASE Management Platform with One Click
+
+Author : WANG Cunqi | Shandong Business Vocational College
+
+## I: Preface
+
+WeBASE (WeBank Blockchain Application Software Extension) is a set of common components built between blockchain applications and FISCO BCOS nodes.
+
+Building the WeBASE management platform includes: node (FISCO-BCOS 2.0+), management platform (WeBASE-Web), node management subsystem (WeBASE-Node-Manager), node front subsystem (WeBASE-Front), and signature service (WeBASE-Sign). Building the node is optional; you can choose to use an existing chain or build a new chain through configuration.
+
+## II: Description of environment
+
+### 1. Required Tools
+
+| Tools | Version |
+| :-----: | ------------------ |
+| Java | Oracle JDK 8 to 14 |
+| MySQL | MySQL 5.6 and above |
+| Python | Python 3.6 and above |
+| PyMySQL | |
+
+### 2.
Current system
+
+![](../../../images/articles/deploy_webase_management_platform/pic1.png)
+
+## Three: download tools
+
+To facilitate management, we first create a deployWeBASE folder and cd into it; all of the following work is done under the deployWeBASE folder.
+
+![](../../../images/articles/deploy_webase_management_platform/pic2.png)
+
+### 1. Install Oracle JDK (not OpenJDK)
+
+#### ①. Create a folder to manage java
+
+Install a version of Oracle Java 8 to 13 and place the downloaded jdk in the java directory
+
+What I downloaded here is [jdk-13.0.2 _ linux-x64 _ bin.tar.gz](https://www.oracle.com/java/technologies/javase/jdk13-archive-downloads.html "Download Oraclejdk").
+
+![](../../../images/articles/deploy_webase_management_platform/pic3.png)
+
+#### ②. Decompress the installation package
+
+```linux
+tar -zxvf jdk-13.0.2_linux-x64_bin.tar.gz
+```
+
+![](../../../images/articles/deploy_webase_management_platform/pic4.png)
+
+#### ③. Configure the java home environment
+
+Modify the ~/.bashrc configuration file
+
+```linux
+vim ~/.bashrc
+```
+
+After opening, add the following three lines to the file, then save and exit
+
+```
+export JAVA_HOME=/home/ly102/deployWeBASE/java/jdk-13.0.2
+export PATH=$JAVA_HOME/bin:$PATH
+export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
+```
+
+#### ④. Refresh configuration
+
+```linux
+source ~/.bashrc
+```
+
+#### ⑤. Verify successful configuration
+
+```linux
+java -version
+```
+
+![](../../../images/articles/deploy_webase_management_platform/pic5.png)
+
+### 2.
Install mysql5.6
+
+```linux
+sudo apt-get install software-properties-common
+sudo add-apt-repository 'deb http://archive.ubuntu.com/ubuntu trusty universe'
+sudo apt-get update
+sudo apt install mysql-server-5.6
+sudo apt install mysql-client-5.6
+sudo systemctl start mysql
+sudo systemctl enable mysql
+```
+
+At the fourth step you may run into dependency problems; you can refer to this article for a solution:
+
+[ubuntu installs mysql 5.6 dependency problem solving _ nvidia-docker depends on sysv-rc(>= 2.88dsf-24) | file-r-CSDN Blog](https://blog.csdn.net/qq_33388707/article/details/126540762)
+
+Verify
+
+```linux
+mysql --version
+```
+
+![](../../../images/articles/deploy_webase_management_platform/pic6.png)
+
+### 3. Install python3
+
+```linux
+# Add the repository; press Enter to continue
+sudo add-apt-repository ppa:deadsnakes/ppa
+# Install Python
+sudo apt-get install -y python3.6
+sudo apt-get install -y python3-pip
+```
+
+Verify
+
+```linux
+python3 --version
+```
+
+![](../../../images/articles/deploy_webase_management_platform/pic7.png)
+
+### 4. Install PyMySQL
+
+For Python 3.6 and above, the PyMySQL dependency package must be installed
+
+```linux
+sudo pip3 install PyMySQL
+```
+
+![](../../../images/articles/deploy_webase_management_platform/pic8.png)
+
+Now we have all the tools we need
+
+## Four: build WeBASE management platform
+
+### 1. Create a folder to manage the WeBASE management platform
+
+```linux
+mkdir webase
+```
+
+### 2. Enter the WeBASE directory and download the installation package
+
+```linux
+ cd webase/ && wget https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeBASE/releases/download/v1.5.5/webase-deploy.zip
+```
+
+### 3. Decompress the installation package and enter the decompressed directory
+
+```linux
+unzip webase-deploy.zip && cd webase-deploy
+```
+
+### 4.
Modify the configuration
+
+```linux
+vim common.properties
+```
+
+Change these two places to your own database user and password.
+
+![](../../../images/articles/deploy_webase_management_platform/pic9.png)
+
+I'm using the default chain here. If you want to deploy using an existing chain, you need to modify the following:
+
+change if.exist.fisco from no to yes
+
+set fisco.dir to your own node path
+
+change the three port numbers node.p2pPort, node.channelPort and node.rpcPort to the ports of the corresponding node
+
+![](../../../images/articles/deploy_webase_management_platform/pic10.png)
+
+### 5. Deploy and launch
+
+```linux
+python3 deploy.py installAll
+```
+
+![](../../../images/articles/deploy_webase_management_platform/pic11.png)
+
+### 6. Access
+
+![](../../../images/articles/deploy_webase_management_platform/pic12.png)
+
+The default account is admin and the default password is Abcd1234.
+
+You can also access the WeBASE-Front front-end platform.
\ No newline at end of file
diff --git a/3.x/en/docs/articles/7_practice/kunpeng_platform_compiles_and_runs_fisco-bcos-2.6.0.md b/3.x/en/docs/articles/7_practice/kunpeng_platform_compiles_and_runs_fisco-bcos-2.6.0.md
index b10b4c842..d9593f9c6 100644
--- a/3.x/en/docs/articles/7_practice/kunpeng_platform_compiles_and_runs_fisco-bcos-2.6.0.md
+++ b/3.x/en/docs/articles/7_practice/kunpeng_platform_compiles_and_runs_fisco-bcos-2.6.0.md
@@ -1,4 +1,4 @@
-# Compiling and Running FISCO on Kunpeng Platform-BCOS 2.6.0
+# Compile and Run FISCO-BCOS 2.6.0 on Kunpeng Platform
## One: Apply for a Kunpeng server (skip this step if you already have one)
@@ -19,12 +19,12 @@ Fill it yourself in the page that opens"Demand Request" Orders, planning hardwar
### 4. Wait for notification
-After the demand order is submitted, wait for official approval, and you will receive an email notification of the approval results after the approval is completed.
+After the demand order is submitted, wait for official approval; you will receive an email notification of the result once approval is completed.
![](../../../images/articles/kunpeng_platform_compiles_and_runs_fisco-bcos-2.6.0/4.png)
### 5. Pass the application
-Log in to the Kunpeng server to view the server information, and the Kunpeng server is now ready.
+Log in to the Kunpeng server to view the server information; the Kunpeng server is now ready.
![](../../../images/articles/kunpeng_platform_compiles_and_runs_fisco-bcos-2.6.0/5.png)
## Two: install basic software in Kunpeng server
@@ -48,7 +48,7 @@ sudo yum install -y openssl-devel openssl cmake3 gcc-c++ git flex patch bison gm
### 3. Install Kunpeng version jdk-1.8
* Install JDK
- From [Oracle](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) Download JDK-1.8
+ From [Oracle](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) Download jdk-1.8
![](../../../images/articles/kunpeng_platform_compiles_and_runs_fisco-bcos-2.6.0/13.png)
@@ -76,7 +76,7 @@ source /etc/profile
### 1. Download Compile Dependencies
- * FISCO compilation depends on many files, and it will be slow to download them directly from git, so here you download the corresponding dependencies from gitee first, and then copy them directly when compiling.
+ * FISCO compilation depends on many files, and downloading them directly from git is slow, so first download the corresponding dependencies from gitee, then copy them directly when compiling.
```
cd
git clone https://gitee.com/FISCO-BCOS/LargeFiles.git
@@ -123,7 +123,7 @@ Scanning dependencies of target jsoncpp
dst='/root/FISCO-BCOS/deps/src/jsoncpp'
```
-### 5. Copy the dependency package to the appropriate directory.
+### 5.
Copy the dependency package to the appropriate directory
```
# If prompted whether to overwrite, enter y
@@ -140,7 +140,7 @@ make
### 7 Solve the error of compiling GroupSigLib
- * During the compilation process, if the problem of compiling GroupSigLib fails, an error will be reported.
+ * During compilation, if the GroupSigLib build fails, an error like the following will be reported.
```
[ 24%] Performing configure step for 'GroupSigLib'
-- GroupSigLib configure command succeeded. See also /root/FISCO-BCOS/deps/src/GroupSigLib-stamp/GroupSigLib-configure-*.log
@@ -155,7 +155,7 @@ CMake Error at /root/FISCO-BCOS/deps/src/GroupSigLib-stamp/GroupSigLib-build-Rel
/root/FISCO-BCOS/deps/src/GroupSigLib-stamp/GroupSigLib-build-*.log
-make[2]: *** [CMakeFiles/GroupSigLib.dir/build.make:115:../deps/src/GroupSigLib-stamp/GroupSigLib-build] Error 1
+make[2]: *** [CMakeFiles/GroupSigLib.dir/build.make:115:../deps/src/GroupSigLib-stamp/GroupSigLib-build] Error 1
```
* Workaround:
@@
cp /usr/share/automake-1.13/config.guess ${HOME}/FISCO-BCOS/deps/src/GroupSigLib/deps/src/pbc_sig/config.guess
```
-### 8. View the compilation results.
+### 8. View the compilation results
* Compile completion effect
![](../../../images/articles/kunpeng_platform_compiles_and_runs_fisco-bcos-2.6.0/20.png)
@@ -188,7 +188,7 @@ cd
mkdir bin
```
-### 2. Copy the compiled fisco.-bcos file to the created directory
+### 2. Copy the compiled fisco-bcos file to the created directory
```
cp ${HOME}/FISCO-BCOS/build/bin/fisco-bcos bin
@@ -200,7 +200,7 @@ cp ${HOME}/FISCO-BCOS/build/bin/fisco-bcos bin
curl -LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/v2.9.1/build_chain.sh && chmod u+x build_chain.sh
```
-### 4. Run one key to build the bottom layer FISCO of 2 groups, 3 institutions and 6 nodes-BCOS Consortium Chain Service Script
+### 4.
One-click script to build a FISCO-BCOS consortium chain service with 2 groups, 3 institutions and 6 nodes ``` # ./build_chain.sh -l 127.0.0.1:4 -p 30300,20200,8545 -e bin/fisco-bcos @@ -262,9 +262,9 @@ info|2020-09-04 17:34:21.456586|[g:1][CONSENSUS][SEALER]++++++++++++++++ Generat info|2020-09-04 17:34:22.459794|[g:1][CONSENSUS][SEALER]++++++++++++++++ Generating seal on,blkNum=1,tx=0,nodeIdx=1,hash=d1dd4738... ``` -## Five: Installation of FISCO on Kunpeng platform-BCOS Console +## Five: Installation of FISCO-BCOS Console on Kunpeng Platform -(The console program depends on java)-1.8 You need to install the Kunpeng version (arrch64) of java in advance.-1.8 +(The console program depends on java-1.8; the Kunpeng (aarch64) version of java-1.8 needs to be installed in advance) ```bash # Download Console @@ -275,7 +275,7 @@ $ cd console cp ~/nodes/127.0.0.1/sdk/* conf # Modify Profile -# If there is no port conflict, copy the configuration file directly. +# If there is no port conflict, copy the configuration file directly. 
Otherwise, modify the network.peers configuration item in config.toml to the corresponding channel port $ cp conf/config-example.toml conf/config.toml ``` diff --git "a/3.x/en/docs/articles/7_practice/\346\213\206\350\247\243build_chain.sh\350\247\243\350\257\273FISCO-BCOS\345\273\272\351\223\276\350\277\207\347\250\213.md" "b/3.x/en/docs/articles/7_practice/\346\213\206\350\247\243build_chain.sh\350\247\243\350\257\273FISCO-BCOS\345\273\272\351\223\276\350\277\207\347\250\213.md" index 1225de683..12a69a16c 100644 --- "a/3.x/en/docs/articles/7_practice/\346\213\206\350\247\243build_chain.sh\350\247\243\350\257\273FISCO-BCOS\345\273\272\351\223\276\350\277\207\347\250\213.md" +++ "b/3.x/en/docs/articles/7_practice/\346\213\206\350\247\243build_chain.sh\350\247\243\350\257\273FISCO-BCOS\345\273\272\351\223\276\350\277\207\347\250\213.md" @@ -1,11 +1,11 @@ -# Disassemble build _ chain.sh Interpretation of FISCO-BCOS chain building process +# Disassembling build_chain.sh: Interpreting the FISCO-BCOS Chain-Building Process Author : Chongqing Electronic Engineering Vocational College| to the key male Here is the tutorial: [companion video](https://space.bilibili.com/335373077) # lifting chain -We're not going to talk about the chain here in a complete way, pick the point if you're interested in or unfamiliar with the chain, you can go to another article of mine. 
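As an aside on the console configuration step above: the copy-then-edit of `network.peers` in `config.toml` can be emulated end-to-end in a throwaway directory. The file contents, port numbers and `sed` rewrite below are illustrative assumptions, not the console's actual template:

```shell
# Emulate the console config step: copy the example config, then point
# network.peers at the nodes' actual channel ports when the defaults conflict.
workdir=$(mktemp -d) && cd "$workdir"
mkdir conf
# Minimal stand-in for conf/config-example.toml (contents are illustrative).
printf '[network]\npeers=["127.0.0.1:20200","127.0.0.1:20201"]\n' > conf/config-example.toml
cp conf/config-example.toml conf/config.toml
# Suppose the chain was actually built with channel ports 30200/30201:
sed -i 's/20200/30200/; s/20201/30201/' conf/config.toml
grep 'peers' conf/config.toml
```

If the default channel ports are free, the plain `cp` alone is enough, exactly as the article says.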
-[[Tutorial] Perfect FISCO-How to start the BCOS blockchain network, stand-alone four-node, alliance chain](https://blog.csdn.net/qq_57309855/article/details/126180787?spm=1001.2014.3001.5501) +We won't cover the chain-building process exhaustively here, just the key points; if you're interested in or unfamiliar with it, you can go to another article of mine +[[Tutorial] How to start a complete FISCO-BCOS blockchain network: a stand-alone four-node consortium chain](https://blog.csdn.net/qq_57309855/article/details/126180787?spm=1001.2014.3001.5501) First I will download the build _ chain.sh script @@ -31,7 +31,7 @@ curl: (7) Failed to connect to github.com port 443: Connection refused ` [INFO] Download speed is too low, try https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v2.9.0/fisco-bcos.tar.gz -Here we go to GitHub to download FISCO-BCOS compressed package, found that the link failed, so jump to the domestic code cloud to download +Here we tried to download the FISCO-BCOS package from GitHub, found the connection failed, and fell back to the domestic Gitee mirror to download it ``` @@ -45,7 +45,7 @@ This corresponds to line 1633 in our build _ chain.sh - There are many judgments below to prevent GitHub from being inaccessible in China. + There are many checks below to handle GitHub being inaccessible in China ## Second paragraph @@ -57,7 +57,7 @@ Generating CA key... 
``` -Generate the CA key corresponding to the script 1677 lines, the running process in the script is to find ${output_dir}The CA certificate is stored in the cert directory under ${output_dir}The nodes directory is defined above, so we can see our CA certificate after we enter, and there will be a column to explain the generation of the specific CA certificate.。 +Generating the CA key corresponds to line 1677 of the script. The script locates ${output_dir}; the CA certificate is stored in the cert directory under ${output_dir}. The nodes directory is defined above, so after entering it we can see our CA certificate; a later column will explain how the specific CA certificate is generated。 ![prepareCA code screenshot](https://user-images.githubusercontent.com/111106471/184881843-f81179d6-945f-4e2e-b64c-6b1e86a3cd50.png) ![output _ dir screenshot](https://user-images.githubusercontent.com/111106471/184881884-f3ff536b-0d61-46de-af3b-832f2982b129.png) @@ -77,7 +77,7 @@ Processing IP=127.0.0.1 Total=1 Agency=agency Groups=1 ``` -The generated secret key and certificate correspond to line 1793 of the script. The running process in the script is to assign them to the variable $OPTARG after entering the chain command, determine the chain mode, the number of nodes, determine the parameters such as IP, group, etc., and start creating the node directory. The node directory is determined by node _ count. +The generated key and certificate correspond to line 1793 of the script. After the chain command is entered, the script assigns the options to the variable $OPTARG, determines the chain mode and the number of nodes, determines parameters such as IP and group, and starts creating the node directory. 
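The node-directory creation described here can be sketched roughly as follows; the loop is an illustrative reconstruction of what the script does with node_count, not its exact code:

```shell
# Create one directory per node under ${output_dir}/nodes, driven by
# node_count, mirroring what build_chain.sh does for -l 127.0.0.1:4.
output_dir=$(mktemp -d)
node_count=4
for i in $(seq 0 $((node_count - 1))); do
  mkdir -p "${output_dir}/nodes/127.0.0.1/node${i}/conf"
done
ls "${output_dir}/nodes/127.0.0.1"
```

With four nodes on one IP this yields node0 through node3, each with a conf directory that later receives its certificates and configuration.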
The node directory is determined by node_count ![Key certificate code screenshot](https://user-images.githubusercontent.com/111106471/184882272-d8c63950-9576-42c8-bff5-5b11349f6b9c.png) @@ -100,7 +100,7 @@ Processing IP=127.0.0.1 Total=1 Agency=agency Groups=1 ``` -The generated configuration file corresponds to line 1925 of the script. The running process in the script is to first determine the location of the output directory of the certificate, and then send the generated certificate to the directory after receiving it with node _ count and node _ dir. The generated certificate includes group certificates, group.X.genesis, group.x.ini, config.ini, and agency directories. +The generated configuration files correspond to line 1925 of the script. The script first determines the location of the certificate output directory, then uses node_count and node_dir to distribute the generated certificates into it. The generated files include group certificates, group.X.genesis, group.X.ini, config.ini, and the agency directories @@ -118,7 +118,7 @@ The generated configuration file corresponds to line 1925 of the script. The run ``` -Here is the feedback of all ports and services and the final working directory to the user, for their own determination of whether to meet expectations and to prevent excessive work after the configuration file, etc. can not be found, corresponding to script 226 lines. +Here all ports, services and the final working directory are reported back to the user, so they can judge whether the result meets expectations and avoid being unable to find the configuration files after much later work; this corresponds to line 226 of the script ![Generate Profile Code Screenshot](https://user-images.githubusercontent.com/111106471/184882563-01bbca48-4460-408d-878e-4214e5563777.png) @@ -135,7 +135,7 @@ e.g. 
bash /home/fisco223/fisco/nodes/127.0.0.1/download_console.sh -f ``` -This is to remind users to use sh script to obtain FISCO in the directory named by IP.-BCOS Console。And gave an example of e.g. to explain the usage, and finally prompted the user that all the processes have been completed, set up the completion of the work directory in ${output_dir}Lower。 +This reminds the user to run the sh script in the directory named by IP to obtain the FISCO-BCOS console。An e.g. example explains its usage, and finally the user is told that the whole process is complete and that the working directory has been set up under ${output_dir}。 ![Feedback screenshot](https://user-images.githubusercontent.com/111106471/184882668-3f673308-042c-419d-9bc0-2b1fc49c3500.png) diff --git a/3.x/en/docs/articles/index.md b/3.x/en/docs/articles/index.md index 80aaed475..3a0ec8826 100644 --- a/3.x/en/docs/articles/index.md +++ b/3.x/en/docs/articles/index.md @@ -2,9 +2,9 @@ ## 介绍 -The new year has begun, in order to thank you for your long-term company and support, we will FISCO BCOS open source community since the establishment of more than 400 technical dry goods and classic chapters into a document, as a collection of blockchain dry goods, to share with you.! Please click here for the full version.(https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247493260&idx=1&sn=b042be04819c89169b81bcee99eb2a18&chksm=9e3a71e6e1d2ad682bf8fde9f178c91160e5d7cdf187a81cedb191b8da1605f206f9bfd98ec0&from=industrynews&version=4.1.7.6018&platform=win#rd) +The new year has begun. To thank you for your long-term company and support, we have compiled the 400-plus technical gems and classic pieces published since the founding of the FISCO BCOS open source community into one document, a collection of blockchain know-how, to share with you! 
Please click here for the full version (https://mp.weixin.qq.com/s?__biz=MzA3MTI5Njg4Mw==&mid=2247493260&idx=1&sn=b042be04819c89169b81bcee99eb2a18&chksm=9e3a71e6e1d2ad682bf8fde9f178c91160e5d7cdf187a81cedb191b8da1605f206f9bfd98ec0&from=industrynews&version=4.1.7.6018&platform=win#rd) -FISCO BCOS is divided into six chapters, which are created and optimized by all members of the community. Community developers can submit PR contribution articles on GitHub to share FISCO BCOS development experience and usage experience.。 +The collection is divided into six chapters, created and refined by all members of the community. Community developers can submit PRs on GitHub to contribute articles and share their FISCO BCOS development and usage experience。 ## The concept and principle of blockchain diff --git a/3.x/en/docs/community.md b/3.x/en/docs/community.md index e4481f7bd..ca33b0ed2 100644 --- a/3.x/en/docs/community.md +++ b/3.x/en/docs/community.md @@ -1,6 +1,6 @@ # Community Resources -FISCO BCOS is a domestic enterprise-led R & D, open source, safe and controllable enterprise-level financial alliance chain underlying platform.。An open source working group set up by the Financial Blockchain Cooperation Alliance (Shenzhen) (referred to as "Golden Chain Alliance") is collaboratively built, with members including Boyan Technology, Huawei, SZSE, Shenzhou Information, Sifang Jingchuang, Tencent, WeBank, Yepi Technology and Yuexiu Jinke.。 +FISCO BCOS is a domestically developed, open source, safe and controllable enterprise-grade financial consortium-chain underlying platform。It is collaboratively built by an open source working group set up by the Financial Blockchain Cooperation Alliance (Shenzhen) (referred to as the "Golden Chain Alliance"), with members including Boyan Technology, Huawei, SZSE, Shenzhou Information, Sifang Jingchuang, Tencent, WeBank, Yepi Technology and Yuexiu Jinke。 ## FISCO BCOS Community Resources diff --git 
a/3.x/en/docs/community/MVP_list_new.md b/3.x/en/docs/community/MVP_list_new.md index dc2ea70b0..cd96202c5 100644 --- a/3.x/en/docs/community/MVP_list_new.md +++ b/3.x/en/docs/community/MVP_list_new.md @@ -6,11 +6,11 @@ Author: Little Assistant -In order to encourage opinion pioneers and opinion leaders who contribute high-quality technical content to the open source community, the open source community is open to FISCO BCOS MVP recognition, and as of 2023, the open source community has recognized 63 MVPs.。 +To encourage the opinion pioneers and leaders who contribute high-quality technical content, the open source community runs FISCO BCOS MVP recognition; as of 2023, 63 MVPs have been recognized。 -These outstanding contributors either land FISCO BCOS technology in various applications to help digitize the industry, or spread the spirit of the open source community to further afield in multi-channel sermons.。 +These outstanding contributors either apply FISCO BCOS technology in real-world applications to help digitize industries, or evangelize through multiple channels to spread the spirit of the open source community further afield。 -Let's meet the practitioners of these technologies and the evangelists of the community.。 +Let's meet these practitioners of the technology and evangelists of the community。 ![](../../images/community/mvp_review_2023.png)
diff --git a/3.x/en/docs/community/contributor_list_new.md b/3.x/en/docs/community/contributor_list_new.md index d28fd537a..57ab326f5 100644 --- a/3.x/en/docs/community/contributor_list_new.md +++ b/3.x/en/docs/community/contributor_list_new.md @@ -5,9 +5,9 @@ Author: Little Assistant -In 2023, the world is changing, and science and technology are changing with each passing day.。FISCO BCOS adheres to the belief in blockchain technology, bringing together more than 5,000 enterprises and institutions and more than 100,000 individual members to build co-governance and share, creating a more active and prosperous open source alliance chain ecosystem.。 +In 2023, the world kept changing and technology advanced with each passing day。Holding fast to its belief in blockchain technology, FISCO BCOS brought together more than 5,000 enterprises and institutions and more than 100,000 individual members to co-build, co-govern and share, creating an ever more active and prosperous open source consortium-chain ecosystem。 -Over the past year, a large number of open source contributors have joined us to support the development of FISCO BCOS open source in many directions, from code, tools, solutions and sermons.。More than 100 small partners contribute code to FISCO BCOS, bringing a more robust and powerful FISCO BCOS to the community;More than ten teams contribute tool components and solutions to the open source community, further enriching open source ecological components and application cases;More than 60 partners have become the "tap water" of FISCO BCOS, and have spontaneously exported technical interpretations, operational practices and application cases to promote FISCO BCOS as community preachers.。 +Over the past year, a large number of open source contributors have joined us to support FISCO BCOS open source development in many directions: code, tools, solutions and evangelism。More than 100 community members contributed code to FISCO BCOS, bringing a more robust and powerful FISCO BCOS to the community; more than ten teams contributed tool components and solutions to the open source community, further enriching open source ecosystem components and application cases; more than 60 members became voluntary promoters of FISCO BCOS, spontaneously producing technical interpretations, operational practices and application cases as community evangelists。
We have compiled the FISCO BCOS Contributor Honor Roll 2023, and thank you for your enthusiastic participation and active contributions! @@ -17,12 +17,12 @@ The following list is ranked in no particular order. If there are any errors or ### Code contribution -Code contribution refers to the contribution made around the FISCO BCOS project and the code repository of the community eco-project, including but not limited to submitting PR to modify the source code, contributing new code, building technical documents, etc.。 +Code contribution refers to contributions made around the FISCO BCOS project and the code repositories of community eco-projects, including but not limited to submitting PRs to modify source code, contributing new code, and building technical documentation。 ![](../../images/community/contributors_2023.png) ### Tool Contribution -On the basis of FISCO BCOS, community developers continue to explore and develop a variety of practical components and tools to facilitate the development process and expand the platform functions.。At present, these tools have all been open source and contributed to the community developers to use, greatly reducing the application development threshold and cost。 +On the basis of FISCO BCOS, community developers continue to explore and develop a variety of practical components and tools that ease the development process and extend the platform's functions。These tools have all been open-sourced for community developers to use, greatly reducing the application development threshold and 
cost。 **Project Name: WeCross-BCOS3-Stub** @@ -43,52 +43,52 @@ Core participants: Zhang Sheng, Chen Xun, Li Qilong, Jin Wei, Lin Bin, Zhang Yuh Project Introduction: -In order to allow more developers to participate in the optimization of smart contract library components, the community launched the "Task Challenge" campaign.。In the first and second quarters of 2023, the activities respectively solicited contracts such as "factoring financing of supply chain financial receivables," "high inquiry of historical blocks," "type conversion contract," "time lock operation," "multi-party voting," "transparent supervision and voting solution of block chain funds," "game of fighting monsters and upgrading up to the game," "bicycle sharing case," "adding code comments to contracts," "innovative solutions in the public domain。 +In order to allow more developers to participate in the optimization of smart contract library components, the community launched the "Task Challenge" campaign。In the first and second quarters of 2023, the activities respectively solicited contracts such as "factoring financing of supply chain financial receivables," "high inquiry of historical blocks," "type conversion contract," "time lock operation," "multi-party voting," "transparent supervision and voting solution of block chain funds," "game of fighting monsters and upgrading up to the game," "bicycle sharing case," "adding code comments to contracts," "innovative solutions in the public domain。 https://github.com/WeBankBlockchain/SmartDev-Contract ### Programme contribution -In the 2023 Shenzhen International Finance and Technology Competition and many other authoritative blockchain competitions, many excellent entries based on FISCO BCOS have emerged.。In the open source spirit of giving back to the community, the participants contributed to these solutions, providing a useful reference for community users to learn about blockchain.。 +In the 2023 Shenzhen International Finance and Technology 
Competition and many other authoritative blockchain competitions, many excellent entries based on FISCO BCOS have emerged。In the open source spirit of giving back to the community, the participants contributed to these solutions, providing a useful reference for community users to learn about blockchain。 **Program name: AI4C - AIGC-oriented creation of cultural digital product operation platform** Core participants: Yu Zhenqi, Hu Xin, Sun Yan, Yang Guoming -At present, AIGC cultural creation faces many problems, such as the difficulty of storing and checking massive materials, the untrustworthy reasoning under the chain, the difficulty of protecting product copyright, and the opaque distribution of benefits.。In order to solve the above problems, the team built a copyright operation platform for AIGC creation of cultural digital products.。The platform integrates six key functions, such as trusted copyright identification, to realize the whole process of cultural and digital copyright services.。Carry out research on on-chain storage retrieval and off-chain trusted extension technology, and design a cultural digital product operation plan。The platform and FISCO BCOS blockchain have been developed and deployed, and trusted computing hardware is used to support the privacy calculation and reasoning acceleration of the generative model.。 +At present, AIGC cultural creation faces many problems, such as the difficulty of storing and checking massive materials, the untrustworthy reasoning under the chain, the difficulty of protecting product copyright, and the opaque distribution of benefits。In order to solve the above problems, the team built a copyright operation platform for AIGC creation of cultural digital products。The platform integrates six key functions, such as trusted copyright identification, to realize the whole process of cultural and digital copyright services。Carry out research on on-chain storage retrieval and off-chain trusted extension technology, and 
design a cultural digital product operation plan。The platform and FISCO BCOS blockchain have been developed and deployed, and trusted computing hardware is used to support the privacy calculation and reasoning acceleration of the generative model。 https://github.com/FISCO-BCOS/hackathon/pull/81/files **Program name: FISCO BCOS-based federated learning platform** Core participants: Ma Haobin, Su Bingquan -The platform innovatively integrates federated learning with blockchain technology to enable efficient and secure business processes。Innovation Proposed Serial+The parallel federated learning method subtly enhances the diversity of training data, reduces the pressure on data storage, and protects privacy and security to the greatest extent.。The system uses a combination of FISCO BCOS multi-group deployment and Ribbon load balancing to significantly improve the processing efficiency and throughput of blockchain services.。Using RocketMQ to achieve peak shaving to ensure the stability of the system;At the same time, the model file and the file index are stored separately to achieve the function of the large amount of data file on the chain certificate.。Realize node trusted authentication through public and private key files, providing a security cornerstone for the entire system。 +The platform innovatively integrates federated learning with blockchain technology to enable efficient and secure business processes。The innovatively proposed serial + parallel federated learning method subtly enhances the diversity of training data, reduces the pressure on data storage, and protects privacy and security to the greatest extent。The system uses a combination of FISCO BCOS multi-group deployment and Ribbon load balancing to significantly improve the processing efficiency and throughput of blockchain services。RocketMQ is used for peak shaving to ensure system stability; at the same time, the model file and the file index are stored separately to achieve the function 
of the large amount of data file on the chain certificate。Realize node trusted authentication through public and private key files, providing a security cornerstone for the entire system。 https://github.com/FISCO-BCOS/hackathon/pull/87/files **Scheme Name: Integrated Supply Chain Carbon Footprint System Based on Blockchain** Core participants: Deng Yitian, Liang Haotian, Wang Weijie, Xuan Haojun, Ren Xuhao -Traditional supply chain vendors usually store carbon footprint data in their own local databases, resulting in the inability to circulate database data between different supply chain vendors, making it difficult to synchronize carbon footprint data.。This work proposes a "blockchain-based supply chain carbon footprint unified system"。The system consists of three key technologies, including a blockchain-oriented fine-grained access control technology, an on-chain, off-chain hybrid storage architecture, and an efficient on-chain, off-chain collaboration mechanism.。The system enables synchronization and consistency of data across supply chain vendors, helping companies achieve their "dual carbon" goals.。 +Traditional supply chain vendors usually store carbon footprint data in their own local databases, resulting in the inability to circulate database data between different supply chain vendors, making it difficult to synchronize carbon footprint data。This work proposes a "blockchain-based supply chain carbon footprint unified system"。The system consists of three key technologies, including a blockchain-oriented fine-grained access control technology, an on-chain, off-chain hybrid storage architecture, and an efficient on-chain, off-chain collaboration mechanism。The system enables synchronization and consistency of data across supply chain vendors, helping companies achieve their "dual carbon" goals。 https://github.com/FISCO-BCOS/hackathon/pull/89/files **Scheme name: second-hand tide play trading platform based on blockchain and emotional sustenance** Core 
participants: Lu Longji, Gao Tao, Wu Jiaheng, Yuan Yao, Lin Jiaer -The project aims to build a more credible, efficient, safe and warm second-hand trendy trading platform, enhance the user experience through emotional storage and original rights confirmation, use blockchain technology to keep emotional stories forever, protect original content, reduce fraud and malicious behavior in the transaction process, and promote the healthy development of the second-hand trading market.。The innovation of this project is to make emotions generate value and protect the rights and interests of the original creators. At the beginning of the project, in the promising second-hand trading market, the project focused on the special category of second-hand trendplay, linking the emotions of buyers and sellers with stories and commodities to resonate: allowing sellers to sell their goods at higher prices through emotional assignment.;At the same time for the most emotional needs of buyers, to provide emotional consumption place。 +The project aims to build a more credible, efficient, safe and warm second-hand trendy trading platform, enhance the user experience through emotional storage and original rights confirmation, use blockchain technology to keep emotional stories forever, protect original content, reduce fraud and malicious behavior in the transaction process, and promote the healthy development of the second-hand trading market。The innovation of this project is to make emotions generate value and protect the rights and interests of the original creators. 
At the beginning of the project, in the promising second-hand trading market, the project focused on the special category of second-hand trendplay, linking the emotions of buyers and sellers with stories and commodities to resonate: allowing sellers to sell their goods at higher prices through emotional assignment;At the same time for the most emotional needs of buyers, to provide emotional consumption place。 https://github.com/FISCO-BCOS/hackathon/pull/80/files **Scheme Name: Ask Chain - ESG Rating System Based on Blockchain** Core participants: Zheng Huiwen, Zhong Nanhai, Li Zhiyuan -In the context of sustainable development, ESG assessment has become an important indicator of corporate sustainability.。However, there is a lack of a unified, transparent and efficient ESG evaluation platform in the current market.。Existing assessment tools often face problems such as inconsistent scoring standards, opaque data, easy tampering, and process redundancy, which not only affects the fairness of the assessment, but also increases the operating costs of the enterprise.。Therefore, the development of a blockchain-based ESG scoring system aims to provide enterprises with a reliable, transparent and easy-to-operate ESG assessment solution, which has significant advantages in terms of cost efficiency and is suitable for use by enterprises of all sizes, assessment agencies and relevant regulatory agencies.。 +In the context of sustainable development, ESG assessment has become an important indicator of corporate sustainability。However, there is a lack of a unified, transparent and efficient ESG evaluation platform in the current market。Existing assessment tools often face problems such as inconsistent scoring standards, opaque data, easy tampering, and process redundancy, which not only affects the fairness of the assessment, but also increases the operating costs of the enterprise。Therefore, the development of a blockchain-based ESG scoring system aims to provide enterprises 
with a reliable, transparent and easy-to-operate ESG assessment solution, which has significant advantages in terms of cost efficiency and is suitable for use by enterprises of all sizes, assessment agencies and relevant regulatory agencies。 https://github.com/FISCO-BCOS/hackathon/pull/83/files **Scheme Name: Carbon Road - Blockchain-based Carbon Asset Management and Trusted Trading Scheme** Core participants: Zhang Fan, Song Yu, Xiao Yitao, Wang Qingnan, Shen Tongbo -Aiming at the pain points in the process of carbon data collection, access certification, carbon asset trading and carbon data management, this scheme formulates the project solution, which provides an integrated scheme for breaking the barriers of carbon market asset trading, supporting diversified carbon asset trading business, multi-market transaction data fusion analysis, carbon asset trading and management through the technical advantages of carbon emission trusted automatic collection, multi-subject low-cost access certification, efficient and credible carbon asset trading, and carbon data dynamic authorization management。This project uses the FISCO BCOS v3.0 blockchain architecture, the front end uses VUE, and the back end uses core development tools such as SpringBoot to implement an integrated platform for blockchain-based carbon asset management and trusted trading.。 +Aiming at the pain points in the process of carbon data collection, access certification, carbon asset trading and carbon data management, this scheme formulates the project solution, which provides an integrated scheme for breaking the barriers of carbon market asset trading, supporting diversified carbon asset trading business, multi-market transaction data fusion analysis, carbon asset trading and management through the technical advantages of carbon emission trusted automatic collection, multi-subject low-cost access certification, efficient and credible carbon asset trading, and carbon data dynamic authorization 
management。This project uses the FISCO BCOS v3.0 blockchain architecture, the front end uses VUE, and the back end uses core development tools such as SpringBoot to implement an integrated platform for blockchain-based carbon asset management and trusted trading。 https://github.com/FISCO-BCOS/hackathon/pull/92/files **Scheme name: WeTender - ESG governance model for project bidding between government and enterprises for the construction of "four good rural roads."** @@ -102,28 +102,28 @@ https://github.com/FISCO-BCOS/hackathon/pull/86/files Core participants: Chen Mingyuan, Lin Zejun, Ye Litao, Lu Yuhao, Zhang Shijie -Building an anonymous weighted voting system based on the FISCO BCOS blockchain platform, giving full play to the non-tamperability, traceability and support for smart contracts of blockchain technology, and is committed to solving the problems of traditional voting systems controlled by centralized institutions, not open and transparent, not supporting weighted voting and privacy.。 +Building an anonymous weighted voting system based on the FISCO BCOS blockchain platform, giving full play to the non-tamperability, traceability and support for smart contracts of blockchain technology, and is committed to solving the problems of traditional voting systems controlled by centralized institutions, not open and transparent, not supporting weighted voting and privacy。 https://github.com/FISCO-BCOS/hackathon/pull/91/files **Program name: wisdom to promote agriculture - from the net to the chain, "Red Star Apple" characteristic agricultural products holding a key "certificate" traceability** Core participants: Bai Qili, Li Yile, Gao Huiwen, Chen Guocui, Chang Jiaxuan -The traceability system of intelligent agricultural characteristic agricultural products is an Internet of Things platform that integrates Internet of Things, cloud computing, data analysis and blockchain technology to perceive, analyze, predict and control the agricultural environment and 
transportation.。Blockchain is a technical solution that combines digital, cryptography, Internet and computer programming technologies to collectively maintain a reliable database in a decentralized and trusted manner.。First of all, relying on various sensor nodes deployed in the agricultural production site to collect real-time online monitoring of growth conditions and other real-time data, through the websocket long connection to collect data transmitted to the FISCO BCOS blockchain network cloud blockchain computing center, and through storage and encryption to ensure the security and visibility of the data running the alliance chain on the hard disk, the formation of the actual crop from planting to transportation of the whole process of visualization, through AI analysis to provide。 +The traceability system of intelligent agricultural characteristic agricultural products is an Internet of Things platform that integrates Internet of Things, cloud computing, data analysis and blockchain technology to perceive, analyze, predict and control the agricultural environment and transportation。Blockchain is a technical solution that combines digital, cryptography, Internet and computer programming technologies to collectively maintain a reliable database in a decentralized and trusted manner。First of all, relying on various sensor nodes deployed in the agricultural production site to collect real-time online monitoring of growth conditions and other real-time data, through the websocket long connection to collect data transmitted to the FISCO BCOS blockchain network cloud blockchain computing center, and through storage and encryption to ensure the security and visibility of the data running the alliance chain on the hard disk, the formation of the actual crop from planting to transportation of the whole process of visualization, through AI analysis to provide。 https://github.com/FISCO-BCOS/hackathon/pull/90/files ### Sermon Contribution -In addition to code 
contributions and tool contributions, there is also a category of contributors who do not hesitate to share FISCO BCOS-based development experience and technical / industrial perspectives in various channels, giving the FISCO BCOS open source community a stronger and longer-term vitality and influence, and encouraging more people to participate in ecological co-construction.。 +In addition to code contributions and tool contributions, there is also a category of contributors who generously share FISCO BCOS-based development experience and technical / industrial perspectives across various channels, giving the FISCO BCOS open source community stronger and longer-term vitality and influence, and encouraging more people to participate in ecological co-construction。 -Sermon contributions include but are not limited to sharing FISCO BCOS related technologies in various activities, writing articles or compiling video parsing FISCO BCOS related technologies, etc.。Preaching channels are not limited. If there are any omissions in the contribution list, please contact your assistant and let us know.。 +Sermon contributions include, but are not limited to, sharing FISCO BCOS related technologies at various events and writing articles or producing videos that explain FISCO BCOS related technologies。Preaching channels are not limited; if there are any omissions in the contribution list, please contact the assistant and let us know。 ![](../../images/community/sermon_contributors_2023.png) ### Description of source of contribution data -The current list of contributors is mainly collected from the FISCO BCOS code repository in GitHub and the code repositories of community eco-projects such as FISCO BCOS Toolbox and WeBankBlockchain.
The statistical time period is January 1, 2023.-On December 31, 2023, if there is any omission or improvement suggestion, please contact the assistant [FISCOBCOS010] for feedback.。 +The list of contributors is mainly collected from the FISCO BCOS code repository in GitHub and the code repositories of community ecological projects such as FISCO BCOS Toolbox and WeBankBlockchain. The statistical period is from January 1, 2023 to December 31, 2023. If there are any omissions or suggestions for improvement, please contact the assistant [FISCOBCOS010] for feedback。 ![](../../images/community/img.png) -Scan the code to view the quarterly contributor list. +Scan the code to view the quarterly contributor list diff --git a/3.x/en/docs/community/partner_list_new.md b/3.x/en/docs/community/partner_list_new.md index 19fddc9be..0b08120aa 100644 --- a/3.x/en/docs/community/partner_list_new.md +++ b/3.x/en/docs/community/partner_list_new.md @@ -8,9 +8,9 @@ Author: Little Assistant In order to better promote the blockchain landing industry, cultivate more professionals for the industry, and help the blockchain ecology flourish, the FISCO BCOS Partner Program for the industry long-term recruitment of "industrial application partners," "talent cultivation partners" and "ecological development partners."。 -Industrial application partners aim to help blockchain technology to be better applied and promote the development of blockchain industry;Talent cultivation partners will join hands with FISCO BCOS open source community to carry out curriculum research and development, talent cultivation and talent certification based on FISCO BCOS open source blockchain technology, help build a blockchain talent cultivation system, and provide professional skills for industrial development.;Ecological development partners will work with FISCO BCOS open source community to build a blockchain open source ecology, leading the high-quality development of the industry with solid basic technology 
support.。 +Industrial application partners aim to help blockchain technology to be better applied and promote the development of blockchain industry;Talent cultivation partners will join hands with FISCO BCOS open source community to carry out curriculum research and development, talent cultivation and talent certification based on FISCO BCOS open source blockchain technology, help build a blockchain talent cultivation system, and provide professional skills for industrial development;Ecological development partners will work with FISCO BCOS open source community to build a blockchain open source ecology, leading the high-quality development of the industry with solid basic technology support。 -Since the launch of the FISCO BCOS Partner Program, many partners have actively applied。By 2023, there are 50 certified FISCO BCOS partners (35 industrial application partners, 13 talent cultivation partners and 2 ecological development partners), which play an important role in promoting the industrial application of FISCO BCOS and helping the development of blockchain industry.。 +Since the launch of the FISCO BCOS Partner Program, many partners have actively applied。By 2023, there are 50 certified FISCO BCOS partners (35 industrial application partners, 13 talent cultivation partners and 2 ecological development partners), which play an important role in promoting the industrial application of FISCO BCOS and helping the development of blockchain industry。 Certified partners are announced as follows, welcome to add a small assistant [FISCOBCOS010] to understand and sign up to join the program。 ![](../../images/community/partner/industrial_application_partners_2023.jpeg) @@ -27,29 +27,29 @@ Certified partners are announced as follows, welcome to add a small assistant [F **Beijing Copyright Home Technology Development Co., Ltd** -Beijing Copyright Home Technology Development Co., Ltd. 
and FISCO BCOS to provide digital copyright services, the development of the copyright blockchain system in conjunction with copyright regulatory agencies, judicial institutions, the National Timing Center, CA and other copyright to provide copyright confirmation, piracy monitoring, copyright protection and copyright trading and other one-stop comprehensive copyright services, to achieve the creation of rights, use of rights, discovery of rights.! +Beijing Copyright Home Technology Development Co., Ltd. works with FISCO BCOS to provide digital copyright services, developing a copyright blockchain system in conjunction with copyright regulatory agencies, judicial institutions, the National Timing Center, CAs and other parties to provide one-stop comprehensive copyright services such as copyright confirmation, piracy monitoring, copyright protection and copyright trading, achieving the creation, use and discovery of rights! ![](../../images/community/partner/img.png) -**Beijing Zhongxiang Bit Technology Co., Ltd.** +**Beijing Zhongxiang Bit Technology Co., Ltd** -Founded in July 2014, Beijing Zhongxiang Bit Technology Co., Ltd. is one of the first technology-driven companies engaged in the development of blockchain underlying platforms and application cases in China, providing blockchain products and integrated solutions services to many customers at home and abroad based on FISCO BCOS.。 -Zhongxiang Bit is a national high-tech enterprise, Beijing "specialized and special new" small and medium-sized enterprises, 2021 Beijing intellectual property demonstration unit, won the 2022 / 2021 Beijing private enterprise small and medium-sized top 100, 2021 / 2020 / 2019 KPMG China leading financial technology 50 enterprises, 2020 / 2019 Zhongguancun gazelle enterprises and other honors.。 +Founded in July 2014, Beijing Zhongxiang Bit Technology Co., Ltd.
is one of the first technology-driven companies engaged in the development of blockchain underlying platforms and application cases in China, providing blockchain products and integrated solutions services to many customers at home and abroad based on FISCO BCOS。 +Zhongxiang Bit is a national high-tech enterprise and a Beijing "specialized and special new" small and medium-sized enterprise, was named a 2021 Beijing intellectual property demonstration unit, and has won honors including the 2022 / 2021 Beijing private small and medium-sized enterprise top 100, the 2021 / 2020 / 2019 KPMG China leading financial technology 50 enterprises, and the 2020 / 2019 Zhongguancun gazelle enterprises。 ![](https://img-blog.csdnimg.cn/343fead6b7a643c1bf534149d224b3e5.png) -**Radio and Television Express Group Co., Ltd.** +**Radio and Television Express Group Co., Ltd** -Founded in 1999, Radio and Television Express is a state-controlled high-tech listed company (securities code.:002152), the main business covers intelligent finance, public safety, transportation, government affairs, cultural tourism, new retail and education and other fields, together with FISCO BCOS to provide global customers with competitive intelligent terminals, operational services and big data solutions.。 +Founded in 1999, Radio and Television Express is a state-controlled high-tech listed company (securities code: 002152); its main business covers intelligent finance, public safety, transportation, government affairs, cultural tourism, new retail, education and other fields, and together with FISCO BCOS it provides global customers with competitive intelligent terminals, operational services and big data solutions。
the fields of intelligent finance, intelligent transportation, intelligent security, intelligent convenience, etc., to enable the upgrading of traditional industries with science and technology.。In overseas markets, the company has established 9 global branches, and its products and services have entered more than 100 countries and regions around the world.。In the face of a new wave of science and technology, Radio and Television Express will continue to accelerate the deep integration of cutting-edge information technologies such as blockchain, artificial intelligence, big data and the Internet of Things with the real economy, and contribute to the construction of the Smart Greater Bay Area and "Digital China."。 +The company started from domestic financial self-service equipment, relying on the advantages of scene landing ability, technology research and development, supply chain and so on accumulated over the years, implementing the new development concept, focusing on the two main lines of financial technology and urban intelligence, in the fields of intelligent finance, intelligent transportation, intelligent security, intelligent convenience, etc., to enable the upgrading of traditional industries with science and technology。In overseas markets, the company has established 9 global branches, and its products and services have entered more than 100 countries and regions around the world。In the face of a new wave of science and technology, Radio and Television Express will continue to accelerate the deep integration of cutting-edge information technologies such as blockchain, artificial intelligence, big data and the Internet of Things with the real economy, and contribute to the construction of the Smart Greater Bay Area and "Digital China."。 ![](https://img-blog.csdnimg.cn/bbf93b5ae6ea41e1976137f072cd8dcd.png) -**Guangzhou One Chain Block Chain Technology Co., Ltd.** +**Guangzhou One Chain Block Chain Technology Co., Ltd** Guangzhou One Chain Blockchain 
Technology Co., Ltd. focuses on blockchain technology products and industry application solutions, with more than 10 soft patents, successfully applied FISCO BCOS in the "audit supervision" scenario of government and enterprises, and launched the "blockchain data tamper-proof platform."。 -The platform is committed to "making the world's data credible and free of fraud," and has successfully landed in the audit and supervision scenarios of GAC Honda, China Merchants Expressway, Anjubao and other customers.。 +The platform is committed to "making the world's data credible and free of fraud," and has successfully landed in the audit and supervision scenarios of GAC Honda, China Merchants Expressway, Anjubao and other customers。 ![](../../images/community/partner/hucais.jpeg) @@ -57,150 +57,150 @@ The platform is committed to "making the world's data credible and free of fraud Hucai Group Co., Ltd. was founded in 1989, is a set of digital printing, puree beer, smart wedding three business segments in one of the group enterprises, with dozens of member enterprises, covering the tiger color printing art, Taishan puree beer, fresh lemon smart wedding three brands。Headquartered in Dongguan National High-tech Development Zone Songshan Lake。 -Based on the company's digital and industrial Internet strategy, Hucai established the Hucai Blockchain Innovation and Application Center in 2020, developing blockchain technology as one of the company's five core capabilities for industrial Internet.。Based on FISCO BCOS, Hucai has built a large number of industrial blockchain applications from marketing, logistics, shopping malls to content, including small tiger intelligence marketing, capacity chain, exchange mall, content ecological platform, unified transaction settlement platform and printing chain platform, has been widely used in the actual business of Hucai, serving more than one million partners and users.。 +Based on the company's digital and industrial Internet strategy, Hucai 
established the Hucai Blockchain Innovation and Application Center in 2020, developing blockchain technology as one of the company's five core capabilities for industrial Internet。Based on FISCO BCOS, Hucai has built a large number of industrial blockchain applications from marketing, logistics, shopping malls to content, including small tiger intelligence marketing, capacity chain, exchange mall, content ecological platform, unified transaction settlement platform and printing chain platform, which have been widely used in Hucai's actual business, serving more than one million partners and users。 ![](https://img-blog.csdnimg.cn/f3a426136533477b8561146dce75ae9f.png) -**Jinan Spring Chain Haiwo Digital Technology Co., Ltd.** +**Jinan Spring Chain Haiwo Digital Technology Co., Ltd** -Jinan Spring Chain Haiwo Digital Technology Co., Ltd. is a company specializing in RegTech.(Regulatory Technology)The high-tech enterprises in the sub-sector, established by well-known scholars in the blockchain field, blockchain technology experts certified by the Ministry of Industry and Information Technology and talents in the field of financial science and technology, are one of the first commercial blockchain companies in China to enter the government pilot catalogue, and are key partners of blockchain for large enterprises such as Aerospace Information, People's Online and Shandong Digital Publishing.。 +Jinan Spring Chain Haiwo Digital Technology Co., Ltd.
is a high-tech enterprise specializing in the RegTech (Regulatory Technology) sub-sector. Founded by well-known scholars in the blockchain field, blockchain technology experts certified by the Ministry of Industry and Information Technology, and talents in the field of financial technology, it is one of the first commercial blockchain companies in China to enter the government pilot catalogue, and a key blockchain partner of large enterprises such as Aerospace Information, People's Online and Shandong Digital Publishing。 -The company has joined hands with FISCO BCOS to provide one-stop blockchain application solutions, and has launched products such as chain-linked gold service platform, tamper-proof comparative review system, special fund supervision, blockchain super-integrated machine, shared bicycle management, community digital epidemic prevention, distributed big data sharing, etc., and has won many domestic blockchain event awards.。 +The company has joined hands with FISCO BCOS to provide one-stop blockchain application solutions, and has launched products such as chain-linked gold service platform, tamper-proof comparative review system, special fund supervision, blockchain super-integrated machine, shared bicycle management, community digital epidemic prevention, distributed big data sharing, etc., and has won many domestic blockchain event awards。 ![](https://img-blog.csdnimg.cn/2df1a680702f4f30b7002066f8f0af44.png) -**Value Internet (Guangzhou) Blockchain Technology Co., Ltd.** +**Value Internet (Guangzhou) Blockchain Technology Co., Ltd** Value Internet (Guangzhou) Blockchain Technology Co., Ltd. ("Value Internet" or "Value++"), established in 2017, focuses on the research and development and sales of blockchain-related technologies and products.
By building a data exchange and circulation platform based on blockchain technology, integrating advanced technologies such as blockchain, privacy and security computing, cloud computing, and artificial intelligence, it builds a platform for data exchange, transaction, circulation and sharing between enterprises, solves mutual trust in enterprise data and free circulation and exchange of data, activates enterprises' sleeping data, realizes data aggregation and sublimation, and helps。 ![](https://img-blog.csdnimg.cn/d701715249f140e88c17b1b8b760f349.png) -**Jiangsu Anhuang Lingyu Technology Co., Ltd.** +**Jiangsu Anhuang Lingyu Technology Co., Ltd** -Jiangsu Anhuang Lingyu Technology Co., Ltd. was established in August 2017, the company focuses on the application scenarios of blockchain, and works with FISCO BCOS to provide a number of solutions and services such as IoT security services, blockchain products, and trusted security service capability output.。Committed to becoming the industry's leading provider of blockchain solutions and services to help government and enterprise digital transformation。 +Jiangsu Anhuang Lingyu Technology Co., Ltd. 
was established in August 2017. The company focuses on blockchain application scenarios and works with FISCO BCOS to provide solutions and services such as IoT security services, blockchain products, and trusted security service capability output。It is committed to becoming the industry's leading provider of blockchain solutions and services to help government and enterprise digital transformation。 -Anhuang Lingyu was awarded the title of Blockchain Filing Enterprise by the National Cyberspace Administration of China in November 2020, and was successfully selected as an innovator in IDC's blockchain digital depository field in May 2021.。 +Anhuang Lingyu was awarded the title of Blockchain Filing Enterprise by the National Cyberspace Administration of China in November 2020, and was successfully selected as an innovator in IDC's blockchain digital depository field in May 2021。 ![](../../images/community/partner/north_king.png) **Beijing North Information Technology Co., Ltd** -Beijing North Information Technology Co., Ltd. [Stock Code: 002987] As a leading financial technology service provider, it provides software and information technology services to customers, mainly financial institutions, and empowers enterprises in their digital construction.。Led by big data, cloud computing, artificial intelligence, blockchain, privacy computing and 5G applications, the company deeply couples cutting-edge technology with financial business scenarios and becomes a new engine for industry development.。 +Beijing North Information Technology Co., Ltd.
[Stock Code: 002987] is a leading financial technology service provider, providing software and information technology services to customers, mainly financial institutions, and empowering enterprises in their digital construction。Led by big data, cloud computing, artificial intelligence, blockchain, privacy computing and 5G applications, the company deeply couples cutting-edge technology with financial business scenarios and becomes a new engine for industry development。 -Beijing North has built a universal blockchain public service platform based on FISCO BCOS, which has core modules such as blockchain distributed ledger, encryption algorithm, data storage, network protocol, consensus mechanism, smart contract, application API interface, etc.。The platform supports business information chaining, financial transactions, contract documents and financial data depository services, optimizes business processes and improves system operational efficiency.。The platform supports financial industry applications and can be used in supply chain finance, bill management, cross-border settlement, payroll and other scenarios.。Using encryption algorithms, public and private key systems, consensus algorithms, timestamps and other technologies, the platform can issue financial blockchain certificates to enhance financial security, and the relevant invention patents have been granted by the State Intellectual Property Office.。 +Beijing North has built a universal blockchain public service platform based on FISCO BCOS, which has core modules such as blockchain distributed ledger, encryption algorithm, data storage, network protocol, consensus mechanism, smart contract, application API interface, etc。The platform supports business information chaining, financial transactions, contract documents and financial data depository services, optimizes business processes and improves system operational efficiency。The platform supports financial industry applications and can be used in supply
chain finance, bill management, cross-border settlement, payroll and other scenarios。Using encryption algorithms, public and private key systems, consensus algorithms, timestamps and other technologies, the platform can issue financial blockchain certificates to enhance financial security, and the relevant invention patents have been granted by the State Intellectual Property Office。 ![](../../images/community/partner/img_1.png) **iFLYTEK Corporation Limited** -iFLYTEK Co., Ltd. is a national software enterprise specializing in intelligent voice and language technology, artificial intelligence technology research, software and chip product development, voice information services and e-government system integration.。In 2008, iFLYTEK was listed on the Shenzhen Stock Exchange.。 +iFLYTEK Co., Ltd. is a national software enterprise specializing in intelligent voice and language technology, artificial intelligence technology research, software and chip product development, voice information services and e-government system integration。In 2008, iFLYTEK was listed on the Shenzhen Stock Exchange。 -iFLYTEK is committed to building a new blockchain infrastructure based on blockchain for digital identity, public data sharing and intelligent perception for governments, industries and users, realizing the full interconnection of data elements, solving the trusted circulation of data across regions, industries and systems, and promoting the transformation of the digital economy.。The company has joined hands with FISCO BCOS to provide blockchain infrastructure platforms and blockchain application solutions, and has launched open alliance chain services, certification traceability platforms, copyright protection systems, supply chain finance financing platforms, etc.。 +iFLYTEK is committed to building a new blockchain infrastructure based on blockchain for digital identity, public data sharing and intelligent perception for governments, industries and users, realizing the full 
interconnection of data elements, solving the trusted circulation of data across regions, industries and systems, and promoting the transformation of the digital economy。The company has joined hands with FISCO BCOS to provide blockchain infrastructure platforms and blockchain application solutions, and has launched open alliance chain services, certification traceability platforms, copyright protection systems, supply chain finance financing platforms, etc。 ![](https://img-blog.csdnimg.cn/5d6e5a309d4343519f15ec191fe27c9b.png) -**Nanjing Anlian Data Technology Co., Ltd.** +**Nanjing Anlian Data Technology Co., Ltd** -Nanjing Anchan Data Technology Co., Ltd. is a professional and leading blockchain company in China, with research and development direction covering blockchain, big data analysis and other fields, and has successfully applied FISCO BCOS in logistics, finance, traceability, certificate storage and other business scenarios, greatly reducing the development cost of blockchain application layer.。 +Nanjing Anchan Data Technology Co., Ltd. is a professional and leading blockchain company in China, with research and development direction covering blockchain, big data analysis and other fields, and has successfully applied FISCO BCOS in logistics, finance, traceability, certificate storage and other business scenarios, greatly reducing the development cost of blockchain application layer。 ![](../../images/community/partner/ctk.jpeg) **Xiamen Hash Technology Co., Ltd** -Xiamen Hash Technology Co., Ltd. was established in April 2018, affiliated to Beijing Hash Digital Road Group, is the director unit of Zhongguancun Beijing Green Carbon Sink Research Institute, is committed to the development and service of carbon sink industry chain and ecological value of the green ecological technology company.。 +Xiamen Hash Technology Co., Ltd. 
was established in April 2018 and is affiliated to Beijing Hash Digital Road Group. It is the director unit of the Zhongguancun Beijing Green Carbon Sink Research Institute and a green ecological technology company committed to the development of and services for the carbon sink industry chain and ecological value。 -The company insists on independent innovation and mastery of its own core technology, takes GEP as the breakthrough point, and builds a number of national, provincial and municipal ecological value development and carbon neutral systems based on the underlying blockchain platform FISCO BCOS.。Turn digital "green water and green mountains" into real value "golden mountains and silver mountains," and provide full-chain ecological value services for local government agencies and financial resources.。 +The company insists on independent innovation and mastery of its own core technology, takes GEP as the breakthrough point, and builds a number of national, provincial and municipal ecological value development and carbon neutral systems based on the underlying blockchain platform FISCO BCOS。It turns digital "green water and green mountains" into real-value "golden mountains and silver mountains," and provides full-chain ecological value services for local government agencies and financial resources。 ![](https://img-blog.csdnimg.cn/332ceb0c12fb403fb1a56226753c5327.png) **Entropy Chain Technology (Fujian) Co., Ltd** -Entropy Chain Technology (Fujian) Co., Ltd.
is a comprehensive service provider focusing on industrial digital applications.。In 2017, the company took the lead in laying out blockchain technology, building a leading blockchain industry ecological service platform, and providing advanced blockchain solutions for the digital development of the industry.。It has its own technical solutions for blockchain supply chain finance, agricultural traceability, distributed commerce, data storage and data assetization.。As the first president unit of Fujian Blockchain Association and the only operator designated by BSN Fujian Blockchain Backbone Network, Entropy Chain Technology brings together the world's leading blockchain industry resources and has professional practical application and market development service capabilities.。 +Entropy Chain Technology (Fujian) Co., Ltd. is a comprehensive service provider focusing on industrial digital applications。In 2017, the company took the lead in laying out blockchain technology, building a leading blockchain industry ecological service platform, and providing advanced blockchain solutions for the digital development of the industry。It has its own technical solutions for blockchain supply chain finance, agricultural traceability, distributed commerce, data storage and data assetization。As the first president unit of Fujian Blockchain Association and the only operator designated by BSN Fujian Blockchain Backbone Network, Entropy Chain Technology brings together the world's leading blockchain industry resources and has professional practical application and market development service capabilities。 ![](https://img-blog.csdnimg.cn/58e8010398fb4754b9324b72cd79face.png) **shanghai jiuyu software system co ltd** -Shanghai Jiuyu Software System Co., Ltd. was established in 2013. It is a professional software development and system integration enterprise jointly invested by Shanghai Jiushi Company and Shanghai Public Transport Card Co., Ltd.。 +Shanghai Jiuyu Software System Co., Ltd. 
was established in 2013. It is a professional software development and system integration enterprise jointly invested by Shanghai Jiushi Company and Shanghai Public Transport Card Co., Ltd。 -Together with FISCO BCOS, Shanghai Jiuyu has first-class technology in card-based business, mobile payment, urban public transport "all-in-one card," "one-ticket transfer" fund clearing and clearing, intelligent terminals, data services and other industries around the two core technologies of payment clearing and consumption exchange.;The company integrates industry knowledge, application development, system integration, operation management, project implementation and value-added services, adheres to the development strategy of "independent innovation, own brand, based in Shanghai, radiation around," and provides customers with industry solutions, system integration services and machine sales in financial electronics, social information and other industries.。The company has obtained 23 software copyrights and 17 software product certification certificates in the past 5 years.。 +Together with FISCO BCOS, Shanghai Jiuyu has first-class technology in card-based business, mobile payment, urban public transport "all-in-one card," "one-ticket transfer" fund clearing and settlement, intelligent terminals, data services and other industries, built around the two core technologies of payment clearing and consumption exchange;The company integrates industry knowledge, application development, system integration, operation management, project implementation and value-added services, adheres to the development strategy of "independent innovation, own brand, based in Shanghai, radiating outward," and provides customers with industry solutions, system integration services and machine sales in financial electronics, social information and other industries。The company has obtained 23 software copyrights and 17 software product certification certificates in the past 5 years。
![](../../images/community/partner/img_2.png)
**Shanghai Carbon Information Technology Co., Ltd**
-Shanghai Carbon Information Technology Co., Ltd. is a carbon-neutral digital financial technology company.。The company has been deeply involved in the financial digital service industry for a long time, and has strong capabilities in software and small program development, digitization and operation management.。
+Shanghai Carbon Information Technology Co., Ltd. is a carbon-neutral digital financial technology company. Long engaged in the financial digital services industry, it has strong capabilities in software and mini-program development, digitization and operations management.
-At present, the company's strategy focuses on "carbon-neutral digitalization" and "collaborative low-carbon," and is committed to providing fast and convenient carbon emission reduction and carbon-neutral services for governments and enterprises through the SaaS platform, as well as helping the construction and operation of carbon inclusion system, low-carbon data credit enhancement and other diversified services.。
+At present, the company's strategy focuses on "carbon-neutral digitalization" and "collaborative low carbon." Through its SaaS platform, it is committed to providing governments and enterprises with fast, convenient carbon reduction and carbon-neutrality services, as well as diversified services such as building and operating carbon-inclusion systems and low-carbon data credit enhancement.
-Based on the underlying blockchain platform FISCO BCOS, the company has built a low-carbon chain service platform, providing new practical ideas for enabling low-carbon emission reduction through science and technology.。Carbon technology partners include a number of low-carbon field of professional institutions, service customers include a number of large companies and a number of large and medium-sized conference。
+Based on the underlying blockchain platform FISCO BCOS, the company has built a low-carbon chain service platform, offering new and practical approaches to enabling low-carbon emission reduction through technology. Its partners include professional institutions in the low-carbon field, and its customers include a number of large companies as well as large and medium-sized conferences.
![](../../images/community/partner/img_3.png)
**Shanghai Wanglian Information Technology Co., Ltd**
-Shanghai Wang Chain Information Technology Co., Ltd. was established in 2016, is a high-speed growth of national high-tech enterprises, the State Ministry of Industry and Information Technology Blockchain Key Laboratory member units, headquartered in Shanghai, in Ningbo, Hefei, Changsha, Xi'an, Jakarta, Singapore has branches。In 2018, it was listed in the "Top 100 Chinese Blockchain Enterprises List" of the Ministry of Industry and Information Technology's CCID Blockchain.;In 2020, it was awarded the "Top 10 Innovative Companies in Asia";In 2022, it was selected as the "2022 China Top 100 Industrial Blockchain"。
+Shanghai Wanglian Information Technology Co., Ltd., established in 2016, is a fast-growing national high-tech enterprise and a member unit of the Ministry of Industry and Information Technology's Blockchain Key Laboratory. Headquartered in Shanghai, it has branches in Ningbo, Hefei, Changsha, Xi'an, Jakarta and Singapore. In 2018, it was listed in the "Top 100 Chinese Blockchain Enterprises List" by the MIIT's CCID; in 2020, it was named one of the "Top 10 Innovative Companies in Asia"; in 2022, it was selected for the "2022 China Top 100 Industrial Blockchain."
Relying on data center operations and cloud computing, Wanglian Technology has developed its own cloud management platform and cloud operating system, providing cutting-edge high-tech solutions to customers around the world across application scenarios including smart agriculture, blockchain traceability, supply chain finance, the industrial Internet, edge computing, metaverse model rendering, high-speed distributed storage and autonomous organization management, and has successfully served hundreds of customers in dozens of industries including aviation, government, finance, energy, agriculture and medical care.
-Wang Chain Technology and FISCO BCOS have always maintained a good cooperative relationship, based on FISCO BCOS developed the VoneBaaS blockchain infrastructure service platform;The company also continues to participate in community building and has been on the FISCO BCOS contribution list many times.。
+Wanglian Technology has long maintained a close cooperative relationship with FISCO BCOS, developing the VoneBaaS blockchain infrastructure service platform on top of it; the company also continues to participate in community building and has appeared on the FISCO BCOS contribution list many times.
![](https://img-blog.csdnimg.cn/289b2382adc840369106d00dcc42ad08.png)
**Shanghai Xinzhi Software Co., Ltd**
-Founded in 1994, Shanghai Xinzhi Software Co., Ltd. 
is a leading software service provider in China.。Headquartered in Shanghai, the company has set up branches in Beijing, Shenzhen, Dalian, Chongqing, Chengdu, Gui'an, Wuhan, Tokyo and other places.。In December 2020, Xinzhi Software officially landed on the A-share board, ushering in a new chapter of development for the company.。
+Founded in 1994, Shanghai Xinzhi Software Co., Ltd. is a leading software service provider in China. Headquartered in Shanghai, the company has set up branches in Beijing, Shenzhen, Dalian, Chongqing, Chengdu, Gui'an, Wuhan, Tokyo and other places. In December 2020, Xinzhi Software was officially listed on the A-share market, ushering in a new chapter of development for the company.
-Relying on more than 20 years of practical experience in serving the financial industry and based on the research and development capabilities of the four basic laboratories of cloud computing, big data, artificial intelligence and blockchain, the company provides financial customers with comprehensive information solutions including channels, cores, data and management to meet the needs of financial customers to achieve intelligence, security compliance and business diversification.。At the same time, the company's related products and services have been widely used in telecommunications, medical, automotive and many other fields, including China Taibao, China Life, PICC, Bank of Communications, Construction Bank, China UnionPay, China Telecom, Shanghai Automobile, Fosun Group and many other high-quality customers.。
+Relying on more than 20 years of practical experience serving the financial industry and on the R&D capabilities of its four foundational laboratories (cloud computing, big data, artificial intelligence and blockchain), the company provides financial customers with comprehensive information solutions covering channels, core systems, data and management, meeting their needs for intelligence, security compliance and business diversification. Its products and services are also widely used in telecommunications, medical, automotive and many other fields, serving high-quality customers such as China Taibao, China Life, PICC, Bank of Communications, Construction Bank, China UnionPay, China Telecom, Shanghai Automobile and Fosun Group.
![](https://img5.tianyancha.com/logo/product/b75785ccf33885df0c815de4129a697c.png@!f_200x200)
**Shanghai Xinyi Technology Co., Ltd**
-Shanghai Xinyi Technology Co., Ltd. is a medical blockchain and big data leader located in the Shanghai Blockchain Ecological Valley, and is a core member and technical support unit of the National Health Commission and the medical blockchain in Shanghai, Shandong, Hubei, Zhejiang, Sichuan, Jiangxi and other provinces and cities.;It is also a partner of many enterprises and mainstream platforms such as many health care big data centers.。
+Shanghai Xinyi Technology Co., Ltd. is a leader in medical blockchain and big data located in the Shanghai Blockchain Ecological Valley. It is a core member and technical support unit for medical blockchain work under the National Health Commission and in Shanghai, Shandong, Hubei, Zhejiang, Sichuan, Jiangxi and other provinces and cities; it is also a partner of many enterprises and mainstream platforms, including a number of healthcare big data centers.
-Xinyi Technology provides FISCO BCOS blockchain technical support for the construction of a number of medical platforms, including "Xinyi Chain Electronic Medical Record Folder Application," "Blockchain-based Commercial Insurance Settlement and Audit Support Service," "Blockchain-based Quality Service and Supervision Platform for the Circulation of Chinese Herbal Pieces," etc.。The company participated in Guangzhou National Laboratory, Shanghai Shenkang, National Health and Medical Big Data (North) Center, a number of specialized disease banks and science and technology big data platform construction, 
covering nearly 100 specialized disease banks.。
+Xinyi Technology provides FISCO BCOS blockchain technical support for the construction of a number of medical platforms, including the "Xinyi Chain Electronic Medical Record Folder Application," the "Blockchain-based Commercial Insurance Settlement and Audit Support Service," and the "Blockchain-based Quality Service and Supervision Platform for the Circulation of Chinese Herbal Pieces." The company has participated in science and technology big data platform construction for the Guangzhou National Laboratory, Shanghai Shenkang, the National Health and Medical Big Data (North) Center and a number of specialized disease banks, covering nearly 100 specialized disease banks.
![](https://img-blog.csdnimg.cn/2c768788331843469936a1796277d2ff.png)
-**Sheung Waihu Information Technology Co., Ltd.**
+**Shanghai Haihu Information Technology Co., Ltd**
-Shanghai Haihu Information Technology Co., Ltd. is a high-tech enterprise with blockchain technology as its core, focusing on blockchain technology and digital asset services. In 2018, it passed the functional test of the blockchain system of the Ministry of Industry and Information Technology, and participated in the preparation of the Blue Book of Blockchain and Blockchain Technology Security Standards of the Ministry of Industry and Information Technology.。Apply FISCO BCOS to digitize agriculture, ensure the identity and data credibility of digital agriculture subjects, and help enterprises, governments, banks and other institutions to share trusted data.。
+Shanghai Haihu Information Technology Co., Ltd. is a high-tech enterprise with blockchain technology at its core, focusing on blockchain technology and digital asset services. In 2018, it passed the Ministry of Industry and Information Technology's blockchain system functional test, and it participated in preparing the MIIT's Blockchain Blue Book and blockchain technology security standards. The company applies FISCO BCOS to digitize agriculture, ensuring the identity and data credibility of digital agriculture participants and helping enterprises, governments, banks and other institutions share trusted data.
![](https://img-blog.csdnimg.cn/fef9f8c1236f45389ea7f8ffee06ed04.png)
-**Shenzhen Hanlan Block Chain Real Estate Co., Ltd.**
+**Shenzhen Hanlan Block Chain Real Estate Co., Ltd**
-Shenzhen Hanlan Blockchain Real Estate Co.。
+Shenzhen Hanlan Blockchain Real Estate Co.
![](../../images/community/partner/green_credit_tech.png)
-**China Carbon Green Letter Technology (Shenzhen) Co., Ltd.**
+**China Carbon Green Letter Technology (Shenzhen) Co., Ltd**
-Founded on May 17, 2018, China Carbon Lvxin Technology (Shenzhen) Co., Ltd. is a national high-tech enterprise under China Carbon Neutral Development Group that focuses on the dual-carbon economy with blockchain.。
+Founded on May 17, 2018, China Carbon Lvxin Technology (Shenzhen) Co., Ltd. is a national high-tech enterprise under China Carbon Neutral Development Group that focuses on the dual-carbon economy through blockchain.
-Lvxin Technology creates a financial-grade trusted collaboration network for the industry to collaborate across organizations, drive value consensus, and refine digital assets.-Green letter chain。By reducing the threshold and cost of using blockchain technology, we will promote more enterprises and individuals around the world to establish digital credit based on blockchain, realize business innovation and connect green finance.。
+Lvxin Technology has built the Green Letter Chain, a financial-grade trusted collaboration network that enables the industry to collaborate across organizations, drive value consensus and refine digital assets. By lowering the threshold and cost of using blockchain technology, it promotes blockchain-based digital credit, business innovation and connection to green finance for more enterprises and individuals around the world.
![](../../images/community/partner/XGD.png)
**Shenzhen Xinguodu Digital Technology Co., Ltd**
-Shenzhen Xinguodu Digital Technology Co., Ltd. was established in 2016, is a listed company Xinguodu shares.(300130) Company。Relying on digital technology capabilities, Xindu Digital Branch has formed four core business segments: digital government and enterprises, digital credit, digital security and digital employment, mainly providing data governance, data products, data application development, data asset operation, data security and other services.。
+Shenzhen Xinguodu Digital Technology Co., Ltd., established in 2016, is a company under the listed firm Xinguodu (300130). Relying on its digital technology capabilities, it has formed four core business segments: digital government and enterprise, digital credit, digital security and digital employment, mainly providing data governance, data products, data application development, data asset operation, data security and other services.
-Adhering to the mission of "data asset operator," taking big data, artificial intelligence and blockchain technology as the core, Xinguodu Digital Science Co., Ltd. cooperates with FISCO BCOS in the fields of blockchain underlying platform and privacy computing, providing data analysis, platform technology, blockchain products and operation services, providing comprehensive solutions for digital services for governments, financial institutions and state-owned enterprises, and is committed to contributing to the construction of social integrity system.。
+Adhering to its mission as a "data asset operator" and with big data, artificial intelligence and blockchain technology at its core, Xinguodu Digital cooperates with FISCO BCOS on underlying blockchain platforms and privacy computing, providing data analysis, platform technology, blockchain products and operation services. It delivers comprehensive digital-service solutions for governments, financial institutions and state-owned enterprises, and is committed to contributing to the construction of a social credibility system.
![](https://storage-public.zhaopin.cn/org/logo/1644992846111082045/7bf6f160950405e91717b28e1d89586b330aa8db533bb5d2452c2f62b5a38136fa86a38250e8e3ab.jpg)
-**Gold Public Service (Qingdao) Co., Ltd.**
+**Digital Gold Public Service (Qingdao) Co., Ltd**
-Digital Gold Public Service (Qingdao) Co., Ltd. 
is a digital financial technology enterprise, a digital RMB pilot landing institution and a member of the standard group, with business in the fields of green, low-carbon and digital finance.。The company is deeply involved in technology research and development, product innovation, operation expansion, resource synergy and ecological construction services in the process of digital RMB pilot promotion, providing "scenarios" for governments, enterprises and financial institutions to access the digital RMB system and carry out digital RMB activities.+技术+Service "support to promote digital financial technology ecological construction and industrial upgrading.。
+Digital Gold Public Service (Qingdao) Co., Ltd. is a digital financial technology enterprise, a digital RMB pilot implementation institution and a member of the standards group, with business in the fields of green, low-carbon and digital finance. In the course of the digital RMB pilot rollout, the company is deeply involved in technology R&D, product innovation, operation expansion, resource synergy and ecosystem-building services, providing "scenario + technology + service" support for governments, enterprises and financial institutions to access the digital RMB system and carry out digital RMB activities, thereby promoting the digital fintech ecosystem and industrial upgrading.
-The company has joined hands with FISCO BCOS to launch scenario-based solutions around the field of personal carbon inclusion, including the use of blockchain for trusted data storage, to solve the problem of multi-party trust of data.;Link the participants of carbon inclusion to build an open, co-created and credible carbon inclusion alliance chain ecology.。
+The company has joined hands with FISCO BCOS to launch scenario-based solutions around personal carbon inclusion, including the use of blockchain for trusted data storage to solve the problem of multi-party trust in data; it links carbon-inclusion participants to build an open, co-created and credible carbon-inclusion consortium chain ecosystem.
![](../../images/community/partner/digit_guangdong.jpeg)
**Digital Guangdong Network Construction Co., Ltd**
-As a digital government construction and operation center in Guangdong Province and a secondary electronic institution in China, Digital Guangdong Network Construction Co., Ltd. adheres to the public welfare, basic, platform and security positioning of digital government, focuses on digital government public services, infrastructure, platform construction, security protection and other main responsibilities, strongly supports the high-quality economic and social development of Guangdong Province, vigorously promotes the modernization of the governance system and governance capacity, and makes important contributions to actively exploring the Guangdong path of Chinese。The company currently has nearly 3,000 employees, of whom more than 70% are technical.。
+As the digital government construction and operation center of Guangdong Province and a second-tier institution under China Electronics, Digital Guangdong Network Construction Co., Ltd. adheres to the public-welfare, foundational, platform and security positioning of digital government; focuses on its main responsibilities of digital government public services, infrastructure, platform construction and security protection; strongly supports high-quality economic and social development in Guangdong Province; vigorously promotes the modernization of the governance system and governance capacity; and makes important contributions to actively exploring Guangdong's path of Chinese-style modernization. The company currently has nearly 3,000 employees, of whom more than 70% are technical staff.
Digital Guangdong has built a province-city integrated government cloud platform, network sharing platform and public support platform, continuing to consolidate the foundation of digital government, and has delivered a series of important achievements in digital government reform and construction, such as the Guangdong provincial affairs platform, the Guangdong business communication platform, the Guangdong government service network, the "one network management" Guangdong governance platform and Guangdong Fair, fully supporting Guangdong's digital government reform and construction at the forefront of the country in improving people's livelihood services and optimizing the business environment. Digital Guangdong will join hands with FISCO BCOS to continuously explore blockchain application scenarios in the field of smart government affairs.
@@ -208,82 +208,82 @@ Digital Guangdong has now built a provincial and municipal integration of govern
**Sichuan Hongxin Software Co., Ltd**
-Sichuan Hongxin Software Co., Ltd. is one of the flagship enterprises of Sichuan Changhong. 
After more than ten years of rapid development, it has become a high-tech enterprise with four core capabilities, focusing on the research and development, consulting and implementation of cutting-edge IT technology in the three major business areas of smart enterprises, smart cities and military-civilian integration.。Taking advantage of its advantages in the field of smart home, Changhong has joined hands with FISCO BCOS to establish a trusted Internet of Things. Hongxin Software undertakes the identity authentication module and uses the FISCO BCOS certificate authentication system to improve the group / node access verification mechanism in the Internet of Things and provide a trusted data base for cross-brand smart home scenarios.。
+Sichuan Hongxin Software Co., Ltd. is one of the flagship enterprises of Sichuan Changhong. After more than ten years of rapid development, it has become a high-tech enterprise with four core capabilities, focusing on the research and development, consulting and implementation of cutting-edge IT technology across three major business areas: smart enterprises, smart cities and military-civilian integration. Leveraging its advantages in the smart home field, Changhong has joined hands with FISCO BCOS to establish a trusted Internet of Things. Hongxin Software undertakes the identity authentication module, using the FISCO BCOS certificate authentication system to improve the group/node access verification mechanism in the Internet of Things and provide a trusted data foundation for cross-brand smart home scenarios.
![](https://user-images.githubusercontent.com/93572056/155054716-90294c2f-831d-4385-b5d1-7917baabce09.png)
-**Sichuan Everything Digital Creation Technology Co., Ltd.**
+**Sichuan Everything Digital Creation Technology Co., Ltd**
-Sichuan Everything Digital Innovation Technology Co., Ltd. is a new economic enterprise with scientific and technological innovation capabilities.。The core product of the enterprise, CoT Network, is an autonomous and controllable multi-elastic network computing system that integrates blockchain distributed computing and edge computing technologies to provide blockchain systems and applications, multi-computing software and hardware products, blockchain services, and consulting and implementation of overall industry solutions for various industries and scenarios.。At present, Sichuan Everything based on FISCO BCOS has landed community intelligent governance and service, intelligent party building and other direction application cases.。
+Sichuan Everything Digital Innovation Technology Co., Ltd. is a new-economy enterprise with scientific and technological innovation capabilities. Its core product, CoT Network, is an autonomous and controllable multi-elastic network computing system that integrates blockchain, distributed computing and edge computing technologies, providing blockchain systems and applications, multi-computing software and hardware products, blockchain services, and consulting and implementation of overall industry solutions for various industries and scenarios. Based on FISCO BCOS, Sichuan Everything has landed application cases in areas such as intelligent community governance and services and intelligent party building.
-The core team of the company is composed of overseas returned experts and managers of the world's top 500 enterprises, with a number of intellectual property rights and high-quality partners.。
+The company's core team is composed of returned overseas experts and managers from Fortune Global 500 enterprises, and it holds a number of intellectual property rights and works with high-quality partners.
![](../../images/community/partner/taikang_pension.jpeg)
-**Taikang Pension Insurance Co., Ltd.**
+**Taikang Pension Insurance Co., Ltd**
-Taikang Pension Insurance Co., Ltd. 
is a national, joint-stock professional pension insurance company jointly funded by Taikang Life Insurance Co., Ltd. and Taikang Asset Management Co., Ltd.。The business scope covers three major areas: group insurance, enterprise annuity and personal pension insurance.。At present, 23 branches and 9 enterprise annuity centers have been established nationwide.。
+Taikang Pension Insurance Co., Ltd. is a national, joint-stock professional pension insurance company jointly funded by Taikang Life Insurance Co., Ltd. and Taikang Asset Management Co., Ltd. Its business scope covers three major areas: group insurance, enterprise annuities and personal pension insurance. To date, it has established 23 branches and 9 enterprise annuity centers nationwide.
-As one of the five professional pension insurance companies in China, Taikang Pension has always pursued the development strategy of "specialization, standardization and internationalization," joined hands with Taikang Assets, relying on Taikang Life's 17 years of rich experience in employee benefits and supplementary pension insurance management, and adhering to the business strategy of "customer-centric, value-oriented, strong employee welfare and large annuity," and is committed to providing enterprises and employees with the best professional life insurance, injury accident insurance。Taikang Pension has landed multiple applications based on FISCO BCOS, and will continue to work with FISCO BCOS to explore more solutions for insurance health management scenarios in the future.。
+As one of the five professional pension insurance companies in China, Taikang Pension has always pursued a development strategy of "specialization, standardization and internationalization." Working with Taikang Assets, relying on Taikang Life's 17 years of rich experience in employee benefits and supplementary pension insurance management, and adhering to the business strategy of "customer-centric, value-oriented, strong employee welfare and large annuity," it is committed to providing enterprises and employees with professional life insurance and accident insurance services. Taikang Pension has landed multiple applications based on FISCO BCOS, and will continue to work with FISCO BCOS to explore more solutions for insurance health management scenarios.
![](https://img-blog.csdnimg.cn/2662383423db48dc8892d3b1c29f45ad.png)
**Wangao Information Technology Co., Ltd**
-Wangao was established in 2014 and Zhuhai Branch was established in 2015 to provide long-term, stable and sufficient human resources guarantee for Macao. Wangao team is divided into different businesses and is responsible for special management.。The technical R & D team is stable and experienced in projects, aiming to provide customers with better technical advice and development services。
+Wangao was established in 2014, and its Zhuhai branch was established in 2015 to provide a long-term, stable and sufficient human resources guarantee for Macao. The Wangao team is organized by business line with dedicated management. Its technical R&D team is stable and experienced in projects, aiming to provide customers with better technical consulting and development services.
-At present, the company has received FISCO BCOS professional training personnel have 8 people, and plans to let more personnel to accept FISCO BCOS professional training。Wangao focuses on mobile and data application development. 
During the 2019 epidemic, it successfully used FISCO BCOS technology to develop Macao health codes for the Macao government to respond to the epidemic, and helped Guangdong and Macao achieve cross-border mutual recognition of health codes in May 2020.。So far, the Guangdong-Macao health code cross-border mutual recognition system has served hundreds of millions of customs clearance。 +At present, the company has received FISCO BCOS professional training personnel have 8 people, and plans to let more personnel to accept FISCO BCOS professional training。Wangao focuses on mobile and data application development. During the 2019 epidemic, it successfully used FISCO BCOS technology to develop Macao health codes for the Macao government to respond to the epidemic, and helped Guangdong and Macao achieve cross-border mutual recognition of health codes in May 2020。So far, the Guangdong-Macao health code cross-border mutual recognition system has served hundreds of millions of customs clearance。 ![](https://img-blog.csdnimg.cn/b887aab7a31f4711aece65fd258201c0.png) -**Wuhan Chain Times Technology Co., Ltd.** +**Wuhan Chain Times Technology Co., Ltd** -Wuhan Chain Times Technology Co., Ltd. and FISCO BCOS to solve the blockchain application development "last mile" problem, the research and development of the "inBC blockchain depository service system" to format and coordinate the data on the chain, to achieve blockchain depository one-click touch.;"Falcon Zero Code Development Platform" can efficiently complete system construction and deployment through graphical drag and drop, parameter configuration, etc.。 +Wuhan Chain Times Technology Co., Ltd. 
and FISCO BCOS to solve the blockchain application development "last mile" problem, the research and development of the "inBC blockchain depository service system" to format and coordinate the data on the chain, to achieve blockchain depository one-click touch;"Falcon Zero Code Development Platform" can efficiently complete system construction and deployment through graphical drag and drop, parameter configuration, etc。 ![](https://img-blog.csdnimg.cn/e70b73e27d6e43b8859ba1adb75c4010.png) -**Wuhan Lingshengwang Chain Technology Co., Ltd.** +**Wuhan Lingshengwang Chain Technology Co., Ltd** -Wuhan Lingshengwang Chain Technology Co., Ltd. was established in November 2018 as the secretary-general unit of Wuhan Blockchain Association, formerly known as Feiwang Technology Blockchain Division.。The company is the leading domestic blockchain+Smart city digital rights service provider, around the "smart chain city" construction, in government affairs, Internet of Things, data security and other aspects of the creation and landing of a series of products and solutions, including FISCO BCOS-based development of the "chain tax pass" is the first blockchain project in the field of tax governance in the country.;"Ming Chef Liang Zao Block Chain Internet of Things Platform" was selected as a new information consumption demonstration project of the Ministry of Industry and Information Technology in 2020。 +Wuhan Lingshengwang Chain Technology Co., Ltd. 
was established in November 2018 as the secretary-general unit of Wuhan Blockchain Association, formerly known as Feiwang Technology Blockchain Division。The company is the leading domestic blockchain+Smart city digital rights service provider, around the "smart chain city" construction, in government affairs, Internet of Things, data security and other aspects of the creation and landing of a series of products and solutions, including FISCO BCOS-based development of the "chain tax pass" is the first blockchain project in the field of tax governance in the country;"Ming Chef Liang Zao Block Chain Internet of Things Platform" was selected as a new information consumption demonstration project of the Ministry of Industry and Information Technology in 2020。 -At the same time, the company was approved to become a talent capability evaluation agency in key areas of industry and information technology (blockchain), and has obtained 15 blockchain-related soft works and declared 8 patents.。 +At the same time, the company was approved to become a talent capability evaluation agency in key areas of industry and information technology (blockchain), and has obtained 15 blockchain-related soft works and declared 8 patents。 ![](../../images/community/partner/yeepay.png) **Epay Payments Limited** -Established in 2003, Epay is the first third-party payment institution to obtain the Payment Business License of the People's Bank of China.。In 2006, Yibao established the B-side industry payment model to provide enterprise customers with one-stop digital transaction service solutions integrating technology, products and services.。At present, Epay has served more than one million merchants, covering aviation, tourism, retail, energy, automotive, Internet 3.0, cross-border, finance, government, local life and many other industries head customers, business scale in the forefront of the industry.。 +Established in 2003, Epay is the first third-party payment institution to obtain the Payment 
Business License of the People's Bank of China。In 2006, Yibao established the B-side industry payment model to provide enterprise customers with one-stop digital transaction service solutions integrating technology, products and services。At present, Epay has served more than one million merchants, covering head customers in aviation, tourism, retail, energy, automotive, Internet 3.0, cross-border, finance, government, local life and many other industries, with a business scale at the forefront of the industry。

-Based on FISCO BCOS, the company has hatched a fast non-Caton transaction-level alliance chain, the billion chain, which pioneered the payment node contract rules and introduced credibility nodes to ensure the permanent retention and credible circulation of digital assets.。By building a technical base focusing on the Web3.0 economic system, we will continue to provide enterprise customers with fast, safe and stable on-chain transaction solutions, build an open alliance chain of ecological integration, and lead the upgrade and leap of Web3.0 trusted ecology.。
+Based on FISCO BCOS, the company has incubated a fast, lag-free transaction-grade consortium chain, the Billion Chain, which pioneered payment-node contract rules and introduced credibility nodes to ensure the permanent retention and trusted circulation of digital assets。By building a technical base focused on the Web3.0 economic system, it continues to provide enterprise customers with fast, safe and stable on-chain transaction solutions, build an open and ecologically integrated consortium chain, and lead the upgrade and leap of the Web3.0 trusted ecology。

![](https://img-blog.csdnimg.cn/be9d01ea303a4e3cb2a3d3b0787995c0.png)

-**Yi Lianzhong Information Technology Co., Ltd.**
+**Yi Lianzhong Information Technology Co., Ltd**

-Yi Lianzhong Information Technology Co., Ltd. 
is a listed company in the field of domestic people's livelihood information services.。Since its establishment in 2000, Yi Lianzhong has kept in mind the corporate mission of "Let the world have no sad life," driven by big data, focusing on "medical security, health, human resources and social security" and other livelihood areas, providing a full range of overall solutions and products and technical service system.。The company makes full use of technologies such as big data, blockchain and artificial intelligence to continuously develop and improve product innovation capabilities and promote digital innovation in its main business.。 +Yi Lianzhong Information Technology Co., Ltd. is a listed company in the field of domestic people's livelihood information services。Since its establishment in 2000, Yi Lianzhong has kept in mind the corporate mission of "Let the world have no sad life," driven by big data, focusing on "medical security, health, human resources and social security" and other livelihood areas, providing a full range of overall solutions and products and technical service system。The company makes full use of technologies such as big data, blockchain and artificial intelligence to continuously develop and improve product innovation capabilities and promote digital innovation in its main business。 ![](https://img-blog.csdnimg.cn/a90c74530b7b4d39b237b220fe767499.png) -**Zhejiang Tianyan Weizhen Network Technology Co., Ltd.** +**Zhejiang Tianyan Weizhen Network Technology Co., Ltd** -Zhejiang Tianyan Weizhen Network Technology Co., Ltd. is the country's leading rural revitalization digital service overall solution provider, successfully applied FISCO BCOS in the whole process of agricultural traceability, is currently integrating blockchain and Internet of Things technology research and development of intelligent agricultural cloud platform, explore blockchain.+Agricultural Finance Application Scenarios。 +Zhejiang Tianyan Weizhen Network Technology Co., Ltd. 
is the country's leading provider of overall digital service solutions for rural revitalization。It has successfully applied FISCO BCOS across the whole process of agricultural traceability, is currently integrating blockchain and Internet of Things technology to develop an intelligent agriculture cloud platform, and is exploring "blockchain + agricultural finance" application scenarios。

![](../../images/community/partner/chinatower.png)

**CHINA TOWER COMPANY LIMITED**

-China Tower Co., Ltd. is a large state-owned communications infrastructure service enterprise promoted by the State Council.。Mainly engaged in communication towers and other base station supporting facilities and high-speed rail subway public network coverage, large indoor distribution system construction, maintenance and operation, while relying on unique resources to provide information applications and intelligent power exchange, backup, charging and other energy application services to the community, is China's mobile communication infrastructure construction "national team" and 5G new infrastructure "main force," is the world's largest communication infrastructure service provider。
-China Tower was selected in the 2019 Fortune Global Future 50 (ranked 22nd) and Global Digital Economy 100 (ranked 71st), 2018-2020 was awarded the "Most Valuable Listed Company" of China Securities Golden Bauhinia Award for three consecutive years, and won the "14th Five-Year Most Valuable Listed Company" Special Award in the 11th China Securities Golden Bauhinia Award.。
+China Tower Co., Ltd. 
is a large state-owned communications infrastructure service enterprise promoted by the State Council。It is mainly engaged in the construction, maintenance and operation of communication towers and other base-station supporting facilities, public network coverage for high-speed rail and subways, and large indoor distribution systems; relying on its unique resources, it also provides the community with information applications and energy services such as intelligent battery swapping, power backup and charging。It is the "national team" of China's mobile communication infrastructure construction, the "main force" of 5G new infrastructure, and the world's largest communication infrastructure service provider。
+China Tower was selected as one of the Fortune Global Future 50 (ranked 22nd) and Global Digital Economy 100 (ranked 71st) in 2019, and was named the "Most Valuable Listed Company" of the China Securities Golden Bauhinia Award for three consecutive years from 2018 to 2020。

-China Tower focuses on independent and controllable blockchain technology research and development, with FISCO BCOS development capabilities, is building a blockchain management platform based on FISCO BCOS open source technology。China Tower is also committed to blockchain application research, carrying out the "blockchain+The research results of "Taxation" won the first prize of the first "State-owned Enterprise Digital Scene Innovation Professional Competition" sponsored by SASAC, and the business model received attention from all walks of life and authoritative media reports at home and abroad.。
+China Tower focuses on independent and controllable blockchain technology research and development; with FISCO BCOS development capabilities, it is building a blockchain management platform based on FISCO BCOS open source technology。China Tower is also committed to blockchain application research: the results of its "blockchain + taxation" research won the first prize of the first "State-owned Enterprise Digital Scene Innovation Professional Competition" sponsored by SASAC, and the business model has received attention from all walks of life and reports from authoritative media at home and abroad。

![](../../images/community/partner/img_5.png)

-**CICC Data (Wuhan) Supercomputing Technology Co., Ltd.**
+**CICC Data (Wuhan) Supercomputing Technology Co., Ltd**

-Founded in 2016, CICC Data (Wuhan) Supercomputing Technology Co., Ltd. is located in Wuhan National Network Talent and Innovation Industry Base. It is a high-tech enterprise mainly based on data center outsourcing and operation and maintenance services to carry out cloud computing, blockchain, network and information security, supercomputing, system integration and other information technology services.。
+Founded in 2016, CICC Data (Wuhan) Supercomputing Technology Co., Ltd. is located in Wuhan National Network Talent and Innovation Industry Base. It is a high-tech enterprise that, based mainly on data center outsourcing and operation and maintenance services, carries out cloud computing, blockchain, network and information security, supercomputing, system integration and other information technology services。

Companies work with FISCO BCOS to provide the country's leading secure data base and trusted data services。Based on the underlying technology of cloud computing and blockchain, the company built the CICC data cloud chain platform, which provides users with integrated services for the development, deployment and application of data centers, cloud computing, big data and blockchain, and was named the digital economy pilot demonstration project of the Hubei Provincial Development and Reform Commission in 2022 and selected as the "Top Ten Excellent Application Cases of Blockchain in Hubei Province in 2022."。

@@ -295,96 +295,96 @@ Companies work with FISCO BCOS to provide the country's leading secure data base

**Babbitt College**

-Babbitt is a leading domestic blockchain information and technology service provider, which has developed into an 
ecological platform integrating information content, offline activities, training, incubators, investment and blockchain technology application.。Babbitt College is a brand of education and training under Babbitt. It works together with FISCO BCOS to build a blockchain talent highland and deliver high-quality blockchain talents to the society.。
+Babbitt is a leading domestic blockchain information and technology service provider, which has developed into an ecological platform integrating information content, offline activities, training, incubators, investment and blockchain technology application。Babbitt College is a brand of education and training under Babbitt. It works together with FISCO BCOS to build a blockchain talent highland and deliver high-quality blockchain talents to society。

![](https://img-blog.csdnimg.cn/a459dafa03f045b3a8be2b673f54e5b3.png)

**Beijing Bailiandaojie Education Technology Co., Ltd**

-Bo Chain Education is a leading domestic blockchain and digital education talent service organization, selected by the Ministry of Industry and Information Technology in key areas of talent evaluation and capacity improvement task-taking unit, together with FISCO BCOS launched the "Blockchain Engineering Technology Series Certification Course," students through the certification examination can obtain a national blockchain technology certificate.。
+Bo Chain Education is a leading domestic blockchain and digital education talent service organization and a task-undertaking unit selected by the Ministry of Industry and Information Technology for talent evaluation and capability improvement in key areas。Together with FISCO BCOS, it launched the "Blockchain Engineering Technology Series Certification Course"; students who pass the certification examination can obtain a national blockchain technology certificate。

![](https://img-blog.csdnimg.cn/251b477e43f242ada74c97ee83ea5718.png)

-**Beijing Zhigu Xingtu Education 
Technology Co., Ltd** -Zhigu Xingtu is a cutting-edge science and technology industry-education integration institution with an international vision, providing blockchain technology solutions, while providing universities with laboratory co-construction, professional co-construction and industrial college co-construction services around cutting-edge technology (blockchain, artificial intelligence and other technologies).。 +Zhigu Xingtu is a cutting-edge science and technology industry-education integration institution with an international vision, providing blockchain technology solutions, while providing universities with laboratory co-construction, professional co-construction and industrial college co-construction services around cutting-edge technology (blockchain, artificial intelligence and other technologies)。 -Most of the team members are from Silicon Valley in the United States and have mature new technology research and development capabilities, including the accumulation of blockchain underlying technology.。At the same time, the product and research team has a wealth of experience in the education industry, and the advisory team comes from Stanford University, Carnegie Mellon University and other well-known universities.。Relying on the strong industry-university-research resources of Silicon Valley and global industrial partners, Zhigu Star Map closely follows the industrial development trend, promotes the progress of higher education with new technologies, and strives to cultivate comprehensive talents with international vision and practical ability.。 +Most of the team members are from Silicon Valley in the United States and have mature new technology research and development capabilities, including the accumulation of blockchain underlying technology。At the same time, the product and research team has a wealth of experience in the education industry, and the advisory team comes from Stanford University, Carnegie Mellon University and other well-known 
universities。Relying on the strong industry-university-research resources of Silicon Valley and global industrial partners, Zhigu Star Map closely follows the industrial development trend, promotes the progress of higher education with new technologies, and strives to cultivate comprehensive talents with international vision and practical ability。 ![](../../images/community/partner/img_6.png) -**Guangdong Zhongchuang Wisdom Technology Co., Ltd.** +**Guangdong Zhongchuang Wisdom Technology Co., Ltd** -Guangdong Zhongchuang Wisdom Technology Co., Ltd. was established in 2018 and is engaged in the business of integrating production and education in the digital economy.。The company is an industry-education integration enterprise assessed by the Guangdong Provincial Development and Reform Commission and the vice-chairman unit of the Guangdong Industry-Education Integration Promotion Association, with the positioning of experts in the integration of industry and education in the digital economy.。Through its own advantageous resources and strong integration ability, Zhongchuang Zhike develops the business of integrating production and education in the digital economy, and provides the best solution for the professional construction and personnel training of various colleges and universities in Guangdong Province.。Including: blockchain, big data, artificial intelligence, innovation, network security, industrial Internet six areas, and Guangdong Province released on July 8 this year, "Guangdong Province Digital Economy Development Guidelines" in the focus of the development of digital economy emerging industries are highly compatible.。 +Guangdong Zhongchuang Wisdom Technology Co., Ltd. 
was established in 2018 and is engaged in the business of integrating production and education in the digital economy。The company is an industry-education integration enterprise assessed by the Guangdong Provincial Development and Reform Commission and the vice-chairman unit of the Guangdong Industry-Education Integration Promotion Association, positioning itself as an expert in the integration of industry and education in the digital economy。Through its own advantageous resources and strong integration ability, Zhongchuang Zhike develops the business of integrating production and education in the digital economy and provides the best solutions for the professional construction and personnel training of colleges and universities in Guangdong Province。Its business covers six areas: blockchain, big data, artificial intelligence, innovation and entrepreneurship, network security and the industrial Internet, which are highly compatible with the emerging digital economy industries prioritized in the "Guangdong Province Digital Economy Development Guidelines" released by Guangdong Province on July 8 this year。

-Zhongchuang joined hands with FISCO BCOS to support the 2022 Ministry of Education blockchain national training program project, supported the Shenzhen Municipal Bureau of Human Resources and Social Security blockchain application operator competition, at the same time, is also the Guangdong Provincial Department of Human Resources and Social Security blockchain application operator competition only technical support unit, Guangdong Provincial Department of Education blockchain technology application competition only technical support unit.。
+Zhongchuang joined hands with FISCO BCOS to support the 2022 Ministry of Education blockchain national training program project and supported the Shenzhen Municipal Bureau of Human Resources and Social Security blockchain application operator competition; at the same time, it is the only technical support unit for both the Guangdong Provincial Department of Human Resources and Social Security blockchain application operator competition and the Guangdong Provincial Department of Education blockchain technology application competition。

![](../../images/community/partner/img_7.png)

-**Teaching Chain Technology (Shenzhen) Technology Co., Ltd.**
+**Teaching Chain Technology (Shenzhen) Technology Co., Ltd**

-Education Chain Technology (Shenzhen) Co., Ltd. is a leading international provider of blockchain education and training products and application solutions, committed to building vocational education and training infrastructure in the digital age.。Teaching chain technology focuses on the professional services of blockchain talent development, providing innovative topic declaration, discipline and professional co-construction, blockchain training courses, national blockchain talent skill level certification training, training tour camps and teacher training and other services.。
+Education Chain Technology (Shenzhen) Co., Ltd. is a leading international provider of blockchain education and training products and application solutions, committed to building vocational education and training infrastructure for the digital age。Teaching Chain Technology focuses on professional services for blockchain talent development, providing innovative topic declaration, discipline and major co-construction, blockchain training courses, national blockchain talent skill-level certification training, training tour camps, teacher training and other services。

-Adhering to the vision of "let every blockchain practitioner have a senior skill title," the teaching chain technology has released the "blockchain application operator" vocational training materials, undertook a number of domestic blockchain application operator vocational training, its blockchain online training platform has been filed by the Ministry of Industry and Information Technology.。The application system based on FISCO BCOS has protected the copyright 
of more than 8,000 blockchain courses, and is also a blockchain professional certification partner in Guangdong, Hainan, Zhejiang, Shanghai, Anhui and other provinces and cities across the country.。
+Adhering to the vision of "let every blockchain practitioner have a senior skill title," Teaching Chain Technology has released the "blockchain application operator" vocational training materials and undertaken a number of domestic blockchain application operator vocational trainings; its blockchain online training platform has been filed with the Ministry of Industry and Information Technology。Its application system based on FISCO BCOS has protected the copyright of more than 8,000 blockchain courses, and it is also a blockchain professional certification partner in Guangdong, Hainan, Zhejiang, Shanghai, Anhui and other provinces and cities across the country。

![](../../images/community/partner/IBWEDU.png)

**Linker International Consulting (Beijing) Co., Ltd**

-Chain People International Service G, B, C, is China's leading blockchain industry talent comprehensive service provider, three ministries blockchain talent certification service provider, the Ministry of Industry and Information Technology Talent Exchange Center blockchain industry talent research institute operator, with academician scientists talent pool, practitioner talent pool and teacher talent pool.。FISCO BCOS blockchain has been trained in public classes at universities, blockchain development engineers (weekend classes), and online classes on "Enterprise Alliance Chain Principles and Applications."。
+Chain People International serves G-side, B-side and C-side clients and is China's leading comprehensive blockchain industry talent service provider, a blockchain talent certification service provider recognized by three ministries, and the operator of the blockchain industry talent research institute of the Ministry of Industry and Information Technology Talent Exchange Center, with an academician-scientist talent pool, a practitioner talent pool and a teacher talent pool。It has delivered FISCO BCOS blockchain training through public courses at universities, blockchain development engineer weekend classes, and the online course "Enterprise Alliance Chain Principles and Applications"。

![](../../images/community/partner/IBWEDU.png)

**Nanjing Bingwei Information Technology Co., Ltd**

-Nanjing Bingwei Information Technology Co., Ltd. focuses on the construction of engineering practice capacity of block chain, artificial intelligence and big data new engineering specialty.。Cooperate with world-renowned industrial enterprises, take the job skills demand as the design starting point, provide colleges and universities with a pan-IT field of professional curriculum resources, take the real business case realization process as the blueprint, provide colleges and universities with case data, experimental training manuals and experimental training environment, and form a mixed teacher with college teachers to provide students with professional skills training and teaching services, is the "Internet."+A comprehensive talent training solution provider in the field of "education" to help colleges and universities build a new ecology of industry-university integration.。
+Nanjing Bingwei Information Technology Co., Ltd. 
focuses on building engineering practice capability for new engineering majors in blockchain, artificial intelligence and big data。Cooperating with world-renowned industrial enterprises and taking job skill demands as the design starting point, it provides colleges and universities with professional curriculum resources in the pan-IT field; taking real business case realization processes as the blueprint, it provides case data, experimental training manuals and training environments, and forms mixed teaching teams with college teachers to offer students professional skills training and teaching services。It is a comprehensive "Internet + education" talent training solution provider that helps colleges and universities build a new ecology of industry-university integration。

-The company insists on promoting teaching and learning by competition, and actively participates in university-related competitions.。In 2023, the joint micro-public blockchain will be used as a technical support unit to support the "computer design competition for Chinese college students."-Blockchain application and development "track, around FISCO BCOS and its surrounding components for track proposition。At present, the company has had in-depth cooperation with more than 100 universities across the country.。
+The company insists on promoting teaching and learning through competitions and actively participates in university-related contests。In 2023, jointly with WeBank blockchain, it served as a technical support unit for the "China University Student Computer Design Contest - Blockchain Application and Development" track, with track propositions built around FISCO BCOS and its surrounding components。At present, the company has in-depth cooperation with more than 100 universities across the country。

![](https://img-blog.csdnimg.cn/74bf4a085db14dd1b45ee95ab1bfd3b4.png)

-**Qianhai Fengchuang Blockchain (Shenzhen) Co., Ltd.**
+**Qianhai 
Fengchuang Blockchain (Shenzhen) Co., Ltd** -Founded in December 2016, Fengchuang Blockchain is an early established and rapidly developing blockchain training and consulting service company in China. At present, the company has a complete, efficient and mature operation team, and is one of the most professional and authoritative blockchain education and operation teams in China.。 +Founded in December 2016, Fengchuang Blockchain is an early established and rapidly developing blockchain training and consulting service company in China. At present, the company has a complete, efficient and mature operation team, and is one of the most professional and authoritative blockchain education and operation teams in China。 -At present, the company has trained the first phase of the blockchain industry product managers, blockchain financial industry engineers and other industry leaders, students in Shandong, Shanxi, Henan, Hebei, Anhui, Jiangsu, Zhejiang, Guangdong, Hunan, Hubei, Beijing, Shanghai and other 25 provinces (municipalities), covering the traditional industry in the senior management, blockchain enthusiasts, fresh college students, etc.。 +At present, the company has trained the first phase of the blockchain industry product managers, blockchain financial industry engineers and other industry leaders, students in Shandong, Shanxi, Henan, Hebei, Anhui, Jiangsu, Zhejiang, Guangdong, Hunan, Hubei, Beijing, Shanghai and other 25 provinces (municipalities), covering the traditional industry in the senior management, blockchain enthusiasts, fresh college students, etc。 ![](../../images/community/partner/digQuant.png) -**Shenzhen Diankuan Network Technology Co., Ltd.** +**Shenzhen Diankuan Network Technology Co., Ltd** -Shenzhen Dian Kuan Network Technology Co., Ltd. 
is an educational technology company dedicated to assisting universities to realize interdisciplinary construction, mainly in the field of financial technology to assist universities to cultivate "financial+Science and technology "compound talent。The company provides courses, training systems, professional construction solutions and co-construction of financial science and technology industry colleges for related majors in colleges and universities.。Based on its own research and development of course products and training system products, Diankuan has researched over 400 hours of financial technology-related courses and developed 6 sets of industry training systems to assist universities in economic management, mathematical statistics, financial engineering and financial mathematics to achieve financial technology talent training services.。
+Shenzhen Dian Kuan Network Technology Co., Ltd. is an educational technology company dedicated to helping universities realize interdisciplinary construction, mainly assisting universities in the field of financial technology to cultivate "finance + technology" compound talents。The company provides courses, training systems, professional construction solutions and co-construction of fintech industry colleges for related majors in colleges and universities。Based on its self-developed course and training-system products, Diankuan has produced over 400 hours of fintech-related courses and developed 6 sets of industry training systems, helping universities in economic management, mathematical statistics, financial engineering and financial mathematics deliver fintech talent training services。

-Together with FISCO BCOS, the company has landed the BCW v1.0 blockchain programming practice platform for the training of blockchain professionals in universities, which is used for blockchain expertise learning and smart contract programming 
practice.。The innovation project CERX Research Resource Exchange Platform is under development, which is a distributed research asset exchange network based on FISCO BCOS blockchain technology, with the goal of creating a data asset flow platform for data, algorithmic models, papers and courses between universities, and the company's chief architect has been awarded FISCO BCOS MVP of the Year for his contributions to the community.。 +Together with FISCO BCOS, the company has landed the BCW v1.0 blockchain programming practice platform for the training of blockchain professionals in universities, which is used for blockchain expertise learning and smart contract programming practice。The innovation project CERX Research Resource Exchange Platform is under development, which is a distributed research asset exchange network based on FISCO BCOS blockchain technology, with the goal of creating a data asset flow platform for data, algorithmic models, papers and courses between universities, and the company's chief architect has been awarded FISCO BCOS MVP of the Year for his contributions to the community。 ![](https://img-blog.csdnimg.cn/11a2ddd3eb014d0f8fbfb0015d6fb77c.png) **Shenzhen Fire Chain Education Technology Co., Ltd** -Shenzhen Fire Chain Education Technology Co., Ltd., referred to as "Fire Chain Technology," is a company focusing on the ecological construction of the blockchain industry, the output of blockchain core technology and education, with the domestic leading industry technology and teaching level.。 +Shenzhen Fire Chain Education Technology Co., Ltd., referred to as "Fire Chain Technology," is a company focusing on the ecological construction of the blockchain industry, the output of blockchain core technology and education, with the domestic leading industry technology and teaching level。 -The company's main products and businesses include providing blockchain technical support and consulting services to the government, building a blockchain 
ecosystem, and empowering the local economy.;Export technical services for enterprises and institutions, leading or participating in DApps development;Cooperate with major universities across the country to establish blockchain colleges, open blockchain majors, and recruit students for the society。
+The company's main products and businesses include providing blockchain technical support and consulting services to the government, building a blockchain ecosystem and empowering the local economy; exporting technical services for enterprises and institutions and leading or participating in DApp development; and cooperating with major universities across the country to establish blockchain colleges, open blockchain majors, and recruit students for society。

![](../../images/community/partner/img_8.png)

**Shenzhen Vocational and Technical College**

-Shenzhen Vocational and Technical College will start the blockchain technology application major in 2019, and recruit students from all over the country, with FISCO BCOS domestic open source blockchain as the main technology line, set up blockchain and digital economy research institute and FISCO BCOS blockchain talent cultivation SIG, deepen the cultivation of blockchain deployment operation and maintenance, smart contract programming, engineering project development and other capabilities, and provide high-quality technical and skilled personnel for the blockchain industry.。A number of students have won more than 10 awards in various national and provincial blockchain competitions, and 2 of them have won FISCO BCOS annual MVP。
+Shenzhen Vocational and Technical College launched the blockchain technology application major in 2019, recruiting students from all over the country. With FISCO BCOS domestic open source blockchain as the main technology line, it has set up a blockchain and digital economy research institute and a FISCO BCOS blockchain talent cultivation SIG to deepen the cultivation of blockchain deployment operation and 
maintenance, smart contract programming, engineering project development and other capabilities, and provide high-quality technical and skilled personnel for the blockchain industry。A number of students have won more than 10 awards in various national and provincial blockchain competitions, and 2 of them have won FISCO BCOS annual MVP。 ![](https://img-blog.csdnimg.cn/59e3af34a51a4117baf6a32b6d47bfdd.png) **Tencent Education Tengshi College** -Tengshi College is a college under Tencent Education, providing comprehensive and up-to-date education and talent training solutions, and effectively promoting the Ministry of Education's ambitious goal of "world-class universities and first-class disciplines" and "characteristic high-level vocational schools and majors," covering courses in ten major areas such as blockchain, artificial intelligence and big data.。Beginning last year, Tengshi College and FISCO BCOS launched extensive cooperation in the field of training。 +Tengshi College is a college under Tencent Education, providing comprehensive and up-to-date education and talent training solutions, and effectively promoting the Ministry of Education's ambitious goal of "world-class universities and first-class disciplines" and "characteristic high-level vocational schools and majors," covering courses in ten major areas such as blockchain, artificial intelligence and big data。Beginning last year, Tengshi College and FISCO BCOS launched extensive cooperation in the field of training。 ![](../../images/community/partner/img_9.png) **Southwest Forestry University** -Southwest Forestry University has built Yunnan Province Supply Chain Management Blockchain Engineering Research Center and Yunnan Province University Supply Chain Management Blockchain Innovation Team, and has undertaken a number of national projects and provincial and ministerial-level major projects in the field of blockchain, and its research and development of a secure and controllable blockchain basic service 
platform "Xilin Chain" has been applied in cross-border trade, digital tobacco, supply chain and green food traceability and other fields.。The center and team work with FISCO BCOS to provide blockchain products, technology research and development, and talent training services to help governments and enterprises in their digital transformation.。 +Southwest Forestry University has built the Yunnan Province Supply Chain Management Blockchain Engineering Research Center and the Yunnan Province University Supply Chain Management Blockchain Innovation Team, and has undertaken a number of national and provincial/ministerial-level major projects in the blockchain field. "Xilin Chain," the secure and controllable blockchain basic service platform it developed, has been applied in cross-border trade, digital tobacco, supply chain, green food traceability and other fields. The center and team work with FISCO BCOS to provide blockchain products, technology R&D, and talent training services that help governments and enterprises in their digital transformation. -"Xilin Chain" passed the seventh batch of domestic blockchain information service filings of the State Cyberspace Administration of China in March 2022, and passed the "Blockchain System Function Test" of the China Institute of Electronic Technology Standardization and the China National Accreditation Committee for Conformity Assessment (CNAS) certification in May 2022.
+"Xilin Chain" passed the seventh batch of domestic blockchain information service filings of the State Cyberspace Administration of China in March 2022, and passed the "Blockchain System Function Test" of the China Institute of Electronic Technology Standardization, with China National Accreditation Service for Conformity Assessment (CNAS) certification, in May 2022. ### FISCO BCOS Eco-Development Partners @@ -392,16 +392,16 @@ Southwest Forestry University has built Yunnan Province Supply Chain Management **Beijing Blockchain Technology Application Association** -Beijing Blockchain Technology Application Association is China's first blockchain application association, officially launched in 2016, is registered in the civil affairs department in accordance with the law, in the Beijing Investment Promotion Service Center to guide the establishment of non-profit professional social organizations.。Based on the principle of "policy guidance, theoretical research, scenario application, capital access and technology practice," the association uses rich social resources to build a blockchain ecosystem integrating government, industry, academia, research and application with domestic first-class universities and research institutes, well-known domestic and foreign enterprises and core member units, aiming to gather industry resources through blockchain, promote the development of blockchain through cross-border integration, and promote the innovation and development of advanced technologies.。 -The Association has joined hands with FISCO BCOS to carry out cooperation in blockchain industry competitions, ecological exchanges, personnel training and other aspects to promote the application of technological achievements, create a good ecology for industrial development, and accelerate the vigorous development of blockchain.。 +Beijing Blockchain Technology Application Association, China's first blockchain application association, was officially launched in 2016; it is a non-profit professional social organization registered with the civil affairs department in accordance with the law and established under the guidance of the Beijing Investment Promotion Service Center. Guided by the principles of "policy guidance, theoretical research, scenario application, capital access and technology practice," the association draws on rich social resources to build, together with first-class domestic universities and research institutes, well-known domestic and foreign enterprises and core member units, a blockchain ecosystem integrating government, industry, academia, research and application, aiming to gather industry resources through blockchain, advance blockchain through cross-border integration, and promote the innovation and development of advanced technologies. +The Association has joined hands with FISCO BCOS to cooperate on blockchain industry competitions, ecosystem exchanges, personnel training and other activities, promoting the application of technological achievements, creating a sound ecosystem for industrial development, and accelerating the vigorous growth of blockchain. ![](../../images/community/partner/sichuang_blockchain_association.jpeg) **Sichuan Blockchain Industry Association** -The Sichuan Blockchain Industry Association is a social organization registered with the Sichuan Provincial Civil Affairs Department under the guidance of the Sichuan Provincial Department of Economy and Information Technology.。The association was established to promote the integration and development of Sichuan blockchain and the real economy, strengthen the cooperation and contact between industry enterprises, institutions and individuals, accelerate the application of blockchain technology, develop Sichuan blockchain industry, actively promote local enterprises to explore domestic and foreign markets, and play the role of the association as a link to promote the good development of blockchain industry in Sichuan Province.。 +The Sichuan Blockchain Industry Association is a social organization registered with the Sichuan Provincial Civil Affairs Department under the guidance of the Sichuan Provincial Department of Economy and Information Technology. It was established to promote the integration of Sichuan's blockchain with the real economy, strengthen cooperation among industry enterprises, institutions and individuals, accelerate the application of blockchain technology, develop Sichuan's blockchain industry, actively help local enterprises explore domestic and foreign markets, and serve as a link promoting the healthy development of the blockchain industry in Sichuan Province. -Shuxin Chain is a regional blockchain infrastructure under the guidance of Sichuan Provincial Department of Economy and Information Technology, organized by Sichuan Blockchain Industry Association, and jointly constructed and operated by blockchain-related practitioners and application institutions in the province.。The association will join hands with the FISCO BCOS open source community to create a win-win blockchain industry ecology through multi-party collaboration.。 +Shuxin Chain is a regional blockchain infrastructure built under the guidance of the Sichuan Provincial Department of Economy and Information Technology, organized by the Sichuan Blockchain Industry Association, and jointly constructed and operated by blockchain practitioners and application institutions in the province. The association will join hands with the FISCO BCOS open source community to create a win-win blockchain industry ecosystem through multi-party collaboration. **Recommended reading:** diff --git a/3.x/en/docs/community/pr.md b/3.x/en/docs/community/pr.md index c487e8ace..bb51a78cd 100644 --- a/3.x/en/docs/community/pr.md +++ b/3.x/en/docs/community/pr.md @@ -6,14 +6,14 @@ If you are already a community contributor, you can directly complete the PR submission as follows; if you first try PR
contribution, please refer to [document](https://mp.weixin.qq.com/s/_w_auH8X4SQQWO3lhfNrbQ). -#### 1. Preset condition of PR: Fork FISCO-BCOS-DOC to personal github repository +#### 1. Prerequisite: Fork FISCO-BCOS-DOC to your personal GitHub repository 1. Step 1: [Register a GitHub account](https://github.com/join) 2. Step 2: Fork [FISCO-BCOS-DOC](https://github.com/FISCO-BCOS/FISCO-BCOS-DOC) to your personal repository #### 2. Branch Description -PR Mention FISCO-BCOS-Release of DOC-3 branches +PRs must target the release-3 branch of FISCO-BCOS-DOC #### 3. Essential tool: git [click for reference](https://gitee.com/help/articles/4106) @@ -26,7 +26,7 @@ PR Mention FISCO-BCOS-Release of DOC-3 branches (The following steps only need to be performed once) -- The official [FISCO-BCOS-DOC](https://github.com/FISCO-BCOS/FISCO-BCOS-DOC)Add as upstream: git remote add upstream +- Add the official [FISCO-BCOS-DOC](https://github.com/FISCO-BCOS/FISCO-BCOS-DOC) as upstream: git remote add upstream @@ -34,13 +34,13 @@ PR Mention FISCO-BCOS-Release of DOC-3 branches **Before submitting a PR, run the following command to synchronize the latest official documents:** -1. Pull the official document release-Latest documentation for 3 branches: git fetch upstream release-3 +1. Fetch the latest documents from the official release-3 branch: git fetch upstream release-3 -2. Synchronize official document release-3 Branch the latest document to the local: git rebase update / release-3 +2. Rebase your local branch onto the official release-3 branch: git rebase upstream/release-3 (Note: this step may produce conflicts; if so, resolve them, see [conflict resolution](https://gitee.com/help/articles/4194)) -3.
Push the synchronized document to your personal git repository: git push origin -f **Main commands for submitting personal documents** @@ -49,9 +49,9 @@ git add, git commit, git push, etc. [click for reference](https://gitee.com/help #### 5. Document format description 1. The content of the article must be edited in markdown format, [click to refer to markdown syntax](https://www.runoob.com/markdown/md-tutorial.html). -2. (This step is not mandatory) Before submitting a PR, it is recommended to build readthedocs based on the documents of the personal repository, check whether the built documents display as expected, and attach a link to the readthedocs that describes the personal build when submitting a PR.。 +2. (This step is not mandatory) Before submitting a PR, it is recommended to build readthedocs from the documents in your personal repository, check that the built documents display as expected, and attach a link to your readthedocs build when submitting the PR. -Click on the reference [readthedocs build method](https://www.jianshu.com/p/d1d59d0cd58c), FISCO-BCOS-Refer to the following table for the readthedocs configuration options of the DOC: +See the [readthedocs build method](https://www.jianshu.com/p/d1d59d0cd58c) for reference; the readthedocs configuration options for FISCO-BCOS-DOC are listed in the following table: | **Setting Fields** | **Setting Results** | | - | - | @@ -64,12 +64,12 @@ Click on the reference [readthedocs build method](https://www.jianshu.com/p/d1d5 #### 6. Reviewer Feedback and Integration -After submitting the PR, Reviewer will directly feedback the modification comments on GitHub, you can also add a small assistant WeChat FISCOBCOS010 for direct communication;Finally, when Reviewer joins the PR, your article is entered.!
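The fork/upstream sync steps in section 4 (add the official repository as `upstream` once; then fetch, rebase, and force-push before each PR) can be sketched end to end. The following is a minimal demonstration using throwaway local bare repositories standing in for github.com; the directory names, commit message, and author identity are all hypothetical.

```shell
#!/bin/sh
# Sketch of the PR sync workflow, using local bare repos in place of GitHub.
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# Bare repos playing the roles of FISCO-BCOS/FISCO-BCOS-DOC and the personal fork.
git init -q --bare official.git
git init -q --bare fork.git

# Seed the "official" repo with a release-3 branch.
git clone -q official.git seed
cd seed
git checkout -q -b release-3
git commit -q --allow-empty -m "official: initial docs"
git push -q origin release-3
cd ..

# Contributor side: clone the fork, then add the official repo as 'upstream' (once).
git clone -q fork.git work
cd work
git remote add upstream ../official.git

# Before each PR: fetch the official release-3, rebase onto it, force-push to the fork.
git fetch -q upstream release-3
git checkout -q -b release-3 upstream/release-3
git rebase -q upstream/release-3
git push -q -f origin release-3

git log --oneline -1   # the fork's release-3 now matches upstream
```

In a real workflow, `official.git` is `https://github.com/FISCO-BCOS/FISCO-BCOS-DOC` and `fork.git` is your personal fork; the force-push only rewrites your own fork's branch, never the official one.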
+After submitting the PR, the reviewer will give feedback on GitHub directly; you can also add the community assistant on WeChat (FISCOBCOS010) for direct communication. Once the reviewer merges the PR, your article is published! ### Article PR Contribution Writing Norms -- The initial signature is the name of the contributor, which can also show the company or school to which the individual belongs, for example, by Zhang San.| FISCO BCOS Core Developer; -- The content of the article as far as possible to include the introduction introduction, paragraph and paragraph between the natural convergence and excessive, the end of the summary; -- The article should ensure that the sentence is smooth and free of speech disorders, and that the expression will not cause misunderstanding to the reader.; -- Practical articles need to ensure that the technical points are accurate and that the test run can be completed.; -- If the article involves links to relevant technical documents or code repositories, please use FISCO BCOS official links to avoid link failure。 \ No newline at end of file +- The byline gives the contributor's name, optionally with the company or school the contributor belongs to, e.g. "by Zhang San | FISCO BCOS Core Developer"; +- The article should, as far as possible, include an introduction, natural transitions between paragraphs, and a summary at the end; +- The article should read smoothly, with no garbled sentences, and its wording should not mislead the reader; +- Practical articles must ensure that the technical points are accurate and that the test runs can be completed; +- If the article links to related technical documents or code repositories, please use the official FISCO BCOS links to avoid broken links. \ No newline at end of file diff --git a/3.x/en/docs/components/data_index.md b/3.x/en/docs/components/data_index.md
index bdf6ec261..8f2e25e72 100644 --- a/3.x/en/docs/components/data_index.md +++ b/3.x/en/docs/components/data_index.md @@ -1,57 +1,57 @@ # Common Components for Data Governance -Tag: "WeBankBlockchain-Data "" Data Governance "" Generic Components "" Data Export "" Data Warehouse "" Data Reconciliation " +Tags: "WeBankBlockchain-Data" "Data Governance" "Common Components" "Data Export" "Data Warehouse" "Data Reconciliation" --- ## Component positioning -The full name of the data governance common component is WeBankBlockchain-Data governance is a set of stable, efficient, and secure blockchain data governance component solutions that can seamlessly adapt to the underlying platform of the FISCO BCOS blockchain.。 +The full name of the data governance common components is WeBankBlockchain-Data; it is a stable, efficient and secure blockchain data governance component solution that seamlessly adapts to the underlying FISCO BCOS blockchain platform. It consists of three independent, pluggable, flexibly assembled components: the data export component (Data-Export), the data warehouse component (Data-Stash), and the data reconciliation component (Data-Reconcile); they work out of the box and lend themselves to secondary development. -These three components provide key capabilities in data governance such as blockchain data mining, tailoring, scaling, trusted storage, extraction, analysis, auditing, reconciliation, and supervision from three aspects: the underlying data storage layer, the smart contract data parsing layer, and the application layer.。 -WeBankBlockchain-Data has been in the financial, public welfare, agricultural and animal husbandry products traceability, judicial deposit, retail and other industries landing and use.。 +These three components provide key capabilities in data governance such as blockchain data mining, tailoring, scaling, trusted storage, extraction, analysis, auditing, reconciliation, and supervision from
three aspects: the underlying data storage layer, the smart contract data parsing layer, and the application layer. +WeBankBlockchain-Data has been deployed and used in finance, public welfare, agricultural and animal-husbandry product traceability, judicial evidence storage, retail and other industries. ## Design Objectives -Between the bottom layer of blockchain and blockchain applications, there is a gap between blockchain technology, business and products, and there are many challenges such as the difficulty of expanding blockchain data, the difficulty of querying and analyzing data on the chain, and the lack of universal product or component support in typical scenarios.。 +Between the blockchain bottom layer and blockchain applications there is a gap spanning blockchain technology, business and products, with many challenges such as the difficulty of scaling blockchain data, the difficulty of querying and analyzing on-chain data, and the lack of general-purpose products or components for typical scenarios. Friends in the community often ask: The disk of the blockchain node server is almost full, what should I do? How to query data in smart contracts in batches? -I would like to inquire how to check all transactions sent by an account.? +How can I check all transactions sent by an account? What is WeBank's blockchain reconciliation solution, and is there a universal solution? …… Why do powerful blockchains still have these problems?
-First of all, with the "explosive" growth of blockchain data, the chain has accumulated hundreds of millions of transactions, several tons of data, node servers gradually can not meet the storage needs of transaction data, simply expand the node storage space not only high development costs, high hardware costs, but also in the process of data expansion due to high technical requirements, easy to cause systemic risks, and can not solve the problem once and for all。On the other hand, a large amount of transaction cold data is not only a waste of space, but also affects the performance of blockchain nodes to block and execute transactions.。 +First, with the "explosive" growth of blockchain data, a chain accumulates hundreds of millions of transactions and terabytes of data, and node servers gradually cannot meet the storage needs of transaction data. Simply expanding node storage space incurs high development and hardware costs; because the expansion is technically demanding, it can also introduce systemic risks, and it does not solve the problem once and for all. On the other hand, a large amount of cold transaction data not only wastes space but also degrades the block-production and transaction-execution performance of blockchain nodes. -Secondly, due to the specific chain storage structure of the blockchain, the data on the chain can only be obtained and called through the smart contract interface, which is not only inefficient, but also with the increase of the data on the chain, its query and computing performance gradually decreases, unable to meet the demands of big data analysis and complex queries, such as the need to retrieve all contracts that have been deployed on the chain.。Data export solutions based on specific scenarios require specific development for smart contracts due to the large differences in smart contracts, which are costly and cannot be reused.。 +Secondly, due to the specific chain storage structure of
the blockchain, on-chain data can only be obtained and invoked through the smart contract interface, which is inefficient; moreover, as on-chain data grows, query and computing performance gradually degrade and cannot meet the demands of big data analysis and complex queries, such as retrieving all contracts ever deployed on the chain. Because smart contracts differ greatly, scenario-specific data export solutions require dedicated development per contract, which is costly and not reusable. -Finally, blockchain-based trusted data lacks common products and reusable components, and there are similar needs between some scenarios, such as business reconciliation, blockchain browser, business analysis, regulatory audit, etc。There is a lot of duplication of development between different projects, which is time-consuming and laborious, while developers of blockchain applications need to go through a steep learning curve to complete their work goals, which may also introduce various risks in development and testing。 +Finally, blockchain-based trusted data lacks common products and reusable components, even though scenarios such as business reconciliation, blockchain browsers, business analysis and regulatory audit share similar needs. There is much duplicated development across projects, which is time-consuming and laborious, and blockchain application developers must climb a steep learning curve to meet their goals, which can also introduce risks in development and testing. -WeBankBlockchain-Starting from the underlying data storage layer, smart contract data parsing layer, and application layer, Data provides key capabilities in data governance such as blockchain data mining, tailoring, scaling, trusted storage, extraction, analysis, auditing, reconciliation, and supervision to meet the needs of the entire data governance
process development scenario, as shown in the following figure: +WeBankBlockchain-Data provides key data governance capabilities such as blockchain data mining, tailoring, scaling, trusted storage, extraction, analysis, auditing, reconciliation, and supervision across the underlying data storage layer, the smart contract data parsing layer, and the application layer, meeting the needs of the entire data governance development process, as shown in the following figure: ![](../../../../2.x/images/governance/data/data-comp-design.png) -The blockchain data passes through the multi-party consensus of the blockchain consensus node and is not modified once generated.。 +Blockchain data goes through multi-party consensus among the blockchain consensus nodes and, once generated, is not modified. -In the operation and maintenance layer, the historical block data of the blockchain can be exported by the data warehouse component in whole or in part to the local。As a trusted storage image, the exported data is only valid locally, and modifications will not affect the consensus on the chain.。We recommend that users establish management methods to limit changes to local data.。 +At the operation and maintenance layer, the data warehouse component can export all or part of the blockchain's historical block data to local storage. As a trusted storage image, the exported data is only valid locally, and modifying it will not affect the consensus on the chain. We recommend that users establish management practices that restrict changes to local data. -In the application data layer, the data export component supports exporting source data, preliminary parsing, and contract-based parsing of multidimensional data.。All participants can deploy their own export service as a trusted data source for local queries or analytics。 +In the application data layer, the data export component supports exporting source data, preliminary parsing, and contract-based
parsing of multidimensional data. All participants can deploy their own export service as a trusted data source for local queries or analytics. +At the business layer, the business reconciliation component supports internal and external reconciliation of the organization's off-chain business data. ## Component Introduction -Currently, WeBankBlockchain-Data by Data Warehouse Component(Data-Stash)Data Export Components(Data-Export)Data Reconciliation Component(Data-Reconcile)It consists of three independent, pluggable, and flexibly assembled components. More functions and solution sub-components will be provided according to business and scenario requirements.。 +Currently, WeBankBlockchain-Data consists of three independent, pluggable, and flexibly assembled components: the data warehouse component (Data-Stash), the data export component (Data-Export), and the data reconciliation component (Data-Reconcile). More functions and solution sub-components will be provided as business and scenario requirements evolve. ![](../../../../2.x/images/governance/data/data-gov.png) ### WeBankBlockchain-Data-Stash Data Warehouse Components Provides FISCO BCOS node data expansion, backup and tailoring capabilities. -The binlog protocol can be used to synchronize the data of the underlying nodes of the blockchain. It supports resumable transmission, data trust verification, and fast synchronization mechanism.。 +The binlog protocol can be used to synchronize the data of the underlying nodes of the blockchain.
It supports resumable transmission, data trust verification, and a fast synchronization mechanism. ![](../../../../2.x/images/governance/data/Data-Stash.png) @@ -87,35 +87,35 @@ Please refer to ## Usage Scenarios -Enterprise-level blockchain applications involve multiple roles, such as business roles, operators, development roles, and operation and maintenance roles.。For blockchain data, each specific role has different data governance demands。WeBankBlockchain-Data abstracts and designs the corresponding components from the three dimensions of data maintenance, application data processing and business data application of the underlying nodes of the blockchain to meet the needs of different roles for data governance.。 +Enterprise-level blockchain applications involve multiple roles, such as business staff, operators, developers, and operation and maintenance personnel. Each role has different data governance demands on blockchain data. WeBankBlockchain-Data abstracts and designs the corresponding components along three dimensions, blockchain node data maintenance, application data processing, and business data application, to meet the data governance needs of different roles. ### Scenario 1: Node data maintenance -Data Warehouse Components Data-Stash is a lightweight, high-security, and high-availability component for blockchain node data processing, mainly for operation and maintenance personnel and developers.。 +The data warehouse component Data-Stash is a lightweight, highly secure, highly available component for blockchain node data processing, aimed mainly at operation and maintenance personnel and developers. -Data Backup: Data-Stash can back up the data of blockchain nodes in real time through the Binlog protocol, and the blockchain nodes can cut and separate hot and cold data according to the actual situation, which solves the problem of node expansion and reduces development and hardware costs on the basis of ensuring data security
and credibility.。While solving the problem of node expansion, it can make the node "light load," which can not only reduce the cost of node space, but also effectively improve the performance of node execution transactions.。 +Data backup: Data-Stash can back up blockchain node data in near real time through the Binlog protocol, and nodes can then trim their data and separate hot and cold data as appropriate, solving the node storage expansion problem and reducing development and hardware costs while keeping the data secure and trustworthy. Besides solving the expansion problem, it keeps nodes "lightly loaded," which not only reduces node storage costs but also effectively improves transaction execution performance. -Data synchronization: For new nodes that join the blockchain network, you can use Data-Stash, with the cooperation of the Fisco Sync tool, quickly synchronizes data in the blockchain network, ensures that nodes participate in the "work" of the blockchain network as quickly as possible, and reduces the time waste caused by new nodes waiting for data synchronization.。 +Data synchronization: a new node joining the blockchain network can use Data-Stash, together with the Fisco Sync tool, to synchronize the network's data quickly, so that the node starts participating in the network's "work" as soon as possible and the time wasted waiting for data synchronization is reduced. ### Scenario 2: Application Data Processing -Data Export Components Data-Export provides standard exported blockchain data and customized data automatically generated based on intelligent analysis of smart contract code, stored in storage media such as MySQL and ElasticSearch, mainly for developers.。 +Data-Export provides standard exported blockchain data and customized data automatically generated based on intelligent analysis of smart contract code,
stored in storage media such as MySQL and ElasticSearch, mainly for developers. -Complex query and analysis: The existing blockchain is not very friendly to query functions, and on-chain calculations are very valuable, Data-Export supports exporting blockchain data stored on the chain to a distributed storage system under the chain。Developers can deploy contract accounts, events, functions and other data based on the exported basic data of the blockchain system, perform secondary development, customize the logic of complex queries and data analysis, and quickly realize business requirements.。For example, developers can perform statistics and correlation query analysis on transaction details based on business logic, develop various anti-money laundering and audit supervision reports, and so on.。 +Complex query and analysis: existing blockchains are not query-friendly, and on-chain computation is expensive. Data-Export supports exporting data stored on the chain to an off-chain distributed storage system. Based on the exported base data of the blockchain system, such as contract accounts, events and functions, developers can do secondary development, customize complex query and data analysis logic, and quickly implement business requirements. For example, developers can run statistics and correlated query analysis over transaction details according to business logic, and build various anti-money laundering and audit/supervision reports. -Blockchain Data Visualization: Data-Export automatically generates Grafana configuration files, enabling blockchain data visualization without development。Blockchain data visualization can not only be used as a tool for blockchain data inventory, data viewing, and operational analysis, but also can be used in the application development, debugging, and testing phases to improve R & D experience and efficiency in a visible and accessible way.。In addition, data-Export
also provides Restful APIs for external system integration。The operation and maintenance personnel can monitor the status of the business system in real time through Grafana, and the business personnel can obtain the real-time progress of the business on the integrated business background system.。
+Blockchain data visualization: Data-Export automatically generates Grafana configuration files, enabling blockchain data visualization with no development at all. Visualization serves not only as a tool for blockchain data inventory, inspection, and operational analysis, but also helps in the application development, debugging, and testing phases, improving the R&D experience and efficiency in a visible, tangible way. In addition, Data-Export provides RESTful APIs for external system integration: operation and maintenance staff can monitor the status of the business system in real time through Grafana, and business staff can track the real-time progress of the business on an integrated back-office system.
-The data export subsystem of the blockchain middleware platform WeBASE has integrated Data-Export, meanwhile, data-Export can also be independently integrated with the underlying blockchain to flexibly support business needs, and has so far been stable and safe in dozens of production systems.。
+The data export subsystem of the blockchain middleware platform WeBASE has integrated Data-Export; at the same time, Data-Export can also be integrated independently with the underlying blockchain to flexibly support business needs, and it has so far run stably and securely in dozens of production systems.
-Now, data-Export, as a key component of blockchain data governance, is released in open source form and perfected by community partners to adapt to more usage scenarios and create more features.。
+Today, as a key component of blockchain data governance, Data-Export is released as open source, to be refined together with community partners so that it adapts to more usage scenarios and gains more features.
### Scenario 3: Business Data Application
-At the business level, data reconciliation is one of the most common scenarios in blockchain trading systems.。Based on the development and practical experience of several blockchain DAPP applications, we encapsulated and developed the data reconciliation component Data-Reconcile provides a universal data reconciliation solution based on the blockchain smart contract ledger, and provides a set of dynamically extensible reconciliation framework that supports customized development, mainly for developers, and provides services for business personnel.。
+At the business level, data reconciliation is one of the most common scenarios in blockchain trading systems. Drawing on the development and practical experience of several blockchain DAPP applications, we packaged and developed the data reconciliation component Data-Reconcile, which provides a universal reconciliation solution based on the blockchain smart contract ledger together with a dynamically extensible reconciliation framework that supports customized development; it is aimed mainly at developers and serves business staff.
-Internal Enterprise Reconciliation: Data-Reconcile supports reconciliation between internal enterprise systems, such as between data on the blockchain and off-chain business systems。Developers can take advantage of Data-Reconcile quickly conducts secondary development and compares business system data with on-chain data to ensure the reliability and operational security of internal business system data.。
+Internal enterprise reconciliation: Data-Reconcile supports reconciliation between an enterprise's internal systems, such as between data on the blockchain and off-chain business systems. Developers can use Data-Reconcile for quick secondary development, checking business-system data against on-chain data to ensure the reliability and operational security of internal business data.
-Inter-Enterprise Reconciliation: Data-Reconcile helps developers quickly build cross-agency reconciliation applications。For example, during settlement, Enterprise A regularly exports its own business system transaction data as reconciliation files and sends them to the file storage center.。B Enterprises can use Data-Reconcile regularly pulls A enterprise reconciliation files and cooperates with Data-Export, reconciling with on-chain data within the enterprise。Data-Reconcile improves the efficiency of reconciliation while ensuring the credibility of reconciliation results, enabling quasi-real-time reconciliation.。
+Inter-enterprise reconciliation: Data-Reconcile helps developers quickly build cross-organization reconciliation applications. For example, during settlement, Enterprise A regularly exports its own business-system transaction data as reconciliation files and sends them to a file storage center; Enterprise B uses Data-Reconcile to pull Enterprise A's reconciliation files on a schedule and, together with Data-Export, reconciles them against the on-chain data inside the enterprise. Data-Reconcile improves reconciliation efficiency while keeping the results trustworthy, enabling quasi-real-time reconciliation.
-In summary, WeBankBlockchain-Data is a stable, efficient and secure three-dimensional blockchain data governance solution.
It aims to provide a series of independent, pluggable and flexibly assembled components to deal with and handle the massive data of the blockchain, bringing users a more convenient, simple, low-cost and lightweight user experience, thus promoting the development of blockchain data governance.。
+To sum up, WeBankBlockchain-Data is a stable, efficient and secure three-dimensional blockchain data governance solution. It aims to provide a series of independent, pluggable and flexibly assembled components for handling the blockchain's massive data, giving users a more convenient, simple, low-cost and lightweight experience and thus advancing blockchain data governance.
diff --git a/3.x/en/docs/components/governance_index.md b/3.x/en/docs/components/governance_index.md
index 022c48390..e21487bd9 100644
--- a/3.x/en/docs/components/governance_index.md
+++ b/3.x/en/docs/components/governance_index.md
@@ -1,35 +1,35 @@
# Multi-party collaborative governance component
-Tag: "WeBankBlockchain-Governance "" Blockchain Multi-Party Collaboration Governance "" Common Components "" Account Governance "" Permission Governance "" Private Key Management "" Certificate Management ""
+Tags: "WeBankBlockchain-Governance" "Blockchain Multi-Party Collaboration Governance" "Common Components" "Account Governance" "Permission Governance" "Private Key Management" "Certificate Management"
----
## Component positioning
-After more than 10 years of development, the basic technical framework of blockchain has been gradually improved, the business carried on the chain is becoming more and more abundant, and more and more participants are participating.。Whether multi-party collaboration can be carried out smoothly, whether business frictions can be effectively resolved, and whether past governance strategies and practices can meet the needs of rapid development in the future......
The industry's focus is gradually focusing on these more challenging challenges.。
+After more than 10 years of development, the basic technical framework of blockchain has gradually matured, the business carried on chain has grown ever richer, and more and more parties are taking part. Can multi-party collaboration proceed smoothly? Can business frictions be resolved effectively? Can past governance strategies and practices meet the needs of rapid future development? The industry's attention is gradually turning to these harder challenges.
In January 2021, on the basis of years of technical research and application practice, WeBank Blockchain released [White Paper on Blockchain-Oriented Multi-Party Collaborative Governance Framework](https://mp.weixin.qq.com/s?__biz=MzU0MDY4MDMzOA==&mid=2247486381&idx=1&sn=caae41a2241e3b1c2cd58181ef73a1bc&chksm=fb34c250cc434b46b2c1b72299c2eb71e1bd6b7597c341423c5d262f18a6e0af1628e0ba4037&scene=21#wechat_redirect)MCGF (Multilateral Collaborative Governance Framework)。
-As a reference architecture for blockchain governance, MCGF comprehensively covers the design specifications, participation roles, core system architecture, functional processes, and application scenarios of blockchain governance.。
+As a reference architecture for blockchain governance, MCGF comprehensively covers the design specifications, participating roles, core system architecture, functional processes, and application scenarios of blockchain governance.
-Its open framework can be adapted to a variety of heterogeneous blockchain underlying networks, and combines management and technical strategies to coordinate on-chain and off-chain governance.。At the system level, MCGF supports governance through a variety of tools, components and services.。Finally, MCGF designs visual, interactive, multi-terminal perception and operation methods for all participants to provide an excellent user
experience.。
+Its open framework can adapt to a variety of heterogeneous underlying blockchain networks, and it combines management and technical strategies to coordinate on-chain and off-chain governance. At the system level, MCGF supports governance through a variety of tools, components and services. Finally, MCGF offers visual, interactive, multi-terminal ways to perceive and operate, giving all participants an excellent user experience.
-Blockchain itself pursues multi-party collaboration, and the development of its system and technology cannot be achieved without the support of the community.。Adhering to the consistent concept of open source and openness, we sincerely invite partners from various industries to work together to build a blockchain governance system and jointly explore the way of blockchain governance.。
+Blockchain itself is about multi-party collaboration, and the development of its ecosystem and technology cannot be achieved without community support. Adhering to our consistent philosophy of open source and openness, we sincerely invite partners from all industries to build the blockchain governance system together and jointly explore the way of blockchain governance.
-We will gradually open source the content of MCGF one by one to benefit the community.。This open source list includes a set of out-of-the-box blockchain governance generic components (WeBankBlockchain-Governance)。These components are the implementation basis and atomic building blocks of the MCGF framework, reusable and customizable.。
+We will gradually open source the contents of MCGF one by one to benefit the community. This release includes a set of out-of-the-box common blockchain governance components (WeBankBlockchain-Governance). These components are the implementation basis and atomic building blocks of the MCGF framework, reusable and customizable.
They are embedded and run in all parts of the entire MCGF framework, just like the wheels, gears, transmission groups, and sensors on a high-speed car, and work together to help build a governance framework and improve development efficiency。Welcome the community to build and develop more and better high-availability components。
## Design Objectives
-In a federated chain based on distributed collaboration, the participants collaborate in a form that is loosely coupled and does not fully trust each other.。
+In a consortium chain based on distributed collaboration, participants collaborate in a loosely coupled form without fully trusting each other.
-In the alliance chain, a variety of mechanisms are designed to help participants build trust and reach consensus, with private keys, certificates, accounts, and permission management all key supporting technologies.。
+In the consortium chain, a variety of mechanisms are designed to help participants build trust and reach consensus; private keys, certificates, accounts, and permission management are all key supporting technologies.
-However, the above technology is more complex, in the application effect, but also need more reusable, easy to land tools or components.。
+However, these technologies are complex; to apply them effectively, more reusable, easy-to-adopt tools and components are still needed.
We also often hear about issues in the development, use, and governance of affiliate chains:
-The concept of private key is complex, and its algorithm types, storage files, and generation methods are numerous, which is difficult to understand and master.?
+The concept of the private key is complex, with numerous algorithm types, storage file formats, and generation methods; how can one understand and master them all?
The key on the blockchain node is stored in clear text on the hard disk, there is a great operational risk, is there a solution for secure storage?
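One common answer to the clear-text-storage question above is to keep the key encrypted at rest and decrypt it only in memory. The sketch below is hypothetical and is not the actual Governance-Key API; all class and method names are illustrative. It derives an AES key from a password with PBKDF2 and seals the node's private key bytes with AES-GCM, using only the standard Java crypto APIs.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Hypothetical sketch: password-protect a node private key at rest
// instead of storing it in clear text on disk.
public class KeyVaultSketch {
    // Derive an AES-256 key-encryption key from the password via PBKDF2.
    static byte[] deriveKek(char[] password, byte[] salt) throws Exception {
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return f.generateSecret(new PBEKeySpec(password, salt, 100_000, 256)).getEncoded();
    }

    // Encrypt the private key with AES-GCM (authenticated encryption).
    static byte[] seal(byte[] privKey, char[] password, byte[] salt, byte[] iv) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(deriveKek(password, salt), "AES"),
               new GCMParameterSpec(128, iv));
        return c.doFinal(privKey);
    }

    // Decrypt; fails (AEADBadTagException) if the password or data is wrong.
    static byte[] open(byte[] sealed, char[] password, byte[] salt, byte[] iv) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(deriveKek(password, salt), "AES"),
               new GCMParameterSpec(128, iv));
        return c.doFinal(sealed);
    }

    public static void main(String[] args) throws Exception {
        SecureRandom rnd = new SecureRandom();
        byte[] priv = new byte[32], salt = new byte[16], iv = new byte[12];
        rnd.nextBytes(priv); rnd.nextBytes(salt); rnd.nextBytes(iv);
        byte[] sealed = seal(priv, "s3cret".toCharArray(), salt, iv);
        byte[] opened = open(sealed, "s3cret".toCharArray(), salt, iv);
        if (!Arrays.equals(priv, opened)) throw new AssertionError("round trip failed");
        System.out.println("round-trip ok");
    }
}
```

Only the salt, IV, and ciphertext touch the disk; the key material exists in clear text only while in use. Production systems would add a random IV per encryption, secure memory wiping, and hardware or custodial options, which is what a full key-management component provides.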
@@ -45,32 +45,32 @@ Certificate management not only involves the generation of certificates, but als
……
-Analyzing and summarizing the above problems, it is not difficult to see that there are high thresholds for the management and use of private keys, accounts, permissions, and certificates: developers need to repeatedly and tediously solve the same problem in different scenarios, and users are prone to confusion and discomfort during use, and may even bring security risks and risks to the system due to imperfect governance solutions.。
+Analyzing and summarizing the above problems, it is easy to see that managing and using private keys, accounts, permissions, and certificates carries a high threshold: developers must repeatedly and tediously solve the same problems in different scenarios, users are prone to confusion and friction, and imperfect governance schemes may even introduce security risks into the system.
In order to solve the above problems, we have developed a common component of blockchain governance, aiming to provide lightweight decoupling, out-of-the-box, simple and easy-to-use, one-stop blockchain governance capabilities。
-- **lightweight decoupling**。All governance components are decoupled from the specific business。Lightweight integration, pluggable without invading the underlying。Through the class library, smart contract, SDK and other ways to provide.。Users can deploy and control governance processes even using the chain console。
-- **General scenario**。All governance components are aimed at all "just-in-time" scenarios in alliance chain governance, such as the first open source account reset, contract permissions, private key and certificate lifecycle management, accounts, contracts, private keys and certificates are the cornerstones of alliance chain technology and upper-level governance.。
-- **One-stop shop**。The common components of chain governance are committed to providing a one-stop
experience.。Take the private key management component as an example, it supports a variety of private key generation methods and formats, covers almost all mainstream scenarios, provides file-based, multi-database and other managed methods, and supports private key derivation, sharding and other encryption methods.。
+- **Lightweight decoupling**. All governance components are decoupled from specific business logic, integrate lightly, and are pluggable without intruding on the underlying chain. They are delivered as class libraries, smart contracts, SDKs and in other forms; users can even deploy and drive governance processes from the chain console.
+- **General scenarios**. All governance components target the essential scenarios of consortium chain governance. The first open-source batch covers account reset, contract permissions, and private key and certificate lifecycle management; accounts, contracts, private keys and certificates are the cornerstones of consortium chain technology and upper-level governance.
+- **One-stop**. The common governance components are committed to a one-stop experience. Take the private key management component as an example: it supports multiple private key generation methods and formats, covering almost all mainstream scenarios; provides file-based, multi-database and other custody methods; and supports private key derivation, sharding and other cryptographic features.
- **Simple and easy to use**。Committed to providing a simple user experience, so that users can easily get started。
-WeBankBlockchain-Government is positioned as a blockchain governance component, not only to provide tools at the development level, but also to provide blockchain participants with reference cases at the practical level to help improve the governance level of the blockchain industry as a whole.。
+WeBankBlockchain-Governance is positioned as a blockchain governance component: it aims not only to provide tools at the development level, but also to give blockchain participants reference cases at the practice level, helping to raise the governance level of the blockchain industry as a whole.
## Component Introduction
-This open source blockchain governance generic component consists of the private key management component (Governance-Key), Account Governance Component (Governance-Account), permission governance components (Governance-Authority), Certificate Management Components (Governance-Cert) and other components.。
+This open-source set of common blockchain governance components consists of the private key management component (Governance-Key), the account governance component (Governance-Account), the permission governance component (Governance-Authority), the certificate management component (Governance-Cert), and others.
![](../../../../2.x/images/governance/MCGF/MCGF_overview.png)
-Each governance component provides detailed usage documentation。Among them, the account governance component and permission governance component also provide contract code, Java language SDK, contract integration demo and Java version SDK use demo, so that users can freely and flexibly use and integrate based on their own business scenarios.。
+Each governance component comes with detailed usage documentation. The account governance and permission governance components additionally provide contract code, a Java SDK, a contract integration demo and a Java SDK usage demo, so that users can use and integrate them freely and flexibly in their own business scenarios.
### WeBankBlockchain-Governance-Account Account Governance Component
-Based on the development of smart contracts, it provides full life cycle management of blockchain user accounts, such as account registration, private key reset, freezing, and unfreezing, and supports multiple governance policies such as administrators, threshold voting, and multi-signature system.。
+Built on smart contracts, it provides full life cycle management of blockchain user accounts, such as account registration, private key reset, freezing, and unfreezing, and supports multiple governance policies including administrator-based governance, threshold voting, and multi-signature.
-In the existing blockchain design, once the private key is lost, it is impossible to re-operate the corresponding identity.。As a result, the account governance component adheres to the concept of "account as the core" and proposes a two-tier account system to solve the pain point of strong binding of private keys and accounts, thus realizing the ability to replace the private key of accounts, which means that even if the private key is lost, the account can be recovered.。
+In existing blockchain designs, once a private key is lost, the corresponding identity can no longer be operated. The account governance component therefore adheres to an "account as the core" concept and proposes a two-tier account system that removes the pain point of strong binding between private keys and accounts: an account's private key can be replaced, which means the account can be recovered even if the private key is lost.
-In the account governance component, accounts no longer use public key addresses, but a two-tier account system of public key accounts plus internal random accounts.。
+In the account governance component, accounts no longer use bare public key addresses; instead a two-tier system of external public key accounts mapped to internal random accounts is used.
-The account governance component provides a variety of blockchain account governance rules, account life cycle management and other overall solutions, including creating governance accounts, selecting a variety of governance rules, authorizing governance permissions, creating accounts, freezing accounts, unfreezing accounts, replacing private keys, closing accounts and other account life cycle management
functions.。
+The account governance component offers an overall solution covering a variety of account governance rules and full account life cycle management, including creating governance accounts, selecting among multiple governance rules, authorizing governance permissions, and creating, freezing, unfreezing and closing accounts as well as replacing private keys.
![](../../../../2.x/images/governance/MCGF/governance_account.png)
@@ -81,15 +81,15 @@ Please refer to
- [Quick Start](https://governance-doc.readthedocs.io/zh_CN/latest/docs/WeBankBlockchain-Governance-Acct/quickstart.html)
### WeBankBlockchain-Governance-Authority Permission Governance Component
-A generic component that provides access control at the granularity of blockchain accounts, contracts, functions, etc. based on smart contracts.。
+A generic component that provides access control at the granularity of blockchain accounts, contracts, functions and more, based on smart contracts.
-With the emergence of blockchain application development cases based on smart contracts, the need for the control and grouping of smart contract permissions in various application development scenarios is becoming more and more urgent.。The permission governance component provides permission control at the granularity of blockchain accounts and contract functions based on smart contracts.。
+As smart-contract-based blockchain applications multiply, the need to control and group smart contract permissions in various development scenarios is becoming ever more urgent. The permission governance component provides smart-contract-based permission control at the granularity of blockchain accounts and contract functions.
-The permission governance component supports intercepting illegal calls to contract functions and also supports permission grouping - by configuring the association between functions and
groups, you can easily control the permissions of the grouping.。Permission control can be achieved by simply introducing the permission contract address into the business code and accessing the judgment interface of the permission contract in the function that requires permission control.。
+The permission governance component can intercept illegal calls to contract functions and also supports permission grouping: by configuring the association between functions and groups, the permissions of a whole group can be controlled easily. To enable permission control, the business code only needs to hold the permission contract's address and call the permission contract's check interface inside each function that requires control.
-The administrator only needs to operate the permission management contract without adjusting the business contract, and the modification of the permission can take effect in real time.。Permission control supports on-demand configuration of blacklist mode and whitelist mode。
+The administrator only needs to operate the permission management contract, without touching the business contract, and permission changes take effect in real time. Permission control supports blacklist and whitelist modes, configurable on demand.
-In addition, the permission governance component supports multiple permission governance rules, such as one vote pass, threshold vote, and so on.。
+In addition, the permission governance component supports multiple permission governance rules, such as single-vote approval and threshold voting.
![](../../../../2.x/images/governance/MCGF/governance_authority.png)
@@ -100,13 +100,13 @@ Please refer to
- [Quick Start](https://governance-doc.readthedocs.io/zh_CN/latest/docs/WeBankBlockchain-Governance-Auth/quickstart.html)
### WeBankBlockchain-Governance-Key Private Key Management Component
-Provides a common solution for the full life cycle management of private keys such as
private key generation, storage, encryption and decryption, signing, and verification.。
+Provides a common solution for the full life cycle management of private keys, covering key generation, storage, encryption and decryption, signing, and verification.
-The private key management component provides the ability to generate, save, host, and use private keys, covering the entire life cycle of private key use.。
+The private key management component provides the ability to generate, store, custody, and use private keys, covering their entire life cycle.
-This component supports a variety of standard protocols. In terms of private key generation, it supports random number generation, mnemonic generation, and derivative generation.;As far as saving is concerned, it supports threshold sharding restore, and also supports exporting in pkcs12 (p12), keystore, pem and other formats.;In terms of hosting, multiple trust models can be adapted to meet the diverse needs of enterprise users.;In terms of usage, support for private key signature, public key encryption, etc.。
+This component supports a variety of standard protocols. For generation, it supports random number generation, mnemonic generation, and key derivation. For storage, it supports threshold sharding and recovery, as well as export in pkcs12 (p12), keystore, pem and other formats. For custody, it can adapt to multiple trust models to meet the diverse needs of enterprise users. For usage, it supports signing with the private key, encrypting with the public key, and more.
-The private key management component also provides full support for state secrets.。
+The private key management component also fully supports the Chinese national cryptography (SM) algorithms.
![](../../../../2.x/images/governance/MCGF/governance_key.png)
@@ -116,12 +116,12 @@ Please refer to
- [Documentation](https://governance-doc.readthedocs.io/zh_CN/latest/docs/WeBankBlockchain-Governance-Key/index.html)
- [Quick Start](https://governance-doc.readthedocs.io/zh_CN/latest/docs/WeBankBlockchain-Governance-Key/corequickstart.html)
-### WeBankBlockchain-Governance-Cert Certificate Management Components
-Provides a common solution for the full lifecycle management of certificates such as certificate generation, validation, and sub-certificate requests.。
+### WeBankBlockchain-Governance-Cert Certificate Management Component
+Provides a common solution for the full lifecycle management of certificates, covering certificate generation, validation, and sub-certificate requests.
-The certificate management component provides the ability to issue, verify, reset, revoke, export and host multi-level certificates in the X509 standard, covering the full life cycle of certificates, and supports a variety of signature algorithms, such as SHA256WITHRSA, SHA256WITHECDSA, SM3WITHSM2 and other signature algorithms, as well as state secret support.。
+The certificate management component can issue, verify, reset, revoke, export and custody multi-level certificates under the X.509 standard, covering the full certificate life cycle, and supports a variety of signature algorithms such as SHA256WITHRSA, SHA256WITHECDSA and SM3WITHSM2, including the Chinese national cryptography algorithms.
-Components include cert-toolkit and cert-mgr two modules, cert-toolkit provides basic capabilities such as certificate generation. It can be used as an independent toolkit.-mgr based on cert-toolkit, which provides the ability to host certificates and standardizes the issuance process.。
+The component includes two modules, cert-toolkit and cert-mgr. cert-toolkit provides basic capabilities such as certificate generation and can be used as an independent toolkit; cert-mgr, built on cert-toolkit, provides certificate custody and standardizes the issuance process.
![](../../../../2.x/images/governance/MCGF/governance_cert.png)
@@ -136,48 +136,48 @@ Please refer to
###Private key management scenario
Private key is indispensable in the design system of block chain。But the private key itself is difficult to understand, difficult to use, more difficult to keep, the management cost is huge, seriously weakened the use of blockchain experience。
-An effective tool for private key management in the actual scenario of the existing blockchain is still missing.。Private key management is generally difficult, high learning costs, poor user experience and other issues.。
+Real-world blockchain deployments still lack an effective tool for private key management, which generally suffers from high difficulty, steep learning costs, and a poor user experience.
-The private key management component provides a series of rich and independent private key management methods, and users can choose the appropriate solution according to their needs.。
+The private key management component provides a rich set of independent private key management methods, from which users can choose the solution that fits their needs.
-**Private key generation**: Users can use mnemonic methods to generate。On the one hand, mnemonic words are composed of words, which are relatively easy to remember and reduce the difficulty of memorizing and expressing。On the other hand, if you use separate private keys for different scenarios, it will increase the cost of memory and the risk of loss, at this time you can use the private key derivation function, users only need to keep the root private key, in different scenarios the root private key will derive different sub-private keys.。
+**Private key generation**: Users can generate keys from mnemonics. On the one hand, mnemonics are composed of ordinary words, which are relatively easy to remember and express. On the other hand, keeping a separate private key for every scenario raises the cost of memorization and the risk of loss; here the key derivation function helps: the user keeps only the root private key, and a different sub-key is derived from it for each scenario.
-**Private key hosting**After obtaining the private key, you can choose to export it to a format such as keystore or pkcs12 after password encryption, or you can hand it over to an enterprise organization for hosting.;You can also choose to split into several sub-slices and distribute them to different devices for storage.。
+**Private key custody**: After obtaining the private key, you can encrypt it with a password and export it in a format such as keystore or pkcs12, or entrust it to an enterprise custodian; you can also split it into several shares and distribute them to different devices for storage.
-**Private key usage**After obtaining the private key, the user can use the private key to sign transactions, use the public key to encrypt the private key to decrypt, etc.。
+**Private key usage**: With the private key, the user can sign transactions with it, decrypt what was encrypted with the corresponding public key, and so on.
### Account Governance Scenarios
-The private key itself is easy to lose and
leak.。Economic losses due to loss of private keys are common。Driven by huge economic interests, security attacks and thefts of private keys are also emerging.。How to reset the user's private key and protect the user's asset security is the bottom line of blockchain promotion。 +The private key itself is easy to lose and leak。Economic losses due to loss of private keys are common。Driven by huge economic interests, security attacks and thefts of private keys are also emerging。How to reset the user's private key and protect the user's asset security is the bottom line of blockchain promotion。 -The account governance component is designed to provide a self-consistent account governance mechanism based on smart contracts to achieve the effect of private key changes without changing identity.。The account governance component supports both the meta-governance of the Alliance Chain Governance Committee and governance scenarios based on the specific business applications of the Alliance Chain.。 +The account governance component is designed to provide a self-consistent account governance mechanism based on smart contracts to achieve the effect of private key changes without changing identity。The account governance component supports both the meta-governance of the Alliance Chain Governance Committee and governance scenarios based on the specific business applications of the Alliance Chain。 -Alliance Chain Governance Board Account Governance: There is a unique risk in traditional centralized solutions。In the alliance chain, a polycentric governance committee is often used to avoid a single point of risk。Members of the Alliance Chain Governance Committee can rely on governance contracts to perform management functions and vote and vote on matters.。 +Alliance Chain Governance Board Account Governance: There is a unique risk in traditional centralized solutions。In the alliance chain, a polycentric governance committee is often used to avoid a single point of risk。Members of the 
Alliance Chain Governance Committee can rely on governance contracts to perform management functions and vote and vote on matters。 -However, there is still a risk of disclosure or loss of private keys associated with committee members。The account governance component can be applied to the account governance of the Alliance Chain Governance Committee, and the accounts of the Alliance Chain Governance Committee members are also managed by the account governance component.。 +However, there is still a risk of disclosure or loss of private keys associated with committee members。The account governance component can be applied to the account governance of the Alliance Chain Governance Committee, and the accounts of the Alliance Chain Governance Committee members are also managed by the account governance component。 -Blockchain depository business account governance: Users can use the current private key to open an account in the account governance component to generate an internal identity.。The business system can rely on this internal identity, for example, in a depository business contract, the record of the data is bound to that internal identity.。 +Blockchain depository business account governance: Users can use the current private key to open an account in the account governance component to generate an internal identity。The business system can rely on this internal identity, for example, in a depository business contract, the record of the data is bound to that internal identity。 -When you need to modify the private key, you can modify the private key by voting through the associated account or governance committee, and apply for binding the old identity with the new private key, so that you can continue to operate the old identity with the new private key, while the old private key is invalidated.。 +When you need to modify the private key, you can modify the private key by voting through the associated account or governance committee, and apply for binding the old 
identity with the new private key, so that you can continue to operate the old identity with the new private key, while the old private key is invalidated。 ### Permission governance scenario -In application development, the lack of a security mechanism will inevitably have serious consequences.。On the one hand, blockchain applications need to refine security access control to the level of contract function granularity.;On the other hand, grouping permissions for different users to prevent loopholes such as transaction overreach and avoid being attacked by hackers is also a rigid need for blockchain application security.。 +In application development, the lack of a security mechanism will inevitably have serious consequences。On the one hand, blockchain applications need to refine security access control to the level of contract function granularity;On the other hand, grouping permissions for different users to prevent loopholes such as transaction overreach and avoid being attacked by hackers is also a rigid need for blockchain application security。 -The permission governance component provides business permission governance tools, including grouping information for different accounts and permissions for different groups.。Permission configuration meets various requirements, allowing developers to quickly integrate permission control functions for their smart contract applications.。Typical functions are as follows: +The permission governance component provides business permission governance tools, including grouping information for different accounts and permissions for different groups。Permission configuration meets various requirements, allowing developers to quickly integrate permission control functions for their smart contract applications。Typical functions are as follows: - **Account Grouping**You can group account addresses and set permissions for the group to reuse the group。 - **Black and White List Mode**: Supports two permission modes of black and white 
lists. Administrators or governance committees can set a function to be accessed only by members of a group, or only allow accounts outside the group to access。 -- **Cross-Contract**Allows you to configure permissions across contracts. For example, you can set a group member to be prohibited by functions in multiple contracts at the same time.。 -- **Lightweight Access**The business contract does not need to know these complex permission configurations, but only needs to call the interception interface of the permission contract in its own function. When the user calls the function, the business contract will automatically submit the call information context to the permission system for judgment and interception.。 +- **Cross-Contract**Allows you to configure permissions across contracts. For example, you can set a group member to be prohibited by functions in multiple contracts at the same time。 +- **Lightweight Access**The business contract does not need to know these complex permission configurations, but only needs to call the interception interface of the permission contract in its own function. 
When the user calls the function, the business contract will automatically submit the call information context to the permission system for judgment and interception。 ### Certificate Management Scenarios -Certificate is the cornerstone of network security in the enterprise authentication management of the alliance chain。The disadvantages of certificate operation and use experience will endanger the participants of the entire alliance chain network, affecting mutual trust and business security.。 +Certificate is the cornerstone of network security in the enterprise authentication management of the alliance chain。The disadvantages of certificate operation and use experience will endanger the participants of the entire alliance chain network, affecting mutual trust and business security。 -For example, FISCO BCOS network adopts CA-oriented admission mechanism, uses the certificate format of x509 protocol, supports any multi-level certificate structure, and ensures information confidentiality, authentication, integrity and non-repudiation.。 +For example, FISCO BCOS network adopts CA-oriented admission mechanism, uses the certificate format of x509 protocol, supports any multi-level certificate structure, and ensures information confidentiality, authentication, integrity and non-repudiation。 -The certificate management component provides a solution for certificate lifecycle management, standardizes the certificate issuance process, supports certificate hosting, and supports multiple signature algorithms for personal or enterprise use.。Take certificate management and toolkit usage as an example: +The certificate management component provides a solution for certificate lifecycle management, standardizes the certificate issuance process, supports certificate hosting, and supports multiple signature algorithms for personal or enterprise use。Take certificate management and toolkit usage as an example: -**On-chain node admission certificate management**: The issuance of 
certificates for nodes on the chain is completed by the certificate management component, which can be integrated or deployed independently, and the service is managed by the authority.。 +**On-chain node admission certificate management**: The issuance of certificates for nodes on the chain is completed by the certificate management component, which can be integrated or deployed independently, and the service is managed by the authority。 -During chain initialization, the deployer can call the interface to complete the generation of the root certificate。The new authority or node can query the root certificate and submit a sub-certificate request through the query interface provided by the certificate management component.。The root certificate manager can choose to issue sub-certificates from the list of requests through the query。Through the certificate management component for certificate management, you can standardize the issuance process, improve efficiency.。 +During chain initialization, the deployer can call the interface to complete the generation of the root certificate。The new authority or node can query the root certificate and submit a sub-certificate request through the query interface provided by the certificate management component。The root certificate manager can choose to issue sub-certificates from the list of requests through the query。Through the certificate management component for certificate management, you can standardize the issuance process, improve efficiency。 -**Certificate Toolkit Use**: cert in the certificate management component-The toolkit can be referenced in the project as a standalone JAVA toolkit instead of the command line to complete the generation and issuance of certificates.。Enterprise or personal projects can integrate certificate management components as a certificate issuance toolkit。 +**Certificate Toolkit Use**The cert-toolkit in the certificate management component can be referenced in the project as an independent JAVA 
toolkit instead of the command line to complete the generation and issuance of certificates。Enterprise or personal projects can integrate certificate management components as a certificate issuance toolkit。 diff --git a/3.x/en/docs/components/index.md b/3.x/en/docs/components/index.md index 04d18cac3..c5e3073c1 100644 --- a/3.x/en/docs/components/index.md +++ b/3.x/en/docs/components/index.md @@ -4,24 +4,24 @@ Tags: "blockchain middleware platform" "graphical blockchain management tool" "" The FISCO BCOS community has access to a wealth of open source blockchain components: -- **Graphical blockchain management tool**: WeBankBlockchain WeBASE(WeBank Blockchain Application Software Extension, WBC-WeBASE) is a set of management FISCO-Toolset for the BCOS Alliance Chain。WBC-WeBASE provides a graphical management interface that shields the complexity of the underlying blockchain, reduces the threshold for blockchain use, and greatly improves the development efficiency of blockchain applications, including subsystems such as node front, node management, transaction links, data export, and web management platforms.。 +- **Graphical blockchain management tool**: WeBankBlockchain WeBASE(WeBank Blockchain Application Software Extension, WBC-WeBASE) is a set of tools for managing the FISCO-BCOS alliance chain。WBC-WeBASE provides a graphical management interface, shielding the complexity of the underlying blockchain, reducing the threshold for blockchain use, and greatly improving the development efficiency of blockchain applications, including subsystems such as node front, node management, transaction links, data export, and web management platforms。 *** ------ -- **Common Components for Data Governance**: The full name is "WeBankBlockchain."-Data Data Governance Common Components, "a stable, efficient, and secure blockchain data governance component solution that seamlessly adapts to the underlying FISCO BCOS blockchain platform.。It consists of the Data Export 
component(Data-Export), Data Warehouse Components(Data-Stash)Data Reconciliation Component(Data-Reconcile)These three independent, pluggable, flexible assembly components, out of the box, flexible and convenient, easy to secondary development。 +- **Common Components for Data Governance**: It is a stable, efficient, and secure blockchain data governance component solution that can seamlessly adapt to the underlying platform of the FISCO BCOS blockchain。It consists of the Data Export component(Data-Export), Data Warehouse Components(Data-Stash)Data Reconciliation Component(Data-Reconcile)These three independent, pluggable, flexible assembly components, out of the box, flexible and convenient, easy to secondary development。 *** ------ -- **Blockchain multi-party collaboration governance component**: WeBankBlockchain-The Governance blockchain multi-party collaborative governance component is a lightweight, easy-to-use, common scenario and one-stop blockchain governance component solution.。 First open source account governance components(Governance-Account), Permission Governance Components(Governance-Auth)Private key management component(Governance-Key)and certificate management components (Governance-Cert)。The above components all provide deliverables such as deployable smart contract code, easy-to-use SDK and reference landing practice Demo.。 +- **Blockchain multi-party collaboration governance component**: WeBankBlockchain-Governance is a lightweight, easy-to-use, common scenario and one-stop blockchain governance component solution。 First open source account governance components(Governance-Account), Permission Governance Components(Governance-Auth)Private key management component(Governance-Key)and Certificate Management Components (Governance-Cert)。The above components all provide deliverables such as deployable smart contract code, easy-to-use SDK and reference landing practice Demo。 *** ------ -- **Blockchain Application Development Components**: 
WeBankBlockchain-SmartDev application development components include an open, lightweight set of development components that cover the development, debugging, and application development of smart contracts, including the smart contract library (SmartDev-Contract), Smart Contract Compilation Plug-in (SmartDev-SCGP) and application development scaffolding (SmartDev-Scaffold)。Developers can freely choose the corresponding development tools according to their own situation to improve development efficiency.。 +- **Blockchain Application Development Components**: The WeBankBlockchain-SmartDev application development component includes an open and lightweight set of development components covering the development, debugging, and application development of smart contracts, including the SmartDev-Contract, SmartDev-SCGP, and SmartDev-Scaffold。Developers can freely choose the corresponding development tools according to their own situation to improve development efficiency。 ---------- diff --git a/3.x/en/docs/components/smartdev_index.md b/3.x/en/docs/components/smartdev_index.md index 150a5e60d..ccf679684 100644 --- a/3.x/en/docs/components/smartdev_index.md +++ b/3.x/en/docs/components/smartdev_index.md @@ -1,17 +1,17 @@ # Blockchain Application Development Components -Tag: "WeBankBlockchain-SmartDev "" Application Development "" Common Components "" Smart Contract Library "" Smart Contract Compilation Plug-in "" Application Development Scaffolding " +Tags: "WeBankBlockchain-SmartDev" "Application Development" "Common Components" "Smart Contract Library" "Smart Contract Compilation Plugin" "Application Development Scaffolding" " ---- ## Component positioning -After more than ten years of development, blockchain technology has gradually taken root in various industries.。But at the same time, from a technical point of view, blockchain application development still has a high threshold, there are many pain points, the user experience in all aspects of application development 
needs to be improved.。 +After more than ten years of development, blockchain technology has gradually taken root in various industries。But at the same time, from a technical point of view, blockchain application development still has a high threshold, there are many pain points, the user experience in all aspects of application development needs to be improved。 -WeBankBlockchain-The original intention of SmartDev application development components is to help developers develop block chain applications efficiently and quickly.。SmartDev includes a set of open, lightweight development components, covering smart contract development, debugging, application development and other aspects, developers can freely choose the appropriate development tools according to their own situation, improve development efficiency.。 +The original intention of WeBankBlockchain-SmartDev application development component is to help developers develop block chain applications efficiently and quickly in an all-round way。SmartDev includes a set of open, lightweight development components, covering smart contract development, debugging, application development and other aspects, developers can freely choose the appropriate development tools according to their own situation, improve development efficiency。 ## Design Objectives -After more than ten years of development, blockchain technology has gradually taken root in various industries.。But at the same time, from a technical point of view, blockchain application development still has a high threshold, there are many pain points, in the application development of all aspects of the user experience, efficiency and security needs to be improved.。 +After more than ten years of development, blockchain technology has gradually taken root in various industries。But at the same time, from a technical point of view, blockchain application development still has a high threshold, there are many pain points, in the application development of all aspects of 
the user experience, efficiency and security needs to be improved。 In the community, we often hear many questions about the development of blockchain applications: How to transfer account address and string to each other in solidity code? @@ -24,25 +24,25 @@ Is it possible to provide a blockchain application code generator that is easy t How can programming Xiaobai quickly get started with blockchain application development? ... -These issues are both contract development-related and application development-related.。Based on such scenarios, combined with their own practical experience, WeBank Blockchain officially open source blockchain application development component WeBankBlockchain.-SmartDev hopes to improve the development efficiency of blockchain applications from all aspects of blockchain application development, and help developers become "10 times engineers" in blockchain application development.。Currently, the entire component is developed based on the solidity language。Recently, Weizhong Bank's blockchain has also opened up webankblockchain.-liquid (hereinafter referred to as WBC-Liquid) contract language, we will also adapt to WBC in the future.-Liquid Language。 +These issues are both contract development-related and application development-related。Based on such scenarios and combined with its own practical experience, WeBankBlockchain-SmartDev, a blockchain application development component of WeBank, is officially open-sourced. 
It is expected to start from all aspects of blockchain application development to improve the development efficiency of blockchain applications in multiple dimensions and help developers become "10 times engineers" in blockchain application development。Currently, the entire component is developed based on the solidity language。Recently, the WeBank blockchain has also open-sourced the webankblockchain-liquid (hereinafter referred to as WBC-Liquid) contract language, and we will also adapt the WBC-Liquid language in the future。 -Blockchain application development component WeBankBlockchain-SmartDev's original intention is to create a low-code development of the component library, all-round help developers efficient, agile development of blockchain applications.。WeBankBlockchain-SmartDev includes a set of open, lightweight development components, covering contract development, compilation, application development and other aspects, developers can choose the appropriate development tools according to their own situation, improve development efficiency.。 +The original intention of WeBankBlockchain-SmartDev is to create a low-code component library to help developers develop blockchain applications efficiently and quickly。WeBankBlockchain-SmartDev includes a set of open, lightweight development components, covering contract development, compilation, application development and other aspects, developers can choose the appropriate development tools according to their own situation, improve development efficiency。 -From the perspective of contract development, for commonly used functions, there is no need to repeat the wheel, just quote on demand, refer to the code in the "smart contract library," you can introduce the corresponding functions, for the efficiency and safety of contract development escort.。For non-basic features, such as business scenarios, we also provide code templates for reuse.。 +From the perspective of contract development, for commonly used functions, 
there is no need to repeat the wheel, just quote on demand, refer to the code in the "smart contract library," you can introduce the corresponding functions, for the efficiency and safety of contract development escort。For non-basic features, such as business scenarios, we also provide code templates for reuse。 -From the perspective of contract compilation, for blockchain applications under development, you no longer need to rely on the console to compile the contract code, just use the contract gradle compilation plug-in to compile in place, and you can immediately get abi, bin and java contracts.。These compilations are exported directly to the Java project, eliminating the step of copying and providing a fast, silky experience like developing native Java programs。 +From the perspective of contract compilation, for blockchain applications under development, you no longer need to rely on the console to compile the contract code, just use the contract gradle compilation plug-in to compile in place, and you can immediately get abi, bin and java contracts。These compilations are exported directly to the Java project, eliminating the step of copying and providing a fast, silky experience like developing native Java programs。 -From the perspective of application development, from smart contracts to project construction, there is a lot of mechanical and repetitive work, such as creating projects, introducing dependencies, writing configuration code, accessing smart contracts, and writing related entity classes.。By contrast, via WeBankBlockchain-SmartDev, developers can choose application development scaffolding。Scaffolding automatically generates project works based on smart contracts。The project already contains the above logic code, developers only need to continue to add business logic code based on the project, focusing on their own business.。 +From the perspective of application development, from smart contracts to project construction, there is a lot of mechanical 
and repetitive work, such as creating projects, introducing dependencies, writing configuration code, accessing smart contracts, and writing related entity classes。By contrast, with WeBankBlockchain-SmartDev, developers can choose application development scaffolding。Scaffolding automatically generates project works based on smart contracts。The project already contains the above logic code, developers only need to continue to add business logic code based on the project, focusing on their own business。 ![](../../../../2.x/images/governance/SmartDev/compare.png) ## Component Introduction -SmartDev includes a set of open, lightweight development components that cover the development, debugging, and application development of smart contracts, including the smart contract library (SmartDev-Contract), Smart Contract Compilation Plug-in (SmartDev-SCGP) and application development scaffolding (SmartDev-Scaffold)。Developers can freely choose the corresponding development tools according to their own situation to improve development efficiency.。 +SmartDev includes a set of open and lightweight development components, covering the development, debugging, and application development of smart contracts, including the SmartDev-Contract, SmartDev-SCGP, and SmartDev-Scaffold。Developers can freely choose the corresponding development tools according to their own situation to improve development efficiency。 ![](../../../../2.x/images/governance/SmartDev/smartdev_overview.png) -### SmartDev-Contract Smart Contract Library -Solidity Smart Contract Code Base。Contains basic types, data structures, common functions, upper-level business and other smart contract libraries.。Users can reference and reuse according to actual needs.。 +### SmartDev - Contract Smart Contract Library +Solidity Smart Contract Code Base。Contains basic types, data structures, common functions, upper-level business and other smart contract libraries。Users can reference and reuse according to actual needs。 
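Referencing such a library is a one-line import plus a library call. The sketch below is illustrative only: the import path is an assumption for a local copy of the library file, and the `add`/`mul` function names should be checked against the SmartDev-Contract reference before use:

```solidity
pragma solidity ^0.6.10;

// Assumed local path to the SmartDev-Contract math library file
import "./LibSafeMathForUint256Utils.sol";

contract MathDemo {
    // Overflow-checked arithmetic: add/mul revert on overflow
    // instead of silently wrapping around.
    function calc(uint256 a, uint256 b) public pure returns (uint256) {
        uint256 sum = LibSafeMathForUint256Utils.add(a, b);
        return LibSafeMathForUint256Utils.mul(sum, 2);
    }
}
```

The same pattern applies to the other libraries in the collection: import the library file, then call its functions directly or bind them with a `using ... for` declaration.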
![](../../../../2.x/images/governance/SmartDev/contract_lib.png) @@ -58,7 +58,7 @@ Please refer to ### SmartDev-SCGP (Solidity Compiler Gradle Plugin) Smart Contract Compilation Plugin -The gradle plug-in that compiles the Solidity smart contract code into Java code can compile the smart contract in the project, generate the corresponding Java file, and automatically copy it to the corresponding package directory.。 +The gradle plug-in that compiles the Solidity smart contract code into Java code can compile the smart contract in the project, generate the corresponding Java file, and automatically copy it to the corresponding package directory。 ![](../../../../2.x/images/governance/SmartDev/compile_plugin.png) @@ -73,7 +73,7 @@ Please refer to - [Quick Start](https://smartdev-doc.readthedocs.io/zh_CN/latest/docs/WeBankBlockchain-SmartDev-SCGP/quick_start.html) ### SmartDev-Scaffold Application Development Scaffold -Based on the configuration of the smart contract file, automatically generate the scaffolding code of the application project, including the smart contract corresponding to the entity class, service class and other content, help users only need to modify and write a small amount of code, you can implement an application, greatly simplifying the development of smart contracts.。 +Based on the configuration of the smart contract file, automatically generate the scaffolding code of the application project, including the smart contract corresponding to the entity class, service class and other content, help users only need to modify and write a small amount of code, you can implement an application, greatly simplifying the development of smart contracts。 ![](../../../../2.x/images/governance/SmartDev/scaffold.png) @@ -92,25 +92,25 @@ Please refer to ### Scenario 1: Smart Contract Development -In the development of smart contracts, from the basic four operations to the upper-level business scenarios, you can use mature, reusable libraries.。 +In the development 
of smart contracts, from the basic four operations to the upper-level business scenarios, you can use mature, reusable libraries。 -Take the four-rule operation as an example, you need to determine whether there is a risk of overflow, at which point you can use the math-related library LibSafeMathForUint256Utils.。 +Take the four-rule operation as an example, you need to determine whether there is a risk of overflow, at which point you can use the math-related library LibSafeMathForUint256Utils。 -Take the data structure as an example, in solidity, the key of the mapping type cannot be iterated, at this time, if you need to use the mapping of the key iteration, you can use the mapping-related library LibBytesMap.。 +Take the data structure as an example, in solidity, the key of the mapping type cannot be iterated, at this time, if you need to use the mapping of the key iteration, you can use the mapping-related library LibBytesMap。 -For example, if you want to introduce cryptographic functions such as hashing and signature verification, you can use the Crypto library.。 +For example, if you want to introduce cryptographic functions such as hashing and signature verification, you can use the Crypto library。 -Take the business scenario as an example, if you want to implement the certificate storage function, you can refer to the scenario template Evidence, which incorporates the relevant implementation, which has the effect of throwing bricks and mortar.。 +Take the business scenario as an example, if you want to implement the certificate storage function, you can refer to the scenario template Evidence, which incorporates the relevant implementation, which has the effect of throwing bricks and mortar。 ### Scenario 2: Contract modification and debugging -In the process of blockchain application development and debugging, it is usually necessary to use abi, bin, java contract, etc. 
in the project, and debug accordingly based on these contents.。If the contract needs to be recompiled for reasons such as adjustments, you don't have to copy the contract into the console to compile it, just run the corresponding gradle directive to generate a new compilation.。At the same time, these compilations are directly embedded in the project.。As shown in the following figure, after the HelloWorld contract is compiled, the resulting compiled product example: +In the process of blockchain application development and debugging, it is usually necessary to use abi, bin, java contract, etc. in the project, and debug accordingly based on these contents。If the contract needs to be recompiled for reasons such as adjustments, you don't have to copy the contract into the console to compile it, just run the corresponding gradle directive to generate a new compilation。At the same time, these compilations are directly embedded in the project。As shown in the following figure, after the HelloWorld contract is compiled, the resulting compiled product example: ![](../../../../2.x/images/governance/SmartDev/example.png) ### Scenario 3: Blockchain application development -If you have written a smart contract, you need to develop a web project that provides a rest interface based on the smart contract.。In this case, the user can drag the contract into the scaffold and generate the project with one click。The following figure shows the generated sample project, including the necessary configuration classes, DAO (Data Access Object) related code。Developers only need to make the necessary configuration of the project, and add the corresponding controller and other code, you can easily achieve the above requirements。 +If you have written a smart contract, you need to develop a web project that provides a rest interface based on the smart contract。In this case, the user can drag the contract into the scaffold and generate the project with one click。The following figure shows the 
generated sample project, including the necessary configuration classes and DAO (Data Access Object) code。Developers only need to configure the project and add the corresponding controller code to meet the above requirements。 diff --git a/3.x/en/docs/components/webase.md b/3.x/en/docs/components/webase.md index 33ba1ab8b..1c15afa69 100644 --- a/3.x/en/docs/components/webase.md +++ b/3.x/en/docs/components/webase.md @@ -1,11 +1,11 @@ # Graphical blockchain management tool -Tag: "WBC-WeBASE "" Middleware Platform "" Node Management "" System Monitoring "" System Management "" +Tags: "WBC-WeBASE" "Middleware Platform" "Node Management" "System Monitoring" "System Management" ---- -WeBank's open source self-developed blockchain middleware platform - [WeBankBlockchain WeBASE(WeBank Blockchain Application Software Extension, WBC-WeBASE)](https://webasedoc.readthedocs.io/zh_CN/lab/) It is a middleware platform built between blockchain applications and FISCO BCOS nodes.。WBC-WeBASE shields the complexity of the underlying blockchain, reduces the threshold for blockchain use, and greatly improves the development efficiency of blockchain applications, including subsystems such as node front, node management, transaction links, data export, and web management platforms.。Users can select subsystems for deployment according to their business needs, and can further experience the rich interactive experience, visual smart contract development environment IDE。 +WeBank's open-source, self-developed blockchain middleware platform - [WeBankBlockchain WeBASE(WeBank Blockchain Application Software Extension, WBC-WeBASE)](https://webasedoc.readthedocs.io/zh_CN/lab/) is a middleware platform built between blockchain applications and FISCO BCOS nodes。WBC-WeBASE masks the complexity of the underlying blockchain, lowers the barrier to using blockchain, and greatly improves the development efficiency of blockchain applications。It includes subsystems such as the node front, node management, transaction links, data export, and the web management platform。Users can select subsystems to deploy according to their business needs, and can further experience the rich, interactive, visual smart contract development IDE。 -WBC-The WeBASE management platform is comprised of four WBCs-WeBASE subsystem consists of a set of management FISCO-Toolset for the BCOS Alliance Chain。WBC-WeBASE lab version(lab branch)FISCO BCOS 3.X version has been adapted, for more information, please refer to [WBC-WeBASE Management Platform User Manual](https://webasedoc.readthedocs.io/zh_CN/lab/) 。 +The WBC-WeBASE management platform consists of four WBC-WeBASE subsystems, forming a toolset for managing FISCO BCOS consortium chains。The WBC-WeBASE lab version (lab branch) has been adapted to FISCO BCOS 3.x. For more information, please refer to the [WBC-WeBASE Management Platform User Manual](https://webasedoc.readthedocs.io/zh_CN/lab/)。 ## 1. Main functions @@ -18,7 +18,7 @@ WBC-The WeBASE management platform is comprised of four WBCs-WeBASE subsystem co 7. Transaction Audit 8. Account Management -## 2. WBC-Construction of WeBASE Management Platform +## 2. Construction of the WBC-WeBASE Management Platform For building, please refer to [One-click Deployment Document](https://webasedoc.readthedocs.io/zh_CN/lab/docs/WeBASE/install.html)。 @@ -27,13 +27,13 @@ For building, please refer to [One-click Deployment Document](https://webasedoc.
### 2.1 [WBC-WeBASE Quick Start](https://webasedoc.readthedocs.io/zh_CN/lab/docs/WeBASE-Install/developer.html) -Developers can edit, compile, deploy, and debug contracts through the contract editor of the node pre-service by building the node and the node pre-service.。Build can refer to [Quick Start Document](https://webasedoc.readthedocs.io/zh_CN/lab/docs/WeBASE-Install/developer.html)。 +By setting up a node and the node front service, developers can edit, compile, deploy, and debug contracts through the front service's contract editor。For setup, refer to the [Quick Start Document](https://webasedoc.readthedocs.io/zh_CN/lab/docs/WeBASE-Install/developer.html)。 ![](../../../../2.x/images/webase/webase-front.png) ### 2.2 [WBC-WeBASE Management Console](https://webasedoc.readthedocs.io/zh_CN/lab/docs/WeBASE/install.html) -by WBC-WeBASE one-click script, you can build a WBC-The basic environment of WeBASE makes it easy for users to experience core functions such as block browsing, node viewing, contract IDE, system management, node monitoring, transaction auditing, and private key management.。For building, please refer to [One-click Deployment Document](https://webasedoc.readthedocs.io/zh_CN/lab/docs/WeBASE/install.html)。![](../../../../2.x/images/webase/webase-web.png) +Through the WBC-WeBASE one-click script, you can build a basic WBC-WeBASE environment, making it easy for users to experience core functions such as block browsing, node viewing, the contract IDE, system management, node monitoring, transaction audit, and private key management。For building, please refer to the [One-click Deployment Document](https://webasedoc.readthedocs.io/zh_CN/lab/docs/WeBASE/install.html)。![](../../../../2.x/images/webase/webase-web.png) ### 2.3 [WBC-WeBASE Other](https://webasedoc.readthedocs.io/zh_CN/lab) diff --git a/3.x/en/docs/contract_develop/Liquid_develop.md b/3.x/en/docs/contract_develop/Liquid_develop.md index 5d00fc47b..9c8fb0804 100644 ---
a/3.x/en/docs/contract_develop/Liquid_develop.md +++ b/3.x/en/docs/contract_develop/Liquid_develop.md @@ -1,19 +1,19 @@ # 3. WBC-Liquid Contract Development -Tags: "Develop first app" "WBC-Liquid "" Contract Development "" Blockchain Application "" WASM "" +Tags: "Develop first app" "WBC-Liquid" "Contract development" "Blockchain app" "WASM" --- FISCO BCOS supports implementing smart contracts in several ways -* [Solidity](https://solidity.readthedocs.io/en/latest/)The contract programming language used in the Ethereum ecosystem, FISCO BCOS expands a series of functions for the alliance chain, and is the most commonly used way to develop smart contracts on FISCO BCOS.。 +* [Solidity](https://solidity.readthedocs.io/en/latest/): the contract programming language of the Ethereum ecosystem; FISCO BCOS extends it with a series of features for consortium chains, and it is the most common way to develop smart contracts on FISCO BCOS。 * [Pre-Compiled Contract](./c++_contract/add_precompiled_impl.md): customized smart contracts built directly into blockchain nodes and implemented in C++; they can call various internal node interfaces directly and suit complex scenarios, but have a high barrier to entry。 -* [WBC-Liquid](./Liquid_develop.md)The Rust-based smart contract programming language developed by the micro-blockchain, with the help of Rust language features, can achieve more powerful programming functions than the Solidity language.。 +* [WBC-Liquid](./Liquid_develop.md): a Rust-based smart contract programming language developed by WeBank Blockchain; with the help of Rust language features, it can provide more powerful programming capabilities than Solidity。 -WBC-Liquid is a Rust-based smart contract programming language developed by Microblockchain. With the help of Rust language features, it can achieve more powerful programming functions than Solidity language.
For related tutorials, see: +WBC-Liquid is a Rust-based smart contract programming language developed by WeBank Blockchain. With the help of Rust language features, it can provide more powerful programming capabilities than Solidity. For related tutorials, see: - Related Documentation: [Liquid Online Documentation](https://liquid-doc.readthedocs.io/zh_CN/latest/) - Related documents: [Rust language official tutorial](https://doc.rust-lang.org/book/) -- Related Documents: [Developing the First WBC _ Liquid Blockchain Application](../quick_start/wbc_liquid_application.md) +- Related documents: [Developing the first WBC-Liquid blockchain application](../quick_start/wbc_liquid_application.md) diff --git a/3.x/en/docs/contract_develop/c++_contract/add_precompiled_impl.md b/3.x/en/docs/contract_develop/c++_contract/add_precompiled_impl.md index 44977cfc2..9a7d3bfdf 100644 --- a/3.x/en/docs/contract_develop/c++_contract/add_precompiled_impl.md +++ b/3.x/en/docs/contract_develop/c++_contract/add_precompiled_impl.md @@ -3,23 +3,23 @@ Tags: "Precompiled Contracts" "Development Guide" "Blockchain Application Development" ---------- -This article takes the HelloWorld contract as an example to show you how to use the pre-compiled contract version of HelloWorld.。 +This article takes the HelloWorld contract as an example to show you how to implement and use a precompiled-contract version of HelloWorld。 ## Development premise -The precompiled contract is to use C.++To implement a smart contract, the developer must have a C++Basic development ability, familiar with CMake operation。 +Precompiled contracts are smart contracts implemented in C++; developers need basic C++ development skills and familiarity with CMake。 The following rules must be followed before developing a precompiled contract: -1.
Pre-compiled contracts are built into nodes and have a larger operational range than ordinary contracts, so the implementation must meet security audit requirements and must output critical storage write information in the log for information audit.。 -2. Before submitting the node code of the new precompiled contract, you must go through the code review of professional peers. For details, please refer to the FISCO BCOS code submission process.。 -3. The precompiled contract needs to agree on the write operation of the storage, so the execution result of the precompiled contract must be strongly consistent, and random numbers are not allowed to be used or indirectly referenced.。 -4. Multiple precompiled contracts should not share the same storage table, otherwise there may be inconsistent execution in multiple calls.。 -5. When pre-compiled contracts across versions have data compatibility issues, compatibility must be done.。 +1. Pre-compiled contracts are built into nodes and have a larger operational range than ordinary contracts, so the implementation must meet security audit requirements and must output critical storage write information in the log for information audit。 +2. Before submitting the node code of the new precompiled contract, you must go through the code review of professional peers. For details, please refer to the FISCO BCOS code submission process。 +3. The precompiled contract needs to agree on the write operation of the storage, so the execution result of the precompiled contract must be strongly consistent, and random numbers are not allowed to be used or indirectly referenced。 +4. Multiple precompiled contracts should not share the same storage table, otherwise there may be inconsistent execution in multiple calls。 +5. 
When precompiled contracts have data compatibility issues across versions, compatibility handling must be implemented。 ## step1 Defining the HelloWorld Interface -Let's first look at the Solidity version of the HelloWorld contract that we want to implement.。Solidity version of HelloWorld, there is a member name for storing data, two interfaces get(),set(string)for reading and setting the member variable respectively。 +Let's first look at the Solidity version of the HelloWorld contract that we want to implement。The Solidity HelloWorld has a member variable name for storing data and two interfaces, get() and set(string), for reading and setting that member variable respectively。 ```solidity pragma solidity>=0.6.10 <0.8.20; @@ -41,7 +41,7 @@ contract HelloWorld { } ``` -Solidity's interface calls are encapsulated as a transaction, where transactions that call read-only interfaces are not packaged into blocks, while write-interface transactions are packaged into blocks.。Since the underlying layer needs to determine the called interface and parse the parameters based on the ABI code in the transaction data, the interface needs to be defined first。The ABI interface rules for precompiled contracts are exactly the same as Solidity. When defining a precompiled contract interface, you usually need to define a Solidity contract with the same interface.**Interface Contract**。The interface contract needs to be used when calling the precompiled contract.。 +Solidity interface calls are encapsulated as transactions: transactions that call read-only interfaces are not packaged into blocks, while transactions that call write interfaces are。Since the underlying layer needs to determine the called interface and parse the parameters from the ABI encoding in the transaction data, the interface must be defined first。The ABI interface rules for precompiled contracts are exactly the same as Solidity's.
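To make the routing just described concrete — the node inspecting the head of the ABI-encoded call data to decide which interface was invoked — here is a minimal illustrative sketch in Python. This is not FISCO BCOS source: the selector constants and the raw-bytes "decoding" are simplifying assumptions (a real node derives selectors as the first 4 bytes of keccak256 of the function signature and ABI/Scale-decodes the payload):

```python
# Illustrative sketch only (not node code): dispatch a precompiled call by the
# 4-byte function selector at the head of the call data.
SEL_GET = bytes.fromhex("6d4ce63c")  # assumed selector for get()
SEL_SET = bytes.fromhex("4ed3885e")  # assumed selector for set(string)

storage = {"hello_key": "Hello, World!"}  # stands in for the on-chain table

def call(param: bytes) -> str:
    selector, payload = param[:4], param[4:]
    if selector == SEL_GET:   # read-only interface
        return storage["hello_key"]
    if selector == SEL_SET:   # write interface; a real node ABI-decodes here
        storage["hello_key"] = payload.decode()
        return ""
    raise ValueError("unknown interface selector")
```

The sketch only shows why the interface must be defined first: without the signature-derived selector, the handler cannot tell get() from set(string).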
When defining a precompiled contract interface, you usually need to define an **interface contract**: a Solidity contract with the same interface。The interface contract is needed when calling the precompiled contract。 ```solidity pragma solidity >=0.6.10 <0.8.20; @@ -54,9 +54,9 @@ contract HelloWorldPrecompiled{ ## step2 Design storage structure -When precompiled contracts involve storage operations, you need to determine the stored table information.(Table name and table structure. The stored data is abstracted into a table structure in FISCO BCOS.)。If variable storage is not involved in the contract, you can ignore this step。 +When precompiled contracts involve storage operations, you need to determine the table information to store (table name and table structure; in FISCO BCOS, stored data is abstracted into a table structure)。If the contract does not involve variable storage, you can skip this step。 -For HelloWorld, we design the following table。The table only stores a pair of key-value pairs. The key field is hello _ key, and the value field is hello _ value to store the corresponding string value.(string)Interface modification, through get()interface acquisition。 +For HelloWorld, we design the following table。It stores only one key-value pair: the key field is hello_key, and the value field is hello_value, which holds the string value modified through the set(string) interface and read through the get() interface。 | key | value | |-----------|----------------| @@ -136,7 +136,7 @@ else ### Parsing and returning parameters -The parameters when calling the contract are included in the _ param parameter of the call function. If it is a Solidity call, the Solidity ABI encoding is used.-Liquid (WBC)-Liquid) uses Scale encoding。 +The parameters passed when calling the contract are contained in the _param argument of the call function.
If it is a Solidity call, the Solidity ABI encoding is used; if it is a webankblockchain-liquid (WBC-Liquid for short) call, the Scale encoding is used。 PrecompiledCodec encapsulates interfaces for both encoding formats; you can use PrecompiledCodec directly。 @@ -203,11 +203,11 @@ set interface implementation ## step4 Assign and register a contract address -When FSICO BCOS 3.0 executes a transaction, the contract address is used to distinguish whether it is a pre-compiled contract, so after the pre-compiled contract is developed, it needs to be registered as the pre-compiled contract registration address at the bottom.。 +When FISCO BCOS 3.0 executes a transaction, the contract address is used to determine whether the target is a precompiled contract, so after a precompiled contract is developed, its address must be registered in the underlying node。 -The user-allocated address space is 0x5001-0xffff, the user needs to assign an unused address to the newly added precompiled contract.**Precompiled contract addresses must be unique and non-conflicting**。 +The user-allocated address space is 0x5001-0xffff, and the user needs to allocate an unused address for the newly added precompiled contract。**Precompiled contract addresses must be unique and non-conflicting**。 -Developers need to modify 'bcos-Executor / src / executor / TransactionExecutor.cpp 'file, insert the contract address and contract object instance into the' m _ constantPrecompiled 'Map in the initPrecompiled function, and register the HelloWorldPrecompiled contract as follows: +The developer needs to modify the 'bcos-executor/src/executor/TransactionExecutor.cpp' file, insert the contract address and contract object instance into the 'm_constantPrecompiled' map in the initPrecompiled function, and register the HelloWorldPrecompiled contract as follows: ```c++ auto helloPrecompiled = std::make_shared<HelloWorldPrecompiled>(m_hashImpl); @@ -216,7 +216,7 @@
m_constantPrecompiled->insert({"0000000000000000000000000000000000005001", std:: ## Step5 compiled source code -Refer to FISCO BCOS 3.x manual-> Get Executable Program-> [source code compilation](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/get_executable.html)。Note that the implementations of HelloWorldPrecompile.cpp and HelloWorldPrecompile.h need to be placed in the FISCO-BCOS / libprecompiled / extension directory。 +Refer to the FISCO BCOS 3.x User Manual -> Get executable program -> [source code compilation](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/get_executable.html)。Note that the implementations of HelloWorldPrecompiled.cpp and HelloWorldPrecompiled.h need to be placed in the FISCO-BCOS/libprecompiled/extension directory。 ## HelloWorld precompiled contract call @@ -274,4 +274,4 @@ currentAccount: 0x3977d248ce98f3affa78a800c4f234434355aa77 Hello World! ``` -Here, you can congratulate you on the smooth completion of the development of the HelloWorld precompiled contract, the development process of other precompiled contracts is the same.。 +Congratulations on completing the development of the HelloWorld precompiled contract; the development process for other precompiled contracts is the same。 diff --git a/3.x/en/docs/contract_develop/c++_contract/precompiled_contract_api.md b/3.x/en/docs/contract_develop/c++_contract/precompiled_contract_api.md index 4dadf0959..b1c693776 100644 --- a/3.x/en/docs/contract_develop/c++_contract/precompiled_contract_api.md +++ b/3.x/en/docs/contract_develop/c++_contract/precompiled_contract_api.md @@ -4,7 +4,7 @@ Tags: "precompiled contract" "interface" --- -FISCO BCOS 3.x follows the FISCO BCOS 2.0 version of the precompiled contract。In the future, we will also try to abstract the existing typical business scenarios and develop them into pre-compiled contract templates as the basic capability provided by the underlying layer to help users use
FISCO BCOS in their business faster and more conveniently.。 +FISCO BCOS 3.x carries over the precompiled contracts of FISCO BCOS 2.0。In the future, we will also try to abstract typical business scenarios into precompiled contract templates, provided as a basic capability of the underlying layer, to help users adopt FISCO BCOS in their business faster and more conveniently。 ## 1. SystemConfigPrecompiled @@ -15,7 +15,7 @@ FISCO BCOS 3.x follows the FISCO BCOS 2.0 version of the precompiled contract。 ### interface declaration -Take Solidity for example. +Take Solidity as an example ```solidity pragma solidity ^0.6.0; @@ -31,12 +31,12 @@ contract SystemConfigPrecompiled **Parameters:** -- key indicates the configuration item name. Currently supported parameters include 'tx _ count _ limit', 'tx _ gas _ lmit', 'consensus _ leader _ period', and 'compatibility _ version'。 -- value indicates the value of the corresponding configuration item. The default value of 'tx _ count _ limit' is 1000, and the minimum value is 1. The default value of 'consensus _ leader _ period' is 1, and the minimum value is 1. The default value of tx _ gas _ limit is 300000000, and the minimum value is 100000.。 +- key indicates the configuration item name. Currently supported parameters include 'tx_count_limit', 'tx_gas_limit', 'consensus_leader_period', and 'compatibility_version'。 +- value indicates the value of the corresponding configuration item.
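As an aside, the setValueByKey value constraints documented in this section can be checked mechanically before submitting a transaction; a hedged Python sketch follows (the limits mirror the defaults and minimums quoted here, but the negative return codes are placeholders chosen for the example, not the precompiled contract's real error codes):

```python
# Illustrative sketch (not node code): validate setValueByKey inputs against
# the documented minimums. Return codes are placeholders for the example.
LIMITS = {
    "tx_count_limit":          {"default": 1000,        "min": 1},
    "consensus_leader_period": {"default": 1,           "min": 1},
    "tx_gas_limit":            {"default": 300_000_000, "min": 100_000},
}

def check_config(key: str, value: int) -> int:
    """Return 0 on success, a negative placeholder code otherwise."""
    if key not in LIMITS:
        return -1  # unsupported configuration key (placeholder code)
    if value < LIMITS[key]["min"]:
        return -2  # value below the documented minimum (placeholder code)
    return 0
```

This mirrors the error-code style the precompiled interfaces use: 0 for success, a negative value for a rejected input.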
The default value of 'tx_count_limit' is 1000 (minimum 1), the default value of 'consensus_leader_period' is 1 (minimum 1), and the default value of 'tx_gas_limit' is 300000000 (minimum 100000)。 **Returns:** -- setValueByKey will be returned as an error code +- setValueByKey returns an error code | error code| Description| |:-------|:-------------------| @@ -47,11 +47,11 @@ contract SystemConfigPrecompiled **Parameters:** -- key indicates the configuration item name. Currently supported parameters include 'tx _ count _ limit', 'consensus _ leader _ period', 'consensus _ leader _ period', and 'compatibility _ version'。 +- key indicates the configuration item name. Currently supported parameters include 'tx_count_limit', 'consensus_leader_period', and 'compatibility_version'。 **Returns:** -- returns the specific value and the block height in effect +- Returns the specific value and the block height at which it took effect ### SDK support @@ -66,7 +66,7 @@ contract SystemConfigPrecompiled ### interface declaration -Take Solidity for example. +Take Solidity as an example ```solidity // SPDX-License-Identifier: Apache-2.0 @@ -82,15 +82,15 @@ contract ConsensusPrecompiled { ### Interface Description -- addSealer adds a consensus node, the parameter is the hexadecimal representation of the new node's public key, and sets the weight, which can only be a positive number。 -- addObserver Add an observation node or change the identity of an existing consensus node to an observation node。 -- remove a node.
If it is the last consensus node, it is not allowed to be deleted.。 -- setWeigh is used to set the weight of a consensus node。 -- Data stored in _ s _ consensus _ table。 +- addSealer adds a consensus node; the parameter is the hexadecimal representation of the new node's public key, together with the node's weight, which must be a positive number。 +- addObserver adds an observer node, or changes an existing consensus node into an observer node。 +- remove deletes a node; the last remaining consensus node cannot be deleted。 +- setWeight sets the weight of a consensus node。 +- Data is stored in the s_consensus table。 **Interface Return Description:** -- interfaces are returned as error codes + - Interfaces return error codes | error code| Description| | :----- | :--------------------- | @@ -124,19 +124,19 @@ struct TableInfo { string[] valueColumns; } -/ / The table management contract is static precompiled and has a fixed contract address. +// The table management contract is statically precompiled and has a fixed contract address abstract contract TableManager { / / Create a table and pass in TableInfo function createTable(string memory path, TableInfo memory tableInfo) public virtual returns (int32); - / / Create a KV table and enter the key and value field names. + // Create a KV table, passing in the key and value field names function createKVTable(string memory tableName, string memory keyField, string memory valueField) public virtual returns (int32); / / Use only when calling the Solidity contract function openTable(string memory path) public view virtual returns (address); / / Change table fields - / / Only new fields can be added, and fields cannot be deleted. The default value of new fields is blank and cannot be duplicated with the original fields. + // Only new fields can be added, and fields cannot be deleted.
The default value of a new field is blank, and it cannot duplicate an existing field function appendColumns(string memory path, string[] memory newColumns) public virtual returns (int32); / / Get table information @@ -146,17 +146,17 @@ abstract contract TableManager { ### Interface Description -- CreateTable and createKVTable create a table. The parameters are the table name, primary key column name, and other column names separated by commas.。 - - CreateTable table name allows letters, numbers, underscores, table name does not exceed 50 characters - - KeyField cannot start with an underscore. Allows letters, numbers, and underscores. The total length cannot exceed 64 characters. - - valueField cannot start with an underscore. Letters, numbers, and underscores are allowed. The single-field name does not exceed 64 characters. The total length of valueFields does not exceed 1024. - - valueFields and keyField cannot have duplicate fields -- AppendColumns adds table fields. The field requirements are the same as those for creating a table. -- OpenTable obtains the real address of the table, which is dedicated to the Solidity contract. -- desc reads the key and valueFiles of the table. +- createTable and createKVTable create a table; the parameters are the table name, the primary-key column name, and the other column names, separated by commas。 + - createTable: the table name allows letters, numbers, and underscores, and must not exceed 50 characters + - keyField cannot start with an underscore; letters, numbers, and underscores are allowed, and the total length cannot exceed 64 characters + - valueField cannot start with an underscore; letters, numbers, and underscores are allowed; a single field name must not exceed 64 characters, and the total length of valueFields must not exceed 1024 + - valueFields and keyField cannot contain duplicate fields +- appendColumns adds table fields.
The field requirements are the same as those for creating a table +- openTable obtains the real address of the table; it is used only by Solidity contracts +- desc reads the keyField and valueFields of the table - **Interface Return Description:** - - interfaces are returned as error codes + - Interfaces return error codes | error code| Description| | :----- | :---------------- | @@ -194,8 +194,8 @@ contract Crypto - `sm3`: calculate the hash of the specified data by using the national secret sm3 algorithm; - `keccak256Hash`: use the keccak256 algorithm to calculate the hash of specified data; -- `sm2Verify`: Signatures are verified using the sm2 algorithm'(publicKey, r, s)Valid. Verify the state secret account derived from the public key by returning 'true'. If the verification fails, return 'false' and all 0 addresses.; -- `curve25519VRFVerify`: Given the VRF input and the VRF public key, use the VRF algorithm based on the ed25519 curve to verify whether the VRF proof is valid, and if the VRF proof is verified successfully, return 'true' and the VRF random number derived from the proof.;Returns' if VRF attestation verification fails(false, 0)`。(Not currently supported) +- `sm2Verify`: verifies whether the sm2 signature '(publicKey, r, s)' is valid; on success, returns 'true' and the SM-crypto account derived from the public key; on failure, returns 'false' and an all-zero address; +- `curve25519VRFVerify`: given the VRF input and the VRF public key, uses the ed25519-curve-based VRF algorithm to verify whether the VRF proof is valid; if verification succeeds, returns 'true' and the VRF random number derived from the proof; if verification fails, returns '(false, 0)'。(Not currently supported) ## 5.
BFSPrecompiled @@ -229,22 +229,22 @@ contract BfsPrecompiled { ### Interface Description -- 'list ': The reference must be**absolute path**If it is a directory, the meta information of all files in the directory is returned.;Returns a single BfsInfo if it is a contract。 - - The absolute path cannot have special characters, the total length cannot exceed 56, and the total number of paths cannot exceed 6 levels - - If the input path is' link ', then the' ext 'field string will return the contract address and ABI corresponding to the soft link, the 0th is the contract address, and the first is the ABI string. +- 'list': the parameter must be an **absolute path**; if it is a directory, the metadata of all files in the directory is returned; if it is a contract, a single BfsInfo is returned。 + - The absolute path cannot contain special characters, its total length cannot exceed 56, and it cannot exceed 6 levels + - If the input path is a 'link', the 'ext' field returns the contract address and ABI of the soft link: index 0 is the contract address and index 1 is the ABI string -- 'mkdir ': the reference must be**absolute path**Create a directory file in the specified path. Multi-level creation is supported. If the creation fails, an error code will be returned.。 - - The absolute path cannot have special characters, the total length cannot exceed 56, and the total number of paths cannot exceed 6 levels +- 'mkdir': the parameter must be an **absolute path**; creates a directory at the specified path. Multi-level creation is supported.
If the creation fails, an error code is returned。 + - The absolute path cannot contain special characters, its total length cannot exceed 56, and it cannot exceed 6 levels -- 'link ': replace the function of CNS, create a contract alias, the created soft links are in the' / apps / 'directory。 - - The contract name cannot contain special characters, and a directory with the same contract name will be created under '/ apps', which will fail if there is a non-directory resource with the same name. +- 'link': replaces the CNS function by creating an alias for a contract; the soft links created are under the '/apps/' directory。 + - The contract name cannot contain special characters; a directory named after the contract is created under '/apps', which fails if a non-directory resource with the same name already exists - The version number cannot contain special characters, and a link resource of '/apps/contract name/version number' will be created in the '/apps/contract name' directory - '/apps/contract name/version number', as an absolute path, cannot exceed a total length of 56 - - The contract address must be real and in normal condition. + - The contract address must be real and in normal condition - ABI string does not exceed 16MB -- 'readlink ': Obtain the real address of the soft link. The parameter must be**absolute path** - - The absolute path cannot have special characters, the total length cannot exceed 56, and the total number of paths cannot exceed 6 levels +- 'readlink': obtains the real address of a soft link.
The parameter must be**absolute path** + -The absolute path cannot have special characters, the total length cannot exceed 56, and the total number of paths cannot exceed 6 levels | error code| Error Message| Error message / workaround| @@ -252,7 +252,7 @@ contract BfsPrecompiled { | 0 | Success | Success| | -53006 | Wrong file type| Appears when calling the BFS touch interface with the wrong file type| | -53005 | Wrong file path| This error occurs when calling the BFS interface to pass in the absolute path, the total length of the absolute path of the BFS cannot exceed 56, the total number of stages of the path cannot exceed 6, and it cannot contain special characters| -| -53003 | Failed to create folder| An exception occurs when creating a folder when calling the BFS link API. For example, the parent folder corresponding to the contract name already exists.| -| -51202 | Incoming version number or address is wrong| Appears when calling the BFS link interface. The version number cannot contain special characters and the contract address must also exist.| +| -53003 | Failed to create folder| An exception occurs when creating a folder when calling the BFS link API. For example, the parent folder corresponding to the contract name already exists| +| -51202 | Incoming version number or address is wrong| Appears when calling the BFS link interface. 
The version number cannot contain special characters and the contract address must also exist| | -53002 | File already exists| The file name created when calling the BFS writer interface already exists| | -53001 | File does not exist| The file corresponding to the absolute path does not exist when calling the BFS read interface| diff --git a/3.x/en/docs/contract_develop/c++_contract/precompiled_error_code.md b/3.x/en/docs/contract_develop/c++_contract/precompiled_error_code.md index 7f5093408..485d54ad5 100644 --- a/3.x/en/docs/contract_develop/c++_contract/precompiled_error_code.md +++ b/3.x/en/docs/contract_develop/c++_contract/precompiled_error_code.md @@ -4,30 +4,30 @@ Tags: "precompiled contract" "precompiled error code" "error message" "RetCode" --- -There are two main ways to pass errors in precompiled contracts, one is to return a specific numeric value on the interface, which is usually a negative number less than 0 at the time of the error.;The other is to throw an exception, when the status code of the receipt is 15, the user can take the initiative to parse the 'message' field in the receipt for further error analysis.。 +There are two main ways precompiled contracts report errors: one is to return a specific numeric value from the interface, usually a negative number on error; the other is to throw an exception, in which case the receipt status code is 15 and the user can parse the 'message' field of the receipt for further analysis。 -The following table mainly shows the error codes returned by the interface and the measures to be taken when the corresponding error codes are encountered.。 +The following table shows the error codes returned by the interfaces and the measures to take when each error code is encountered。 | error code| Error Message| Error message / workaround|
|--------|------------------------------|-------------------------------------------------------------------------------------------------------------------------| | 0 | Success | Success| | -53006 | Wrong file type| Appears when calling the BFS touch interface with the wrong file type| | -53005 | Wrong file path| This error occurs when calling the BFS interface to pass in the absolute path, the total length of the absolute path of the BFS cannot exceed 56, the total number of stages of the path cannot exceed 6, and it cannot contain special characters| -| -53003 | Failed to create folder| An exception occurs when creating a folder when calling the BFS link API. For example, the parent folder corresponding to the contract name already exists.| -| -51202 | Incoming version number or address is wrong| Appears when the BFS link interface is called, the version number cannot have'/'The contract address must also exist.| +| -53003 | Failed to create folder| An exception occurs when creating a folder when calling the BFS link API. 
For example, the parent folder corresponding to the contract name already exists| +| -51202 | Incoming version number or address is wrong| Appears when the BFS link interface is called; the version number cannot contain '/', and the contract address must also exist| | -53002 | File already exists| The file name created when calling the BFS writer interface already exists| | -53001 | File does not exist| The file corresponding to the absolute path does not exist when calling the BFS read interface| | -51800 | Ring signature verification failed| Appears when the verification interface of the ring signature precompiled contract fails; check whether the input parameters are correct| | -51700 | Group signature verification failed| Appears when the verification interface of the group signature precompiled contract fails; check whether the passed-in parameters are correct| -| -51508 | The key of Remove does not exist.| When the remove interface of the table precompiled contract is called, the remove key does not exist.| -| -51507 | Update key does not exist.| Appears when the update interface of the table precompiled contract is called. The update key does not exist.| -| -51506 | The insert key already exists.| Appears when the insert interface of the table precompiled contract is called, and the insert key already exists| -| -51103 | Node ID does not exist| Appears when calling the Consensus precompiled contract. The passed-in node id parameter does not exist.| +| -51508 | The key of Remove does not exist| When the remove interface of the table precompiled contract is called, the remove key does not exist| +| -51507 | Update key does not exist| Appears when the update interface of the table precompiled contract is called. The update key does not exist| +| -51506 | The insert key already exists| Appears when the insert interface of the table precompiled contract is called, and the insert key already exists| +| -51103 | Node ID does not exist| Appears when calling the Consensus precompiled contract.
The passed-in node id parameter does not exist| | -51102 | Wrong node weight value| Appears when calling the setWeight and addSealer interfaces of the Consensus precompiled contract. The set weight cannot be less than or equal to 0| | -51101 | Cannot delete last consensus node| Appears when the removeNode and addObserver interfaces of the Consensus precompiled contract are called, and the last consensus node in the chain cannot be deleted| | -51100 | Wrong node ID| Node ID must be a 128-length hexadecimal string| -| -51004 | ACL map decoding error for contract method| The ACL map decoding error of the permission method occurs when calling the contract permission precompilation contract, and it is necessary to consider whether the storage is written out.| -| -51003 | Wrong permission type| The permission type of the precompiled contract is displayed when the contract permission is called. Currently, only whitelist and blacklist types are supported.| +| -51004 | ACL map decoding error for contract method| The ACL map decoding error of the permission method occurs when calling the contract permission precompilation contract, and it is necessary to consider whether the storage is written out| +| -51003 | Wrong permission type| The permission type of the precompiled contract is displayed when the contract permission is called. Currently, only whitelist and blacklist types are supported| | -51002 | ACL type for contract method does not exist| The read interface of the precompiled contract will appear when calling the contract permission, the type does not exist generally because there is no setting, the default as all users can call| | -51001 | ACL for contract method does not exist| The read interface of the precompiled contract will appear when calling the contract permission. The ACL does not exist because it is not set. 
By default, all users can call it| -50105 | Open table error | Internal error, failed to open storage table| @@ -42,5 +42,5 @@ The following table mainly shows the error codes returned by the interface and t | -50003 | The field name of Table is too long| Appears when you call the createTable and appendColumn interfaces of a TableManager precompiled contract with a field name that exceeds 64| | -50002 | Table name is too long| Appears when the createTable interface of the TableManager precompiled contract is called and the table name exceeds 50| | -50001 | Table already exists| Appears when the createTable interface of the TableManager precompiled contract is called, the table name already exists| -| -50000 | No access| When the permission mode is enabled, the precompiled contract for direct access to System, Consensus, and AuthManager appears without direct access permission.| +| -50000 | No access| Appears when the permission mode is enabled and the System, Consensus, or AuthManager precompiled contract is accessed directly without permission| diff --git a/3.x/en/docs/contract_develop/c++_contract/use_crud_precompiled.md b/3.x/en/docs/contract_develop/c++_contract/use_crud_precompiled.md index 085930d5c..58fec56de 100644 --- a/3.x/en/docs/contract_develop/c++_contract/use_crud_precompiled.md +++ b/3.x/en/docs/contract_develop/c++_contract/use_crud_precompiled.md @@ -5,7 +5,7 @@ Tags: "Precompiled Contracts" "CRUD" "Blockchain Applications" ---------- -This article will introduce the CRUD storage capabilities of FISCO BCOS 3.x to help developers develop blockchain applications more efficiently and easily.。 +This article will introduce the CRUD storage capabilities of FISCO BCOS 3.x to help developers develop blockchain applications more efficiently and easily。 **Special note: Solidity contracts that use the storage precompiled contracts must be compiled with version 0.6.0 or higher and use ABIEncoderV2** @@ -15,10 +15,10 @@ There are currently two ways to use the
CRUD storage feature: the Table contract ### 1. Table Contracts -- The Solidity contract only needs to introduce the KVTable.sol abstract interface contract file officially provided by FISCO BCOS.。 -- webankblockchain-liquid (hereinafter referred to as WBC-Liquid) The contract declares the use of the Table interface before implementing the contract.。 +- The Solidity contract only needs to introduce the KVTable.sol abstract interface contract file officially provided by FISCO BCOS。 +- A webankblockchain-liquid (hereinafter referred to as WBC-Liquid) contract declares the Table interface before implementing the contract。 -Table contains a smart contract interface dedicated to distributed storage, which is implemented on blockchain nodes.。TableManager can create tables and add table fields. Table can be used as a table CRUD operation.。The following are introduced separately。 +Table contains a smart contract interface dedicated to distributed storage, which is implemented on blockchain nodes。TableManager can create tables and add table fields; Table performs CRUD operations on a table。The two are introduced separately below。 #### 1.1 TableManager contract interface @@ -26,9 +26,9 @@ Used to create a table and open the table. The fixed contract addresses are '0x1 | Interface| Function| Parameters| Return value| |---------------------------------|------------------|----------------------------|---------------------------------------------| -| createTable(string ,TableInfo) | Create Table| Table name, TableInfo structure| The error code (int32) is returned. For details, see the following table.| -| appendColumns(string, string[]) | Add Table Field| table name, array of field names| The error code (int32) is returned. For details, see the following table.| -| openTable(string) | Get Table Address| Table name. This interface is only used for Solidity| Returns the real address of the table.
If it does not exist, 0x0 is returned.| +| createTable(string ,TableInfo) | Create Table| Table name, TableInfo structure| The error code (int32) is returned. For details, see the following table| +| appendColumns(string, string[]) | Add Table Field| table name, array of field names| The error code (int32) is returned. For details, see the following table| +| openTable(string) | Get Table Address| Table name. This interface is only used for Solidity| Returns the real address of the table. If it does not exist, 0x0 is returned| | desc(string) | Get Table Information Structure| Table Name| Returns the TableInfo structure| #### 1.2 Table Contracts @@ -40,11 +40,11 @@ Used to access table data. The interface is as follows: | select(string) | Get Single Row Value| Primary Key Value| Returns an Entry structure containing all field values in a single row| | select(Condition[], Limit) | Get multi-row values| Primary key filter criteria, limit on the number of rows returned| Returns an array of Entry structures containing all field values for multiple rows| | count(Condition[]) | Get number of matching rows| Primary Key Filter Criteria| Returns the number of all rows that meet the criteria| -| insert(Entry) | Set Single Line| Entry structure, containing all values of the current row| The error code (int32) is returned. If the error code is successful, 1 is returned. See the following table for other error codes.| -| update(string, UpdateFiled[]) | Update single line| primary key, updating field values| The error code (int32) is returned. If the error code is successful, 1 is returned. See the following table for other error codes.| -| update(Condition[], Limit, UpdateFiled[]) | Update multiple rows| Primary key filter, return row limit, update field value| The error code (int32) is returned. The number of updated rows is returned when the error code is successful. 
See the following table for details.| -| remove(string) | Delete Single Row Value| Primary Key Value| The error code (int32) is returned. If the error code is successful, 1 is returned. See the following table for other error codes.| -| remove(Condition[], Limit) | Delete multi-row values| Primary key filter criteria, limit on the number of rows returned| The error code (int32) is returned. If the error code succeeds, the number of deleted rows is returned. See the following table for other error codes.| +| insert(Entry) | Set Single Line| Entry structure, containing all values of the current row| The error code (int32) is returned. If the error code is successful, 1 is returned. See the following table for other error codes| +| update(string, UpdateFiled[]) | Update single line| primary key, updating field values| The error code (int32) is returned. If the error code is successful, 1 is returned. See the following table for other error codes| +| update(Condition[], Limit, UpdateFiled[]) | Update multiple rows| Primary key filter, return row limit, update field value| The error code (int32) is returned. The number of updated rows is returned when the error code is successful. See the following table for details| +| remove(string) | Delete Single Row Value| Primary Key Value| The error code (int32) is returned. If the error code is successful, 1 is returned. See the following table for other error codes| +| remove(Condition[], Limit) | Delete multi-row values| Primary key filter criteria, limit on the number of rows returned| The error code (int32) is returned. If the error code succeeds, the number of deleted rows is returned. 
See the following table for other error codes| The interface returns an error code: @@ -64,7 +64,7 @@ The interface returns an error code: | -51506 | Primary key does not exist at insert time| | Others | Other errors encountered while creating| -With the above understanding of the KVTable abstract interface contract, the development of the KVTable contract can now be formally carried out.。 +With the above understanding of the KVTable abstract interface contract, the development of the KVTable contract can now be formally carried out。 ### 2. Solidity contract uses Table @@ -89,7 +89,7 @@ TableManager constant tm = TableManager(address(0x1002)); Table table; string constant TABLE_NAME = "t_test"; constructor () public{ - / / Create the t _ test table. The primary key of the table is id, and the other fields are name and age. + // Create the t_test table. The primary key of the table is id, and the other fields are name and age string[] memory columnNames = new string[](2); columnNames[0] = "name"; columnNames[1] = "age"; @@ -103,7 +103,7 @@ constructor () public{ } ``` -**Note:** This step is optional: for example, if the new contract only reads and writes the table created by the old contract, you do not need to create the table.。 +**Note:** This step is optional: for example, if the new contract only reads and writes the table created by the old contract, you do not need to create the table。 #### 2.3 CRUD read and write operations for tables @@ -164,11 +164,11 @@ function select(string memory id) public view returns (string memory,string memo #### 2.4 Use Condition to read and write multiple rows of data -Users can read and write multiple rows of data by using the interface provided by Table with the Condition parameter.。 +Users can read and write multiple rows of data by using the interface provided by Table with the Condition parameter。 **Note:** Currently, Condition only supports range filtering for primary key fields。 -The core code for reading multiple rows of
data is as follows, similar to writing multiple rows of data. +The core code for reading multiple rows of data is as follows, similar to writing multiple rows of data。 ```solidity function selectMore(string memory gt_id) @@ -185,11 +185,11 @@ } ``` -### 3. WBC-The Liquid contract uses the Table interface +### 3. The WBC-Liquid contract uses the Table interface #### 3.1 Declaring the Table interface -at WBC-Declare the interface of the KVTable before using the interface in the Liquid contract。 +Declare the KVTable interface before using it in the WBC-Liquid contract。 ```rust #![cfg_attr(not(feature = "std"), no_std)] @@ -268,9 +268,9 @@ mod table { #### 3.2 WBC-Liquid Create Table -Available at WBC-The logic for creating a table is implemented in the constructor of Liquid. The address of the TableManager introduced here is the BFS path '/ sys / table _ manager'. Note that WBC-Difference between Liquid and Solidity。 +The logic for creating a table can be implemented in the constructor of WBC-Liquid. The address of the TableManager introduced here is the BFS path '/sys/table_manager'. Note the difference between WBC-Liquid and Solidity。 -The principle of creating a table is similar to that of Solidity, so I won't repeat it again.。 +The principle of creating a table is similar to that of Solidity, so it is not repeated here。 ```rust pub fn new(&mut self) { @@ -362,4 +362,4 @@ pub fn select(&self, id: String) -> (String, String) { ### 4. SDK TableCRUDService interface -FISCO BCOS 3.x SDK provides TableCRUDService data connection ports.
These interfaces are implemented by calling a precompiled KVTable contract built into the blockchain to read and write user tables.。The Java SDK TableCRUDService is implemented in the org.fisco.bcos.sdk.v3.contract.precompiled.crud.TableCRUDService class。The call to the write interface will generate the same transaction as the call to the Table contract interface, which will not be stored until the consensus node consensus is consistent.。 +FISCO BCOS 3.x SDK provides the TableCRUDService interfaces. These interfaces are implemented by calling a precompiled KVTable contract built into the blockchain to read and write user tables。The Java SDK TableCRUDService is implemented in the org.fisco.bcos.sdk.v3.contract.precompiled.crud.TableCRUDService class。Calling a write interface generates the same transaction as calling the Table contract interface, and the data is not stored until the consensus nodes reach consensus。 diff --git a/3.x/en/docs/contract_develop/c++_contract/use_group_ring_sig.md b/3.x/en/docs/contract_develop/c++_contract/use_group_ring_sig.md index 9ed4cdb6e..efe5cca3c 100644 --- a/3.x/en/docs/contract_develop/c++_contract/use_group_ring_sig.md +++ b/3.x/en/docs/contract_develop/c++_contract/use_group_ring_sig.md @@ -3,26 +3,26 @@ Tags: "Privacy Contract" "Privacy Protection" "Contract Development" "Ring Signature" " ---- -Privacy protection is a major technical challenge for the alliance chain。In order to protect on-chain data, protect the privacy of alliance members, and ensure the effectiveness of supervision, FISCO BCOS [pre-compiled contract](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/precompiled/index.html)The form of the integrated group / ring signature verification function, providing a variety of privacy protection means.。 +Privacy protection is a major technical challenge for consortium chains。In order to protect on-chain data, protect the privacy of consortium members, and ensure the effectiveness
of supervision, FISCO BCOS integrates the group / ring signature verification function in the form of a [pre-compiled contract](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/precompiled/index.html), providing a variety of privacy protection means。 -The document briefly introduces the group / ring signature algorithm and related application scenarios and provides a reference for the call method.。 +This document briefly introduces the group / ring signature algorithms and their application scenarios, and provides a reference for invoking them。 ## 1 Introduction to the algorithm **group signature** -group signature(Group Signature)It is a relatively anonymous digital signature scheme that protects the identity of the signer, where the user can sign the message in place of their group, and the verifier can verify that the signature is valid, but does not know which group member the signature belongs to.。At the same time, users cannot abuse this anonymity because the group administrator can open the signature through the group master's private key, exposing the signature's attribution information.。Features of a group signature include: +A group signature (Group Signature) is a relatively anonymous digital signature scheme that protects the identity of the signer: a user can sign a message on behalf of the group, and a verifier can confirm that the signature is valid without knowing which group member produced it。At the same time, users cannot abuse this anonymity, because the group administrator can open the signature with the group master private key and expose who produced it。Features of a group signature include: -- Anonymity: Group members use group parameters to generate signatures, others can only verify the validity of the signature, and know that the signer belongs to the group through the signature, but cannot obtain the signer's identity information.; -- Non-forgeability: only group members can
generate valid verifiable group signatures; -- Non-linkability: Given two signatures, it is impossible to tell if they are from the same signer; -- Traceability: In the case of regulatory intervention, group owners can obtain the signer's identity by signing.。 +- Anonymity: Group members use group parameters to generate signatures; others can only verify the validity of a signature and learn that the signer belongs to the group, but cannot obtain the signer's identity information; +- Non-forgery: only group members can generate valid verifiable group signatures; +- Unlinkability: Given two signatures, it is impossible to tell whether they are from the same signer; +- Traceability: In the case of regulatory intervention, the group owner can obtain the identity of the signer by opening the signature。 **ring signature** -ring signature(Ring Signature)Is a special group signature scheme, but with complete anonymity, that is, there is no administrator role, all members can actively join the ring, and the signature cannot be opened.。The characteristics of ring signatures include: +A ring signature (Ring Signature) is a special group signature scheme with complete anonymity: there is no administrator role, all members can join the ring on their own, and the signature cannot be opened。The characteristics of ring signatures include: -- Non-forgery: No other member of the ring can forge a true signer's signature; +- Non-forgery: other members of the ring cannot forge the signature of the real signer; - Complete anonymity: no group owner, only ring members, others can only verify the validity of the ring signature, but no one can obtain the signer's identity information。 ## 2 Functional components @@ -31,13 +31,13 @@ The FISCO BCOS group / ring signature module provides the following functional c - Group / Ring [Signature Library](https://github.com/FISCO-BCOS/group-signature-lib), provides complete C++ interfaces for the group / ring signature algorithms -- Group / Ring
Signature Precompiled Contracts: Provide group / ring signature verification interface。 +- Group / ring signature pre-compiled contracts: Provide group / ring signature verification interface。 FISCO BCOS provides users with examples of group / ring signature development, including: - Group / ring signature server: Provides complete group / ring signed RPC services。[GitHub source code](https://github.com/FISCO-BCOS/group-signature-server)[Gitee source code](https://gitee.com/FISCO-BCOS/group-signature-server) -- Group / Ring Signing Client: Call the RPC service to sign the data, and provide signature on the chain and on-chain verification and other functions.。[GitHub source code](https://github.com/FISCO-BCOS/group-signature-client)[Gitee source code](https://gitee.com/FISCO-BCOS/group-signature-client) +- Group / Ring Signing Client: Call the RPC service to sign the data, and provide signature on the chain and on-chain verification and other functions。[GitHub source code](https://github.com/FISCO-BCOS/group-signature-client)[Gitee source code](https://gitee.com/FISCO-BCOS/group-signature-client) The sample framework is shown in the following figure. Please refer to [Client Guide Github Link](https://github.com/FISCO-BCOS/group-signature-client)or [Client Guide Gitee Link](https://gitee.com/FISCO-BCOS/group-signature-client)。 @@ -45,14 +45,14 @@ The sample framework is shown in the following figure. 
Please refer to [Client Guide Github Link](https://github.com/FISCO-BCOS/group-signature-client) or [Client Guide Gitee Link](https://gitee.com/FISCO-BCOS/group-signature-client)。 @@ -45,14 +45,14 @@ ## 3 Application scenarios -Businesses with signer identity concealment requirements can use this module to achieve related functions.。The signer signs the data by calling the group / ring signature library, links the signature, and the business contract verifies the signature by calling the group / ring signature precompiled contract, and returns the verification result to the business layer.。If it is a group signature, the supervisor can also open the specified signature data to obtain the signer's identity.。The specific process is shown in the following figure: +Businesses that need to conceal the signer's identity can use this module to implement related functions。The signer calls the group / ring signature library to sign the data and puts the signature on the chain; the business contract verifies the signature by calling the group / ring signature precompiled contract and returns the verification result to the business layer。If it is a group signature, the supervisor can also open the specified signature data to obtain the signer's identity。The specific process is shown in the following figure: ![](../../../images/privacy/group_sig.jpg) -Due to its natural anonymity, group / ring signatures have a wide range of applications in scenarios where the identity of participants needs to be concealed, such as anonymous voting, anonymous auctions, anonymous auctions, etc., and can even be used to implement
anonymous transfers in the blockchain UTXO model。At the same time, because the group signature is traceable, it can be used in scenarios that require regulatory intervention, and the regulator acts as the group owner or entrusts the group owner to reveal the identity of the signer.。 +Due to their natural anonymity, group / ring signatures have a wide range of applications in scenarios where the identity of participants needs to be concealed, such as anonymous voting and anonymous auctions, and can even be used to implement anonymous transfers in the blockchain UTXO model。At the same time, because the group signature is traceable, it can be used in scenarios that require regulatory intervention: the regulator acts as the group owner, or entrusts the group owner, to reveal the identity of the signer。 -## 4. Pre-compiled contract interface. +## 4. Pre-compiled contract interface **The precompiled contract addresses for group signature verification and ring signature verification are as follows:** @@ -64,7 +64,7 @@ Due to its natural anonymity, group / ring signatures have a wide range of appli **The group / ring signature verification interfaces are as follows:** -- group signature +- Group Signature ```cpp // GroupSigPrecompiled.sol @@ -76,14 +76,14 @@ Due to its natural anonymity, group / ring signatures have a wide range of appli * @tparam message: Plaintext message corresponding to group signature * @param gpkInfo: Group Information * @param paramInfo: Group meta information - * @return int: Error code, 0 means no exception occurred; -51700 indicates group signature verification failed; -50101 indicates that an illegal group signature verification interface was called. - * bool: Group signature verification result. False indicates that the verification fails. True indicates that the verification succeeds. + * @return int: Error code, 0 means no exception occurred; -51700 indicates group signature verification failed; -50101 indicates that an illegal group signature verification interface was called + * bool: Group signature verification result. False indicates that the verification fails.
True indicates that the verification succeeds */ function groupSigVerify(string signature, string message, string gpkInfo, string paramInfo) public constant returns(int, bool); } ``` -- ring signature +- Ring Signature ```cpp // RingSigPrecompiled.sol @@ -92,10 +92,10 @@ Due to its natural anonymity, group / ring signatures have a wide range of appli /** * Ring Signature Verification Interface * @tparam signature: ring signature - * @tparam message: The plaintext message corresponding to the ring signature. + * @tparam message: The plaintext message corresponding to the ring signature * @param paramInfo: Ring Information - * @return int: Error code, 0 means no exception occurred; -51800 indicates that ring signature verification failed; -50101 indicates that an illegal ring signature verification interface was called. - * bool: The result of ring signature verification. False indicates that the verification fails. True indicates that the verification succeeds. + * @return int: Error code, 0 means no exception occurred; -51800 indicates that ring signature verification failed; -50101 indicates that an illegal ring signature verification interface was called + * bool: The result of ring signature verification. False indicates that the verification fails. 
True indicates that the verification succeeds */ function ringSigVerify(string signature, string message, string paramInfo) public constant returns(int, bool); } diff --git a/3.x/en/docs/contract_develop/c++_contract/use_kv_precompiled.md b/3.x/en/docs/contract_develop/c++_contract/use_kv_precompiled.md index ed1780388..d62b593ae 100644 --- a/3.x/en/docs/contract_develop/c++_contract/use_kv_precompiled.md +++ b/3.x/en/docs/contract_develop/c++_contract/use_kv_precompiled.md @@ -4,20 +4,20 @@ Tags: "Precompiled Contracts" "CRUD" "" Blockchain Applications "" ---------- -This article will introduce the KV storage function of FISCO BCOS 3.x to help developers develop block chain applications more efficiently and easily.。 +This article will introduce the KV storage function of FISCO BCOS 3.x to help developers develop block chain applications more efficiently and easily。 -**Special note: Solidity contracts that use KV to store precompiled contracts must be higher than version 0.6.0 and use ABIEncoderV2.** +**Special note: Solidity contracts that use KV to store precompiled contracts must be higher than version 0.6.0 and use ABIEncoderV2** ## KV storage usage -Currently, you can use the KV storage function in two ways: the KVTable contract and the Java SDK KVTable Service interface.。 +Currently, you can use the KV storage function in two ways: the KVTable contract and the Java SDK KVTable Service interface。 ### 1. 
KVTable contract -- The Solidity contract only needs to introduce the Table.sol abstract interface contract file officially provided by FISCO BCOS.。 -- webankblockchain-liquid (hereinafter referred to as WBC-Liquid) The contract declares the KVTable interface before implementing the contract.。 +- The Solidity contract only needs to introduce the Table.sol abstract interface contract file provided by FISCO BCOS。 +- A webankblockchain-liquid (hereinafter referred to as WBC-Liquid) contract declares the KVTable interface before implementing the contract。 -Table contains a dedicated smart contract interface for distributed storage. The interface is implemented on a blockchain node. TableManager can create a dedicated KV table, and KVTable can be used as a table for get / set operations.。The following are introduced separately。 +Table contains a dedicated smart contract interface for distributed storage. The interface is implemented on a blockchain node. TableManager can create a dedicated KV table, and KVTable performs get / set operations on a table。The two are introduced separately below。 #### 1.1 TableManager contract interface @@ -25,8 +25,8 @@ Used to create a KV table and open the KV table. The fixed contract addresses ar | Interface| Function| Parameters| Return value| |-------------------------------|------------|--------------------------------------------|---------------------------------------------| -| createKVTable(string ,string) | Create Table| Table name, primary key name (currently only a single primary key is supported), field name| The error code (int32) is returned. For details, see the following table.| -| openTable(string) | Get Table Address| Table name. This interface is only used for Solidity| Returns the real address of the table.
If it does not exist, 0x0 is returned.| +| createKVTable(string, string) | Create Table| Table name, primary key name (currently only a single primary key is supported), field name| The error code (int32) is returned. For details, see the following table| +| openTable(string) | Get Table Address| Table name. This interface is only used for Solidity| Returns the real address of the table. If it does not exist, 0x0 is returned| #### 1.2 The KVTable Contract @@ -35,7 +35,7 @@ Used to access table data. The interface is as follows: | Interface| Function| Parameters| Return value| |--------------------|--------|--------------|-----------------------------------------------------------| | get(string) | Get Value| primary key| Return bool value and string. If the query fails, the first return value will be false| -| set(string,string) | Set value| Primary key, field value| The error code (int32) is returned. If the error code is successful, 1 is returned. See the following table for other error codes.| +| set(string,string) | Set value| Primary key, field value| The error code (int32) is returned. If the error code is successful, 1 is returned. See the following table for other error codes| The interface returns an error code: @@ -52,7 +52,7 @@ The interface returns an error code: | -50008 | Illegal character in field| | Others | Other errors encountered while creating| -With the above understanding of the KVTable abstract interface contract, the development of the KVTable contract can now be formally carried out.。 +With the above understanding of the KVTable abstract interface contract, the development of the KVTable contract can now be formally carried out。 ### 2. Solidity contract uses KVTable @@ -80,7 +80,7 @@ constructor () public{ // Create a TableManager object whose fixed address on the blockchain is 0x1002 tm = TableManager(address(0x1002)); - / / Create the t _ kv _ test table. The primary key of the table is id, and other fields are item _ name.
+ // Create the t_kv_test table. The primary key is id; the other field is item_name
 tm.createKVTable(tableName, "id", "item_name");

 // Get the real address, which is stored in the contract
@@ -89,7 +89,7 @@ constructor () public{
 }
 ```

-**Note:** This step is optional: for example, if the new contract only reads and writes the table created by the old contract, you do not need to create the table.。
+**Note:** This step is optional. For example, if the new contract only reads and writes a table created by an old contract, it does not need to create the table.

#### 2.3 KV read and write operation for the table

@@ -117,11 +117,11 @@ function get(string memory id) public view returns (bool, string memory) {
 }
 ```

-### 3. WBC-The Liquid contract uses the KVTable interface
+### 3. The WBC-Liquid contract uses the KVTable interface

#### 3.1 Declaring the KVTable interface

-at WBC-Declare the interface of the KVTable before using the interface in the Liquid contract。
+Declare the KVTable interface before using it in the WBC-Liquid contract.

```rust
#![cfg_attr(not(feature = "std"), no_std)]
@@ -165,9 +165,9 @@ mod kv_table {

#### 3.2 WBC-Liquid Create Table

-Available at WBC-The logic for creating a table is implemented in the constructor of Liquid. The address of the TableManager introduced here is the BFS path '/ sys / table _ manager'. Note that WBC-Difference between Liquid and Solidity。
+The table-creation logic can be implemented in the constructor of WBC-Liquid. The TableManager is referenced here by its BFS path '/sys/table_manager'; note this difference between WBC-Liquid and Solidity.

-The principle of creating a table is similar to that of Solidity, so I won't repeat it again.。
+The principle of creating a table is similar to that of Solidity, so it is not repeated here.

```rust
pub fn new(&mut self) {
@@ -214,7 +214,7 @@ pub fn get(&self, id: String) -> (bool, String) {

### 4.
SDK KVTable Service interface

-FISCO BCOS 3.x SDK provides KVTable Service data connection ports. These interfaces are implemented by calling a precompiled KVTable contract built into the blockchain to read and write user tables.。The Java SDK KVTable Service is implemented in the org.fisco.bcos.sdk.v3.contract.precompiled.crud.KVTableService class. Its interfaces are as follows:
+The FISCO BCOS 3.x SDKs provide KVTable Service interfaces, implemented by calling the precompiled KVTable contract built into the blockchain to read and write user tables. The Java SDK's KVTable Service is implemented in the org.fisco.bcos.sdk.v3.contract.precompiled.crud.KVTableService class; its interfaces are as follows:

| Interface| Function| Parameters| Return value|
|-------------------------------------|--------------|----------------------|--------------------------|
@@ -223,4 +223,4 @@ FISCO BCOS 3.x SDK provides KVTable Service data connection ports. These interfa
| get(String, String) | Read Data| Table name, primary key name| String |
| desc(String) | Query table information| Table Name| KeyField and valueField for tables|

-The call to the write interface will generate the equivalent transaction to the call to the KV contract interface, which will not be stored until the consensus node consensus is consistent.。
+A call to a write interface generates a transaction equivalent to calling the KV contract interface; the data is not stored until the consensus nodes reach consensus.

diff --git a/3.x/en/docs/contract_develop/c++_contract/use_precompiled.md b/3.x/en/docs/contract_develop/c++_contract/use_precompiled.md
index bc7dfafd1..7d9e72617 100644
--- a/3.x/en/docs/contract_develop/c++_contract/use_precompiled.md
+++ b/3.x/en/docs/contract_develop/c++_contract/use_precompiled.md
@@ -4,13 +4,13 @@

Tags: "Precompiled Contracts" "BFS" "CRUD"
---

-FISCO BCOS 3.0 follows the FISCO BCOS 2.0 version of the precompiled contract。In the future,
we will also try to abstract the existing typical business scenarios and develop them into pre-compiled contract templates as the basic capability provided by the underlying layer to help users use FISCO BCOS in their business faster and more conveniently.。
+FISCO BCOS 3.0 carries forward the precompiled contract design of FISCO BCOS 2.0. In the future, we will also try to abstract typical business scenarios into precompiled contract templates, provided as a basic capability of the underlying layer, to help users apply FISCO BCOS to their business faster and more conveniently.

The principles of precompiled contracts are similar to those of FISCO BCOS 2.0+; to study them, users can refer to: [FISCO BCOS Precompiled Contract Architecture](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/precompiled_contract.html?highlight=%E9%A2%84%E7%BC%96%E8%AF%91#fisco-bcos).

## Advantages of Precompiled Contracts

-**Access to distributed storage interfaces**Based on this framework, users can access the local DB storage state and implement any logic they need.。
+**Access to distributed storage interfaces**: based on this framework, users can access the local DB storage state and implement any logic they need.

**Better performance**: since the implementation is C++ code compiled into the underlying layer, it executes without entering the EVM and achieves better performance.

@@ -18,7 +18,7 @@ Principles of Precompiled Contracts and FISCO BCOS 2.0+Similar to the version, u

## FISCO BCOS 3.x Precompiled Contracts and Addresses

-Currently, the Solidity contract only supports the address type of 20 bytes as the calling object, while Liquid supports the address of the string to call the contract, so the address of the precompiled contract is divided into two versions: Solidity and Liquid.。
+Currently, a Solidity contract can only call contracts through the 20-byte address type, while Liquid supports
string addresses for calling contracts, so precompiled contract addresses come in two versions: Solidity and Liquid.

Addresses in this table are for Solidity contracts only.

@@ -36,7 +36,7 @@ Addresses in this table are for Solidity contracts only。

| 0x5005 | RingSignPrecompield | Ring Signature System Contract|
| 0x5100 | ZKPPrecompiled | ZKP System Contract|

-The BFS path of the following table is only used for webankblockchain-liquid (hereinafter referred to as WBC-Liquid) contract。
+The BFS paths in the following table are only used for webankblockchain-liquid (WBC-Liquid) contracts.

| BFS Path| Contract| Description|
| :----------------- | :---------------------- | :------------------------- |

@@ -53,11 +53,11 @@ The BFS path of the following table is only used for webankblockchain-liquid (he

## How to use the FISCO BCOS precompiled contract interface

-The steps for a smart contract to invoke a precompiled contract are similar to those for invoking a normal contract, as follows.
+The steps for a smart contract to invoke a precompiled contract are similar to those for invoking a normal contract, as follows:

-- Introducing an interface declaration: Introducing a contract file for a precompiled contract, or declaring an interface in the same smart contract file;
-- Specify the contract address: According to the address table of the precompiled contract, the corresponding address can be used when initializing the object.;
-- Call the object interface: After initializing the object, use the object to call the method interface.;
+- Introduce the interface declaration: import the precompiled contract's contract file, or declare the interface in the same smart contract file;
+- Specify the contract address: look up the precompiled contract address table and use the corresponding address when initializing the object;
+- Call the object interface: after initializing the object, use it to call the method interfaces;

The following uses the Table contract as an example to call the Table interface:

@@ -79,7 +79,7 @@ The following uses the Table contract as an example to call the Table interface:

 TableManager constant tm = TableManager(address(0x1002));
 ```

-3. Call the object interface.
+3. Call the object interface

Call the initialized 'TableManager' object interface to create a 'Table' contract object

@@ -87,7 +87,7 @@ The following uses the Table contract as an example to call the Table interface:
 Table table;
 string constant TABLE_NAME = "t_test";
 constructor () public{
- / / Create the t _ test table. The primary key of the table is id, and the other fields are name and age.
+ // Create the t_test table.
The primary key is id, and the other fields are name and age
 string[] memory columnNames = new string[](2);
 columnNames[0] = "name";
 columnNames[1] = "age";
diff --git a/3.x/en/docs/contract_develop/opcode_diff.md b/3.x/en/docs/contract_develop/opcode_diff.md
index 1fc47ea79..4eced87eb 100644
--- a/3.x/en/docs/contract_develop/opcode_diff.md
+++ b/3.x/en/docs/contract_develop/opcode_diff.md
@@ -2,8 +2,8 @@

This document describes the differences in execution behavior between FISCO BCOS and Ethereum from the perspective of OPCODE:

-1. The basic operating instructions are the same as Ethereum.
-2. The password calculation OPCODE of the state secret version is the state secret algorithm.
+1. The basic operating instructions are the same as Ethereum's
+2. In the SM (Chinese national cryptography) build, the cryptographic OPCODEs use the SM algorithms
3. Consortium chain design
   * Balance related OPCODE needs to be opened with a switch
   * POW, POS related OPCODE default return 0
@@ -24,36 +24,36 @@ FISCO BCOS belongs to [Type 3](https://vitalik.eth.limo/general/2022/08/04/zkevm

## OPCODE Compatibility Description List

-(*OPCODE not given in the table is supported by default, no difference-, half difference ○, full difference ●)
+(*OPCODEs not listed in the table are supported by default; no difference: -, partial difference: ○, full difference: ●)

| Stack | Name | Differences| Description| Supported Versions|
| :---: | :----------- | ---- | ------------------------------------------------------------ | ------------------ |
-| 20 | KECCAK256 | ○ | The type of national secret is SM3, and the non-national secret is unchanged.| |
+| 20 | KECCAK256 | ○ | In the SM build the hash is SM3; otherwise unchanged| |
| 31 | BALANCE | ○ | Effective after feature_balance is enabled| 3.6.0+ |
| 34 | CALLVALUE | ○ | Effective after feature_balance is enabled| 3.6.0+ |
-| 38 | CODESIZE | ○ | FISCO BCOS's precompiled contract returns 1 < br > Ethereum's original precompiled contract returns 0| 3.1.0+ |
+| 38 | CODESIZE | ○ | FISCO BCOS's precompiled contracts return 1<br>Ethereum's original precompiled contracts return 0| 3.1.0+ |
| 39 | CODECOPY | - | | 3.2.4+<br>3.6.0+ |
-| 3A | GASPRICE | ○ | It takes effect after feature _ balance _ policy1 is enabled, and is configured in the tx _ gas _ price system configuration item.| 3.6.0+ |
+| 3A | GASPRICE | ○ | Takes effect after feature_balance_policy1 is enabled; configured via the tx_gas_price system configuration item| 3.6.0+ |
| 3B | EXTCODESIZE | - | | 3.1.0+ |
| 3C | EXTCODECOPY | - | | 3.2.4+<br>3.6.0+ |
| 3F | EXTCODEHASH | - | | 3.1.0+ |
| 40 | BLOCKHASH | - | | 3.1.0+ |
| 41 | COINBASE | ● | Returns 0| |
-| 42 | TIMESTAMP | ○ | Returns the block timestamp in milliseconds < br > (in seconds in Ethereum).| |
+| 42 | TIMESTAMP | ○ | Returns the block timestamp in milliseconds<br>(in seconds in Ethereum)| |
| 44 | PREVRANDAO | ● | Returns 0| |
-| 45 | GASLIMIT | ○ | Returns the upper limit of gas for a single transaction through system contract configuration < br > (in Ethereum, the upper limit of gas for a block)| |
+| 45 | GASLIMIT | ○ | Returns the gas limit of a single transaction, set via system contract configuration<br>(in Ethereum, the gas limit of a block)| |
| 46 | CHAINID | ● | Returns 0| |
| 47 | SELFBALANCE | ○ | Effective after feature_balance is enabled| 3.6.0+ |
| 48 | BASEFEE | ● | Returns 0| |
| F2 | CALLCODE | - | | 3.1.0+ |
| F4 | DELEGATECALL | - | | 3.1.0+ |
-| F5 | CREATE2 | ○ | The calculation of the state secret calculates the address according to the SM3 algorithm.| 3.2.4+<br>3.6.0+ |
+| F5 | CREATE2 | ○ | In the SM build the address is computed with the SM3 algorithm| 3.2.4+<br>3.6.0+ |
| FA | STATICCALL | - | Historical versions behave the same as CALL; supported starting from 3.2.4 and 3.6.0| 3.2.4+<br>3.6.0+ |
-| FF | SELFDESTRUCT | ○ | 3.1.0+Support Contract Destruction < br > 3.6.0+Support to destroy contract and recycle balance after opening featrue _ balance| 3.6.0+ |
+| FF | SELFDESTRUCT | ○ | 3.1.0+: supports contract destruction<br>3.6.0+: after feature_balance is enabled, also recycles the contract's balance on destruction| 3.6.0+ |

## Appendix: Compatibility Classification

-Article [The different types of ZK-EVMs》](https://vitalik.eth.limo/general/2022/08/04/zkevm.html)Ethereum compatibility is ranked in
+The article [The different types of ZK-EVMs](https://vitalik.eth.limo/general/2022/08/04/zkevm.html) ranks Ethereum compatibility as follows:

* Type 1:fully Ethereum-equivalent
  * Fully compatible with Ethereum, including RPC interfaces, EVM, and execution environments outside of EVM
diff --git a/3.x/en/docs/contract_develop/solidity_develop.md b/3.x/en/docs/contract_develop/solidity_develop.md
index d3d675f09..9fbe1d713 100644
--- a/3.x/en/docs/contract_develop/solidity_develop.md
+++ b/3.x/en/docs/contract_develop/solidity_develop.md
@@ -4,19 +4,19 @@ Tags: "Solidity" "Contract Development"
----

FISCO BCOS supports implementing smart contracts in several ways

-* [Solidity](https://solidity.readthedocs.io/en/latest/)The contract programming language used in the Ethereum ecosystem, FISCO BCOS expands a series of functions for the alliance chain, and is the most commonly used way to develop smart contracts on FISCO BCOS.。
+* [Solidity](https://solidity.readthedocs.io/en/latest/): the contract programming language of the Ethereum ecosystem. FISCO BCOS extends it with a series of functions for consortium chains; it is the most common way to develop smart contracts on FISCO BCOS.
* [Pre-Compiled Contract](./c++_contract/add_precompiled_impl.md): customized smart contracts built directly into blockchain nodes and implemented in C++; they can call the node's internal interfaces directly and suit complex scenarios, but have a higher barrier to entry.
-* [WBC-Liquid](./Liquid_develop.md)The Rust-based smart contract programming language developed by the micro-blockchain, with the help of Rust language features, can achieve more powerful programming functions than the Solidity language.。
+*
[WBC-Liquid](./Liquid_develop.md): a Rust-based smart contract programming language developed by WeBank Blockchain. Leveraging Rust language features, it can achieve more powerful programming functions than Solidity.

Contract Development of Solidity Language on FISCO BCOS

-- 基础的
-  - The syntax is the same as that of Ethereum. For more information, see [Solidity official document](https://solidity.readthedocs.io/en/latest/)carry on learning。
+- Basic
+  - The syntax is the same as Ethereum's; see the [Solidity official documentation](https://solidity.readthedocs.io/en/latest/) to learn more.

-- Alliance Chain Expansion Oriented
-  - Write Solidity code to call the [built-in contract interface](./c++_contract/use_precompiled.html#fisco-bcos-3-x)CRUD contract, KVTable contract, group ring signature, system management, etc.
+- For consortium chain extension
+  - Write Solidity code that calls the [built-in contract interfaces](./c++_contract/use_precompiled.html#fisco-bcos-3-x): CRUD contract, KVTable contract, group/ring signatures, system management, etc.

```eval_rst
diff --git a/3.x/en/docs/design/amop_protocol.md b/3.x/en/docs/design/amop_protocol.md
index 013ae6eb6..fdc406372 100644
--- a/3.x/en/docs/design/amop_protocol.md
+++ b/3.x/en/docs/design/amop_protocol.md
@@ -1,4 +1,4 @@

-# 20. On-chain information transfer protocol AMOP.
+# 20. On-chain information transfer protocol AMOP

Tags: "AMOP" "Messenger on Chain" "Private Topics" "Certification Process" "

@@ -6,27 +6,27 @@

## Introduction to On-Chain Messenger Protocol

The Advanced Messages Onchain Protocol (AMOP) system is designed to provide a secure and efficient message channel for the consortium chain. All institutions in the consortium chain can use AMOP to communicate as long as they deploy blockchain nodes, whether they are consensus nodes or observation nodes.
AMOP has the following advantages:

-- Real-time: AMOP messages do not rely on blockchain transactions and consensus. Messages are transmitted between nodes in real time with a latency of milliseconds.。
-- Reliable: When AMOP messages are transmitted, all feasible links in the blockchain network are automatically searched for communication, and as long as at least one link is available between the sending and receiving parties, the message is guaranteed to be reachable.。
-- Efficient: AMOP message structure is simple, efficient processing logic, only a small amount of cpu occupation, can make full use of network bandwidth。
-- Security: All communication links of AMOP use SSL encryption, the encryption algorithm can be configured, support authentication mechanism。
-- Easy to use: when using AMOP, no need to do any additional configuration in the SDK。
+- Real-time: AMOP messages do not rely on blockchain transactions or consensus; messages are transmitted between nodes in real time with millisecond latency.
+- Reliable: when an AMOP message is transmitted, all feasible links in the blockchain network are searched automatically; as long as at least one link between the sender and the receiver is available, the message is guaranteed to be reachable.
+- Efficient: the AMOP message structure is simple and its processing logic efficient, occupying little CPU and making full use of network bandwidth.
+- Secure: all AMOP communication links use SSL encryption, the encryption algorithm is configurable, and an authentication mechanism is supported.
+- Easy to use: no additional SDK configuration is needed to use AMOP.

-Please refer to [Java SDK AMOP] to use the AMOP function.(../sdk/java_sdk/amop.md).
+To use the AMOP function, please refer to [Java SDK AMOP](../sdk/java_sdk/amop.md).
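The "reliable" property above amounts to graph reachability over the mesh of live links: a message can be delivered as long as at least one chain of links connects the sender and the receiver. A minimal sketch of that idea (an illustrative model only, not the actual AMOP routing code; the node and link names are made up):

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Toy model of AMOP's reliability property: delivery succeeds as long as
// at least one chain of live links connects sender and receiver.
// Plain breadth-first reachability, not the real routing implementation.
fn reachable(links: &[(&str, &str)], from: &str, to: &str) -> bool {
    // Build an undirected adjacency list from the live links.
    let mut adj: HashMap<&str, Vec<&str>> = HashMap::new();
    for &(a, b) in links {
        adj.entry(a).or_default().push(b);
        adj.entry(b).or_default().push(a);
    }
    let mut seen: HashSet<&str> = HashSet::new();
    let mut queue: VecDeque<&str> = VecDeque::new();
    seen.insert(from);
    queue.push_back(from);
    while let Some(node) = queue.pop_front() {
        if node == to {
            return true;
        }
        // Visit every neighbour over a not-yet-used link.
        for &next in adj.get(node).into_iter().flatten() {
            if seen.insert(next) {
                queue.push_back(next);
            }
        }
    }
    false
}
```

With links SDK-A—Node1, Node1—Node2, Node2—SDK-B, `reachable` reports that SDK-A can reach SDK-B; removing the only inter-node link makes delivery impossible, which is the kind of situation the AMOP error codes report.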
## Logical architecture

![](../../images/sdk/AMOP.jpg)

Taking the typical IDC architecture of a bank as an example, the regional overview:

-- Off-chain area: The business service area within the organization. The business subsystems in this area use the blockchain SDK to connect to blockchain nodes.。
-- Blockchain P2P network: This area deploys the blockchain nodes of each organization. This area is a logical area. Blockchain nodes can also be deployed inside the organization.。
+- Off-chain area: the business service area inside an organization; its business subsystems use the blockchain SDK to connect to blockchain nodes.
+- Blockchain P2P network: the area where each institution's blockchain nodes are deployed. It is a logical area; blockchain nodes can also be deployed inside an institution.

## Common Topics

-AMOP's messaging is based on the topic (Topic) subscription mechanism, where the subscriber first subscribes to a topic, and the sender sends a message to the topic, which the subscriber receives.。
+AMOP messaging is based on a topic (Topic) subscription mechanism: a subscriber first subscribes to a topic, the sender sends a message to that topic, and the subscriber receives it.

**Send method and content**

@@ -44,18 +44,18 @@ Send content:

## Private Topics

-Under normal configuration, any recipient who subscribes to a topic can receive messages pushed by the sender.。However, in some scenarios, the sender only wants a specific recipient to receive the message and does not want unrelated recipients to listen to the topic arbitrarily.。In this scenario, you need to use the private topic function。
+Under the normal configuration, any recipient who subscribes to a topic can receive the messages pushed by the sender. In some scenarios, however, the sender wants only a specific recipient to receive the message and does not want unrelated recipients to listen in on the topic. In such scenarios you need the private topic
function.

-**Private Topics**: For a specific topic, the sender configures the public key of the desired recipient, and only the subscriber corresponding to the public key can receive the message of the private topic.。
+**Private Topics**: for a specific topic, the sender configures the public key of the intended recipient; only the subscriber corresponding to that public key can receive the private topic's messages.

The Private Topics feature is new since FISCO BCOS 2.1.0. The usage process is as follows:

-- 1: The receiver uses [Generate Public-Private Key Script](./account.md)Generate a public-private key file, keep the private key, and give the public key to the producer.。
-- 2: Refer to the configuration case to match the configuration file。Start the receiving end and the sending end to send and receive messages。
+- 1: The receiver uses the [public-private key generation script](./account.md) to generate a key-pair file, keeps the private key, and gives the public key to the sender.
+- 2: Fill in the configuration file by referring to the configuration example, then start the receiver and the sender to send and receive messages.

```eval_rst
.. important::

-    Note: Currently, AMOP private topics only support non-state secret algorithms. Therefore, when generating public and private key files, use non-state secret tools to generate them.。
+    Note: currently, AMOP private topics support only non-SM (non Chinese national cryptography) algorithms; therefore, use non-SM tools when generating the public and private key files.
```

**Certification process for private topics**

@@ -64,14 +64,14 @@ Assume that off-chain system 1 is the topic message sender (message sender) and

![](../../images/sdk/AMOP_AUTHOR.jpg)

-- 1: The off-chain system 2 connects to Node2 and claims to subscribe to T1.
Node2 adds T1 to the topic list and adds 1 to seq。Simultaneously synchronize seq to other nodes every 5 seconds。
-- 2: After receiving the seq, Node1 compares the local seq with the synchronized seq and finds that there is an inconsistency, then obtains the latest topic list from Node2 and updates the topic list to the p2p topic list. For private topics that have not yet been authenticated, the status is set to Pending Authentication。Node1 traversal list。For each private topic to be certified, do the following:
-  - 2.1: Node1 pushes messages to Node1(Message type 0x37), request off-chain system 1 to initiate the private topic authentication process。
-  - 2.2: After receiving the message, the off-chain system 1 generates a random number and uses the amop message(Message type 0x30)Send the message out and listen back for the packet。
-  - 2.3: Messages pass through off-chain system 1-->Node1-->Node2--> The route of the off-chain system 2. After receiving the message, the off-chain system 2 resolves the random number and signs the random number with the private key。
-  - 2.4: Signature Package(Message type 0x31)Pass through out-of-chain system 2-->Node2-->Node1-> The route of the off-chain system 1. After the off-chain system 1 receives the signature packet, it parses the signature and uses the public key to verify the signature。
-  - 2.5: After the signature is verified by the off-chain system 1, the message is sent(Message type 0x38)request the node to update the topic status (authentication success or authentication failure)。
-- 3: If the authentication is successful, after a message from the off-chain system reaches Node1, Node1 will forward the message to Node2, and Node2 will push the message to the off-chain system 2。
+- 1: The off-chain system 2 connects to Node2 and claims to subscribe to T1.
Node2 adds T1 to its topic list, increments seq by 1, and synchronizes seq to the other nodes every 5 seconds.
+- 2: After receiving the seq, Node1 compares its local seq with the synchronized one and, on finding an inconsistency, obtains the latest topic list from Node2 and merges it into the p2p topic list, setting the status of not-yet-authenticated private topics to Pending Authentication. Node1 then traverses the list and, for each private topic to be authenticated, does the following:
+  - 2.1: Node1 pushes a message (message type 0x37) to off-chain system 1, requesting it to initiate the private topic authentication process.
+  - 2.2: After receiving the message, off-chain system 1 generates a random number, sends it in an AMOP message (message type 0x30), and listens for the reply packet.
+  - 2.3: The message travels the route off-chain system 1 --> Node1 --> Node2 --> off-chain system 2; off-chain system 2 parses out the random number and signs it with its private key.
+  - 2.4: The signature packet (message type 0x31) travels the route off-chain system 2 --> Node2 --> Node1 --> off-chain system 1; off-chain system 1 parses the signature and verifies it with the public key.
+  - 2.5: After the signature passes verification, off-chain system 1 sends a message (message type 0x38) requesting the node to update the topic status (authentication success or failure).
+- 3: If authentication succeeds, once a message from the off-chain system reaches Node1, Node1 forwards it to Node2, and Node2 pushes it to off-chain system 2.

@@ -83,9 +83,9 @@ Same private topics support unicast and multicast, send text and files。

## error code

-- 99: Failed to send the message. After AMOP attempts through all links, the message cannot be sent to the server.
It is recommended to use the 'seq' generated during sending to check the processing of each node on the link。
-- 100: After attempting to pass through all links between blockchain nodes, the message cannot be sent to the node that can receive the message, and like the error code '99', it is recommended to use the 'seq' generated at the time of sending to check the processing of each node on the link.。
-- 101: The blockchain node pushes the message to the Sdk, and after trying to pass through all the links, it fails to reach the Sdk end, which is the same as the error code '99'. It is recommended to use the 'seq' generated at the time of sending to check the processing of each node on the link and the Sdk。
-- 102: The message timed out. It is recommended to check whether the server correctly handles the message and whether the bandwidth is sufficient。
-- 103: The AMOP request sent by the SDK to the node is rejected due to the bandwidth limit of the node.。
+- 99: Failed to send the message: after AMOP tries all links, the message cannot reach the server. It is recommended to use the 'seq' generated at send time to check how each node on the link handled it.
+- 100: After trying all links between blockchain nodes, the message cannot reach a node able to receive it. As with error code '99', use the 'seq' generated at send time to check how each node on the link handled it.
+- 101: The blockchain node pushes a message to the SDK but, after trying all links, it fails to reach the SDK end.
As with error code '99', use the 'seq' generated at send time to check how each node on the link and the SDK handled it.
+- 102: The message timed out; check whether the server handles the message correctly and whether bandwidth is sufficient.
+- 103: The AMOP request sent by the SDK to the node was rejected because of the node's bandwidth limit.

diff --git a/3.x/en/docs/design/architecture.md b/3.x/en/docs/design/architecture.md
index 7713df751..8c5f2eacf 100644
--- a/3.x/en/docs/design/architecture.md
+++ b/3.x/en/docs/design/architecture.md
@@ -4,39 +4,39 @@

Tags: "Design" "Architecture"
----------

-FISCO BCOS 3.x version adopted**Microservices Modularization**Design architecture, the overall system includes five aspects: access layer, scheduling layer, computing layer, storage layer and management layer.。The following describes the functional design of each layer。
+FISCO BCOS 3.x adopts a **microservice-modularized** architecture; the overall system comprises five parts: the access layer, scheduling layer, computing layer, storage layer, and management layer. The following describes the functional design of each layer.

-- **access layer**The access layer is mainly responsible for the blockchain.**The ability to connect**, including the "external gateway service" that provides P2P capabilities and the "internal gateway service" that provides SDK access.。In the system of the alliance chain, the "external gateway service" manages the entrance and exit of the organization's external connection and is responsible for the security certification at the organization level.。The "internal gateway service" provides access to clients (applications) within the organization.。Both gateway services can be scaled in parallel, deployed in multiple locations, and load-balanced to meet high availability requirements.。
+- **Access layer**: the access layer is mainly responsible for the blockchain's **connection capability**,
including the "external gateway service" that provides P2P capabilities and the "internal gateway service" that provides SDK access。In the system of the alliance chain, the "external gateway service" manages the entrance and exit of the organization's external connection and is responsible for the security certification at the organization level。The "internal gateway service" provides access to clients (applications) within the organization。Both gateway services can be scaled in parallel, deployed in multiple locations, and load-balanced to meet high availability requirements。 *** -- **scheduling layer**: The scheduling layer is the "brain center" system for the operation and scheduling of the blockchain kernel and is responsible for the entire blockchain system.**operation scheduling**, including network distribution scheduling, transaction pool management, consensus mechanism, calculation scheduling and other modules.。Among them, the network distribution module is mainly to realize the interconnection communication function with the access layer and handle the message distribution logic.;Trading pool management is mainly responsible for the receipt of transactions, signature verification, elimination and other functions.;The consensus mechanism is responsible for transaction sequencing, block packaging, and distributed consensus on block results to ensure consistency.;The calculation scheduling completes the scheduling processing of transaction verification (the core is the verification of smart contracts), and realizes parallel verification, which is the key to the throughput of the entire system.。 +- **scheduling layer**: The scheduling layer is the "brain center" system for the operation and scheduling of the blockchain kernel and is responsible for the entire blockchain system**operation scheduling**, including network distribution scheduling, transaction pool management, consensus mechanism, calculation scheduling and other modules。Among them, the network 
distribution module implements the interconnection with the access layer and handles the message distribution logic; transaction pool management is responsible for receiving transactions, verifying signatures, eviction, and related functions; the consensus mechanism is responsible for transaction ordering, block packaging, and distributed consensus on block results to ensure consistency; computation scheduling drives transaction verification (at its core, the execution of smart contracts) and parallelizes it, which is the key to the throughput of the whole system.

***

-- **calculation layer**Mainly responsible for:**Transaction Validation**The transaction decoding needs to be executed in the contract virtual machine to obtain the transaction execution result.。Transaction verification is the core of the entire blockchain, especially for blockchain systems based on smart contracts, and the calculation of transaction verification can cost a lot of CPU overhead.。Therefore, it is very important to realize the parallel expansion of transaction verification calculation through clustering mode.。
+- **Computing layer**: mainly responsible for **transaction verification**: decoded transactions are executed in the contract virtual machine to obtain the execution results. Transaction verification is the core of the whole blockchain, especially for smart-contract-based systems, and can cost a lot of CPU; it is therefore important to scale verification computation out in parallel through a cluster mode.

***

-- **Storage Tier**: The storage layer is responsible for**Drop Disk Storage**The storage layer focuses on how to support the storage of massive amounts of data, using distributed storage clusters to achieve scalable storage capacity.。The distributed storage industry
already has many stable and reusable open source components (such as TiKV), and this layer will reuse mature components.。 +- **Storage layer**: responsible for **persistent (on-disk) storage**. This layer focuses on how to support massive amounts of data, using a distributed storage cluster to achieve scalable storage capacity. The distributed storage industry already has many stable, reusable open-source components (such as TiKV), and this layer reuses those mature components. *** -- **Management**: The management is implemented for each module of the entire blockchain system.**visual management**platform, including management functions such as deployment, configuration, logging, and network routing。The FISCO BCOS 3.x system architecture is built based on the open-source microservice framework Tars, and the capabilities of this layer reuse the mature Tars.-Framework Management Components。 +- **Management layer**: implements a **visual management** platform for every module of the blockchain system, including management functions such as deployment, configuration, logging, and network routing. The FISCO BCOS 3.x system architecture is built on the open-source microservice framework Tars.
The capabilities of this layer reuse the mature Tars-Framework management components. ![](../../images/design/fisco_bcos_system_architecture.png) FISCO BCOS 3.x uses a microservices architecture and supports **flexibly splitting and combining** the microservice modules, so that service patterns of different forms can be built, including the "**lightweight Air edition**", the "**Pro edition**", and the "**large-capacity Max edition**". -- **Lightweight Air Edition**: Adopting all-in-The one encapsulation mode compiles all modules into a binary (process), a process is a blockchain node, including all functional modules such as network, consensus, access, etc., using local RocksDB storage。It is suitable for beginners, functional verification, POC products, etc.。 +- **Lightweight Air edition**: uses an all-in-one packaging mode that compiles all modules into a single binary (process). One process is one blockchain node, containing all functional modules such as network, consensus, and access, with local RocksDB storage. It is suitable for beginners, functional verification, PoC products, etc. *** -- **Pro Edition**It consists of two access layer services: RPC and Gateway, and multiple blockchain node services. One node service represents a group, and the storage uses local RocksDB. All nodes share access layer services. The two access layer services can be extended in parallel.。It is suitable for production environments with controllable capacity (within T level) and can support multi-group expansion.。 +- **Pro edition**: consists of two access-layer services, RPC and Gateway, plus multiple blockchain node services. One node service represents one group, storage uses local RocksDB, and all nodes share the access-layer services.
The two access-layer services can be scaled out in parallel. It is suitable for production environments with controllable capacity (under the terabyte level) and supports multi-group expansion. *** -- **Large Capacity Max Edition**: Consists of all services at each layer, each service can be independently extended, storage adopts distributed storage TiKV, management adopts Tars-Framework Services。It is suitable for scenarios where massive transactions are linked and a large amount of data needs to be stored on disk.。 +- **Large-capacity Max edition**: consists of all services at every layer; each service can be scaled independently, storage uses the distributed store TiKV, and management uses the Tars-Framework services. It is suitable for scenarios with massive on-chain transaction volume and large amounts of data to be persisted. ![](../../images/design/fisco_bcos_version.png) diff --git a/3.x/en/docs/design/boostssl.md b/3.x/en/docs/design/boostssl.md index 5a870de07..10889418c 100644 --- a/3.x/en/docs/design/boostssl.md +++ b/3.x/en/docs/design/boostssl.md @@ -1,17 +1,17 @@ -# 19. Public network component BoostSSL. +# 19. Public network component BoostSSL Tags: "network components" "boostssl" ---- -'boostssl 'is'fisco-Bcos' provides a public network component, built-in http, websocket two protocols, support for state-secret, non-state-secret SSL connection, in 'FISCO-BCOS 3.0 'used in multiple modules。 +`boostssl` is a public network component provided by `fisco-bcos`. It has the HTTP and WebSocket protocols built in, supports both national-cryptography (SM) and non-SM SSL connections, and is used by multiple modules of `FISCO-BCOS 3.0`. ## 1. Goals - Support national-cryptography (SM) and non-SM SSL connections - Support the HTTP protocol -- Support for WebSocket protocol -- Simple, easy-to-use interface +- Support the WebSocket protocol +- A simple, easy-to-use interface ## 2.
Design @@ -53,17 +53,17 @@ To be added **Compile:** -- 'Linux 'Compile +- `Linux` compilation ```shell # source /opt/rh/devtoolset-7/enable # run on CentOS cd bcos-boostssl mkdir build && cd build -cmake ../ -DBUILD_SAMPLE=ON # Centos uses cmake3, BUILD _ SAMPLE to compile the sample program of the sample directory. +cmake ../ -DBUILD_SAMPLE=ON # CentOS uses cmake3; BUILD_SAMPLE=ON compiles the sample programs in the sample directory ``` -- 'macOS 'Compile +- `macOS` compilation ```shell cd bcos-boostssl @@ -83,11 +83,11 @@ MSBuild bcos-boostssl.sln /p:Configuration=Release /p:Platform=x64 ## 5. Case -`bcos-boostssl 'in' FISCO-Use in BCOS ': +Use of `bcos-boostssl` in `FISCO-BCOS`: - `bcos-cpp-sdk`: uses `boostssl` as the client to connect to the node's `rpc`; see: - [github]() - [gitee]() -- `FISCO-BCOS rpc 'module: Use 'boostssl' as the server to provide 'RPC' services for both the 'http' and 'websocket' protocols. For more information, see: +- The `FISCO-BCOS rpc` module: uses `boostssl` as the server to provide `RPC` services over both the `http` and `websocket` protocols. For more information, see: - [github]() - [gitee]() diff --git a/3.x/en/docs/design/cns_contract_name_service.md b/3.x/en/docs/design/cns_contract_name_service.md index e527080b0..c0f24f871 100644 --- a/3.x/en/docs/design/cns_contract_name_service.md +++ b/3.x/en/docs/design/cns_contract_name_service.md @@ -6,9 +6,9 @@ Tags: "Contract Naming Service" "CNS Table Structure" ## Note -CNS service only in 'FISCO BCOS 2.+Version 'can be used in' FISCO BCOS 3.+Version 'has been deprecated and is managed in a more friendly tree contract directory. For more information, see [Contract File System BFS](./contract_directory.md) +The CNS service is only available in `FISCO BCOS 2.x`; it has been deprecated in `FISCO BCOS 3.x`, where contracts are managed in a friendlier tree-structured directory.
For more information, see [Contract File System BFS](./contract_directory.md) -**Migration Instructions:** Due to the abandonment of the CNS interface, BFS contains the functions of the CNS and also provides the corresponding adaptation interface.。You can change the original CNS service interface to the BFS interface. The interface corresponds to the following table: +**Migration instructions:** Since the CNS interface has been deprecated, BFS now covers the CNS functions and provides corresponding adapter interfaces. You can replace the original CNS service interfaces with the BFS interfaces according to the following table: | Method Name| CNSService | BFSService | |--------------------------------|-----------------------------------------------------------------|---------------------------------------------------------------| @@ -22,29 +22,29 @@ CNS service only in 'FISCO BCOS 2.+Version 'can be used in' FISCO BCOS 3.+Versio The process of invoking an Ethereum smart contract includes: 1. Preparation of contracts; -2. Compile the contract to get the contract interface abi description.; -3. Deploy the contract to get the contract address.; -4. Encapsulate the abi and address of the contract, and call the contract through tools such as SDK.。 +2. Compile the contract to obtain its interface (ABI) description; +3. Deploy the contract to obtain its address; +4. Encapsulate the contract's ABI and address, and call the contract through tools such as an SDK. -As can be seen from the contract call process, the contract abi and the contract address must be prepared before the call.。This use of the following problems: +As can be seen from this process, the contract ABI and contract address must be prepared before the call. This usage has the following problems: -1. The contract abi is a long JSON string, and the caller does not need to directly sense it.; -2.
The contract address is a magic number of 20 bytes, which is inconvenient to remember. If it is lost, the contract will be inaccessible.; +1. The contract ABI is a long JSON string that the caller should not have to handle directly; +2. The contract address is a 20-byte magic number that is inconvenient to remember, and if it is lost the contract becomes inaccessible; 3. After the contract is redeployed, one or more callers need to update the contract address; -4. It is not convenient for version management and contract gray scale upgrade.。 +4. It is inconvenient for version management and contract grayscale upgrades. To solve these problems and give callers a good smart-contract invocation experience, FISCO BCOS provides the **CNS contract naming service**. ## Glossary -- **CNS**(Contract Name Service) By providing a record of the mapping between the contract name and the contract address on the chain and the corresponding query function, it is convenient for the caller to call the contract on the chain by memorizing the simple contract name.。 +- **CNS** (Contract Name Service): records the mapping between contract names and on-chain contract addresses, and provides the corresponding query functions, so that callers can invoke on-chain contracts by remembering a simple contract name. - **CNS information**: the contract name, contract version, contract address, and contract ABI - **CNS table**: used to store CNS information ## Advantages of CNS over Ethereum's original calling method -- Simplify contract invocation; -- Contract upgrade is transparent to callers and supports contract grayscale upgrade。 +- It simplifies how contracts are invoked; +- Contract upgrades are transparent to callers and grayscale upgrades are supported. ## Comparison with ENS @@ -52,15 +52,15 @@ ENS (Ethereum Name Service) is the Ethereum name service. ENS functions like the more familiar DNS (Domain Name Service) domain
name system, but what it provides is not an Internet URL: it expresses Ethereum contract addresses and wallet addresses as xxxxxx.eth, for accessing contracts or transferring funds. Comparing the two: -- The address types mapped by ENS include contract address and wallet address. CNS can support this. When the address type is wallet address, the contract abi is empty.。 -- ENS has auction function, CNS does not need support。 +- The address types ENS maps include contract addresses and wallet addresses; CNS can support both, with the contract ABI left empty when the address is a wallet address. +- ENS has an auction function, which CNS does not need. - ENS supports multi-level domain names, which CNS does not need to support. ## Module Architecture ![](../../images/design/contract_name_service/cns_architecture.png) -< center > CNS architecture < / center > +
CNS architecture
## Core Process @@ -68,12 +68,12 @@ The process for the SDK to deploy a contract and to call a contract is as follows: ![](../../images/design/contract_name_service/deploy_and_call.png) -< center > SDK deployment contract and call contract process < / center > +
SDK contract deployment and invocation process
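The flow described here — register a name-to-address mapping on deploy, resolve by name and optional version on call — can be sketched with a minimal in-memory model. This is illustrative only: the class and method names below are hypothetical stand-ins for the real CNS table and SDK interfaces described later on this page.

```python
# Minimal in-memory sketch of CNS semantics (illustrative, not the real SDK):
# (name, version) is the joint primary key, insert is append-only, and
# resolving a bare name falls back to the most recently registered version.
class CnsSketch:
    def __init__(self):
        self._table = {}  # name -> {version: address}, insertion-ordered

    def register_cns(self, name, version, addr):
        versions = self._table.setdefault(name, {})
        if version in versions:
            return False  # duplicate (name, version) is rejected, mirroring the node's double check
        versions[version] = addr
        return True

    def resolve(self, name_and_version):
        # "name:version" splits on ':'; a bare name means "latest version"
        name, _, version = name_and_version.partition(":")
        versions = self._table.get(name)
        if not versions:
            raise KeyError(f"unknown contract: {name}")
        if not version:
            version = list(versions)[-1]  # default to the latest registered version
        if version not in versions:
            raise KeyError(f"unknown version: {name_and_version}")
        return versions[version]

cns = CnsSketch()
cns.register_cns("HelloWorld", "1.0", "0x1111")
cns.register_cns("HelloWorld", "2.0", "0x2222")
print(cns.resolve("HelloWorld:1.0"))  # -> 0x1111
print(cns.resolve("HelloWorld"))      # -> 0x2222 (latest version)
print(cns.register_cns("HelloWorld", "2.0", "0x3333"))  # -> False (duplicate)
```

The key design point mirrored here is that registration never overwrites: a duplicate (name, version) pair is refused, so a given version always resolves to the same address.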
- When deploying a contract, the SDK generates the Java class corresponding to the contract, calls the class's deploy interface to publish the contract and obtain the contract address, and then calls the CNS contract's insert interface to put the CNS information on the chain. -- When calling a contract, the SDK introduces the Java class of the contract and loads the instantiated。The load loading interface can pass in the contract address (the original Ethereum method) or the combination of the contract name and the contract version (CNS method). When the SDK processes the CNS method, the contract address is obtained by calling the CNS module to query the information on the chain.。 -- For contract invocations without a version number, the SDK implements the default invocation of the latest version of the contract.。 -- The abi information of the contract on the chain is an optional field.。 +- When calling a contract, the SDK imports the contract's Java class and loads an instance of it. The load interface accepts either a contract address (the original Ethereum style) or a contract name plus contract version (the CNS style); for the CNS style, the SDK obtains the contract address by querying the on-chain information through the CNS module. +- For contract invocations without a version number, the SDK defaults to calling the latest version of the contract. +- The contract's on-chain ABI information is an optional field. ## Data Structure @@ -89,11 +89,11 @@ CNS information is stored in a system table; each ledger has its own. The CNS table structure is:
| Field | Type | Null | Key | Explanation |
|-------|------|------|-----|-------------|
-< tr > < td > name < / td > < td > string < / td > < td > No < / td > < td > PRI < / td > < td > contract name, where name and version are joint primary keys < / td > < / tr > -< tr > < td > version < / td > < td > string < / td > < td > No < / td > < td > < / td > < td > contract version, name and version are joint primary keys < / td > < / tr > -< tr > < td > address < / td > < td > string < / td > < td > No < / td > < td > < / td > < td > contract address < / td > < / tr > -< tr > < td > abi < / td > < td > string < / td > < td > YES < / td > < td > < / td > < td > contract abi < / td > < / tr > -< tr > < td > _ status _ < / td > < td > string < / td > < td > No < / td > < td > < / td > < td > Distributed storage common field, "0" can be deleted with "1" < / td > < / tr >
+| name | string | No | PRI | contract name; name and version form the joint primary key |
+| version | string | No | | contract version; name and version form the joint primary key |
+| address | string | No | | contract address |
+| abi | string | YES | | contract ABI |
+| _status_ | string | No | | common distributed-storage field: "0" means available, "1" means deleted |
### Contract Interface @@ -108,16 +108,16 @@ contract CNS } ``` -- The CNS contract is not exposed to the user. It is the interface between the SDK and the underlying CNS table.。 -- The insert interface provides the function of uploading CNS information.
The four parameters of the interface are contract name, contract version version, contract address addr, and contract ABI information abi.。The SDK call interface needs to determine whether the combination of name and version is duplicated with the original database record, and only on the premise that it is not duplicated can the chain transaction be initiated.。When a node executes a transaction, the precompiled logic will Double Check and discard the transaction if it finds duplicate data.。The insert interface only increases and does not change the contents of CNS tables.。 -- The selectByName interface parameter is the contract name name, which returns all different version records based on the contract in the table.。 -- The selectByNameAndVersion interface parameter is the contract name and contract version version, which returns the unique address of the contract version in the table.。 +- The CNS contract is not exposed to users; it is the interface between the SDK and the underlying CNS table. +- The insert interface uploads CNS information. Its four parameters are the contract name name, contract version version, contract address addr, and contract ABI information abi. Before initiating the on-chain transaction, the SDK must check that the name-and-version combination does not duplicate an existing record; when a node executes the transaction, the precompiled logic double-checks and discards the transaction if it finds duplicate data. The insert interface only appends and never changes existing CNS table contents. +- The selectByName interface takes the contract name name and returns all records of that contract across its versions. +- The selectByNameAndVersion interface takes the contract name name and contract version version.
The unique address of that contract version in the table is returned. #### How the CNS table is updated -**Precompiled Contracts**Is the FISCO BCOS underlying through C++An efficient smart contract implemented for the configuration and management of system information at the bottom of FISCO BCOS.。After the introduction of the precompiled logic, the FISCO BCOS node executes the transaction as follows. +**Precompiled contracts** are efficient smart contracts implemented in C++ at the FISCO BCOS base layer, used to configure and manage the underlying system information. After the introduction of precompiled logic, a FISCO BCOS node executes transactions as follows. -The CNS contract is a pre-compiled contract type, and the node will pass the built-in C.++Code logic implements insert and query operations on CNS tables without EVM execution, so CNS contracts only provide function interface descriptions and no function implementations。**Preset precompiled address of CNS contract to 0x1004。** +The CNS contract is a precompiled contract: the node implements insert and query operations on the CNS table through built-in C++ logic, without EVM execution, so the CNS contract only provides function interface descriptions, not implementations. **The precompiled address of the CNS contract is preset to 0x1004.** #### Contract interface return example @@ -151,14 +151,14 @@ SDK developers can use the following two interfaces in 'org.fisco.bcos.web3j.pre - Description: public TransactionReceipt registerCns(String name, String version, String addr, String abi) - Function: registers contract information on the chain - Parameters: name - contract name, version - contract version, addr - contract address, abi - contract ABI -- Return: the receipt of the up-chain transaction, which contains the up-chain result information and error information (if any).
+- Return: the receipt of the on-chain transaction, containing the on-chain result and any error information ### resolve - Description: public String resolve(String contractNameAndVersion) - Function: queries the contract address based on the contract name and contract version - Parameter: contractNameAndVersion - contract name + contract version -- Return: The contract address. If there is no contract information of the specified version, the interface throws an exception. -- (contractNameAndVersion by ':'to split the contract name and contract version, when the contract version is missing, the SDK default call uses the latest version of the contract to query +- Return: the contract address; if there is no contract information for the specified version, the interface throws an exception +- Note: contractNameAndVersion uses ':' to separate the contract name and contract version; when the version is missing, the SDK defaults to querying with the latest version of the contract Note: 1. Before calling the interface, convert the sol contract into a Java class and place the generated Java class and abi and bin files in the correct directory.
For more information, see [Web3SDK](../sdk/java_sdk.md); @@ -170,6 +170,6 @@ The console provides the functions of deploying contracts, invoking contracts, and more. The commands provided by the console include: -- deployByCNS: Deploying Contracts Through the CNS -- callByCNS: Invoking Contracts Through the CNS -- queryCNS: Query CNS table information based on contract name and contract version number (optional parameters) +- deployByCNS: deploy a contract via CNS +- callByCNS: invoke a contract via CNS +- queryCNS: query CNS table information by contract name and contract version number (optional parameter) diff --git a/3.x/en/docs/design/committee_design.md b/3.x/en/docs/design/committee_design.md index ec1edc1cf..373eff30b 100644 --- a/3.x/en/docs/design/committee_design.md +++ b/3.x/en/docs/design/committee_design.md @@ -4,37 +4,37 @@ Tags: "contract permissions" "deployment permissions" "permission control" "perm ---- -FISCO BCOS 3.x introduces the authority governance system of contract granularity.。The governance committee can manage the deployment of the contract and the interface call permission of the contract by voting.。 +FISCO BCOS 3.x introduces a contract-granularity permission governance system. The governance committee can manage contract deployment and contract interface call permissions by voting. Please refer to the following link for detailed permission governance usage documentation: [Permission Governance Usage Guide](../develop/committee_usage.md) ## Overall design -In the FISCO BCOS3.0 framework, the governance system is implemented by a system contract, which provides relatively flexible and versatile functional modules that meet the demands of almost all scenarios while ensuring pluggability.。 +In the FISCO BCOS 3.0 framework, the governance system is implemented by a system contract, providing flexible and versatile functional modules that meet the demands of almost all scenarios while remaining pluggable. ### 1.
Role division -In FISCO BCOS3.0, on-chain roles can be divided into three categories according to their responsibilities: governance roles, contract administrator roles, and user roles, which are managed and managed in turn.。 +In FISCO BCOS 3.0, on-chain roles can be divided into three categories according to their responsibilities: governance roles, contract administrator roles, and user roles, where each level manages the one below it. -**Governance role**: Governance of chain governance rules, governance committees, top chain managers。Including: governance rule setting, governance committee election, account freezing, unfreezing, etc.。At the same time, the governance role can control the role of the lower-level contract administrator.。 +**Governance role**: governs the chain's governance rules and the governance committee; these are the top-level chain managers. Their duties include setting governance rules, electing the governance committee, freezing and unfreezing accounts, and so on. The governance role also controls the lower-level contract administrator role. -**Contract Administrator Role**: The Contract Administrator role manages access to contract interfaces。For on-chain participants, any user can deploy contracts when the contract administrator does not set contract deployment permissions.。The contract deployment account can specify the contract administrator account when deploying the contract, if not specified, the contract administrator defaults to the contract deployer.。It should be noted that once the governance committee finds that the contract administrator has not performed his or her duties as contract administrator, the contract administrator can be reset by a vote of the governance committee.。 +**Contract administrator role**: manages access to contract interfaces. When no contract deployment permission has been set, any on-chain user can deploy contracts. The contract deployment account can specify the
contract administrator account when deploying the contract; if not specified, the contract administrator defaults to the contract deployer. Note that if the governance committee finds that a contract administrator has failed to perform those duties, it can reset the contract administrator by vote. -**User Roles**A user role is a role that participates in the business. Any account (including the governance role and the contract administrator role) belongs to the user role。Whether the user role can participate in the relevant business (issuing transactions) depends on whether the contract administrator has set the relevant permissions.。If the contract administrator does not set a permission type for the contract interface (blacklist or whitelist mode), anyone can call the contract interface。If the whitelist is set, you can only access it when the whitelist is hit. If the whitelist is in blacklist mode, you cannot access the corresponding interface if the whitelist is hit.。 +**User role**: a role that participates in the business; every account (including governance and contract administrator accounts) is also a user role. Whether a user can participate in the relevant business (issue transactions) depends on the permissions set by the contract administrator. If no permission type (blacklist or whitelist mode) is set for a contract interface, anyone can call it. In whitelist mode, an account can access the interface only if it is on the whitelist; in blacklist mode, an account on the blacklist cannot access the corresponding interface. ### 2.
Governance rules management -- Governance roles complete the governance committee election through the governance module and set governance rules such as the weight of voting rights for each governance committee member, turnout and participation in the governance decision-making process。Also set contract deployment permissions; +- The governance role completes the governance committee election through the governance module and sets governance rules, such as each committee member's voting weight and the turnout and pass-rate thresholds used in decision-making; it also sets contract deployment permissions; - The contract administrator role deploys business contracts and sets permissions on business contract-related interfaces; -- User roles complete business operations by calling the contract interface.。 +- User roles complete business operations by calling contract interfaces. ## Detailed design ### 1. Governance module -The governance module provides governance functions, which are completed by the governance committee through multi-party voting according to the decision rules.。The governance contract data structure is as follows. +The governance module provides governance functions, which are carried out by the governance committee through multi-party voting according to the decision rules. The governance contract data structure is as follows: ```solidity // address list of governors @@ -49,31 +49,31 @@ uint8 public _winRate; #### Types of governance proposals -The types of proposals of the Governance Committee mainly include the following types.
+The proposals of the governance committee mainly include the following types: -- Meta-governance classes: add, remove members, modify governance member weights, modify thresholds for voting, set deployment permissions, proposal voting, and withdrawal。 +- Meta-governance: adding and removing members, modifying member weights, modifying voting thresholds, setting deployment permissions, and voting on or withdrawing proposals. - Permission: resetting a contract administrator. #### Governance committee decision rules -Decision rules make decisions based on data from three dimensions: the weight of the governor's voting rights, turnout and participation.。When the governance committee has only one administrator, it degenerates to the administrator model, and all proposals pass。If the governance committee has more than one yes, it will be judged by the following rules。When the manager changes, all outstanding decision proposals are decided according to the new manager parameters.。 +Decision rules are based on three dimensions: each governor's voting weight, the turnout rate, and the pass rate. When the governance committee has only one administrator, it degenerates into administrator mode and all proposals pass. When the committee has more than one member, proposals are judged by the rules below. When the membership changes, all pending proposals are decided according to the new parameters. First, the turnout (participation) rate threshold takes values in 1-100. When it is set to 0, the turnout threshold rule is disabled. When it is adjusted, all pending proposals are decided according to the new threshold. The turnout decision is computed by the following formula; if it is not satisfied, the proposal's status is 'noEnoughVotes'. -**Total Voting Weight / Total Weight > =
Participation Threshold** +**total voting weight / total weight >= turnout threshold** Second, the weight pass-rate threshold takes values in 0-100. When it is set to 0, the pass-rate threshold rule is disabled. When it is adjusted, all pending proposals are decided according to the new threshold. The pass-rate decision is computed by the following formula: if it holds, the proposal passes; if not, the proposal fails. -**Total consent weight / total voting weight > = weight pass rate threshold** +**total approving weight / total voting weight >= weight pass-rate threshold** #### Governance operation process -- Initial Phase +- Initial phase -To simplify the initialization operation and improve the user experience, you only need to configure one account as the initial member of the governance committee when building the chain.。If not specified, the system will automatically randomly generate a private key, as a member of the governance committee, the administrator weight is 1, the turnout threshold and participation threshold are 0, that is, after initialization, the governance committee is administrator mode.。 +To simplify initialization and improve the user experience, only one account needs to be configured as the initial governance committee member when the chain is built. If none is specified, the system automatically generates a random private key for the committee member, with an administrator weight of 1 and both the turnout and pass-rate thresholds set to 0; that is, after initialization the governance committee is in administrator mode. -- Operation Phase +- Operation phase During the operation phase, the governance committee performs meta-governance and permission operations. All operations can be divided into the proposal, voting, and decision-making
stages, with passed decisions executed automatically. @@ -81,26 +81,26 @@ During the operational phase, the governance committee operates on the meta-gove #### Permission Management -Permissions include creation permissions, contract access management permissions, and table access management permissions.。 +Permissions include contract creation permissions, contract access management permissions, and table access management permissions. - Contract creation permission: the permission to deploy contracts, managed by the governance committee. - Contract access management: access to contract interfaces, managed by the contract administrator. -The so-called contract administrator mode, that is, when the contract is deployed, an account is designated as the administrator of the contract to manage the access rights of the relevant interface.。For contract or table access, the main reason for using the contract administrator model instead of the governance committee model for permission management is to consider the user experience and decision efficiency.。At the same time, the contract administrator can be modified by the governance committee to ensure the security of contract authority management.。 +In contract administrator mode, an account is designated as the contract's administrator at deployment time to manage access to its interfaces. For contract and table access, the contract administrator model is used instead of the governance committee model mainly for the sake of user experience and decision efficiency. At the same time, the contract administrator can be changed by the governance committee, ensuring the security of contract permission management. #### Permission Policy Considering the efficiency of permission management operations, the permission module provides two management policies: whitelist mode and blacklist mode. -- Whitelist mode: When an account is in the interface whitelist,
the account can access the current interface; -- Blacklist mode: When an account is in the interface blacklist, the account cannot access the current interface; +- Whitelist mode: When an account is in the interface whitelist, the account can access the current interface; +- Blacklist mode: When an account is in the interface blacklist, the account cannot access the current interface; #### Operation process The operation process of contract authority is as follows。 -1. Deployment policy setting: The governance committee decides to set the deployment policy of the group, and selects whether it is a blacklist or a whitelist.。 -2. Access policy setting: The contract administrator has the right to set the ACL policy of the contract access interface, and select the blacklist or whitelist mode.。The contract administrator directly invokes the setMethodAuthType of the permission contract.(address contractAddr, bytes4 func, uint8 acl)to set the type of ACL。 +1. Deployment policy setting: The governance committee decides the deployment policy of the group, selecting either blacklist or whitelist mode。 +2. Access policy setting: The contract administrator has the right to set the ACL policy of the contract access interfaces, selecting either blacklist or whitelist mode。The contract administrator directly invokes setMethodAuthType(address contractAddr, bytes4 func, uint8 acl) of the permission contract to set the ACL type。 3. Add access rules。The contract administrator can add access rules; all rules are saved in mapping\[methodId]\[account] => bool ### 3.
Contract Design @@ -110,19 +110,19 @@ For the address of the permission management contract, see https://github.com/FI Major contracts include: - CommitteeManager: the only entry point for permission governance; it manages proposals and the governance committee, whose members can call the corresponding contract interfaces to initiate governance proposals。The underlying node has a unique address 0x10001 -- ProposalManager: Proposal management contract, managed by CommitteeManager, for storing proposals +- ProposalManager: proposal management contract, managed by the CommitteeManager, for storing proposals - Committee: governance committee contract, managed by the CommitteeManager, records governance committee information -- ContractAuthPrecompiled: Permission information read / write interface provided by the underlying node. The write interface has permission control. The underlying node has a unique address 0x1005. +- ContractAuthPrecompiled: permission information read and write interface provided by the underlying node; the write interface has permission control, and the underlying node has a unique address 0x1005 Permission governance performs the following steps: 1. Governance member A initiates a proposal to modify the system configuration and calls the CommitteeManager interface -2. The CommitteeManager obtains relevant information about the governance committee from the existing Committee. +2. The CommitteeManager obtains relevant information about the governance committee from the existing Committee 3. CommitteeManager calls ProposalManager, creates a proposal and pushes it into the proposal list 4. Governance committee member B calls the CommitteeManager interface to vote on the proposal 5. CommitteeManager calls ProposalManager, votes on the proposal, and writes to the voting list 6. The ProposalManager collects the voting results of the proposal and calls the Committee interface to confirm whether the proposal threshold is reached -7.
Committee returns the confirmation result. +7. Committee returns the confirmation result 8. After the CommitteeManager confirms that the proposal has reached the executable state, it initiates a call to 'SystemConfigPrecompiled' or 'ConsensusPrecompiled' 9. The system pre-compiled contract will first confirm that the calling sender starts with / sys /, and then execute。(CommitteeManager is a built-in on-chain contract with a fixed address / sys / 10001) @@ -134,12 +134,12 @@ Permission governance performs the following steps: Each time a contract is deployed, a storage table named contract name + "_accessAuth" is created in the same directory, for storing interface-to-user whitelist data。 -The underlying layer can directly access the storage through the table name to obtain permission information.。In order for solidity and liquid to access the permission table corresponding to the directory contract, open the / sys / contractAuth system contract, you can access the permission storage table corresponding to the contract by accessing the / sys / contractAuth method to determine the permissions。 +The underlying layer can directly access the storage through the table name to obtain permission information。To let solidity and liquid access the permission table of a contract in the directory, the / sys / contractAuth system contract is opened; permissions can be determined by calling the / sys / contractAuth methods to access the permission storage table of the contract。 #### Concrete implementation -1. Create a permission table when creating a contract: When executing the creation, you can create an additional permission table.。 -2. Provide the read and write operation interface of the permission table: provide the / sys / contractAuth system contract, which is specially used as the system contract to access the permission table.。Solidity uses the 0x1005 address。 +1.
Create a permission table when creating a contract: When executing the creation, you can create an additional permission table。 +2. Provide the read and write operation interface of the permission table: provide the / sys / contractAuth system contract, which is specially used as the system contract to access the permission table。Solidity uses the 0x1005 address。 3. System contract ContractAuth interface ```solidity diff --git a/3.x/en/docs/design/compatibility.md b/3.x/en/docs/design/compatibility.md index ec8b6fcb5..eeab89657 100644 --- a/3.x/en/docs/design/compatibility.md +++ b/3.x/en/docs/design/compatibility.md @@ -4,13 +4,13 @@ Tags: "Compatibility" "Version Upgrade" ----- ## 1. Design Objectives In the FISCO BCOS 3.X iterations, to achieve compatibility between versions, FISCO BCOS designed corresponding compatibility schemes for the network level and the data level。The objectives of the scheme mainly include two points: first, -It can guarantee the network, data, execution module and common codec protocol between various versions.(scale/abi)All can be backward compatible, second, support can be gray-scale upgrade, and gray-scale upgrade process, the system can be normal consensus, out of the block.。 +the network, data and execution modules and the common codec protocols(scale/abi)of all versions are backward compatible; second, gray-scale upgrades are supported, and during a gray-scale upgrade the system can still reach consensus and produce blocks normally。 ## 2. Network Compatibility Design When the FISCO BCOS version is updated, the network module will exchange and negotiate version information when network connections are established / disconnected, to achieve compatibility between network module versions. The specific design is as follows: 1. After establishing a connection / reconnecting, nodes(or a node and the SDK)exchange version information and conduct version negotiation - 2.
Each network-related functional module corresponds to a compatibility version. + 2. Each network-related functional module corresponds to a compatibility version Specifically, ProtocolVersion is designed to identify different versions and achieve compatibility between versions: ```c++ @@ -29,7 +29,7 @@ V2 = 2, ### 2.2 Network Codec Protocol Compatibility - For network codec protocol compatibility, FISCO BCOS implements compatibility by adding the version field in the P2PMessage / WSMessage design, and when the node receives the message, it calls the corresponding codec method according to the version in the message.。 + For network codec protocol compatibility, FISCO BCOS implements compatibility by adding the version field in the P2PMessage / WSMessage design, and when the node receives the message, it calls the corresponding codec method according to the version in the message。 The specific data structure of P2PMessage / WSMessage is designed as follows: ```c++ class P2PMessageFactory @@ -39,7 +39,7 @@ P2PMessageFactory() { / / set m _ protocol2Codec } -/ / The mapping between the network message package version and its corresponding codec protocol. 
+/ / The mapping between the network message package version and its corresponding codec protocol std::shared_ptr> m_protocol2Codec; } @@ -79,8 +79,8 @@ std::shared_ptr> m_protocol2Codec; ``` -In general, FISCOBCOS uses PB encoding in the network application layer (synchronization, consensus) messages, with backward compatibility.;For AMOP message packets, it uses binary encoding, and the design adds a version field to the AMOP information for easy expansion.; -FISCO BCOS uses JSON encoding and decoding in block high push, group information, and EventLog push, with backward compatibility.。 +In general, FISCO BCOS uses PB encoding for network application layer (synchronization, consensus) messages, with backward compatibility;for AMOP message packets, it uses binary encoding, and the design adds a version field to the AMOP message for easy extension; +FISCO BCOS uses JSON encoding and decoding for block height push, group information push, and EventLog push, with backward compatibility。 ### 2.3 Network Layer Protocol Compatibility FISCO BCOS negotiates peer version information through a handshake: the main modules involved include Gateway, AMOP, EventSub, RPC;the version information negotiated in the handshake is stored in the Session. The specific related protocol information is designed as follows: @@ -159,12 +159,12 @@ TxsSync = 2001, // executorservice Executor = xxx, // rpcservice -AMOPClient = 3001, // SDK/bcos-AMOP protocol ID in rpc +AMOPClient = 3001, / / The AMOP protocol ID in SDK / bcos-rpc EventSub = 5000, / / Event Listening Protocol -RPC = 6000, / / RPC protocol, including block high push, handshake protocol, group information push, etc.
+RPC = 6000, / / RPC protocol, including block height push, handshake protocol, group information push, etc // gatewayservice -AMOPServer = 3000, // bcos-AMOP protocol ID between gateway -Gateway = 4000, // bcos-Agreement between gateways(such as exchanging basic information) +AMOPServer = 3000, / / AMOP protocol ID between bcos-gateway +Gateway = 4000, / / protocol between bcos-gateway(such as exchanging basic information) }; ``` @@ -180,7 +180,7 @@ std::map m_negotiatedVersion; }; ``` -consensus / synchronization can be bcos-gateway query protocol version, bcos-Gateway Pushable Protocol Version +Consensus / synchronization can query the protocol version of bcos-gateway, and bcos-gateway can push the protocol version ```c++ class FrontServiceInterface { @@ -193,11 +193,11 @@ std::function)>) = 0; ``` ### 3. Data Compatibility Design -Compared with the compatible line design at the network level, the data level cannot be upgraded smoothly like the network layer, and the system upgrade must be triggered through the system contract. +Compared with the compatibility design at the network level, the data level cannot be upgraded as smoothly as the network layer, and the system upgrade must be triggered through the system contract Specifically, FISCO BCOS data compatibility is mainly related to: -- Basic data structures such as block, blockHeader, transaction, and receive: +- Basic data structures such as block, blockHeader, transaction, and receipt: - (1) Each of these fields has a version. You can use version to achieve codec compatibility.; + (1) Each of these fields carries a version; the version can be used to achieve codec compatibility; (2) block, blockHeader, transaction generated by the consensus packaging module sealer @@ -218,11 +218,11 @@ FISCOBCOS designs major and minor version numbers. For example, for FISCOBCOS 3. #### 3.2 Data protocol storage and change The sys_config system table designed by FISCO BCOS stores the version number of the recorded data.
The key and value are as follows: - key: compatibility_version -- value: The protocol version used by the next block, such as 3.0.x, 3.1.x, etc. +- value: the protocol version used by the next block, such as 3.0.x, 3.1.x, etc -If a version update occurs, you need to change the data protocol and modify the compatibility _ version configuration item of sys _ config.: +If a version update occurs, you need to change the data protocol and modify the compatibility_version configuration item of sys_config: - Packaging module: Sealer, used to support data structure changes such as blocks and transactions -- BlockContext: Used to support execution compatibility(Precompiled contracts, etc.) +- BlockContext: used to support execution compatibility(precompiled contracts, etc) ## 4. Conclusion In summary, FISCO BCOS designs a compatibility scheme: the upper modules uniformly use PB encoding for codec protocols;for network protocol compatibility, protocols are divided by service process, achieving compatibility per service;for data compatibility, version compatibility is achieved through the version information in basic data structures such as block, blockHeader, transaction and receipt and in the sys_config system table。 @@ -238,12 +238,12 @@ TxsSync = 2001, // executorservice Executor = xxx, // rpcservice -AMOPClient = 3001, // SDK/bcos-AMOP protocol ID in rpc +AMOPClient = 3001, / / The AMOP protocol ID in SDK / bcos-rpc EventSub = 5000, / / Event Listening Protocol -RPC = 6000, / / RPC protocol, including block high push, handshake protocol, group information push, etc.
+RPC = 6000, / / RPC protocol, including block height push, handshake protocol, group information push, etc // gatewayservice -AMOPServer = 3000, // bcos-AMOP protocol ID between gateway -Gateway = 4000, // bcos-Agreement between gateways(such as exchanging basic information) +AMOPServer = 3000, / / AMOP protocol ID between bcos-gateway +Gateway = 4000, / / protocol between bcos-gateway(such as exchanging basic information) // SDK }; ``` diff --git a/3.x/en/docs/design/consensus/consensus.md b/3.x/en/docs/design/consensus/consensus.md index d76396559..8d6c522b0 100644 --- a/3.x/en/docs/design/consensus/consensus.md +++ b/3.x/en/docs/design/consensus/consensus.md @@ -6,57 +6,57 @@ Tag: "consensus" "" BFT "" ```eval_rst .. note:: - The implementation of the FISCO BCOS 3.x consensus module is located in the repository 'bcos-pbft `_ + The implementation of the FISCO BCOS 3.x consensus module is located in the repository 'bcos-pbft`_ ``` -In order to ensure the security and performance of the blockchain system, the current consortium blockchain system generally uses the Byzantine consensus algorithm.。However, since each block header of the chain blockchain system must contain the hash of the parent block and the execution result of the current block, on the one hand, the block consensus must be carried out serially.(That is, the consensus on the Nth block must be reached at the beginning of the consensus on the Nth block.+Completed before 1 block)On the other hand, blockchain consensus is tightly coupled with block execution and submission (that is, the entire block consensus process must include block execution and block submission steps), and block execution cannot be performed in parallel during low CPU usage such as block packaging and broadcast consensus message packets, which seriously reduces the efficiency of system resources and reduces the performance of the blockchain system.。 +In order to ensure the security and performance of the blockchain system, the
current consortium blockchain system generally uses a Byzantine consensus algorithm。However, since each block header of a chained blockchain system must contain the hash of the parent block and the execution result of the current block, on the one hand, block consensus must be carried out serially (that is, consensus on block N must be completed before consensus on block N+1 begins); on the other hand, blockchain consensus is tightly coupled with block execution and submission (that is, the entire block consensus process must include block execution and block submission steps), and block execution cannot be performed in parallel with low-CPU-usage phases such as block packaging and broadcasting consensus message packets, which seriously reduces the efficiency of system resources and the performance of the blockchain system。 -In order to solve the performance problem of the serial consensus of the current blockchain system, FISCO BCOS 3.x proposes a two-stage parallel Byzantine consensus algorithm, which divides the consensus of the blockchain system into two stages: the consensus of the block batch parallel sorting and the consensus of the block execution result pipeline, and the two stages can be carried out in parallel.。Block batch parallel sorting consensus and block execution result pipeline consensus both support parallel consensus on multiple blocks, thus improving blockchain throughput。 +In order to solve the performance problem of the serial consensus of current blockchain systems, FISCO BCOS 3.x proposes a two-stage parallel Byzantine consensus algorithm, which divides the consensus of the blockchain system into two stages: block batch parallel sorting consensus and block execution result pipeline consensus, and the two stages can be carried out in parallel。Block batch parallel sorting consensus and block execution result pipeline consensus both support parallel consensus on
multiple blocks, thus improving blockchain throughput。 -The block batch parallel sorting consensus is responsible for sorting the transactions received in the transaction pool and generating unexecuted and sorted blocks in parallel.;Block Execution Results Pipeline Consensus Perform pipeline consensus on block execution results and submit blocks with successful consensus。 +The block batch parallel sorting consensus is responsible for sorting the transactions received in the transaction pool and generating sorted, unexecuted blocks in parallel;the block execution result pipeline consensus performs pipeline consensus on block execution results and submits blocks whose consensus succeeds。 -The two stages of block batch parallel sorting consensus and block execution result pipeline consensus can be carried out in parallel.;And both stages can agree on multiple blocks at the same time. +The two stages of block batch parallel sorting consensus and block execution result pipeline consensus can be carried out in parallel;and both stages can agree on multiple blocks at the same time -The following figure shows the architecture of the two-stage parallel Byzantine consensus algorithm. The whole system mainly includes two parts: block batch sorting parallel consensus and block execution result pipeline consensus. +The following figure shows the architecture of the two-stage parallel Byzantine consensus algorithm. The whole system mainly includes two parts: block batch sorting parallel consensus and block execution result pipeline consensus ![](../../../images/design/consensus_design.png) -## 1. Block batch parallel sorting consensus process. +## 1. Block batch parallel sorting consensus process -In this phase, the PBFT consensus algorithm is used to package the transactions in the transaction pool into multiple blocks and agree on these blocks in parallel to produce sorted, unexecuted blocks, with the current block height of the blockchain set at h.
+In this phase, the PBFT consensus algorithm is used to package the transactions in the transaction pool into multiple blocks and agree on these blocks in parallel to produce sorted, unexecuted blocks, with the current block height of the blockchain set at h -(1) Leader packages several blocks from the trading pool, set to '{block(i), block(i+1), …, block(i+N)}', and place these blocks into the consensus pre-preparation pre-Prepare message packet, resulting in '{PrePrepare(i), PrePrepare(i+1), …, PrePrepare(i+N)}', each prepared message packet contains the packet index, view information, packaged chunks, and the packet signature, that is,' PrePrepare(i) = {i, view, block(i), sig}`; +(1) The leader packages several blocks from the transaction pool, denoted '{block(i), block(i+1), …, block(i+N)}', and places these blocks into consensus pre-prepare message packets, resulting in '{PrePrepare(i), PrePrepare(i+1), …, PrePrepare(i+N)}'; each pre-prepare message packet contains the packet index, view information, the packaged block, and the packet signature, that is, 'PrePrepare(i) = {i, view, block(i), sig}`; (2) The leader broadcasts the generated multiple pre-prepare message packets to all other consensus nodes at the same time; after other consensus nodes receive the consensus pre-prepare message packet 'PrePrepare(i) = {i, view, block(i), sig}', they verify the validity of the message packet; the main verification includes: - Verify whether the pre-prepare message packet has already been received locally -- Verify the validity of the message package index i: i must be greater than the current blockchain height h and less than '(h + waterMarkLimit)',' waterMarkLimit 'is a parameter used to limit the number of blocks that can be agreed at the same time to ensure the stability of the blockchain system; +- Verify the validity of the message packet index i: i must be greater than the current height h of the blockchain and less than '(h + waterMarkLimit)', where 'waterMarkLimit' is a parameter used to
limit the number of blocks that can be agreed at the same time to ensure the stability of the blockchain system; -- Verify the validity of the view 'view': the view must not be less than the view of the current node; +-Verify the validity of the view'view ': the view must not be less than the view of the current node; - Verify the validity of the signature 'sig': calculate the hash of the index, view, and 'view'(i, view, block(i))', and use the hash as plaintext, take out the Leader's public key to verify the validity of the signature' sig '; (3) After the other nodes verify the success of the prepared message package, add it to the local cache and broadcast the 'Prepare' message package to all other nodes(i) = {i, view, blockHash(i), sig}', where i is the index of the message packet, which corresponds to the prepared message packet index one by one, view is the node view when the message packet is sent,' blockHash(i)'is the hash of the block contained in the received prepared message packet, and 'sig' is the node pairs'{i, view, blockHash(i)}'s signature; (4) Other nodes receive Prepare message package 'Prepare(i) = {i, view, blockHash(i), sig}After that, verify the validity of the message packet, the verification steps include: -- Verify the validity of Prepare message package index i: index i must be greater than the current block height h of the blockchain and less than '(h + waterMarkLimit)` +-Verify the validity of Prepare message packet index i: index i must be greater than the current block height h of the blockchain and less than '(h + waterMarkLimit)` -- Verify the validity of the view: the view must not be less than the current view of the blockchain node +- Verify the validity of the view view: the view must not be less than the current view of the blockchain node -- Verify the validity of the signature sig: calculate the message package hash(i)=hash(i, view, blockHash(i))'and with 'hash(i)'As clear text, use the public key of the message packet sending 
node to verify the validity of the signature sig. +- Verify the validity of the signature sig: calculate the message package hash(i)=hash(i, view, blockHash(i))'and with 'hash(i)'As clear text, use the public key of the message packet sending node to verify the validity of the signature sig -(5) If the Prepare message packet passes the verification, the node places the message packet in the local cache. After the node collects two-thirds of the Prepare message packets, the node enters the pre-Commit stage, broadcast the commit message package to all other nodes CommitReq(i) = {i, view, blockHash(i), sig}where i is the index of the packet, view is the current view, blockHash(i)is the hash of block i, and sig is the signature of the packet; +(5) If the Prepare message package passes the verification, the node places the message package in the local cache. After the node collects two-thirds of the Prepare message packages, the node enters the pre-commit phase and broadcasts the commit message package to all other nodes. 
CommitReq(i) = {i, view, blockHash(i), sig}where i is the index of the packet, view is the current view, blockHash(i)is the hash of block i, and sig is the signature of the packet; (6) The other node received the commit message package 'CommitReq(i) = {i, view, blockHash(i), sig}'After that, verify the validity of the message package, the verification steps include: -- Verify the validity of i of the submitted message packet index: index i must be greater than the current block height h of the blockchain and less than '(h + waterMarkLimit)` +-Verify the validity of i submitting the message package index: the index i must be greater than the current block height h of the blockchain and less than '(h + waterMarkLimit)` - Verify the validity of the view: 'view' must not be less than the current view of the blockchain node -- Verify the validity of the signature sig: calculate the message package hash(i)=hash(i, view, blockHash(i))'and with 'hash(i)'As clear text, use the public key of the message packet sending node to verify the validity of the signature sig. +- Verify the validity of the signature sig: calculate the message package hash(i)=hash(i, view, blockHash(i))'and with 'hash(i)'As clear text, use the public key of the message packet sending node to verify the validity of the signature sig -(7) When submitting a message package CommitReq(i)After the verification is passed, the node adds the message packet to the local cache. When the node collects two-thirds of the submitted message packets, it takes out the block from the preprocessed message packet.(i)and commit it to the store。 +(7) When submitting a message package CommitReq(i)After the verification is passed, the node adds the message packet to the local cache. 
When the node collects two-thirds of the submitted message packets, it takes out the block from the preprocessed message packet(i)and commit it to the store。 Repeat steps 2 to 7 for all preprocessed message packets generated in step 1 to complete the parallel sorting consensus of N blocks。 @@ -64,14 +64,14 @@ Repeat steps 2 to 7 for all preprocessed message packets generated in step 1 to The parallel sorting consensus generates N deterministic blocks that are put into the block queue, denoted as' BlockQueue ={block(i), block(i+1),…,block(i+N)}', the second stage consensus continuously takes out the unexecuted blocks from the block queue for execution, and carries out pipeline consensus on the block execution results, the specific steps are as follows: -- The consensus engine fetches unexecuted blocks from the block queue, denoted as' block(i), and put it into the execution engine for execution, and the status of the execution result is recorded as'checkPoint(i)', its corresponding hash is recorded as' checkPointHash(i)` +- The consensus engine fetches unexecuted blocks from the block queue, which is recorded as' block(i), and put it into the execution engine for execution, and the status of the execution result is recorded as'checkPoint(i)', its corresponding hash is recorded as' checkPointHash(i)` - After the block is executed, the node generates a checkpoint message package 'CheckPointReq(i) = {i, checkPointHash(i), sig}', where i is the block height,' checkPointHash(i)'is the hash of the block execution result, sig is the signature of the message packet, and broadcasts the checkpoint message packet to all nodes -- Checkpoint Message Packet 'CheckPointReq received by another node(i) = {i, checkPointHash(i), sig}', verify the validity of the signature, and if the signature verification passes, the message package is placed in the local cache.; +- Other node received checkpoint message packet 'CheckPointReq(i) = {i, checkPointHash(i), sig}', verify the validity of 
the signature, and if the signature verification passes, the message package is placed in the local cache; -- When a node collects two-thirds of the checkpoint packets from different consensus nodes that have the same local execution results, it is considered that all consensus nodes have the block execution results' checkPointHash(i)'If an agreement is reached, the result status will be executed 'checkPoint(i)'Submit to storage and update blockchain status to latest。 +-When a node collects two-thirds of the checkpoint packets from different consensus nodes that have the same local execution results, it is considered that all consensus nodes have the block execution results' checkPointHash(i)'If an agreement is reached, the result status will be executed 'checkPoint(i)'Submit to storage and update blockchain status to latest。 When the block'block(i)'After execution, block '(i+1)'can be block-based '(i)The state of 'is executed and the block'(i)'The executed block hash is used as the parent block hash to generate a new execution result 'checkPoint(i+1)'and continue to repeat the above steps based on the block execution results for the first(i+1)Consensus on block execution results。 -In addition, while sorting the block batch consensus, the block execution result pipeline consensus can be carried out in parallel, optimizing the efficiency of system resource utilization and improving the consensus performance of the blockchain system.。The two-stage parallel Byzantine consensus requires that the minimum number of nodes is not less than 4 nodes. 
At the same time, in order to ensure performance, it is recommended that no more than 100 nodes be used.。 +In addition, while the block batch sorting consensus is running, the block execution result pipeline consensus can be carried out in parallel, optimizing the efficiency of system resource utilization and improving the consensus performance of the blockchain system。The two-stage parallel Byzantine consensus requires a minimum of 4 nodes. At the same time, in order to ensure performance, it is recommended that no more than 100 nodes be used。 diff --git a/3.x/en/docs/design/consensus/index.rst b/3.x/en/docs/design/consensus/index.rst new file mode 100644 index 000000000..155828e64 --- /dev/null +++ b/3.x/en/docs/design/consensus/index.rst @@ -0,0 +1,35 @@ +############################################################## +4. Consensus Algorithm +############################################################## + +Tags: "consensus algorithm" "design scheme" "extensible consensus framework" + +---- + +A blockchain system guarantees consistency through its consensus algorithm。 +In theory, consensus is the process of reaching agreement on a proposal(proposal); the meaning of a proposal in a distributed system is very broad, including the order in which events occur, who is the leader, etc。In a blockchain system, consensus is the process by which consensus nodes agree on the results of transaction execution。 + + +**Consensus algorithm classification** + +According to whether or not they tolerate 'Byzantine errors'_, consensus algorithms can be classified into crash fault tolerance(Crash Fault Tolerance, CFT)algorithms and Byzantine fault tolerance(Byzantine Fault Tolerance, BFT)algorithms: + +- **CFT class algorithm**: common fault-tolerant algorithms; when common failures such as network faults, disk failures or server downtime occur, the system can still reach consensus on a proposal. Classic algorithms include Paxos, Raft, etc. This kind of algorithm
has better performance and faster processing, and can tolerate failure of no more than half of the nodes;
+- **BFT algorithms**: Byzantine fault-tolerant algorithms that, in addition to ordinary failures during consensus, also tolerate Byzantine errors such as deliberate deception by some nodes (e.g., falsifying transaction execution results); classic algorithms include PBFT, etc. These algorithms have lower performance and can tolerate no more than one third of the nodes being faulty.
+
+
+**FISCO BCOS consensus algorithm**
+
+FISCO BCOS implements pluggable consensus algorithms on top of its multi-group architecture: different groups can run different consensus algorithms, and the consensus processes of different groups do not affect each other. FISCO BCOS currently supports two consensus algorithms, PBFT (Practical Byzantine Fault Tolerance) and Raft (Replication and Fault Tolerant):
+
+- **PBFT consensus algorithm**: a BFT algorithm that can tolerate no more than one third of the nodes being faulty or malicious, and achieves final consistency;
+
+
+
+.. toctree::
+   :maxdepth: 1
+
+   pbft.md
+   raft.md
+   rpbft.md
+   consensus.md
diff --git a/3.x/en/docs/design/consensus/pbft.md b/3.x/en/docs/design/consensus/pbft.md
index ed5fad0a2..eebe749f0 100644
--- a/3.x/en/docs/design/consensus/pbft.md
+++ b/3.x/en/docs/design/consensus/pbft.md
@@ -4,7 +4,7 @@ Tags: "PBFT" "Consensus Algorithm" "Design Scheme"
----
-**PBFT**(Practical Byzantine Fault Tolerance)Consensus algorithms can do evil at a few nodes(such as forged messages)In the scenario, it uses cryptographic algorithms such as signature, signature verification, and hash to ensure tamper-proof, forgery-proof, and non-repudiation in the messaging process, and optimizes the work of previous people to reduce the complexity of the Byzantine fault-tolerant algorithm from the exponential level to the polynomial level.(3\*f+1)In a system of nodes, as long as there are not less than(2\*f+1)If a non-malicious node works properly, the system can achieve consistency, e.g., a 7-node system allows 2 nodes to have Byzantine errors。
+**PBFT** (Practical Byzantine Fault Tolerance) is a consensus algorithm for scenarios in which a minority of nodes may act maliciously (e.g., forge messages). It uses cryptographic algorithms such as signing, signature verification, and hashing to ensure that message passing is tamper-proof, forgery-proof, and non-repudiable, and it optimizes earlier work to reduce the complexity of the Byzantine fault-tolerant algorithm from exponential to polynomial: in a system of (3\*f+1) nodes, as long as no fewer than (2\*f+1) non-malicious nodes work properly, the system can reach consistency; e.g., a 7-node system allows 2 nodes to have Byzantine errors.
FISCO BCOS blockchain system implements PBFT consensus algorithm。
@@ -14,21 +14,21 @@ Node type, node ID, node index and view are key concepts of PBFT consensus algor
### 1.1 Node type
-- **Leader/Primary**: Consensus node, responsible for packaging transactions into blocks and block
consensus, each round of consensus process has and only one leader, in order to prevent the leader from forging blocks, after each round of PBFT consensus, will switch the leader.;
+- **Leader/Primary**: consensus node responsible for packaging transactions into blocks and for block consensus; each round of consensus has one and only one Leader, and to prevent the Leader from forging blocks, the Leader is switched after each round of PBFT consensus;
- **Replica**: Replica node, which is responsible for block consensus. There are multiple Replica nodes in each round of consensus. The process of each Replica node is similar;
-- **Observer**: The observer node, which is responsible for obtaining the latest block from the consensus node or the replica node, and after executing and verifying the block execution result, the resulting block is on the chain.。
+- **Observer**: observer node, responsible for obtaining the latest block from consensus nodes or replica nodes, executing it, verifying the execution result, and then putting the resulting block on the chain.
where Leader and Replica are collectively referred to as consensus nodes。
### 1.2 Node ID & Node Index
-In order to prevent nodes from doing evil, each consensus node in the PBFT consensus process signs the messages it sends and checks the signatures of the received message packets, so each node maintains a public-private key pair, the private key is used to sign the messages it sends, and the public key is used as the node ID to identify and check the signatures.。
+To prevent nodes from acting maliciously, each consensus node in the PBFT process signs the messages it sends and verifies the signatures of received message packets; each node therefore maintains a public-private key pair, where the private key signs outgoing messages and the public key serves as the node ID used to identify the node and verify signatures.
-> **Node ID** : Consensus node signature public key and
consensus node unique identifier, usually a 64-byte binary string, other nodes use the node ID of the message packet sender to verify the message packet.
+> **Node ID**: the consensus node's signature public key and unique identifier, usually a 64-byte binary string; other nodes use the node ID of a message packet's sender to verify the message packet
-Considering that the node ID is very long, including this field in the consensus message will consume part of the network bandwidth, FISCO BCOS introduces a node index, each consensus node maintains a public consensus node list, and the node index records the position of each consensus node ID in this list.:
+Considering that the node ID is long and carrying this field in every consensus message would consume network bandwidth, FISCO BCOS introduces the node index: each consensus node maintains a public consensus node list, and the node index records the position of each consensus node ID in this list:
> **node index** : The position of each consensus node ID in this list of common node IDs
@@ -49,21 +49,21 @@ The following figure simply shows' 4(3*f+1, f=1)'Node FISCO BCOS system, third n
- The first three rounds of consensus: node0, node1, and node2 are leaders, and the number of non-malicious nodes is equal to '2*f+1 ', node normal out of block consensus;
-- Fourth round of consensus: node3 is a leader, but node3 is a Byzantine node, node0-node2 did not receive the node3 packed blocks within the given time, triggering a view switch and attempting to switch to 'view _ new = view+1 'of the new view, and broadcast the viewchange package to each other, the nodes are collected all over the view' view _ new'(2*f+1)After 'viewchange package', switch your view to 'view _ new' and calculate a new leader;
+- The fourth round of consensus: node3 is the Leader, but node3 is a Byzantine node; node0-node2 do not receive the block packaged by node3 within the given time, which triggers a view change: each node tries to switch to the new view `view_new = view+1` and broadcasts a ViewChange packet to the others, and once a node has collected (2*f+1) ViewChange packets whose view equals `view_new`, it switches its own view to `view_new` and computes the new Leader;
-- For the fifth round of consensus: node0 is leader, continue to package blocks。
+- The fifth round of consensus: node0 is the Leader and continues to package blocks.
### 1.4 Consensus Message
The PBFT module mainly includes four consensus messages: **PrepareReq, SignReq, CommitReq, and ViewChangeReq**:
-- **PrepareReqPacket**: A request packet containing blocks, which is generated by the leader and broadcast to all Replica nodes. After receiving the Prepare packet, the Replica node verifies the PrepareReq signature, executes the block, and caches the block execution result to prevent the Byzantine node from doing evil and ensure the finality of the block execution result.;
+- **PrepareReqPacket**: a request packet containing a block, generated by the Leader and broadcast to all Replica nodes; after receiving the Prepare packet, a Replica node verifies the PrepareReq signature, executes the block, and caches the block execution result, preventing Byzantine nodes from acting maliciously and ensuring the finality of the block execution result;
-- **SignReqPacket**: The signature request with the block execution result, which is generated by the consensus node that has received the Prepare packet and executed the block.
The SignReq request contains the hash of the block after execution and the signature of the hash, which are respectively recorded as SignReq.block _ hash and SignReq.sig.(That is, the block execution result)consensus;
+- **SignReqPacket**: a signature request carrying the block execution result, generated by a consensus node after it has received the Prepare packet and executed the block; the SignReq request contains the hash of the executed block and the signature over that hash, recorded as SignReq.block_hash and SignReq.sig respectively, and is used to reach consensus on the block execution result (that is, on block_hash);
-- **CommitReqPacket**: The submission request used to confirm the block execution result, which is collected by the full '(2*f+1)'A block _ hash is generated from nodes that have the same SignReq request from different nodes. CommitReq is broadcast to all other consensus nodes, and the other nodes are fully collected'(2*f+1)'After the same block _ hash and CommitReq from different nodes, the latest block cached by the local node is linked.;
+- **CommitReqPacket**: a commit request used to confirm the block execution result, generated once a node has collected (2*f+1) SignReq packets with the same block_hash from different nodes; CommitReq is broadcast to all other consensus nodes, and once a node has collected (2*f+1) CommitReq packets with the same block_hash from different nodes, it puts the latest locally cached block on the chain;
-- **ViewChangeReqPacket**: View switching request, when leader cannot provide normal service(Such as abnormal network connection, server downtime, etc.)The other consensus node will actively trigger the view switch, with the view that the node is about to switch to in ViewChangeReq.(toView, plus one for the current view), a node collects full(2*f+1)After 'views equal toView, ViewChangeReq from different nodes, the current view is switched to toView。
+- **ViewChangeReqPacket**: a view-change request; when the Leader cannot provide normal service (e.g., abnormal network connection or server downtime), the other consensus nodes actively trigger a view change, carrying in ViewChangeReq the view they intend to switch to (toView, the current view plus one); once a node has collected (2*f+1) ViewChangeReq packets from different nodes whose view equals toView, its current view is switched to toView.
These four types of message packets contain roughly the same fields, which are common to all message packets as follows:
@@ -72,7 +72,7 @@ These four types of message packets contain roughly the same fields, which are c
| Field Name| Field Meaning|
| idx | Current Node Index|
| packetType | Request Package Type(Includes PrepareReqPacket / SignReqPacket / CommitReqPacket / ViewChangeReqPacket) |
-| height | Height of blocks currently being processed(Generally, the height of the local block is increased by one.) |
+| height | Height of the block currently being processed (generally the local block height plus one) |
| blockHash | Hash of block currently being processed|
| view | View in which the current node is located|
| sig | Signature of the current node to blockHash|
@@ -91,13 +91,13 @@ The system framework is as follows:
PBFT consensus mainly includes two threads:
-- PBFTSealer: The PBFT package thread, which is responsible for fetching transactions from the transaction pool and encapsulating the packaged blocks into PBFT Prepare packages, which are handed over to PBFTEngine for processing.;
+- PBFTSealer: the PBFT sealing thread, responsible for fetching transactions from the transaction pool, packaging them into blocks, encapsulating each packaged block into a PBFT Prepare packet, and handing it to PBFTEngine for processing;
-- PBFTEngine: PBFT consensus thread, receiving PBFT consensus message packets from PBFTSealer or P2P network, block verifier(Blockverifier)Responsible for starting the block execution, completing the consensus process, writing the consensus
block to the blockchain, and deleting the transactions that have been chained from the transaction pool after the block is chained.。
+- PBFTEngine: the PBFT consensus thread, which receives PBFT consensus message packets from PBFTSealer or from the P2P network, invokes the block verifier (Blockverifier) to start block execution, completes the consensus process, writes consensus blocks to the blockchain, and deletes the on-chain transactions from the transaction pool after a block is chained.
## 3. Core processes
-PBFT consensus mainly includes Pre-Prepare, Prepare, and Commit in three stages:
+PBFT consensus mainly includes three stages: Pre-prepare, Prepare, and Commit:
- **Pre-prepare**: Responsible for executing blocks, generating signature packets, and broadcasting the signature packets to all consensus nodes;
- **Prepare**: responsible for collecting signature packets; once a node has collected a full 2*f+1 signature packets, it has reached the state in which blocks can be committed and starts broadcasting the Commit packet;
@@ -121,19 +121,19 @@ After the node calculates that the current leader index is the same as its own i
- **Package transactions from a transaction pool**: After generating a new empty block, get the transaction from the transaction pool and insert the obtained transaction into the generated new block;
-- **Assemble new block**: After the Sealer thread is packaged into the transaction, the packager of the new block(Sealer Field)Set up your own index and calculate the transactionRoot for all transactions based on the packaged transactions.;
+- **Assemble new block**: after the Sealer thread has packaged the transactions, it sets the packager of the new block (the Sealer field) to its own index and calculates the transactionRoot over all packaged transactions;
- **Generate Prepare package**: Encode the assembled new block into the Prepare packet and broadcast it to all consensus nodes in the group through the PBFTEngine thread.
After other consensus nodes receive the Prepare packet, they start the three-stage consensus。
-### 3.2 pre-Prepare phase
+### 3.2 Pre-prepare Phase
-After receiving the Prepare packet, the consensus node enters the pre-Prepare phase, the main workflow of this phase includes:
+After receiving the Prepare package, the consensus node enters the pre-prepare phase. The main workflow in this phase includes:
-- **Prepare package legality judgment**It is mainly used to determine whether the Prepare package is a duplicate and whether the parent hash of the block included in the Prepare request is the highest block hash of the current node.(Prevention of bifurcation)Is the block height of the block included in the Prepare request equal to the highest block height plus one;
+- **Prepare packet validity check**: mainly checks whether the Prepare packet is a duplicate, whether the parent hash of the block in the Prepare request equals the current node's highest block hash (to prevent forks), and whether the height of the block in the Prepare request equals the highest block height plus one;
-- **Caching legal Prepare packages**: If the Prepare request is valid, it is cached locally to filter duplicate Prepare requests.;
+- **Caching legal Prepare packages**: If the Prepare request is valid, it is cached locally to filter duplicate Prepare requests;
-- **Empty Block Judgment**If the number of transactions in the block included in the Prepare request is 0, the empty block view switch is triggered, the current view is increased by one, and the view switch request is broadcast to all other nodes.;
+- **Empty block check**: if the number of transactions in the block included in the Prepare request is 0, an empty-block view change is triggered: the current view is increased by one and a view-change request is broadcast to all other nodes;
- **Execute blocks and cache block execution results**: If the number of transactions in the
block included in the Prepare request is greater than 0, the executor is called to execute the block and the executed block is cached;
@@ -143,24 +143,24 @@ After receiving the Prepare packet, the consensus node enters the pre-Prepare ph
After the consensus node receives the signed package, it enters the Prepare phase. The main workflow of this phase includes:
-- **Signature Package Legality Judgment**The hash and pre of the signature package are mainly determined.-The block hashes of the cache after execution in the prepare phase are the same. If they are not the same, continue to determine whether the request is a future block signature request.(The future block is generated because the processing performance of this node is lower than that of other nodes, and the previous round of consensus is still in progress. The condition for determining the future block is that the height field of the signature package is greater than the highest local block plus one)If the request is not a future block, it is an illegal signature request, and the node directly rejects the signature request.;
+- **Signature packet validity check**: mainly checks whether the hash in the signature packet matches the hash of the executed block cached in the pre-prepare phase; if not, the node then checks whether the request is a future-block signature request (a future block arises when this node's processing performance is lower than that of other nodes and its previous round of consensus is still in progress; the criterion for a future block is that the height field of the signature packet is greater than the local highest block height plus one); if the request is not a future block, it is an illegal signature request and the node rejects it directly;
- **Caching Legally Signed Packages**: The node caches valid signed packages;
-- **Judge pre-Whether the signature packet cache corresponding to the block cached in the prepare phase reaches' 2*f+1 ', if the collected full signature package, broadcast Commit package**: if pre-The number of signature packets corresponding to the block hash cached in the prepare phase exceeds' 2*f+1 ', it means that most nodes have executed the block, and the execution results are consistent, indicating that the node has reached the state where the block can be submitted, and starts to broadcast the Commit package;
+- **Check whether the signature packets for the block cached in the pre-prepare phase reach 2*f+1; if enough signature packets have been collected, broadcast the Commit packet**: if the number of signature packets matching the block hash cached in the pre-prepare phase reaches 2*f+1, most nodes have executed the block and their execution results are consistent, so the node has reached the state in which the block can be submitted and starts broadcasting the Commit packet;
-- **If a full signature package is collected, back up the pre-Prepare Cached Prepare Package in Prepare Phase**To prevent Commit phase blocks from exceeding '2 before they are committed to the database*f+1 'node down, these nodes start again after the block, resulting in blockchain fork(The latest blocks of the remaining nodes are different from the highest blocks of these nodes)You also need to back up the pre-The prepare package cached in the prepare phase is transferred to the database.
After the node is restarted, the backup prepare package is processed first。
+- **If a full set of signature packets is collected, back up the Prepare packet cached in the pre-prepare phase**: during the Commit phase, before a block is committed to the database, more than 2*f+1 nodes may go down and then produce the block again after restarting, causing a blockchain fork (the latest blocks of the remaining nodes differ from the highest blocks of these nodes); to prevent this, the Prepare packet cached in the pre-prepare phase is also backed up to the database, and after a node restarts it processes the backed-up Prepare packet first.
### 3.4 Commit Phase
After receiving the Commit packet, the consensus node enters the Commit phase. The workflow of this phase includes:
-- **Commit Package Legality Judgment**The hash and pre of the Commit package are mainly judged.-The block hashes of the cache after execution in the prepare phase are the same. If they are not the same, continue to determine whether the request is a future block commit request.(The future block is generated because the processing performance of this node is lower than that of other nodes, and the previous round of consensus is still in progress. The condition for determining the future block is that the height field of the Commit is greater than the highest local block plus one)If the request is not a future block, it is an illegal Commit request. The node directly rejects the request.;
+- **Commit packet validity check**: mainly checks whether the hash in the Commit packet matches the hash of the executed block cached in the pre-prepare phase; if not, the node then checks whether the request is a future-block Commit request (a future block arises when this node's processing performance is lower than that of other nodes and its previous round of consensus is still in progress; the criterion for a future block is that the height field of the Commit packet is greater than the local highest block height plus one); if the request is not a future block, it is an illegal Commit request and the node rejects it directly;
- **Caching legitimate Commit packages**The node caches the valid Commit package;
-- **Judge pre-Does the Commit package cache corresponding to the block cached in the prepare phase reach '2*f+1 ', if full Commit packets are collected, the new block is dropped**: if pre-The number of Commit requests corresponding to the block hash cached in the prepare phase exceeds' 2*f+1 ', it means that most nodes have reached the committable block state, and the execution results are consistent, then call the Blockchain module to pre-Write cached blocks to the database during the prepare phase;
+- **Check whether the Commit packets for the block cached in the pre-prepare phase reach 2*f+1; if enough Commit packets have been collected, commit the new block to disk**: if the number of Commit requests matching the block hash cached in the pre-prepare phase reaches 2*f+1, most nodes have reached the committable state and their execution results are consistent, so the Blockchain module is called to write the block cached in the pre-prepare phase to the database;
### 3.5 View switching processing flow
diff --git a/3.x/en/docs/design/consensus/raft.md b/3.x/en/docs/design/consensus/raft.md
index 66a70757d..7e688a35b 100644
--- a/3.x/en/docs/design/consensus/raft.md
+++ b/3.x/en/docs/design/consensus/raft.md
@@ -6,20 +6,20 @@ Tags: "Raft" "Consensus Algorithm" "Design Scheme"
----
## 1 Noun explanation
### 1.1 Raft
-Raft (Replication and Fault Tolerant) is a consistency protocol that allows network partitioning (Partition Tolerant), which ensures that there are(N+1)/ 2 (Round up) The consistency of the system when the nodes are working properly.
For example, in a five-node system, two nodes are allowed to have non-Byzantine errors, such as node downtime, network partitioning, and message latency.。Raft is easier to understand than Paxos and has been proven to provide the same fault tolerance and performance as Paxos.(https://raft.github.io/)and [dynamic presentation](http://thesecretlivesofdata.com/raft/)。
+Raft (Replication and Fault Tolerant) is a consistency protocol that tolerates network partitions (Partition Tolerant): it guarantees system consistency as long as at least (N+1)/2 (rounded up) of the nodes work properly. For example, a five-node system allows two nodes to have non-Byzantine errors such as node downtime, network partitioning, or message latency. Raft is easier to understand than Paxos and has been proven to provide the same fault tolerance and performance as Paxos (https://raft.github.io/); see also the [dynamic presentation](http://thesecretlivesofdata.com/raft/).
### 1.2 Node type
In the Raft algorithm, each network node can only have one of the following three identities: **Leader**, **Follower**, and **Candidate**, where:
-* **Leader**: Mainly responsible for interacting with the outside world, elected by the Follower node, in each consensus process there is and only one Leader node, the Leader is fully responsible for taking out transactions from the transaction pool, packaging transactions to form blocks and putting the block on the chain.;
+* **Leader**: mainly responsible for interacting with the outside world; elected by the Follower nodes. In each consensus process there is one and only one Leader node, which is fully responsible for taking transactions out of the transaction pool, packaging them into blocks, and putting the blocks on the chain;
* **Follower**: Synchronize based on the Leader node, and hold an election to select a new Leader node when the Leader node expires;
* **Candidate**The temporary identity that the follower node has when running for
Leader。
### 1.3 Node ID & Node Index
-In the Raft algorithm, each network node will have a fixed and globally unique ID used to indicate the identity of the node (usually a 64-byte number), which is called the node ID;At the same time, each consensus node also maintains a public consensus node list, which records the ID of each consensus node, and its position in this list is called the node index.。
+In the Raft algorithm, each network node has a fixed, globally unique ID indicating its identity (usually a 64-byte number), called the node ID; at the same time, each consensus node maintains a public consensus node list recording the ID of every consensus node, and a node's position in this list is called the node index.
### 1.4 Term of office
-The Raft algorithm divides time into tenure terms of indeterminate length, where Terms are consecutive numbers。Each term starts with an election, and if the election is successful, the current leader is responsible for the block, and if the election fails and a new single leader is not elected, a new term is opened and the election is restarted.。
+The Raft algorithm divides time into terms of indeterminate length, where Terms are numbered consecutively. Each term starts with an election; if the election succeeds, the current Leader is responsible for producing blocks, and if the election fails and no single new Leader is elected, a new term begins and the election restarts.
![.](../../../images/consensus/raft_terms.png)
@@ -27,7 +27,7 @@ The Raft algorithm divides time into tenure terms of indeterminate length, where
In the Raft algorithm, each network node communicates by sending messages, and the current Raft module includes four messages: **VoteReq**, **VoteResp**, **Heartbeat**, and **HeartbeatResp**, where:
* **VoteReq**: Vote request, which is actively sent by the Candidate node and used to request votes from other nodes in the network to run for the Leader;
* **VoteResp**: The
voting response, which is used to respond to the voting request after the node receives the voting request, and the response content is to agree or reject the voting request;
-* **Heartbeat**: Heartbeat, which is sent out by the leader node actively and has two functions:(1) Used to maintain the identity of the leader node. As long as the leader can always send heartbeats and other nodes respond, the leader identity will not change.;(2) Block data replication. When the leader node successfully packages a block, it encodes the block data into a heartbeat to broadcast the block. After receiving the heartbeat, other nodes decode the block data and put the block into their own buffer;
+* **Heartbeat**: heartbeat, sent out actively by the Leader node, with two functions: (1) maintaining the Leader's identity: as long as the Leader keeps sending heartbeats and other nodes respond, the Leader identity does not change; (2) block data replication: when the Leader successfully packages a block, it encodes the block data into a heartbeat to broadcast the block, and after receiving the heartbeat other nodes decode the block data and put the block into their own buffers;
* **HeartbeatResp**: Heartbeat response. After the node receives the heartbeat, it is used to respond to the heartbeat. In particular, when a heartbeat containing block data is received, the hash of the block will be included in the heartbeat response;
The fields common to all messages are shown in the following table:
@@ -43,50 +43,50 @@ The fields specific to each message type are shown in the following table:
- < td > Message type < / td >
- < td > Field name < / td >
- < td > Field Meaning < / td >
+
+
+
- < td > Candidate's own node index < / td >
+
- < td > The Term of the last Leader that Candidate has seen. See Section 3.1.2 for details.
< / td > + - < td > The block height of the latest block that Candidate has seen, and its detailed function is shown in Section 3.1.2 < / td > + - < td > The response flag to the voting request, which is used to mark whether to agree to the voting request, and if it is rejected, the reason for the rejection will be specifically marked. See Section 3.1.2 for details. < / td > + - < td > The block height of the latest block seen by the node receiving VoteReq. For details, see Section 3.1.2 < / td > + - < td > The node index of the leader node that issued the heartbeat < / td > + - < td > When the Leader node is ready to submit a new block, it will first encode the block data into this field and broadcast it through the heartbeat. For details, see Section 3.2 < / td > + - < td > The block height corresponding to uncommitedBlock. See Section 3.2 for its detailed function < / td > + - < td > When receiving the uncommitedBlock data sent by the leader, the node writes the hash (fingerprint) corresponding to the uncommitedBlock in the heartbeat response and sends it back to the leader, indicating that the node has received the block data to be submitted by the leader and has written it to the local cache. For details, see Section 3.2 < / td > +
+<table>
+    <tr><td>Message type</td><td>Field name</td><td>Field meaning</td></tr>
+    <tr><td>VoteReq</td><td>candidate</td><td>Candidate's own node index</td></tr>
+    <tr><td></td><td>lastLeaderTerm</td><td>The Term of the last Leader that Candidate has seen; see Section 3.1.2 for details</td></tr>
+    <tr><td></td><td>lastBlockNumber</td><td>The block height of the latest block that Candidate has seen; see Section 3.1.2 for details</td></tr>
+    <tr><td>VoteResp</td><td>voteFlag</td><td>The response flag to the voting request, marking whether the request is approved; if it is rejected, the reason for the rejection is also recorded; see Section 3.1.2 for details</td></tr>
+    <tr><td></td><td>lastLeaderTerm</td><td>The block height of the latest block seen by the node receiving the VoteReq; see Section 3.1.2 for details</td></tr>
+    <tr><td>Heartbeat</td><td>leader</td><td>Node index of the Leader node that issued the heartbeat</td></tr>
+    <tr><td></td><td>uncommitedBlock</td><td>When the Leader node prepares to submit a new block, it first encodes the block data into this field and broadcasts it through the heartbeat; see Section 3.2 for details</td></tr>
+    <tr><td></td><td>uncommitedBlockNumber</td><td>The block height corresponding to uncommitedBlock; see Section 3.2 for details</td></tr>
+    <tr><td>HeartbeatResp</td><td>uncommitedBlockHash</td><td>When receiving the uncommitedBlock data sent by the Leader, the node writes the hash (fingerprint) of the uncommitedBlock into the heartbeat response and sends it back to the Leader, indicating that the node has received the block data to be submitted by the Leader and has written it to its local cache; see Section 3.2 for details</td></tr>
+</table>
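The block replication role of Heartbeat and HeartbeatResp described in the table can be sketched as follows. This is an illustrative Python model only (the actual FISCO BCOS Raft module is implemented in C++); it models just the fields from the table, and the `FollowerState` class and its `on_heartbeat` method are hypothetical names introduced for the sketch:

```python
from dataclasses import dataclass
from hashlib import sha256
from typing import Optional

@dataclass
class Heartbeat:
    term: int
    leader: int                       # node index of the Leader issuing the heartbeat
    uncommitedBlock: Optional[bytes]  # block data the Leader wants replicated, if any
    uncommitedBlockNumber: Optional[int]

@dataclass
class HeartbeatResp:
    term: int
    uncommitedBlockHash: Optional[bytes]  # hash echoed back to acknowledge receipt

class FollowerState:
    """Caches the Leader's uncommitted block and acknowledges it by hash."""

    def __init__(self) -> None:
        self.block_cache: dict[int, bytes] = {}

    def on_heartbeat(self, hb: Heartbeat) -> HeartbeatResp:
        block_hash = None
        if hb.uncommitedBlock is not None:
            # Write the pending block into the local cache, keyed by height,
            # and send its hash back so the Leader knows this node received it.
            self.block_cache[hb.uncommitedBlockNumber] = hb.uncommitedBlock
            block_hash = sha256(hb.uncommitedBlock).digest()
        return HeartbeatResp(term=hb.term, uncommitedBlockHash=block_hash)

follower = FollowerState()
resp = follower.on_heartbeat(
    Heartbeat(term=3, leader=0, uncommitedBlock=b"block-data", uncommitedBlockNumber=42)
)
```

Once the Leader has collected such acknowledgments from a majority, it can commit the block, which matches the two heartbeat functions (identity maintenance and block replication) listed above.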
## 3 Core processes

### 3.1 Node state transition

The transition relationships between node types are shown in the following figure, and each form of state transition is described in the following sections.

![](../../../images/consensus/raft_nodes_transfer.jpg)

#### 3.1.1 Leader election

The Raft consensus module uses a heartbeat mechanism to trigger Leader election. When a node starts, it automatically becomes a Follower and its Term is set to 0. As long as the Follower receives valid Heartbeat or RequestVote messages from the Leader or a Candidate, it remains in the Follower state. If it receives no such message within a period of time (called the ***Election Timeout***), it assumes the system's current Leader has failed, increments its own Term, converts to Candidate, and starts a new round of Leader election. The process is as follows:

1. The Follower increments the current Term and converts to Candidate;
2. The Candidate votes for itself and broadcasts RequestVote to the other nodes to request their votes;
3. The Candidate remains in the Candidate state until one of three situations occurs: (1) the node wins the election; (2) while waiting for votes, the Candidate receives a Heartbeat from another node; (3) the *Election Timeout* elapses without a Leader being elected. The Raft algorithm uses randomized timers to avoid split votes, ensuring that most of the time only one node times out into the Candidate state and obtains the votes of a majority of nodes to become Leader.

#### 3.1.2 Voting

After receiving a VoteReq message, a node chooses its response strategy based on the message content:

1. ***VoteReq's Term is less than or equal to its own Term***

   - If the node is a Leader, it rejects the voting request. On receiving this response, the Candidate abandons the election, reverts to Follower, and increases its voting timeout;
   - If the node is not a Leader:
     - If VoteReq's Term is less than its own Term, it rejects the vote request. If the Candidate receives such responses from more than half of the nodes, it knows it is outdated; it then abandons the election, reverts to Follower, and increases its voting timeout;
     - If VoteReq's Term equals its own Term, it rejects the vote request and does nothing further with it. Each node may vote for only one Candidate on a first-come, first-served basis, which ensures that at most one Candidate is elected Leader in each round.

2.
***VoteReq's lastLeaderTerm is less than its own lastLeaderTerm***

   Each node keeps a lastLeaderTerm field recording the Term of the last Leader the node has seen; lastLeaderTerm can only be updated by a Heartbeat. If the lastLeaderTerm in a VoteReq is less than the node's own lastLeaderTerm, the Leader evidently cannot reach the Candidate. If the Candidate is in an isolated network environment, it will keep raising voting requests, so its voting requests must be interrupted; the node therefore rejects the voting request.

3. ***VoteReq's lastBlockNumber is less than its own lastBlockNumber***

   Each node keeps a lastBlockNumber field recording the block height of the latest block the node has seen. During block production, block replication takes place between nodes (see Section 3.2 for details), so some nodes may receive newer block data than others, making lastBlockNumber inconsistent across nodes. For the system to reach agreement, nodes must vote for the nodes holding the newer data, so in this case the node rejects the vote request.

4. ***The node is voting for the first time***

   To prevent a Follower from re-initiating an election because of network jitter, a node voting for the first time directly rejects the voting request and sets its firstVote field to the Candidate's node index.

5. ***The voting request was not rejected in steps 1 to 4***

   Agree to the vote request.

#### 3.1.3 Heartbeat timeout

When the Leader becomes a network island, the Leader can still send heartbeats and the Followers can receive them, but the Leader cannot receive heartbeat responses. In this situation the Leader already has a network fault, but because its heartbeat packets keep going out, the Followers cannot switch state to start an election, and the system stalls. To avoid this, the module implements a heartbeat timeout mechanism: each time the Leader receives a heartbeat response it records it, and if the record is not updated within a period of time, the Leader gives up its leadership and reverts to a Follower.

### 3.2 Block replication

The Raft protocol relies heavily on the availability of the Leader node to guarantee cluster data consistency, because data can flow only from the Leader to the Followers. After the Raft Sealer submits block data to the cluster Leader, the Leader marks the data as uncommitted, then concurrently replicates it to all Follower nodes as additional data attached to the Heartbeat and waits for responses.
After ensuring that more than half of the nodes in the network have received the data, the Leader writes the block data to the underlying storage, and the block enters the committed state. The Leader node then broadcasts the block data to the other Follower nodes through the Sync module. The flowchart of block replication and submission is shown in the following figure:

```eval_rst
.. mermaid::

   sequenceDiagram
       Sealer->>Leader: Pack transactions into a block, then block itself
       Leader->>Follower: Encode the block as RLP and send it with the heartbeat packet
       Note right of Follower: Decode the heartbeat packet,<br/>and write the decoded block<br/>into the cache
       Follower->>Leader: Send ACK
       loop collect ACK
           Leader->>Leader: Check whether most nodes have received the block copy
       end
```

The conditions RaftSealer currently checks to decide whether transactions can be packed are: (1) whether the node is the Leader; (2) whether any peers have not yet completed synchronization; (3) whether the uncommitBlock field is empty. Packing is allowed only when all three conditions are met.

diff --git a/3.x/en/docs/design/consensus/rpbft.md b/3.x/en/docs/design/consensus/rpbft.md

Tags: "rPBFT" "consensus algorithm"

### POW class algorithms

The POW algorithm is not suitable for consortium-chain scenarios that require large transaction throughput and low transaction latency, due to the following characteristics:
- Low performance: one block every 10 minutes and transaction confirmation latency of about an hour, plus heavy power consumption
- No final consistency guarantee
- Low throughput

### Consensus algorithms based on distributed consistency

Consensus algorithms based on the principle of distributed consistency, such as the BFT and CFT families, offer second-level transaction confirmation latency, final consistency, high throughput, and no power consumption.

However, the complexity of these algorithms grows with the number of nodes, which limits the supportable network size and therefore greatly limits the node scale of a consortium chain.

For these reasons, FISCO BCOS v2.3.0 proposes the rPBFT consensus algorithm, which aims to minimize the impact of node scale on the consensus algorithm while preserving the high performance, high throughput, high consistency, and security of BFT-class consensus algorithms.

## rPBFT consensus algorithm

### Node types

- Consensus committee member: a node that executes the PBFT consensus process and takes turns producing blocks
- Validation node: does not execute the consensus process; it verifies that consensus nodes are legal and validates blocks, and after several rounds of consensus it is rotated into the consensus committee

### Core idea

The rPBFT algorithm selects only a subset of consensus nodes for each round of the consensus process and periodically replaces consensus nodes by block height to ensure system security. It involves two system parameters:

- `epoch_sealer_num`: the number of nodes participating in each round of consensus. This parameter can be configured dynamically by sending transactions from the console
- `epoch_block_num`: the consensus node replacement period. To prevent the selected consensus nodes from colluding, rPBFT replaces one consensus node every `epoch_block_num` blocks.
This parameter can also be configured dynamically by issuing transactions from the console.

These two configuration items are recorded in the system configuration table, which mainly includes three fields: the configuration key, the configuration value, and the effective block height. The effective block height records the block height at which the latest configuration value takes effect. For example, if `epoch_sealer_num` and `epoch_block_num` are set to 4 and 10000 respectively in a transaction within block 100, the system configuration table is as follows:

Sort the NodeIDs of all consensus nodes, as shown in the following figure.

#### **Chain initialization**

During chain initialization, rPBFT needs to select `epoch_sealer_num` consensus nodes from the consensus committee to participate in consensus. The current initial implementation selects the nodes with indexes from 0 to `epoch_sealer_num - 1` to participate in consensus for the first `epoch_block_num` blocks.

#### **The consensus committee nodes run the PBFT consensus algorithm**

The selected `epoch_sealer_num` consensus committee nodes run the PBFT consensus algorithm; validation nodes synchronize and verify the blocks produced by these committee nodes:

- Check the block signature list: each block must contain signatures from at least two-thirds of the consensus committee members
- Check the block execution result: the local execution result must be consistent with the execution result recorded by the consensus committee in the block header

#### **Dynamic replacement of the consensus committee list**

To ensure system security, after every `epoch_block_num` blocks the rPBFT algorithm removes one node from the consensus committee list (it becomes a validation node) and promotes one validation node into the consensus committee list, as shown in the following figure.
![](../../../images/consensus/epoch_rotating.png)

After a node restarts, the rPBFT algorithm needs to quickly determine the current consensus committee. Given the current block height `blockNum` and the effective block height `enableNum` of `epoch_block_num`, the consensus period is:

`rotatingRound = (blockNumber - enableNum) / epoch_block_num`

**Determining the starting node index of the consensus committee**: with `N` the total number of consensus nodes, the nodes with indexes from `(rotatingRound * epoch_block_num) % N` to `(rotatingRound * epoch_block_num + epoch_sealer_num) % N` belong to the consensus committee.
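The consensus-period and committee-selection rule above can be sketched as follows. This is an illustrative sketch of the formulas in the text — the function name and signature are assumptions, and `//` is integer division, matching the definition of `rotatingRound`:

```python
def consensus_members(block_number, enable_num, epoch_block_num, epoch_sealer_num, n):
    """Node indexes forming the current consensus committee (illustrative sketch)."""
    # rotatingRound = (blockNumber - enableNum) / epoch_block_num
    rotating_round = (block_number - enable_num) // epoch_block_num
    # The committee starts at (rotatingRound * epoch_block_num) % N and
    # spans epoch_sealer_num consecutive indexes, wrapping modulo N.
    start = (rotating_round * epoch_block_num) % n
    return [(start + i) % n for i in range(epoch_sealer_num)]
```

For example, with 7 consensus nodes, `epoch_block_num = 10` and `epoch_sealer_num = 4`, block 100 (with `enableNum = 0`) falls in rotating round 10, so the committee starts at index `100 % 7 = 2`.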
### Analysis of the rPBFT algorithm

- Network complexity: O(epoch_sealer_num * epoch_sealer_num), independent of the total node count, so it scales better than the PBFT consensus algorithm
- Performance: second-level confirmation; because the algorithm's complexity is independent of the number of nodes, performance degrades far less than PBFT
- Consistency and availability requirements: at least two-thirds of the consensus committee nodes must work properly for the system to reach consensus
- Security: a VRF algorithm will be introduced in the future to replace consensus committee members randomly and privately, further strengthening the security of the consensus algorithm

## rPBFT network optimization

To further improve the broadcast efficiency of Prepare packets in bandwidth-limited scenarios:

![](../../../images/consensus/broadcast_prepare_by_tree.png)

- The consensus nodes form a complete n-ary tree (default 3) according to their node indexes
- After the Leader generates a Prepare packet, it forwards it along the tree topology to all of its child nodes

**Advantages**:

- Faster propagation than gossip, with no redundant message packets
- Divide and conquer: each node's outbound bandwidth is O(1), giving strong scalability

**Disadvantage**: an intermediate node is a single point and requires an additional fault-tolerance strategy

The main processes include:

(2) After receiving the prepareStatus randomly broadcast by node A, node B determines whether the status of node A's Prepare packet is newer than node B's current local Prepare status (localPrepare).
The main determinations include:

- whether prepareStatus.blockNumber is greater than the current block height
- whether prepareStatus.blockNumber is greater than localPrepare.blockNumber
- when prepareStatus.blockNumber equals localPrepare.blockNumber, whether prepareStatus.view is greater than localPrepare.view

If any of these conditions holds, node A's Prepare packet state is newer than node B's.

(3) If node B's state is behind node A's and node B is disconnected from its parent node, node B sends a prepareRequest to node A requesting the corresponding Prepare packet.

(4) If node B's state is behind node A's but node B is still connected to its parent node, node B waits up to [100ms (configurable)](../../manual/configuration.html#rpbft) before sending a prepareRequest to node A requesting the corresponding Prepare packet.
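The three determinations in step (2) above can be combined into a single freshness predicate. A minimal sketch, assuming the status fields are available as dictionaries — the function name and representation are illustrative, not FISCO BCOS code:

```python
def prepare_status_is_newer(status, local_prepare, current_block_height):
    """True if a received prepareStatus is newer than the local Prepare state."""
    if status["blockNumber"] > current_block_height:
        return True   # ahead of this node's current chain height
    if status["blockNumber"] > local_prepare["blockNumber"]:
        return True   # ahead of the locally cached Prepare packet
    # Same block height: compare views
    return (status["blockNumber"] == local_prepare["blockNumber"]
            and status["view"] > local_prepare["view"])
```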
(5) After receiving the prepareRequest, the requested node replies with the corresponding Prepare message packet.

After rPBFT enables the Prepare-packet structure optimization and the other consensus optimizations above, the consensus flow is as follows:

    leader->>sealerA(parent node): Send PrepareStatus
    sealerA(parent node)->>sealerA(parent node): Update Prepare status cache {leader, PrepareStatus}
    sealerA(parent node)->>sealerB(child node): Forward Prepare
    sealerA(parent node)->>sealerA(parent node): Request the missing transactions from the leader; add the Prepare packet to the cache
    sealerA(parent node)->>sealerB(child node): Send PrepareStatus
    sealerB(child node)->>sealerB(child node): Update Prepare status cache {sealerA, PrepareStatus}
    sealerB(child node)->>sealerB(child node): Request the missing transactions from sealerA and obtain them

(3) The child node sealerA starts processing the Prepare packet:

- Fetch the hit transactions from the transaction pool and fill the block in the Prepare packet
- Request the missing transactions from the parent node (the leader)

(4) After receiving the leader's reply packet, sealerA fills the returned transactions into the Prepare packet and randomly selects 33% of the nodes to broadcast the Prepare packet status, which mainly includes {blockNumber, blockHash, view, idx}. After receiving the status packet, the other nodes update their cached latest status for sealerA.

**The main processing flow of sealerA's child node sealerB is as follows**

(1) After receiving the Prepare packet forwarded by sealerA, sealerB likewise forwards it to its own child nodes.

(2) sealerB then processes the Prepare packet: it first fetches the hit transactions from the transaction pool, fills the block in the Prepare packet, and selects a node from which to fetch the missing transactions:

- If sealerB's cached prepareStatus.blockHash from sealerA equals Prepare.blockHash, it requests the missing transactions directly from its parent node sealerA
- If sealerB's cached status-packet hash for sealerA does not equal Prepare.blockHash, but a prepareStatus.blockHash from another node C equals prepare.blockHash, it requests the missing transactions from C
- If no prepareStatus hash cached by sealerB equals prepare.blockHash, it waits up to [100ms (configurable)](../../manual/configuration.html#rpbft) and then requests the missing transactions from the Leader

(3) After receiving the transactions replied by the requested node, sealerB fills the block in the Prepare packet and randomly selects [33% (configurable)](../../manual/configuration.html#rpbft) of the nodes to broadcast the Prepare packet status

diff --git a/3.x/en/docs/design/contract.md
b/3.x/en/docs/design/contract.md

Tags: "smart contract" "virtual machine"

---

Transaction execution is an important function of a blockchain node. Executing a transaction means taking the smart contract binary code in the transaction and running it with the executor (Executor). The consensus module (Consensus) takes transactions out of the transaction pool (TxPool), packages them into blocks, and calls the executor to execute the transactions in the block. During execution, the blockchain state (State) is modified, forming a new block state that is persisted (Storage). In this process the executor acts like a black box: the input is the smart contract code, and the output is the state change.

As the technology developed, people began to pay attention to the performance and usability of executors. On the one hand, they want smart contracts to execute faster on the blockchain to meet large-scale transaction demands; on the other hand, they want to develop in more familiar and better languages. Hence alternatives to the traditional executor (EVM) appeared, such as WASM.

EVMC (Ethereum Client-VM Connector API) is the executor interface abstracted by Ethereum, designed to interface with various types of executors.

![Virtual Machine](../../images/evm/evmc_frame.png)

On a node, the consensus module hands the packaged blocks to the executor for execution. While the virtual machine executes, reads and writes of state in turn operate on the node's state data through EVMC callbacks.

With EVMC's abstraction, FISCO BCOS can interface with more efficient and easier-to-use executors that emerge in the future. Currently, FISCO BCOS supports two contract engines, evm and wasm. As the interpreter for evm, evmone supports Solidity-based smart contracts compiled to EVM bytecode. The wasm engine is implemented with wasmtime and supports contracts in languages that compile to wasm, such as [Liquid](https://liquid-doc.readthedocs.io/zh_CN/latest/index.html); toolchains for other languages are being planned.

.. toctree::
   :maxdepth: 1

diff --git a/3.x/en/docs/design/contract_directory.md b/3.x/en/docs/design/contract_directory.md

# 18. Contract file system BFS

Tags: "Contract Directory," "Blockchain File System," "BFS"

"**Everything is a file descriptor.**"

FISCO BCOS version 3.x introduces the concept of a blockchain contract file system (BFS). Similar to the Linux VFS, it organizes and manages contract resources on the blockchain using a tree-structured file directory.

## 1. Use examples

### 1.1 Examples of use

- Users can operate smart contracts through the console with a Linux-terminal-like experience

```shell
# Use ls to view resources
```

- Users can call the BFS interface in smart contracts to operate the file system on the blockchain

  - BFS is a precompiled contract.
Users can directly call the BFS interface in a smart contract by introducing the fixed address of the BFS pre-compiled contract.。 + -BFS is a pre-compiled contract. Users can directly call the BFS interface in a smart contract by introducing the fixed address of the BFS pre-compiled contract。 - Fixed address is' 0x100e 'in Solidity smart contract and' / sys / bfs' in WASM execution environment。 @@ -87,30 +87,30 @@ FISCO BCOS version 3.x introduces the concept of blockchain contract file system The use experience of BFS is mainly reflected in the console. Please refer to the BFS-related operation commands in the console, as well as the precautions and errors when using the BFS commands in the console. Please refer to [link](../operation_and_maintenance/console/console_commands.html#bfs)。 -Please refer to [link] for precautions and errors when using the BFS interface called by the contract.(../contract_develop/c++_contract/precompiled_contract_api.html#bfsprecompiled) 。 +Please refer to [link] for precautions and errors when using the BFS interface called by the contract(../contract_develop/c++_contract/precompiled_contract_api.html#bfsprecompiled) 。 -BFS is supported in both the deployment contract and the call contract. For details, see [deploy command] in the console.(../operation_and_maintenance/console/console_commands.html#deploy)with [call command](../operation_and_maintenance/console/console_commands.html#call)。 +BFS is supported in both the deployment contract and the call contract. For details, see [deploy command] in the console(../operation_and_maintenance/console/console_commands.html#deploy)with [call command](../operation_and_maintenance/console/console_commands.html#call)。 ## 2. 
Design Documents

-In the blockchain storage architecture, the contract data is stored in a storage table corresponding to the contract address.。When initiating a call to a contract, the executor accesses the storage table corresponding to the contract address from the storage, obtains the code fields in the table, and loads them into the virtual machine for execution.。This is completely feasible in a single-process node, and the cache with storage can also have high execution performance.。However, it is no longer feasible in the microservices modular design architecture of FISCO BCOS version 3.x.。The computing cluster needs to know which storage instance the storage table corresponding to the contract address is mounted on, which is bound to encounter problems such as addressing mapping and mapping synchronization, which is not conducive to the scheduling and partition mounting of the executor.。
+In the blockchain storage architecture, contract data is stored in a storage table corresponding to the contract address. When a call to a contract is initiated, the executor accesses the storage table corresponding to the contract address from storage, obtains the code field in the table, and loads it into the virtual machine for execution. This is completely feasible in a single-process node, and together with the storage cache it can deliver high execution performance. However, it is no longer feasible in the microservice, modular architecture of FISCO BCOS version 3.x: the computing cluster needs to know which storage instance the contract's storage table is mounted on, which inevitably raises problems such as address mapping and mapping synchronization, and is not conducive to executor scheduling and partition mounting.

 Design Objectives:

-- Convenient module scheduling and partition mounting, allowing storage mounting for multiple distributed use cases;
+- Convenient module scheduling and partition mounting, allowing storage to be mounted for multiple distributed use cases;

 - Compared with the centralized contract addressing experience of CNS, BFS provides a **contract resource addressing experience** that is easier for humans to read, understand, and manage by partition;

-- Resource path partition management, different partitions have different responsibilities;
+- Resource paths are managed by partition, and different partitions have different responsibilities;

-- Provides a common interface for users to call, users can operate BFS resources in smart contracts.。
+- Provides a common interface for users to call, so that users can operate BFS resources in smart contracts.

 Design Constraints:

-- The operating interfaces of BFS are implemented in the form of pre-compiled contracts.;
-- BFS maintains a tree-like logical structure of the contract file system, represented as a tree-like multi-level directory layer.;In the actual storage layer, the absolute path is the table name.;
-- Resource types in BFS can be simply classified as directory resources, contract resources;
-- Users are not allowed to create data at will. Users are only allowed to operate in the '/ apps' and '/ tables' directories.
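
The constraint that BFS is exposed only through a pre-compiled contract can be sketched from a caller's point of view. The fragment below is illustrative, not the released ABI: it binds the fixed Solidity address `0x100e` mentioned earlier, and the method names (`mkdir`, `link`) are assumptions modeled on the BfsPrecompiled interface discussed later in this document — the exact signatures should be checked against the released ABI:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity >=0.6.10 <0.9.0;

// Illustrative sketch only: method names mirror the BFS pre-compiled
// interface described in this document, but the exact signatures are
// assumptions and must be checked against the released BfsPrecompiled ABI.
interface IBfs {
    // Create a directory; only paths under /apps and /tables are writable.
    function mkdir(string calldata absolutePath) external returns (int32);
    // Bind a contract name/version to a deployed contract address.
    function link(string calldata name, string calldata version,
                  string calldata addr, string calldata abiJson) external returns (int32);
}

contract BfsCaller {
    // Fixed address of the BFS pre-compiled contract in the Solidity
    // execution environment, as stated above.
    IBfs constant BFS = IBfs(address(0x100e));

    function makeDir(string calldata path) external returns (int32) {
        return BFS.mkdir(path);
    }
}
```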
+- The operating interfaces of BFS are implemented in the form of pre-compiled contracts;
+- BFS maintains a tree-like logical structure of the contract file system, represented as a tree-like multi-level directory hierarchy; in the actual storage layer, the absolute path is the table name;
+- Resource types in BFS can be simply classified as directory resources and contract resources;
+- Users are not allowed to create data arbitrarily; they may only operate in the '/apps' and '/tables' directories

### 2.1 BFS Partition Management

@@ -118,42 +118,42 @@ To facilitate O & M management and partition isolation for different application

 | **Partition Name** | **Path primary purpose** | **Description** |
 |------------|------------------|-------------------------------------------------------------------------------------------------------|
-| /sys | System Contract Directory| This directory stores the logical structure of system contracts and does not support changes at this time.。 |
+| /sys | System Contract Directory| This directory stores the logical structure of system contracts and does not support changes at this time. |
+| /apps | Business Contract Catalog| Users can only create directories and deploy contracts in this directory. Deploying a contract creates a path in this directory, such as /apps/Hello/.
Multi-level directories are supported. |
 | /tables | User Table Directory| Table contracts created by users are placed in this directory and are visible to the public. |
-| /usr | User Directory| The user is stored in the user directory as a contract, which binds the user name, public key, permissions, and provides a functional interface.**(Not yet implemented)** |
+| /usr | User Directory| The user is stored in the user directory as a contract, which binds the user name, public key, and permissions and provides a functional interface **(Not yet implemented)** |

 Partition restrictions:

 - Users are only allowed to create directories under '/ apps /' and '/ tables /';
 - The contract table created by using the underlying KVTable or Table interface will create the user table path in the '/ tables /' directory;
-- When users deploy contracts, they add directories to the '/ apps /' directory。
+- When deploying a contract, users add a directory to the '/ apps /' directory.

### 2.2 Positioning of BFS in the overall blockchain architecture

-When executing the schedule, the parallelizable transactions will be sent to different execution physical machines (here referred to as executors).。In the process of issuing the transaction, the executor will read the data from the cache, and if the cache does not have the data needed for transaction execution, it will read the data from the storage.。When reading the cache, the computing cluster must ensure that the data read by different executors does not conflict (that is, different executors do not read the same data, and if the same data is read, there will be data synchronization problems).。All contract resources in BFS are unique and isolated, ensuring that the resources read by each executor can be isolated from each other in storage.。
+During execution scheduling, parallelizable transactions are sent to different execution machines (referred to here as executors). While executing a transaction, an executor reads data from the cache, and if the cache does not hold the data needed for the transaction, it reads it from storage. When reading the cache, the computing cluster must ensure that the data read by different executors does not conflict (that is, different executors do not read the same data; if they did, data synchronization problems would arise). All contract resources in BFS are unique and isolated, ensuring that the resources read by each executor can be isolated from each other in storage.

-The overall architecture diagram is shown below, when executing transactions, the tree file data structure maintained by BFS exists in the cache in the executor, and in the actual storage layer, the absolute path of the resource is the table name in the storage layer.。The executor will be addressed to the corresponding contract storage table through the resource path when executing the contract, and thus to the corresponding storage table through the resource absolute path.。It can be seen that BFS maintains the mapping relationship of < resource path = > storage table > in the blockchain.。
+The overall architecture diagram is shown below. When transactions are executed, the tree-like file data structure maintained by BFS lives in the executor's cache, while in the actual storage layer the absolute path of a resource is the table name. When executing a contract, the executor addresses the corresponding contract storage table through the resource's absolute path. It can be seen that BFS maintains the <resource path => storage table> mapping in the blockchain.

-Further, the tree organization of BFS can effectively solve the problem of contract resources in scheduling execution and partition storage, the executors of different partitions can only load the contract state data under a directory during execution, and
different storage instances can also be mounted to different directory structures.。
+Further, the tree organization of BFS effectively solves the problem of contract resources in scheduled execution and partitioned storage: the executors of different partitions only need to load the contract state data under one directory during execution, and different storage instances can also be mounted on different directory structures.

![](../../images/design/bfs_in_system.png)

### 2.3 BFS Organizational Structure

-BFS maintains the mapping relationship between resource paths and storage tables in a tree structure. In the actual storage layer, the absolute path of contract resources is the table name of the storage table.。
+BFS maintains the mapping relationship between resource paths and storage tables in a tree structure. In the actual storage layer, the absolute path of a contract resource is the table name of its storage table.

-The BFS logical structure diagram is shown below, and the upper part is a sample diagram of the smart contract tree directory structure, which is represented as the logical organization of all resource files.;The second half is the actual performance of the smart contract tree directory in storage, representing the table contents actually stored in storage.。For example, the '/ apps / Hello1' path has a logical structure of '/', '/ apps /', and '/ apps / Hello1'. In the storage structure, '/', '/ apps /', and '/ apps / Hello1' all have a corresponding storage table, and the storage table name is the corresponding absolute path.。
+The BFS logical structure diagram is shown below. The upper part is a sample of the smart contract tree directory structure, representing the logical organization of all resource files; the lower part is how the smart contract tree directory actually appears in storage, representing the table contents actually stored. For example, the '/apps/Hello1' path has a logical structure of '/', '/apps/', and '/apps/Hello1'. In the storage structure, '/', '/apps/', and '/apps/Hello1' each have a corresponding storage table, and the storage table name is the corresponding absolute path.

![](../../images/design/bfs_logic_structure.png)

### 2.4 Storage contents of BFS storage table

-The resource types in BFS can be simply classified as directory resources and contract resources, in which contract resources can be divided into ordinary contracts, Table contracts, pre-compiled contracts, contract soft links, etc.。The following figure shows all the storage tables involved in BFS:
+The resource types in BFS can be simply classified as directory resources and contract resources, where contract resources can be further divided into ordinary contracts, Table contracts, pre-compiled contracts, contract soft links, etc. The following figure shows all the storage tables involved in BFS:

![](../../images/design/bfs_data_structure.png)

-- The directory resource storage table mainly records the sub-resource name and resource type of the directory path. The storage content is as follows:
+- The directory resource storage table mainly records the sub-resource names, resource types, and other data under the directory path. The storage content is as follows:

 | name | type | status | acl_type | acl_black | acl_white | extra |
 |--------|-----------|--------|-------------|-----------|-----------|-------|

@@ -162,11 +162,11 @@ The resource types in BFS can be simply classified as directory resources and co

 | sys | directory | normal | black/white | {...} | {...} | ... |
 | usr | directory | normal | black/white | {...} | {...} | ... |

-- The contract resource storage table mainly records the status data required by the contract during execution.
+- The contract resource storage table mainly records the status data required by the contract during execution; the differences between the various types of contract resources in the storage table are discussed below

  - Ordinary contract, Table contract

-  - This type of contract resource storage table mainly stores the 'code' field, as well as other status data.
+  - This type of contract resource storage table mainly stores the 'code' field, as well as other status data

 | Key | Value |
 |-------|------------------------------------------|

@@ -174,9 +174,9 @@ The resource types in BFS can be simply classified as directory resources and co

 | abi | ABI field loaded to the execution engine for DAG analysis at execution time|
 | state | ... (Other status data) |

-  - contract soft link
+  - Contract soft link

-  - This type of contract resource mainly stores the real contract address corresponding to the soft link and the ABI corresponding to the real contract.
+  - This type of contract resource mainly stores the real contract address corresponding to the soft link and the ABI corresponding to the real contract

 | Key | Value |
 |--------------|-----------------------|

@@ -185,46 +185,46 @@ The resource types in BFS can be simply classified as directory resources and co

 | link_address | 0x123456 (Contract Address)|
 | link_abi | ...
(contract ABI)|

-  - Precompiled Contracts
+  - Pre-compiled contracts

-  - This type of contract resource is only displayed logically, and there is no actual storage table.
+  - This type of contract resource is only displayed logically, and there is no actual storage table

### 2.5 BFS Storage Table Lifecycle

-BFS storage table life cycle mainly includes creation, modification, reading, temporarily does not support delete, move and other operations。From the perspective of BFS resource classification, there are three types: the creation and reading of directory resources, the creation and reading of ordinary contracts, and the creation, modification and reading of contract soft links.。
+The BFS storage table life cycle mainly covers creation, modification, and reading; delete, move, and other operations are not supported for now. From the perspective of BFS resource classification, there are three cases: the creation and reading of directory resources, the creation and reading of ordinary contracts, and the creation, modification, and reading of contract soft links.

#### 2.5.1 Directory Resource Lifecycle

-- When the blockchain creation block is initialized, the on-chain system directories are created: '/', '/ apps', '/ tables', '/ usr', and '/ sys'.。
-- When you create a BFS resource, you can create an absolute-path multi-level directory. In this case, a directory resource that does not exist in the absolute path is recursively created. For example, when you create / tables / test / t1, the / tables / test path does not exist.。
-- Users can read 'sub _ dir' in directory resources through BFS list interface
+- When the blockchain genesis block is initialized, the on-chain system directories '/', '/apps', '/tables', '/usr', and '/sys' are created.
+- When creating a BFS resource, you can create a multi-level directory from an absolute path; any directory resource that does not yet exist on the absolute path is created recursively, for example when creating /tables/test/t1 while the /tables/test path does not exist.
+- Users can read 'sub_dir' in directory resources through the BFS list interface

#### 2.5.2 Common Contract Resource Life Cycle

-- When a user initiates a contract creation request or a contract creation request, a corresponding contract resource table is created in the '/ apps' directory. For example, when a contract 0x123456 is created, a storage table of '/ apps / 123456' is created。
-- **It is worth noting that:** The unreadable nature of the address of Solidity is contrary to the BFS readable and visible principle, so the contract address after the deployment of Solidity only generates the contract storage table, and its BFS metadata is not written to the '/ apps' table.。For example, if the address is 0x123456 after the user deploys the contract, the contract storage table '/ apps / 123456' will be created for the contract, but the metadata will not be written to '/ apps', that is, the user calls' list '.(/apps)', will not show subdirectory has' 123456 'this resource。You can use the link operation to bind the Solidity contract address to BFS.。
-- When a user initiates a request to create a table contract, a corresponding contract resource table is created in the '/ tables' directory. For example, when a table contract t _ test contract is created, a storage table of '/ tables / t _ test' is created。
-- When the contract is executed, the status data in the storage table corresponding to the contract resource is read。
-- When deploying a common contract, a contract permission data table is created. For more information, see [Permission Underlying Node Design](./committee_design.html#id15)
+- When a user initiates a contract creation request, a corresponding contract resource table will be created in the '/apps' directory. For example, when a contract 0x123456 is created, a storage table '/apps/123456' will be created.
+- **It is worth noting that** the unreadability of Solidity addresses runs counter to BFS's readable-and-visible principle, so deploying a Solidity contract only generates the contract storage table; its BFS metadata is not written to the '/apps' table. For example, if the deployed contract's address is 0x123456, the contract storage table '/apps/123456' will be created, but the metadata will not be written to '/apps'; that is, calling 'list /apps' will not show a '123456' resource in the subdirectory. You can use the link operation to bind the Solidity contract address into BFS.
+- When a user initiates a request to create a table contract, a corresponding contract resource table will be created in the '/tables' directory. For example, when a table contract t_test is created, a storage table '/tables/t_test' will be created.
+- When a contract is executed, the status data in the storage table corresponding to the contract resource will be read.
+- When deploying a common contract, a contract permission data table will be created. For more information, see [Permission Underlying Node Design](./committee_design.html#id15)

#### 2.5.3 Contract Soft Link Resource Life Cycle

-- You can create a contract soft link resource through the 'link' interface. The corresponding resource will be created in the '/ apps' directory. For example, when you call link, the parameter is Hello v1 0x123456, the contract name is Hello, v1 is the version number, and 0x123456 is the real contract address. The '/ apps / Hello / v1' contract soft link resource will be created.。
-- Similarly, you can use the 'link' interface to overwrite the written contract soft link resource.
For example, if the parameter is Hello v1 0x666888 when the link is called again, the written contract soft link resource of '/ apps / Hello / v1' will be overwritten.。
-- Users can call the contract soft link directly through the console's' call 'command to call the real contract.。
-- The user can call the 'readlink' interface to obtain the real contract address of the contract soft link, and can also read the contract address and ABI when calling the contract.。
+- You can create a contract soft link resource through the 'link' interface; the corresponding resource will be created in the '/apps' directory. For example, when link is called with the parameters Hello v1 0x123456 (Hello is the contract name, v1 the version number, and 0x123456 the real contract address), the '/apps/Hello/v1' contract soft link resource will be created.
+- Similarly, you can use the 'link' interface to overwrite an existing contract soft link resource. For example, if link is called again with the parameters Hello v1 0x666888, the existing '/apps/Hello/v1' contract soft link resource will be overwritten.
+- Users can call the real contract directly through a contract soft link with the console's 'call' command.
+- Users can obtain the real contract address of a contract soft link by calling the 'readlink' interface, and can also read the contract address and ABI when calling the contract.

### 2.6 BFS main interface implementation

#### 2.6.1 Deployment Contract

-When the contract is deployed, it is predetermined that the parent directory of the level directory already exists and information can be written, and then the virtual machine is started to perform the deployment contract creation operation.;After the virtual machine is successfully executed, create a contract table and write the contract code segment, ABI, status data, etc. after the virtual machine is executed to the contract table, and finally record the metadata of the newly deployed contract in the data table of the parent directory。The overall process is shown in the following figure, and the flowchart takes the creation of "/ apps / Hello / World" as an example.。
+When a contract is deployed, it is first checked that the parent directory of this level already exists and is writable, and then the virtual machine is started to perform the contract creation operation; after the virtual machine finishes successfully, a contract table is created, the resulting contract code segment, ABI, status data, etc. are written into it, and finally the metadata of the newly deployed contract is recorded in the data table of the parent directory. The overall process is shown in the following figure, taking the creation of "/apps/Hello/World" as an example.

![](../../images/design/bfs_in_deploy.png)

#### 2.6.2 Creating Folders

-When creating a directory, it is determined that the parent directory of the directory at this level already exists and information can be written, and then the metadata of the newly created directory is recorded in the data table of the parent directory.。The overall process is shown in the following figure, and the flowchart takes the creation of "/ apps / Hello / World" as an example.。
+When creating a directory, it is checked that the parent directory of this level already exists and is writable, and then the metadata of the newly created directory is recorded in the data table of the parent directory. The overall process is shown in the following figure, taking the creation of "/apps/Hello/World" as an example.

![](../../images/design/bfs_in_mkdir.png)

@@ -232,9 +232,9 @@ When creating a directory, it is determined that the parent directory of the dir

### 3.1 Description of
BFS compatibility with CNS

-FISCO BCOS 3.+ BFS and 2.+The version of the CNS service is similar, providing the chain contract path and contract address, contract version of the mapping relationship between the record and the corresponding query function, to facilitate the caller by memorizing a simple contract name to achieve the call to the chain contract.。Compared with CNS, BFS is a step closer, with a multi-level concept, which can more intuitively show the path hierarchy and facilitate user management.。
+The BFS of FISCO BCOS 3.x is similar to the CNS service of 2.x: it records the mapping between an on-chain contract path and the contract address and version, and provides the corresponding query functions, so that a caller can invoke an on-chain contract by remembering only a simple contract name. Compared with CNS, BFS goes a step further with its multi-level concept, which shows the path hierarchy more intuitively and is easier for users to manage.

-In order to adapt the user in 2.+Version has used the code logic of the CNS interface, and BFS provides an interface that is consistent with the CNS effect:
+To stay compatible with user code that already relies on the CNS interface in 2.x, BFS provides an interface consistent with the CNS behavior:

 ```solidity
 struct BfsInfo {
@@ -251,14 +251,14 @@ abstract contract BfsPrecompiled {
 }
 ```

-- The 'link' interface is the same as the 'insert' parameter of the CNS. A mapping relationship between the contract name / contract version number and the contract address is added and its ABI is recorded.。
-- The 'list' interface has the same effect as the 'selectByName' traversal interface of the CNS. All resources in the directory will be returned. The corresponding contract address is in ext [0], and the contract ABI is in ext [1]。It is also compatible with 'selectByNameAndVersion'. If the resource corresponding to the parameter is a soft connection resource, a resource will be returned.。
-- The 'readlink' interface is the same as the 'getContractAddress' interface of the CNS. The corresponding contract address is obtained according to the contract name / contract version number.。
+- The 'link' interface takes the same parameters as the 'insert' interface of CNS: it adds a mapping from contract name / contract version number to contract address and records its ABI.
+- The 'list' interface has the same effect as the 'selectByName' traversal interface of CNS: all resources in the directory are returned, with the corresponding contract address in ext[0] and the contract ABI in ext[1]. It is also compatible with 'selectByNameAndVersion': if the resource corresponding to the parameter is a soft link resource, a single resource is returned.
+- The 'readlink' interface is the same as the 'getContractAddress' interface of CNS: the corresponding contract address is obtained from the contract name / contract version number.

-**Use example:** Next, the console demonstrates how to use the BFS interface to add, upgrade, traverse, and call contract versions.。
+**Use example:** Next, the console demonstrates how to use the BFS interface to add, upgrade, traverse, and call contract versions.

 - Create a new contract name and version number:

-  - The user creates a BFS soft connection with the contract name Hello and version v1:
+  - The user creates a BFS soft link with the contract name Hello and version v1:

 ```shell
 # Create a contract softlink with the contract name Hello and the contract version v1
@@ -267,14 +267,14 @@ abstract contract BfsPrecompiled {
 "code":0,
 "msg":"Success"
 }
-  # The link file is created in the / apps / directory.
+  # The link file is created in the / apps / directory
 [group0]: /apps> ls ./Hello/v1
 v1 -> 19a6434154de51c7a7406edf312f01527441b561
 ```

 - At this point, the user has created the contract name Hello and specified the contract version number as v1. Under the BFS directory structure, the path of the newly created soft link is '/apps/Hello/v1'

-- Upgrade Contract Version
-  - Users can specify versions corresponding to multiple contract names and can overwrite the existing version numbers
+- Upgrade contract version
+  - Users can specify versions corresponding to multiple contract names and can overwrite pre-existing version numbers

 ```shell
 # Create a contract soft link with the contract name Hello and the contract version latest
@@ -283,7 +283,7 @@ abstract contract BfsPrecompiled {
 "code":0,
 "msg":"Success"
 }
-  # The link file is created in the / apps / directory.
+  # The link file is created in the / apps / directory
 [group0]: /apps> ls ./Hello/latest
 latest -> 0x2b5DCbaE97f9d9178E8b051b08c9Fb4089BAE71b
 # Version number can be overwritten
@@ -294,8 +294,8 @@ abstract contract BfsPrecompiled {
 }
 ```

-- Traverse all version numbers of the specified contract name
-  - Users can traverse all version numbers of a specified contract name through the list interface.
+- Traverse all version numbers of the specified contract name
+  - Users can traverse all version numbers of a specified contract name through the list interface

 ```shell
 [group0]: /apps> ls ./Hello

diff --git a/3.x/en/docs/design/guomi.md b/3.x/en/docs/design/guomi.md
index 29cfe5a97..11a47d5a0 100644
--- a/3.x/en/docs/design/guomi.md
+++ b/3.x/en/docs/design/guomi.md
@@ -7,14 +7,14 @@ Tags: "State Secret Algorithm" "SSL" "SM" "

 In order to fully support domestic cryptography algorithms, the Golden Chain Alliance has implemented encryption and decryption, signatures, signature verification, hash algorithms, and the SSL communication protocol based on the [Domestic Cryptography Standards](http://www.gmbz.org.cn/main/bzlb.html), and integrated them into the FISCO BCOS platform to achieve full support for the **commercial cryptography recognized by the National Cryptography Administration**.

-**The state secret version of FISCO BCOS replaces the cryptographic algorithms of the underlying modules such as transaction signature verification, p2p network connection, node connection, data drop encryption, etc. with the state secret algorithm.**The main features of the state secret version of FISCO BCOS and the standard version are compared as follows:
+**The state secret version of FISCO BCOS replaces the cryptographic algorithms of underlying modules such as transaction signature verification, p2p network connection, node connection, and data-at-rest encryption with the state secret algorithms.** The main features of the state secret version of FISCO BCOS and the standard version are compared as follows:

 | | FISCO BCOS Standard Edition| State Secret Edition FISCO BCOS|
 |:------------:|:--------------------:|:------------------:|
 | SSL Link| OpenSSL TLSv1.2 Protocol| State Secret TLSv1.1 Protocol|
 | Signature Verification| ECDSA Signature Algorithm| SM2 Signature Algorithm|
 | message digest algorithm| SHA-256 SHA-3 | SM3 Message Digest Algorithm|
-  | falling disk encryption algorithm| AES-256 encryption algorithm| SM4 Encryption Algorithm|
+  | falling disk encryption algorithm| AES-256 Encryption Algorithm| SM4 Encryption Algorithm|
 | Certificate Mode| OpenSSL certificate mode| State Secret Dual Certificate Mode|
 | contract compiler| Ethereum Solidity Compiler| State Secret Solidity Compiler|

@@ -34,8 +34,8 @@ The ECDHE _ SM4 _ SM3 cipher suite of State Secret SSL 1.1 is used to establish

 |:------------:|:----------------------------------------:|:----------------------------------------:|
 | Encryption Suite| Using ECDH, RSA, SHA-256, AES256 and other cryptographic algorithms| Adopting the State Secret Algorithm|
 | PRF algorithm| SHA-256 | SM3 |
-  | Key exchange mode| Transmission elliptic curve parameters and the signature of the current message| The signature and encryption certificate of the current message.|
-  | Certificate Mode| OpenSSL certificate mode| The dual certificate model of the State Secret, which is an encryption certificate and a signature certificate, respectively.|
+  | Key exchange mode| Transmission of elliptic curve parameters and the signature of the current message| The signature and encryption certificate of the current message|
+  | Certificate Mode| OpenSSL certificate mode| The dual-certificate model of the State Secret, namely an encryption certificate and a signature certificate|

## Data structure differences

@@ -46,4 +46,4 @@ The data structure differences between the State Secret
Edition and the Standard
| Signature| ECDSA (Public key length: 512 bits, private key length: 256 bits) | SM2 (Length of public key: 512 bits, length of private key: 256 bits) |
| Hash| SHA3 (Hash string length: 256 bits) | SM3 (Hash string length: 256 bits) |
| symmetric encryption and decryption| AES (Encryption Key Length: 256 bits) | SM4 (Symmetric key length: 128 bits) |
- | Transaction length| 520bits(The identifier is 8bits and the signature length is 512bits.) | 1024bits(128 bytes, including public key 512bits, signature length 512bits) |
+ | Transaction length| 520 bits (8-bit identifier plus 512-bit signature) | 1024 bits (128 bytes: 512-bit public key plus 512-bit signature) |
diff --git a/3.x/en/docs/design/hsm.md b/3.x/en/docs/design/hsm.md
index d51358b4a..9f3776e6a 100644
--- a/3.x/en/docs/design/hsm.md
+++ b/3.x/en/docs/design/hsm.md
@@ -8,22 +8,22 @@ Tags: "hardware encryption" "" HSM "" "cipher machine" "

### cipher machine HSM
-Hardware security module (HSM) is a computer hardware device used to secure and manage the digital keys used by strong authentication systems, while providing related cryptographic operations.。Hardware security modules are typically connected directly to a computer or network server in the form of an expansion card or external device。
+A hardware security module (HSM) is a computer hardware device that safeguards and manages the digital keys used by strong authentication systems while providing related cryptographic operations. HSMs are typically attached directly to a computer or network server as an expansion card or an external device.

### GMT0018
-《GMT0018-The 2012 Cryptographic Device Application Interface Specification is a cryptographic device application interface specification issued by the State Cryptographic Administration and conforms to the Chinese cryptographic industry standard.。It establishes a unified application interface standard for service-type
cryptographic devices under the framework of the public key cryptographic infrastructure application system, through which cryptographic devices are invoked to provide basic cryptographic services to the upper layer.。Provide standard basis and guidance for the development, use and testing of such cryptographic devices, which is conducive to improving the level of productization, standardization and serialization of such cryptographic devices.。
+"GMT0018-2012 Cryptographic Device Application Interface Specification" is a cryptographic device application interface specification issued by the National Cryptographic Administration that conforms to the Chinese cryptographic industry standard. It establishes a unified application interface standard for service-type cryptographic devices under the public key cryptographic infrastructure application framework; cryptographic devices are invoked through this interface to provide basic cryptographic services to the upper layer. It also provides a standard basis and guidance for the development, use, and testing of such devices, helping improve their productization, standardization, and serialization.

-FISCO BCOS 2.8.0 and FISCO BCOS 3.3.0 versions introduce cipher machine functionality。Users can put the password into the cipher machine, through the cipher machine**consensus signature**、**Transaction Validation**。FISCO BCOS supports GMT0018-The 2012 Cipher Device Application Interface Specification for Cipher Cards / Ciphers supports the SDF standard, which allows FISCO BCOS to have faster cryptographic calculations and more secure key protection。
+FISCO BCOS 2.8.0 and FISCO BCOS 3.3.0 introduce cipher machine (HSM) functionality. Users can store their keys in the cipher machine and use it for **consensus signing** and **transaction verification**. FISCO BCOS supports "GMT0018-2012"-compliant cipher cards /
cipher machines, and supports the SDF standard, giving FISCO BCOS faster cryptographic computation and more secure key protection.

-## Second, call the password machine module.
+## 2. Invoking the cipher machine module

-The consensus and trading module of FISCO BCOS calls the cipher machine.。
-Consensus and transaction modules call 'bcos when signing-crypto 'module,' bcos-crypto 'call again' hsm-The 'crypto' module, which finally calls the password machine API interface to complete the signature。The parameters involved are also the built-in key index keyIndex of the cipher machine passed in through the configuration file, and finally the cipher machine signature interface 'SDF _ InternalSign _ ECC' is called.。
-The transaction module calls' bcos' in the same way when checking the signature.-crypto 'module,' bcos-crypto 'call again' hsm-The 'crypto' module, which finally calls the password machine API interface to complete the signature。Finally call the password machine verification interface 'SDF _ ExternalVerify _ ECC'。
+The consensus and transaction modules of FISCO BCOS call the cipher machine.
+When signing, the consensus and transaction modules call the `bcos-crypto` module, `bcos-crypto` in turn calls the `hsm-crypto` module, and `hsm-crypto` finally calls the cipher machine API to complete the signature. The key used is the cipher machine's built-in key index `keyIndex`, passed in through the configuration file; the cipher machine signing interface `SDF_InternalSign_ECC` is invoked in the end.
+When verifying signatures, the transaction module likewise calls `bcos-crypto`, which calls `hsm-crypto`, which finally calls the cipher machine verification interface `SDF_ExternalVerify_ECC`.

### hsm-crypto module
-hsm-Crypto is an encapsulated cipher API interface that uses C++The hardware security module (Hardware Secure Module) implemented to assist
applications in calling the GMT0018-The PCI password card or password machine of 2012 Common Interface Specification for Password Equipment performs operations of SM2, SM3 and SM4。FISCO BCOS node, and java-The sdk calls the password machine API interface by calling this module.。[Github Project Address](https://github.com/WeBankBlockchain/hsm-crypto)
+hsm-crypto is a C++ wrapper around the cipher machine (Hardware Security Module) API. It helps applications call PCI cipher cards or cipher machines that comply with the GMT0018-2012 Common Interface Specification for Cryptographic Devices to perform the state secret algorithms SM2, SM3, and SM4. FISCO BCOS nodes and the java-sdk call the cipher machine API through this module. [Github Project Address](https://github.com/WeBankBlockchain/hsm-crypto)

-At this point, the hardware cipher machine HSM design document is over, about FISCO BCOS and java-sdk how to use password machine, please refer to [build a password module using hardware state chain](../tutorial/air/use_hsm.md)
+This concludes the hardware cipher machine (HSM) design document. For how FISCO BCOS and the java-sdk use a cipher machine, see [Building a state secret chain using hardware cryptographic modules](../tutorial/air/use_hsm.md)
diff --git a/3.x/en/docs/design/index.md b/3.x/en/docs/design/index.md
index 00ddd9371..3093858d0 100644
--- a/3.x/en/docs/design/index.md
+++ b/3.x/en/docs/design/index.md
@@ -2,17 +2,17 @@ Tags: "System Design" "Consensus" "Distributed Storage" "Contract Catalog"

-FISCO BCOS 3.x version adopted**Microservices Modularization**Design architecture, the overall system includes access layer, scheduling layer, computing layer, storage layer and management layer.:
+FISCO BCOS 3.x adopts a **microservice, modular** architecture; the overall system includes the access layer, scheduling layer, computing layer, storage layer, and
management layer:

-- **access layer**: Responsible for blockchain**The ability to connect** , including the "external gateway service" that provides P2P capabilities and the "internal gateway service" that provides SDK access.。
+- **Access layer**: responsible for the blockchain's **connectivity**, including the "external gateway service" that provides P2P capabilities and the "internal gateway service" that provides SDK access.
***
-- **scheduling layer**The "brain center" system for the operation and scheduling of the blockchain kernel, responsible for the entire blockchain system.**operation scheduling**, including network distribution scheduling, transaction pool management, consensus mechanism, calculation scheduling and other modules.。
+- **Scheduling layer**: the "brain" of the blockchain kernel, responsible for **operation scheduling** of the entire blockchain system, including network distribution scheduling, transaction pool management, the consensus mechanism, computation scheduling, and other modules.
***
-- **calculation layer**: Responsible**Transaction Validation**The core of the blockchain is to decode the transaction and execute it in the contract virtual machine to get the result of the transaction execution.。
+- **Computing layer**: responsible for **transaction validation**, the core of the blockchain: it decodes each transaction and executes it in the contract virtual machine to obtain the execution result.
***
- **Storage Tier**: Responsible**Drop Disk Storage** Data such as transaction, block and ledger status。
***
-- **Management**: Implemented for each module of the entire blockchain system.**visual management** platform, including management functions such as deployment, configuration, logging, and network routing。FISCO BCOS 3.x system architecture based on open source microservices framework Tars。
+- **Management layer**: a **visual
management** platform for every module of the blockchain system, including functions such as deployment, configuration, logging, and network routing. The FISCO BCOS 3.x system architecture is based on the open-source microservice framework Tars.
***

------

@@ -20,11 +20,11 @@ ___ Support**Flexible split combination** Microservice modules, which can build different forms of service patterns, currently include:

***
-- **Lightweight Air Edition**: Adopting all-in-The one encapsulation mode compiles all modules into a binary (process), a process is a blockchain node, including all functional modules such as network, consensus, access, etc., using local RocksDB storage, suitable for beginners entry, function verification, POC products, etc.。
+- **Lightweight Air Edition**: uses an all-in-one packaging mode that compiles all modules into one binary (process); one process is one blockchain node, containing all functional modules such as network, consensus, and access, with local RocksDB storage. It is suitable for beginners, functional verification, POC products, etc.
***
-- **Pro Edition**It consists of RPC, Gateway service, and multiple blockchain node services. Multiple node services can form a group. All nodes share access layer services. Access layer services can be extended in parallel. It is suitable for production environments with controllable capacity (within T level).。
+- **Pro Edition**: consists of RPC and Gateway services plus multiple blockchain node services. Multiple node services can form a group; all nodes share the access layer services, which can be scaled out in parallel.
It is suitable for production environments with controllable capacity (within the terabyte level).

-- **Large Capacity Max Edition**: Consists of all services at each layer, each service can be independently extended, storage using distributed storage TiKV, management using Tars-Framwork Services。It is suitable for scenarios where massive transactions are linked and a large amount of data needs to be stored on disk.。
+- **Large Capacity Max Edition**: consists of all services at each layer; each service can be scaled independently, storage uses the distributed store TiKV, and management uses Tars-Framework services. It is suitable for scenarios where massive transactions go on chain and large amounts of data must be stored on disk.

----------

```eval_rst
diff --git a/3.x/en/docs/design/network_compress.md b/3.x/en/docs/design/network_compress.md
index 85dc396a9..8d45caf10 100644
--- a/3.x/en/docs/design/network_compress.md
+++ b/3.x/en/docs/design/network_compress.md
@@ -1,14 +1,14 @@
-# 13. Network packet compression.
+# 13. Network packet compression

tags: "p2p network compression" "data compression"

----

-In the external network environment, the performance of the blockchain system is limited by the network bandwidth. In order to minimize the impact of the network bandwidth on the system performance, FISCO BCOS 3.0 supports the p2p network compression function in 'v3.1.0'.。The p2p network compression function is enabled by default and does not need to be controlled by a configuration item。
+In the external network environment, the performance of the blockchain system is limited by the network bandwidth.
To minimize the impact of network bandwidth on system performance, FISCO BCOS 3.0 supports p2p network compression as of `v3.1.0`. The p2p network compression function is enabled by default and is not controlled by a configuration item.

## System framework

-Network compression is mainly implemented at the P2P network layer, and the system framework is as follows.
+Network compression is mainly implemented at the P2P network layer; the system framework is as follows:

![](../../images/design/compress_architecture.png)

@@ -20,7 +20,7 @@ Network compression consists of two main processes:

## Core implementation

-Considering performance, compression efficiency, etc., we chose [Zstd](https://github.com/facebook/zstd)to achieve packet compression and decompression。This section focuses on the implementation of network compression.。
+Considering performance, compression efficiency, and other factors, we chose [Zstd](https://github.com/facebook/zstd) for packet compression and decompression. This section focuses on the implementation of network compression.

### Data compression flag bit

@@ -34,7 +34,7 @@ The network data packet mainly includes two parts: packet header and data. The p

- Version: Tagged packet version type, primarily for packet version compatibility
- packet type: Packet type marked
- Seq: Packet Sequence Number
-- ext: Used to extend the tag to the packet, such as whether the tag packet is of the respond type and whether it is compressed.
+- ext: extension flags for the packet, such as whether the packet is a response and whether it is compressed

**The network compression module only compresses network data, not packet headers。**

@@ -42,27 +42,27 @@ Considering that compressing and decompressing small data packets cannot save da

![](../../images/design/compress_flag.png)

-- Add compression tag: ext|= 0x0010
+- Set the compression flag: `ext |= 0x0010`
- Eliminate compression marks: ext & = ~ 0x0010

### Process Flow

-The following is an example in which a node node0 sends a p2p packet to another node node1 to describe the key processing flow of the p2p network compression module in detail.。
+The following example, in which node `node0` sends a p2p packet to node `node1`, describes the key processing flow of the p2p network compression module in detail.

**Send-side processing flow**

-- Node node0 passes packet to P2P layer;
+- Node `node0` passes the packet into the P2P layer;
- If P2P determines that the packet of the packet is greater than 'c _ compressThreshold' and the receiving node version supports compression (version 3.1 and later), it calls the compression interface to compress the payload data of the packet;
- The encoding module adds a packet header to the packet and updates the compression tag bit 'ext'。That is, if the packet is a compressed packet, the|= 0x0010`;
-- P2P transmits the encoded data packet to the destination node。
+- P2P delivers the encoded packet to the destination node.

**Receiving end processing flow**

-- After the target machine receives the packet, the decoding module separates the packet header and determines whether the network data is compressed by determining the ext field of the packet header, that is, 'm _ ext & 0x0010 = = 0x0010';
-- If the network packet has been compressed, the decompression interface is called to decompress the payload data.;
-- reset resets the ext flag bit, that is, 'm _ ext & = ~ 0x0010' to prevent the same packet
from being decompressed multiple times。
+- After the target node receives the packet, the decoding module separates the packet header and checks the header's ext field to determine whether the payload is compressed, i.e. `m_ext & 0x0010 == 0x0010`;
+- If the packet is compressed, the decompression interface is called to decompress the payload data;
+- The ext flag bit is then reset, i.e. `m_ext &= ~0x0010`, to prevent the same packet from being decompressed multiple times.

## Compatibility Description

- **Data Compatibility**Changes that do not involve stored data;
-- **Network compatible with rc1**: Forward compatible, only relase-3.1 and above nodes have network compression function。
+- **Network compatibility with rc1**: forward compatible; only release-3.1 and later nodes support network compression.
diff --git a/3.x/en/docs/design/p2p.md b/3.x/en/docs/design/p2p.md
index cca1b5e52..fa04a68c9 100644
--- a/3.x/en/docs/design/p2p.md
+++ b/3.x/en/docs/design/p2p.md
@@ -5,16 +5,16 @@ Tags: "Peer-to-Peer Network" "P2P Module" "Peer-to-Peer Network" "Status Synchro

----

## Design Objectives
-FISCO BCOS P2P module provides efficient, universal and secure network communication basic functions, supports unicast, multicast and broadcast of blockchain messages, supports blockchain node status synchronization, and supports multiple protocols。P2P network can be dynamically configured, networking can be dynamically configured;In addition, P2P networks ensure that a single node failure should not affect the entire network communication of the node.
+The FISCO BCOS P2P module provides efficient, universal, and secure basic network communication. It supports unicast, multicast, and broadcast of blockchain messages, blockchain node status synchronization, and multiple protocols. The P2P network and its topology can be configured dynamically; in addition, the P2P layer ensures that a single node failure does not affect communication across the rest of the network.

## P2P main function

-- Networking dynamic configurable
+- Dynamic network configuration

-P2P network can be dynamically configured, in the process of system operation to support nodes to dynamically join and exit, networking dynamic configuration.。
+The P2P network can be configured dynamically: while the system is running, nodes can join and exit dynamically and the network topology can be reconfigured.

-- Blockchain Node Identification
+- Blockchain node identity

-A blockchain node is uniquely identified by the blockchain node identifier. In FISCOBCOS, the node is uniquely identified by nodeID, and the blockchain node is addressed by the blockchain node identifier on the blockchain network.
+A blockchain node is uniquely identified by its node identifier. In FISCO BCOS this is the nodeID, and a node is addressed on the blockchain network by this identifier.

- Manage network connections

@@ -22,15 +22,15 @@ Maintain long TCP connections between blockchain nodes on the blockchain network

- Messaging

-Unicast, multicast, or broadcast messages between blockchain nodes in a blockchain network. Each message has a unique identifier.
+Unicast, multicast, or broadcast messages between blockchain nodes in a blockchain network.
Each message has a unique identifier

-- State Synchronization
+- Status synchronization

Synchronize status between blockchain nodes

-- Network Security
+- Network security

-The network module ensures that a single node failure does not affect the entire network communication of the node, and supports the node to restore the original networking function after the recovery of abnormal scenarios.
+The network module ensures that a single node failure does not affect communication across the rest of the network, and allows a node to rejoin the network after it recovers from an abnormal scenario

### Protocol Format

@@ -61,29 +61,29 @@ Options Format:

## Blockchain Node Identification

-The blockchain node identifier is generated by the public key of the ECC algorithm. Each blockchain node must have a unique ECC key pair. The blockchain node identifier uniquely identifies a blockchain node in the blockchain network.
+The blockchain node identifier is generated from the node's ECC public key. Each blockchain node must have a unique ECC key pair, and the identifier uniquely identifies the node in the blockchain network

Typically, to join a blockchain network, a node must prepare at least three files:

-- node.key node key, in ECC format
-- node.crt node certificate, issued by CA
-- ca.crt CA certificate, provided by CA
+- node.key: the node's private key, in ECC format
+- node.crt: the node certificate, issued by the CA
+- ca.crt: the CA certificate, provided by the CA institution

-In addition to the unique blockchain node identifier, blockchain nodes can also focus on topics for addressing.
+In addition to the unique blockchain node identifier, blockchain nodes can also subscribe to topics for addressing

Blockchain node addressing:

- Blockchain Node Identity Addressing
+- Blockchain node identity addressing

Locate a unique blockchain node in the blockchain network by using the blockchain node identifier

-- Topic Addressing
+- Topic addressing

-Use a topic to locate a group of nodes in the blockchain network that are interested in the topic.
+Use a topic to locate the group of nodes in the blockchain network that are interested in that topic

## Manage network connections

-A long TCP connection is automatically initiated and maintained between blockchain nodes. In the event of a system failure or network abnormality, a reconnection is initiated.
+A long-lived TCP connection is automatically initiated and maintained between blockchain nodes. In the event of a system failure or network abnormality, a reconnection is initiated
blockchain node B->>Blockchain Node A: Connection successful + blockchain node B->Blockchain Node A: Initiate SSL handshake + blockchain node A->>Blockchain Node A: Get public key from certificate as node ID + blockchain node B->>Blockchain Node B: Get public key from certificate as node ID + blockchain node B->Blockchain Node A: Handshake successful, SSL connection established ``` ## Messaging -Blockchain inter-node messages support unicast, multicast, and broadcast. Each message has a unique identifier. +Blockchain inter-node messages support unicast, multicast, and broadcast. Each message has a unique identifier - Unicast, where a single blockchain node sends a message to a single blockchain node, addressed by the blockchain node identity -- Multicast: A single blockchain node sends a message to a group of blockchain nodes, using topic addressing -- Broadcast, a single blockchain node sends a message to all blockchain nodes +- Multicast, where a single blockchain node sends a message to a group of blockchain nodes, addressed by topic +- Broadcast, where a single blockchain node sends a message to all blockchain nodes ### unicast process @@ -124,9 +124,9 @@ Blockchain inter-node messages support unicast, multicast, and broadcast. Each m participant blockchain node A participant blockchain node B - Blockchain Node A-> > Blockchain Node A: Filter online nodes based on the node ID - Blockchain Node A-> > Blockchain Node B: Send Message - Blockchain Node B-> > Blockchain Node A: Message Packet Back + blockchain node A->>Blockchain Node A: Filter online nodes based on the node ID + blockchain node A->>Blockchain Node B: Send Message + blockchain node B->>Blockchain Node A: Message Packet Back ``` @@ -141,14 +141,14 @@ Blockchain inter-node messages support unicast, multicast, and broadcast. 
Each m participant blockchain node C participant blockchain node D - Blockchain Node A-> > Blockchain Node A: Select nodes B and C according to Topic 1 - Blockchain Node A-> > Blockchain Node B: Send Message - Blockchain Node A-> > Blockchain Node C: Send Message - Blockchain Node B-> > Blockchain Node B: Select nodes C and D according to Topic 2 - Blockchain Node B-> > Blockchain Node C: Send Message - Blockchain Node B-> > Blockchain Node D: Send Message - Blockchain Node C-> > Blockchain Node C: Select node D according to Topic 3 - Blockchain Node C-> > Blockchain Node D: Send Message + blockchain node A->>Blockchain Node A: Select nodes B and C according to Topic 1 + blockchain node A->>Blockchain Node B: Send Message + blockchain node A->>Blockchain Node C: Send Message + blockchain node B->>Blockchain Node B: Select nodes C and D according to Topic 2 + blockchain node B->>Blockchain Node C: Send Message + blockchain node B->>Blockchain Node D: Send Message + Blockchain Node C->>Blockchain Node C: Select node D according to Topic 3 + Blockchain Node C->>Blockchain Node D: Send Message ``` @@ -163,21 +163,21 @@ Blockchain inter-node messages support unicast, multicast, and broadcast. 
Each m participant blockchain node C participant blockchain node D - Blockchain Node A-> > Blockchain Node A: Traverse all node IDs - Blockchain Node A-> > Blockchain Node B: Send Message - Blockchain Node A-> > Blockchain Node C: Send Message - Blockchain Node A-> > Blockchain Node D: Send Message - Blockchain Node B-> > Blockchain Node B: Traverse all node IDs - Blockchain Node B-> > Blockchain Node C: Send Message - Blockchain Node B-> > Blockchain Node D: Send Message - Blockchain Node C-> > Blockchain Node C: Traverse all node IDs - Blockchain Node C-> > Blockchain Node D: Send Message + blockchain node A->>Blockchain Node A: Traverse all node IDs + blockchain node A->>Blockchain Node B: Send Message + blockchain node A->>Blockchain Node C: Send Message + blockchain node A->>Blockchain Node D: Send Message + blockchain node B->>Blockchain Node B: Traverse all node IDs + blockchain node B->>Blockchain Node C: Send Message + blockchain node B->>Blockchain Node D: Send Message + Blockchain Node C->>Blockchain Node C: Traverse all node IDs + Blockchain Node C->>Blockchain Node D: Send Message ``` ## State Synchronization -Each node maintains its own state and broadcasts the Seq of the state regularly across the network to synchronize with other nodes. +Each node maintains its own state and broadcasts the Seq of the state regularly across the network to synchronize with other nodes ```eval_rst .. 
mermaid:: @@ -186,11 +186,11 @@ Each node maintains its own state and broadcasts the Seq of the state regularly participant blockchain node A participant blockchain node B - Blockchain Node A-> Blockchain Node B: broadcast seq - Blockchain Node A-> > Blockchain Node A: Determine whether the seq of node B changes - Blockchain Node A-> > Blockchain Node B: seq change, initiate status query request - Blockchain Node B-> > Blockchain Node A: Return Node Status - Blockchain Node A-> > Blockchain Node A: Update Node B's status and seq + blockchain node A->Blockchain Node B: broadcast seq + blockchain node A->>Blockchain Node A: Determine whether the seq of node B changes + blockchain node A->>Blockchain Node B: seq change, initiate status query request + blockchain node B->>Blockchain Node A: Return Node Status + blockchain node A->>Blockchain Node A: Update Node B's status and seq ``` @@ -210,35 +210,35 @@ Each node maintains its own state and broadcasts the Seq of the state regularly participant sdk [Publisher] - sdk [Subscriber]-> > Node 0: Subscribe to topic1, type: 0x32 + sdk [Subscriber]->>Node 0: Subscribe to topic1, type: 0x32 - Node 0-> > Node 0: Update topic list + Node 0 ->>Node 0: Update topic list - Node 1-> > Node 0: Request a list of topics + Node 1->>Node 0: Request a list of topics - Node 0--> > Node 1: Response topic list + Node 0 -->>Node 1: Response topic list - sdk [Publisher]-> > Node 1: Unicast message to topic1, type: 0x30 + sdk [Publisher]->>Node 1: Unicast message to topic1, type: 0x30 - Node 1-> > Node 0: Node forwarding message + Node 1->>Node 0: Node forwarding message - Node 0->>sdk [Subscriber]: Node forwarding message + Node 0 ->>sdk [Subscriber]: Node forwarding message - sdk [Subscriber]--> > Node 0: Packet back, type: 0x31 + sdk [Subscriber]-->>Node 0: Packet back, type: 0x31 - Node 0--> > Node 1: Node forwarding message + Node 0 -->>Node 1: Node forwarding message - Node 1-->>sdk [Publisher]:Node forwarding message + Node 1 ->>sdk 
[Publisher]:Node forwarding message ``` ```eval_rst .. note:: - - Unicast means that if multiple subscribers subscribe to the same topic, the node randomly selects a subscriber to push the message - - Message publishers and message subscribers need to choose the same topic - - The return packet after the subscriber receives the message is automatically sent by the sdk, which does not need to be handled by the user. The return packet only indicates that the subscriber has successfully received the message. - - Publisher receives an error code if there are no subscribers before Publisher pushes the message*100*indicating that no nodes are available in the network + -Unicast means that if multiple subscribers subscribe to the same topic, the node randomly selects a subscriber to push the message + - Message publishers and message subscribers need to select the same topic + -After the Subscriber receives the message, the packet is automatically sent by the sdk, and the user does not need to handle it himself. 
The packet only indicates that the Subscriber has successfully received the message + - Publisher will receive an error code if there are no subscribers before Publisher pushes the message*100*indicating that no nodes are available in the network ``` @@ -260,39 +260,39 @@ Each node maintains its own state and broadcasts the Seq of the state regularly participant sdk [Publisher] - sdk [Subscriber0]-> > Node 0: Subscribe to topic1, type: 0x32 + sdk [Subscriber0]->>Node 0: Subscribe to topic1, type: 0x32 - Node 0-> > Node 0: Update topic list + Node 0 ->>Node 0: Update topic list - Node 1-> > Node 0: Request a list of topics + Node 1->>Node 0: Request a list of topics - Node 0--> > Node 1: Response topic list + Node 0 -->>Node 1: Response topic list - Node 2-> > Node 0: Request a list of topics + Node 2->>Node 0: Request a list of topics - Node 0--> > Node 2: Response topic list + Node 0 -->>Node 2: Response topic list - sdk [Subscriber1]-> > Node 1: Subscribe to topic1, type: 0x32 + sdk [Subscriber1]->>Node 1: Subscribe to topic1, type: 0x32 - Node 1-> > Node 1: Update topic list + Node 1->>Node 1: Update topic list - Node 0-> > Node 1: Request a list of topics + Node 0 ->>Node 1: Request a list of topics - Node 1--> > Node 0: Response topic list + Node 1 ->>Node 0: Response topic list - Node 2-> > Node 1: Request a list of topics + Node 2->>Node 1: Request a list of topics - Node 1--> > Node 2: Response topic list + Node 1 ->>Node 2: Response topic list - sdk [Publisher]-> > Node 2: Multicast message to topic1, type: 0x35 + sdk [Publisher]->>Node 2: Multicast message to topic1, type: 0x35 - Node 2-> > Node 0: Node forwarding message + Node 2->>Node 0: Node forwarding message - Node 2-> > Node 1: Node forwarding message + Node 2->>Node 1: Node forwarding message - Node 2-->>sdk [Publisher]: Packet back, type: 0x31 + Node 2 ->>sdk [Publisher]: Packet back, type: 0x31 - Node 0->>sdk [Subscriber0]: Node forwarding message + Node 0 ->>sdk [Subscriber0]: Node forwarding 
message
 Node 1->>sdk [Subscriber1]: Node forwarding message
@@ -301,7 +301,7 @@ Each node maintains its own state and broadcasts the Seq of the state regularly
```eval_rst
.. note::
- - Multicast means that if multiple subscribers subscribe to the same topic, the node pushes messages to all subscribers.
- - As long as the network is normal, Publisher can receive the response packet that the node message is pushed successfully even if no subscriber receives the message.
+ - Multicast means that if multiple subscribers subscribe to the same topic, the node pushes messages to all subscribers
+ - As long as the network is normal, Publisher can receive the response packet indicating that the node pushed the message successfully, even if no Subscriber receives the message
```
\ No newline at end of file
diff --git a/3.x/en/docs/design/parallel/DMC.md b/3.x/en/docs/design/parallel/DMC.md
index 611d49cf8..5d85bb642 100644
--- a/3.x/en/docs/design/parallel/DMC.md
+++ b/3.x/en/docs/design/parallel/DMC.md
@@ -6,29 +6,29 @@ Tags: "Execute" "Parallel Scheduling" "DMC" "Deterministic Parallel Contracts"
 ## 1. Background
-Nowadays, multi-core has gradually become the mainstream of today's CPU. In the future, CPU may integrate more cores and enter the era of many cores.。
+Nowadays, multi-core CPUs have gradually become mainstream. In the future, CPUs may integrate even more cores and enter the many-core era。
-Blockchain In order to ensure transaction transactionality, transactions are serialized and thoroughly serialized, first sort transactions, and then execute smart contracts with a single thread to avoid transaction confusion, data conflicts, etc.
caused by out-of-order execution.。Even if a server has a multi-core CPU, the operating system supports multi-threaded multi-process, and there are multiple nodes and multiple servers in the network, all transactions are methodically and strictly single-threaded on each computer.。
+To guarantee transactionality, blockchains serialize transaction execution thoroughly: transactions are first sorted, and then smart contracts are executed in a single thread, avoiding the transaction confusion and data conflicts caused by out-of-order execution。Even if a server has a multi-core CPU, the operating system supports multiple threads and processes, and the network contains multiple nodes and servers, all transactions are still executed methodically and strictly single-threaded on each computer。
-In FISCO BCOS version 2.0, a DAG parallel solution was introduced to convert the linear execution of transactions into parallel execution of DAG graphs.。In the practical application of the DAG parallel solution, the blockchain can organize a transaction dependency graph based on the mutually exclusive resources that need to be used when each transaction is executed (mutually exclusive means exclusive use of resources, for example, in the above-mentioned transfer problem mutually exclusive resources, it refers to the balance status of each account).
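The dependency-graph idea described here can be sketched in a few lines (an illustrative Python sketch with invented names, not FISCO BCOS's actual C++ implementation): each transaction declares the mutually exclusive resources it touches (the account balances in the transfer example), a dependency edge is drawn to every earlier transaction that shares a resource, and all transactions whose in-degree is 0 can run in parallel:

```python
from collections import defaultdict

def build_tx_dag(txs):
    """txs: list of (tx_id, {mutex resources}). A later transaction depends on
    every earlier transaction that touches one of the same resources."""
    in_degree = {tx_id: 0 for tx_id, _ in txs}
    out_edges = defaultdict(list)
    last_user = {}  # resource -> id of the last transaction that touched it
    for tx_id, resources in txs:
        for dep in {last_user[r] for r in resources if r in last_user}:
            out_edges[dep].append(tx_id)
            in_degree[tx_id] += 1
        for r in resources:
            last_user[r] = tx_id
    return in_degree, out_edges

def parallel_batches(in_degree, out_edges):
    """Topologically peel off batches; each batch could execute in parallel."""
    ready = [t for t, d in in_degree.items() if d == 0]
    batches = []
    while ready:
        batches.append(ready)
        nxt = []
        for t in ready:
            for succ in out_edges[t]:
                in_degree[succ] -= 1
                if in_degree[succ] == 0:
                    nxt.append(succ)
        ready = nxt
    return batches

# Transfer example: each transaction's mutex resources are the two account balances it touches.
txs = [(1, {"A", "B"}), (2, {"C", "D"}), (3, {"A", "C"}), (4, {"E", "F"})]
print(parallel_batches(*build_tx_dag(txs)))  # [[1, 2, 4], [3]]
```

In this toy input, transfers 1, 2 and 4 touch disjoint accounts and form the first parallel batch, while transaction 3 conflicts with both 1 and 2 and must wait for them.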
In order to prevent the transaction dependency relationship from being looped in the graph, we can specify that the transaction in the transaction list involves the same mutually exclusive。
+In FISCO BCOS version 2.0, a DAG parallel solution was introduced to convert the linear execution of transactions into parallel execution over a DAG graph。In practical applications of the DAG parallel solution, the blockchain can organize a transaction dependency graph based on the mutually exclusive resources that each transaction needs to use when it is executed (mutually exclusive means exclusive use of a resource; in the above-mentioned transfer problem, the mutually exclusive resources refer to the balance state of each account). To prevent the transaction dependency relationship from forming a cycle in the graph, we can specify that transactions in the transaction list that involve the same mutually exclusive resources are executed in their original order。
 Ideally, all transactions can be parallelized, and the boost of DAG parallelism is equal to the number of cores in the system。
 However, any advanced technology has its scope and limitations, and DAG still has improvement points in specific scenarios:
-1. DAG parallel requires the blockchain user to provide the mutual exclusion of all resources in advance, in order to analyze the feasible parallel path from the transaction through the mutual exclusion relationship, the analysis of the mutual exclusion relationship is a complex process for the user, once the analysis error, will lead to the serious consequences of inconsistency and consensus in the blockchain.。
-1. DAG and behavior to ensure sequential consistency, the selection of mutually exclusive relations must follow the most conservative strategy, when the contract logic is complex, involving a large number of resource access, DAG is difficult to analyze a feasible parallel path, and finally equivalent to serial execution.。
+1.
DAG parallelism requires the blockchain user to provide the mutual exclusion relations of all resources in advance, so that a feasible parallel path can be analyzed from the transactions through those relations. Analyzing the mutual exclusion relations is a complex process for the user, and an incorrect analysis will lead to serious consequences such as inconsistent state and broken consensus in the blockchain。
+1. To ensure sequential consistency, DAG must select mutually exclusive relations with the most conservative strategy; when the contract logic is complex and involves a large number of resource accesses, DAG can hardly find a feasible parallel path and ultimately degenerates into serial execution。
## 2. Deterministic Multi-Contract Parallel Scheme (DMC)
-The core idea of the deterministic multi-contract parallel scheme (DMC) based on parallel scheduling is to ensure the certainty of mutually exclusive resource access during parallel transaction execution, achieving the following goals.
-- Easy to use: The bottom layer of the blockchain automatically enables parallelism, eliminating the need for users to pay attention to parallel logic and providing conflict fields in advance.。
-- Efficient: transactions within blocks are not executed repeatedly, with no pre-execution, pre-analysis or retry processes。
-- Compatible: This solution can be used regardless of EVM, WASM, Precompiled, or other contracts using any consensus mechanism。
+The core idea of the deterministic multi-contract parallel scheme (DMC) based on parallel scheduling is to ensure the certainty of mutually exclusive resource access during parallel transaction execution, achieving the following goals:
+- Easy to use: the bottom layer of the blockchain enables parallelism automatically; users do not need to care about parallel logic or provide conflict fields in advance。
+- Efficient: transactions within the block are not executed repeatedly; there is no pre-execution, pre-analysis or retry process。
+- Compatible: the scheme applies regardless of whether the contract is EVM, WASM, Precompiled or anything else, and under any consensus mechanism。
-The DMC solution first requires that there is no shared data between smart contracts in the blockchain, each contract has an independent storage space, and other contracts cannot be read and written.。When executing a transaction, DMC splits the different smart contract code blocks called by all transactions in the block
into multiple code segments, and the code blocks of multiple different smart contracts are executed in a staggered manner, with the boundaries of the split being cross-contract calls and mutually exclusive resource access。 -In the DMC scheme, multiple transactions calling the same contract are always serial in a global perspective, and DMC allows multiple transactions calling different contracts to be executed in parallel, and since there is no shared data between smart contracts, parallel execution between different contracts can always ensure the consistency of the final result.。When any smart contract makes a cross-contract call or accesses a locked mutex, the DMC suspends the execution of the transaction and waits for all other transactions in the current phase to complete, or for cross-contract calls and access to locked mutex resources. This waiting process is called global synchronization.。For each global synchronization, DMC allocates cross-contract calls and mutually exclusive resource access to transactions according to fixed rules to ensure that the same contract and mutually exclusive resources are not accessed in parallel, and ultimately to ensure the consistency of execution results while achieving parallelism.。 +In the DMC scheme, multiple transactions calling the same contract are always serial in a global perspective, and DMC allows multiple transactions calling different contracts to be executed in parallel, and since there is no shared data between smart contracts, parallel execution between different contracts can always ensure the consistency of the final result。When any smart contract makes a cross-contract call or accesses a locked mutex, the DMC suspends the execution of the transaction and waits for all other transactions in the current phase to complete, or for cross-contract calls and access to locked mutex resources. 
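This staged scheduling can be sketched roughly as follows (an illustrative Python sketch with invented names, not the actual FISCO BCOS scheduler, and omitting cross-contract call messages): in each stage the scheduler serially claims each message's destination contract in a hash set, the first message to claim a contract executes, and later messages for the same contract are deferred to the next stage; messages claiming different contracts run in parallel:

```python
def dmc_schedule(messages):
    """messages: list of (msg_id, to_contract). Returns the list of stages;
    every message within one stage targets a distinct contract, so the stage
    can execute in parallel without touching shared data."""
    stage, stages = list(messages), []
    while stage:
        claimed, executing, deferred = set(), [], []
        for msg_id, to in stage:          # serial traversal makes the outcome deterministic
            if to not in claimed:
                claimed.add(to)
                executing.append(msg_id)  # sent asynchronously to some transaction executor
            else:
                deferred.append((msg_id, to))
        stages.append(executing)
        # (new cross-contract call messages generated while executing this stage
        # would also be appended to `deferred` here)
        stage = deferred
    return stages

# Six messages over three contracts, as in the walkthrough below.
msgs = [(1, "C1"), (2, "C1"), (3, "C2"), (4, "C3"), (5, "C2"), (6, "C3")]
print(dmc_schedule(msgs))  # [[1, 3, 4], [2, 5, 6]]
```

With six transactions over three contracts this yields stage 1 = messages 1, 3, 4 and stage 2 = messages 2, 5, 6, matching the phase-by-phase description in this document.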
This waiting process is called global synchronization。For each global synchronization, DMC allocates cross-contract calls and mutually exclusive resource access to transactions according to fixed rules to ensure that the same contract and mutually exclusive resources are not accessed in parallel, and ultimately to ensure the consistency of execution results while achieving parallelism。
## 3. Brief description of DMC process
@@ -36,27 +36,27 @@ In the DMC scheme, multiple transactions calling the same contract are always se
The user inputs a list of transactions to the blockchain system, including transaction 1 to transaction 6。The blockchain system packages the transactions into a block and executes consensus; the transaction list within the block has a determined order. Assume six transactions are entered, from **Transaction 1 to Transaction 6**; each transaction invokes one of **Contract 1 to Contract 3**。
**Phase 1**
-- During the consensus process, the blockchain system inputs the block to the DMC scheduler. The DMC scheduler generates an initial phase 1 based on the transaction list in the block and converts all transactions in the block into messages. Phase 1 contains messages corresponding to all transactions in the block.-6。
-- At the beginning of each phase, the DMC scheduler constructs a hash table, sequentially and serially traverses all messages in the current phase, extracts the destination contract (To) field of the message, and puts the destination contract field into the hash table.
- - If the destination contract field is successfully placed in the hash table, the message invokes the destination contract for the first time, and the DMC scheduler asynchronously sends the message to any trade executor to execute the message;
- - If there is a conflict when the destination contract field is placed in the hash table, it means that the destination contract for the message call has been called earlier, and the DMC scheduler moves the message to the next stage (stage 2).;
+- During the consensus process, the blockchain system inputs the block to the DMC scheduler. The DMC scheduler generates the initial stage 1 according to the transaction list in the block and converts all transactions in the block into messages, so stage 1 contains the messages corresponding to all transactions in the block. Each message contains all fields of the transaction plus additional context information, which is initially empty by default (for example, messages 1-6)。
+- At the beginning of each phase, the DMC scheduler constructs a hash table, sequentially traverses all messages in the current phase, extracts the destination contract (To) field of each message, and puts the destination contract field into the hash table:
+  - If the destination contract field is successfully placed in the hash table, it means that the destination contract called by the message is called for the first time, and the DMC scheduler asynchronously sends the message to any transaction executor to execute the message;
+  - If there is a conflict when the destination contract field is placed in the hash table, it means that the destination contract called by the message has already been called by an earlier message, and the DMC scheduler moves the message to the next stage (stage 2);
 ![](../../../images/design/dmc_stage1.png)
**Phase 2**
-- The DMC scheduler traverses the messages in phase 1 and finds that message 2 and message 1 invoke the
same contract, message 3 and message 5 invoke the same contract, and message 4 and message 6 invoke the same contract. The DMC scheduler asynchronously sends the messages (message 1, message 3, and message 4) that invoke the contract for the first time to multiple transaction executors for execution, and the others (message 2, message 5, and message 6) move to the。
-- Message 1, message 3, and message 4, because they call different contracts respectively, multiple transaction executors can execute these three messages in parallel, and the DMC scheduler will wait for the execution of messages 1, 3, and 4 to complete。
+- The DMC scheduler traverses the messages in stage 1 and finds that message 2 and message 1 call the same contract, message 3 and message 5 call the same contract, and message 4 and message 6 call the same contract. The DMC scheduler asynchronously sends the messages (message 1, message 3, message 4) that call their contract for the first time to multiple transaction executors for execution, and the others (message 2, message 5, message 6) move to the next stage。
+- Because message 1, message 3 and message 4 call different contracts, multiple transaction executors can execute these 3 messages in parallel, and the DMC scheduler waits for the execution of messages 1, 3 and 4 to complete。
 ![](../../../images/design/dmc_stage2.png)
**Phase 3**
-- In the process of executing a message, if a cross-contract call occurs within the smart contract corresponding to the message, the transaction executor suspends the execution of the current message, saves the context of the message execution, and generates a new cross-contract call message. The new message contains the previously saved context.
The transaction executor sends the new message of the cross-contract call to the DMC scheduler.。
-- The DMC scheduler moves the cross-contract call message from the trade executor to the next stage.。
-- If the DMC scheduler executes message 2 in phase 2 and the smart contract invoked by the message initiates a cross-contract invocation, and the destination contract of the cross-contract invocation is 3, the transaction executor suspends the execution of message 2, saves the context of message 2, generates a new cross-contract invocation message 2 ', and the transaction executor sends the cross-contract invocation message 2' to the DMC scheduler.。The DMC scheduler puts message 2 'into stage 3, and the DMC scheduler then sends the message to other transaction executors in stage 3.。
+- In the process of executing a message, if a cross-contract call occurs within the smart contract corresponding to the message, the transaction executor suspends the execution of the current message, saves the context of the message execution, and generates a new cross-contract call message containing the saved context, which it sends to the DMC scheduler。
+- The DMC scheduler moves the cross-contract call message from the transaction executor to the next stage。
+- For example, if the DMC scheduler executes message 2 in stage 2 and the smart contract called by the message initiates a cross-contract call whose destination is contract 3, the transaction executor suspends the execution of message 2, saves the context of message 2, generates a new cross-contract call message 2', and sends message 2' to the DMC scheduler。The DMC scheduler puts message 2' into stage 3 and later sends it to other transaction executors in stage 3。
 ![](../../../images/design/dmc_stage3.png)
**Phase 4**
-- After the transaction executor finishes executing the message, if the message contains a context, which is equivalent to a cross-contract call return, the transaction executor restores the context based
on the context in the message, generates a new message, and puts the result of the current message execution as the cross-contract call return value in the parameters of the new message, and sends the new message returned by the cross-contract call to the DMC scheduler.。
+- After the transaction executor finishes executing a message, if the message contains a context (which is equivalent to returning from a cross-contract call), the transaction executor restores that context, generates a new message, puts the result of the current message execution into the parameters of the new message as the cross-contract call return value, and sends the new message back to the DMC scheduler。
 ![](../../../images/design/dmc_stage4.png)
-- The DMC scheduler moves the cross-contract call message from the trade executor to the next stage.。
-- If the DMC scheduler finds that the next stage is not empty, it will set the current stage as the next stage and repeat the initial steps, and so on, until all transactions are executed。
\ No newline at end of file
+- The DMC scheduler moves the cross-contract call message from the transaction executor to the next stage。
+- If the DMC scheduler finds that the next stage is not empty, it sets the next stage as the current stage and repeats the initial steps, and so on, until all transactions are completed。
\ No newline at end of file
diff --git a/3.x/en/docs/design/parallel/dag.md b/3.x/en/docs/design/parallel/dag.md
index a42e45c6d..df44cbaff 100644
--- a/3.x/en/docs/design/parallel/dag.md
+++ b/3.x/en/docs/design/parallel/dag.md
@@ -8,7 +8,7 @@ Tag: "transaction parallel" "" DAG ""
 ### 1.1 DAG
-An acyclic directed graph is called a directed acyclic graph (**D**irected **A**cyclic **G**raph), or DAG diagram for short。In a batch of transactions, you can identify the mutually exclusive resources that each transaction needs to occupy by a certain method, and then construct a transaction dependency DAG
diagram according to the order of transactions in the Block and the occupancy of mutually exclusive resources, as shown in the figure below, all transactions with an entry of 0 (no dependent pre-order tasks) can be executed in parallel.。As shown in the figure below, after topological sorting based on the order of the original transaction list on the left, you can get the transaction DAG on the right。
+A directed graph without cycles is called a directed acyclic graph (**D**irected **A**cyclic **G**raph), or DAG for short。In a batch of transactions, the mutually exclusive resources that each transaction needs to occupy can be identified by some method, and a transaction dependency DAG can then be constructed according to the order of transactions in the block and the occupancy of mutually exclusive resources; all transactions with an in-degree of 0 (no pending predecessor tasks) can be executed in parallel。As shown in the figure below, after topological sorting based on the order of the original transaction list on the left, you can get the transaction DAG on the right。
 ![](../../../images/parallel/DAG.png)
@@ -17,12 +17,12 @@ An acyclic directed graph is called a directed acyclic graph (**D**irected **A**
 ![](../../../images/parallel/architecture.png)
 The main processes include:
-- Users initiate transactions directly or indirectly through the SDK。Transactions can be transactions that can be executed in parallel and transactions that cannot be executed in parallel.;
-- The transaction enters the node's transaction pool and waits for packaging;
-- The transaction is packaged into blocks by Sealer and sent to BlockVerifier for verification after consensus.;
+- Users initiate transactions directly or indirectly through the SDK。A transaction may or may not be executable in parallel;
+- The transaction enters the transaction pool of the node and waits to be packaged;
+- The
transaction is packaged as a block by Sealer and sent to BlockVerifier for verification after consensus;
 - BlockVerifier generates a transaction DAG based on a list of transactions in a block;
-- BlockVerifier constructs the execution context and executes the transaction DAG in parallel;
-- After the block is verified, the blockchain。
+- BlockVerifier constructs the execution context and executes the transaction DAG in parallel;
+- After the block is verified, the block is committed to the chain。
## 3 Important processes
@@ -33,34 +33,34 @@ The main processes include:
 The DAG data structure used in the scheme is as follows:
 ![](../../../images/parallel/TxDAG.png)
 Among them:
-- Vertex
- - inDegree is used to store the current in-degree of the vertex;
- - OutEdge is used to store the outgoing edge information of the vertex, specifically the ID list of all vertices connected to the outgoing edge。
+- Vertex:
+  - inDegree is used to store the current in-degree of the vertex;
+  - outEdge is used to save the outgoing edge information of the vertex, specifically the ID list of all vertices connected by its outgoing edges。
 - DAG:
- - vtxs is used to store a list of all nodes in the DAG;
- - topLevel is a concurrent queue used to store the ID of the node with the current entry of 0, which is accessed concurrently by multiple threads during execution.;
- - totalVtxs: Total number of vertices
- - totalConsume: Total number of vertices that have been executed;
+  - vtxs is used to store a list of all nodes in the DAG;
+  - topLevel is a concurrent queue, used to store the IDs of nodes whose current in-degree is 0.
It can be accessed by multiple threads concurrently during execution;
+  - totalVtxs: total number of vertices;
+  - totalConsume: total number of vertices that have been executed;
 - void init(uint32_t \_maxSize): Initializes a DAG with a maximum vertex number of maxSize;
 - void addEdge(ID from, ID to): Create a directed edge between vertices from and to;
 - void generate(): Construct a DAG structure based on existing edges and vertices;
 - ID waitPop(bool needWait): Wait for a node with an in-degree of 0 to be taken out of topLevel;
 - void clear(): Clear all node and edge information in the DAG。
 - TxDAG:
- - dag: DAG instance
- - exeCnt: Count of transactions that have been executed;
- - totalParaTxs: Total number of parallel transactions;
- - txs: List of parallel transactions
 - bool hasFinished(): Returns true if the entire DAG has been executed; otherwise returns false;
- - void executeUnit(): Take out a transaction with no upper layer dependencies and execute it.;
+  - dag: DAG instance;
+  - exeCnt: count of transactions that have been executed;
+  - totalParaTxs: total number of parallel transactions;
+  - txs: list of parallel transactions;
+  - void executeUnit(): Takes out a transaction with no upper-layer dependencies and executes it;
#### 3.1.2 Transaction DAG Construction Process
The process is as follows:
![](../../../images/parallel/dag_construction.png)
-1. All transactions in the block are removed from the packaged block.;
-2. Initialize a DAG instance with the number of transactions as the maximum number of vertices.;
+1. Take all transactions out of the packaged block;
+2. Initialize a DAG instance with the number of transactions as the maximum number of vertices;
3.
Read out all transactions in order, and if a transaction is parallelizable, parse its conflict domain and check if there are previous transactions that conflict with that transaction, and if so, construct dependent edges between the corresponding transactions; if the transaction is not parallelizable, it is considered that it must be executed after all of the preceding transactions have been executed, thus creating a dependent edge between the transaction and all of its preceding transactions。
### 3.2 DAG execution process
@@ -69,5 +69,5 @@ The process is as follows:
 ![](../../../images/parallel/execution.png)
-1. The main thread will first initialize a thread group of the corresponding size according to the number of hardware cores, and if the number of hardware cores fails, no other threads will be created.;
-2. When the DAG has not yet been executed, the thread loop waits for a transaction with a pop in and out degree of 0 from the DAG.。If the transaction to be executed is successfully taken out, the transaction is executed, and after execution, the entry of subsequent dependent tasks is reduced by 1, and if the entry of the transaction is reduced to 0, the transaction is added to the topLevel.;If it fails, the DAG has been executed and the thread exits。
\ No newline at end of file
+1. The main thread first initializes a thread group whose size matches the number of hardware cores; if obtaining the number of hardware cores fails, no other threads are created;
+2.
While the DAG has not finished executing, each thread loops waiting to pop a transaction with an in-degree of 0 from the DAG。If a transaction is successfully taken out, it is executed; after execution, the in-degree of each subsequent dependent task is reduced by 1, and any task whose in-degree drops to 0 is added to topLevel; if the pop fails, the whole DAG has finished executing and the thread exits。
\ No newline at end of file
diff --git a/3.x/en/docs/design/parallel/group.md b/3.x/en/docs/design/parallel/group.md
index 04b064548..2596dd9f3 100644
--- a/3.x/en/docs/design/parallel/group.md
+++ b/3.x/en/docs/design/parallel/group.md
@@ -4,7 +4,7 @@ Tags: "Group Schema" "Schema"
 ----
-Considering the needs of real business scenarios, FISCO BCOS introduces a multi-group architecture, which supports blockchain nodes to start multiple groups, and isolates transaction processing, data storage, and block consensus between groups, ensuring the privacy of the blockchain system while reducing the complexity of system operation and maintenance.。Transactions between different groups can be executed in parallel, improving performance。
+Considering the needs of real business scenarios, FISCO BCOS introduces a multi-group architecture, which supports blockchain nodes in starting multiple groups, and isolates transaction processing, data storage, and block consensus between groups, ensuring the privacy of the blockchain system while reducing the complexity of system operation and maintenance。Transactions between different groups can be executed in parallel, improving performance。
 ```eval_rst
@@ -12,13 +12,13 @@ Considering the needs of real business scenarios, FISCO BCOS introduces a multi-
 For example:
- All nodes of institutions A, B, and C form a blockchain network to run business 1;After a period of time, institutions A and B start business 2 and do not want the data and transaction processing related to that business to be perceived
by institution C.?
+ All nodes of institutions A, B, and C form a blockchain network to run business 1; after a period of time, institutions A and B start business 2 and do not want the data and transaction processing related to that business to be perceived by institution C?
 - **1.3 Series FISCO BCOS System**: Agency A and Agency B re-establish a chain to run business 2; the O & M administrator needs to maintain two chains and two sets of ports
 - **FISCO BCOS 2.0+**: Institution A and Institution B create a new group to run business 2; the O & M administrator only needs to maintain one chain
- It is clear that FISCO BCOS 2.0 is based on meeting the same privacy protection needs.+Better scalability, operability and flexibility。
+ It is clear that FISCO BCOS 2.0, while meeting the same privacy protection needs, offers better scalability, operability and flexibility。
```
In a multi-group architecture, the network is shared between groups, and [network access and ledger whitelists](../security_control/node_management.md) achieve network message isolation between ledgers。
@@ -26,13 +26,13 @@ In a multi-group architecture, the network is shared between groups through [net
 ![](../../../images/parallel/ledger.png)
-Data isolation between groups, each group runs its own consensus algorithm independently, and different groups can use different consensus algorithms.。From the bottom up, each ledger module mainly includes three layers: core layer, interface layer and scheduling layer. These three layers cooperate with each other, and FISCO BCOS can ensure that a single group runs independently and robustly.。
+Data is isolated between groups; each group runs its own consensus algorithm independently, and different groups can use different consensus algorithms。From the bottom up, each ledger module mainly includes three layers: core layer, interface layer and scheduling layer.
These three layers cooperate with each other, and FISCO BCOS can ensure that a single group runs independently and robustly。
## Core Layer
-The core layer is responsible for putting [blocks] of the group(../../tutorial/key_concepts.html#id3)Data, block information, system tables, and block execution results are written to the underlying database.。
+The core layer is responsible for writing the group's [block](../../tutorial/key_concepts.html#id3) data, block information, system tables, and block execution results into the underlying database。
-Storage is divided into world states(State)and distributed storage(AMDB)Two parts, the world state includes MPTState and StorageState, responsible for storing the status information of transaction execution, StorageState performance is higher than MPTState, but does not store block history information.;AMDB exposes simple queries to the outside(select)Submitted(commit)and update(update)Interface, responsible for operating contract tables, system tables and user tables, with pluggable features, the backend can support a variety of database types, currently supports [RocksDB database](https://github.com/facebook/rocksdb)and MySQL [storage](../storage/storage.md)。
+Storage is divided into world
states (State) and distributed storage (AMDB). The world state includes MPTState and StorageState and is responsible for storing the status information of transaction execution; StorageState has higher performance than MPTState but does not store block history information。AMDB exposes simple query (select), commit (commit) and update (update) interfaces, is responsible for operating contract tables, system tables and user tables, and is pluggable: the backend can support a variety of database types, currently the [RocksDB database](https://github.com/facebook/rocksdb) and MySQL [storage](../storage/storage.md)。
 ![](../../../images/parallel/storage.png)
@@ -41,18 +41,18 @@ Storage is divided into world states(State)and distributed storage(AMDB)Two part
The interface layer includes three modules: the transaction pool (TxPool), the blockchain (BlockChain) and the block executor (BlockVerifier)。
-- **Trading pool(TxPool)**: Interact with the network layer and scheduling layer, responsible for caching transactions broadcast by clients or other nodes, scheduling layer(Mainly synchronization and consensus modules)Remove transactions from the transaction pool for broadcast or block packaging.;
+- **Transaction pool (TxPool)**: interacts with the network layer and the scheduling layer; it caches transactions broadcast by clients or other nodes, and the scheduling layer (mainly the synchronization and consensus modules) takes transactions out of the transaction pool for broadcast or block packaging;
-- **Blockchain(BlockChain)**: Interacts with the core layer and the scheduling layer, which is the only entry for the scheduling layer to access the underlying storage.(Synchronization, consensus module)You can query the block height, obtain the specified block, and submit the block through the block link port.;
+- **Blockchain (BlockChain)**: interacts with the core layer and the scheduling layer, and is the only entry for the scheduling layer (synchronization and consensus modules) to access the underlying storage; through the BlockChain interface one can query the block height, obtain a specified block, and submit blocks;
-- **Block Executor(BlockVerifier)**: Interacts with the scheduling layer to execute the blocks passed in from the scheduling layer and returns the block execution results to the scheduling layer.。
+- **Block executor (BlockVerifier)**: interacts with the scheduling layer; it executes the blocks passed in from the scheduling layer and returns the block execution results to the scheduling layer。
## Scheduling Layer
The scheduling layer includes the consensus module (Consensus) and the synchronization module (Sync)。
-- **Consensus Module**Includes Sealer and Engine threads, responsible for packaging transactions
and executing consensus processes, respectively。Sealer thread from trading pool(TxPool)Take the transaction and package it into a new block;The Engine thread executes the consensus process. The consensus process executes the block. After the consensus is successful, the block and the block execution result are submitted to the blockchain.(BlockChain)The blockchain uniformly writes this information to the underlying storage, triggers the transaction pool to delete all transactions contained in the blockchain, and notifies the client of the transaction execution results in the form of callbacks. Currently, FISCO BCOS mainly supports [PBFT](../consensus/pbft.md)and [Raft](../storage/storage.md)consensus algorithm;
+- **Consensus module**: includes the Sealer and Engine threads, responsible for packaging transactions and running the consensus process, respectively. The Sealer thread takes transactions from the transaction pool (TxPool) and packages them into a new block; the Engine thread runs the consensus process, during which the block is executed. Once consensus succeeds, the block and its execution result are submitted to the blockchain (BlockChain), which writes them to the underlying storage, triggers the transaction pool to delete all transactions contained in the block, and notifies the client of the transaction execution results via callbacks. Currently, FISCO BCOS mainly supports the [PBFT](../consensus/pbft.md) and [Raft](../storage/storage.md) consensus algorithms;

- **synchronization module**: responsible for broadcasting transactions and getting the latest blocks,

-Given the consensus process, [leader](../consensus/pbft.html#id1)It is responsible for packaging blocks, and the leader may switch at any time. Therefore, it is necessary to ensure that the client's transactions are sent to each blockchain node as much as possible.
-After the node receives new transactions, the synchronization module broadcasts these new transactions to all other nodes.;Considering that inconsistent machine performance in the blockchain network or the addition of new nodes will cause the block height of some nodes to lag behind that of other nodes, the synchronization module provides the block synchronization function, which sends the latest block height of its own node to other nodes, and other nodes will actively download the latest block when they find that the block height lags behind that of other nodes.。
\ No newline at end of file
+Given the consensus process, the [leader](../consensus/pbft.html#id1) is responsible for packaging blocks, and the leader may switch at any time, so the client's transactions must reach every blockchain node as far as possible. After a node receives new transactions, the synchronization module broadcasts them to all other nodes. Because uneven machine performance in the blockchain network, or newly added nodes, can leave some nodes with a block height lower than that of their peers, the synchronization module also provides a block synchronization function: each node announces its latest block height to the other nodes, and a node that finds its block height lagging actively downloads the latest blocks.
\ No newline at end of file
diff --git a/3.x/en/docs/design/parallel/index.md b/3.x/en/docs/design/parallel/index.md
index 895d9da5e..efecf7e81 100644
--- a/3.x/en/docs/design/parallel/index.md
+++ b/3.x/en/docs/design/parallel/index.md
@@ -4,7 +4,7 @@ Tags: "Execute" "Parallel Scheduling" "DMC" "DAG"

----------

-FISCO BCOS has a comprehensive parallel processing design to improve transaction processing performance.。According to**parallel granularity**From fine to coarse division, its parallel mechanism can be divided into:
+FISCO BCOS has a comprehensive
parallel processing design to improve transaction processing performance. Ranked by **parallel granularity** from fine to coarse, its parallel mechanisms can be divided into:

* **Transaction**-level parallelism: DAG transaction parallelism, pipeline parallelism
diff --git a/3.x/en/docs/design/parallel/pipeline.md b/3.x/en/docs/design/parallel/pipeline.md
index 1bcaddf8f..893730d52 100644
--- a/3.x/en/docs/design/parallel/pipeline.md
+++ b/3.x/en/docs/design/parallel/pipeline.md
@@ -2,32 +2,32 @@

## Introduction

-The latest parallel architecture of FISCO BCOS is pipelined parallel architecture, which is an experimental function. Subsequent versions may have modifications that are not forward compatible.。
+The latest parallel architecture of FISCO BCOS is the pipelined parallel architecture. It is an experimental feature, and later versions may introduce changes that are not forward compatible.

Advantages of the pipeline architecture:

-- The execution results of the pipeline architecture are consistent with the serial execution results, making it easy to find problems.。
-- Pipeline architecture improves performance without special preconditions, and other parallel schemes have preconditions.。
-- The theory of pipeline architecture has been thoroughly studied in computer system theory, and there are many tried-and-tested optimization measures that can be directly applied to the blockchain.。
+- The execution results of the pipeline architecture are identical to serial execution results, which makes problems easy to locate.
+- The pipeline architecture improves performance without special preconditions, whereas other parallel schemes require them.
+- Pipelining has been studied thoroughly in computer architecture, so many battle-tested optimizations can be applied directly to the blockchain.

FISCO BCOS supports two pipeline execution architectures, scalar pipeline and superscalar pipeline, suitable for different scenarios.

- Scalar pipeline
splits transactions into three steps and executes them in parallel; regardless of the scenario, it improves performance by at least 20% compared with normal serial execution.
-- The superscalar pipeline will try to execute more transactions in parallel, and the performance improvement depends on the amount of conflicts between the transaction read and write data. When the number of conflicts is small, the performance can be improved by more than 300% compared to ordinary serial, and when the number of conflicts is large, the performance is lower than ordinary serial.。
+- The superscalar pipeline tries to execute more transactions in parallel. The performance gain depends on how often the read/write sets of transactions conflict: with few conflicts, performance can improve by more than 300% over ordinary serial execution; with many conflicts, performance drops below ordinary serial execution.

## Conditions of use

The pipeline executor must be enabled manually. With the pipeline architecture added, there are currently 5 execution modes:

1. Normal serial (config.genesis, is_serial_execute = true): the default mode.
-1. Scalar pipeline (config.ini, baseline_scheduler = true): The transaction execution result is consistent with that of ordinary serial, and the performance is 20% higher than that of ordinary serial, which is suitable for all scenarios.。
-1. Superscalar pipeline (config.ini, baseline_scheduler_parallel = true): The transaction execution result is the same as that of ordinary serial. The performance improvement depends on the amount of conflicts between transaction read and write data. When the amount of conflicts is small, the performance can be improved by more than 300%. When the amount of conflicts is large, the performance is low. It is suitable for scenarios where the amount of transaction read and write data conflicts is small.。
-1.
sharding mode (feature_sharding enabled): Users are required to manually allocate transactions to different partitions. Transactions in multiple partitions can be executed in parallel. However, when a cross-partition call occurs, the transaction execution result will be inconsistent with the normal serial, which is suitable for scenarios where transaction partitions can be manually allocated.。
+1. Scalar pipeline (config.ini, baseline_scheduler = true): the transaction execution results are identical to ordinary serial execution, performance is 20% higher than ordinary serial execution, and it is suitable for all scenarios.
+1. Superscalar pipeline (config.ini, baseline_scheduler_parallel = true): the transaction execution results are identical to ordinary serial execution. The performance gain depends on how often the read/write sets of transactions conflict: with few conflicts, performance can improve by more than 300%; with many conflicts, performance is low. It is suitable for scenarios with few read/write conflicts between transactions.
+1. Sharding mode (feature_sharding enabled): users must manually assign transactions to different shards, and transactions in different shards can be executed in parallel. However, when a cross-shard call occurs, the execution result differs from ordinary serial execution. It is suitable for scenarios where transactions can be manually assigned to shards.
1.
DAG parallel (config.ini, enable_dag = true, with feature_sharding enabled): smart contracts need to be statically analyzed in advance; suitable for scenarios with simple contract logic.

Modes 1, 2 and 3 are compatible with each other and can be mixed; modes 4 and 5 are not compatible with the other modes.

Enable conditions for the pipeline executor:

- Enable all bugfixes
-- The node architecture is air
-- Use serial (config.genesis, is_serial_execute = true) mode without opening feature_sharding
+- The node architecture is Air
+- Use serial mode (config.genesis, is_serial_execute = true) without enabling feature_sharding
- Use the evm virtual machine, not wasm

If these conditions are met, the pipeline executor can be enabled and can be mixed with ordinary serial operation. If the conditions are not met, the node will not start, and the reason for the error will be printed to stdout.

@@ -36,7 +36,7 @@ If these conditions are met, the pipeline executor can be enabled and can be mix

### Enable Scalar Pipeline

-In the node configuration file config.ini, add the executor.baseline_scheduler option.
+In the node configuration file config.ini, add the executor.baseline_scheduler option:

```
[executor]
@@ -45,7 +45,7 @@ In the node configuration file config.ini, add the executor.baseline_scheduler

### Enable superscalar pipeline

-In the node configuration file config.ini, add the executor.baseline_scheduler_parallel option.
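Putting the two options together, the `[executor]` section of config.ini might look like the following — a hedged sketch based only on the option names given on this page (surrounding options omitted; as noted, the superscalar option only takes effect when baseline_scheduler = true):

```ini
[executor]
    ; enable the scalar pipeline (results identical to ordinary serial execution)
    baseline_scheduler = true
    ; enable the superscalar pipeline; requires baseline_scheduler = true
    baseline_scheduler_parallel = true
```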
+In the node configuration file config.ini, add the executor.baseline_scheduler_parallel option:

Before turning on this option, make sure that baseline_scheduler = true is set; otherwise this option has no effect.

diff --git a/3.x/en/docs/design/parallel/sharding.md b/3.x/en/docs/design/parallel/sharding.md
index fd8c60c30..7bcb53176 100644
--- a/3.x/en/docs/design/parallel/sharding.md
+++ b/3.x/en/docs/design/parallel/sharding.md
@@ -7,15 +7,15 @@ Label: "sharding" "shard" "trade execution"

When multiple applications are hosted on one blockchain, the "**intra-block sharding**" technique parallelizes transaction execution across applications.

-Grouping contracts on the chain in FISCO BCOS support。When executing transactions within a block, the transactions within a block are split into multiple "**intra-block fragmentation**"Hereinafter referred to as: shards), transactions of the same shard are scheduled to be executed in the same executor.。
+FISCO BCOS supports grouping the contracts on the chain. When executing a block, the transactions within it are split into multiple "**intra-block shards**" (hereinafter: shards), and transactions of the same shard are scheduled to the same executor.

## Scheme

**Key points**

-* Parallel: Different shards within a block are scheduled for parallel execution in different executors.。
-* Local: The mutual invocation of contracts within the shard is done directly in the same executor, and the shards do not interfere with each other.。
-* Cross-shard: Cross-shard calls are made directly in a block, transparent to the user, and its scheduling is based on the DMC mechanism to avoid heavy SPV proof.。
+* Parallel: different shards within a block are scheduled to different executors and executed in parallel.
+* Local: contracts within a shard call each other directly in the same executor, and shards do not interfere with each other.
+* Cross-shard:
cross-shard calls are completed directly within a block, transparently to the user; their scheduling is based on the DMC mechanism, which avoids heavy SPV proofs.
* Configuration: users can manage the shard each contract belongs to from the console
* Inheritance: contracts deployed from within a shard belong to the same shard, eliminating tedious shard-management operations.

@@ -112,7 +112,7 @@ Add 0xd24180cc0fef2f3e545de4f9aafc09345cd08903 to hello_shard Ok. You can use 'l
/shards/hello_shard
```

-> the ls command of BFS can also be queried
+> The ls command of BFS can also be used for the query

```
[group0]: /apps> ls /shards/account_shard/
@@ -124,7 +124,7 @@ d24180cc0fef2f3e545de4f9aafc09345cd08903

**Call the contract in a shard**

-> The transaction will be automatically dispatched to the corresponding shard for execution, and different shards will be executed in different executors. This operation is transparent to the user, and the experience is no different from calling an ordinary contract
+> The transaction is automatically dispatched to the corresponding shard for execution, and different shards are executed in different executors. This is transparent to the user, and the experience is no different from calling an ordinary contract

```
[group0]: /apps> call HelloWorld 0xd24180cc0fef2f3e545de4f9aafc09345cd08903 set nice
diff --git a/3.x/en/docs/design/protocol_description.md b/3.x/en/docs/design/protocol_description.md
index fcfd10320..7c9c6e305 100644
--- a/3.x/en/docs/design/protocol_description.md
+++ b/3.x/en/docs/design/protocol_description.md
@@ -6,10 +6,10 @@ Tags: "data structure" "encoding"

```eval_rst
..
note::
- The implementation of the FISCO BCOS 3.x data and encoding protocol is located in 'bcos-tars-protocol `_
+ The implementation of the FISCO BCOS 3.x data and encoding protocol is located in 'bcos-tars-protocol`_
```

-FISCO BCOS 3.x defaults to [tars](https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/base/tars-protocol.md)Encoding protocol, this chapter mainly introduces the encoding protocol of FISCO BCOS 3.x basic data structure.。
+FISCO BCOS 3.x uses the [tars](https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/base/tars-protocol.md) encoding protocol by default. This chapter mainly introduces the encoding of the basic FISCO BCOS 3.x data structures.

## 1. Block header data structure

@@ -25,9 +25,9 @@ The fields in the block header that need to be hashed:

| parentInfo | vector |Parent block information, including the block height and hash of the parent block|
| txsRoot | vector |MerkleRoot of all transactions in the block|
| receiptRoot | vector |MerkleRoot of all receipts in the block|
-| stateRoot | vector |The root hash of all transaction state changes in the block.|
+| stateRoot | vector |The root hash of all transaction state changes in the block|
| blockNumber | long |Block height|
-| gasUsed | string |the sum of gas consumed by all transactions in the block.|
+| gasUsed | string |The sum of gas consumed by all transactions in the block|
| timestamp | long |Block header timestamp|
| sealer | long|ID of the consensus node that generated the block header|
| sealerList | vector> |List of all consensus nodes in the system when the block header was generated|

@@ -40,8 +40,8 @@ Definition of all fields in the block header:

| Field| Type| Description|
| ---- | ---- | ---- |
-|data |BlockHeaderData |The block header is used to calculate the data corresponding to the encoding of all fields of the hash.|
-|dataHash ||The root hash of all transaction state changes in the block.|
+|data |BlockHeaderData |The block header is used to calculate the
encoded data of all the hash-computed fields|
+|dataHash ||The root hash of all transaction state changes in the block|
|signatureList |vector |Signature list generated after the block header consensus succeeds|

## 2. Block data structure

@@ -61,7 +61,7 @@ The definition of tars for blocks can be found in [here](https://github.com/FISC

## 3. Transaction data structure

-The definition of tars for transactions can be found [here](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/bcos-tars-protocol/bcos-tars-protocol/tars/Transaction.tars)Similar to the block header, the data protocol field of the transaction is also divided into two parts: the field used to calculate the hash and the field that does not participate in the hash calculation.。
+The tars definition of transactions can be found [here](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/bcos-tars-protocol/bcos-tars-protocol/tars/Transaction.tars). As with the block header, the transaction's protocol fields are divided into two parts: fields used to calculate the hash, and fields that do not participate in the hash calculation.

### 3.1 TransactionData

@@ -75,9 +75,9 @@ defines all the fields in the transaction that are used to calculate the hash, a

|blockLimit |long |require, the blockLimit of the transaction, to prevent duplication of transactions|
|nonce |string |require, a random number provided by the sender of the message to uniquely identify the transaction and also to prevent duplication of transactions|
|to |string | optional, the address of the transaction receiver|
-|input |vector | require, the data related to the transaction, including the functions and parameters called by the transaction.|
+|input |vector | require, the data related to the transaction, including the function called by the transaction and its parameters|

-The hashWith field (also known as the transaction hash / transaction unique identifier) is generated as follows.
+The hashWith field (also known as the transaction hash / transaction unique identifier) is generated as follows:

![](../../images/design/generate_hash_process.png)

@@ -92,7 +92,7 @@ Definition of Exchanged Fields:

| signature|vector |optional, the signature of the transaction|
| sender |vector |optional, the address of the account that sent the transaction|
| importTime |long | optional, the timestamp when the transaction was sent to the node|
-| attribute |int | optional, the attributes of the transaction, used to mark the type of transaction, the parallel conflict domain of the transaction, etc.|
+| attribute |int | optional, the attributes of the transaction, used to mark the transaction type, its parallel conflict domain, etc|

### 3.3 TransactionMetaData

@@ -104,11 +104,11 @@ Only the transaction metadata information is included in the proposal of the con

|hash |vector |optional, transaction hash|
|to |string |optional, the address of the transaction receiver|
|source |string |optional, the address of the transaction receiver, for DMC scheduling|
-|attribute |unsigned int |optional, the attributes of the transaction, used to mark the type of transaction, the parallel conflict domain of the transaction, etc.|
+|attribute |unsigned int |optional, the attributes of the transaction, used to mark the transaction type, its parallel conflict domain, etc|

-## 4. Transaction receipt data structure.
+## 4.
Transaction receipt data structure

-The definition of tars for transaction receipts can be found [here](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/bcos-tars-protocol/bcos-tars-protocol/tars/TransactionReceipt.tars)Similar to block headers and transactions, the data protocol field of the transaction receipt is also divided into two parts: the field used to calculate the hash and the field that does not participate in the hash calculation.。
+The tars definition of transaction receipts can be found [here](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/bcos-tars-protocol/bcos-tars-protocol/tars/TransactionReceipt.tars). As with block headers and transactions, the receipt's protocol fields are divided into two parts: fields used to calculate the hash, and fields that do not participate in the hash calculation.

### 4.1 LogEntry

@@ -143,10 +143,10 @@ The data structure of the transaction receipt is defined as follows:

|data|TransactionReceiptData|Encoded data of all fields used to calculate the hash in the transaction receipt|
|dataHash|vector|Hash of the transaction receipt|

-The design of block and transaction related data structure ensures that FISCO BCOS has the function of checking data integrity.。Block hash, transaction Merkel tree root, receipt Merkel tree root, status Merkel tree root, parent block information and other fields, can effectively verify the validity and integrity of the block, to prevent data tampering。
-In addition, users can obtain block information by calling the relevant interface on the console to verify data consistency.。
+The design of the block- and transaction-related data structures gives FISCO BCOS the ability to check data integrity. Fields such as the block hash, the transaction Merkle tree root, the receipt Merkle tree root, the state Merkle tree root, and the parent block information can effectively verify the validity and integrity of a block and prevent data tampering.
+In addition, users can
obtain block information by calling the relevant console interfaces to verify data consistency.

### 4.4 Native transactions

-FISCO BCOS implements the smallBank contract based on the solidity contract and the pre-compiled version.。Small Bank originated from blockBench and is recognized by the industry and academia as one of the basic tests of blockchain systems. FISCO BCOS defines small Bank's transactions that enable inter-account transfers as native transactions.。
-Through the deployment contract smallBank, the final execution with EVM。smallBank also provides a precompiled contract method, which can be implemented by calling the smallBank precompiled contract address.。
+FISCO BCOS implements the smallBank contract both as a Solidity contract and as a pre-compiled version. smallBank originated from BlockBench and is recognized by industry and academia as one of the basic benchmarks for blockchain systems. FISCO BCOS defines smallBank's inter-account transfer transactions as native transactions.
+When deployed as the smallBank contract, the transactions are ultimately executed with the EVM; smallBank also provides a precompiled variant, which is used by calling the smallBank precompiled contract address.
diff --git a/3.x/en/docs/design/rip.md b/3.x/en/docs/design/rip.md
index eb4fc4451..e106e6a9a 100644
--- a/3.x/en/docs/design/rip.md
+++ b/3.x/en/docs/design/rip.md
@@ -1,4 +1,4 @@
-# 13. Network forwarding strategies based on dynamic routing.
+# 13. Network forwarding strategies based on dynamic routing

Tags: "dynamic routing" "network forwarding"
----
diff --git a/3.x/en/docs/design/security_control/certificate_list.md b/3.x/en/docs/design/security_control/certificate_list.md
index 55fd17ba9..91a2e9125 100644
--- a/3.x/en/docs/design/security_control/certificate_list.md
+++ b/3.x/en/docs/design/security_control/certificate_list.md
@@ -18,41 +18,41 @@ This document provides an introductory description of black and white lists.
For the **configuration types of the CA black and white lists**:

-- 基于**Scope of action**(Network Configuration / Ledger Configuration) dimensions can be divided into**Network Configuration**, which affects the node connection establishment process of the entire network;
-- 基于**Whether it can be changed**(reconfigurable / fixed configuration) dimensions can be divided into**Configurable**, content can be changed, effective after restart;
-- 基于**Storage position**(local storage / on-chain storage) dimensions can be divided into**Local Storage**The content is recorded locally, not on the chain.。
+- Based on the **scope of action** (network configuration / ledger configuration) dimension, they are **network configuration**, which affects the connection establishment process of nodes across the whole network;
+- Based on the **changeability** (reconfigurable / fixed configuration) dimension, they are **configurable**: the content can be changed and takes effect after a restart;
+- Based on the **storage position** (local storage / on-chain storage) dimension, they are **local storage**: the content is recorded locally, not on the chain.

## Module Architecture

-The following figure shows the modules involved in the CA blacklist and their relationships。Legend A-> B indicates that the B module depends on the data of the A module, and the B module is initialized later than the A module。The whitelist has the same architecture as the blacklist。
+The following figure shows the modules involved in the CA blacklist and their relationships. In the legend, A->B indicates that module B depends on the data of module A and is initialized later than module A. The whitelist has the same architecture as the blacklist.

![](../../../images/node_management/architecture.png)

-< center > Module architecture < / center >
+
<center>Module Architecture</center>
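The connection-screening behavior these modules implement can be illustrated with a small hedged sketch of the documented rejection rules (blacklist rejects listed nodes, an empty whitelist accepts everyone, and the blacklist takes precedence). Function and variable names are hypothetical; the real check happens inside the node during the SSL handshake:

```python
def accept_connection(node_id, blacklist, whitelist):
    """Decide whether to accept a peer per the CA black/white list rules.

    Sketch of the documented behavior (names are illustrative):
    - a node on the blacklist is always rejected (blacklist takes precedence);
    - an empty whitelist means the whitelist is not enabled: accept anyone;
    - a non-empty whitelist rejects every node not listed in it.
    """
    if node_id in blacklist:
        return False
    if whitelist and node_id not in whitelist:
        return False
    return True

# Example mirroring the text: whitelist = {A, B, C}, and A is also blacklisted.
whitelist = {"A", "B", "C"}
blacklist = {"A"}
print(accept_connection("D", blacklist, whitelist))  # D not whitelisted -> False
print(accept_connection("A", blacklist, whitelist))  # A blacklisted    -> False
print(accept_connection("B", blacklist, whitelist))  # accepted         -> True
```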
## Core Process

-Underlying implementation of SSL two-way authentication。During the handshake process, the node obtains the nodeID of the other node through the certificate provided by the other node, and checks whether the nodeID is related to the black and white list of the node configuration.。If the connection is rejected based on the configuration of the black and white lists, continue the subsequent process。
+The underlying layer implements SSL mutual authentication. During the handshake, a node obtains the peer's nodeID from the certificate the peer presents and checks it against the node's configured black and white lists. If the connection is not rejected by the black and white list configuration, the subsequent process continues.

**Rejection logic**

* Blacklist: deny connections from nodes written in the blacklist
-* Whitelist: Deny connections to all nodes that are not configured in the whitelist。The whitelist is empty, indicating that it is not open. Any connection is accepted.。
+* Whitelist: deny connections from all nodes not configured in the whitelist. An empty whitelist means the whitelist is not enabled, and any connection is accepted.

**Priority**

-Blacklist takes precedence over whitelist。For example, if A, B, and C are configured in the whitelist, D's connection will be rejected. If A is also configured in the blacklist, A will also be rejected.。
+The blacklist takes precedence over the whitelist. For example, if A, B, and C are configured in the whitelist, D's connection will be rejected.
If A is also configured in the blacklist, A will also be rejected.

## Scope of influence

-- CA black and white lists have a significant impact on P2P node connectivity and AMOP functionality at the network layer.**invalidate**;
+- The CA black and white lists take effect at the network layer, where they can **invalidate** P2P node connections and AMOP functionality;
- potential impact on the consensus and synchronization capabilities of the ledger layer,**affects consensus and synchronization message / data forwarding**。

## Configuration Format

**Blacklist**

-Add the '[certificate _ blacklist]' path to the node 'config.ini' configuration ('[certificate _ blacklist]' is optional in the configuration)。The content of the CA blacklist is the Node ID list of the node, and crl.X is the Node ID of the opposite node that this node refuses to connect to.。An example of the configuration format of the CA blacklist is as follows。
+Add a '[certificate_blacklist]' section to the node's 'config.ini' ('[certificate_blacklist]' is optional). The content of the CA blacklist is a list of node IDs, where crl.X is the node ID of a peer this node refuses to connect to. An example of the CA blacklist configuration format follows.

```ini
[certificate_blacklist]
@@ -63,7 +63,7 @@ Add the '[certificate _ blacklist]' path to the node 'config.ini' configuration

**Whitelist**

-Add the '[certificate _ whitelist]' path to the node 'config.ini' configuration ('[certificate _ whitelist]' is optional in the configuration)。The content of the CA whitelist is the Node ID list of the node.
Cal.X is the Node ID of the opposite node to which the node can accept connections.。An example of the configuration format of the CA whitelist is as follows。
+Add a '[certificate_whitelist]' section to the node's 'config.ini' ('[certificate_whitelist]' is optional). The content of the CA whitelist is a list of node IDs, where cal.X is the node ID of a peer this node accepts connections from. An example of the CA whitelist configuration format follows.

``` ini
[certificate_whitelist]
diff --git a/3.x/en/docs/design/security_control/committee_design.md b/3.x/en/docs/design/security_control/committee_design.md
index 4e037c090..cf6118d87 100644
--- a/3.x/en/docs/design/security_control/committee_design.md
+++ b/3.x/en/docs/design/security_control/committee_design.md
@@ -4,37 +4,37 @@ Tags: "contract permissions" "deployment permissions" "permission control" "perm

----

-FISCO BCOS 3.x introduces the authority governance system of contract granularity.。The governance committee can manage the deployment of the contract and the interface call permission of the contract by voting.。
+FISCO BCOS 3.x introduces a permission governance system at contract granularity. By voting, the governance committee can manage contract deployment and the permission to call contract interfaces.

Please refer to the detailed permission governance documentation: [Permission Governance Usage Guide](../develop/committee_usage.md)

## Overall design

-In the FISCO BCOS3.0 framework, the governance system is implemented by a system contract, which provides relatively flexible and versatile functional modules that meet the demands of almost all scenarios while ensuring pluggability.。
+In the FISCO BCOS 3.0 framework, the governance system is implemented by a system contract, which provides relatively flexible and versatile functional modules that meet the demands of almost all scenarios while ensuring
pluggability.

### 1. Role division

-In FISCO BCOS3.0, on-chain roles can be divided into three categories according to their responsibilities: governance roles, contract administrator roles, and user roles, which are managed and managed in turn.。
+In FISCO BCOS 3.0, on-chain roles are divided into three categories according to their responsibilities: governance roles, contract administrator roles, and user roles, each level managing the next.

-**Governance role**: Governance of chain governance rules, governance committees, top chain managers。Including: governance rule setting, governance committee election, account freezing, unfreezing, etc.。At the same time, the governance role can control the role of the lower-level contract administrator.。
+**Governance role**: the top-level chain managers, who govern the chain's governance rules and the governance committee. Their duties include setting governance rules, electing the governance committee, and freezing and unfreezing accounts. The governance role also controls the lower-level contract administrator role.

-**Contract Administrator Role**: The Contract Administrator role manages access to contract interfaces。For on-chain participants, any user can deploy contracts when the contract administrator does not set contract deployment permissions.。The contract deployment
account can specify the contract administrator account when deploying the contract, if not specified, the contract administrator defaults to the contract deployer。It should be noted that once the governance committee finds that the contract administrator has not performed his or her duties as contract administrator, the contract administrator can be reset by a vote of the governance committee。 -**User Roles**A user role is a role that participates in the business. Any account (including the governance role and the contract administrator role) belongs to the user role。Whether the user role can participate in the relevant business (issuing transactions) depends on whether the contract administrator has set the relevant permissions.。If the contract administrator does not set a permission type for the contract interface (blacklist or whitelist mode), anyone can call the contract interface。If the whitelist is set, you can only access it when the whitelist is hit. If the whitelist is in blacklist mode, you cannot access the corresponding interface if the whitelist is hit.。 +**User Roles**A user role is a role that participates in the business. Any account (including the governance role and the contract administrator role) belongs to the user role。Whether the user role can participate in the relevant business (issuing transactions) depends on whether the contract administrator has set the relevant permissions。If the contract administrator does not set a permission type for the contract interface (blacklist or whitelist mode), anyone can call the contract interface。If the whitelist is set, you can only access it when the whitelist is hit. If the whitelist is in blacklist mode, you cannot access the corresponding interface if the whitelist is hit。 ### 2. 
Governance rules management

-- Governance roles complete the governance committee election through the governance module and set governance rules such as the weight of voting rights for each governance committee member, turnout and participation in the governance decision-making process。Also set contract deployment permissions;
+- The governance role completes the governance committee election through the governance module and sets governance rules such as each committee member's voting weight and the turnout and pass-rate thresholds used in the decision-making process. It also sets contract deployment permissions;
- The contract administrator role deploys business contracts and sets permissions on business contract-related interfaces;
-- User roles complete business operations by calling the contract interface.。
+- User roles complete business operations by calling contract interfaces.

## Detailed design

### 1. Governance module

-The governance module provides governance functions, which are completed by the governance committee through multi-party voting according to the decision rules.。The governance contract data structure is as follows.
+The governance module provides the governance functions, which the governance committee completes through multi-party voting according to the decision rules. The governance contract data structure is as follows.

```solidity
// address list of governors
@@ -49,31 +49,31 @@ uint8 public _winRate;

#### Types of governance proposals

-The types of proposals of the Governance Committee mainly include the following types.
+The proposals of the governance committee mainly include the following types:

-- Meta-governance classes: add, remove members, modify governance member weights, modify thresholds for voting, set deployment permissions, proposal voting, and withdrawal。
+- Meta-governance class: adding and removing members, modifying member weights, modifying voting thresholds, setting deployment permissions, and voting on or withdrawing proposals.
- Permission class: resetting a contract administrator.

#### Governance committee decision rules

-Decision rules make decisions based on data from three dimensions: the weight of the governor's voting rights, turnout and participation.。When the governance committee has only one administrator, it degenerates to the administrator model, and all proposals pass。If the governance committee has more than one yes, it will be judged by the following rules。When the manager changes, all outstanding decision proposals are decided according to the new manager parameters.。
+Decisions are made based on three dimensions: the governors' voting weights, the turnout rate, and the pass rate. When the governance committee has only one member, it degenerates to administrator mode and all proposals pass. If the committee has more than one member, proposals are judged by the following rules. When the committee membership changes, all pending proposals are decided according to the new parameters.

First, the participation rate threshold takes values in the range 1-100. When it is set to 0, the participation rate rule is disabled. When the threshold is adjusted, all pending proposals are decided according to the new threshold. The participation rate check is calculated by the following formula; if it is not satisfied, the status of the proposal is 'noEnoughVotes'.

-**Total Voting Weight / Total Weight > = Participation Threshold**
+**total voting weight / total weight >= participation rate threshold**

Second, the weight pass rate threshold takes values in the range 0-100. When it is set to 0, the pass-rate rule is disabled. When the threshold is adjusted, all pending proposals are decided according to the new threshold. The pass-rate check is calculated by the following formula; if it holds, the proposal passes, otherwise it fails.

-**Total consent weight / total voting weight > = weight pass rate threshold**
+**total approval weight / total voting weight >= weight pass rate threshold**

#### Governance operation process

-- Initial Phase
+- Initial phase

-To simplify the initialization operation and improve the user experience, you only need to configure one account as the initial member of the governance committee when building the chain.。If not specified, the system will automatically randomly generate a private key, as a member of the governance committee, the administrator weight is 1, the turnout threshold and participation threshold are 0, that is, after initialization, the governance committee is administrator mode.。
+To simplify initialization and improve the user experience, you only need to configure one account as the initial governance committee member when building the chain. If none is specified, the system automatically generates a private key to serve as the committee member, with a weight of 1 and both thresholds set to 0; that is, after initialization the committee is in administrator mode.

-- Operation Phase
+- Operation phase

During the operation phase, the governance committee operates on the meta-governance and permission classes. Every operation goes through the proposal, voting, and decision stages, with the decision executed automatically.

@@ -81,26 +81,26 @@ During the operational phase, the governance committee operates on the meta-gove

#### Permission Management

-Permissions include creation permissions, contract access management permissions, and table access management permissions.。
+Permissions include contract creation permissions, contract access management permissions, and table access management permissions.

- Create contract permission: the permission to deploy contracts, managed by the governance committee.
- Contract access management: access to contract interfaces, managed by the contract administrator.

-The so-called contract administrator mode, that is, when the contract is deployed, an account is designated as the administrator of the contract to manage the access rights of the relevant interface.。For contract or table access, the main reason for using the contract administrator model instead of the governance committee model for permission management is to consider the user experience and decision efficiency.。At the same time, the contract administrator can be modified by the governance committee to ensure the security of contract authority management.。
+In contract administrator mode, an account is designated at deployment time as the contract's administrator, managing access to the contract's interfaces. For contract and table access, the contract administrator model is used instead of the governance committee model mainly for user experience and decision efficiency. The contract administrator can in turn be changed by the governance committee, ensuring the security of contract permission management.

#### Permission Policy

Considering the efficiency of permission management operations, the permission module provides two management policies: whitelist mode and blacklist mode.

-- Whitelist mode: When an account is in the interface whitelist, the account can access the current interface;
-- Blacklist mode: When an account is in the interface blacklist, the account cannot access the current interface;
+- Whitelist mode: when an account is on the interface whitelist, it can access the interface;
+- Blacklist mode: when an account is on the interface blacklist, it cannot access the interface;

#### Operation process

The operation process of contract permissions is as follows.

-1. Deployment policy setting: The governance committee decides to set the deployment policy of the group, and selects whether it is a blacklist or a whitelist.。
-2. Access policy setting: The contract administrator has the right to set the ACL policy of the contract access interface, and select the blacklist or whitelist mode.。The contract administrator directly invokes the setMethodAuthType of the permission contract.(address contractAddr, bytes4 func, uint8 acl)to set the type of ACL。
+1. Deployment policy setting: the governance committee decides the group's deployment policy, selecting blacklist or whitelist mode.
+2. Access policy setting: the contract administrator sets the ACL policy of a contract interface by directly invoking the permission contract's setMethodAuthType(address contractAddr, bytes4 func, uint8 acl) to choose blacklist or whitelist mode.
3. Add access rules: the contract administrator can add access rules, all of which are saved in mapping[methodId][account] => bool.

### 3.
Contract Design

@@ -110,19 +110,19 @@ For the address of the permission management contract, see https://github.com/FI

Major contracts include:

- CommitteeManager: the sole entry point for permission governance; it manages proposals and the governance committee, and committee members call its interfaces to initiate governance proposals. It has the fixed address 0x10001 on the underlying node
-- ProposalManager: Proposal management contract, managed by CommitteeManager, for storing proposals
+- ProposalManager: proposal management contract, managed by the CommitteeManager, for storing proposals
- Committee: governance committee contract, managed by the CommitteeManager, records governance committee information
-- ContractAuthPrecompiled: Permission information read / write interface provided by the underlying node. The write interface has permission control. The underlying node has a unique address 0x1005.
+- ContractAuthPrecompiled: permission information read/write interface provided by the underlying node; the write interface has permission control, and the contract has the fixed address 0x1005 on the underlying node

Permission governance performs the following steps:

1. Governance member A initiates a proposal to modify the system configuration and calls the CommitteeManager interface
-2. The CommitteeManager obtains relevant information about the governance committee from the existing Committee.
+2. The CommitteeManager obtains relevant information about the governance committee from the existing Committee
3. CommitteeManager calls ProposalManager, creates a proposal and pushes it into the proposal list
4. Governance member B calls the CommitteeManager interface to vote on the proposal
5. CommitteeManager calls ProposalManager, votes on the proposal, and writes to the voting list
6. ProposalManager collects the proposal's voting results and calls the Committee interface to confirm whether the proposal threshold has been reached
-7. Committee returns the confirmation result.
+7. Committee returns the confirmation result
8. Once the CommitteeManager confirms that the proposal has reached the executable state, it initiates a call to 'SystemConfigPrecompiled' or 'ConsensusPrecompiled'
9. The system precompiled contract first confirms that the calling sender starts with /sys/, and then executes. (CommitteeManager is a built-in on-chain contract with the fixed address /sys/10001)

@@ -134,12 +134,12 @@ Permission governance performs the following steps:

Each time a contract is deployed, a storage table named with the contract name plus the suffix "_accessAuth" is created in the same directory, storing the interface-to-user whitelist data.

-The underlying layer can directly access the storage through the table name to obtain permission information.。In order for solidity and liquid to access the permission table corresponding to the directory contract, open the / sys / contractAuth system contract, you can access the permission storage table corresponding to the contract by accessing the / sys / contractAuth method to determine the permissions。
+The underlying layer can access the storage directly through the table name to obtain permission information. To let Solidity and Liquid contracts access the permission table of a contract in the directory, the /sys/contractAuth system contract is exposed; permissions are determined by calling the /sys/contractAuth methods, which read the contract's permission storage table.

#### Implementation

-1. Create a permission table when creating a contract: When executing the creation, you can create an additional permission table.。
+1. Create a permission table when creating a contract: when executing the creation, an additional permission table is created.
+2. Provide read and write interfaces for the permission table: the /sys/contractAuth system contract is provided specifically for accessing the permission table. Solidity uses the 0x1005 address.
3. System contract ContractAuth interface

```solidity
diff --git a/3.x/en/docs/design/security_control/index.rst b/3.x/en/docs/design/security_control/index.rst
new file mode 100644
index 000000000..a350dd804
--- /dev/null
+++ b/3.x/en/docs/design/security_control/index.rst
@@ -0,0 +1,36 @@
+##############################################################
+14. Security control
+##############################################################
+
+Tags: "Security Control" "Network Security" "Storage Security" "Black and White List" "Permission Control"
+
+----
+
+To ensure the security of communication between nodes and of node data access, FISCO BCOS introduces three mechanisms: node admission, CA blacklist, and permission control, applying strict security controls at both the network and storage levels.
+
+
+**Network-level security controls**
+
+- Nodes use **SSL connections** to guarantee the confidentiality of communication data
+
+- A **network admission mechanism** allows a misbehaving node to be removed from a group's consensus node list, or from the group, to ensure system security
+
+- A **group whitelist mechanism** ensures that each group only receives messages from that group, isolating communication data between groups
+
+- A **CA blacklist mechanism** can promptly disconnect the network connection to a malicious node
+
+- A **permission governance system** provides flexible, fine-grained control over external accounts' permissions to deploy contracts and to create, insert, delete, and update user tables.
+
+**Storage-level security controls**
+
+Based on distributed storage, a distributed storage permission control mechanism is proposed to perform effective permission control in a flexible, fine-grained manner. The permission control mechanism restricts external accounts' (tx.origin) access to storage; permissions cover contract deployment, table creation, and table write operations.
+
+
+.. toctree::
+   :maxdepth: 1
+
+   node_management.md
+   certificate_list.md
+   permission_control.md
+   committee_design.md
diff --git a/3.x/en/docs/design/security_control/node_management.md b/3.x/en/docs/design/security_control/node_management.md
index c09184639..6711d8aeb 100644
--- a/3.x/en/docs/design/security_control/node_management.md
+++ b/3.x/en/docs/design/security_control/node_management.md
@@ -10,62 +10,62 @@ This document provides an introductory description of node admission management.

### Single-chain multi-ledger

-Blockchain technology is a decentralized, open and transparent distributed data storage technology that can reduce trust costs and achieve safe and reliable data interaction.。However, the transaction data of the blockchain faces the threat of privacy leakage:
+Blockchain technology is a decentralized, open, and transparent distributed data storage technology that can reduce trust costs and achieve safe and reliable data interaction. However, blockchain transaction data faces the threat of privacy leakage:

-- For public chains, a node can join the network at will and get all the data from the global ledger.;
-- For the alliance chain, although there is a network access mechanism, the node can obtain the data of the global ledger after joining the blockchain.。
+- For public chains, any node can join the network at will and obtain all the data in the global ledger;
+- For consortium chains, although there is a network admission mechanism, a node can still obtain the global ledger's data after joining the blockchain.

-FISCO BCOS, as a consortium chain, raised the issue of on-chain privacy.**Single-chain multi-ledger**the solution of。FISCO BCOS by introducing**GROUP**concept, which expands the alliance chain from the original one-chain one-ledger storage / execution mechanism to a one-chain multi-ledger storage / execution mechanism, and implements data isolation and confidentiality on the same chain based on the group dimension.。
+For this on-chain privacy problem, FISCO BCOS, as a consortium chain platform, proposes a **single-chain multi-ledger** solution. By introducing the **group** concept, FISCO BCOS extends the consortium chain from the original one-chain-one-ledger storage/execution mechanism to a one-chain-multi-ledger mechanism, achieving group-level data isolation and confidentiality on the same chain.

![](../../../images/node_management/multi_ledger.png)
-< center > Multi-ledger < / center >
+
Multi-ledger
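The one-chain-multi-ledger isolation described above can be sketched in a few lines (an illustrative model only, not FISCO BCOS code; the class and group names are hypothetical):

```python
# Minimal model of one-chain-multi-ledger: each group keeps its own
# independent ledger, and a transaction sent to one group is invisible
# to the other groups on the same chain.
class Chain:
    def __init__(self, groups):
        # one independent ledger (list of transactions) per group
        self.ledgers = {g: [] for g in groups}

    def send_transaction(self, group, tx):
        # the transaction is agreed upon and stored only within its group
        self.ledgers[group].append(tx)

    def ledger_of(self, group):
        return list(self.ledgers[group])

chain = Chain(["blue", "pink", "yellow"])
chain.send_transaction("blue", "tx1")
print(chain.ledger_of("blue"))   # the blue group stores tx1
print(chain.ledger_of("pink"))   # other groups are unaware of it
```

The shared object models the common network services; the per-group lists model the separate ledger storage and execution environments.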
-As shown in the figure above, node ABC joins the blue group and jointly maintains the blue ledger; Nodes B and C join the pink group and maintain the pink ledger; Nodes A and B join the yellow group and maintain the yellow ledger。The three groups share common network services, but each group has its own separate ledger storage and transaction execution environment.。The client sends the transaction to a group to which the node belongs, and the transaction and data are agreed and stored within the group, while other groups are unaware of the transaction.。
+As shown in the figure above, nodes A, B, and C join the blue group and jointly maintain the blue ledger; nodes B and C join the pink group and maintain the pink ledger; nodes A and B join the yellow group and maintain the yellow ledger. The three groups share the same network services, but each group has its own separate ledger storage and transaction execution environment. A client sends a transaction to a group that its node belongs to; the transaction and its data are agreed upon and stored within that group, while other groups remain unaware of the transaction.

### Node admission mechanism

-Based on the introduction of the group concept, node admission management can be divided into**network admission mechanism**和**group access mechanism**。The rules of the admission mechanism are recorded in the configuration. After the node is started, the configuration information is read to judge the admission of the network and group.。
+Based on the group concept, node admission management is divided into a **network admission mechanism** and a **group admission mechanism**. The rules of the admission mechanisms are recorded in the configuration; after a node starts, it reads the configuration to determine network and group admission.

## Glossary

### Node Type

-The nodes discussed in this document are nodes that have completed network admission and are capable of P2P communication.。**Network admission process involves P2P node connection list addition and certificate verification。**
+The nodes discussed in this document are nodes that have completed network admission and are capable of P2P communication. **The network admission process involves adding entries to the P2P node connection list and certificate verification.**

-- **Group Node**Node that completes network admission and joins the group。A group node can only be one of a consensus node and an observation node。The consensus node participates in consensus block and transaction / block synchronization, and the observation node only participates in block synchronization.。**The group node admission process involves the sending of transactions that dynamically add or delete nodes.。**
+- **Group node**: a node that has completed network admission and joined a group. A group node is either a consensus node or an observer node. Consensus nodes participate in block consensus and transaction/block synchronization; observer nodes only participate in block synchronization. **The group node admission process involves sending transactions that dynamically add or remove nodes.**
- **Free node**: a node that has completed network admission but has not joined a group. **Free nodes have not yet passed group admission and participate in neither consensus nor synchronization.**

The node relationships are as follows:

![](../../../images/node_management/node_relationship.png)
-< center > node relationships < / center >
+
Node relationships
### Configuration Type

| Dimension | Configuration type | Description |
|---|---|---|
| Scope of influence | Network configuration | The configuration affects the entire network in which the node is located; the node uses the same configuration for the whole network. **The file name is config.\*** |
| Scope of influence | Group configuration | The configuration affects only the group to which the node belongs; each group has its own configuration. **The file name is group.X.\*, where X is the group number** |
| Whether it can be changed | Fixed configuration | Only the first configuration content is used; subsequent modifications are invalid. **The file suffix is .genesis** |
| Whether it can be changed | Changeable configuration | The configuration can be changed later and takes effect after a node restart. **The file suffix is .ini** |
| Storage location | Local storage | The configuration is stored in a local file that the user can modify directly. **Configuration items that take effect after the user modifies the file and restarts the node** |
| Storage location | On-chain storage | The configuration is stored on the blockchain and modifying it requires group consensus; currently no item requires network-wide consensus. **Configuration items that require rebuilding the chain or take effect through transactions** |
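As a concrete illustration of the two file kinds, a fixed item in `group.1.genesis` versus a changeable item in `config.ini` might look like this (the item names below are illustrative examples, not an authoritative list of configuration keys):

```ini
; group.1.genesis -- fixed configuration: only the first version is used,
; later edits are ignored (suffix .genesis)
[consensus]
    ; initial group node list, fixed at the genesis block
    node.0=nodeid_of_first_node

; config.ini -- changeable configuration: edits take effect after the
; node restarts (suffix .ini)
[p2p]
    ; P2P node connection list, can be changed later
    node.0=127.0.0.1:30300
```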
@@ -75,62 +75,62 @@ The configuration items related to node transfer management are:**P2P node conne

| Configuration item | Role | Scope of influence | Whether it can be changed | Storage location |
|---|---|---|---|---|
| P2P node connection list | Records which nodes this node expects to establish network communication with | Network configuration | Changeable | Local storage |
| Node certificate | Proves that the node is licensed by a trusted third party | Network configuration | Changeable | Local storage |
| CA blacklist | Records which nodes this node is prohibited from establishing network communication with | Network configuration | Changeable | Local storage |
| Initial group node list | Records the list of nodes participating in consensus/synchronization at the genesis block stage | Group configuration | Fixed | Local storage |
| Group node system table | Records the list of nodes currently participating in group consensus/synchronization | Group configuration | Changeable | On-chain storage |
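The CA blacklist item above is consulted when a peer connects; a minimal sketch of that check (a hypothetical helper for illustration, not the actual node code):

```python
# During the SSL handshake the node extracts the peer's nodeID from the
# certificate the peer presents, and refuses the connection if that
# nodeID appears in the local CA blacklist.
def accept_connection(peer_node_id, ca_blacklist):
    """Return True if a session may be created with this peer."""
    return peer_node_id not in ca_blacklist

blacklist = {"node_id_evil"}
print(accept_connection("node_id_good", blacklist))  # True: session created
print(accept_connection("node_id_evil", blacklist))  # False: connection closed
```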
## Module Architecture ![](../../../images/node_management/architecture.png) -< center > Module architecture < / center > +
Module Architecture
-**Configuration item and system module diagram**As above, arrow direction A-> B indicates that the B module depends on the data of the A module, and the B module is initialized later than the A module。
+**Configuration items and system modules** are shown above; an arrow A->B indicates that module B depends on data from module A and that module B is initialized after module A.

## Core Process

### General initialization process

![](../../../images/node_management/initialization.png)
-< center > General initialization process < / center >
+
General initialization process
### First Initialization Process

When a node is started for the first time, the content of the fixed configuration file is written to block 0 by group and submitted directly to the chain. The specific initialization logic is:

![](../../../images/node_management/first_initialization.png)
-< center > Initial Initialization Process < / center >
+
First Initialization Process
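The first-start logic, together with the consistency check performed on later starts, can be sketched as follows (illustrative names and a simplified config; the real node compares the full genesis content):

```python
# On first start, the fixed configuration is written into block 0; on every
# later start, the node only warns if the local fixed config has diverged
# from block 0, and the group keeps running either way.
def start_node(fixed_config, block0):
    warnings = []
    if block0 is None:                     # first start: write genesis block
        block0 = dict(fixed_config)
    elif block0 != fixed_config:           # later start: consistency check
        warnings.append("fixed config differs from block 0")
    return block0, warnings

genesis, w = start_node({"consensus": "pbft"}, None)     # first start
_, w2 = start_node({"consensus": "raft"}, genesis)       # modified config
print(w2)  # ['fixed config differs from block 0']
```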
-The configuration content related to node admission management that needs to be written at this stage is:**Initial list of group nodes-> Group Node System Table**。
+The configuration content related to node admission management that needs to be written at this stage is: **initial group node list -> group node system table**.

Description:

-- Block 0 of all nodes of the same ledger must be consistent, that is**Fixed Profile**are consistent;
-- Each subsequent startup of the node checks whether the 0th block information is consistent with the fixed configuration file.。If the fixed configuration file is modified, the node will output an alarm message when it is started again, but it will not affect the normal operation of the group.。
+- Block 0 of all nodes of the same ledger must be consistent, that is, the **fixed configuration files** must be consistent;
+- On every subsequent startup, the node checks whether the block-0 information is consistent with the fixed configuration file. If the fixed configuration file has been modified, the node outputs a warning message when started again, but this does not affect the normal operation of the group.

### CA blacklist-based node connection process

-**SSL authentication is used to determine whether nodes are allowed to join a chain.**。All nodes on a chain trust a trusted third party (the issuer of the node certificate)。
+**SSL authentication is used to determine whether nodes are allowed to join the chain.** All nodes on a chain trust a trusted third party (the issuer of the node certificates).

-FISCO BCOS Requirements Implementation**SSL mutual authentication**。During the handshake process, the node obtains the nodeID of the other node from the certificate provided by the other node and checks whether the nodeID is in its own CA blacklist.。If it exists, close the connection. If it does not exist, create a session.。
+FISCO BCOS requires **SSL mutual authentication**. During the handshake, a node obtains the peer's nodeID from the certificate the peer presents and checks whether that nodeID is in its own CA blacklist. If it is, the node closes the connection; if not, it establishes a session.

The CA blacklist mechanism also supports **SSL one-way authentication**: after a session is established, the node can obtain the peer's nodeID from the session and, if that nodeID is in its own CA blacklist, disconnect the established session.

@@ -139,14 +139,14 @@ CA blacklist mechanism also supports**SSL one-way authentication**After the sess

The three node types (consensus node + observer node + free node) can be converted through the relevant interfaces as follows:

![](../../../images/node_management/type_and_conversion_of_nodes.png)
-< center > Types of consensus nodes and their conversion operations < / center >
+
<center>Types of consensus nodes and their conversion operations</center>
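The blacklist check described above — take the peer's nodeID from its certificate during the SSL handshake (or from the established session under one-way authentication) and look it up in the local CA blacklist — can be sketched as follows; the class and method names are illustrative assumptions, not the node's actual implementation:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the CA-blacklist admission check; not the node's real code.
public class CaBlacklistCheck {
    private final Set<String> blacklist = new HashSet<>();

    // Add a nodeID to the local CA blacklist.
    public void ban(String nodeId) {
        blacklist.add(nodeId);
    }

    // Called once the peer's nodeID is known (from its certificate during the
    // handshake, or from the session under one-way authentication).
    // Returns true if the session may be kept, false if it must be closed.
    public boolean allowPeer(String peerNodeId) {
        return !blacklist.contains(peerNodeId);
    }

    public static void main(String[] args) {
        CaBlacklistCheck check = new CaBlacklistCheck();
        check.ban("nodeA");
        System.out.println(check.allowPeer("nodeA")); // blacklisted: close the connection
        System.out.println(check.allowPeer("nodeB")); // not blacklisted: create a session
    }
}
```

Under mutual authentication this lookup runs before the session is created; under one-way authentication the same lookup runs after the session is established and disconnects it on a hit.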
## Interface and configuration description ### Node Profile Hierarchy ![](../../../images/node_management/config_file_organization.png) -< center > Hierarchical relationship of configuration files < / center > +
<center>Hierarchical relationship of configuration files</center>
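For changeable profiles, a configuration item not explicitly defined in the file takes the program's built-in default value. A minimal sketch of that lookup rule follows; the `ConfigWithDefaults` class and the sample key are hypothetical illustrations, not the node's actual configuration loader:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the "explicit value, else built-in default" rule
// applied to changeable profiles; not the node's actual loader.
public class ConfigWithDefaults {
    private final Map<String, String> explicit = new HashMap<>();
    private final Map<String, String> defaults = new HashMap<>();

    // Register the program's built-in default for a configuration item.
    public void setDefault(String key, String value) {
        defaults.put(key, value);
    }

    // Simulates a key=value line actually present in the profile.
    public void setExplicit(String key, String value) {
        explicit.put(key, value);
    }

    // An item defined in the file wins; otherwise the built-in default is used.
    public String get(String key) {
        return explicit.getOrDefault(key, defaults.get(key));
    }

    public static void main(String[] args) {
        ConfigWithDefaults groupIni = new ConfigWithDefaults();
        groupIni.setDefault("consensus.max_trans_num", "1000"); // built-in default
        System.out.println(groupIni.get("consensus.max_trans_num")); // default used
        groupIni.setExplicit("consensus.max_trans_num", "2000");     // value set in the file
        System.out.println(groupIni.get("consensus.max_trans_num")); // explicit value wins
    }
}
```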
The organization rules for the profile are:**the configuration of each group is independent** and **fixed configuration is independent of changeable configuration**。The files currently in use are the**Network Changeable Profile**`config.ini`, the**Group Fixed Profile**`group.N.genesis` and the**Group Changeable Profile**`group.N.ini`, where `N` is the group number of the node。For a**Network / Group Changeable Profile**, if the value of a configuration item is not explicitly defined in the file, the program will use the default value of the configuration item。 @@ -211,11 +211,11 @@ The organization rules for the profile are:**The configuration of each group is
Key
Explain
-< tr > < td > name < / td > < td > string < / td > < td > No < / td > < td > PRI < / td > < td > Each row has the same value. Distributed storage implements full table query based on this key < / td > < / tr > -< tr > < td > type < / td > < td > string < / td > < td > No < / td > < td > < / td > < td > node type (sealer / observer) < / td > < / tr > -< tr > < td > node _ id < / td > < td > string < / td > < td > No < / td > < td > < / td > < td > node NodeID < / td > < / tr > -< tr > < td > enable _ num < / td > < td > string < / td > < td > No < / td > < td > < / td > < td > The block height in effect for this node type < / td > < / tr > -< tr > < td > _ status _ < / td > < td > string < / td > < td > No < / td > < td > < / td > < td > Distributed storage common field, "0" can be deleted with "1" < / td > < / tr > +<tr><td>name</td><td>string</td><td>No</td><td>PRI</td><td>Each row has the same value. Distributed storage implements full table query based on this key</td></tr> +<tr><td>type</td><td>string</td><td>No</td><td></td><td>Node type (sealer / observer)</td></tr> +<tr><td>node_id</td><td>string</td><td>No</td><td></td><td>NodeID of the node</td></tr> +<tr><td>enable_num</td><td>string</td><td>No</td><td></td><td>Block height at which this node type takes effect</td></tr> +<tr><td>_status_</td><td>string</td><td>No</td><td></td><td>Distributed storage common field; "0" means valid, "1" means deleted</td></tr> ### Group System Table Interface Definition @@ -236,5 +236,5 @@ contract ConsensusSystemTable ## Functional Outlook -- **Configurable**At present, the modification is restarted to take effect, and the subsequent dynamic loading can be realized, and the modification takes effect in real time.; -- **CA Blacklist**Currently, a node-based blacklist is implemented, and an institution-based blacklist can be considered in the future.。 +- **Configurable**: at present, modifications take effect after a restart; dynamic loading can be implemented later so that modifications take effect in real time; +- **CA Blacklist**: currently a node-based blacklist is implemented; an institution-based blacklist can be considered in the future。 diff --git a/3.x/en/docs/design/security_control/permission_control.md
b/3.x/en/docs/design/security_control/permission_control.md index 758313f2f..7ddb47449 100644 --- a/3.x/en/docs/design/security_control/permission_control.md +++ b/3.x/en/docs/design/security_control/permission_control.md @@ -5,19 +5,19 @@ Tags: "Security Control" "Access Control" "Permission Control" " ---- ## Introduction to Permission Control -Compared with the public chain, which is free to join and exit, free to trade and free to search, the alliance chain has the requirements of access permission, transaction diversification, commercial privacy and security considerations, high stability and so on.。Therefore, the alliance chain needs to emphasize the concept of "authority" and "control" in practice.。 +Compared with the public chain, which is free to join and exit, free to trade and free to search, the alliance chain has the requirements of access permission, transaction diversification, commercial privacy and security considerations, high stability and so on。Therefore, the alliance chain needs to emphasize the concept of "authority" and "control" in practice。 -In order to reflect the concept of "permission" and "control," FISCO BCOS platform is based on distributed storage, and proposes a distributed storage permission control mechanism, which can be flexible and fine-grained for effective permission control, providing an important technical means for the governance of the alliance chain.。Distributed permission control based on external accounts(tx.origin)access mechanism, including contract deployment, table creation, table write operations (insert, update and delete) for permission control, table read operations are not subject to permission control.。 In practice, each account uses an independent and unique public-private key pair, and its private key is used to sign the transaction when it is initiated, so that the recipient can know which account the transaction was issued from through public key verification, thus realizing the control of the transaction 
and the traceability of subsequent supervision.。 +In order to reflect the concepts of "permission" and "control," the FISCO BCOS platform, building on distributed storage, proposes a distributed storage permission control mechanism that enables flexible, fine-grained and effective permission control, providing an important technical means for the governance of the alliance chain。Distributed permission control adopts an access mechanism based on external accounts (tx.origin); contract deployment, table creation and table write operations (insert, update and delete) are subject to permission control, while table read operations are not。In practice, each account uses an independent and unique public-private key pair, and its private key is used to sign the transaction when it is initiated, so that the recipient can know which account the transaction was issued from through public key verification, thus realizing the control of the transaction and the traceability of subsequent supervision。 ## Permission Control Rules The permission control rules are as follows: -1. The minimum granularity of permission control is table, which is controlled based on external accounts.。 -2. Using the whitelist mechanism, tables with no permissions configured are fully released by default, that is, all external accounts have read and write permissions.。 -3. Permission settings use the permission table (\ _ sys _ table _ access _)。If the table name and external account address are set in the permission table, the account has read and write permissions on the table.。 +1. The minimum granularity of permission control is the table, controlled based on external accounts。 +2. Using the whitelist mechanism, tables with no permissions configured are fully open by default, that is, all external accounts have read and write permissions。 +3.
Permission settings use the permission table (`_sys_table_access_`)。If the table name and external account address are set in the permission table, the account has read and write permissions on the table。 ## Permission Control Classification -Distributed storage permission control is divided into permission control for user tables and system tables.。A user table is a table created by a user contract. You can set permissions on user tables.。System tables refer to the tables built into the FISCO BCOS blockchain network. For details about the design of system tables, see [Storage Document].(../storage/storage.md)。The permission control of system tables is as follows: +Distributed storage permission control is divided into permission control for user tables and system tables。A user table is a table created by a user contract. You can set permissions on user tables。System tables refer to the tables built into the FISCO BCOS blockchain network. For details about the design of system tables, see [Storage Document](../storage/storage.md)。The permission control of system tables is as follows: |Table Name|Table Storage Data Description|Meaning of permission control| |:---------------|:-------------|:-----------| @@ -32,8 +32,8 @@ For the user table and each system table, the SDK implements three APIs for perm - User table: - **public String grantUserTableManager(String tableName, String address):** Set permission information based on user table name and external account address。 - **public String revokeUserTableManager(String tableName, String address):** Remove permission information based on user table name and external account address。 - - **public List\ listUserTableManager(String tableName):** Query the set permission record list according to the user table name(Each record contains the external account address and the active block height.)。 -- _ sys _ tables _ Tables: + - **public List\ listUserTableManager(String tableName):** Query the set permission record
list according to the user table name(Each record contains the external account address and the active block height)。 +- `_sys_tables_` table: - **public String grantDeployAndCreateManager(String address):** Add permissions to deploy contracts and create user tables for external account addresses。 - **public String revokeDeployAndCreateManager(String address):** Remove deployment contract and create user table permissions for external account addresses。 - **public List\ listDeployAndCreateManager():** Querying the list of permission records that have permission to deploy contracts and create user tables。 @@ -41,7 +41,7 @@ For the user table and each system table, the SDK implements three APIs for perm - **public String grantPermissionManager(String address):** Add permissions for managing external account addresses。 - **public String revokePermissionManager(String address):** Permission to remove administrative permissions for an external account address。 - **public List\ listPermissionManager():** Query the list of permission records that have administrative permissions。 -- _ sys _ consensus _ Table: +- `_sys_consensus_` table: - **public String grantNodeManager(String address):** Add node management permissions for external account addresses。 - **public String revokeNodeManager(String address):** Remove the node management permission of the external account address。 - **public List\ listNodeManager():** Query the list of permission records that have node management。 @@ -49,12 +49,12 @@ For the user table and each system table, the SDK implements three APIs for perm - **public String grantCNSManager(String address):** Increase Use CNS permissions for external account addresses。 - **public String revokeCNSManager(String address):** Remove Use CNS permission for an external account address。 - **public List\ listCNSManager():** Querying the list of records that have permission to use the CNS。 -- _ sys _ config _ Table: +- `_sys_config_` table: - **public
String grantSysConfigManager(String address):** Increase the system parameter management permission of the external account address。 - **public String revokeSysConfigManager(String address):** Remove the system parameter management permission of the external account address.。 + - **public String revokeSysConfigManager(String address):** Remove the system parameter management permission of the external account address。 - **public List\ listSysConfigManager():** Query the list of records that have permission to manage system parameters。 -The API for setting and removing permissions returns a JSON string containing code and msg fields. When an operation is performed without permission, its code defines the-50000, msg is defined as "permission denied"。When the permission is set successfully, its code is 0 and msg is "success"。 +The API for setting and removing permissions returns a JSON string containing code and msg fields。When an operation is performed without permission, code is -50000 and msg is "permission denied"。When the permission is set successfully, code is 0 and msg is "success"。 ## Data Definition Permission information is stored as a system table. The permission table name is _ sys _ table _ access _, and its field information is defined as follows: @@ -74,12 +74,12 @@ Permission information is stored as a system table. The permission table name is +-----------+-------+--------+-----+---------------------------------------------+ ``` -For the insertion or update of the permission table, the current block does not take effect, but takes effect in the next block of the current block.。When the status field is "0," the permission record is in the normal effective state. When the status field is "1," the permission record has been deleted.
That is, the permission record is in the invalid state.。 +An insertion or update of the permission table does not take effect in the current block, but in the next block。When the status field is "0," the permission record is in the normal effective state; when the status field is "1," the permission record has been deleted, that is, it is invalid。 ## Permission Control Design #### Permission control function design -Determine external accounts, tables to be operated and how to operate based on transaction information。The table to be operated on is a user table or a system table.。The system table is used to control the system functions of the block chain, and the user table is used to control the business functions of the block chain, as shown in the following figure。External accounts can control related system and business functions by querying the permission table to obtain permission-related information, determining the permissions and then operating the relevant user tables and permission tables.。 +The external account, the table to be operated on and the type of operation are determined from the transaction information。The table to be operated on is either a user table or a system table。System tables control the system functions of the blockchain, and user tables control its business functions, as shown in the following figure。An external account controls the related system and business functions by querying the permission table for permission information, determining its permissions, and then operating the relevant user tables and permission tables。 ```eval_rst ..
mermaid:: @@ -90,16 +90,16 @@ Determine external accounts, tables to be operated and how to operate based on t participant system table participant user table - External Account-> > Permission Table: Query - Permission Table-> > System Table: Control - Permission Table-> > User Table: Control - System tables-> > System functions of blockchain: Control - User Table-> > Business Functions of Blockchain: Control + External Account ->> Permission Table: Query + Permission Table ->> System Table: Control + Permission Table ->> User Table: Control + System Table ->> System functions of the blockchain: Control + User Table ->> Business functions of the blockchain: Control ``` #### Permission control process design -The process of permission control is as follows: first, the client initiates a transaction request, and the node obtains the transaction data to determine the external account and the table to be operated and the way the table is operated.。If the operation mode is determined to be a write operation, check the permission information of the external account for the operation table (the permission information is obtained from the permission table).。If the check has permission, the write operation is performed and the transaction is executed normally;If no permission is checked, the write operation is rejected and no permission information is returned。If the operation mode is determined to be a read operation, the permission information is not checked, the read operation is performed normally, and the query data is returned。The flow chart is as follows。 +The process of permission control is as follows: first, the client initiates a transaction request, and the node obtains the transaction data to determine the external account and the table to be operated and the way the table is operated。If the operation mode is determined to be a write operation, check the permission information of the external account for the operation table (the permission information is obtained
from the permission table)。If the check finds that the account has permission, the write operation is performed and the transaction is executed normally; if it finds no permission, the write operation is rejected and a no-permission message is returned。If the operation mode is determined to be a read operation, the permission information is not checked, the read operation is performed normally, and the query data is returned。The flow chart is as follows。 ```eval_rst .. mermaid:: @@ -133,5 +133,5 @@ The process of permission control is as follows: first, the client initiates a t ## Permission Control Tool The distributed storage permission control of FISCO BCOS can be used in the following ways: -- For ordinary users, use the permission function through console commands. For details, see [Permission Management User Guide](../../develop/committee_usage.md)。 -- For developers, the SDK implements three interfaces based on the user table controlled by permissions and each system table, namely the authorization, revocation, and query permission interfaces.。 +- For ordinary users, use the permission function through console commands.
For details, please refer to [Permission Management User Guide](../../develop/committee_usage.md)。 +- For developers, the SDK implements, for the permission-controlled user table and each system table, three interfaces: grant, revoke and query permissions。 diff --git a/3.x/en/docs/design/storage/archive.md b/3.x/en/docs/design/storage/archive.md index 3fdfc8883..f7fbeef58 100644 --- a/3.x/en/docs/design/storage/archive.md +++ b/3.x/en/docs/design/storage/archive.md @@ -6,7 +6,7 @@ Tags: "data archiving" "data clipping" ## Background Introduction -Blockchain node data will continue to increase over time, and some of these historical blocks and transactions are accessed very infrequently, or even will not be accessed, through data archiving can archive this cold data to cheaper storage devices.。The requirements are as follows: +Blockchain node data continues to grow over time, and some historical blocks and transactions are accessed very infrequently or not at all; data archiving can move this cold data to cheaper storage devices。The requirements are as follows: 1. Can specify the scope of the archive block 2. Archiving operation does not affect the normal consensus of nodes @@ -19,7 +19,7 @@ Archive data can be archived to RocksDB and TiKV. Data archived to RocksDB can b ### Archive Data Range
In addition, the largest 's _ number _ 2 _ txs' block transaction hash list is only 4.9G (4.5%), so the solution decides to archive only transaction and receipt data in the near-term.。The archive node just cannot obtain the transaction and receipt of the archive block, and other functions are normal.。 +Taking a FISCO BCOS 3.0 node at a block height of 1,000,000 as an example, transaction data accounts for 46.5G (42.7%), receipt data for 16.5G (15.2%), and state data for 40.3G (37.1%). In addition, the largest block transaction hash list, 's _ number _ 2 _ txs', is only 4.9G (4.5%), so the solution decides to archive only transaction and receipt data in the near term。An archive node merely cannot obtain the transactions and receipts of archived blocks; other functions work normally。 ```bash s_tables size is 661.029MB @@ -40,9 +40,9 @@ The data archiving process is as follows: ```mermaid sequenceDiagram - Archiving Tools-> > Node RocksDB: Check parameter, read archive block - Archiving Tools-> > Archive RocksDB: Write to archive block - Archiving Tools-> > Node: Delete Archived Blocks + Archiving Tools ->> Node RocksDB: Check parameter, read archive block + Archiving Tools ->> Archive RocksDB: Write to archive block + Archiving Tools ->> Node: Delete Archived Blocks ``` ### Archive Data Query diff --git a/3.x/en/docs/design/storage/storage.md b/3.x/en/docs/design/storage/storage.md index 22c2e8355..54261069a 100644 --- a/3.x/en/docs/design/storage/storage.md +++ b/3.x/en/docs/design/storage/storage.md @@ -6,29 +6,29 @@ Tags: "storage" "storage" "transaction" ## Design -The storage layer needs to be able to meet the different design goals of the three versions of Air, Pro and Max, so we use the same set of interfaces to mask the specific implementation of different versions of storage.。For the Air and Pro versions, the storage layer uses RocksDB to meet its lightweight and high-performance requirements, and for the Max version to support large-scale data storage requirements by accessing a
distributed database that can support horizontal expansion, we chose Tikv to ensure multi-copy data consistency and high availability through the Raft protocol.。The overall storage service design is shown in the following figure。 +The storage layer needs to be able to meet the different design goals of the three versions of Air, Pro and Max, so we use the same set of interfaces to mask the specific implementation of different versions of storage。For the Air and Pro versions, the storage layer uses RocksDB to meet their lightweight and high-performance requirements; for the Max version, to support large-scale data storage by accessing a distributed database that supports horizontal expansion, we chose TiKV, which ensures multi-copy data consistency and high availability through the Raft protocol。The overall storage service design is shown in the following figure。 ![](../../../images/design/storage_design.png) -The difference between the Air, Pro and Max versions is that the specific implementation of the 'Storage SDK' used is different. For the Air and Pro versions, the implementation based on RocksDB encapsulation will be created during initialization, while for the Max version, the TiKV encapsulation-based implementation is provided, while retaining the ability to customize storage, users can access other databases based on specific business needs.。 +The difference between the Air, Pro and Max versions is that the specific implementation of the 'Storage SDK' used is different. For the Air and Pro versions, the RocksDB-based implementation is created during initialization, while for the Max version the TiKV-based implementation is provided。The ability to customize storage is retained, so users can access other databases based on specific business needs。 ## Max version data commit -In the Max version of FISCO BCOS 3.x, the computing layer consists of multiple execution services.
When executing a block, each execution service receives the smart contract execution task assigned by the scheduling layer and returns the execution result to the scheduling layer. The data changes generated during the execution are stored in the memory of each execution service.。 +In the Max version of FISCO BCOS 3.x, the computing layer consists of multiple execution services. When executing a block, each execution service receives the smart contract execution task assigned by the scheduling layer and returns the execution result to the scheduling layer. The data changes generated during the execution are stored in the memory of each execution service。 ```eval_rst .. mermaid:: flowchart TD - c(Consensus / Synchronization)-->|1-Request Execution Block|s(Scheduling Services) - s-->|2-Execute transaction|e1(Execution Services) - s-->|2-Execute transaction|e2(Execution Services) - e1-->|3-Return Results|s - e2-->|3-Return Results|s - s-->|4-Get Status hash|e1 - s-->|4-Get Status hash|e2 - s-->|5-Generate header and receipt|s - s-->|6-Returns the block execution result|c + c(Consensus / Synchronization)-->|1 - Request Execution Block|s(Scheduling Services) + s-->|2 - Execute transactions|e1(Execution Services) + s-->|2 - Execute transactions|e2(Execution Services) + e1-->|3 - Return Results|s + e2-->|3 - Return Results|s + s-->|4 - Get status hash|e1 + s-->|4 - Get status hash|e2 + s-->|5 - Generate header and receipt|s + s-->|6 - Return block execution result|c ``` @@ -48,25 +48,25 @@ class TransactionalStorageInterface : public virtual StorageInterface }; ``` -During the commit process, the scheduling service acts as the coordinator of the two-phase transaction, with each execution service acting as a participant to complete the commit of the block together.。 +During the commit process, the scheduling service acts as the coordinator of the two-phase transaction, with each execution service acting as a participant to complete the commit of the block together。 1.
Preparation phase: - When a block begins to be committed, the dispatch service calls the 'asyncPrepare' method of the storage (Storage) object it holds to commit the block, receipt, index, and other data to the storage service.。After 'asyncPrepare' returns the result, the dispatch service notifies all execution services to call the 'asyncPrepare' method of the storage object it holds based on the returned transaction information, committing the state changes held by each execution service to the storage service。 + When a block begins to be committed, the dispatch service calls the 'asyncPrepare' method of the storage (Storage) object it holds to commit the block, receipt, index, and other data to the storage service。After 'asyncPrepare' returns the result, the dispatch service notifies all execution services to call the 'asyncPrepare' method of the storage object it holds based on the returned transaction information, committing the state changes held by each execution service to the storage service。 1. 
Submission phase: - When the dispatch service collects the successful return of all execution service calls' asyncPrepare ', the dispatch service itself calls the' asyncCommit 'method of the storage object it holds, submits the data to the backend database, and notifies all execution services to call the' asyncCommit 'method.。If an execution service call 'asyncPrepare' fails or times out, the dispatch service itself calls the 'asyncRollback' method of the storage object it holds and notifies all execution services to call the 'asyncRollback' method to roll back the data.。 + When the dispatch service collects successful returns of 'asyncPrepare' from all execution services, it calls the 'asyncCommit' method of the storage object it holds, submits the data to the backend database, and notifies all execution services to call the 'asyncCommit' method。If an execution service's call to 'asyncPrepare' fails or times out, the dispatch service calls the 'asyncRollback' method of the storage object it holds and notifies all execution services to call 'asyncRollback' to roll back the data。 ```eval_rst ..
mermaid:: flowchart TD d(Scheduling Services)-->|1-asyncPrepare|s(Storage Services) - s-->|2-Returns transaction information|d + s-->|2 - Return transaction information|d d-->|3-asyncPrepare|e1(Execution Services) d-->|3-asyncPrepare|e2(Execution Services) e1-->|4-asyncPrepare|s e2-->|4-asyncPrepare|s - e1-->|5-Return Results|d - e2-->|5-Return Results|d + e1-->|5 - Return Results|d + e2-->|5 - Return Results|d d-->|6-asyncCommit/asyncRollback|s ``` @@ -77,12 +77,12 @@ The difference between Air and Pro versions of commit and Max is that on the one ![air](../../../images/design/storage_design_air.png) -The processing logic of the scheduling service and the execution service is the same as that of the Max version, except that in the commit phase, when the scheduling service and the execution service commit data, they hold the same storage object.。 +The processing logic of the scheduling service and the execution service is the same as that of the Max version, except that in the commit phase, when the scheduling service and the execution service commit data, they hold the same storage object。 ## system storage table -This section describes the storage structure and purpose of the data table stored in the creation block on the basis of pre-written storage.。 +This section describes the storage structure and purpose of the data tables written into the genesis block on the basis of pre-written storage。 | Table Name| Field| Use| | ----------------------- | ---------------------------------------------- | ---------------------------------------------- | diff --git a/3.x/en/docs/design/storage/storage_security.md b/3.x/en/docs/design/storage/storage_security.md index 0cde1e918..83a492071 100644 --- a/3.x/en/docs/design/storage/storage_security.md +++ b/3.x/en/docs/design/storage/storage_security.md @@ -5,16 +5,16 @@ Tags: "Drop Disk Encryption" "Data Encryption" "Data Security" " ---- ## Background Introduction -In the architecture of the alliance chain, a blockchain is built
between institutions, and data is visible within each institution of the alliance chain.。 +In the architecture of the alliance chain, a blockchain is built between institutions, and data is visible within each institution of the alliance chain。 -In some scenarios with high data security requirements, members within the alliance do not want organizations outside the alliance to have access to data on the alliance chain。At this point, you need to access the data on the federation chain.。 +In some scenarios with high data security requirements, members within the alliance do not want organizations outside the alliance to have access to data on the alliance chain。At this point, access to the data on the alliance chain needs to be controlled。 Access control of federated chain data, which is mainly divided into two aspects * Access control of communication data on the chain * Access Control of Node Storage Data -For access control of on-chain communication data, FISCO BCOS is done through node certificates and SSL.。The main focus here is on access control for node storage data, i.e., on-disk encryption。 +For access control of on-chain communication data, FISCO BCOS achieves this through node certificates and SSL。The main focus here is on access control for node storage data, i.e., on-disk encryption。 ![](../../../images/design/data_secure_background.png) @@ -27,7 +27,7 @@ Falling disk encryption is performed inside the institution。In the organizatio ![](../../../images/design/diskencryption_framework.png) -Drop-disk encryption is performed within the organization, and each organization independently manages the security of its own hard drive data。In the intranet, the hard drive data of each node is encrypted。Access to all encrypted data, managed through Key Manager。Key Manager is a service deployed in an organization's intranet to manage node hard disk data access keys.。When a node in the intranet is started, it obtains the access key for the encrypted data from the Key Manager to access
its own encrypted data.。 +Drop-disk encryption is performed within the organization, and each organization independently manages the security of its own hard drive data。In the intranet, the hard drive data of each node is encrypted。Access to all encrypted data is managed through Key Manager。Key Manager is a service deployed in an organization's intranet to manage node hard disk data access keys。When a node in the intranet is started, it obtains the access key for the encrypted data from the Key Manager to access its own encrypted data。 Cryptographically protected objects include: @@ -36,14 +36,14 @@ Cryptographically protected objects include: ## concrete realization -The specific implementation process is accomplished through the dataKey held by the node itself and the global superKey managed by the Key Manager.。 +The specific implementation process is accomplished through the dataKey held by the node itself and the global superKey managed by the Key Manager。 **Node** -* The node uses its own dataKey to encrypt and decrypt the data (Encrypted Space) it manages.。 -* The node itself does not store the dataKey on the local disk, but stores the encrypted cipherDataKey of the dataKey.。 +* The node uses its own dataKey to encrypt and decrypt the data (Encrypted Space) it manages。 +* The node itself does not store the dataKey on the local disk, but stores cipherDataKey, the encrypted form of the dataKey。 * When the node is started, request the cipherDataKey from the Key Manager to obtain the dataKey。 -* The dataKey is only in the node's memory. When the node is closed, the dataKey is automatically discarded.。 +* The dataKey is only in the node's memory. When the node is closed, the dataKey is automatically discarded。 **Key Manager** @@ -51,13 +51,13 @@ Holds the global superKey, which is responsible for responding to authorization - Key Manager must be online in real time to respond to node startup requests。 - When the node is started, the cipherDataKey is sent.
The key manager decrypts the cipherDataKey with the superKey. If the decryption is successful, the node's dataK is returned to the node。 -- The key manager can only be accessed from the intranet. The key manager cannot be accessed from the intranet within the organization. +- The Key Manager can only be accessed from the organization's intranet; it cannot be accessed from outside the intranet ![](../../../images/design/diskencryption.png) ## Program Process -The scheme process is divided into node initial configuration and node safe operation.。 +The scheme process is divided into node initial configuration and node safe operation。 ### node initial configuration @@ -76,7 +76,7 @@ Before starting a node, you must configure a dataKey for the node ### node security operation -When the node is started, the key manager is used to obtain the dataKey for local data access.。 +When the node is started, the key manager is used to obtain the dataKey for local data access。 (1) The node starts, reads cipherDataKey from the configuration file, and sends it to the Key Manager。 @@ -98,4 +98,4 @@ Specific use of disk encryption, can refer to: ## Storage Security -The Air and Pro versions of the storage system use the back-end database RocksDB, which is a high-performance key-Value Database。Design a perfect persistence mechanism, while ensuring performance and security, can be a good support range query。TiKV database for Max version。Both have high reliability and can cope with abnormal scenarios such as node power outages, restarts, and network fluctuations.
After abnormal scenarios are restored, data can be read and written normally.。 +The Air and Pro versions of the storage system use the back-end database RocksDB, which is a high-performance Key-Value database。It provides a complete persistence mechanism that supports range queries well while ensuring performance and security。The Max version uses the TiKV database。Both have high reliability and can cope with abnormal scenarios such as node power outages, restarts, and network fluctuations. After abnormal scenarios are restored, data can be read and written normally。 diff --git a/3.x/en/docs/design/sync.md b/3.x/en/docs/design/sync.md index 47313f6a9..836208c1e 100644 --- a/3.x/en/docs/design/sync.md +++ b/3.x/en/docs/design/sync.md @@ -4,37 +4,37 @@ Tags: "Block Synchronization" "Transaction Synchronization" ---- -Synchronization is a very important function of blockchain nodes。It is an adjunct to consensus and provides the necessary operating conditions for consensus。Synchronization is divided into transaction synchronization and state synchronization.。Synchronization of transactions ensures that each transaction arrives correctly at each node。The synchronization of the state ensures that the nodes behind the block can return to the latest state correctly.。Only nodes that hold the latest block state can participate in the consensus.。 +Synchronization is a very important function of blockchain nodes。It is an adjunct to consensus and provides the necessary operating conditions for consensus。Synchronization is divided into transaction synchronization and state synchronization。Synchronization of transactions ensures that each transaction arrives correctly at each node。The synchronization of the state ensures that the nodes behind the block can return to the latest state correctly。Only nodes that hold the latest block state can participate in the consensus。 ## Transaction Broadcast -Transaction synchronization is to allow transactions on the blockchain to reach all nodes as much as
possible.。Provides the basis for consensus to package transactions into blocks。 +Transaction synchronization is to allow transactions on the blockchain to reach all nodes as much as possible。Provides the basis for consensus to package transactions into blocks。 -A transaction (tx1) is sent from the client to a node. After receiving the transaction, the node puts the transaction into its own transaction pool (TxPool) for consensus packaging.。At the same time, the node broadcasts the transaction to other nodes, which receive the transaction and place it in their own transaction pool.。 +A transaction (tx1) is sent from the client to a node. After receiving the transaction, the node puts the transaction into its own transaction pool (TxPool) for consensus packaging。At the same time, the node broadcasts the transaction to other nodes, which receive the transaction and place it in their own transaction pool。 * For transactions coming from SDK, broadcast to all nodes * For transactions broadcast from other nodes, put them directly into the trading pool -* A transaction is broadcast only once on a node, and when a duplicate transaction is received, it is not broadcast twice. 
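The broadcast rules above (SDK transactions go to all peers, relayed transactions enter the pool directly, and a transaction is never rebroadcast) can be sketched roughly as follows. This is an illustrative sketch only; `TxPoolSketch` and its members are hypothetical names, not FISCO BCOS APIs.

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Hypothetical sketch of the broadcast-once rule (names are illustrative).
struct TxPoolSketch {
    std::set<std::string> seen;          // hashes of transactions already handled
    std::vector<std::string> broadcasts; // transactions this node forwarded to peers

    // Returns true if the transaction entered the pool (i.e. it was new).
    bool onTransaction(const std::string& txHash, bool fromSdk) {
        if (seen.count(txHash)) {
            return false; // duplicate: dropped, never broadcast a second time
        }
        seen.insert(txHash);
        if (fromSdk) {
            broadcasts.push_back(txHash); // SDK transactions are broadcast to all nodes
        }
        // transactions relayed by peers go straight into the pool, no rebroadcast
        return true;
    }
};
```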
+* A transaction is broadcast only once on a node, and when a duplicate transaction is received, it is not broadcast twice -There is a very small probability that a transaction will not reach a node, which is allowed.。The purpose of reaching as many nodes as possible is to allow this transaction to be packaged, agreed upon, and confirmed as soon as possible, and to try to get the results of the transaction to be executed faster.。When a transaction does not reach a certain node, it will only make the execution time of the transaction longer and will not affect the correctness of the transaction.。In the consensus process, the list of block transactions packaged by the leader is verified, and if there is a lack of transactions locally, the missing transactions are actively requested from the leader.。 +There is a very small probability that a transaction will not reach a node, which is allowed。The purpose of reaching as many nodes as possible is to allow this transaction to be packaged, agreed upon, and confirmed as soon as possible, and to try to get the results of the transaction to be executed faster。When a transaction does not reach a certain node, it will only make the execution time of the transaction longer and will not affect the correctness of the transaction。In the consensus process, the list of block transactions packaged by the leader is verified, and if there is a lack of transactions locally, the missing transactions are actively requested from the leader。 ## State Synchronization -State synchronization is to keep the state of blockchain nodes up to date.。The new and old state of the blockchain refers to the old and new data currently held by the blockchain node, that is, the height of the current block held by the node.。If the block height of a node is the highest block height of the blockchain, the node has the latest status of the blockchain.。Only nodes with the latest state can participate in the consensus for the next new block.。 +State 
synchronization is to keep the state of blockchain nodes up to date。The new and old state of the blockchain refers to the old and new data currently held by the blockchain node, that is, the height of the current block held by the node。If the block height of a node is the highest block height of the blockchain, the node has the latest status of the blockchain。Only nodes with the latest state can participate in the consensus for the next new block。 ![](../../../../2.x/images/sync/block.png) -When a new node is added to the blockchain, or a node that has been disconnected restores the network, the block of this node lags behind the other nodes and the state is not up-to-date。At this time, state synchronization is required.。As shown in the figure, the node that needs status synchronization (Node 1) will actively request other nodes to download blocks.。The entire download process spreads the download load across multiple nodes。 +When a new node is added to the blockchain, or a node that has been disconnected restores the network, the block of this node lags behind the other nodes and the state is not up-to-date。At this time, state synchronization is required。As shown in the figure, the node that needs status synchronization (Node 1) will actively request other nodes to download blocks。The entire download process spreads the download load across multiple nodes。 **State synchronization and download queue** -When a blockchain node is running, it regularly broadcasts its highest block height to other nodes.。After receiving the block height broadcast from other nodes, the node will compare it with its own block height. If its own block height lags behind this block height, it will start the block download process.。 +When a blockchain node is running, it regularly broadcasts its highest block height to other nodes。After receiving the block height broadcast from other nodes, the node will compare it with its own block height. 
If its own block height lags behind this block height, it will start the block download process。 -The download of the block is done by request.。The node that enters the download process will randomly select the node that meets the requirements and send the block interval to be downloaded.。The node that receives the download request will reply to the corresponding block based on the content of the request.。 +The download of the block is done by request。The node that enters the download process will randomly select the node that meets the requirements and send the block interval to be downloaded。The node that receives the download request will reply to the corresponding block based on the content of the request。 ![](../../../../2.x/images/sync/Download.png) -The node that receives the reply block maintains a download queue locally to buffer and sort the downloaded blocks.。The download queue is a priority queue in order of block height。Downloaded blocks are continuously inserted into the download queue. When the blocks in the queue can be connected to the current local blockchain of the node, the blocks are removed from the download queue and actually connected to the current local blockchain.。 +The node that receives the reply block maintains a download queue locally to buffer and sort the downloaded blocks。The download queue is a priority queue in order of block height。Downloaded blocks are continuously inserted into the download queue. When the blocks in the queue can be connected to the current local blockchain of the node, the blocks are removed from the download queue and actually connected to the current local blockchain。 ## Synchronization Scene Example @@ -43,20 +43,20 @@ The node that receives the reply block maintains a download queue locally to buf The process by which a transaction is broadcast to all nodes: * A transaction is sent to a node via RPC -* The node receiving the transaction broadcasts the transaction to other nodes in full. 
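The download-queue behaviour described above — buffering out-of-order blocks in a priority queue ordered by height, and connecting a block to the local chain only once it directly extends the chain head — might be sketched like this. The sketch is illustrative under those stated assumptions, not the actual FISCO BCOS implementation.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// Illustrative sketch of the download queue (hypothetical names).
struct DownloadQueueSketch {
    // min-heap: the lowest pending block height is always on top
    std::priority_queue<int, std::vector<int>, std::greater<int>> pending;
    int localHeight = 0; // height of the newest block already on the local chain

    void onBlockReceived(int height) { pending.push(height); }

    // Connect every buffered block that now extends the local chain.
    void connectBlocks() {
        while (!pending.empty() && pending.top() <= localHeight + 1) {
            if (pending.top() == localHeight + 1) {
                localHeight = pending.top(); // "actually connected" to the chain
            }
            pending.pop(); // duplicate or stale blocks are simply discarded
        }
    }
};
```

A block higher than `localHeight + 1` stays buffered until the gap below it is filled, which mirrors the "can be connected to the current local blockchain" condition in the text.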
+* The node receiving the transaction broadcasts the transaction to other nodes in full * The node receives the broadcast transaction and will not broadcast it again ### Lack of transaction synchronization scenario during consensus validation In the process of consensus missing trading scenarios: -* After receiving the proposal, the node found that there were missing transactions in the proposal list. +* After receiving the proposal, the node finds that transactions are missing from the proposal list * Will actively request the missing transaction from the proposal-initiated node (unicast) * The packaging node returns the missing transaction after receiving the request (unicast) -### The transaction pool is empty and requests other node transactions. +### Requesting transactions from other nodes when the transaction pool is empty -When a node's transaction pool is empty, a scenario in which a node actively requests a transaction is triggered. +When a node's transaction pool is empty, a scenario in which the node actively requests transactions is triggered * Proactively broadcast message packets of onEmptyTxs to all nodes (broadcast) * After receiving the message packet, other nodes will actively return the transaction to the requesting node (unicast) @@ -81,16 +81,16 @@ A member of the group was unexpectedly closed at some point, but other members w * Other nodes accept the response, query the block from their own blockchain, and reply to the starting node * The node receives the block and places it in the download queue * The node takes the block out of the download queue and writes it to the blockchain -* If the download is not completed, continue the request.
If the download is completed, switch the status, enable transaction synchronization, and enable consensus. +* If the download is not completed, continue the request. If the download is completed, switch the status, enable transaction synchronization, and enable consensus **Synchronization of new team members** A non-group member joins a group as a new group member, and this node is started for the first time to synchronize blocks from the original group member: * Non-members are not registered to the group, but non-members start first -* At this time, I find that I am not in the group. I do not broadcast the status or the transaction. I just wait for other group members to send status messages. +* At this time, the node finds that it is not in the group; it does not broadcast status or transactions, and simply waits for other group members to send status messages * At this time, the new team member does not exist in the team member, and the status will not be broadcast to the new team member * Administrators add new members to a group * Members broadcast their status to new members * The new team member receives the team member status, compares its own block height (0), and starts the download process -* The subsequent download process is the same as the group member block synchronization process. +* The subsequent download process is the same as the group member block synchronization process diff --git a/3.x/en/docs/design/tx_procedure.md b/3.x/en/docs/design/tx_procedure.md index 94824bd73..3a5f933b1 100644 --- a/3.x/en/docs/design/tx_procedure.md +++ b/3.x/en/docs/design/tx_procedure.md @@ -1,25 +1,25 @@ # 2.
Transaction process -Transactions - the core of the blockchain system, responsible for recording everything that happens on the blockchain。With the introduction of smart contracts in the blockchain, transactions go beyond the original definition of "value transfer," and a more precise definition should be a digital record of a transaction in the blockchain.。transactions, large or small, require the involvement of transactions。 +Transactions are the core of the blockchain system, responsible for recording everything that happens on the blockchain。With the introduction of smart contracts in the blockchain, transactions go beyond the original definition of "value transfer," and a more precise definition should be a digital record of an operation on the blockchain。Everything that happens on the chain, large or small, requires the involvement of transactions。 -The life of the transaction, through the stages shown in the chart below。This article will review the entire flow of the transaction and get a glimpse of the complete life cycle of the FISCO BCOS transaction.。 +The life of a transaction passes through the stages shown in the chart below。This article will review the entire flow of the transaction and get a glimpse of the complete life cycle of the FISCO BCOS transaction。 ![](../../images/design/transaction_lifetime/IMG_5188.PNG) ## Transaction Generation -After the user's request is sent to the client, the client builds a valid transaction that includes the following key information. +After the user's request is sent to the client, the client builds a valid transaction that includes the following key information -1.
Receiving address: FISCO BCOS transactions are divided into two categories, one is the deployment contract transactions, one is the call contract transactions。The former, since the transaction does not have a specific recipient, specifies that the receiving address for such transactions is fixed to empty;The latter requires that the receiving address of the transaction be set to the address of the contract on the chain.。 -1. Transaction-related data: A transaction often requires some user-provided input to perform the user's desired action, which is encoded into the transaction in binary form.。 -1. Transaction signature: In order to show that the transaction is indeed sent by itself, the user will provide the SDK with the private key to allow the client to sign the transaction, where the private key and the user account are one-to-one correspondence.。 +1. Receiving address: FISCO BCOS transactions are divided into two categories, one is the deployment contract transactions, one is the call contract transactions。The former, since the transaction does not have a specific recipient, specifies that the receiving address for such transactions is fixed to empty;The latter requires that the receiving address of the transaction be set to the address of the contract on the chain。 +1. Transaction-related data: A transaction often requires some user-provided input to perform the user's desired action, which is encoded into the transaction in binary form。 +1. 
Transaction signature: In order to show that the transaction was indeed sent by the user, the user provides the SDK with the private key so that the client can sign the transaction; the private key corresponds one-to-one with the user account。 -The blockchain client then populates the transaction with the necessary fields, such as the transaction ID and blockLimit for transaction replay prevention.。For the specific structure and field meaning of the transaction, please refer to [Coding Protocol Document](./protocol_description.md)After the transaction is constructed, the client then sends the transaction to the node over the RPC channel。 +The blockchain client then populates the transaction with the necessary fields, such as the transaction ID and blockLimit for transaction replay prevention。For the specific structure and field meaning of the transaction, please refer to the [Coding Protocol Document](./protocol_description.md)。After the transaction is constructed, the client then sends the transaction to the node over the RPC channel。 ![](../../images/design/transaction_lifetime/IMG_5189.PNG) ## Trading pool -After a blockchain transaction is sent to a node, the node verifies whether a transaction is legitimate by verifying the transaction signature。If a transaction is legal, the node further checks whether the transaction has been repeated, and if it has never occurred, the transaction is added to the transaction pool and cached.。If the transaction is illegal or the transaction is repeated, the transaction will be discarded directly。 +After a blockchain transaction is sent to a node, the node verifies whether a transaction is legitimate by verifying the transaction signature。If a transaction is legal, the node further checks whether the transaction is a duplicate, and if it has not been seen before, the transaction is added to the transaction pool and cached。If the transaction is illegal or duplicated, the transaction will be discarded
directly。 ![](../../images/design/transaction_lifetime/IMG_5190.PNG) @@ -31,26 +31,26 @@ In order to make the transaction reach all nodes as much as possible, other tran ## Transaction Packaging -In order to improve the efficiency of transaction processing, and also to determine the order of execution after the transaction to ensure transactionality, when there are transactions in the transaction pool, the Sealer thread is responsible for taking out a certain number of transactions from the transaction pool in a first-in, first-out order, assembling them into blocks to be agreed upon, and then the blocks to be agreed upon are sent to each node for processing.。 +In order to improve the efficiency of transaction processing, and also to fix the order of execution to ensure transactionality, when there are transactions in the transaction pool, the Sealer thread is responsible for taking out a certain number of transactions from the transaction pool in first-in, first-out order and assembling them into blocks to be agreed upon, which are then sent to each node for processing。 ![](../../images/design/transaction_lifetime/IMG_5191.JPG) ## Transaction Execution -After the node receives the block, it calls the block validator to take the transactions out of the block one by one and execute them.。In the case of precompiled contract code, the execution engine in the validator calls the corresponding C++function, otherwise the execution engine will hand over the transaction to the EVM (Ethereum Virtual Machine) or WASM for execution.。 +After the node receives the block, it calls the block validator to take the transactions out of the block one by one and execute them。In the case of precompiled contract code, the execution engine in the validator calls the corresponding C++ function; otherwise, the execution engine will hand over the transaction to the EVM (Ethereum Virtual Machine) or WASM for execution。 -The transaction may
execute successfully, or it may fail due to logical errors or insufficient Gas。The result and status of the transaction execution are returned encapsulated in the transaction receipt.。 +The transaction may execute successfully, or it may fail due to logical errors or insufficient Gas。The result and status of the transaction execution are returned encapsulated in the transaction receipt。 ![](../../images/design/transaction_lifetime/IMG_5192.JPG) ## Trading consensus -The blockchain requires an agreement between nodes on the execution result of the block before the block can be released.。The PBFT algorithm is generally used in FISCO BCOS to ensure the consistency of the entire system, and the general process is as follows: each node executes the same block independently, and then the nodes exchange their execution results.。 +The blockchain requires an agreement between nodes on the execution result of the block before the block can be released。The PBFT algorithm is generally used in FISCO BCOS to ensure the consistency of the entire system, and the general process is as follows: each node executes the same block independently, and then the nodes exchange their execution results。 ## Trading Drop -After the consensus block is released, the node needs to write the transactions and execution results in the block to the hard disk for permanent storage, and update the mapping table of block height and block hash, etc., and then the node will remove the transactions that have been dropped from the transaction pool to start a new round of the block process.。Users can query the transaction data and receipt information they are interested in in the historical data on the chain through information such as transaction hashes.。 +After the consensus block is released, the node needs to write the transactions and execution results in the block to the hard disk for permanent storage, and update the mapping table of block height and block hash, etc., and then the node will remove the 
transactions that have been written to disk from the transaction pool, starting a new round of the block process。Users can query the transaction data and receipt information they are interested in from the historical data on the chain through information such as transaction hashes。 ## transaction atomicity -Update of data state on multiple blockchain nodes by one transaction is atomic。When there are external influences, such as power outages, restarts, network fluctuations and other abnormal scenarios, resulting in consensus failure, each blockchain node will discard the current execution results, and will not change the status of the transaction down。The falling behavior of the transaction on each node for data state updates must be done after the nodes reach a consensus, thus ensuring the atomicity of the transaction.。 \ No newline at end of file +Update of data state on multiple blockchain nodes by one transaction is atomic。When external influences, such as power outages, restarts, network fluctuations and other abnormal scenarios, cause consensus to fail, each blockchain node will discard the current execution results and will not persist the transaction's state changes。The on-disk persistence of a transaction's data state updates on each node must happen only after the nodes reach a consensus, thus ensuring the atomicity of the transaction。 \ No newline at end of file diff --git a/3.x/en/docs/design/virtual_machine/evm.md b/3.x/en/docs/design/virtual_machine/evm.md index 5d7785269..fca2e2488 100644 --- a/3.x/en/docs/design/virtual_machine/evm.md +++ b/3.x/en/docs/design/virtual_machine/evm.md @@ -6,7 +6,7 @@ Tags: "EVM" "Smart Contract" "Virtual Machine" " On the blockchain, users complete actions that require consensus by running contracts deployed on the blockchain。Ethereum virtual machine, the executor of smart contract code。 -When the smart contract is compiled into a binary file, it is deployed on the blockchain。The user triggers the execution of
the smart contract by calling the interface of the smart contract.。The EVM executes the code of the smart contract, modifying the data (state) on the current blockchain。The modified data will be agreed upon to ensure consistency。 +When the smart contract is compiled into a binary file, it is deployed on the blockchain。The user triggers the execution of the smart contract by calling the interface of the smart contract。The EVM executes the code of the smart contract, modifying the data (state) on the current blockchain。The modified data will be agreed upon to ensure consistency。 ## EVMC – Ethereum Client-VM Connector API @@ -21,13 +21,13 @@ EVMC defines two main types of invocation interfaces: - Instance interface: the interface through which the node invokes the EVM - Callback interface: interface of EVM callback node -The EVM itself does not save state data. The node operates the EVM through the instance interface. The EVM, in turn, adjusts the Callback interface to operate the state of the node.。 +The EVM itself does not save state data. The node operates the EVM through the instance interface. 
The EVM, in turn, invokes the Callback interface to operate on the node's state。 ![](../../../images/evm/evmc.png) **Instance interface** -Defines the operation of the node to the virtual machine, including creation, destruction, setup, etc.。 +Defines the node's operations on the virtual machine, including creation, destruction, setup, etc。 The interface is defined in evmc _ instance (evmc.h) @@ -41,7 +41,7 @@ The interface is defined in evmc _ instance (evmc.h) **Callback Interface** -Defines the operations of EVM on nodes, mainly for state reading and writing, block information reading and writing, etc.。 +Defines the operations of the EVM on the node, mainly for state reading and writing, block information reading and writing, etc。 The interface is defined in evmc _ context _ fn _ table (evmc.h)。 @@ -62,9 +62,9 @@ The interface is defined in evmc _ context _ fn _ table (evmc.h)。 ### EVM command -Solidity is the execution language of the contract, which is compiled by solc and becomes an assembly-like EVM instruction.。Interpreter defines a complete set of instructions。After the solidity is compiled, the binary file is generated, the binary file is the collection of EVM instructions, the transaction is sent to the node in binary form, the node receives, through the EVMC call EVM to execute these instructions。In EVM, the logic of these instructions is implemented in code emulation。 +Solidity is the execution language of the contract; it is compiled by solc into assembly-like EVM instructions。The Interpreter defines a complete set of instructions。After the Solidity code is compiled, a binary file is generated, which is the collection of EVM instructions; the transaction is sent to the node in binary form, and upon receipt the node calls the EVM through EVMC to execute these instructions。In the EVM, the logic of these instructions is implemented in code emulation。 -Solidity is a stack-based language, and EVM is called as a stack when executing binary.。 +Solidity is
a stack-based language, and the EVM executes the binary as a stack machine。 **Examples of Arithmetic Instruction** @@ -144,7 +144,7 @@ CASE(SSTORE) **Examples of contract call instructions** -The CALL instruction can call another contract based on the address.。First, the EVM determines that it is a CALL instruction and calls "caseCall."()"', in caseCall()"', use"' caseCallSetup()"'Take the data from the stack, package it into msg, and call evmc's callback function as an argument。Eth is called back "()"'After that, start a new EVM, process the call, and then pass the execution result of the new EVM to" call()"'parameter is returned to the current EVM, the current EVM writes the result to the result stack SSP, the call ends。The logic for contract creation is similar to this logic。 +The CALL instruction can call another contract based on the address。First, the EVM determines that it is a CALL instruction and calls `caseCall()`; inside `caseCall()`, `caseCallSetup()` takes the data from the stack, packages it into a msg, and passes it as an argument to evmc's callback function。After the callback returns, a new EVM is started to process the call, and the execution result of the new EVM is returned through the `call()` parameters to the current EVM, which writes the result to the result stack SSP, and the call ends。The logic for contract creation is similar。 ``` cpp CASE(CALL) @@ -196,4 +196,4 @@ void VM::caseCall() ## SUMMARY -EVM is a state execution machine, the input is the binary instructions compiled by solidity and the state data of the node, and the output is the change of the node state.。Ethereum achieves compatibility of multiple virtual machines through EVMC。 +EVM is a state execution machine: the input is the binary instructions compiled from Solidity and the state data of the node, and the output is the change of the node state。Ethereum achieves compatibility of multiple virtual machines through EVMC。 diff --git
a/3.x/en/docs/design/virtual_machine/gas.md b/3.x/en/docs/design/virtual_machine/gas.md index ced53868e..a8b1dbca4 100644 --- a/3.x/en/docs/design/virtual_machine/gas.md +++ b/3.x/en/docs/design/virtual_machine/gas.md @@ -3,7 +3,7 @@ Tags: "Gas" "Smart Contract" "Virtual Machine" " ---- -EVM virtual machines have a set of Gas mechanisms to measure the CPU, memory and storage resources consumed by the chain on each transaction.。FISCO BCOS introduces Precompiled contract, supports built-in C++In order to improve the security of precompiled contracts, FISCO BCOS v2.4.0 introduces the Gas mechanism in precompiled contracts.。 +EVM virtual machines have a set of Gas mechanisms to measure the CPU, memory and storage resources consumed by the chain on each transaction。FISCO BCOS introduces the Precompiled contract, which supports built-in C++ contracts。To improve the security of precompiled contracts, FISCO BCOS v2.4.0 introduces the Gas mechanism in precompiled contracts。 ## Precompiled contracts support Gas calculations @@ -13,21 +13,21 @@ In FISCO BCOS v2.4.0, the 'PrecompiledGas' module is added to calculate gas. The ![](../../../images/evm/precompiled_gas.png) -'PrecompiledGas' mainly records the basic operations called during the execution of the Precompiled contract for each transaction, and the Gas that consumes memory.
+'PrecompiledGas' mainly records the basic operations called during the execution of the Precompiled contract for each transaction, and the Gas consumed by memory -- When the virtual machine executes a transaction and calls the 'call' interface of the 'Precompiled' contract, each time a basic operation is called, the corresponding 'OPCode' is added to the**Runtime Instruction Collection**中 +- When the virtual machine executes a transaction and calls the 'call' interface of the 'Precompiled' contract, each time a basic operation is called, the corresponding 'OPCode' is added to the 'PrecompiledGas' **Runtime Instruction Collection** -- When the virtual machine executes a transaction and calls the 'call' interface of the 'Precompiled' contract, the memory consumed by the runtime of the 'PrecompiledGas' is updated when the memory occupied by the basic operation changes +- When the virtual machine executes a transaction and calls the 'call' interface of the 'Precompiled' contract, the memory consumed by the runtime of the 'PrecompiledGas' will be updated when the memory occupied by the basic operation changes -- After the 'Precompiled' contract is executed, you can call the interface to calculate the Gas consumption of the 'Precompiled' contract based on the set of instructions executed and the memory consumed during the running of the 'Precompiled' contract.。 +- After the 'Precompiled' contract is executed, you can call the interface to calculate the Gas consumption of the 'Precompiled' contract based on the set of instructions executed and the memory consumed during the running of the 'Precompiled' contract。 ### Precompiled contract Gas measure -The FISCO BCOS Precompiled contract Gas measurement standard refers to EVM, which mainly includes CPU, memory and storage dimensions.。The following details the specific Gas calculation method for precompiled contracts.。 +The FISCO BCOS Precompiled contract Gas measurement standard refers to EVM, which mainly includes CPU,
memory and storage dimensions。The following details the specific Gas calculation method for precompiled contracts。 #### Precompiled Contract Memory Gas Calculation -Precompiled contract memory consumption mainly comes from input, output and additional memory consumption at runtime。When the total memory consumed by a transaction is' txMemUsed ', the corresponding memory gas is calculated as follows。That is, add 'memoryGasUnit' gas every 32 bytes, and the value of 'memoryGasUnit' is 3. +Precompiled contract memory consumption mainly comes from input, output and additional memory consumption at runtime。When the total memory consumed by a transaction is' txMemUsed ', the corresponding memory gas is calculated as follows。That is, add 'memoryGasUnit' gas every 32 bytes, and the value of 'memoryGasUnit' is 3 ``` MemoryGas(txMemUsed) = memoryGasUnit * txMemUsed / 32 + (txMemUsed * txMemUsed)/512 @@ -35,9 +35,9 @@ Precompiled contract memory consumption mainly comes from input, output and addi #### Precompiled Contract CPU, Storage Gas Compute -In order to calculate the Gas consumed by the underlying operation of the Precompiled contract, FISCO BCOS v2.4.0 maps the Precompiled contract to a specific opcode and defines the Gas corresponding to each underlying operation.。 +In order to calculate the Gas consumed by the underlying operation of the Precompiled contract, FISCO BCOS v2.4.0 maps the Precompiled contract to a specific opcode and defines the Gas corresponding to each underlying operation。 -##### The opcode corresponding to the underlying operation of the precompiled contract. 
+##### The opcode corresponding to the underlying operation of the precompiled contract

The 'PrecompiledGas' module maps Precompiled contract base operations to opcodes as follows:

@@ -49,7 +49,7 @@ GT | GT call of ConditionPrecompiled to determine whether the left value is grea
LE | The LE call of ConditionPrecompiled to determine whether the left value is less than or equal to the right value| 0x03 |
LT | LT call of ConditionPrecompiled to determine whether the left value is less than the right value| 0x04 |
NE | The NE call of ConditionPrecompiled to determine whether the left value is not equal to the right value| 0x05 |
-Limit | The Limit call of ConditionPrecompiled, which limits the number of pieces of data queried from the CRUD interface.| 0x06 |
+Limit | The Limit call of ConditionPrecompiled, which limits the number of records queried through the CRUD interface| 0x06 |
GetInt | The getInt call to EntryPrecompiled converts the string to int256 / uint256 and returns| 0x07 |
GetAddr | GetAddress call of EntryPrecompiled, converting string to Address| 0x08 |
Set | Set call to EntryPrecompiled, setting the value of the specified Key to the specified Value| 0x09 |
diff --git a/3.x/en/docs/design/virtual_machine/index.rst b/3.x/en/docs/design/virtual_machine/index.rst
new file mode 100644
index 000000000..16cb67d7c
--- /dev/null
+++ b/3.x/en/docs/design/virtual_machine/index.rst
@@ -0,0 +1,35 @@
+##############################################################
+8.
Smart Contract Engine
+##############################################################
+
+Tags: "smart contract" "virtual machine"
+
+----
+
+Executing transactions is an important function of a blockchain node. To execute a transaction, the node takes the binary code of the smart contract out of the transaction and runs it with the executor (Executor). The consensus module (Consensus) takes transactions out of the transaction pool, packages them into blocks, and calls the executor to run the transactions in those blocks. During execution, the state of the blockchain (State) is modified, forming a new block state that is then stored (Storage). The executor in this process is similar to a black box: the input is the smart contract code, and the output is the change of state.
+
+As the technology developed, people began to pay attention to the performance and ease of use of executors. On the one hand, smart contracts should execute faster on the blockchain to meet the needs of large-scale transactions; on the other hand, developers want to work in more familiar and better-designed languages. This led to alternatives to the traditional executor (EVM), such as WASM. Since the traditional EVM was coupled into the node code, the first step was to abstract the executor interface so that various virtual machine implementations could be plugged in. EVMC was designed for this purpose.
+
+EVMC (Ethereum Client-VM Connector API) is the executor interface abstracted by Ethereum, designed to interface with various types of executors. FISCO BCOS currently uses Ethereum's smart contract language, Solidity, and therefore follows Ethereum's abstraction of the executor interface.
+
+.. image:: ../../../images/evm/evmc_frame.png
+
+On the node, the consensus module hands the packaged blocks to the executor for execution. When the virtual machine runs, reads and writes of the state in turn operate on the node's state data through EVMC callbacks.
+
+With this layer of EVMC abstraction, FISCO BCOS can interface with more efficient and easier-to-use executors that may emerge in the future. Currently, FISCO BCOS uses evmone to execute Solidity contracts and wasmtime to execute Wasm contracts.
+
+.. toctree::
+   :maxdepth: 1
+
+   evm.md
+   precompiled.md
+   wasm.md
+   gas.md
+
+.. _Executor: ./evm.html
+
+.. _Consensus: ../consensus
+
+.. _JIT: https://github.com/ethereum/evmjit
+
+.. _WASM: https://webassembly.org/
\ No newline at end of file
diff --git a/3.x/en/docs/design/virtual_machine/precompiled.md b/3.x/en/docs/design/virtual_machine/precompiled.md
index d7dfec4cf..5d0e9b33c 100644
--- a/3.x/en/docs/design/virtual_machine/precompiled.md
+++ b/3.x/en/docs/design/virtual_machine/precompiled.md
@@ -4,7 +4,7 @@ Tags: "Precompiled Contracts" "Smart Contracts" "Precompiled"
----
-Precompiled contracts provide a way to use C.++The method of writing contracts, which separates contract logic from data, has better performance than solidity contracts, and can be upgraded by modifying the underlying code.。
+Precompiled contracts provide a way to write contracts in C++; this separates contract logic from data, offers better performance than Solidity contracts, and allows upgrades by modifying the underlying code.

### Precompiled Contracts vs.
Solidity Contracts

@@ -17,21 +17,21 @@ Precompiled contracts provide a way to use C.++The method of writing contracts,

### Module Architecture

The architecture of Precompiled is shown in the following figure:

-- The block validator determines the type based on the address of the called contract when executing the transaction.。Address 1-4 indicates the Ethereum precompiled contract, address 0x1000-0x10000 is C++Precompiled contracts, other addresses are EVM contracts。
+- The block validator determines the contract type based on the address of the called contract when executing the transaction. Addresses 1-4 represent Ethereum precompiled contracts, addresses 0x1000-0x10000 are C++ precompiled contracts, and other addresses are EVM contracts.

![](../../../images/precompiled/architecture.png)

### Key Processes

-- When executing a precompiled contract, you first need to get the object of the precompiled contract based on the contract address.。
+- When executing a precompiled contract, you first need to get the precompiled contract object based on the contract address.
- Each precompiled contract object implements the 'call' interface, where the specific logic of the precompiled contract is implemented。
-- 'call 'obtains the' Function Selector 'and parameters according to the abi code of the transaction, and then executes the corresponding logic。
+- 'call' obtains the 'Function Selector' and parameters from the ABI-encoded data of the transaction, and then executes the corresponding logic.

```mermaid
graph LR
    Start(Commencement) --> branch1{Precompiled Contracts}
    branch1 --> |Yes|op1 [Get contract object by address]
    branch1 --> |否|op2[EVM]
-    op1 --> op3 [parse calling function and parameters]
+    op1 --> op3[Parse call function and parameters]
    op3 --> End(Return execution result)
    op2 --> End(Return execution result)
```
diff --git a/3.x/en/docs/design/virtual_machine/wasm.md b/3.x/en/docs/design/virtual_machine/wasm.md
index 06466fd93..b41341251 100644
---
a/3.x/en/docs/design/virtual_machine/wasm.md
+++ b/3.x/en/docs/design/virtual_machine/wasm.md
@@ -2,7 +2,7 @@

## FISCO BCOS Environment Interface Specification

-The FISCO BCOS Environment Interface (FBEI) specification includes the blockchain underlying platform [FISCO BCOS](https://gitee.com/FISCO-BCOS/FISCO-BCOS)Application Programming Interface (API) exposed to the Wasm virtual machine。All APIs in the FBEI specification are implemented by FISCO BCOS, and programs running in the Wasm virtual machine can directly access these APIs to obtain the environment and state of the blockchain.。
+The FISCO BCOS Environment Interface (FBEI) specification covers the Application Programming Interfaces (APIs) that the underlying blockchain platform [FISCO BCOS](https://gitee.com/FISCO-BCOS/FISCO-BCOS) exposes to the Wasm virtual machine. All APIs in the FBEI specification are implemented by FISCO BCOS, and programs running in the Wasm virtual machine can directly access these APIs to obtain the environment and state of the blockchain.

### Data Type

@@ -12,14 +12,14 @@ In the FBEI specification, the data types of API parameters and return values ar

..
list-table::
   :header-rows: 1

-   * - Type Mark
-     - 定义
+   * - Type mark
+     - Definition
   * - i32
-     - 32-bit integer, consistent with the definition of the i32 type in Wasm
+     - 32-bit integer, consistent with the definition of the i32 type in Wasm
   * - i32ptr
-     - 32-bit integer, which is stored in the same way as the i32 type in Wasm, but is used to represent the memory offset in the virtual machine
+     - 32-bit integer, which is stored in the same way as the i32 type in Wasm, but is used to represent a memory offset in the virtual machine
   * - i64
-     - 64-bit integer, consistent with the definition of the i64 type in Wasm
+     - 64-bit integer, consistent with the definition of the i64 type in Wasm
```

### API List

@@ -28,7 +28,7 @@ In the FBEI specification, the data types of API parameters and return values ar

**_ Description _**

-Write key-value pair data to blockchain underlying storage for persistent storage。The byte sequence representing the key and value needs to be stored in the virtual machine memory first.。
+Write key-value pair data to the blockchain's underlying storage for persistence. The byte sequences representing the key and the value need to be stored in the virtual machine memory first.

**_ Parameters _**

@@ -36,12 +36,12 @@ Write key-value pair data to blockchain underlying storag

.. list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - keyOffset
     - i32ptr
-     - Start address of the storage location of the key in the virtual machine memory
+     - The starting address of the storage location of the key in the virtual machine memory
   * - keyLength
     - i32
     - Length of key
@@ -50,7 +50,7 @@ Write key-value pair data to blockchain underlying storag
     - The starting address of the location where the value is stored in the virtual machine memory
   * - valueLength
     - i32
-     - Length of value
+     - Length of the value
```

**_ Return Value _**

@@ -60,14 +60,14 @@ None。

```eval_rst
.. note::

-    When setStorage is called, if the provided valueLength parameter is 0, the data corresponding to the key is deleted from the underlying storage of the blockchain.。In this case, the API implementation will directly skip the reading of the value, so the valueOffset parameter does not need to be given a valid value, and is generally set directly to 0.。
+    When setStorage is called, if the provided valueLength parameter is 0, the data corresponding to the key is deleted from the underlying storage of the blockchain. In this case, the API implementation skips reading the value, so the valueOffset parameter does not need to be given a valid value and is generally set directly to 0.
```

#### getStorage

**_ Description _**

-According to the key provided, the corresponding value in the underlying storage of the blockchain is read into the memory of the virtual machine.。The byte sequence representing the key needs to be stored in the virtual machine memory and the memory area where the value is stored needs to be allocated in advance.。
+According to the provided key, the corresponding value in the underlying storage of the blockchain is read into the memory of the virtual machine. The byte sequence representing the key needs to be stored in the virtual machine memory, and the memory area where the value will be stored needs to be allocated in advance.

**_ Parameters _**

@@ -75,18 +75,18 @@ According to the key provided, the corresponding value in the underlying storage

.. list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - keyOffset
     - i32ptr
-     - Start address of the storage location of the key in the virtual machine memory
+     - The starting address of the storage location of the key in the virtual machine memory
   * - keyLength
     - i32
     - Length of key
   * - valueOffset
     - i32ptr
-     - Virtual machine memory start address for storing values
+     - Virtual machine memory start address for storing the value
```

**_ Return Value _**

@@ -96,16 +96,16 @@ According to the key provided, the corresponding value in the underlying storage

   :header-rows: 1

   * - Type
-     - 描述
+     - Description
   * - i32
-     - Length of value
+     - Length of the value
```

#### getCallData

**_ Description _**

-Copy the input data of the current transaction to the virtual machine memory, which needs to be allocated in advance to store the transaction input data.。
+Copy the input data of the current transaction to the virtual machine memory; the memory needs to be allocated in advance to hold the transaction input data.

**_ Parameters _**

@@ -113,12 +113,12 @@ Copy the input data of the current transaction to the virtual machine memory, wh

..
list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - resultOffset
     - i32ptr
-     - Start address of the VM memory used to store the current transaction input data
+     - Virtual machine memory start address for storing the current transaction input data
```

**_ Return Value _**

@@ -142,16 +142,16 @@ None。

   :header-rows: 1

   * - Type
-     - 描述
+     - Description
   * - i32
-     - Length of the current transaction input data
+     - Length of the current transaction input data
```

#### getCaller

**_ Description _**

-Obtain the address of the caller who initiated the contract call, and allocate the memory area to store the caller's address in advance.。
+Obtain the address of the caller who initiated the contract call; the memory area to store the caller's address must be allocated in advance.

**_ Parameters _**

@@ -159,9 +159,9 @@ Obtain the address of the caller who initiated the contract call, and allocate t

.. list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - resultOffset
     - i32ptr
     - Virtual machine memory start address used to store the caller's address
@@ -183,15 +183,15 @@ Pass a sequence of bytes representing the return value to the host environment a

.. list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - dataOffset
     - i32ptr
-     - The starting address of the virtual machine memory used to store the return value.
+     - The virtual machine memory start address used to store the return value
   * - dataLength
     - i32
-     - Length of return value
+     - Length of the return value
```

**_ Return Value _**

@@ -210,12 +210,12 @@ Throws a sequence of bytes representing exception information to the host enviro

.. list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - dataOffset
     - i32ptr
-     - The start address of the storage location of the exception information in the virtual machine memory.
+     - The starting address of the storage location of the exception information in the virtual machine memory
   * - dataLength
     - i32
     - Length of exception information
@@ -235,7 +235,7 @@ None。

**_ Description _**

-Create a transaction log。Up to 4 log indexes can be created for this log。The byte sequence representing the log data and its index needs to be stored in the virtual machine memory first.。
+Create a transaction log. Up to 4 log indexes (topics) can be created for the log. The byte sequences representing the log data and its indexes need to be stored in the virtual machine memory first.

**_ Parameters _**

@@ -243,27 +243,27 @@ Create a transaction log。Up to 4 log indexes can be created for this log。The

.. list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - dataOffset
     - i32ptr
-     - Start address of the storage location of log data in the virtual machine memory
+     - The starting address of the storage location of the log data in the virtual machine memory
   * - dataLength
     - i32
     - Length of log data
   * - topic1
     - i32ptr
-     - Virtual machine memory start address for 1st log index, no time 0
+     - Virtual machine memory start address of the 1st log index; set to 0 if unused
   * - topic2
     - i32ptr
-     - Virtual machine memory start address for 2nd log index, no time 0
+     - Virtual machine memory start address of the 2nd log index; set to 0 if unused
   * - topic3
     - i32ptr
-     - Virtual machine memory start address for 3rd log index, no time 0
+     - Virtual machine memory start address of the 3rd log index; set to 0 if unused
   * - topic4
     - i32ptr
-     - Virtual machine memory start address for 4th log index, no time 0
+     - Virtual machine memory start address of the 4th log index; set to 0 if unused
```

**_ Return Value _**

@@ -280,7 +280,7 @@ None。

**_ Description _**

-Obtain the address of the caller who initiates the contract call at the beginning of the call chain, and allocate the memory area for storing the caller's address in advance.。Unlike the 'getCaller' interface, the caller address obtained by this
interface must be an external account address.。
+Obtain the address of the caller who initiated the contract call at the beginning of the call chain; the memory area for storing the caller's address must be allocated in advance. Unlike the 'getCaller' interface, the caller address obtained by this interface is always an external account address.

**_ Parameters _**

@@ -288,9 +288,9 @@ Obtain the address of the caller who initiates the contract call at the beginnin

.. list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - resultOffset
     - i32ptr
     - Virtual machine memory start address used to store the caller's address
@@ -317,7 +317,7 @@ None。

   :header-rows: 1

   * - Type
-     - 描述
+     - Description
   * - i64
     - Current block height
```
@@ -339,16 +339,16 @@ None。

   :header-rows: 1

   * - Type
-     - 描述
+     - Description
   * - i64
-     - Timestamp of the current block
+     - Timestamp of the current block
```

#### call

**_ Description _**

-To initiate an external contract call, the byte sequence representing the call parameters needs to be stored in the virtual machine memory.。After calling this interface, the execution process is blocked until the external contract call ends or an exception occurs.。
+To initiate an external contract call, the byte sequence representing the call parameters needs to be stored in the virtual machine memory. After calling this interface, execution is blocked until the external contract call ends or an exception occurs.

**_ Parameters _**

@@ -356,18 +356,18 @@ To initiate an external contract call, the byte sequence representing the call p

.. list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - addressOffset
     - i32ptr
-     - The starting address of the storage location of the called contract address in the virtual machine memory.
+     - The starting address of the storage location of the called contract address in the virtual machine memory
   * - dataOffset
     - i32ptr
-     - Start address of the storage location of the call parameter in the virtual machine memory
+     - The starting address of the storage location of the call parameters in the virtual machine memory
   * - dataLength
     - i32
-     - Length of call parameter
+     - Length of the call parameters
```

**_ Return Value _**

@@ -377,16 +377,16 @@ To initiate an external contract call, the byte sequence representing the call p

   :header-rows: 1

   * - Type
-     - 描述
+     - Description
   * - i32
-     - Call status, 0 indicates success, otherwise it indicates failure
+     - Call status; 0 indicates success, any other value indicates failure
```

#### getReturnDataSize

**_ Description _**

-Gets the length of the return value of the external contract call, which can only be called after the external contract call is successful.。
+Gets the length of the return value of the external contract call; it can only be called after the external contract call has succeeded.

**_ Parameters _**

@@ -397,14 +397,14 @@ None。

   :header-rows: 1

   * - Type
-     - 描述
+     - Description
   * - i32
     - Return value length of external contract call
```

#### getReturnData

-Obtain the return value of an external contract call. When using the call, allocate the memory area for storing the return value in advance according to the return result of getReturnDataSize.。
+Obtain the return value of an external contract call. Before calling, allocate the memory area for storing the return value in advance according to the result of getReturnDataSize.

**_ Parameters _**

@@ -412,12 +412,12 @@ Obtain the return value of an external contract call. When using the call, alloc

.. list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - resultOffset
     - i32ptr
-     - The starting address of the virtual machine memory used to store the return value.
+     - The virtual machine memory start address used to store the return value
```

**_ Return Value _**

@@ -434,7 +434,7 @@ All contracts must be encoded in [WebAssembly binary](https://webassembly.github

### Symbol Import

-The contract file can only import the interfaces specified in the FBEI. All interfaces need to be imported from the namespace named 'bcos', and the signature must be consistent with the interface signature declared in the BCOS environment interface specification.。In addition to the 'bcos' command space, there is a special namespace called 'debug'。The function declared in the 'debug' namespace is mainly used in the debugging mode of the virtual machine. This namespace will not be enabled in the formal production environment. For more information, see Debugging mode.。
+The contract file can only import the interfaces specified in the FBEI. All interfaces need to be imported from the namespace named 'bcos', and their signatures must be consistent with the interface signatures declared in the BCOS environment interface specification. In addition to the 'bcos' namespace, there is a special namespace called 'debug'. The functions declared in the 'debug' namespace are mainly used in the debugging mode of the virtual machine; this namespace is not enabled in a production environment. For more information, see Debugging mode.

### Symbol export

@@ -462,12 +462,12 @@ Output a 32-bit integer value in the log at the bottom of the blockchain。

.. list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - value
     - i32
-     - 32-bit integer value
+     - 32-bit integer value
```

#### print64

@@ -482,12 +482,12 @@ Output a 64-bit integer value in the log at the bottom of the blockchain。

..
list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - value
     - i64
-     - 64-bit integer value
+     - 64-bit integer value
```

#### printMem

@@ -502,12 +502,12 @@ Output a piece of virtual machine memory in the form of printable characters in

.. list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - offset
     - i32
-     - Start address of memory region
+     - Start address of the memory region
   * - len
     - i32
     - Length of memory area
@@ -515,7 +515,7 @@ Output a piece of virtual machine memory in the form of printable characters in

#### printMemHex

-Output a piece of virtual machine memory in the log at the bottom of the blockchain as a hexadecimal string.。
+Output a piece of virtual machine memory in the underlying blockchain log as a hexadecimal string.

**_ Parameters _**

@@ -523,12 +523,12 @@ Output a piece of virtual machine memory in the log at the bottom of the blockch

.. list-table::
   :header-rows: 1

-   * - Parameter Name
+   * - Parameter name
     - Type
-     - 描述
+     - Description
   * - offset
     - i32
-     - Start address of memory region
+     - Start address of the memory region
   * - len
     - i32
     - Length of memory area
diff --git a/3.x/en/docs/develop/account.md b/3.x/en/docs/develop/account.md
index 1c69f2ef8..54d2d3366 100644
--- a/3.x/en/docs/develop/account.md
+++ b/3.x/en/docs/develop/account.md
@@ -4,23 +4,23 @@ Tags: "Create Account" "State Secret Account" "Key File" "

----

-FISCO BCOS uses accounts to identify and differentiate each individual user。In a blockchain system that uses a public-private key system, each account corresponds to a pair of public and private keys.。where the address string calculated by the public key using a secure one-way algorithm such as a hash is used as the account name for the account, i.e.**Account Address**In order to distinguish it from the address of a smart contract and for some other historical reasons, the account address is also often referred to.**External Account Address**。The private key known only to the user corresponds to the password in the traditional authentication model.。Users need to prove that they know the private key of the corresponding account through a secure cryptographic protocol to claim their ownership of the account and perform sensitive account operations。
+FISCO BCOS uses accounts to identify and distinguish each individual user. In a blockchain system based on a public-private key scheme, each account corresponds to a pair of public and private keys. The address string derived from the public key by a secure one-way algorithm such as a hash is used as the account name, i.e. the **account address**. To distinguish it from the address of a smart contract, and for some other historical reasons, the account address is also often referred to as the **external account address**. The private key, known only to the user, corresponds to the password in the traditional authentication model. Users need to prove that they know the private key of the corresponding account through a secure cryptographic protocol in order to claim ownership of the account and perform sensitive account operations.

```eval_rst
..
important::

-    In other previous tutorials, to simplify the operation, the default account provided by the tool was used.。However, in the actual application deployment, users need to create their own accounts and properly save the account private key to avoid serious security issues such as account private key disclosure.。
+    In previous tutorials, the default account provided by the tool was used to simplify operation. However, in an actual application deployment, users need to create their own accounts and keep the account private key safe to avoid serious security issues such as private key disclosure.
```

This article will specifically describe how accounts are created, stored, and used, requiring readers to have a certain Linux operating base。

-FISCO BCOS provides scripts and Java SDK to create accounts, as well as Java SDK and console to store account private keys。Users can choose to store the account private key as a file in PEM or PKCS12 format according to their needs.。where the PEM format stores the private key in clear text, while PKCS12 stores the private key encrypted with a user-supplied password。
+FISCO BCOS provides scripts and the Java SDK to create accounts, and the Java SDK and console to store account private keys. Users can choose to store the account private key as a file in PEM or PKCS12 format according to their needs, where the PEM format stores the private key in clear text, while PKCS12 stores the private key encrypted with a user-supplied password.

## Creation of account

### Create an account using a script

-The 'get _ gm _ account.sh' script for generating an account is consistent with the 'get _ account.sh' script for generating an account.。
+The usage of the 'get_gm_account.sh' script for generating an account is consistent with that of the 'get_account.sh' script.

#### 1. Get the script

@@ -30,7 +30,7 @@ curl -#LO https://raw.githubusercontent.com/FISCO-BCOS/console/master/tools/get_

```eval_rst
.. note::

-    - If you cannot download for a long time due to network problems, try 'curl-#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/tools/get_account.sh && chmod u+x get_account.sh && bash get_account.sh -h`
+    - If you cannot download for a long time due to network problems, please try 'curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/tools/get_account.sh && chmod u+x get_account.sh && bash get_account.sh -h`
```

State secret version please use the following instruction to get the script

@@ -41,7 +41,7 @@ curl -#LO https://raw.githubusercontent.com/FISCO-BCOS/console/master/tools/get_

```eval_rst
.. note::

-    - If you cannot download for a long time due to network problems, try 'curl-#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/tools/get_gm_account.sh && chmod u+x get_gm_account.sh && bash get_gm_account.sh -h`
+    - If you cannot download for a long time due to network problems, please try 'curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/tools/get_gm_account.sh && chmod u+x get_gm_account.sh && bash get_gm_account.sh -h`
```

Execute the above instructions and see the following output to download the correct script, otherwise please try again。

@@ -56,7 +56,7 @@ Usage:
get_account.sh -h Help
```

-#### 2. Use the script to generate the PEM format private key.
+#### 2. Use the script to generate a PEM-format private key

- Generate private key and address

@@ -88,7 +88,7 @@ Execute the above command, the result is as follows

[INFO] Account publicHex : 0x5309fa17ae97f81f80a1da3d6b116377ace351dffdcbfd0e91fbb3bcf0312d363c78b8aaf929b3661c1f02e8b2c318358843de6a2dcc66cc0d5260a0d6874a6e
```

-#### 3. Use the script to generate the PKCS12 format private key.
+#### 3.
Use the script to generate a PKCS12-format private key

- Generate private key and address

@@ -128,14 +128,14 @@ Enter Import Password:

## Storage of accounts

- Java SDK supports private key string or file loading, so the private key of the account can be stored in the database or local file。
-- Local files support two storage formats, with PKCS12 encrypted storage and PEM plaintext storage。
-- When developing a business, you can choose the storage and management method of the private key according to the actual business scenario.。
+- Local files support two storage formats: PKCS12, which is stored encrypted, and PEM, which is stored in clear text.
+- When developing a business, you can choose how to store and manage the private key according to the actual business scenario.

## Use of account

### Console Load Private Key File

-The account generation script get _ account.sh is provided in the console. The generated account private key file is in the accounts directory. You need to specify the private key file when loading the private key in the console.。
+The account generation script get_account.sh is provided in the console. The generated account private key file is in the accounts directory. You need to specify the private key file when loading the private key in the console.

There are several ways to start the console:
```shell
@@ -147,7 +147,7 @@ bash start.sh group0 -p12 p12Name

#### Default startup

-The console randomly generates an account and starts it with the group number specified in the console configuration file.。
+The console randomly generates an account and starts with the group number specified in the console configuration file.

```shell
bash start.sh
@@ -161,11 +161,11 @@ The console randomly generates an account and starts it with the group name spec

bash start.sh group0
```

-- Note: The specified group requires configuration beans in the console configuration file。
+- Note: The specified group requires the corresponding configuration beans in the console configuration file.

#### Start using a private key file in PEM format

-- Start with the account of the specified pem file, enter the parameters: group number,-pem, pem file path
+- Start with the account in the specified PEM file; input parameters: group number, -pem, PEM file path

```shell
bash start.sh group0 -pem accounts/0xebb824a1122e587b17701ed2e512d8638dfb9c88.pem
@@ -173,7 +173,7 @@ bash start.sh group0 -pem accounts/0xebb824a1122e587b17701ed2e512d8638dfb9c88.pe

#### Start using PKCS12 format private key file

-- Use the specified p12 file account, you need to enter a password, enter parameters: group name,-p12, p12 file path
+- Start with the account in the specified p12 file (a password is required); input parameters: group name, -p12, p12 file path

```shell
bash start.sh group0 -p12 accounts/0x5ef4df1b156bc9f077ee992a283c2dbb0bf045c0.p12
@@ -184,7 +184,7 @@ Enter Export Password:

If the account generation script get _ accounts.sh generates an account private key file in PEM or PKCS12 format, you can use the account by loading the PEM or PKCS12 account private key file。There are two classes for loading private keys: P12Manager and PEMManager, where P12Manager is used to load private key files in PKCS12 format
and PEMManager is used to load private key files in PEM format。 -- P12Manager Usage Example: +-P12Manager Usage Example: Load private key using code。 @@ -199,7 +199,7 @@ CryptoSuite cryptoSuite = client.getCryptoSuite(); cryptoSuite.loadAccount("p12", p12AccountFilePath, password); ``` -- PEMManager Usage Example +- PEMManager use example Load private key using code。 @@ -216,7 +216,7 @@ cryptoSuite.loadAccount("pem", pemAccountFilePath, null); ## Calculation of account address -The FISCO BCOS account address is calculated from the ECDSA public key and keccak is calculated for the hexadecimal representation of the ECDSA public key.-256sum hash, taking the hexadecimal representation of the last 20 bytes of the calculation result as the account address, each byte requires two hexadecimal representations, so the account address length is 40。FISCO BCOS account address compatible with Ethereum。 +The account address of FISCO BCOS is calculated from the ECDSA public key. The keccak-256sum hash is calculated for the hexadecimal representation of the ECDSA public key. The hexadecimal representation of the last 20 bytes of the calculation result is taken as the account address. Each byte needs two hexadecimal representations, so the length of the account address is 40。FISCO BCOS account address compatible with Ethereum。 Note keccak-256sum with 'SHA3'**Not the same**For more information, see [here](https://ethereum.stackexchange.com/questions/550/which-cryptographic-hash-function-does-ethereum-use)。 [Ethereum Address Generation](https://kobl.one/blog/create-full-ethereum-keypair-and-address/) @@ -257,9 +257,9 @@ You can get output similar to the following 8d251b400667e2dcc79ec6de6a143627401e32ed2234ec69769c8fa378fd0e2ab7a9d963aefd3bc2f8f1cceccba54351709082e619d4e74d0c0fee3e67173ccd ``` -### 2. Calculate the address based on the public key. +### 2. 
Calculate the address based on the public key -In this section, we calculate the corresponding account address based on the public key.。We need to get keccak-256sum tool, can be downloaded from [here](https://github.com/vkobel/ethereum-generate-wallet/tree/master/lib)。 +In this section, we calculate the corresponding account address based on the public key。We need to get the keccak-256sum tool, which can be downloaded from [here](https://github.com/vkobel/ethereum-generate-wallet/tree/master/lib)。 ```shell openssl ec -in ecprivkey.pem -text -noout 2>/dev/null| sed -n '7,11p' | tr -d ": \n" | awk '{print substr($0,3);}' | ./keccak-256sum -x -l | tr -d ' -' | tail -c 41 @@ -275,10 +275,10 @@ dcc703c0e500b653ca82273b7bfad8045d85a470 ```eval_rst .. important:: - To freeze, unfreeze, or revoke an account, you need to enable the blockchain permission mode. For more information, see the Permission Management User Guide < https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/committee_usage.html>`_ + To freeze, unfreeze, or revoke an account, you must enable the blockchain permission mode. For more information, see the Permission Governance Usage Guide`_ ``` -After the blockchain permission mode is enabled, each time a contract call is initiated, the account status is checked (tx.origin). The account status is recorded in the BFS '/ usr /' directory in the form of a storage table named '/ usr /'.+ The account address. If the account status is not found, the account is normal by default.。The account status under the BFS '/ usr /' directory is only created when the account status is actively set。**Only the governance committee member can operate the interface of account status management.。** +After the blockchain permission mode is enabled, each time a contract call is initiated, the account status is checked (tx.origin). The account status is recorded in the BFS '/ usr /' directory in the form of a storage table named '/ usr /'+ The account address. 
If the account status is not found, the account is normal by default。The account status under the BFS '/ usr /' directory is only created when the account status is actively set。**Only the governance committee member can operate the interface of account status management。** The governance committee can operate on the account through the AccountManagerPrecompiled interface, fixed address 0x10003。 @@ -290,7 +290,7 @@ enum AccountStatus{ } abstract contract AccountManager { - / / Set account status, only governance members can call, 0- normal, others - Abnormal, if the account does not exist, it will be created first + / / Set the account status, which can only be called by the governance committee, 0-normal, others-abnormal, if the account does not exist, it will be created first function setAccountStatus(address addr, AccountStatus status) public virtual returns (int32); / / Any user can call function getAccountStatus(address addr) public view virtual returns (AccountStatus); @@ -299,6 +299,6 @@ abstract contract AccountManager { ### Freezing, unfreezing and abolishing of accounts -The governance committee can initiate transactions on pre-compiled contracts with a fixed address of 0x10003 and read and write the status of the account.。At the time of execution, it will be determined whether the transaction sponsor msg.sender is a governance member in the governance committee record, and if not, it will be rejected。It is worth noting that the address of the account of the governance committee does not allow the status to be modified。 +The governance committee can initiate transactions on pre-compiled contracts with a fixed address of 0x10003 and read and write the status of the account。At the time of execution, it will be determined whether the transaction sponsor msg.sender is a governance member in the governance committee record, and if not, it will be rejected。It is worth noting that the address of the account of the governance committee does not allow the status 
to be modified。 Governance members can also freeze, unfreeze, and abolish accounts through the console. For details, see: [Freeze / Unfreeze Account Order](../operation_and_maintenance/console/console_commands.html#freezeaccount-unfreezeaccount), [Order to Abolish Account](../operation_and_maintenance/console/console_commands.html#abolishaccount) diff --git a/3.x/en/docs/develop/amop.md b/3.x/en/docs/develop/amop.md index 41f85ac65..00f36de1a 100644 --- a/3.x/en/docs/develop/amop.md +++ b/3.x/en/docs/develop/amop.md @@ -1,15 +1,15 @@ # 7. Using the AMOP function -Tag: "java-sdk "" AMOP "" On-Chain Messenger Protocol " +tags: "java-sdk" "AMOP" "on-chain messenger protocol" ---- -The Java SDK supports the Advanced Messages Onchain Protocol (AMOP). Users can use the AMOP protocol to exchange messages with other organizations.。 +The Java SDK supports the Advanced Messages Onchain Protocol (AMOP). Users can use the AMOP protocol to exchange messages with other organizations。 ## 1. Interface description -AMOP enables any subscriber who subscribes to a topic to receive push messages related to that topic. 
+AMOP enables any subscriber who subscribes to a topic to receive push messages related to that topic.

-The interface class of AMOP module can refer to the file java.-The "src / main / java / org / fisco / bcos / sdk / v3 / amop / Amop.java" file in the sdk contains the following interfaces:
+For the interface classes of the AMOP module, see "src/main/java/org/fisco/bcos/sdk/v3/amop/Amop.java" in java-sdk, which mainly contains the following interfaces:

### 1.1 subscribeTopic

@@ -18,7 +18,7 @@ Subscribe to a topic

**Parameters:**

* topic: Subscribed topic name. Type: "String".
-* callback: The function that processes the topic message, which is called when a message related to the topic is received.。Type: "AmopRequestCallback"。
+* callback: The function that processes the topic message, which is called when a message related to the topic is received. Type: "AmopRequestCallback".

**Example:**

@@ -33,7 +33,7 @@ amop.start();

AmopRequestCallback cb = new AmopRequestCallback() {
    @Override
    public void onRequest(String endpoint, String seq, byte[] data) {
-        / / You can write the processing logic after receiving the message here.。
+        // You can write the logic for handling a received message here.
        System.out.println("Received msg, content:" + new String(data));
    }
};

@@ -55,7 +55,7 @@ Send AMOP messages as unicast

**Note:**

-For a unicast AMOP message, if there are multiple clients subscribing to the topic, a random one can receive the unicast message.。
+For a unicast AMOP message, if multiple clients subscribe to the topic, a random one of them receives the unicast message.

**Example:**

@@ -68,7 +68,7 @@ amop.start();

AmopResponseCallback cb = new AmopResponseCallback() {
    @Override
    public void onResponse(Response response) {
-        / / You can write the processing logic of the received reply here.。
+        // You can write the logic for handling the received response here.
        System.out.println(
            "Get response, { errorCode:" + response.getErrorCode()

@@ -130,7 +130,7 @@ Reply Message.

**Parameters:**

-* endpoint: The peer endpoint that receives the message. It is returned in the 'AmopRequestCallback' callback.。Type: "String"
+* endpoint: The peer endpoint that receives the message. It is returned in the 'AmopRequestCallback' callback. Type: "String"
* seq: Message seq, returned in the 'AmopRequestCallback' callback. Type: "String"
* content: Reply message content. Type: "byte []"

@@ -158,7 +158,7 @@ amop.subscribeTopic("MyTopic", cb);

### 1.6 setCallback

-Set the default callback. When the callback specified by the subscription topic is empty, the default callback API is called when a message is received.
+Set the default callback. When a subscribed topic does not specify its own callback, the default callback is invoked when a message is received.

**Parameters:**

@@ -166,7 +166,7 @@ Set the default callback. When the callback specified by the subscription topic

## 2. Example

-For more examples, see Java.-sdk-demo](https://github.com/FISCO-BCOS/java-sdk-demo)Project source code "java-sdk-demo / src / main / java / org / fisco / bcos / sdk / demo / amop / ". Link: [java-sdk-demo GitHub Link](https://github.com/FISCO-BCOS/java-sdk-demo),[java-sdk-demo Gitee Link](https://gitee.com/FISCO-BCOS/java-sdk-demo)。
+For more examples, see the code under "java-sdk-demo/src/main/java/org/fisco/bcos/sdk/demo/amop/" in the [java-sdk-demo](https://github.com/FISCO-BCOS/java-sdk-demo) project. Links: [java-sdk-demo GitHub link](https://github.com/FISCO-BCOS/java-sdk-demo), [java-sdk-demo Gitee link](https://gitee.com/FISCO-BCOS/java-sdk-demo).

* Example:

@@ -212,7 +212,7 @@ Reference [Building the First Blockchain Network](../quick_start/air_installatio

```shell
mkdir -p ~/fisco && cd ~/fisco
-# Get Java-sdk code
+# Get the java-sdk code
git clone https://github.com/FISCO-BCOS/java-sdk-demo

# If the pull fails for a long time due to network problems, try the following command:

@@ -225,7 +225,7 @@ bash gradlew build

### Step 3: Configure

-* Copy the certificate: set up your FISCO BCOS network node "nodes / ${ip}Copy the certificate in the / sdk / "directory to" java-sdk-demo / dist / conf "directory。
+* Copy the certificates: copy the certificates from your FISCO BCOS node's "nodes/${ip}/sdk/" directory to the "java-sdk-demo/dist/conf" directory.

```shell
# Enter dist directory
cd dist

@@ -242,7 +242,7 @@ cp conf/config-example.toml conf/config.toml

**Run Subscribers:**

```shell
-# in java-sdk-demo / dist directory
+# In the java-sdk-demo/dist directory
# We subscribe to a topic called "testTopic"
java -cp "apps/*:lib/*:conf/" org.fisco.bcos.sdk.demo.amop.Subscribe testTopic
```

@@ -317,6 +317,6 @@ At the same time, return to the topic subscriber's terminal and find the termina

Note:

-1. The broadcast message is not returned.。
+1. Broadcast messages are not replied to.
2.
The receiver may receive multiple repeated broadcast messages.
\ No newline at end of file
diff --git a/3.x/en/docs/develop/api.md b/3.x/en/docs/develop/api.md
index cec50af53..134193900 100644
--- a/3.x/en/docs/develop/api.md
+++ b/3.x/en/docs/develop/api.md
@@ -5,7 +5,7 @@ Tags: "RPC"

---------

The Java SDK provides Java API interfaces for blockchain application developers. By function, the Java APIs can be divided into the following categories:

-- Client: Provides access to FISCO BCOS 3.x node JSON-RPC interface support, providing support for deployment and invocation contracts;
+- Client: Provides support for accessing the JSON-RPC interface of FISCO BCOS 3.x nodes, and for deploying and invoking contracts;
- Precompiled: Provides calls to the FISCO BCOS 3.x precompiled contract (Precompiled Contracts) interfaces, including 'ConsensusService', 'SystemConfigService', 'BFSService', and 'KVTableService'.
- AuthManager: Provides FISCO BCOS 3.x permission control for invoking pre-deployed contracts.

@@ -25,13 +25,13 @@ Sending transactions to the blockchain.

**Parameters**

-- node: allows RPC to send requests to the specified node
-- signedTransactionData: transactions after signature
-- withProof: return whether to bring Merkel tree proof
+- node: allows RPC to send requests to the specified node
+- signedTransactionData: the signed transaction
+- withProof: whether to return the Merkle tree proof

**Return value**

-- BcosTransactionReceipt: After receiving the transaction, the node returns the packet to the SDK, including the transaction hash information.。
+- BcosTransactionReceipt: After receiving the transaction, the node returns a response to the SDK, including the transaction hash information.

**Example:**
```
@@ -52,10 +52,10 @@ The transaction publishing asynchronous interface, after receiving the response

**Parameters**

-- node: allows RPC to send requests to the specified node
- signedTransactionData: Transaction string after signature;
-- withProof: return whether to bring Merkel tree proof
-- callback: After the SDK receives the packet return from the node, it calls the callback function. The callback function will bring the transaction receipt.。
+- node: allows RPC to send requests to the specified node
+- withProof: whether to return the Merkle tree proof
+- callback: After the SDK receives the response from the node, it calls the callback function, which carries the transaction receipt.

**Return value**

@@ -67,12 +67,12 @@ Send a request to the node, call the contract constant interface.

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node
- transaction: Contract invocation information, including the contract address, the contract caller, and the ABI encoding of the invoked contract interface and parameters

**Return value**

-- Call: The return result of the contract constant interface, including the current block height, interface execution status information, and interface execution results.
+- Call: The return result of the contract constant interface, including the current block height, interface execution status information, and interface execution results

**Example:**
```
@@ -93,11 +93,11 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"call","params":["group0","","0xc

### callAsync

-The contract constant interface is called asynchronously. After receiving the execution result of the contract interface returned by the node, the specified callback function is executed.
+The contract constant interface is called asynchronously. After receiving the execution result of the contract interface returned by the node, the specified callback function is executed.

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node
- transaction: Contract invocation information, including contract address, contract caller, and invocation interface and parameter information;
- callback: callback function.

@@ -111,7 +111,7 @@ Query contract code information corresponding to a specified contract address.

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node
- address: Contract address.

**Return value**

@@ -138,7 +138,7 @@ Obtain the latest block height of the group corresponding to the client object

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node

**Return value**

@@ -163,8 +163,8 @@ Asynchronously obtains the latest block height of the group corresponding to the

**Parameters**

-- node: allows RPC to send requests to the specified node
-- callback: callback after getting block height
+- node: allows RPC to send requests to the specified node
+- callback: callback after getting the block height

**Return value**

@@ -172,11 +172,11 @@ Asynchronously obtains the latest block height of the group corresponding to the

### getTotalTransactionCount

-Obtain the transaction statistics of the client group, including the number of transactions on the chain and the number of failed transactions on the chain.。
+Obtain the transaction statistics of the client group, including the number of transactions on the chain and the number of failed transactions on the chain.

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node

**Return value**

@@ -204,12 +204,12 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"getTotalTransactionCount","param

### getTotalTransactionCountAsync

-Asynchronously obtains the transaction statistics of the client corresponding to the group, including the number of transactions on the chain and the number of failed transactions on the chain.。
+Asynchronously obtains the transaction statistics of the group corresponding to the client, including the number of transactions on the chain and the number of failed transactions on the chain.

**Parameters**

-- node: allows RPC to send requests to the specified node
-- callback: callback after obtaining transaction information
+- node: allows RPC to send requests to the specified node
+- callback: callback after getting the transaction information

**Return value**

@@ -221,11 +221,11 @@ Obtain block information according to block height.

**Parameters**

-- node: allows RPC to send requests to the specified node;
- blockNumber: Block height;
-- onlyHeader: true / false, indicating whether only the block header data is obtained in the obtained block information.;
-- onlyTxHash: true / false, indicating whether the obtained block information contains complete transaction information.;
-  - false: The block returned by the node contains complete transaction information.;
+- node: allows RPC to send requests to the specified node;
+- onlyHeader: true / false, indicating whether only the block header data is obtained in the obtained block information;
+- onlyTxHash: true / false, indicating whether the obtained block information contains complete transaction information;
+  - false: The block returned by the node contains complete transaction information;
  - true: The block returned by the node contains only the transaction hash.

**Return value**

@@ -266,13 +266,13 @@ Obtain block information asynchronously according to block height.

**Parameters**

-- node: allows RPC to send requests to the specified node
- blockNumber: Block height;
-- onlyHeader: true / false, indicating whether only the block header data is obtained in the obtained block information.;
+- node: allows RPC to send requests to the specified node
+- onlyHeader: true / false, indicating whether only the block header data is obtained in the obtained block information;
- onlyTxHash: true / false, indicating whether the obtained block information contains complete transaction information;
-  - false: The block returned by the node contains complete transaction information.;
+  - false: The block returned by the node contains complete transaction information;
  - true: The block returned by the node contains only the transaction hash;
-- callback: callback after block completion.
+- callback: callback after the block information is obtained.

**Return value**

@@ -284,12 +284,12 @@ Obtain block information based on block hash.

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node
- blockHash: Block hash
-- onlyHeader: true / false, indicating whether only the block header data is obtained in the obtained block information.;
+- onlyHeader: true / false, indicating whether only the block header data is obtained in the obtained block information;
- onlyTxHash: true / false, indicating whether the obtained block information contains complete transaction information;
  - true: The block returned by the node contains only the transaction hash;
-  - false: The block returned by the node contains complete transaction information.。
+  - false: The block returned by the node contains complete transaction information.

**Return value**

@@ -357,13 +357,13 @@ Asynchronously obtain block information based on block hash.

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node
- blockHash: Block hash
-- onlyHeader: true / false, indicating whether only the block header data is obtained in the obtained block information.;
+- onlyHeader: true / false, indicating whether only the block header data is obtained in the obtained block information;
- onlyTxHash: true / false, indicating whether the obtained block information contains complete transaction information;
  - true: The block returned by the node contains only the transaction hash;
-  - false: The block returned by the node contains complete transaction information.;
-- callback: callback after block completion。
+  - false: The block returned by the node contains complete transaction information;
+- callback: callback after the block information is obtained.

**Return value**

@@ -375,7 +375,7 @@ Obtain block hash based on block height

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node
- blockNumber: Block height

**Return value**

@@ -401,9 +401,9 @@ Obtain block hash asynchronously based on block height

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node
- blockNumber: Block height
-- callback: callback after getting
+- callback: callback after getting the block hash

**Return value**

@@ -415,9 +415,9 @@ Get transaction information based on transaction hash.

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node
- transactionHash: Transaction hash
-- withProof: whether to bring Merkel tree proof
+- withProof: whether to return the Merkle tree proof

**Return value**

@@ -429,10 +429,10 @@ Asynchronous acquisition of transaction information based on transaction hash.

**Parameters**

-- node: allows RPC to send requests to the specified node
- transactionHash: Transaction hash
-- withProof: whether to bring Merkel tree proof
-- callback: the callback when the transaction is obtained.
+-withProof: whether to bring Merkel Tree Proof +-callback: Get the callback at the time of the transaction **Return value** @@ -444,9 +444,9 @@ Get transaction receipt information based on transaction hash。 **Parameters** -- node: allows RPC to send requests to the specified node +-node: allows RPC to send requests to the specified node - transactionHash: Transaction Hash -- withProof: return whether to bring Merkel tree proof +-withProof: return whether to bring Merkel tree proof **Return value** @@ -491,10 +491,10 @@ Asynchronously obtain transaction receipt information based on transaction hash **Parameters** -- node: allows RPC to send requests to the specified node +-node: allows RPC to send requests to the specified node - transactionHash: Transaction Hash -- withProof: return whether to bring Merkel tree proof -- callback: callback when obtaining transaction receipt +-withProof: return whether to bring Merkel tree proof +-callback: callback when getting transaction receipt **Return value** @@ -506,7 +506,7 @@ Get the number of unprocessed transactions in the transaction pool。 **Parameters** -- node: allows RPC to send requests to the specified node +-node: allows RPC to send requests to the specified node **Return value** @@ -530,8 +530,8 @@ Get the number of unprocessed transactions in the transaction pool。 **Parameters** -- node: allows RPC to send requests to the specified node -- callback: callback when obtaining transaction receipt +-node: allows RPC to send requests to the specified node +-callback: callback when getting transaction receipt **Return value** @@ -628,7 +628,7 @@ Asynchronously obtain the network connection information of a specified node。 **Parameters** -- callback: callback after getting +-callback: callback after getting **Return value** @@ -640,7 +640,7 @@ Get Node Synchronization Status。 **Parameters** -- node: allows RPC to send requests to the specified node +-node: allows RPC to send requests to the specified node **Return 
value** @@ -664,8 +664,8 @@ Asynchronously get node synchronization status。 **Parameters** -- node: allows RPC to send requests to the specified node -- callback: callback after obtaining synchronization information +-node: allows RPC to send requests to the specified node +-callback: callback after getting synchronization information **Return value** @@ -677,8 +677,8 @@ Gets the value of the system configuration item based on the specified configura **Parameters** -- node: allows RPC to send requests to the specified node -- key: System configuration items, including 'tx _ count _ limit' and 'consensus _ leader _ period'. +-node: allows RPC to send requests to the specified node +- key: System configuration items, including 'tx _ count _ limit' and 'consensus _ leader _ period' **Return value** @@ -704,9 +704,9 @@ Asynchronously gets the value of the system configuration item based on the spec **Parameters** -- node: allows RPC to send requests to the specified node -- key: System configuration items, including 'tx _ count _ limit' and 'consensus _ leader _ period'. 
-- callback: callback after getting the configuration item
+- node: allows RPC to send requests to the specified node
+- key: system configuration item, including 'tx _ count _ limit' and 'consensus _ leader _ period'
+- callback: callback after getting the configuration item

**Return value**

@@ -720,7 +720,7 @@ Obtain the observation node list of the group corresponding to the client。

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node

**Return value**

@@ -746,8 +746,8 @@ Asynchronously obtain the observation node list of the client corresponding to t

**Parameters**

-- node: allows RPC to send requests to the specified node
-- callback: callback after getting the node list
+- node: allows RPC to send requests to the specified node
+- callback: callback after getting the node list

**Return value**

@@ -759,7 +759,7 @@ Obtain the consensus node list of the client group。

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node

**Return value**

@@ -797,8 +797,8 @@ Asynchronously obtain the consensus node list of the corresponding client group

**Parameters**

-- node: allows RPC to send requests to the specified node
-- callback: callback after getting the node list
+- node: allows RPC to send requests to the specified node
+- callback: callback after getting the node list

**Return value**

@@ -810,7 +810,7 @@ Obtain PBFT view information when a node uses the PBFT consensus algorithm。

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node

**Return value**

@@ -835,8 +835,8 @@ Asynchronously obtains PBFT view information when a node uses the PBFT consensus

**Parameters**

-- node: allows RPC to send requests to the specified node
-- callback: callback after obtaining PBFT view information
+- node: allows RPC to send requests to the specified node
+- callback: callback after obtaining PBFT view information

**Return value**

@@ -848,7 +848,7 @@ Get Node Consensus Status。

**Parameters**

-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send requests to the specified node

**Return value**

@@ -873,8 +873,8 @@ Asynchronously get node consensus state。

**Parameters**

-- node: allows RPC to send requests to the specified node
-- callback: callback after getting the status
+- node: allows RPC to send requests to the specified node
+- callback: callback after getting the status

**Return value**

@@ -957,7 +957,7 @@ Query the status information of the current group asynchronously。

**Parameters**

-- callback: callback after status information is queried
+- callback: callback after status information is queried

**Return value**

@@ -1000,7 +1000,7 @@ Asynchronously obtain the group list of the current node。

**Parameters**

-- callback: callback after getting the group list
+- callback: callback after getting the group list

**Return value**

@@ -1041,7 +1041,7 @@ Asynchronously obtains the list of nodes connected to the specified group of the

**Parameters**

-- callback: callback after getting the node list
+- callback: callback after getting the node list

**Return value**

@@ -1124,7 +1124,7 @@ Asynchronously obtain the current node group information list。

**Parameters**

-- callback: callback after obtaining group information
+- callback: callback after obtaining group information

**Return value**

@@ -1161,13 +1161,13 @@ Asynchronously obtain information about a specified node in a group。

**Parameters**

- node: Specify node name
-- callback: callback after obtaining information
+- callback: callback after obtaining information

**Return value**

- None

-## 5. Pre-compiled contract service interface.
+## 5. Pre-compiled contract service interface

### 5.1 BFSService

@@ -1177,23 +1177,23 @@ Creates a directory at the specified absolute path。

**Parameters**

-- path: absolute path
+- path: absolute path

**Return value**

-- RetCode: Create Directory Results。
+- RetCode: create directory result。

#### list

-View the information of the specified absolute path. If it is a directory file, the meta information of all sub-resources in the directory is returned. If it is another file, the meta information of the file is returned.。
+View the information of the specified absolute path. If it is a directory, the meta information of all sub-resources in the directory is returned; otherwise, the meta information of the file itself is returned。

**Parameters**

-- path: absolute path
+- path: absolute path

**Return value**

-- List\ < BFSCompiled.BfsInfo\ >: Returns a list of meta information for the file。
+- List\<BFSCompiled.BfsInfo\>: returns a list of meta information for the file。

### link

@@ -1201,14 +1201,14 @@ Create a link file for the contract, in the absolute path / apps / directory。F

**Parameters**

-- name: contract name
-- version: Version name
-- contractAddress: contract address
-- abi: Contract ABI
+- name: contract name
+- version: version name
+- contractAddress: contract address
+- abi: contract ABI

**Return value**

-- RetCode: Create Linked File Results。
+- RetCode: create linked file result。

### readlink

@@ -1216,11 +1216,11 @@ Read the real contract address pointed to by the linked file

**Parameters**

-- path: absolute path
+- path: absolute path

**Return value**

-- String: contract address
+- String: contract address

### 5.2 ConsensusService

@@ -1229,8 +1229,8 @@ Read the real contract address pointed to by the linked file

**Parameters**

-- nodeId: The ID of the node added as the consensus node.
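The `link` / `readlink` pair above maps a contract name and version to a link file under the absolute path `/apps/`. As a rough off-chain illustration of that mapping (the real resolution is performed on-chain by BFSService; the `/apps/<name>/<version>` path layout and the in-memory table here are assumptions for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

public class BfsLinkSketch {
    // Stand-in for the on-chain BFS link table (illustrative only).
    private final Map<String, String> links = new HashMap<>();

    // link: register a contract address under /apps/<name>/<version>
    public String link(String name, String version, String contractAddress) {
        String path = "/apps/" + name + "/" + version;
        links.put(path, contractAddress);
        return path;
    }

    // readlink: resolve the real contract address behind a link file,
    // or null if the path is not a registered link
    public String readlink(String path) {
        return links.get(path);
    }
}
```

The point is only that `link` is a write of a (path, address) pair and `readlink` is the corresponding lookup; the on-chain service additionally stores the ABI and returns `RetCode` results.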
-- weight: add the weight of the consensus node
+- nodeId: the ID of the node added as the consensus node
+- weight: the weight of the added consensus node

**Return value**

@@ -1238,7 +1238,7 @@ Read the real contract address pointed to by the linked file

```eval_rst
.. note::
-   In order to ensure that the new node does not affect the consensus, the node to be added as a consensus node must establish a P2P network connection with other nodes in the group, otherwise it cannot be added as a consensus node.。
+   In order to ensure that the new node does not affect the consensus, the node to be added as a consensus node must establish a P2P network connection with other nodes in the group, otherwise it cannot be added as a consensus node。
```

#### addObserver

@@ -1247,7 +1247,7 @@ Add the specified node as an observation node。

**Parameters**

-- nodeId: The ID of the node added as an observation node.
+- nodeId: The ID of the node added as an observation node

**Return value**

@@ -1259,7 +1259,7 @@ Move the specified node out of the group。

**Parameters**

-- nodeId: The node ID of the node removed from the group.
+- nodeId: The node ID of the node removed from the group

**Return value**

@@ -1273,7 +1273,7 @@ Sets the value of the specified system configuration item。

**Parameters**

-- key: Configuration item. Currently, 'tx _ count _ limit' and 'consensus _ leader _ period' are supported.;
+- key: Configuration item. Currently, 'tx _ count _ limit' and 'consensus _ leader _ period' are supported;

- value: The value to which the system configuration item is set。

@@ -1291,7 +1291,7 @@ Create User Table。

- tableName: Name of the created user table;
- keyFieldName: Primary key name of the user table;
-- valueFields: The fields of the user table.
+- valueFields: The fields of the user table

**Return value**

@@ -1318,7 +1318,7 @@ Query specified records in the user table。

**Parameters**

- tableName: Queried user table name;
-- key: the primary key value to be queried.;
+- key: the primary key value to be queried;

**Return value**

@@ -1334,7 +1334,7 @@ Obtain the description information of the specified user table。

**Return value**

-- Map: Description of the user table. The mapping between 'PrecompiledConstant.KEY _ NAME' and the mapping between 'PrecompiledConstant.FIELD _ NAME' and all fields. The fields are separated by commas.。
+- Map: description of the user table: 'PrecompiledConstant.KEY _ NAME' maps to the primary key, and 'PrecompiledConstant.FIELD _ NAME' maps to all fields, separated by commas。

#### asyncSet

@@ -1353,21 +1353,21 @@ Obtain the description information of the specified user table。

### 5.5 CNSService

-**Note:** from 3.0.0-rc3 version started, CNS is no longer supported。Please refer to the BFSService link function for the corresponding contract alias function.。
+**Note:** Starting with version 3.0.0-rc3, CNS is no longer supported。Please refer to the BFSService link function for the corresponding contract alias function。

## 6.
AuthManager Rights Management Interface

Rights management interfaces include the following three interfaces:

- Query interface without permission;
-- Governance Committee-specific interface: An interface that has the private key of the governance committee to initiate transactions in order to execute correctly.;
-- Administrator-specific interface: An interface where transactions initiated by an administrator's private key with administrative privileges on the corresponding contract can be executed correctly.。
+- Governance Committee-specific interface: an interface whose transactions execute correctly only when initiated with a governance committee member's private key;
+- Administrator-specific interface: an interface whose transactions execute correctly only when initiated with the private key of an administrator who has administrative rights to the corresponding contract。

### 6.1 Query interface without permission

#### getCommitteeInfo

-At initialization, a governance committee is deployed whose address information is automatically generated or specified at build _ chain.sh.。Initialize only one member, and the weight of the member is 1。
+At initialization, a governance committee is deployed whose address information is automatically generated or specified by build _ chain.sh。Only one member is initialized, and the weight of the member is 1。

**Parameters**

@@ -1375,7 +1375,7 @@ At initialization, a governance committee is deployed whose address information

**Return value**

-- CommitteeInfo: Details of the Governance Committee
+- CommitteeInfo: governance committee details

#### getProposalInfo

@@ -1383,7 +1383,7 @@ Get information about a specific proposal。

**Parameters**

-- proposalID: the ID number of the proposal
+- proposalID: ID number of the proposal

**Return value**

@@ -1399,7 +1399,7 @@ Get the permissions policy for the current global deployment

**Return value**

-- BigInteger: policy type: 0 is no policy, 1 is whitelist mode, 2 is blacklist mode
+- BigInteger: policy type: 0 is no policy, 1 is whitelist mode, 2 is blacklist mode

#### checkDeployAuth

@@ -1407,25 +1407,25 @@ Check whether an account has deployment permissions

**Parameters**

-- account: account address
+- account: account address

**Return value**

-- Boolean: Permission
+- Boolean: whether the account has permission

#### checkMethodAuth

-Check whether an account has the permission to call an interface of a contract.
+Check whether an account has the permission to call an interface of a contract

**Parameters**

-- contractAddr: contract address
-- func: function selector for the interface, 4 bytes
-- account: account address
+- contractAddr: contract address
+- func: function selector for the interface, 4 bytes
+- account: account address

**Return value**

-- Boolean: Permission
+- Boolean: whether the account has permission

#### getAdmin

@@ -1433,11 +1433,11 @@ Get the administrator address for a specific contract

**Parameters**

-- contractAddr: contract address
+- contractAddr: contract address

**Return value**

-- account: account address
+- account: account address

### 6.2 Special interface for accounts of governance committee members

@@ -1445,16 +1445,16 @@ There must be an account in the Governance Committee's Governors to call, and if

#### updateGovernor

-In the case of a new governance committee, add an address and weight.。If you are deleting a governance member, you can set the weight of a governance member to 0。
+To add a new governance member, provide an address and weight。To delete a governance member, set that member's weight to 0。

**Parameters**

-- account: account address
-- weight: account weight
+- account: account address
+- weight: account weight

**Return value**

-- proposalId: returns the ID number of the proposal
+- proposalId: returns the ID number of the proposal

#### setRate

@@ -1462,24 +1462,24 @@ Set proposal threshold, which is divided into participation threshold and weight

**Parameters**

-- participatesRate: participation threshold, in percentage units
-- winRate: by weight threshold, percentage unit
+- participatesRate: participation threshold, as a percentage
+- winRate: weight threshold, as a percentage

**Return value**

-- proposalId: returns the ID number of the proposal
+- proposalId: returns the ID number of the proposal

#### setDeployAuthType

-Set the ACL policy for deployment. Only white _ list and black _ list policies are supported.
+Set the ACL policy for deployment. Only white _ list and black _ list policies are supported

**Parameters**

-- deployAuthType: When type is 1, it is set to a whitelist. When type is 2, it is set to a blacklist.。
+- deployAuthType: when the type is 1, the policy is set to whitelist; when the type is 2, it is set to blacklist。

**Return value**

-- proposalId: returns the ID number of the proposal
+- proposalId: returns the ID number of the proposal

#### modifyDeployAuth

@@ -1487,12 +1487,12 @@ Modify a deployment permission proposal for an administrator account

**Parameters**

-- account: account address
-- openFlag: whether to enable or disable permissions
+- account: account address
+- openFlag: whether to enable or disable the permission

**Return value**

-- proposalId: returns the ID number of the proposal
+- proposalId: returns the ID number of the proposal

#### resetAdmin

@@ -1500,12 +1500,12 @@ Resetting an administrator account proposal for a contract

**Parameters**

-- newAdmin: Account address
-- contractAddr: contract address
+- newAdmin: account address
+- contractAddr: contract address

**Return value**

-- proposalId: returns the ID number of the proposal
+- proposalId: returns the ID number of the proposal

#### revokeProposal

@@ -1513,7 +1513,7 @@ Undo the initiation of a proposal, an operation that only the governance committ

**Parameters**

-- proposalId: ID number of the proposal
+- proposalId: ID number of the proposal

**Return value**

@@ -1525,8 +1525,8 @@ vote on a proposal

**Parameters**

-- proposalId: ID number of the proposal
-- agree: Do you agree to this proposal?
+- proposalId: ID number of the proposal
+- agree: whether to approve the proposal

**Return value**

@@ -1534,21 +1534,21 @@ vote on a proposal

### 6.3 Special interface for contract administrator account

-Each contract has an independent administrator. Only the administrator account of a contract can set the interface permissions of the contract.。
+Each contract has an independent administrator. Only the administrator account of a contract can set the interface permissions of the contract。

#### setMethodAuthType

-Set the API call ACL policy of a contract. Only white _ list and black _ list policies are supported.
+Set the API call ACL policy of a contract. Only white _ list and black _ list policies are supported

**Parameters**

-- contractAddr: contract address
-- func: function selector for the contract interface, four bytes in length。
-- authType: When type is 1, it is set to a whitelist. When type is 2, it is set to a blacklist.。
+- contractAddr: contract address
+- func: function selector for the contract interface, four bytes in length。
+- authType: when the type is 1, the policy is set to whitelist; when the type is 2, it is set to blacklist。

**Return value**

-- result: If it is 0, the setting is successful。
+- result: if it is 0, the setting is successful。

#### setMethodAuth

@@ -1556,11 +1556,11 @@ Modify the interface call ACL policy of a contract。

**Parameters**

-- contractAddr: contract address
-- func: function selector for the contract interface, four bytes in length。
-- account: account address
-- isOpen: whether to enable or disable permissions
+- contractAddr: contract address
+- func: function selector for the contract interface, four bytes in length。
+- account: account address
+- isOpen: whether to enable or disable the permission

**Return value**

-- result: If it is 0, the setting is successful。
+- result: if it is 0, the setting is successful。

diff --git a/3.x/en/docs/develop/committee_usage.md b/3.x/en/docs/develop/committee_usage.md
index 0306ce303..cd0924ad2 100644
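The `setMethodAuthType` / `setMethodAuth` / `checkMethodAuth` interfaces above amount to a per-interface ACL with three policy types (0 no policy, 1 whitelist, 2 blacklist). A minimal off-chain sketch of that check logic, assuming the semantics described in the text (the actual enforcement is done by the on-chain auth precompiled contract, not by this class):

```java
import java.util.HashSet;
import java.util.Set;

public class MethodAclSketch {
    private int authType = 0;              // 0: no policy, 1: whitelist, 2: blacklist
    private final Set<String> accounts = new HashSet<>();

    public void setMethodAuthType(int authType) { this.authType = authType; }

    // isOpen = true adds the account to the policy list, false removes it
    public void setMethodAuth(String account, boolean isOpen) {
        if (isOpen) accounts.add(account); else accounts.remove(account);
    }

    public boolean checkMethodAuth(String account) {
        switch (authType) {
            case 1:  return accounts.contains(account);   // whitelist: only listed accounts may call
            case 2:  return !accounts.contains(account);  // blacklist: listed accounts are denied
            default: return true;                         // no policy: everyone may call
        }
    }
}
```

Reusing one account list for both policy types is a simplification of this sketch; the point is only how the three policy values change the meaning of membership in the list.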
--- a/3.x/en/docs/develop/committee_usage.md
+++ b/3.x/en/docs/develop/committee_usage.md
@@ -4,33 +4,33 @@ Tags: "Contract Permissions" "Deployment Permissions" "Permission Control" "Perm

----

-FISCO BCOS 3.x introduces the authority governance system of contract granularity.。The governance committee can manage the deployment of the contract and the interface call permission of the contract by voting.。
+FISCO BCOS 3.x introduces a contract-granularity permission governance system。The governance committee can manage contract deployment and contract interface call permissions by voting。

For detailed design, please refer to the link: [Permission Management System Design](../design/committee_design.md)

## Enable permission governance mode

-Before the blockchain is initialized and started, you must enable and set the permission governance configuration in the configuration to correctly start the permission governance mode.。Reconfiguration after blockchain startup will not work。
+Before the blockchain is initialized and started, you must enable and set the permission governance configuration to correctly start the permission governance mode。Reconfiguration after blockchain startup will not work。

-To enable the permission governance mode, set the 'is _ auth _ check' option to 'true' and set the 'auth _ admin _ account' initial committee account address to the correct address.。
+To enable the permission governance mode, set the 'is _ auth _ check' option to 'true' and set the 'auth _ admin _ account' initial committee account address to the correct address。

-Different node deployment modes of FISCO BCOS have slightly different ways to enable permission governance。This section will discuss separately how to turn on permission governance in different node deployment modes.。
+Different node deployment modes of FISCO BCOS have slightly different ways to enable permission governance。This section discusses how to turn on permission governance in each node deployment mode。

### FISCO BCOS Air Edition Opens Permission Governance

-FISCO BCOS Air version of the chain deployment tool details, please refer to: [Air deployment tool](../tutorial/air/build_chain.md)。Take building four nodes as an example to enable permission governance settings.。
+For details of the FISCO BCOS Air edition chain deployment tool, please refer to: [Air deployment tool](../tutorial/air/build_chain.md)。The following takes building four nodes as an example of enabling the permission governance settings。

-Chain building deployment tools are-A 'and'-a 'Two modes for enabling permission mode:
+The deployment tool has two options, '-A' and '-a', to enable the permission mode:

- `-A`: The permission setting will be enabled, and an account address will be randomly generated by using the 'get _ account.sh' and 'get _ gm _ account.sh' scripts, and the public-private key pair of the generated account will be placed in the 'ca' directory of the chain. For details about creating and using an account, see [Creating and Using an Account](./account.md)
-- `-a ': will open the permission settings and specify an account address as the only account for initializing the governance committee.**When specifying, you must confirm that the account exists and that the account address is correct, otherwise permission governance will be unavailable because there is no governance committee authority.**。
+- `-a`: turns on the permission settings and specifies an account address as the only account to initialize the governance committee。**When specifying, you must confirm that the account exists and that the account address is correct, otherwise permission governance will be unavailable because there is no governance committee authority**。

#### Example of enabling permission governance

-Use '-A 'option to enable permission mode, you can see that' Auth Mode 'has been enabled,' Auth init account 'initial account is' 0x976fe0c250181c7ef68a17d3bc34916978da103a '。
+Use the '-A' option to enable the permission mode. You can see that 'Auth Mode' is enabled and the initial 'Auth init account' is '0x976fe0c250181c7ef68a17d3bc34916978da103a'。

```shell
-## If you use-A option, the permission setting is turned on, and an account address is randomly generated as the only admin account for initializing the governance committee.
+## If the -A option is used, the permission setting is turned on and an account address is randomly generated as the only admin account to initialize the governance committee
bash build_chain.sh -l 127.0.0.1:4 -o nodes -A

[INFO] Downloading fisco-bcos binary from https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.0.0/fisco-bcos-linux-x86_64.tar.gz ...

@@ -65,11 +65,11 @@ ls nodes/ca/accounts
0x976fe0c250181c7ef68a17d3bc34916978da103a.pem  0x976fe0c250181c7ef68a17d3bc34916978da103a.public.pem
```

-Use '-a 'option to enable permission mode, specify the account address as the initial governance member, you can see that' Auth Mode 'has been enabled,' Auth init account 'initial account is' 0x976fe0c250181c7ef68a17d3bc34916978da103a '
+Use the '-a' option to enable the permission mode and specify the account address as the initial governance member. You can see that 'Auth Mode' is enabled and the initial 'Auth init account' is '0x976fe0c250181c7ef68a17d3bc34916978da103a'

```shell
-## If you use-a option, the permission settings are turned on and the account address is specified as the only admin account for initializing the governance committee
+## If you use the -a option, the permission settings are turned on and the specified account address is the only admin account to initialize the governance committee
bash build_chain.sh -l 127.0.0.1:4 -o nodes -a 0x976fe0c250181c7ef68a17d3bc34916978da103a

[INFO] Generate ca cert successfully!
@@ -98,7 +98,7 @@ Processing IP:127.0.0.1 Total:4

#### View node permission configuration

-Either use '-A 'or'-a 'option enables permission governance, which is reflected in the configuration of each node。When the node starts initialization, it will read the configuration and initialize the permission contract.。
+Whether permission governance is enabled with the '-A' or '-a' option, it is reflected in the configuration of each node。When the node starts initializing, it reads the configuration and initializes the permission contract。

Let's take 'nodes / 127.0.0.1 / node0 / config.genesis' as an example:

@@ -115,9 +115,9 @@ Let's take 'nodes / 127.0.0.1 / node0 / config.genesis' as an example:

### FISCO BCOS Pro / Max Edition Enable Permission Governance

-FISCO BCOS Pro version of the build chain deployment tool details please refer to: [build Pro version of the blockchain network](../tutorial/pro/installation.md)。Take BcosBuilder as an example to enable permission governance settings.。
+For details of the FISCO BCOS Pro edition chain deployment tool, please refer to: [build Pro version of the blockchain network](../tutorial/pro/installation.md)。The following takes BcosBuilder as an example of enabling the permission governance settings。

-Before enabling the Pro / Max blockchain network permission mode, ensure that [Deploy Pro Blockchain Node] has been completed.(../tutorial/pro/installation.html#id4)All previous steps。
+Before enabling the Pro / Max blockchain network permission mode, ensure that all previous steps of [Deploy Pro Blockchain Node](../tutorial/pro/installation.html#id4) have been completed。

When copying a configuration file, you need to manually configure permissions to initialize the configuration。To copy a configuration file, refer to: [Deploying RPC Services](../tutorial/pro/installation.html#rpc)

@@ -145,17 +145,17 @@ init_auth_address="0x976fe0c250181c7ef68a17d3bc34916978da103a"

...
``` -After completing the configuration items, you can continue to deploy RPC services, GateWay services, and node services.。Continue process reference: [Deploy RPC Service](../tutorial/pro/installation.html#rpc) +After completing the configuration items, you can continue to deploy RPC services, GateWay services, and node services。Continue process reference: [Deploy RPC Service](../tutorial/pro/installation.html#rpc) ## Console Use -The console has commands dedicated to permission governance and commands to switch console accounts.。You can use the console to manage permissions. For more information, see [Permission command](../operation_and_maintenance/console/console_commands.html#id14)。The permission governance command will only appear if the console is connected to the node with permission governance enabled.。 +The console has commands dedicated to permission governance and commands to switch console accounts。You can use the console to manage permissions. For more information, see [Permission command](../operation_and_maintenance/console/console_commands.html#id14)。The permission governance command will only appear if the console is connected to the node with permission governance enabled。 Console operation commands include the following three types. 
For details, see [Permission Operation Commands](../operation_and_maintenance/console/console_commands.html#id14):

- Query status command, which has no permission control and is accessible to all accounts。
-- Governance Committee Special Orders, which can only be used if the account of the Governance Committee is held。
-- Contract administrator-specific commands that can only be accessed by an administrator account with administrative privileges on a contract。
+- Governance committee-specific commands, which can only be used when a governance committee member account is held。
+- Contract administrator-specific commands, which can only be used by an administrator account with administrative privileges on a contract。

## Use examples

@@ -163,7 +163,7 @@ First, use the build _ chain.sh script to build a blockchain with permission mod

Reference here [Creating and Using an Account](./account.md)link to create a new account, specifying that the account address of the initialization governance member is 0x1cc06388cd8a12dcf7fb8967378c0aea4e6cf642

-You can use '-A 'option to automatically generate an account。Accounts are distinguished between state and non-state secrets and are automatically generated based on the type of chain。
+You can use the '-A' option to automatically generate an account。Accounts are divided into national-cryptography (SM) and non-national-cryptography types and are generated automatically according to the type of chain。

```shell
bash build_chain.sh -l 127.0.0.1:4 -o nodes4 -a 0x1cc06388cd8a12dcf7fb8967378c0aea4e6cf642
@@ -171,9 +171,9 @@ You can use '-A 'option to automatically generate an account。Accounts are dist

### 1. Use of governance members

-Use the 'getCommitteeInfo' command to see that there is only one governance committee at initialization, with a weight of 1.
+Use the 'getCommitteeInfo' command to see that there is only one governance committee member at initialization, with a weight of 1

-And the account used in the current console is the member.
+And the account used in the current console is the member ```shell [group0]: /> getCommitteeInfo @@ -196,7 +196,7 @@ As you can see, a proposal was launched, proposal number 1。 Because the current governance committee has only one member and both the participation threshold and the weight threshold are zero, the proposal initiated is certain to succeed。 -Use the 'getCommitteeInfo' command to see that the weight of the governance committee has indeed been updated. +Use the 'getCommitteeInfo' command to see that the weight of the governance committee has indeed been updated ```shell [group0]: /> updateGovernorProposal 0x1cc06388cd8a12dcf7fb8967378c0aea4e6cf642 2 @@ -215,7 +215,7 @@ Against Voters: You can also add new governance members using 'updateGovernorProposal': -Only length and character checks will be done here, not correctness checks.。You can see the successful addition of a governance committee with a weight of 1 +Only length and character checks will be done here, not correctness checks。You can see the successful addition of a governance committee with a weight of 1 ```shell [group0]: /> updateGovernorProposal 0xba0cd3e729cfe3ebdf1f74a10ec237bfd3954e1e 1 @@ -233,7 +233,7 @@ Against Voters: You can also use 'updateGovernorProposal' to delete governance members: -If the account weight is set to 0, the governance member is deleted. +If the account weight is set to 0, the governance member is deleted ```shell [group0]: /> updateGovernorProposal 0xba0cd3e729cfe3ebdf1f74a10ec237bfd3954e1e 0 @@ -289,7 +289,7 @@ At this point, the Commission's participation rate must be greater than 51, the Use the current account to initiate the 'setDeployAuthTypeProposal' proposal, change the global deployment permission policy, and use the whitelist mode。 -At this point, you can see that the type of the sixth proposal is' setDeployAuthType 'and the status is' notEnoughVotes'. 
The proposal cannot be passed yet, and the current deployment permission policy is still in the no-policy state.。
+At this point, you can see that the type of the sixth proposal is 'setDeployAuthType' and the status is 'notEnoughVotes'. The proposal cannot be passed yet, and the current deployment permission policy is still in the no-policy state。

```shell
[group0]: /> setDeployAuthTypeProposal white_list
@@ -308,7 +308,7 @@ Against Voters:
There is no deploy strategy, everyone can deploy contracts.
```

-Switch to another committee account and vote on proposal 6, you can see that the vote was successful and the proposal status changed to end.。Deployment policy also becomes whitelist mode。
+Switch to another committee account and vote on proposal 6. You can see that the vote was successful and the proposal status changed to end。The deployment policy also becomes whitelist mode。

```shell
[group0]: /> loadAccount 0xba0cd3e729cfe3ebdf1f74a10ec237bfd3954e1e
@@ -333,9 +333,9 @@ Deploy strategy is White List Access.

### 2. Deployment Permissions

-Continue. The deployment permission of the current chain is in whitelist mode.。
+Continuing, the deployment permission of the current chain is in whitelist mode。

-The governance committee does not have the permission to deploy, but the governance committee can initiate the deployment permission to open an account.。
+The governance committee itself does not have the permission to deploy, but it can initiate a proposal to grant deployment permission to an account。

You can also initiate a proposal to turn off deployment permissions through the command 'closeDeployAuthProposal'

@@ -393,11 +393,11 @@ At this point, the HelloWorld contract administrator for address 0x33E56a083e135

### 3. Contract Administrator Use

-The contract administrator of the current HelloWorld contract 0x33E56a083e135936C1144960a708c43A661706C0 is the '0xab835e87a86f94af10c81278bb9a82ea13d82d39' account.
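The proposal flow above passes or fails against the two percentage thresholds configured by `setRate`: a participation threshold over all committee weight, and a win threshold for the agreeing weight. A simplified sketch of such a calculation, with the parameter names taken from the `VoteComputerTemplate` interface (the authoritative logic is the on-chain VoteComputer contract; the exact comparison rule here is an assumption for illustration):

```java
public class VoteSketch {
    // Returns true if the proposal passes: the voted weight (doneVotes) covers at
    // least participatesRate% of all weight (allVotes), and the agreeing weight
    // (agreeVotes) covers at least winRate% of all weight. Percentages are 0..100.
    public static boolean passes(int agreeVotes, int doneVotes, int allVotes,
                                 int participatesRate, int winRate) {
        boolean enoughTurnout = doneVotes * 100 >= allVotes * participatesRate;
        boolean enoughAgree = agreeVotes * 100 >= allVotes * winRate;
        return enoughTurnout && enoughAgree;
    }
}
```

With both thresholds at 0 (the initial state above), any proposal passes immediately, which matches the observation that a single-member committee's proposal "is certain to succeed".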
+The contract administrator of the current HelloWorld contract 0x33E56a083e135936C1144960a708c43A661706C0 is the '0xab835e87a86f94af10c81278bb9a82ea13d82d39' account The contract administrator can set the interface policy for the current contract: -The contract administrator's "set" to the HelloWorld contract.(string)"The contract sets the whitelist mode, and after the setting is successful, the administrator does not have permission to call set(string)Interface +The contract administrator's "set" to the HelloWorld contract(string)"The contract sets the whitelist mode, and after the setting is successful, the administrator does not have permission to call set(string)Interface ```shell [group0]: /> getContractAdmin 0x33E56a083e135936C1144960a708c43A661706C0 @@ -479,12 +479,12 @@ Return values:(May the flame guide thee.) Initiate a proposal to upgrade the logic of voting calculations。The upgrade proposal vote calculation logic is divided into the following steps: -1. Write contracts based on interfaces.; +1. Write contracts based on interfaces; 2. Deploy the written contract on the chain and get the address of the contract; 3. Initiate a proposal to upgrade the voting calculation logic, enter the address of the contract as a parameter, and vote on it in the governance committee; -4. After the vote is passed (the voting calculation logic is still the original logic at this time), the voting calculation logic is upgraded.;Otherwise do not upgrade。 +4. 
After the vote is passed (the voting calculation logic is still the original logic at this time), the voting calculation logic is upgraded;Otherwise do not upgrade。 -The voting calculation logic contract can only be used according to a certain interface implementation.。For contract implementation, see the following interface contract 'VoteComputerTemplate.sol': +The voting calculation logic contract can only be used according to a certain interface implementation。For contract implementation, see the following interface contract 'VoteComputerTemplate.sol': ```solidity // SPDX-License-Identifier: Apache-2.0 @@ -513,7 +513,7 @@ abstract contract VoteComputerTemplate is BasicAuth { address[] memory againstVoters ) public view virtual returns (uint8); - / / This is a verification interface for computational logic for other governance members to verify the validity of the contract. + / / This is a verification interface for computational logic for other governance members to verify the validity of the contract function voteResultCalc( uint32 agreeVotes, uint32 doneVotes, @@ -592,7 +592,7 @@ ParticipatesRate: 0% , WinRate: 0% Governor Address | Weight index0 : 0x4a37eba43c66df4b8394abdf8b239e3381ea4221 | 2 -# Deploy the VoteComputer contract. The first parameter 0x10001 is a fixed address, and the second parameter is the address of the current governance committee member Committee. +# Deploy the VoteComputer contract. 
The first parameter 0x10001 is a fixed address, and the second parameter is the address of the current governance committee member Committee [group0]: /apps> deploy VoteComputer 0x10001 0xa0974646d4462913a36c986ea260567cf471db1f transaction hash: 0x429a7ceccefb3a4a1649599f18b60cac1af040cd86bb8283b9aab68f0ab35ae4 contract address: 0x6EA6907F036Ff456d2F0f0A858Afa9807Ff4b788 diff --git a/3.x/en/docs/develop/console_deploy_contract.md b/3.x/en/docs/develop/console_deploy_contract.md index 51b9be8ab..b1db74ed9 100644 --- a/3.x/en/docs/develop/console_deploy_contract.md +++ b/3.x/en/docs/develop/console_deploy_contract.md @@ -1,13 +1,13 @@ -# 5. Console deployment calls the contract. +# 5. Console deployment calls the contract ----- -This document describes how to configure the console and describes how the console deploys contracts and invokes contracts. +This document describes how to configure the console and describes how the console deploys contracts and invokes contracts ## 1. Download the configuration console ### Step 1. Install the console dependencies -Console running depends on Java environment(We recommend Java 14.)and the installation command is as follows: +Console running depends on Java environment(We recommend Java 14)and the installation command is as follows: ```shell # Ubuntu system installation java @@ -25,7 +25,7 @@ cd ~/fisco && curl -LO https://github.com/FISCO-BCOS/console/releases/download/v ```eval_rst .. note:: - - If you cannot download for a long time due to network problems, please try cd ~ / fisco & & curl-#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh + -If you cannot download for a long time due to network problems, please try cd ~ / fisco & & curl-#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh ``` ### Step 3. Configure the console @@ -38,12 +38,12 @@ cp -n console/conf/config-example.toml console/conf/config.toml ```eval_rst .. 
note:: - If the node does not use the default port, replace 20200 in the file with the corresponding rpc port of the node. You can use the "[rpc] .listen _ port" configuration item of the node config.ini to obtain the rpc port of the node.。 + If the node does not use the default port, replace 20200 in the file with the corresponding rpc port of the node, which can be obtained from the "[rpc].listen_port" configuration item in the node's config.ini. ``` -- Configure Console Certificates +- Configure console certificates -SSL connection is enabled by default between the console and the node. The console needs to configure a certificate to connect to the node.。The SDK certificate is generated at the same time as the node is generated. You can directly copy the generated certificate for the console to use: +SSL connection is enabled by default between the console and the node, so the console needs to configure a certificate to connect to the node. The SDK certificate is generated at the same time as the nodes; you can directly copy it for the console to use: ```shell cp -r nodes/127.0.0.1/sdk/* console/conf @@ -54,7 +54,7 @@ cp -r nodes/127.0.0.1/sdk/* console/conf ```eval_rst .. note:: - Please make sure that the 30300 ~ 30303, 20200 ~ 20203 ports of the machine are not occupied。 - - For console configuration methods and commands, please refer to 'here <.. / operation _ and _ maintenance / console / index.html >' _ implementation。 + - For console configuration methods and commands, please refer to `here <../operation_and_maintenance/console/index.html>`_. ``` - Start @@ -110,7 +110,7 @@ contract HelloWorld { ### Step 2. Deploy the HelloWorld contract -To facilitate the user's quick experience, the HelloWorld contract is built into the console and located in the console directory 'contracts / consolidation / HelloWorld.sol'.
+To facilitate a quick user experience, the HelloWorld contract is built into the console, located at 'contracts/solidity/HelloWorld.sol' in the console directory. ```shell # Enter the following command in the console to return the contract address if the deployment is successful diff --git a/3.x/en/docs/develop/contract_life_cycle.md b/3.x/en/docs/develop/contract_life_cycle.md index d5c6dda8e..de8b32f76 100644 --- a/3.x/en/docs/develop/contract_life_cycle.md +++ b/3.x/en/docs/develop/contract_life_cycle.md @@ -4,19 +4,19 @@ Tags: "Contract Management" "Contract Lifecycle" "Deployment Contract" "Call Con ---- -This document describes the entire life cycle of a contract from development, deployment, invocation, upgrade, freezing, to retirement, as well as the roles and management methods involved in the entire smart contract life cycle.。 +This document describes the entire life cycle of a contract, from development, deployment, invocation, upgrade, and freezing to retirement, as well as the roles and management methods involved throughout the smart contract life cycle. ```eval_rst .. important:: - For contract lifecycle management, freeze, unfreeze, and revoke operations, and contract deployment call permission control, you need to enable the blockchain permission mode. For more information, see '[Permission Management User Guide] < https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/committee_usage.html>`_ + For contract lifecycle management, freeze, unfreeze, and revoke operations, and contract deployment call permission control, you need to enable the blockchain permission mode. For more information, see the `Permission Governance User Guide <https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/committee_usage.html>`_ ``` ## 1.
Smart Contract Development FISCO BCOS platform supports three smart contract usage forms: Solidity, Liquid, and Precompiled。 -- The Solidity contract is the same as Ethereum and is implemented in Solidity syntax, in FISCO BCOS 3.0+Version 0.4.25 - 0.8.11 Solidity contract supported。 -- Liquid is an embedded domain specific language (eDSL) that can be used to write smart contracts that run on the underlying blockchain platform FISCO BCOS.。 +- Solidity contracts are the same as on Ethereum and are implemented in Solidity syntax; FISCO BCOS 3.0+ supports Solidity versions 0.4.25 - 0.8.11. +- Liquid is an embedded domain-specific language (eDSL) developed by the WeBank blockchain team and fully open source, which can be used to write smart contracts running on the underlying blockchain platform FISCO BCOS. - Precompiled contracts using C++Development, built into the FISCO BCOS platform, has better performance than the Solidity contract, and its contract interface needs to be predetermined at compile time, suitable for scenarios where logic is fixed but consensus is required。 **Extended reading** @@ -31,12 +31,12 @@ FISCO BCOS platform supports three smart contract usage forms: Solidity, Liquid, [Precompiled Contract Usage Documentation](../contract_develop/c++_contract/index.md) -## 2. Smart contract deployment and invocation. +## 2.
Smart contract deployment and invocation -After the user completes the development of the smart contract, the smart contract is deployed on the chain and the call transaction is initiated.。Users can use [SDK](./sdk/index.md)Package the compiled contract into a transaction and send it to the FISCO BCOS blockchain node on the chain.。The community has provided highly packaged tools that users can use quickly out of the box: +After the user completes smart contract development, the contract is deployed on the chain and call transactions are initiated. Users can use the [SDK](./sdk/index.md) to package the compiled contract into a transaction and send it to a FISCO BCOS blockchain node. The community provides highly packaged tools that users can use out of the box: -- Using the Console: [Console](../operation_and_maintenance/console/index.md)The console wraps the Java SDK, provides command line interaction, and provides developers with node query and management tools.。 -- Using the Java Contract Generation Tool: [Java Contract Generation Tool](../operation_and_maintenance/console/console_config.html#java)Supports automatic compilation of Solidity and generation of Java files, support for specifying wbc-Liquid compiles the WASM file and the ABI file to generate the Java file.。 +- Use the console: the [Console](../operation_and_maintenance/console/index.md) wraps the Java SDK, provides command-line interaction, and offers developers node query and management tools. +- Use the Java contract generation tool: the [Java Contract Generation Tool](../operation_and_maintenance/console/console_config.html#java) supports automatically compiling Solidity to generate Java files, and supports generating Java files from WASM and ABI files compiled with wbc-liquid. **Extended reading** @@ -50,11 +50,11 @@ After the user completes the development of the smart contra ## 3.
Smart contract data storage -After the smart contract is deployed, the underlying storage structure will create a data table to store the opcode, ABI JSON string, and state status data corresponding to the contract.。Among them: +After the smart contract is deployed, the underlying storage creates a data table to store the contract's opcode, ABI JSON string, and state data. Among them: -- Opcode is a code snippet that can only be generated after compilation, which is loaded into the virtual machine for execution each time it is called.; -- The ABI JSON string is an interface file stored for the convenience of external SDK calls, which records the parameter return format of each interface of the smart contract and the parallel conflict domain of each interface.; -- The state data stores the data that needs to be stored persistently when the smart contract is running, such as contract member variables.。 +- The opcode is a code snippet generated at compile time, which is loaded into the virtual machine for execution on each call; +- The ABI JSON string is an interface file stored for the convenience of external SDK calls; it records the parameter and return format of each smart contract interface and the parallel conflict field of each interface; +- The state data stores data that the smart contract needs to persist at runtime, such as contract member variables. Create a table named "Directory+Contract address. "Take the address 0x1234567890123456789012345678901234567890 as an example. The stored table name is" / apps / 1234567890123456789012345678901234567890, "where" / apps / "is the fixed prefix of BFS. For details, please refer to [BFS Design Document](../design/contract_directory.md)。 @@ -66,26 +66,26 @@ Create a table named "Directory+Contract address. "Take the address 0x1234567890 ## 4.
Smart Contract Upgrade -As you can see from Section 3 of this article, each smart contract deployment has a separate address on the chain, which corresponds to a separate storage table in storage.。Therefore, smart contract upgrades should also be divided into retained data upgrades and non-retained data upgrades.。 +As described in Section 3 of this article, each deployed smart contract has a separate address on the chain, corresponding to a separate storage table. Therefore, smart contract upgrades fall into two cases: upgrades that retain data and upgrades that do not. -- Retaining the old contract data upgrade is more complex, and the specific solutions are as follows: - - (Recommended) Users need to actively divide the contract into**logical contract** 和**Data contracts**The data contract is used to store the data that needs to be stored on the chain. The open data read / write interface is used by the logical contract. The logical contract calls the read / write interface of the data contract during calculation.。When you need to upgrade, you only need to upgrade the logical contract.
The new logical contract calls the old data contract interface, and the old logical contract is no longer used.。 - - (Recommended) Equivalent to the extension of the first solution, the data that needs to be stored is stored using the CRUD data interface, and the CRUD data is persisted on the chain through node consensus。For details, please refer to [Developing Applications Using CRUD Precompiled Contracts](../contract_develop/c++_contract/use_crud_precompiled.md), [Develop applications using KV storage precompiled contracts](../contract_develop/c++_contract/use_kv_precompiled.md) - - By using the delegate call's proxy contract to actively invoke the logical contract, the resulting state data is saved in the proxy contract, and the logical contract can be upgraded with the interface unchanged.。 -- The case of upgrading without retaining data is simpler, the user redeploys the upgraded contract, and there will be a new address。The contract based on the new address can be operated by the application, and the data of the new contract will also be used, and the data recorded by the old contract will exist on the chain, so the application needs to actively avoid the new business logic calling the old contract data.。 +- Upgrading while retaining the old contract data is more complex. The specific solutions are as follows: + - (Recommended) Users should divide the contract into a **logical contract** and a **data contract**. The data contract stores the data that needs to be kept on the chain and exposes read/write interfaces for the logical contract, which calls them during computation. When you need to upgrade, you only need to upgrade the logical contract.
The new logical contract calls the old data contract's interfaces, and the old logical contract is no longer used. + - (Recommended) As an extension of the first solution, the data to be stored is written through the CRUD data interface and persisted on the chain through node consensus. For details, please refer to [Developing Applications Using CRUD Precompiled Contracts](../contract_develop/c++_contract/use_crud_precompiled.md), [Develop applications using KV storage precompiled contracts](../contract_develop/c++_contract/use_kv_precompiled.md) + - By using a proxy contract that invokes the logical contract via delegatecall, the generated state data is saved in the proxy contract, and the logical contract can be upgraded while keeping the interface unchanged. +- Upgrading without retaining data is simpler: the user redeploys the upgraded contract, which gets a new address. The application operates the contract at the new address and uses its data; the data recorded by the old contract still exists on the chain, so the application must actively prevent new business logic from reading the old contract data. -## 5. Smart contract permission management operations. +## 5. Smart contract permission management operations ```eval_rst ..
important:: - Contract lifecycle management Freeze, unfreeze, and revoke operations, as well as contract deployment call permission control, all need to enable the blockchain permission mode, please refer to the 'Permission Management User Guide < https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/committee_usage.html>`_ + For contract lifecycle management (freeze, unfreeze, and revoke operations) and contract deployment call permission control, you must enable the blockchain permission mode; please refer to the `Permission Management User Guide <https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/committee_usage.html>`_ ``` -After the blockchain permission mode is enabled, each contract deployment creates a contract permission data table in addition to the contract storage data table in the storage layer, which is used to record the contract administrator address, contract status, and contract interface ACL.。By default, the administrator address of the contract is the address of the account that initiated the deployment contract operation (if there is a contract creation contract, the contract administrator address is the transaction initiation account tx.origin)。 +After the blockchain permission mode is enabled, each contract deployment creates a contract permission data table in the storage layer in addition to the contract storage data table; it records the contract administrator address, contract status, and contract interface ACL. By default, the contract administrator address is the address of the account that initiated the deployment (if a contract creates the contract, the administrator address is the transaction-initiating account tx.origin). ![](../../images/develop/contract_auth.png) -The contract administrator can operate the contract through the AuthManagerPrecompiled interface. The fixed address is 0x1005.。 +The contract administrator can operate the contract through the AuthManagerPrecompiled interface.
The fixed address is 0x1005. ```solidity enum Status{ @@ -118,33 +118,33 @@ abstract contract AuthManagerPrecompiled { ### 5.1 Freezing, unfreezing and abolishing smart contracts -The contract administrator can initiate a transaction on a precompiled contract with a fixed address of 0x1005 and read and write the status of the contract.。 +The contract administrator can initiate a transaction to the precompiled contract at the fixed address 0x1005 to read and write the status of the contract. -When the write operation of the contract status is performed, it will be determined whether the transaction originator msg.sender is the contract administrator of the contract permission table record, and if not, it will be rejected.。 +When a write to the contract status is performed, the system checks whether the transaction originator msg.sender is the contract administrator recorded in the contract permission table; if not, the operation is rejected. ```eval_rst .. important:: - Compatibility Note: Contract lifecycle management revocation can only be performed above node version 3.2.。 + Compatibility Note: Contract lifecycle management revocation can only be performed on node version 3.2 and above. ``` The contract administrator can also freeze contracts through the console.
For more information, see [Freeze Contract Command](../operation_and_maintenance/console/console_commands.html#freezecontract) and [Order to Unfreeze Contracts](../operation_and_maintenance/console/console_commands.html#unfreezecontract) ### 5.2 Smart Contract Deployment Permission Control -Authority control for deployment contracts will be centrally controlled by a governance committee, which will control deployment authority by vote。After the governance committee's proposal for a deployment permission is approved, the deployment permission write interface of the fixed address 0x1005 precompiled contract will be actively called, and these write interfaces are also limited to the governance committee contract call.。 +Permission control for contract deployment is centrally managed by a governance committee, which controls deployment authority by vote. After a governance committee proposal on a deployment permission is approved, the deployment-permission write interfaces of the precompiled contract at the fixed address 0x1005 are called; these write interfaces can only be called by the governance committee contract. The deployment permissions are recorded in the BFS directory / apps, which represents the write permissions allowed in the / apps directory。 The governance committee can perform operations such as permission control of deployment contracts through the console. For more information, see [Proposal for Setting Deployment Permission Types](../operation_and_maintenance/console/console_commands.html#setdeployauthtypeproposal) , [Open Deployment Permission Proposal](../operation_and_maintenance/console/console_commands.html#opendeployauthproposal) , [Close Deployment Permissions Proposal](../operation_and_maintenance/console/console_commands.html#closedeployauthproposal) -The transaction initiation address tx.origin will be verified when checking the deployment permissions.
If you do not have the permissions, an error code will be returned.-5000。That is, the user deployment contract and the user deployment contract are verified.。 +When checking deployment permissions, the transaction initiation address tx.origin is verified; if you do not have the permission, the error code -5000 is returned. That is, both a user deploying a contract and a contract deploying a contract are verified. ### 5.3 Smart Contract Call Permission Control -The contract administrator can initiate a transaction on a precompiled contract with a fixed address of 0x1005 and read and write the access ACL of the contract interface.。 +The contract administrator can initiate a transaction to the precompiled contract at the fixed address 0x1005 to read and write the access ACL of the contract interface. -When the write operation of the access ACL of the contract interface is performed, it will be determined whether the transaction originator msg.sender is the contract administrator of the contract permission table record, and if not, it will be rejected.。 +When a write to the access ACL of the contract interface is performed, the system checks whether the transaction originator msg.sender is the contract administrator recorded in the contract permission table; if not, the operation is rejected. The contract administrator can access the write operation of the ACL through the console. For more information, see [Contract administrator command](../operation_and_maintenance/console/console_commands.html#setmethodauth) -When checking the contract invocation permission, the transaction initiation address tx.origin and the message sender msg.sender will be verified.
If there is no permission, an error code will be returned.-5000。That is, the user invokes the contract, the user invokes the contract through the contract, and the contract invokes the contract.。 +When checking the contract invocation permission, both the transaction initiation address tx.origin and the message sender msg.sender are verified; if there is no permission, the error code -5000 is returned. That is, verification applies whether a user invokes the contract, a user invokes it through another contract, or a contract invokes it. diff --git a/3.x/en/docs/develop/contract_safty_practice.md b/3.x/en/docs/develop/contract_safty_practice.md index 1c1c625ce..6e2d05738 100644 --- a/3.x/en/docs/develop/contract_safty_practice.md +++ b/3.x/en/docs/develop/contract_safty_practice.md @@ -2,7 +2,7 @@ Smart contract security refers to the design, coding, deployment, operation and maintenance of smart contracts throughout the life cycle, take measures to ensure the security and reliability of the contract, to prevent malicious attacks, exploits or incorrect operations caused by the loss of assets or system crash。 -This article details the strategies, recommended practices, and security measures that smart contracts should use at all stages, starting from design patterns.。 +This article details the strategies, recommended practices, and security measures that smart contracts should adopt at each stage, starting from design patterns. ## 1.
Smart Contract Design Patterns @@ -10,9 +10,9 @@ This article details the strategies, recommended practices, and security measure Author: Chu Yuzhi | FISCO BCOS Core Developer -With the development of blockchain technology, more and more enterprises and individuals begin to combine blockchain with their own business。The unique advantages of blockchain, for example, data is open, transparent and immutable, which can facilitate business.。But at the same time, there are some hidden dangers。The transparency of the data means that anyone can read it.;Cannot be tampered with, meaning that information cannot be deleted once it is on the chain, and even the contract code cannot be changed。In addition, the openness of the contract, the callback mechanism, each of the characteristics can be used as an attack technique, a little careless, light contract is useless, heavy to face the risk of disclosure of corporate secrets.。Therefore, before the business contract is put on the chain, the security and maintainability of the contract need to be fully considered in advance.。Fortunately, through a lot of practice of Solidity language in recent years, developers continue to refine and summarize, has formed some"Design Pattern"To guide the daily development of common problems。 +With the development of blockchain technology, more and more enterprises and individuals are combining blockchain with their own business. Blockchain's unique advantages, such as data that is open, transparent, and immutable, can facilitate business. But at the same time, there are hidden dangers. Data transparency means that anyone can read it; immutability means that information cannot be deleted once it is on the chain, and even the contract code cannot be changed. In addition, the openness of contracts and the callback mechanism can each be used as an attack technique; one careless slip and, at best, the contract becomes unusable, while at worst you face the risk of
disclosure of corporate secrets. Therefore, before a business contract goes on the chain, its security and maintainability must be fully considered in advance. Fortunately, through extensive practice with the Solidity language in recent years, developers have continually refined and summarized their experience into "design patterns" that guide the handling of common problems in daily development. -In 2019, the IEEE included a paper from the University of Vienna entitled "Design Patterns For Smart Contracts In the Ethereum Ecosystem."。This paper analyzes the hot Solidity open source projects, combined with previous research results, sorted out 18 design patterns。These design patterns cover security, maintainability, lifecycle management, authentication, and more.。 +In 2019, the IEEE included a paper from the University of Vienna entitled "Design Patterns For Smart Contracts In the Ethereum Ecosystem". By analyzing popular open-source Solidity projects and combining previous research results, the paper sorted out 18 design patterns, covering security, maintainability, lifecycle management, authentication, and more. | Type| Mode| |--------------------|---------------------------------------------------------------------------------------------------------------------------| @@ -24,15 +24,15 @@ In 2019, the IEEE included a paper from the University of Vienna entitled "Desig Next, this article will select the most common and common of these 18 design patterns, which have been extensively tested in actual development experience。 -### 1.1 Checks-Effects-Interaction - Ensure that the state is complete, and then make external calls.
+### 1.1 Checks-Effects-Interaction - Ensure the state is complete before making external calls This pattern is a coding style constraint that effectively avoids replay attacks。Typically, a function might have three parts: -- Checks: Parameter Validation -- Effects: Modify contract status +- Checks: Parameter Validation +- Effects: Modify contract status - Interaction: external interaction -This model requires contracts to follow Checks-Effects-The order of the interaction to organize the code。The benefit of it is that before making an external call, Checks-Effects has completed all work related to the state of the contract itself, making the state complete and logically self-consistent, so that external calls cannot exploit the incomplete state for attacks.。Review the previous AddService contract, did not follow this rule, in the case of its own state has not been updated to call the external code, the external code can naturally cross a knife, so that _ adders [msg.sender] = true permanently not called, thus invalidating the require statement.。We check-effects-Review the original code from the perspective of interaction: +This pattern requires contracts to organize code in the order Checks-Effects-Interaction. The advantage is that all work related to the contract's own state is completed before the external call, leaving the state complete and logically self-consistent, so the external call cannot exploit an incomplete state. The earlier AddService contract did not follow this rule: it called external code before its own state was updated, so the external code could cut in and prevent _adders[msg.sender] = true from ever executing, thus invalidating the require statement. Let's review the original code in terms of Checks-Effects-Interaction: ```solidity //Checks @@ -46,7 +46,7 @@ This model requires contracts to follow Checks-Effects-The order of the interact _adders[msg.sender] =
true; ``` -As long as the order is slightly adjusted to meet the Checks-Effects-Interaction mode, tragedy is avoided: +A slight reordering to satisfy the Checks-Effects-Interaction pattern avoids the tragedy: ```solidity //Checks @@ -59,7 +59,7 @@ As long as the order is slightly adjusted to meet the Checks-Effects-Interaction adder.notify(); ``` -Since the _ adders mapping has been modified, when a malicious attacker wants to recursively call addByOne, the require line of defense will work to intercept the malicious call.。Although this pattern is not the only way to resolve reentry attacks, it is still recommended that developers follow。 +Since the _adders mapping has already been modified, when a malicious attacker tries to recursively call addByOne, the require line of defense intercepts the malicious call. Although this pattern is not the only way to prevent reentrancy attacks, developers are still recommended to follow it. ### 1.2 Mutex - Prohibit Recursion @@ -83,9 +83,9 @@ contract Mutex { } ``` -In this example, before calling the some function, the noReancy modifier is run to assign the locked variable to true。If some is called recursively at this point, the logic of the modifier is activated again, and the first line of code for the modifier throws an error because the locked property is already true.。 +In this example, before the some function runs, the noReancy modifier runs and sets the locked variable to true. If some is called recursively at this point, the modifier's logic is triggered again, and its first line throws an error because locked is already true. -### 1.3 Data segregation - Separation of data and logic +### 1.3 Data segregation - separation of data and logic Before understanding the design pattern, take a look at the following contract code: @@ -104,11 +104,11 @@ contract Computer{ } ``` -This contract contains two capabilities, one is to store data(setData
function)The other is the use of data for calculation.(Compute function)。If the contract is deployed for a period of time and you find that the compute is incorrectly written, for example, you should not multiply by 10, but multiply by 20, it will lead to the question of how to upgrade the contract as described above.。At this point, you can deploy a new contract and try to migrate the existing data to the new contract, but this is a heavy operation, on the one hand, to write the code of the migration tool, on the other hand, the original data is completely obsolete, empty of valuable node storage resources。 +This contract contains two capabilities: one is storing data (the setData function), the other is using the data for calculation (the compute function). If, after the contract has been deployed for a while, you find that compute is written incorrectly, for example it should multiply by 20 instead of 10, you run into the contract-upgrade problem described above. You could deploy a new contract and try to migrate the existing data to it, but that is a heavy operation: on the one hand you must write the migration tooling, and on the other the original data becomes completely obsolete, wasting valuable node storage resources. -Therefore, it is necessary to be modular in advance when programming。If we will"Data"Seen as unchanging things, will"Logic"Seeing as something that can change, you can perfectly avoid the above problems。The Data Segregation (which means data separation) pattern is a good implementation of this idea.。The model requires a business contract and a data contract: the data contract is only for data access, which is stable.;Business contracts, on the other hand, perform logical operations through data contracts.。 +Therefore, it is necessary to modularize in advance when programming. If we regard "Data" as something unchanging and "Logic" as something that can change, you can perfectly avoid
the above problems. The Data Segregation (meaning separation of data) pattern is a good implementation of this idea. The pattern requires a business contract and a data contract: the data contract handles only data access and stays stable, while business contracts perform logical operations through data contracts. -In conjunction with the previous example, we transfer data read and write operations specifically to a contract DataRepository. +In conjunction with the previous example, we move the data read and write operations into a dedicated contract, DataRepository: ```solidity contract DataRepository{ @@ -141,11 +141,11 @@ contract Computer{ } ``` -In this way, as long as the data contract is stable, the upgrade of the business contract is very lightweight.。For example, when I want to replace Computer with ComputerV2, the original data can still be reused。 +In this way, as long as the data contract is stable, upgrading the business contract is very lightweight. For example, when replacing Computer with ComputerV2, the original data can still be reused. -### 1.4 Satellite - Breaking down contract functions +### 1.4 Satellite - Decomposing contract functions -A complex contract usually consists of many functions, if these functions are all coupled in a contract, when a function needs to be updated, you have to deploy the entire contract, normal functions will be affected.。The Satellite model addresses these issues using the single-duty principle, advocating the placement of contract subfunctions into subcontracts, with each subcontract (also known as a satellite contract) corresponding to only one function.。When a sub-function needs to be modified, just create a new sub-contract and update its address to the main contract.。 +A complex contract usually consists of many functions. If these functions are all coupled in one contract, then when one function needs to be updated, the entire contract has to be redeployed and normal functions are affected. The
Satellite pattern addresses these issues with the single-responsibility principle: it advocates splitting a contract's sub-functions into sub-contracts, with each sub-contract (also known as a satellite contract) responsible for only one function. When a sub-function needs to be modified, you just deploy a new sub-contract and update its address in the main contract.

For a simple example, the setVariable function of the following contract is to calculate the input data (compute function) and store the calculation result in the contract state _variable:

@@ -164,7 +164,7 @@ contract Base {
}
```

-After deployment, if you find that the compute function is incorrectly written and you want to multiply by a factor of 20, you must redeploy the entire contract.。However, if you initially operate in Satellite mode, you only need to deploy the corresponding subcontract。
+After deployment, if you find that the compute function is written incorrectly and should instead multiply by 20, you must redeploy the entire contract. However, if you had adopted the Satellite pattern from the start, you would only need to redeploy the corresponding sub-contract.

First, let's strip the compute function into a separate satellite contract:

@@ -176,7 +176,7 @@ contract Satellite {
}
```

-The main contract then relies on the subcontract to complete setVariable.
+The main contract then relies on the sub-contract to implement setVariable:

```solidity
contract Base {
@@ -204,9 +204,9 @@ contract Satellite2{
}
```

-### 1.5 Contract Registry - Track the latest contracts
+### 1.5 Contract Registry - Tracking the Latest Contracts

-In Satellite mode, if a primary contract depends on a subcontract, when the subcontract is upgraded, the primary contract needs to update the address reference to the subcontract, which is done through updateXXX, for example, the updateSatellite function described earlier.。This type of interface is a maintainable interface and has nothing to do with the actual business.
Too much exposure of this type of interface will affect the aesthetics of the main contract and greatly reduce the caller's experience.。The Contract Registry design pattern elegantly solves this problem。In this design mode, there is a special contract Registry to track each upgrade of a subcontract, and the main contract can obtain the latest subcontract address by querying this Registyr contract.。After the satellite contract is redeployed, the new address is updated via the Registry.update function。
+In the Satellite pattern, if a main contract depends on a sub-contract, then whenever the sub-contract is upgraded, the main contract must update its address reference to the sub-contract. This is done through an updateXXX interface, for example the updateSatellite function described earlier. Such interfaces exist purely for maintenance and have nothing to do with the actual business; exposing too many of them clutters the main contract and degrades the caller's experience. The Contract Registry design pattern solves this problem elegantly. In this pattern, a dedicated Registry contract tracks each upgrade of a sub-contract, and the main contract obtains the latest sub-contract address by querying this Registry contract. After the satellite contract is redeployed, the new address is recorded via the Registry.update function.

```solidity
contract Registry{
@@ -214,7 +214,7 @@ contract Registry{
    address _current;
    address[] _previous;

-    / / If the subcontract is upgraded, update the address through the update function.
+    // If the subcontract is upgraded, update the address through the update function
    function update(address newAddress) public{
        if(newAddress != _current){
            _previous.push(_current);
@@ -245,7 +245,7 @@ contract Base {

### 1.6 Contract Relay - Agent invokes latest contract

-This design pattern solves the same problem as Contract Registry, i.e.
the main contract can call the latest subcontract without exposing the maintenance interface.。In this mode, there is a proxy contract, and the subcontract shares the same interface, responsible for passing the call request of the main contract to the real subcontract.。After the satellite contract is redeployed, the new address is updated via the SatelliteProxy.update function。
+This design pattern solves the same problem as Contract Registry: the main contract can call the latest sub-contract without exposing a maintenance interface. In this pattern, there is a proxy contract that shares the same interface as the sub-contract and is responsible for forwarding the main contract's call requests to the real sub-contract. After the satellite contract is redeployed, the new address is updated via the SatelliteProxy.update function.

```solidity
contract SatelliteProxy{
@@ -255,7 +255,7 @@ contract SatelliteProxy{
        return satellite.compute(a);
    }

-    / / If the subcontract is upgraded, update the address through the update function.
+    // If the subcontract is upgraded, update the address through the update function
    function update(address newAddress) public{
        if(newAddress != _current){
            _current = newAddress;
@@ -298,9 +298,9 @@ contract Mortal{
}
```

-### 1.8 Automatic Deprecation - Allow contracts to automatically stop services
+### 1.8 Automatic Deprecation - Allowing Contracts to Stop Service Automatically

-If you want a contract to be out of service after a specified period without human intervention, you can use the Automatic Deprecation pattern.。
+If you want a contract to go out of service after a specified period without human intervention, you can use the Automatic Deprecation pattern.

``` solidity
contract AutoDeprecated{
@@ -322,13 +322,13 @@ contract AutoDeprecated{
}
```

-When the user calls service, the notExpired modifier will first perform date detection, so that once a specific time has passed, the call will be intercepted at the notExpired layer due to expiration.。
+When a user calls service, the notExpired modifier first performs a date check, so that once the deadline has passed, the call is intercepted at the notExpired layer due to expiration.

### 1.9 Ownership check

-There are many administrative interfaces in the previous article, which can have serious consequences if they can be called by anyone, such as the self-destruct function above, which assumes that anyone can access it, and its severity is self-evident.。Therefore, a set of permission control design patterns that ensure that only specific accounts can access is particularly important。
+The preceding sections introduced many administrative interfaces, which can have serious consequences if anyone is allowed to call them; take the self-destruct function above: if anyone could access it, the severity is self-evident. Therefore, a set of permission control design patterns that ensure only specific accounts have access is particularly important.

-For permission control, you can
use the ownership mode.。This pattern guarantees that only the owner of the contract can call certain functions.。First you need an Owned contract:
+For permission control, you can use the Ownership pattern. This pattern guarantees that only the owner of the contract can call certain functions. First you need an Owned contract:

```solidity
contract Owned{
@@ -355,13 +355,13 @@ contract Biz is Owned{
}
```

-Thus, when the manage function is called, the onlyOwner modifier runs first and detects whether the caller is consistent with the contract owner, thus intercepting unauthorized calls.。
+Thus, when the manage function is called, the onlyOwner modifier runs first and checks whether the caller matches the contract owner, thereby intercepting unauthorized calls.

### 1.10 Delay in Secret Disclosure

These patterns are typically used in specific scenarios, and this section will focus on privacy-based coding patterns and design patterns for interacting with off-chain data。

-On-chain data is open and transparent, once some private data on the chain, anyone can see, and can never withdraw。Commit And Reveal mode allows users to convert the data to be protected into unrecognizable data, such as a string of hash values, until a certain point to reveal the meaning of the hash value, revealing the true original value.。In the voting scenario, for example, suppose that the voting content needs to be revealed after all participants have completed the voting to prevent participants from being affected by the number of votes during this period.。We can look at the specific code used in this scenario:
+On-chain data is open and transparent; once private data goes on the chain, anyone can see it and it can never be withdrawn. The Commit and Reveal pattern lets users convert the data to be protected into an unrecognizable form, such as a hash value, and only at a certain point reveal the meaning of the hash, disclosing the true original value. In the voting scenario, for example, suppose
that the voting content must be revealed only after all participants have finished voting, to prevent participants from being influenced by the running vote count. Let's look at the code for this scenario:

```solidity
contract CommitReveal {
@@ -395,7 +395,7 @@ contract CommitReveal {
}
```

-## 2. Smart contract programming strategy.
+## 2. Smart contract programming strategy

[Solidity Programming Strategy for Smart Contract Writing](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/articles/3_features/35_contract/solidity_design_programming_strategy.html)

@@ -403,21 +403,21 @@ Author : MAO Jiayu | FISCO BCOS Core Developer

**"Do not add entities unless necessary"。**

-- Important data that requires distributed collaboration is chained, and non-essential data is not chained.;
-- Sensitive data is desensitized or encrypted and then linked (depending on the degree of data confidentiality, select an encryption algorithm that meets the requirements of the privacy protection security level);
+- Put on the chain only the important data that requires distributed collaboration; keep unnecessary data off the chain;
+- Desensitize or encrypt sensitive data before putting it on the chain (depending on the degree of data confidentiality, select an encryption algorithm that meets the required privacy protection security level);
 - On-chain authentication, off-chain authorization。

-When using blockchain, developers don't need to put all their business and data on the chain。Instead, "good steel is on the cutting edge," and smart contracts are more suitable for use in distributed collaboration business scenarios.。
+When using a blockchain, developers do not need to put all business and data on the chain. Instead, "good steel should be used on the blade's edge": smart contracts are best suited to distributed collaboration business scenarios.

### 2.1 Refinement of function variables

-If complex logic is defined in a smart contract, especially if
complex function parameters, variables, and return values are defined in the contract, you will encounter the following errors at compile time.
+If complex logic is defined in a smart contract, especially complex function parameters, variables, and return values, you may encounter the following error at compile time:

```shell
Compiler error: Stack too deep, try removing local variables.
```

-This is also one of the high-frequency technical issues in the community.。The reason for this problem is that EVM is designed for a maximum stack depth of 16。All calculations are performed within a stack, and access to the stack is limited to the top of the stack in such a way as to allow one of the top 16 elements to be copied to the top of the stack, or to swap the top of the stack with one of the 16 elements below.。All other operations can only take the top few elements, and after the operation, the result is pushed to the top of the stack.。Of course, you can put the elements on the stack into storage or memory.。However, you cannot access only the element on the stack at the specified depth unless you first remove the other elements from the top of the stack。If the size of the input parameters, return values, and internal variables in a contract exceeds 16, it clearly exceeds the maximum depth of the stack.。Therefore, we can use structs or arrays to encapsulate input or return values to reduce the use of elements at the top of the stack, thereby avoiding this error。For example, the following code encapsulates the original 16 bytes variables by using the bytes array.。
+This is also one of the most frequently asked technical questions in the community. The cause is that the EVM is designed with a maximum stack depth of 16. All computation is performed on a stack, and access is limited to the top of the stack: one of the top 16 elements can be copied to the top of the stack, or you can swap the top of the
stack with one of the 16 elements below. All other operations can only take the top few elements, and after the operation the result is pushed onto the top of the stack. Of course, you can move elements from the stack into storage or memory. However, you cannot access an element at an arbitrary stack depth without first removing the elements above it. If the combined number of input parameters, return values, and internal variables in a contract exceeds 16, the stack's maximum depth is clearly exceeded. Therefore, we can use structs or arrays to pack inputs or return values, reducing the number of slots used at the top of the stack and avoiding this error. For example, the following code packs what were originally 16 separate bytes variables into a single bytes array.

```solidity
function doBiz(bytes[] paras) public {
@@ -428,7 +428,7 @@ function doBiz(bytes[] paras) public {

### 2.2 Guaranteed parameters and behavior as expected

-When writing smart contracts, it is important to pay attention to the examination of contract parameters and behavior, especially those contract functions that are open to the outside world.。Solidity provides keywords such as require, revert, and assert to detect and handle exceptions.。Once the error is detected and found, the entire function call is rolled back and all state modifications are rolled back as if the function had never been called。The following uses three keywords to achieve the same semantics.。
+When writing smart contracts, it is important to check contract parameters and behavior carefully, especially for functions that are open to the outside world. Solidity provides the require, revert, and assert keywords to detect and handle exceptions. Once an error is detected, the entire function call is reverted and all state modifications are rolled back, as if the function had never been called. The following three statements achieve the same semantics.
```solidity
require(_data == data, "require data is valid");
@@ -440,13 +440,13 @@ assert(_data == data);

However, these three keywords generally apply to different usage scenarios:

-- require: The most commonly used detection keyword to verify whether the input parameters and the result of calling the function are legitimate.。
-- revert: Applicable to a branch judgment scenario。
+- require: The most commonly used check keyword, verifying that input parameters and function call results are legitimate.
+- revert: Suitable for error handling inside branch logic.
 - assert: Check whether the result is correct and legal, generally used at the end of the function。

-In a function of a contract, you can use the function decorator to abstract part of the parameter and condition checking。Within the function body, you can use if for the running state-Else and other judgment statements to check, the abnormal branch using revert fallback。You can use assert to check the execution result or intermediate state before the function runs。In practice, it is recommended to use the require keyword and move the condition check to the function decorator.;This allows the function to have more single responsibilities and focus more on the business logic.。At the same time, condition codes such as function modifiers are easier to reuse, and contracts are more secure and hierarchical.。
+Within a contract function, you can use function modifiers to abstract part of the parameter and condition checking. In the function body, you can check runtime state with statements such as if-else, and revert on abnormal branches. You can use assert to check execution results or intermediate states. In practice, it is recommended to use the require keyword and move condition checks into function modifiers; this gives the function a more single responsibility and lets it focus on the business logic. At the same time, condition
codes such as function modifiers are easier to reuse, and the contract becomes more secure and better layered.

-Take a fruit store inventory management system as an example, design a fruit supermarket contract.。This contract only contains the management of all fruit categories and inventory quantities in the store, and the setFruitStock function provides a function corresponding to the fruit inventory settings.。In this contract, we need to check the incoming parameters, i.e. the fruit name cannot be empty。
+Take a fruit store inventory management system as an example and design a fruit supermarket contract. The contract covers only the management of the store's fruit categories and stock quantities, and the setFruitStock function sets the stock of a given fruit. In this contract, we need to check the incoming parameter: the fruit name cannot be empty.

```solidity
pragma solidity ^0.4.25;
@@ -463,11 +463,11 @@ contract FruitStore {
}
```

-As mentioned above, we added a function decorator for parameter checking before function execution。Similarly, by using function decorators that check before and after function execution, you can ensure that smart contracts are safer and clearer.。The writing of smart contracts requires strict pre-and post-function checks to ensure their security.。
+As shown above, we added a function modifier that checks the parameter before the function executes. Similarly, using function modifiers that run checks before and after execution makes smart contracts safer and clearer. Writing smart contracts requires strict pre- and post-condition checks to ensure security.

### 2.3 Strictly control the execution authority of functions

-If the parameters and behavior detection of smart contracts provide static contract security measures, then the mode of contract permission control provides control of dynamic access behavior. Since smart contracts are published on the blockchain,
all data and functions are open and transparent to all participants, and any node participant can initiate a transaction; the contract's privacy is not guaranteed. Therefore, the contract publisher must design a strict access restriction mechanism for each function. Solidity provides syntax such as function visibility modifiers and custom modifiers, which can be used flexibly to help build a smart contract system with legitimate authorization and controlled calls. Take the fruit contract again as an example: getStock now provides a function to query the stock quantity of a specific fruit.

```solidity
pragma solidity ^0.4.25;
@@ -487,7 +487,7 @@ contract FruitStore {
}
```

-The fruit store owner posted the contract on the chain.。However, after publication, the setFruitStock function can be called by any other affiliate chain participant。Although the participants in the alliance chain are real-name authenticated and can be held accountable afterwards.;However, once a malicious attacker attacks the fruit store, calling the setFruitStock function can modify the fruit inventory at will, or even clear all the fruit inventory, which
will have serious consequences for the normal operation and management of the fruit store.。Therefore, it is necessary to set up some prevention and authorization measures: for the function setFruitStock that modifies the inventory, the caller can be authenticated before the function executes.。Similarly, these checks may be reused by multiple functions that modify the data, using an onlyOwner decorator to abstract this check。The owner field represents the owner of the contract and is initialized in the contract constructor.。Using public to modify the getter query function, you can pass _ owner()function to query the owner of a contract。
+The fruit store owner published the contract on the chain. After publication, however, the setFruitStock function can be called by any other consortium chain participant. Although consortium chain participants are real-name authenticated and can be held accountable afterwards, once a malicious attacker targets the fruit store, calling the setFruitStock function can modify the fruit inventory at will, or even wipe out all of it, with serious consequences for the store's normal operation and management. Therefore, some prevention and authorization measures are needed: for setFruitStock, which modifies the inventory, the caller can be authenticated before the function executes. Since such checks may be reused by multiple data-modifying functions, an onlyOwner modifier is used to abstract the check. The owner field represents the owner of the contract and is initialized in the contract constructor. Because the getter is declared public, the contract owner can be queried via the _owner() function.

```solidity
contract FruitStore {
@@ -518,11 +518,11 @@ contract FruitStore {
}
```

-In this way, we can encapsulate the corresponding function call permission check into the decorator, the smart contract will automatically initiate the caller
authentication check, and only allow the contract deployer to call the setFruitStock function, thus ensuring that the contract function is open to the specified caller.。
+In this way, we encapsulate the permission check for the function call into a modifier; the smart contract automatically performs the caller authentication check and allows only the contract deployer to call the setFruitStock function, ensuring that the function is open only to the designated caller.

### 2.4 Abstract generic business logic

-Analyzing the above FruitStore contract, we found that there seems to be something strange mixed in with the contract.。Referring to the programming principle of single responsibility, the fruit store inventory management contract has more logic than the above function function check, so that the contract can not focus all the code in its own business logic.。In this regard, we can abstract reusable functions and use Solidity's inheritance mechanism to inherit the final abstract contract.。Based on the above FruitStore contract, a BasicAuth contract can be abstracted, which contains the previous onlyOwner's decorator and related functional interfaces.。
+Analyzing the FruitStore contract above, we notice that something extraneous has crept into it. By the single-responsibility principle, the fruit store inventory management contract now carries logic beyond its own business, such as the function checks above, so the contract cannot focus all of its code on its own business logic. To address this, we can abstract the reusable functionality and use Solidity's inheritance mechanism to inherit from the resulting abstract contract. From the FruitStore contract above, a BasicAuth contract can be extracted, containing the previous onlyOwner modifier and related interfaces.

```solidity
contract BasicAuth {
@@ -561,11 +561,11 @@ contract FruitStore is BasicAuth {
}
```

-In this way, the logic of FruitStore is greatly
simplified, and the contract code is more streamlined, focused and clear.。
+In this way, the logic of FruitStore is greatly simplified, and the contract code is more streamlined, focused, and clear.

### 2.5 Prevention of loss of private key

-There are two ways to call contract functions in the blockchain: internal calls and external calls.。For privacy protection and permission control, a business contract defines a contract owner。Suppose user A deploys the FruitStore contract, then the above contract owner is the external account address of deployer A.。This address is generated by the private key calculation of the external account.。However, in the real world, the phenomenon of private key leakage, loss abound。A commercial blockchain DAPP needs to seriously consider issues such as private key replacement and reset.。The simplest and most intuitive solution to this problem is to add an alternate private key。This alternate private key supports the operation of the permission contract modification owner. The code is as follows:
+There are two ways to call contract functions on a blockchain: internal calls and external calls. For privacy protection and permission control, a business contract defines a contract owner. Suppose user A deploys the FruitStore contract; the contract owner is then the external account address of deployer A. This address is derived from the private key of the external account. In the real world, however, private key leakage and loss abound, so a commercial blockchain DApp must seriously consider issues such as private key replacement and reset. The simplest and most intuitive solution is to add a backup private key that is authorized to change the contract owner.
The code is as follows:

```solidity
contract BasicAuth {
@@ -600,17 +600,17 @@ ontract BasicAuth {
}
```

-In this way, when we find that the private key is lost or leaked, we can use the standby external account to call setOwner to reset the account to restore and ensure the normal operation of the business.。
+In this way, if the private key is lost or leaked, we can use the backup external account to call setOwner and reset the owner account, restoring the business to normal operation.

### 2.6 Reasonable Reservation Events

-So far, we have implemented a strong and flexible permission management mechanism, and only pre-authorized external accounts can modify the contract owner attribute.。However, with the above contract code alone, we cannot record and query the history and details of modifications and calls to functions.。And such needs abound in real business scenarios.。For example, FruitStore needs to check the historical inventory modification records to calculate the best-selling and slow-selling fruits in different seasons.。
+So far, we have implemented a strong and flexible permission management mechanism in which only pre-authorized external accounts can modify the contract owner attribute. However, with the contract code above alone, we cannot record or query the history and details of function modifications and calls, and such needs abound in real business scenarios. For example, FruitStore may need to review historical inventory modification records to work out the best-selling and slow-selling fruits in different seasons.

-One way is to rely on the chain to maintain an independent ledger mechanism.。However, there are many problems with this approach: the cost overhead of keeping the off-chain ledger and on-chain records consistent is very high.;At the same time, smart contracts are open to all participants in the chain, and once other participants call the contract function, there is a risk that the relevant transaction information
will not be synchronized.。For such scenarios, Solidity provides the event syntax。Event not only has the mechanism for SDK listening callback, but also can record and save event parameters and other information to the block with low gas cost.。FISCO BCOS community, there is also WEBASE-Collect-A tool like Bee that enables the complete export of block history event information after the fact.。
+One way is to maintain an independent off-chain ledger. However, this approach has many problems: the cost of keeping the off-chain ledger consistent with on-chain records is very high; meanwhile, smart contracts are open to all participants on the chain, and once other participants call the contract functions, there is a risk that the related transaction information will not be synchronized. For such scenarios, Solidity provides the event syntax. An event not only supports SDK listener callbacks, but also records event parameters and other information into the block at a low gas cost. In the FISCO BCOS community, there are also tools such as WEBASE-Collect-Bee that enable the complete export of historical block event information after the fact.

[WEBASE-Collect-Bee Tool Reference](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Collect-Bee/index.html)

-Based on the above permission management contract, we can define the corresponding permission modification event, other events and so on.。
+Based on the permission management contract above, we can define a corresponding permission modification event, among others.

```solidity
event LogSetAuthority (Authority indexed authority, address indexed from);
@@ -629,55 +629,55 @@ function setAuthority(Authority authority)
}
```

-When the setAuthority function is called, LogSetAuthority is triggered at the same time, and the Authority contract address and caller address defined in the event are recorded in the blockchain transaction receipt.。When the
setAuthority method is called from the console, the corresponding event LogSetAuthority is also printed。Based on WEBASE-Collect-Bee, we can export all the historical information of the function to the database.。Also available based on WEBASE-Collect-Bee for secondary development, to achieve complex data query, big data analysis and data visualization functions。
+When the setAuthority function is called, LogSetAuthority is triggered at the same time, and the Authority contract address and caller address defined in the event are recorded in the blockchain transaction receipt. When the setAuthority method is called from the console, the corresponding LogSetAuthority event is also printed. With WEBASE-Collect-Bee, we can export all the historical information of this function to a database; we can also do secondary development based on WEBASE-Collect-Bee to implement complex data queries, big data analysis, and data visualization.

### 2.7 Follow Safety Programming Specifications

-Each language has its own coding specifications, and we need to follow Solidity's official programming style guidelines as strictly as possible to make the code easier to read, understand, and maintain, effectively reducing the number of contract bugs.。[Solidity Official Programming Style Guide Reference](https://solidity.readthedocs.io/en/latest/style-guide.html)。In addition to programming specifications, the industry has also summarized many secure programming guidelines, such as re-entry vulnerabilities, data structure overflows, random number errors, runaway constructors, storage pointers for initialization, and so on.。To address and prevent such risks, it is critical to adopt industry-recommended security programming specifications, such as the [Solidity Official Security Programming Guide](https://solidity.readthedocs.io/en/latest/security-considerations.html)。At the same time, after the contract is released and launched, you also need to pay attention to and subscribe to all
kinds of security vulnerabilities and attack methods released by security organizations or institutions in the Solidity community, and make up for problems in a timely manner.。 +Each language has its own coding conventions, and we should follow Solidity's official programming style guide as strictly as possible to make the code easier to read, understand, and maintain, effectively reducing the number of contract bugs. [Solidity Official Programming Style Guide Reference](https://solidity.readthedocs.io/en/latest/style-guide.html). Beyond style conventions, the industry has also summarized many secure programming guidelines covering risks such as re-entrancy vulnerabilities, data structure overflows, faulty random numbers, out-of-control constructors, uninitialized storage pointers, and so on. To address and prevent such risks, it is critical to adopt industry-recommended secure programming practices, such as the [Solidity Official Security Programming Guide](https://solidity.readthedocs.io/en/latest/security-considerations.html). At the same time, after the contract is released and launched, you also need to follow and subscribe to the security vulnerabilities and attack techniques disclosed by security organizations in the Solidity community, and patch problems in a timely manner. -For important smart contracts, it is necessary to introduce auditing。Existing audits include manual audits, machine audits and other methods to ensure contract security through code analysis, rule validation, semantic validation and formal validation.。Although emphasized throughout this article, modularity and reuse of smart contracts that are highly reviewed and widely validated are best practice strategies。But in the actual development process, this assumption is too idealistic, each project will more or less introduce new code, or even from scratch。However, we can still grade audits based on how much code is reused, explicitly label 
referenced code, and focus audits and inspections on new code to save on audit costs。 +For important smart contracts, it is necessary to introduce auditing. Existing approaches include manual audits, machine audits, and other methods that ensure contract security through code analysis, rule validation, semantic validation, and formal verification. Although this article emphasizes throughout that modularizing and reusing well-reviewed, widely validated smart contracts is the best-practice strategy, in actual development this assumption is too idealistic: every project introduces more or less new code, or is even written from scratch. However, we can still grade audits by how much code is reused, explicitly label referenced code, and focus audits and inspections on the new code to save audit costs. ### 2.8 Using the SmartDev App Plug-in -SmartDev includes a set of open, lightweight development components that cover the development, debugging, and application development of smart contracts, including the smart contract library (SmartDev-Contract), Smart Contract Compilation Plug-in (SmartDev-SCGP) and application development scaffolding (SmartDev-Scaffold)。Developers can freely choose the corresponding development tools according to their own situation to improve development efficiency.。 +SmartDev includes a set of open, lightweight development components covering smart contract development, debugging, and application development, including the smart contract library (SmartDev-Contract), the smart contract compilation plug-in (SmartDev-SCGP), and the application development scaffold (SmartDev-Scaffold). Developers can freely choose the appropriate tools for their situation to improve development efficiency. For more information, see: [SmartDev Application Development Components](./smartdev_index.md) ## 3. 
Smart contract deployment permission control -Authority control for deployment contracts will be centrally controlled by a governance committee, which will control deployment authority by vote。After the governance committee's proposal for a deployment permission is approved, the deployment permission write interface of the fixed address 0x1005 precompiled contract will be actively called, and these write interfaces are also limited to the governance committee contract call.。 +Permission control for contract deployment is centrally managed by a governance committee, which controls deployment authority by vote. After a governance committee proposal on deployment permission is approved, the deployment-permission write interface of the precompiled contract at the fixed address 0x1005 is called, and these write interfaces can only be called by the governance committee contract. The deployment permissions are recorded in the BFS directory /apps, representing the write permissions allowed in the /apps directory. The governance committee can perform operations such as permission control of deployment contracts through the console. For more information, see [Proposal for Setting Deployment Permission Types](../operation_and_maintenance/console/console_commands.html#setdeployauthtypeproposal), [Open Deployment Permission Proposal](../operation_and_maintenance/console/console_commands.html#opendeployauthproposal), [Close Deployment Permissions Proposal](../operation_and_maintenance/console/console_commands.html#closedeployauthproposal) -The transaction initiation address tx.origin will be verified when checking the deployment permissions. If you do not have the permissions, an error code will be returned.-5000。That is, the user deployment contract and the user deployment contract are verified.。 +When checking the deployment permissions, the transaction initiation address tx.origin is verified. 
If you do not have the permission, the error code -5000 is returned. That is, both deployment of a contract by a user and deployment of a contract by another contract are verified. -## 4. Smart contract execution permission control. +## 4. Smart contract execution permission control -The contract administrator can initiate a transaction on a precompiled contract with a fixed address of 0x1005 and read and write the access ACL of the contract interface.。 +The contract administrator can initiate a transaction to the precompiled contract at the fixed address 0x1005 to read and write the access ACL of the contract's interfaces. -When the write operation of the access ACL of the contract interface is performed, it will be determined whether the transaction originator msg.sender is the contract administrator of the contract permission table record, and if not, it will be rejected.。 +When a write operation on the interface access ACL is performed, the system checks whether the transaction originator msg.sender is the contract administrator recorded in the contract permission table; if not, the operation is rejected. The contract administrator can access the write operation of the ACL through the console. For more information, see [Contract administrator command](../operation_and_maintenance/console/console_commands.html#setmethodauth) -When checking the contract invocation permission, the transaction initiation address tx.origin and the message sender msg.sender will be verified. If there is no permission, an error code will be returned.-5000。That is, the user invokes the contract, the user invokes the contract through the contract, and the contract invokes the contract.。 +When checking the contract invocation permission, the transaction initiation address tx.origin and the message sender msg.sender will be verified. 
If there is no permission, the error code -5000 is returned. That is, all three paths are verified: a user calling a contract, a user calling a contract through another contract, and a contract calling a contract. ## 5. Smart contract operation and maintenance -Smart contracts in operation and maintenance mainly focus on the data state of smart contracts, smart contract upgrades, smart contract freezing, smart contract destruction.。 +Smart contract operation and maintenance mainly concerns the data state of smart contracts, contract upgrades, contract freezing, and contract destruction. ### 5.1 Smart Contract Upgrade -In Solidity, once a contract is deployed and released, its code cannot be modified and can only be modified by releasing a new contract.。If the data is stored in the old contract, there will be a so-called "orphan data" problem, the new contract will lose the historical business data previously run.。In this case, developers can consider migrating the old contract data to the new contract, but this operation has at least two problems: +In Solidity, once a contract is deployed and released, its code cannot be modified; behavior can only be changed by releasing a new contract. If data is stored in the old contract, there is a so-called "orphan data" problem: the new contract loses the historical business data accumulated before. In this case, developers can consider migrating the old contract's data to the new contract, but this operation has at least two problems: 1. Migrating data will increase the burden on the blockchain, resulting in waste and consumption of resources, and may even introduce security issues; -2. Pull the whole body, will introduce additional migration data logic, increase contract complexity.。 +2. 
A small change ripples through the whole system: it introduces additional data-migration logic and increases contract complexity. -A more reasonable approach is to abstract a separate contract storage layer.。This storage layer only provides the most basic way to read and write contracts, and does not contain any business logic.。In this model, there are three contract roles: +A more reasonable approach is to abstract a separate contract storage layer. This storage layer only provides the most basic contract read and write interfaces and contains no business logic. In this model, there are three contract roles: -- Data contract: Save data in a contract and provide an interface for data manipulation。 -- Manage contracts: Set control permissions to ensure that only control contracts have permission to modify data contracts.。 +- Data contract: saves data in the contract and provides interfaces for operating on the data. +- Management contract: sets control permissions to ensure that only control contracts have permission to modify data contracts. - Control contract: the contract that actually initiates operations on the data. Specific code examples are as follows: @@ -725,15 +725,15 @@ contract FruitStoreController is BasicAuth { } ``` -Once the control logic of the function needs to be changed, the developer simply modifies the FruitStoreController control contract logic, deploys a new contract, and then uses the management contract Admin to modify the new contract address parameters to easily complete the contract upgrade.。This approach eliminates data migration hazards due to changes in business control logic in contract upgrades。But there is no such thing as a free lunch, and this kind of operation requires a basic trade-off between scalability and complexity.。First, the separation of data and logic reduces operational performance。Second, further encapsulation increases program complexity。Finally, more complex contracts increase the potential attack surface, and simple contracts are safer than 
complex contracts.。 +Once the control logic needs to change, the developer simply modifies the FruitStoreController control contract, deploys the new version, and then uses the management contract Admin to update the contract address parameter, easily completing the contract upgrade. This approach eliminates the data-migration hazards that changes in business control logic would otherwise cause during contract upgrades. But there is no free lunch: this pattern requires a trade-off between scalability and complexity. First, separating data and logic reduces runtime performance. Second, the extra encapsulation increases program complexity. Finally, more complex contracts enlarge the potential attack surface; simple contracts are safer than complex ones. **Generic Data Structure - Data Upgrades** So far, one question remains: what should be done if the data structure itself in the data contract needs to be upgraded? -For example, in FruitStore, originally only inventory information was kept, but now, as the fruit store business has grown, a total of ten branches have been opened, and each branch, each fruit's inventory and sales information needs to be recorded.。In this case, one solution is to use external association management: create a new ChainStore contract, create a mapping in this contract, and establish the relationship between the branch name and FruitStore.。 +For example, suppose FruitStore originally kept only inventory information, but as the fruit store business has grown, ten branches have opened, and the inventory and sales information of each fruit in each branch now needs to be recorded. In this case, one solution is external association management: create a new ChainStore contract containing a mapping that associates each branch name with its FruitStore. -In addition, different stores need to create a FruitStore contract。In order to record new sales 
information and other data, we also need to create a new contract to manage。If you can preset different types of reserved fields in FruitStore, you can avoid the overhead of creating new sales information contracts and still reuse FruitStore contracts.。But this approach will increase the storage overhead at the beginning.。A better idea is to abstract a more underlying and generic storage structure。The code is as follows: +In addition, each store needs its own FruitStore contract. To record new data such as sales information, we would also need to create new contracts to manage it. If different types of reserved fields could be preset in FruitStore, the overhead of creating new sales-information contracts could be avoided and FruitStore contracts could still be reused, but this increases storage overhead from the start. A better idea is to abstract a lower-level, generic storage structure. The code is as follows: ```solidity contract commonDB is BasicAuth { @@ -750,7 +750,7 @@ contract commonDB is BasicAuth { } ``` -Similarly, we can add all data type variables to help commonDB cope with and meet different data type storage needs.。The corresponding control contract may be modified as follows: +Similarly, we can add variables of all data types to help commonDB meet the storage needs of different data types. The corresponding control contract can be modified as follows: ```solidity contract FruitStoreControllerV2 is BasicAuth { @@ -762,30 +762,30 @@ contract FruitStoreControllerV2 is BasicAuth { } ``` -Using the above storage design patterns can significantly improve the flexibility of contract data storage and ensure that contracts can be upgraded.。As we all know, Solidity neither supports databases, uses code as a storage entity, nor provides the flexibility to change schemas。However, with this KV design, the storage itself can be made highly scalable。Anyway,**No strategy is perfect, and good architects are good at weighing**。Smart contract 
designers need to fully understand the pros and cons of various solutions and choose the right design based on the actual situation。 +Using the above storage design pattern can significantly improve the flexibility of contract data storage and ensure that contracts remain upgradeable. As we know, Solidity does not support databases: it uses code as the storage entity and offers no flexibility to change schemas. However, with this KV design, the storage itself becomes highly scalable. In short, **no strategy is perfect, and good architects are good at weighing trade-offs**. Smart contract designers need to fully understand the pros and cons of each solution and choose the right design for the actual situation. **Use CRUD or KV to store contract data** -The data that needs to be stored is stored using the CRUD data interface, and the CRUD data is persisted on the chain through node consensus.。For details, please refer to [Developing Applications Using CRUD Precompiled Contracts](../contract_develop/c++_contract/use_crud_precompiled.md), [Develop applications using KV storage precompiled contracts](../contract_develop/c++_contract/use_kv_precompiled.md) +Data that needs to be stored can be written through the CRUD data interface, and the CRUD data is persisted on chain through node consensus. For details, please refer to [Developing Applications Using CRUD Precompiled Contracts](../contract_develop/c++_contract/use_crud_precompiled.md) and [Developing Applications Using KV Storage Precompiled Contracts](../contract_develop/c++_contract/use_kv_precompiled.md) ### 5.2 Freezing and unfreezing of smart contracts -In the event of a contract data exception or a large number of access exceptions, the contract administrator can freeze the smart contract to prevent other users from 
continuing to access the contract. -The contract administrator can initiate a transaction on a precompiled contract with a fixed address of 0x1005 and read and write the status of the contract.。 +The contract administrator can initiate a transaction to the precompiled contract at the fixed address 0x1005 to read and write the contract's status. -When the write operation of the contract status is performed, it will be determined whether the transaction originator msg.sender is the contract administrator of the contract permission table record, and if not, it will be rejected.。 +When a write operation on the contract status is performed, the system checks whether the transaction originator msg.sender is the contract administrator recorded in the contract permission table; if not, the operation is rejected. ```eval_rst .. important:: - Compatibility Note: Contract lifecycle management revocation can only be performed above node version 3.2.。 + Compatibility Note: Contract lifecycle management revocation can only be performed on node versions above 3.2. ``` The contract administrator can also freeze contracts through the console. 
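The administrator check on contract status described above follows the common owner-guard pattern. As an illustrative sketch only (the contract and function names below are hypothetical, not the actual interface of the 0x1005 precompiled contract):

```solidity
pragma solidity ^0.6.10;

// Illustrative sketch: mimics the rule that only the contract
// administrator recorded in the permission table may change a
// contract's status. Names here are hypothetical, not the 0x1005 API.
contract ContractStatusRegistry {
    // target contract address => recorded administrator
    mapping(address => address) public admin;
    // target contract address => frozen flag
    mapping(address => bool) public frozen;

    function setAdmin(address target, address newAdmin) public {
        // simplified bootstrap: first writer becomes administrator
        require(admin[target] == address(0) || msg.sender == admin[target], "not contract administrator");
        admin[target] = newAdmin;
    }

    function setFrozen(address target, bool isFrozen) public {
        // reject status writes from anyone but the recorded administrator
        require(msg.sender == admin[target], "not contract administrator");
        frozen[target] = isFrozen;
    }
}
```

In this sketch a failed check simply reverts, whereas the real node returns the error code -5000 described above.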
For more information, see [Freeze Contract Command](../operation_and_maintenance/console/console_commands.html#freezecontract) and [Unfreeze Contract Command](../operation_and_maintenance/console/console_commands.html#unfreezecontract) ### 5.3 Smart Contract Abolition -When the contract is no longer in use and the data is no longer accessible, users can use the reserved selfdestruct to destroy the contract, and the contract administrator can also use the contract annulment function to actively set the contract status to annulment.。 +When the contract is no longer in use and its data no longer needs to be accessed, users can call the reserved selfdestruct to destroy the contract, and the contract administrator can also use the contract revocation function to actively set the contract status to revoked. **selfdestruct** @@ -805,11 +805,11 @@ contract Mortal{ ```eval_rst .. important:: - Compatibility Note: Contract lifecycle management revocation can only be performed above node version 3.2.。 + Compatibility Note: Contract lifecycle management revocation can only be performed on node versions above 3.2. ``` **Note:** The process is irreversible; please weigh the consequences carefully. -When the write operation of the contract status is performed, it will be determined whether the transaction originator msg.sender is the contract administrator of the contract permission table record, and if not, it will be rejected.。 +When a write operation on the contract status is performed, the system checks whether the transaction originator msg.sender is the contract administrator recorded in the contract permission table; if not, the operation is rejected. The contract administrator can also freeze contracts through the console. 
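The selfdestruct path mentioned above is conventionally wrapped in an owner-guarded kill function; the following is an assumed, typical form of the Mortal pattern referenced in this section, not the exact code:

```solidity
pragma solidity ^0.6.10;

// Assumed, typical form of the Mortal pattern: only the deployer
// may destroy the contract, and destruction is irreversible.
contract Mortal {
    address payable private owner;

    constructor() public {
        owner = msg.sender;
    }

    function kill() public {
        require(msg.sender == owner, "only owner can destroy");
        // irreversible: removes the contract code and sends any balance to owner
        selfdestruct(owner);
    }
}
```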
For more information, see [Freeze Contract Command](../operation_and_maintenance/console/console_commands.html#freezecontract) \ No newline at end of file diff --git a/3.x/en/docs/develop/index.md b/3.x/en/docs/develop/index.md index 36222ce63..db6f1e645 100644 --- a/3.x/en/docs/develop/index.md +++ b/3.x/en/docs/develop/index.md @@ -2,13 +2,13 @@ Tag: 'Application Development' ---- -The application development section aims to guide users to learn more about the FISCO BCOS blockchain and use the corresponding components for development based on the rich functions and components provided by FISCO BCOS.。This section mainly includes the following parts: +The application development section aims to guide users in learning more about the FISCO BCOS blockchain and in developing with the rich functions and components that FISCO BCOS provides. This section mainly includes the following parts: -1. Blockchain RPC interface: This section provides an introduction to the Java API interface for blockchain application developers. You can use this section to familiarize yourself with how to deploy and invoke contracts.。 -2. Account usage and account management: This section specifically introduces how to create, store, and use accounts for blockchain application developers, and guides developers to create and operate accounts on demand.。 -3. Contract lifecycle and permission management: This section provides application developers with a detailed introduction to the entire lifecycle of contracts from development, deployment, invocation, upgrade, freezing to revocation, as well as the roles and management methods involved in the entire smart contract lifecycle.。 -4. Console deployment invocation contract: This section describes how application developers download the configuration console and guides developers on how to deploy and invoke contracts through the console.。 -5. 
SmartDev application development components: SmartDev development components for blockchain application developers to provide a comprehensive library of smart contracts, for commonly used functions, do not have to repeat the wheel, just quote on demand, you can introduce the corresponding functions, for the efficiency of contract development and security escort.。This section is intended to guide blockchain application developers to familiarize themselves with SmartDev components。 +1. Blockchain RPC interface: This section introduces the Java API for blockchain application developers. You can use it to become familiar with how to deploy and invoke contracts. +2. Account usage and account management: This section explains how blockchain application developers create, store, and use accounts, and guides developers in creating and operating accounts on demand. +3. Contract lifecycle and permission management: This section gives application developers a detailed introduction to the entire contract lifecycle, from development, deployment, invocation, upgrade, and freezing to revocation, as well as the roles and management methods involved throughout. +4. Console deployment and invocation of contracts: This section describes how application developers download and configure the console, and guides them in deploying and invoking contracts through it. +5. SmartDev application development components: The SmartDev components provide blockchain application developers with a comprehensive smart contract library; for commonly used functions there is no need to reinvent the wheel: simply reference them on demand to bring in the corresponding functionality, improving contract development efficiency and safeguarding security. This section guides blockchain application developers in familiarizing themselves with the SmartDev components. 6. 
Use AMOP function: FISCO BCOS provides the AMOP (Advanced Messages Onchain Protocol) function. This section guides users in using the AMOP protocol to communicate with other organizations and in receiving system push messages through the specified interface. -7. Use group ring signature and homomorphic encryption: FISCO BCOS integrates homomorphic encryption and group / ring signature verification functions in the form of precompiled contracts, providing a variety of privacy protection methods.。 -8. Smart Contract Security Practices: Describes the best practices and security measures that should be used at all stages of coding, deployment, operation, and maintenance of smart contracts.。 +7. Use group/ring signatures and homomorphic encryption: FISCO BCOS integrates homomorphic encryption and group/ring signature verification in the form of precompiled contracts, providing a variety of privacy protection methods. +8. Smart contract security practices: describes the best practices and security measures that should be applied at every stage of smart contract coding, deployment, operation, and maintenance. diff --git a/3.x/en/docs/develop/privacy.md b/3.x/en/docs/develop/privacy.md index b3e50a396..ac76f032a 100644 --- a/3.x/en/docs/develop/privacy.md +++ b/3.x/en/docs/develop/privacy.md @@ -1,9 +1,9 @@ -# 8. Use homomorphic encryption with group ring signatures. +# 8. 
Use homomorphic encryption and group/ring signatures Tags: "Privacy Contract" "Privacy Protection" "Contract Development" "Homomorphic Encryption" "Ring Signature" "Group Signature" ---- -Privacy protection is a major technical challenge for the alliance chain。In order to protect on-chain data, protect the privacy of alliance members, and ensure the effectiveness of supervision, FISCO BCOS integrates homomorphic encryption and group / ring signature verification functions in the form of pre-compiled contracts, providing a variety of privacy protection methods.。 +Privacy protection is a major technical challenge for consortium chains. To protect on-chain data, preserve the privacy of consortium members, and keep supervision effective, FISCO BCOS integrates homomorphic encryption and group/ring signature verification in the form of precompiled contracts, providing a variety of privacy protection methods. Sections I and II of this document briefly introduce the homomorphic encryption and group/ring signature algorithms and their application scenarios; sections III and IV detail how to enable and call the FISCO BCOS privacy protection module. @@ -19,15 +19,15 @@ Sections I and II of the document provide a brief introduction to the homomorphi ### Introduction to Algorithm -homomorphic encryption(Homomorphic Encryption)It is one of the pearls in the field of public key cryptosystems and has been studied for more than forty years.。Its excellent cryptographic features have attracted cryptographers and received widespread attention in the industry.。 +Homomorphic encryption (HE) is one of the pearls of public-key cryptography and has been studied for more than forty years. Its excellent cryptographic properties have attracted cryptographers and received widespread attention in the industry. -- Homomorphic encryption is essentially a public key encryption 
algorithm that uses the public key pk for encryption and the private key sk for decryption.; -Homomorphic encryption supports ciphertext computation, i.e. ciphertext generated by the same public key encryption can compute f( )operation, the resulting new ciphertext is decrypted exactly equal to the two original plaintext calculations f( )The result of; +- Homomorphic encryption is essentially a public-key encryption algorithm: encryption uses the public key pk and decryption uses the private key sk; +- Homomorphic encryption supports ciphertext computation: a function f( ) can be computed over ciphertexts generated under the same public key, and decrypting the resulting ciphertext yields exactly the result of computing f( ) on the two original plaintexts; - The homomorphic encryption formula is described as follows: ![](../../images/privacy/formula.jpg) -FISCO BCOS uses the paillier encryption algorithm and supports additive homomorphism.。Paillier's public and private keys are compatible with mainstream RSA encryption algorithms and have low access barriers.。At the same time, paillier, as a lightweight homomorphic encryption algorithm, has a small computational overhead and is easily accepted by the business system.。So after a trade-off between functionality and usability, the paillier algorithm was finally selected.。 +FISCO BCOS uses the Paillier encryption algorithm and supports additive homomorphism. Paillier's public and private keys are compatible with mainstream RSA encryption, giving it a low barrier to adoption. At the same time, as a lightweight homomorphic encryption algorithm, Paillier has a small computational overhead and is easily accepted by business systems. So, after weighing functionality against usability, the Paillier algorithm was finally selected. ### Functional Components @@ -36,17 +36,17 @@ The FISCO BCOS Homomorphic Encryption Module provides the following functional c - paillier homomorphic library [GitHub source 
code](https://github.com/FISCO-BCOS/paillier-lib) / [Gitee source code](https://gitee.com/FISCO-BCOS/paillier-lib), including Java libraries and a C++ homomorphic interface. -- Paillier precompiled contracts for smart contracts to call, providing a ciphertext homomorphic operation interface。 +- Paillier precompiled contract for smart contracts to call, providing a ciphertext homomorphic operation interface. ### Usage -For businesses that require privacy protection, if simple ciphertext calculation is involved, you can use this module to implement related functions.。All the data on the chain can be encrypted by calling the paillier library, and the ciphertext data on the chain can be added to the ciphertext by calling the paillier precompiled contract, and after the ciphertext is returned to the business layer, the decryption can be completed by calling the paillier library to obtain the execution result.。The specific process is shown in the following figure: +For businesses that require privacy protection and involve simple ciphertext computation, this module can be used to implement the related functions. Data can be encrypted by calling the Paillier library before going on chain; the on-chain ciphertext can be added homomorphically by calling the Paillier precompiled contract; and after the ciphertext is returned to the business layer, decryption via the Paillier library yields the execution result. The specific process is shown in the following figure: ![](../../images/privacy/paillier.jpg) ### Application Scenarios -In the alliance chain, different business scenarios need to be matched with different privacy protection policies.。For businesses with strong privacy, such as reconciliations between financial institutions, it is necessary to encrypt asset data。In FISCO BCOS, users can call the homomorphic encryption library to encrypt data, and call the homomorphic encryption precompiled contract when the consensus node executes 
the transaction to obtain the result of the ciphertext calculation.。 +In a consortium chain, different business scenarios need to be matched with different privacy protection policies. For businesses with strong privacy requirements, such as reconciliation between financial institutions, asset data must be encrypted. In FISCO BCOS, users can call the homomorphic encryption library to encrypt data, and the consensus nodes call the homomorphic encryption precompiled contract while executing the transaction to obtain the result of the ciphertext computation. ## Group / Ring Signature @@ -55,18 +55,18 @@ In the alliance chain, different business scenarios need to be matched with diff **group signature** -group signature(Group Signature)It is a relatively anonymous digital signature scheme that protects the identity of the signer, where the user can sign the message in place of their group, and the verifier can verify that the signature is valid, but does not know which group member the signature belongs to.。At the same time, users cannot abuse this anonymity because the group administrator can open the signature through the group master's private key, exposing the signature's attribution information.。Features of a group signature include: +A group signature (Group Signature) is a relatively anonymous digital signature scheme that protects the signer's identity: a user can sign a message on behalf of their group, and a verifier can check that the signature is valid without knowing which group member produced it. At the same time, users cannot abuse this anonymity, because the group administrator can open the signature with the group master private key, revealing who produced it. Features of a group signature include: -- Anonymity: Group members use group parameters to generate signatures, others can only verify the validity of the signature, and know that the signer belongs to the group through the signature, but cannot 
obtain the signer's identity information.; -- Non-forgeability: only group members can generate valid verifiable group signatures; -- Non-linkability: Given two signatures, it is impossible to tell if they are from the same signer; -- Traceability: In the case of regulatory intervention, group owners can obtain the signer's identity by signing.。 +-Anonymity: Group members use group parameters to generate signatures, others can only verify the validity of the signature, and know that the signer belongs to the group through the signature, but cannot obtain the signer's identity information; +- Non-forgery: only group members can generate valid verifiable group signatures; +-Unlinkability: Given two signatures, it is impossible to tell whether they are from the same signer; +- Traceability: In the case of regulatory intervention, group owners can obtain the identity of the signer by signing。 **ring signature** -ring signature(Ring Signature)Is a special group signature scheme, but with complete anonymity, that is, there is no administrator role, all members can actively join the ring, and the signature cannot be opened.。The characteristics of ring signatures include: +ring signature(Ring Signature)Is a special group signature scheme, but with complete anonymity, that is, there is no administrator role, all members can actively join the ring, and the signature cannot be opened。The characteristics of ring signatures include: -- Non-forgery: No other member of the ring can forge a true signer's signature; +- Non-forgery: other members of the ring cannot forge the signature of the real signer; - Complete anonymity: no group owner, only ring members, others can only verify the validity of the ring signature, but no one can obtain the signer's identity information。 ### Functional Components @@ -75,17 +75,17 @@ The FISCO BCOS group / ring signature module provides the following functional c - Group / Ring [Signature 
Library](https://github.com/FISCO-BCOS/group-signature-lib), provides a complete group / ring signature algorithm c++Interface -- Group / ring signature pre-compiled contract for smart contract invocation, providing group / ring signature verification interface。 +- Group / ring signature pre-compiled contract for smart contract calls, providing group / ring signature verification interface。 ### Usage -Businesses with signer identity concealment requirements can use this module to achieve related functions.。The signer signs the data by calling the group / ring signature library, then links the signature, the business contract completes the signature verification by calling the group / ring signature precompiled contract, and returns the verification result back to the business layer.。If it is a group signature, the supervisor can also open the specified signature data to obtain the signer's identity.。The specific process is shown in the following figure: +Businesses with signer identity concealment requirements can use this module to achieve related functions。The signer signs the data by calling the group / ring signature library, then links the signature, the business contract completes the signature verification by calling the group / ring signature precompiled contract, and returns the verification result back to the business layer。If it is a group signature, the supervisor can also open the specified signature data to obtain the signer's identity。The specific process is shown in the following figure: ![](../../images/privacy/group_sig.jpg) ### Application Scenarios -Due to its natural anonymity, group / ring signatures have a wide range of applications in scenarios where the identity of participants needs to be concealed, such as anonymous voting, anonymous auctions, anonymous auctions, etc., and can even be used to implement anonymous transfers in the blockchain UTXO model.。At the same time, because the group signature is traceable, it can be used in scenarios 
that require regulatory intervention, and the regulator acts as the group owner or entrusts the group owner to reveal the identity of the signer.。 +Due to its natural anonymity, group / ring signatures have a wide range of applications in scenarios where the identity of participants needs to be concealed, such as anonymous voting, anonymous auctions, anonymous auctions, etc., and can even be used to implement anonymous transfers in the blockchain UTXO model。At the same time, because the group signature is traceable, it can be used in scenarios that require regulatory intervention, and the regulator acts as the group owner or entrusts the group owner to reveal the identity of the signer。 ### Development Example @@ -93,7 +93,7 @@ FISCO BCOS specifically provides users with examples of group / ring signature d - Group / ring signature server: Provides complete group / ring signed RPC services。[GitHub source code](https://github.com/FISCO-BCOS/group-signature-server)[Gitee source code](https://gitee.com/FISCO-BCOS/group-signature-server) -- Group / Ring Signing Client: Call the RPC service to sign the data, and provide signature on the chain and on-chain verification and other functions.。[GitHub source code](https://github.com/FISCO-BCOS/group-signature-client/tree/master-2.0)[Gitee source code](https://gitee.com/FISCO-BCOS/group-signature-client/tree/master-2.0) +- Group / Ring Signing Client: Call the RPC service to sign the data, and provide signature on the chain and on-chain verification and other functions。[GitHub source code](https://github.com/FISCO-BCOS/group-signature-client/tree/master-2.0)[Gitee source code](https://gitee.com/FISCO-BCOS/group-signature-client/tree/master-2.0) The sample framework is shown in the following figure. 
Please refer to the [Client Guide Github Link](https://github.com/FISCO-BCOS/group-signature-client/tree/master-2.0) or [Client Guide Gitee Link](https://gitee.com/FISCO-BCOS/group-signature-client/tree/master-2.0).
@@ -109,9 +109,9 @@ The FISCO BCOS privacy protection module is implemented via a pre-compiled contr
## Precompiled Contract Interface
-The code of the privacy module and the pre-compiled contract developed by the user are located in 'FISCO-BCOS/bcos-executor / src / precompiled / extension 'directory, so the calling method of the privacy module and the precompiled contract developed by the user [calling process](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/smart_contract.html#id12)Same, but note:
+The code of the privacy module, like user-developed precompiled contracts, is located in the 'FISCO-BCOS/bcos-executor/src/precompiled/extension' directory, so the privacy module is called in the same way as a user-developed precompiled contract (see the [calling process](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/smart_contract.html#id12)), but note:
-The pre-compiled contract for the privacy module has been assigned an address and does not need to be registered separately。The list of pre-compiled contracts and address assignments implemented by the privacy module are as follows.
+The precompiled contracts of the privacy module have already been assigned addresses and do not need to be registered separately. The precompiled contracts implemented by the privacy module and their address assignments are listed below.
Source code can refer to the link: [GitHub link](https://github.com/FISCO-BCOS/FISCO-BCOS/tree/master/bcos-executor/src/precompiled/extension)[Gitee Link](https://gitee.com/FISCO-BCOS/FISCO-BCOS/tree/master/bcos-executor/src/precompiled/extension)
diff --git a/3.x/en/docs/develop/smartdev_index.md b/3.x/en/docs/develop/smartdev_index.md
index f6f21e1d6..9f4e60941 100644
--- a/3.x/en/docs/develop/smartdev_index.md
+++ b/3.x/en/docs/develop/smartdev_index.md
@@ -1,14 +1,14 @@
-# 6. SmartDev application development components.
+# 6. SmartDev application development components
-Tag: "WeBankBlockchain-SmartDev "" Application Development "" Common Components "" Smart Contract Library "" Smart Contract Compilation Plug-in "" Application Development Scaffolding "
+Tags: "WeBankBlockchain-SmartDev" "Application Development" "Common Components" "Smart Contract Library" "Smart Contract Compilation Plugin" "Application Development Scaffolding"
----
## Component positioning
-After more than ten years of development, blockchain technology has gradually taken root in various industries.。But at the same time, from a technical point of view, blockchain application development still has a high threshold, there are many pain points, the user experience in all aspects of application development needs to be improved.。
+After more than ten years of development, blockchain technology has gradually taken root in various industries. At the same time, however, from a technical point of view, blockchain application development still has a high entry barrier and many pain points, and the developer experience across all stages of application development needs to be improved.
-WeBankBlockchain-The original intention of SmartDev application development components is to help developers develop block chain applications efficiently and quickly.。SmartDev includes a set of open, lightweight development components, covering smart contract development, debugging, application development and other aspects, developers can freely choose the appropriate development tools according to their own situation, improve development efficiency.。
+The original intention of the WeBankBlockchain-SmartDev application development components is to help developers build blockchain applications efficiently and quickly, in an all-round way. SmartDev includes a set of open, lightweight development components covering smart contract development, debugging, application development, and more; developers can freely choose the appropriate tools for their situation to improve development efficiency.
## Design Objectives
@@ -23,25 +23,25 @@ Is it possible to provide a blockchain application code generator that is easy t
How can programming Xiaobai quickly get started with blockchain application development?
...
-These issues are both contract development-related and application development-related.。Based on such scenarios, combined with their own practical experience, WeBank Blockchain officially open source blockchain application development component WeBankBlockchain.-SmartDev hopes to improve the development efficiency of blockchain applications from all aspects of blockchain application development, and help developers become "10 times engineers" in blockchain application development.。Currently, the entire component is developed based on the solidity language。Recently, Weizhong Bank's blockchain has also opened up webankblockchain.-liquid (hereinafter referred to as WBC-Liquid) contract language, we will also adapt to WBC in the future.-Liquid Language。
+These issues relate both to contract development and to application development. Based on such scenarios and its own practical experience, WeBank has officially open-sourced the blockchain application development component WeBankBlockchain-SmartDev. It aims to improve the efficiency of blockchain application development in every phase and in multiple dimensions, and to help developers become "10x engineers" of blockchain application development. Currently, the entire component set is developed around the Solidity language. WeBank has recently also open-sourced the webankblockchain-liquid (hereinafter WBC-Liquid) contract language, and WBC-Liquid will be supported in the future.
-Blockchain application development component WeBankBlockchain-SmartDev's original intention is to create a low-code development of the component library, all-round help developers efficient, agile development of blockchain applications.。WeBankBlockchain-SmartDev includes a set of open, lightweight development components, covering contract development, compilation, application development and other aspects, developers can choose the appropriate development tools according to their own situation, improve development efficiency.。
+The original intention of WeBankBlockchain-SmartDev is to provide a low-code component library that helps developers build blockchain applications efficiently and with agility. WeBankBlockchain-SmartDev includes a set of open, lightweight development components covering contract development, compilation, application development, and more; developers can choose the appropriate tools for their situation to improve development efficiency.
-From the perspective of contract development, for commonly used functions, there is no need to repeat the wheel, just quote on demand, refer to the code in the "smart contract library," you can introduce the corresponding functions, for the efficiency and safety of contract development escort.。For non-basic features, such as business scenarios, we also provide code templates for reuse.。
+From the perspective of contract development, there is no need to reinvent the wheel for commonly used functions: simply reference the code in the "smart contract library" on demand to bring in the corresponding functionality, which safeguards both the efficiency and the safety of contract development. For non-basic features, such as business scenarios, code templates are also provided for reuse.
-From the perspective of contract compilation, for blockchain applications under development, you no longer need to rely on the console to compile the contract code, just use the contract gradle compilation plug-in to compile in place, and you can immediately get abi, bin and java contracts.。These compilations are exported directly to the Java project, eliminating the step of copying and providing a fast, silky experience like developing native Java programs。
+From the perspective of contract compilation, for blockchain applications under development you no longer need to rely on the console to compile contract code: just use the contract gradle compilation plugin to compile in place and immediately obtain the abi, bin, and Java contracts. These compilation artifacts are exported directly into the Java project, eliminating the copying step and providing a fast, smooth experience similar to developing native Java programs.
-From the perspective of application development, from smart contracts to project construction, there is a lot of mechanical and repetitive work, such as creating projects, introducing dependencies, writing configuration code, accessing smart contracts, and writing related entity classes.。By contrast, via WeBankBlockchain-SmartDev, developers can choose application development scaffolding。Scaffolding automatically generates project works based on smart contracts。The project already contains the above logic code, developers only need to continue to add business logic code based on the project, focusing on their own business.。
+From the perspective of application development, going from smart contracts to a project involves a lot of mechanical, repetitive work, such as creating the project, introducing dependencies, writing configuration code, accessing smart contracts, and writing the related entity classes. With WeBankBlockchain-SmartDev, developers can instead use the application development scaffold, which automatically generates a project skeleton from the smart contracts. The generated project already contains the logic code above; developers only need to add business logic code on top of it and can focus on their own business.
![](../../../../2.x/images/governance/SmartDev/compare.png)
## Component Introduction
-SmartDev includes a set of open, lightweight development components that cover the development, debugging, and application development of smart contracts, including the smart contract library (SmartDev-Contract), Smart Contract Compilation Plug-in (SmartDev-SCGP) and application development scaffolding (SmartDev-Scaffold)。Developers can freely choose the corresponding development tools according to their own situation to improve development efficiency.。
+SmartDev includes a set of open, lightweight development components covering smart contract development, debugging, and application development, consisting of the smart contract library (SmartDev-Contract), the smart contract compilation plugin (SmartDev-SCGP), and the application development scaffold (SmartDev-Scaffold). Developers can freely choose the corresponding tools for their situation to improve development efficiency.
![](../../../../2.x/images/governance/SmartDev/smartdev_overview.png)
-### SmartDev-Contract Smart Contract Library
-Solidity Smart Contract Code Base。Contains basic types, data structures, common functions, upper-level business and other smart contract libraries.。Users can reference and reuse according to actual needs.。
+### SmartDev-Contract Smart Contract Library
+A Solidity smart contract code base. It contains libraries for basic types, data structures, common functions, upper-layer business logic, and more. Users can reference and reuse them according to actual needs.
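To illustrate the kind of guard a safe-math helper in such a contract library (e.g. LibSafeMathForUint256Utils, mentioned below) applies, here is a minimal sketch in Python; the function names are illustrative, not the library's actual API:

```python
# Sketch of uint256 safe math: reject results that would not fit in
# 256 bits instead of silently wrapping around. Names are illustrative.
UINT256_MAX = 2**256 - 1

def safe_add(a: int, b: int) -> int:
    """Add two uint256 values, raising on overflow."""
    result = a + b
    if result > UINT256_MAX:
        raise OverflowError("uint256 addition overflow")
    return result

def safe_mul(a: int, b: int) -> int:
    """Multiply two uint256 values, raising on overflow."""
    result = a * b
    if result > UINT256_MAX:
        raise OverflowError("uint256 multiplication overflow")
    return result

print(safe_add(1, 2))        # 3
try:
    safe_add(UINT256_MAX, 1)  # would wrap in unchecked EVM arithmetic
except OverflowError as e:
    print(e)                  # uint256 addition overflow
```

Note that Solidity 0.8+ performs this check automatically for built-in arithmetic; an explicit safe-math library remains useful for older compiler versions and for auditable, self-documenting arithmetic.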
of smart contracts, from the basic four operations to the upper-level business scenarios, you can use mature, reusable libraries。 -Take the four-rule operation as an example, you need to determine whether there is a risk of overflow, at which point you can use the math-related library LibSafeMathForUint256Utils.。 +Take the four-rule operation as an example, you need to determine whether there is a risk of overflow, at which point you can use the math-related library LibSafeMathForUint256Utils。 -Take the data structure as an example, in solidity, the key of the mapping type cannot be iterated, at this time, if you need to use the mapping of the key iteration, you can use the mapping-related library LibBytesMap.。 +Take the data structure as an example, in solidity, the key of the mapping type cannot be iterated, at this time, if you need to use the mapping of the key iteration, you can use the mapping-related library LibBytesMap。 -For example, if you want to introduce cryptographic functions such as hashing and signature verification, you can use the Crypto library.。 +For example, if you want to introduce cryptographic functions such as hashing and signature verification, you can use the Crypto library。 -Take the business scenario as an example, if you want to implement the certificate storage function, you can refer to the scenario template Evidence, which incorporates the relevant implementation, which has the effect of throwing bricks and mortar.。 +Take the business scenario as an example, if you want to implement the certificate storage function, you can refer to the scenario template Evidence, which incorporates the relevant implementation, which has the effect of throwing bricks and mortar。 ### Scenario 2: Contract modification and debugging -In the process of blockchain application development and debugging, it is usually necessary to use abi, bin, java contract, etc. 
in the project, and debug accordingly based on these contents.。If the contract needs to be recompiled for reasons such as adjustments, you don't have to copy the contract into the console to compile it, just run the corresponding gradle directive to generate a new compilation.。At the same time, these compilations are directly embedded in the project.。As shown in the following figure, after the HelloWorld contract is compiled, the resulting compiled product example: +In the process of blockchain application development and debugging, it is usually necessary to use abi, bin, java contract, etc. in the project, and debug accordingly based on these contents。If the contract needs to be recompiled for reasons such as adjustments, you don't have to copy the contract into the console to compile it, just run the corresponding gradle directive to generate a new compilation。At the same time, these compilations are directly embedded in the project。As shown in the following figure, after the HelloWorld contract is compiled, the resulting compiled product example: ![](../../../../2.x/images/governance/SmartDev/example.png) ### Scenario 3: Blockchain application development -If you have written a smart contract, you need to develop a web project that provides a rest interface based on the smart contract.。In this case, the user can drag the contract into the scaffold and generate the project with one click。The following figure shows the generated sample project, including the necessary configuration classes, DAO (Data Access Object) related code。Developers only need to make the necessary configuration of the project, and add the corresponding controller and other code, you can easily achieve the above requirements。 +If you have written a smart contract, you need to develop a web project that provides a rest interface based on the smart contract。In this case, the user can drag the contract into the scaffold and generate the project with one click。The following figure shows the 
generated sample project, including the necessary configuration classes, DAO (Data Access Object) related code。Developers only need to make the necessary configuration of the project, and add the corresponding controller and other code, you can easily achieve the above requirements。 ![](../../../../2.x/images/governance/SmartDev/dir.png) diff --git a/3.x/en/docs/introduction/change_log/3_0_0.md b/3.x/en/docs/introduction/change_log/3_0_0.md index 0f8331a45..b0ddbe49d 100644 --- a/3.x/en/docs/introduction/change_log/3_0_0.md +++ b/3.x/en/docs/introduction/change_log/3_0_0.md @@ -3,7 +3,7 @@ ```eval_rst .. admonition:: v2.x Upgrade to v3.0.0 - - **Comprehensive upgrade** : Refer to 'Build the first blockchain network <.. /.. / quick _ start / air _ installation.html >' _ Build a new blockchain and resubmit all historical transactions to the new node. The upgraded node contains v3.0.0-rc3 new features + - **Comprehensive upgrade** Reference 'Building the First Blockchain Network<../../quick_start/air_installation.html>'_ Build a new chain, resubmit all historical transactions to the new node, the upgraded node contains the new v3.0.0-rc3 features - v3.0.0 and v3.0.0-rc*Incompatible, interversion compatibility support starting with this version @@ -16,7 +16,7 @@ **Air / Pro / Max** Meet different deployment scenarios -- **Air**: Traditional blockchain architecture, all functions in one blockchain node (all-in-one) to meet the deployment needs of developers and simple scenarios +- **Air**Traditional blockchain architecture, all functions in one blockchain node (all-in-one), to meet the deployment needs of developers and simple scenarios - **Pro**: Gateway+ RPC + Blockchain nodes to meet the needs of isolated deployment of internal and external environments - **Max**: Gateway+ RPC + Blockchain node (master / standby)+ Multiple transaction executors to meet the demand for high availability and extreme performance @@ -26,9 +26,9 @@ Generate blocks in a pipelined manner to 
improve performance -- Divide the block generation process into four stages: packaging, consensus, execution, and placement. -- Consecutive blocks are executed in a pipelined manner through four stages (103 in packaging, 102 in consensus, 101 in execution, and 100 in drop) -- Performance approaches the performance of the longest execution phase in the pipeline when consecutive blocks are released +- Divide the block generation process into four stages: packaging, consensus, execution, and placement +-Continuous blocks go through four stages in a pipelined manner during execution (103 in packaging, 102 in consensus, 101 in execution, and 100 in drop) +-When continuously out of blocks, the performance approaches the performance of the longest execution time in the pipeline **Execution: deterministic multi-contract parallelism** @@ -36,53 +36,53 @@ Mechanisms for implementing parallel execution and scheduling of inter-contract - Efficient: Transactions on different contracts can be executed in parallel, improving transaction processing efficiency - Easy to use: transparent to developers, automatic transaction parallel execution and conflict processing -- Generic: Supports EVM, WASM, Precompiled, or other contracts +- Universal: Supports EVM, WASM, Precompiled or other contracts **Storage: KeyPage** Cache mechanism for reference memory pages for efficient blockchain storage -* The key-value is stored in pages +* Store key-value by organizing it into pages * Improve the locality of memory access and reduce the storage space occupation **Inheritance and Upgrade** -* DAG parallel execution: no longer rely on the parallel programming framework, can automatically generate conflict parameters based on the solidity code, to achieve parallel execution of transactions within the contract. 
+* DAG parallel execution: no longer relies on a parallel programming framework; conflict parameters are generated automatically from the Solidity code to achieve parallel execution of transactions within a contract
* PBFT consensus algorithm: immediate consensus algorithm for second-level confirmation of transactions
-* For more information, please refer to the online documentation.
+* For more information, please refer to the online documentation
## New Features
**Blockchain File System**
-Use the command line to manage blockchain resources, such as contracts, tables, etc.
+Use the command line to manage blockchain resources, such as contracts, tables, etc
- Commands: pwd, cd, ls, tree, mkdir, ln
-- Function: the contract address and path binding, you can use the path to call the contract
+- Function: binds a contract address to a path, so the contract can be called by its path
**Permission governance**
After the blockchain is enabled, multi-party voting is required to allow
* Roles: Administrator, Administrator, User
-* Controlled operations: deployment contracts, contract interface calls, system parameter settings, etc.
+* Controlled operations: contract deployment, contract interface calls, system parameter settings, etc
-**WBC-Liquid:WeBankBlockchain-Liquid(WBC for short-Liquid)**
+**WBC-Liquid: WeBankBlockchain-Liquid (WBC-Liquid for short)**
-It not only supports Soldity to write contracts, but also supports Rust to write contracts.
+Contracts can be written not only in Solidity but also in Rust
-- Liquid is a smart contract programming language based on the Rust language.
-- Integrated WASM runtime environment with WBC support-Liquid Smart Contract。
-- WBC-Liquid smart contract supports intelligent analysis of conflict fields and automatically opens DAG。
+- Liquid is a smart contract programming language based on the Rust language
+- Integrated WASM runtime environment, supporting WBC-Liquid smart contracts
+- WBC-Liquid smart contracts support intelligent analysis of conflict fields to enable DAG parallel execution automatically
**Inheritance and Upgrade**
* Solidity: currently supported up to version 0.8.11
* CRUD: Use table structure to store data, which is more friendly to business development, and more easy-to-use interfaces are encapsulated in 3.0
-* AMOP: On-chain messenger protocol, which enables information transmission and data communication between applications accessing the blockchain through the P2P network of the blockchain.
-* Disk encryption: The private key and data of the blockchain node are encrypted and stored in the physical hard disk, and the physical hardware cannot be decrypted even if it is lost.
+* AMOP: on-chain messenger protocol, which enables information transmission and data communication between applications accessing the blockchain through the blockchain's P2P network
+* Disk encryption: the private key and data of a blockchain node are stored encrypted on the physical hard disk, so they cannot be decrypted even if the physical hardware is lost
* Cryptographic algorithm: built-in group ring signature and other cryptographic algorithms, can achieve a variety of secure multi-party computing scenarios
-* For more information, please refer to the online documentation.
+* For more information, please refer to the online documentation ## Compatibility diff --git a/3.x/en/docs/introduction/change_log/3_0_0_rc1.md b/3.x/en/docs/introduction/change_log/3_0_0_rc1.md index 0459f59cf..bd7a36eb6 100644 --- a/3.x/en/docs/introduction/change_log/3_0_0_rc1.md +++ b/3.x/en/docs/introduction/change_log/3_0_0_rc1.md @@ -1,9 +1,9 @@ ```eval_rst .. admonition:: v2.x Upgrade to v3.0.0-rc1 - - **Comprehensive upgrade** : Refer to 'Build the first blockchain network <.. /.. / quick _ start / air _ installation.html >' _ Build a new blockchain and resubmit all historical transactions to the new node. The upgraded node contains v3.0.0-rc1 new features + - **Comprehensive upgrade** Reference 'Building the First Blockchain Network<../../quick_start/air_installation.html>'_ Build a new chain, resubmit all historical transactions to the new node, the upgraded node contains the new v3.0.0-rc1 features - - v3.0.0-rc1 does not include the "FISCO BCOS Max" version, the Max version of FISCO BCOS will be available in subsequent versions + -v3.0.0-rc1 does not include the "FISCO BCOS Max" version, the Max version of FISCO BCOS will be available in subsequent versions - `v3.0.0-rc1 Release Note `_ ``` @@ -11,31 +11,31 @@ ## Change Description **Microservices architecture** -- Provides common block-linked entry specifications。 -- Provide a management platform, users can deploy, expand, access to interface granularity monitoring information。 +- Provide common block linking into specification。 +- Provide management platform, users can deploy, expand, access to interface granularity monitoring information。 **deterministic multi-contract parallelism** -- Easy to use: The underlying blockchain automatically parallelizes without the need for users to provide conflicting fields in advance.。 +- Easy to use: the underlying blockchain automatically parallel, without the need for users to provide conflict fields in advance。 - Efficient: transactions within the block are not 
executed repeatedly, there is no pre-execution or pre-analysis process。 -- Generic: Can be used regardless of EVM, WASM, Precompiled, or other contracts。 +- Universal: can be used regardless of EVM, WASM, Precompiled, or other contracts. **Blockchain File System** - Introduce the concept of file system to organize on-chain resources, users can browse on-chain resources like files。 -- Based on the blockchain file system to achieve management functions, such as partitioning, permissions, etc., more intuitive。 +- Based on the blockchain file system, management functions such as partitioning and permissions are implemented more intuitively. **pipelined PBFT consensus** -- Transaction sequencing and transaction execution are independent of each other, realizing pipeline architecture and improving resource utilization.。 -- Supports batch consensus and parallel consensus processing on blocks to improve performance。 -- Supports a single consensus leader to continuously issue blocks to improve performance。 +- Transaction ordering and transaction execution are independent of each other, realizing a pipelined architecture and improving resource utilization. +- Supports batch consensus and parallel consensus processing of blocks to improve performance. +- Supports a single consensus leader continuously producing blocks to improve performance. -**WeBankBlockchain-Liquid(WBC for short-Liquid)** -- Integrated WASM runtime environment with WBC support-Liquid Smart Contract。 -- WBC-Liquid smart contract supports intelligent analysis of conflict fields and automatically opens DAG。 +**WeBankBlockchain-Liquid (WBC-Liquid)** +- Integrated WASM runtime environment; supports WBC-Liquid smart contracts. +- WBC-Liquid smart contracts support intelligent analysis of conflicting fields and automatically enable DAG execution. **Compatibility** -3.0.0-rc1 version is incompatible with 2.x version data and protocol, Solidity contract source code is compatible。If you are upgrading from version 2.x to 3.0.0-rc1 version, data
migration is required。 +The 3.0.0-rc1 data and protocol are incompatible with 2.x, while Solidity contract source code remains compatible. To upgrade from 2.x to 3.0.0-rc1, data migration is required. | | Recommended Version| Minimum Version| Description| |------------|--------------------------|-----------|------| diff --git a/3.x/en/docs/introduction/change_log/3_0_0_rc2.md b/3.x/en/docs/introduction/change_log/3_0_0_rc2.md index eed6c25da..c0555e3e4 100644 --- a/3.x/en/docs/introduction/change_log/3_0_0_rc2.md +++ b/3.x/en/docs/introduction/change_log/3_0_0_rc2.md @@ -1,11 +1,11 @@ ```eval_rst -.. admonition:: v2.x Upgrade to v3.0.0-rc2 +.. admonition:: v2.x upgrade to v3.0.0-rc2 - - **Comprehensive upgrade** : Refer to 'Build the first blockchain network <.. /.. / quick _ start / air _ installation.html >' _ Build a new blockchain and resubmit all historical transactions to the new node. The upgraded node contains v3.0.0-rc2 new features + - **Comprehensive upgrade**: Refer to 'Building the First Blockchain Network <../../quick_start/air_installation.html>'_ to build a new chain and resubmit all historical transactions to the new nodes; the upgraded nodes include the new v3.0.0-rc2 features - - v3.0.0-rc2 does not include the "FISCO BCOS Max" version, the Max version of FISCO BCOS will be available in subsequent versions + - v3.0.0-rc2 does not include the "FISCO BCOS Max" version; the Max version of FISCO BCOS will be available in subsequent versions - - v3.0.0-rc2 and v3.0.0-rc1 incompatible, expected from v3.0.0-rc4 for inter-version compatibility support + - v3.0.0-rc2 is not compatible with v3.0.0-rc1; inter-version compatibility support is expected from v3.0.0-rc4 - `v3.0.0-rc2 Release Note `_ ``` @@ -16,26 +16,26 @@ **Change** - Optimize the complexity of code warehouse management and centralize multiple sub-warehouses to FISCO BCOS for unified management -- The transaction is modified from 'Base64' encoding to hexadecimal encoding.
-- upgrade'bcos-boostssl 'and' bcos-utilities' depends on the latest version -- Modifying the Scale Codec of 'bytesN' Type Data -- Optimize the transaction processing process to avoid performance loss caused by repeated transaction checks. -- Block Height Acquisition Method of Optimized Event Push Module +- Transactions are changed from 'Base64' encoding to hexadecimal encoding +- Upgrade the 'bcos-boostssl' and 'bcos-utilities' dependencies to the latest version +- Modify the Scale codec of 'bytesN' type data +- Optimize transaction processing to avoid the performance loss caused by repeated transaction checks +- Optimize how the event push module obtains the block height **Repair** -- Fix the memory leak caused by the scheduler scheduling transaction +- Fix memory leaks caused by scheduler transaction scheduling - Repair DMC+Inconsistent execution during DAG scheduling - Fix [Issue 2132](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2132) -- Repair [Issue 2124](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2124) -- Fix some scenarios, the new node is connected to the network and does not trigger the quick view switch, resulting in the number of nodes meeting '(2*f+1)'But the problem of abnormal consensus -- Fix the problem that some variable access threads are unsafe and cause node crash.
-- Fix AMOP Subscription Multiple topics Failure +- Fix [Issue 2124](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2124) +- Fixed an issue where, in some scenarios, a newly connected node did not trigger a fast view change, causing abnormal consensus even though the number of nodes met '(2*f+1)' +- Fixed node crashes caused by thread-unsafe access to some variables +- Fix the failure of AMOP subscriptions to multiple topics **Compatibility** -3.0.0-rc2 version with 3.0.0-rc1/2.0+Version data and protocol incompatible, Solidity / WBC-Liquid contract source code compatible。If you want to go from 3.0.0-rc1/2.0+Version Upgrade to 3.0.0-rc2 version, data migration is required。 +The 3.0.0-rc2 data and protocol are incompatible with 3.0.0-rc1 / 2.0+, while Solidity / WBC-Liquid contract source code remains compatible. To upgrade from 3.0.0-rc1 / 2.0+ to 3.0.0-rc2, data migration is required. | | Recommended Version| Minimum Version| Description| | ---------- | ----------------------- | --------- | ---------------------- | diff --git a/3.x/en/docs/introduction/change_log/3_0_0_rc3.md b/3.x/en/docs/introduction/change_log/3_0_0_rc3.md index a0dac4d3c..571b87e6b 100644 --- a/3.x/en/docs/introduction/change_log/3_0_0_rc3.md +++ b/3.x/en/docs/introduction/change_log/3_0_0_rc3.md @@ -1,11 +1,11 @@ ```eval_rst .. admonition:: v2.x Upgrade to v3.0.0-rc3 - - **Comprehensive upgrade** : Refer to 'Build the first blockchain network <.. /.. / quick _ start / air _ installation.html >' _ Build a new blockchain and resubmit all historical transactions to the new node.
The upgraded node contains v3.0.0-rc3 new features + - **Comprehensive upgrade**: Refer to 'Building the First Blockchain Network <../../quick_start/air_installation.html>'_ to build a new chain and resubmit all historical transactions to the new nodes; the upgraded nodes include the new v3.0.0-rc3 features - - v3.0.0-rc3 does not include the "FISCO BCOS Max" version, the Max version of FISCO BCOS will be available in subsequent versions + - v3.0.0-rc3 does not include the "FISCO BCOS Max" version; the Max version of FISCO BCOS will be available in subsequent versions - - v3.0.0-rc3 with v3.0.0-rc1 incompatible, expected from v3.0.0-rc4 for inter-version compatibility support + - v3.0.0-rc3 is not compatible with v3.0.0-rc1; inter-version compatibility support is expected from v3.0.0-rc4 - `v3.0.0-rc3 Release Note `_ ``` @@ -16,33 +16,33 @@ **New** - Supports Solidity contract parallel conflict field analysis -- Integrate cryptography, transaction coding and decoding and other related logic into bcos.-cpp-sdk, and encapsulated into a common C interface -- WASM virtual machine support contract invocation contract -- Add bcos-wasm instead of Hera -- 'BFS 'supports soft link functionality -- Supports dynamic modification of gas limits for transaction execution via the 'tx _ gas _ limit' keyword of the 'setSystemConfig' system contract -- Deploy Contract Storage Contract ABI +- Integrate cryptography, transaction encoding/decoding, and other related logic into bcos-cpp-sdk, encapsulated as a common C interface +- The WASM virtual machine supports contracts calling other contracts +- Added bcos-wasm to replace Hera +- 'BFS' supports the soft-link function +- Supports dynamically modifying the gas limit of transaction execution through the 'tx_gas_limit' keyword of the 'setSystemConfig' system contract +- Contract ABI is stored when a contract is deployed **Change** -- Upgrade EVM virtual machine to latest, support Solidity 0.8 +- Upgrade the EVM virtual machine to the latest version, supporting Solidity 0.8 - Optimize
webcasting at the institutional level to reduce inter-agency extranet bandwidth consumption -- Support the national secret acceleration library, national secret signature and verification performance improvement 5-10 times +- Support the national cryptography (SM) acceleration library; SM signature and verification performance improved 5-10x - EVM nodes support 'BFS', use 'BFS' instead of 'CNS' -- DAG framework unified support for Solidity and WBC-Liquid +- The DAG framework uniformly supports Solidity and WBC-Liquid **Repair** -- Repair [#issue 2312](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2312) -- Repair [#issue 2307](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2307) -- Repair [#issue 2254](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2254) -- Repair [#issue 2211](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2211) -- Repair [#issue 2195](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2195) +- Fix [#issue 2312](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2312) +- Fix [#issue 2307](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2307) +- Fix [#issue 2254](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2254) +- Fix [#issue 2211](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2211) +- Fix [#issue 2195](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2195) **Compatibility** -3.0.0-rc3 version with 3.0.0-rc2 version data and protocol incompatible, Solidity / WBC-Liquid contract source code compatible。If you want to go from 3.0.0-rc2 version upgrade to 3.0.0-rc3 version, data migration is required。 +The 3.0.0-rc3 data and protocol are incompatible with 3.0.0-rc2, while Solidity / WBC-Liquid contract source code remains compatible. To upgrade from 3.0.0-rc2 to 3.0.0-rc3, data migration is required. | | Recommended Version| Minimum Version| Description| | ---------- | ----------------------- | --------- | ---------------------- | diff --git a/3.x/en/docs/introduction/change_log/3_0_0_rc4.md
b/3.x/en/docs/introduction/change_log/3_0_0_rc4.md index f0395c6e0..9754078a1 100644 --- a/3.x/en/docs/introduction/change_log/3_0_0_rc4.md +++ b/3.x/en/docs/introduction/change_log/3_0_0_rc4.md @@ -1,11 +1,11 @@ # v3.0.0-rc4 ```eval_rst -.. admonition:: v2.x Upgrade to v3.0.0-rc4 +.. admonition:: v2.x upgrade to v3.0.0-rc4 - - **Comprehensive upgrade** : Refer to 'Build the first blockchain network <.. /.. / / quick _ start / air _ installation.html >' _ Build a new blockchain and resubmit all historical transactions to the new node. The upgraded node contains v3.0.0-rc3 new features + - **Comprehensive upgrade**: Refer to 'Building the First Blockchain Network <../../quick_start/air_installation.html>'_ to build a new chain and resubmit all historical transactions to the new nodes; the upgraded nodes include the new v3.0.0-rc4 features - - v3.0.0-rc4 and v3.0.0-rc3 not compatible, interversion compatibility support expected from official version + - v3.0.0-rc4 is not compatible with v3.0.0-rc3; inter-version compatibility support is expected from the official version - `v3.0.0-rc4 Release Note `_ ``` @@ -14,39 +14,39 @@ ### New -- Implementing 'Max' Version FISCO-BCOS, the storage adopts distributed storage TiKV, the execution module is independent into a service, the storage and execution can be horizontally expanded, and supports automatic master and backup recovery, which can support the massive transaction chain scene.
-- Completely design and implement a compatibility framework from the data to the protocol layer to ensure secure upgrades of protocols and data -- Support CRUD contract interface, simplify blockchain application development threshold +- Implemented the 'Max' version of FISCO-BCOS: storage adopts distributed TiKV, the execution module is split into an independent service, storage and execution can be scaled horizontally, and automatic master/backup recovery is supported, enabling scenarios with massive on-chain transactions +- Fully designed and implemented a compatibility framework from the data layer to the protocol layer to ensure safe upgrades of protocols and data +- Support the CRUD contract interface, lowering the barrier to blockchain application development - Support group ring signature contract interface, rich on-chain privacy computing capacity - Support contract lifecycle management functions, including contract freezing and unfreezing - Support data drop disk encryption -- Based on 'mtail'+ `prometheus` + `grafana` + 'ansible 'to achieve blockchain system monitoring +- Based on 'mtail' + `prometheus` + `grafana` + 'ansible' to implement blockchain system monitoring ### Change - Introducing KeyPage to optimize read storage performance -- Based on the principle of Rip protocol, realize network forwarding function and improve network robustness +- Based on the RIP protocol principle, implemented network forwarding to improve network robustness - Support Linux aarch64 platform -- Update permission governance contracts to incorporate node role management, system configuration modification, contract lifecycle management, and other functions into the governance framework. -- Reconstruct the rights governance contract, and the calculation logic can be upgraded.
+- Update permission governance contracts to incorporate node role management, system configuration modification, contract lifecycle management, and other functions into the governance framework +- Restructured the permission governance contract so that its calculation logic can be upgraded - Optimize the performance of the DMC execution framework - Optimize network performance for RPC and P2P -- Optimize the 'Pro' version of FISCO-BCOS chain creation script, supports configuring RPC, Gateway, BcosNodeService and other services in the organization dimension -- The VM type configuration, permission management switch, and initialization account address of the node are all changed in the creation block and cannot be modified. +- Optimized the 'Pro' version FISCO-BCOS chain creation script, supporting per-organization configuration of RPC, Gateway, BcosNodeService, and other services +- The node's VM type configuration, permission management switch, and initial account address are now set in the genesis block and cannot be modified afterwards ### Repair -- Repair [#issue 2448](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2448) +- Fix [#issue 2448](https://github.com/FISCO-BCOS/FISCO-BCOS/issues/2448) ### Compatibility -3.0.0-rc4 version with 3.0.0-rc3 version data and protocol incompatible, Solidity / WBC-Liquid contract source code compatible。If you want to go from 3.0.0-rc3 version upgrade to 3.0.0-rc4 version, data migration is required。 +The 3.0.0-rc4 data and protocol are incompatible with 3.0.0-rc3, while Solidity / WBC-Liquid contract source code remains compatible. To upgrade from 3.0.0-rc3 to 3.0.0-rc4, data migration is required. | | Recommended Version| Minimum Version| Description| |----------|---------------------------------|---------------------------------|------| | Console| 3.0.0-rc4 | 3.0.0-rc4 | | | Java SDK | 3.0.0-rc4 | 3.0.0-rc4 | | | CPP SDK | 3.0.0-rc4 | 3.0.0-rc4 | | -| WeBASE | Temporarily
not supported(Expected lab-rc4 version support) | Temporarily not supported(Expected lab-rc4 version support) | | +| WeBASE | Not supported yet (expected in the lab-rc4 version) | Not supported yet (expected in the lab-rc4 version) | | | Solidity | Maximum support solidity 0.8.11.0| 0.6.10 | | | Liquid | 1.0.0-rc3 | 1.0.0-rc2 | | diff --git a/3.x/en/docs/introduction/change_log/3_0_1.md b/3.x/en/docs/introduction/change_log/3_0_1.md index 8740eab60..0c330b5cd 100644 --- a/3.x/en/docs/introduction/change_log/3_0_1.md +++ b/3.x/en/docs/introduction/change_log/3_0_1.md @@ -3,9 +3,9 @@ ```eval_rst .. admonition:: v2.x Upgrade to v3.x - - **Comprehensive upgrade** : Refer to 'Build the first blockchain network <.. /.. / quick _ start / air _ installation.html >' _ Build a new blockchain and resubmit all historical transactions to the new node. The upgraded node contains v3.0.0-rc3 new features + - **Comprehensive upgrade**: Refer to 'Building the First Blockchain Network <../../quick_start/air_installation.html>'_ to build a new chain and resubmit all historical transactions to the new nodes; the upgraded nodes include the new v3.0.0-rc3 features - - v3.0.1 is compatible with v3.0.0 and can directly replace the binary implementation upgrade + - v3.0.1 is compatible with v3.0.0; the upgrade can be done by directly replacing the binary - v3.0.1 and v3.0.0-rc*Incompatible, data migration is required to upgrade diff --git a/3.x/en/docs/introduction/change_log/3_1_0.md b/3.x/en/docs/introduction/change_log/3_1_0.md index 91ac4df62..720cd7a5e 100644 --- a/3.x/en/docs/introduction/change_log/3_1_0.md +++ b/3.x/en/docs/introduction/change_log/3_1_0.md @@ -7,13 +7,13 @@ * Network compression function * Consensus Timing Function * Contract Binary and ABI Storage Optimization -* Adapt to EVM interfaces such as delegatecall, extCodeHash, blockHash, etc.
+* Adapt to EVM interfaces such as delegatecall, extCodeHash, blockHash, etc. * BFS adds query paging function #### Change * DBHash calculation logic updates to improve verification stability -* The chain configuration item is removed from config.ini and modified to be configured in the config.genesis creation block. +* Chain configuration items are moved from config.ini into the config.genesis genesis file * BFS catalog table structure performance optimization #### Repair @@ -31,7 +31,7 @@ If the existing data in the current chain is in the following version, can the node binary be replaced to complete the upgrade? * 3.0.x: supports gray-scale upgrade by replacing the binary, if you need to use the new features of the current version, you need to use the console to upgrade the chain version to the current version after all node binaries are replaced (see below: Upgrade method) - * 3.0-rc x: The data is incompatible and cannot be upgraded. You can consider gradually migrating your business to the 3.x official version. + * 3.0-rc x: Data is not compatible and cannot be upgraded.
Consider gradually migrating your business to the official 3.x version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x * Component compatibility @@ -46,7 +46,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -59,7 +59,7 @@ Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_ ##### Replace Node Binary -Need to be**All Nodes**Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +**All nodes** need to gradually replace their binaries with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to run with the logic of the old data-compatibility version number. After all node binaries are replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) diff --git a/3.x/en/docs/introduction/change_log/3_1_1.md b/3.x/en/docs/introduction/change_log/3_1_1.md index 10b9965f5..ba109ace6 100644 --- a/3.x/en/docs/introduction/change_log/3_1_1.md +++ b/3.x/en/docs/introduction/change_log/3_1_1.md
@@ -15,11 +15,11 @@ * Historical Version Upgrade - The "data compatibility version number ([compatibility _ version] of the chain that needs to be upgraded"(#id5)) "is the following version. + The data compatibility version number ([compatibility_version](#id5)) of the chain to be upgraded is one of the following versions * 3.1.0: The data compatibility version number of this version is still 3.1.0, and the upgrade can be completed by directly replacing the binary * 3.0.x: supports gray-scale upgrade by replacing the binary. If you need to use the new features of the current version, you need to upgrade the data-compatible version number. For details, see [Document](#id5) - * 3.0-rc x: The data is incompatible and cannot be upgraded. You can consider gradually migrating your business to the 3.x official version. + * 3.0-rc x: Data is not compatible and cannot be upgraded. Consider gradually migrating your business to the official 3.x version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x * Component compatibility @@ -34,7 +34,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -47,7 +47,7 @@ Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_ ##### Replace Node Binary -Need to be**All Nodes**Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +**All nodes** need to gradually replace their binaries with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to run with the logic of the old data-compatibility version number. After all node binaries are replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) diff --git a/3.x/en/docs/introduction/change_log/3_1_2.md b/3.x/en/docs/introduction/change_log/3_1_2.md index 7f16db1da..cf20735b5 100644 --- a/3.x/en/docs/introduction/change_log/3_1_2.md +++ b/3.x/en/docs/introduction/change_log/3_1_2.md @@ -2,17 +2,17 @@ #### New -* The extraData field is added to the transaction structure to facilitate the business to identify the transaction, which is not included in the calculation of the transaction hash. +* The extraData field is added to the transaction structure so that the business can identify transactions; it is not included in the transaction hash calculation #### Compatibility * Historical Version Upgrade - The "data compatibility version number ([compatibility _ version] of the chain that needs to be upgraded"(#id5)) "is the following version. + The data compatibility version number ([compatibility_version](#id5)) of the chain to be upgraded is one of the following versions * 3.1.0: The data compatibility version number of this version is still 3.1.0, and the upgrade can be completed by directly replacing the binary * 3.0.x: supports gray-scale upgrade by replacing the binary. If you need to use the new features of the current version, you need to upgrade the data-compatible version number.
For details, see [Document](#id5) - * 3.0-rc x: The data is incompatible and cannot be upgraded. You can consider gradually migrating your business to the 3.x official version. + * 3.0-rc x: Data is not compatible and cannot be upgraded. Consider gradually migrating your business to the official 3.x version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x * Component compatibility @@ -27,7 +27,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -40,7 +40,7 @@ Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_ ##### Replace Node Binary -Need to be**All Nodes**Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +**All nodes** need to gradually replace their binaries with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to run with the logic of the old data-compatibility version number. After all node binaries are replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version)
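The grayscale upgrade flow described in the "Replace Node Binary" sections above (stop, replace, and restart nodes one by one, then raise the data compatibility version from the console) can be sketched as a shell loop. This is a hedged illustration only: the node list, directory layout, and `stop.sh`/`start.sh` script names are assumed placeholders, not from the source; adapt them to your actual deployment.

```shell
# Hedged sketch of the grayscale node-binary upgrade described above.
# NODES, NEW_BIN, and the per-node stop.sh/start.sh layout are assumptions.
NODES="node0 node1 node2 node3"
NEW_BIN="./fisco-bcos"   # binary of the target version

for n in $NODES; do
  echo "upgrading $n"
  # bash "$n/stop.sh"              # stop one node at a time
  # cp "$NEW_BIN" "$n/fisco-bcos"  # swap in the new binary
  # bash "$n/start.sh"             # restart; meanwhile the chain keeps
                                   # running on the old data version
done

# Only after ALL nodes run the new binary, raise the data version from the
# console (command shown in the document):
#   [group0]: /apps> setSystemConfigByKey compatibility_version 3.2.0
echo "all nodes replaced"
```

Replacing one node at a time keeps quorum available, which is why the documents stress grayscale replacement before the final console step.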
diff --git a/3.x/en/docs/introduction/change_log/3_2_0.md b/3.x/en/docs/introduction/change_log/3_2_0.md index 4ce821f0e..d32e05ed6 100644 --- a/3.x/en/docs/introduction/change_log/3_2_0.md +++ b/3.x/en/docs/introduction/change_log/3_2_0.md @@ -26,11 +26,11 @@ * Historical Version Upgrade - The "data compatibility version number ([compatibility _ version] of the chain that needs to be upgraded"(#id5)) "is the following version. + The data compatibility version number ([compatibility_version](#id5)) of the chain to be upgraded is one of the following versions * 3.2.0: The data is fully compatible with the current version, and the upgrade can be completed by directly replacing the binary * 3.1.x / 3.0.x: supports gray-scale upgrade by replacing the binary. If you need to use the new features of the current version, you need to upgrade the data-compatible version number. See [Document](#id5) - * 3.0-rc x: The data is incompatible and cannot be upgraded. You can consider gradually migrating your business to the 3.x official version. + * 3.0-rc x: Data is not compatible and cannot be upgraded. Consider gradually migrating your business to the official 3.x version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x * Component compatibility @@ -42,12 +42,12 @@ | CPP SDK | 3.2.0 | 3.0.0 | | | Solidity | 0.8.11 | Minimum 0.4.25, maximum 0.8.11| The compiler (console) needs to be downloaded according to the contract version| | WBC-Liquid | 1.0.0-rc3 | 1.0.0-rc3 | | -| WeBASE | - | - | A compatible bug exists. We recommend that you upgrade the node binary to 3.2.1.+ | -| WeIdentity | - | - | A compatible bug exists. We recommend that you upgrade the node binary to 3.2.1.+ | +| WeBASE | - | - | A compatibility bug exists. We recommend that you upgrade the node binary to 3.2.1+ | +| WeIdentity | - | - | A compatible bug exists.
We recommend that you upgrade the node binary to 3.2.1+ | #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -60,11 +60,11 @@ Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_ ##### Replace Node Binary -Need to be**All Nodes**Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +**All nodes** need to gradually replace their binaries with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to run with the logic of the old data-compatibility version number. After all node binaries are replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) -Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey)Set the data compatibility version number. For example, the current version is 3.2.0.。 +Use the [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) to set the data compatibility version number. For example, the current version is 3.2.0. ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.2.0 diff --git a/3.x/en/docs/introduction/change_log/3_2_1.md b/3.x/en/docs/introduction/change_log/3_2_1.md index 0976d1942..7f5bc49b7 100644 --- a/3.x/en/docs/introduction/change_log/3_2_1.md +++ b/3.x/en/docs/introduction/change_log/3_2_1.md @@ -11,11 +11,11 @@ * Historical Version Upgrade - The "data compatibility version number ([compatibility _ version] of the chain that needs to be upgraded"(#id5)) "is the following version. + The data compatibility version number ([compatibility_version](#id5)) of the chain to be upgraded is one of the following versions * 3.2.1: The data is fully compatible with the current version, and the upgrade can be completed by directly replacing the binary * 3.1.x / 3.0.x: supports gray-scale upgrade by replacing the binary. If you need to use the new features of the current version, you need to upgrade the data-compatible version number. See [Document](#id5) - * 3.0-rc x: The data is incompatible and cannot be upgraded.
Consider gradually migrating your business to the 3.x official version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x * Component compatibility @@ -32,7 +32,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x versions to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -45,11 +45,11 @@ Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_ ##### Replace Node Binary -Need to be**All Nodes**Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +The binaries of **all nodes** need to be gradually replaced with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to execute with the logic of the old data compatibility version number. After all node binaries have been replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) -Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey)Set the data compatibility version number.
For example, the current version is 3.2.0.。 +Use the [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) to set the data compatibility version number. For example, if the current version is 3.2.0: ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.2.0 diff --git a/3.x/en/docs/introduction/change_log/3_2_2.md b/3.x/en/docs/introduction/change_log/3_2_2.md index 75bee7862..0605cc8ef 100644 --- a/3.x/en/docs/introduction/change_log/3_2_2.md +++ b/3.x/en/docs/introduction/change_log/3_2_2.md @@ -27,7 +27,7 @@ Operation: first complete the upgrade of all node executable programs, and then refer to the [document](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/introduction/change_log/3_2_1.html#id5)Send transaction upgrade chain data version to v3.2.2 - Note: Be sure to back up all the ledger data of the original node. If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade. + Note: Be sure to back up all the ledger data of the original node.
If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade Version supported for upgrade: v3.0.0+ * Component compatibility @@ -46,7 +46,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x versions to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -59,11 +59,11 @@ Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_ ##### Replace Node Binary -Need to be**All Nodes**Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +The binaries of **all nodes** need to be gradually replaced with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to execute with the logic of the old data compatibility version number. After all node binaries have been replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) -Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey)Set the data compatibility version number.
For example, the current version is 3.2.0.。 +Use the [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) to set the data compatibility version number. For example, if the current version is 3.2.0: ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.2.0 diff --git a/3.x/en/docs/introduction/change_log/3_2_3.md b/3.x/en/docs/introduction/change_log/3_2_3.md index 9d1697bad..34a0e39a9 100644 --- a/3.x/en/docs/introduction/change_log/3_2_3.md +++ b/3.x/en/docs/introduction/change_log/3_2_3.md @@ -46,7 +46,7 @@ Operation: first complete the upgrade of all node executable programs, and then refer to the [document](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/introduction/change_log/3_2_1.html#id5) Send transaction upgrade chain data version to v3.2.3 - Note: Be sure to back up all the ledger data of the original node. If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade. + Note: Be sure to back up all the ledger data of the original node.
If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade Version supported for upgrade: v3.0.0+ * Component compatibility @@ -65,7 +65,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x versions to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -80,12 +80,12 @@ Query, such as the current version returned is 3.0.0 ##### Replace Node Binary Need to be**All Nodes** -Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +Gradually replace the binary with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to execute with the logic of the old data compatibility version number. After all node binaries have been replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) -Set the data compatibility version number. For example, the current version is 3.2.0.。 +Set the data compatibility version number.
For example, if the current version is 3.2.0: ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.2.0 diff --git a/3.x/en/docs/introduction/change_log/3_2_4.md b/3.x/en/docs/introduction/change_log/3_2_4.md index 87cd74731..4b42c963f 100644 --- a/3.x/en/docs/introduction/change_log/3_2_4.md +++ b/3.x/en/docs/introduction/change_log/3_2_4.md @@ -39,7 +39,7 @@ Operation: first complete the upgrade of all node executable programs, and then refer to the [document](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/introduction/change_log/3_2_1.html#id5) Send transaction upgrade chain data version to v3.2.0 - Note: Be sure to back up all the ledger data of the original node. If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade. + Note: Be sure to back up all the ledger data of the original node. If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade Version supported for upgrade: v3.0.0+ * Component compatibility @@ -58,7 +58,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x versions to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -73,12 +73,12 @@ Query, such as the current version returned is 3.0.0 ##### Replace Node Binary Need to be**All Nodes** -Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +Gradually 
replace the binary with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to execute with the logic of the old data compatibility version number. After all node binaries have been replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) -Set the data compatibility version number. For example, the current version is 3.2.0.。 +Set the data compatibility version number. For example, if the current version is 3.2.0: ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.2.0 diff --git a/3.x/en/docs/introduction/change_log/3_2_5.md b/3.x/en/docs/introduction/change_log/3_2_5.md index f691d2919..d3f37ceb1 100644 --- a/3.x/en/docs/introduction/change_log/3_2_5.md +++ b/3.x/en/docs/introduction/change_log/3_2_5.md @@ -24,7 +24,7 @@ Operation: first complete the upgrade of all node executable programs, and then refer to the [document](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/introduction/change_log/3_2_1.html#id5) Send transaction upgrade chain data version to v3.2.0 - Note: Be sure to back up all the ledger data of the original node. If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade. + Note: Be sure to back up all the ledger data of the original node.
If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade Version supported for upgrade: v3.0.0+ * Component compatibility @@ -43,7 +43,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x versions to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -58,12 +58,12 @@ Query, such as the current version returned is 3.0.0 ##### Replace Node Binary Need to be**All Nodes** -Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +Gradually replace the binary with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to execute with the logic of the old data compatibility version number. After all node binaries have been replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) -Set the data compatibility version number. For example, the current version is 3.2.0.。 +Set the data compatibility version number.
For example, if the current version is 3.2.0: ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.2.0 diff --git a/3.x/en/docs/introduction/change_log/3_2_6.md b/3.x/en/docs/introduction/change_log/3_2_6.md index eccc7ab15..3df165d7a 100644 --- a/3.x/en/docs/introduction/change_log/3_2_6.md +++ b/3.x/en/docs/introduction/change_log/3_2_6.md @@ -23,7 +23,7 @@ Operation: first complete the upgrade of all node executable programs, and then refer to the [document](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/introduction/change_log/3_2_1.html#id5) Send transaction upgrade chain data version to v3.2.0 - Note: Be sure to back up all the ledger data of the original node. If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade. + Note: Be sure to back up all the ledger data of the original node. If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade Version supported for upgrade: v3.0.0+ * Component compatibility @@ -42,7 +42,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x versions to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -57,12 +57,12 @@ Query, such as the current version returned is 3.0.0 ##### Replace Node Binary Need to be**All Nodes** -Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +Gradually 
replace the binary with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to execute with the logic of the old data compatibility version number. After all node binaries have been replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) -Set the data compatibility version number. For example, the current version is 3.2.0.。 +Set the data compatibility version number. For example, if the current version is 3.2.0: ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.2.0 diff --git a/3.x/en/docs/introduction/change_log/3_2_7.md b/3.x/en/docs/introduction/change_log/3_2_7.md index cfbdc81d1..503b35bfd 100644 --- a/3.x/en/docs/introduction/change_log/3_2_7.md +++ b/3.x/en/docs/introduction/change_log/3_2_7.md @@ -29,7 +29,7 @@ Operation: first complete the upgrade of all node executable programs, and then refer to the [document](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/introduction/change_log/3_2_1.html#id5) Send transaction upgrade chain data version to v3.2.0 - Note: Be sure to back up all the ledger data of the original node. If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade. + Note: Be sure to back up all the ledger data of the original node.
If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade Version supported for upgrade: v3.0.0+ * Component compatibility @@ -48,7 +48,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x versions to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -63,12 +63,12 @@ Query, such as the current version returned is 3.0.0 ##### Replace Node Binary Need to be**All Nodes** -Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +Gradually replace the binary with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to execute with the logic of the old data compatibility version number. After all node binaries have been replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) -Set the data compatibility version number. For example, the current version is 3.2.0.。 +Set the data compatibility version number.
For example, if the current version is 3.2.0: ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.2.0 diff --git a/3.x/en/docs/introduction/change_log/3_3_0.md b/3.x/en/docs/introduction/change_log/3_3_0.md index 033540e66..fccc2a109 100644 --- a/3.x/en/docs/introduction/change_log/3_3_0.md +++ b/3.x/en/docs/introduction/change_log/3_3_0.md @@ -4,15 +4,15 @@ * [intra-block shard](https://fisco-bcos-doc.readthedocs.io/zh_CN/release-3.3.0/docs/design/parallel/sharding.html)Grouping contracts, scheduling transactions from different groups to different executors, on-chip DAG scheduling, and inter-chip DMC scheduling * [Permissions dynamically configurable](): Can dynamically turn off / on the permission function at runtime -* [SDK supports hardware encryption machine](https://fisco-bcos-doc.readthedocs.io/zh_CN/release-3.3.0/docs/design/hsm.html)The SDK supports running cryptographic algorithms through the encryption machine. +* [SDK supports hardware encryption machine](https://fisco-bcos-doc.readthedocs.io/zh_CN/release-3.3.0/docs/design/hsm.html)The SDK supports running cryptographic algorithms through the encryption machine * [Gateway speed limit](../../tutorial/air/config.md): via configuration file(config.ini) Control the size of the incoming flow * [Merkle Tree Cache](https://github.com/FISCO-BCOS/FISCO-BCOS/pull/3430): Improve the performance of fetching transaction proofs -* The gateway module supports multiple CAs: different chains can share the same gateway module to forward messages. You can configure multiple CAs in the directory.。 +* The gateway module supports multiple CAs: different chains can share the same gateway module to forward messages. You can configure multiple CAs in the directory. #### Modify * Optimize various details to improve node performance -* The transaction interface of rpc returns the input field: you can control whether it needs to be returned in the configuration file.
+* The RPC transaction interface returns the input field: whether it is returned can be controlled via the configuration file #### Repair @@ -20,17 +20,17 @@ * Fix the problem of network disconnection caused by abnormal parsing of 'P2P' message * Fixes an issue where the 'StateStorage' read operation commits while causing the iterator to fail * Fixed the problem that the node private key file 'node.pem' was not generated during the 'Pro' version expansion operation and the expansion failed -* Fix the problem that the receipt hash is occasionally incorrect when the transaction receipt is returned. +* Fix the problem that the receipt hash is occasionally incorrect when the transaction receipt is returned #### Compatibility * Historical Version Upgrade - The "data compatibility version number ([compatibility _ version] of the chain that needs to be upgraded"(#id5)) "is the following version. + The data compatibility version number ([compatibility_version](#id5)) of the chain to be upgraded is one of the following versions: * 3.3.x: The data is fully compatible with the current version, and the upgrade can be completed by directly replacing the binary * 3.2.x, 3.1.x, 3.0.x: supports gray-scale upgrade by replacing the binary. If you need to use the new features of the current version, you need to upgrade the data-compatible version number. See [Document](#id5) - * 3.0-rc x: The data is incompatible and cannot be upgraded. You can consider gradually migrating your business to the 3.x official version. + * 3.0-rc x: Data is not compatible and cannot be upgraded.
Consider gradually migrating your business to the 3.x official version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x * Component compatibility @@ -47,7 +47,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x versions to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -60,7 +60,7 @@ Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_ ##### Replace Node Binary -Need to be**All Nodes**Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +The binaries of **all nodes** need to be gradually replaced with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to execute with the logic of the old data compatibility version number. After all node binaries have been replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) diff --git a/3.x/en/docs/introduction/change_log/3_4_0.md b/3.x/en/docs/introduction/change_log/3_4_0.md index a77cbf9af..4a3c1f5c7 100644 --- a/3.x/en/docs/introduction/change_log/3_4_0.md +++ b/3.x/en/docs/introduction/change_log/3_4_0.md
@@ -24,11 +24,11 @@ * Historical Version Upgrade - The "data compatibility version number ([compatibility _ version] of the chain that needs to be upgraded"(#id5)) "is the following version. + The data compatibility version number ([compatibility_version](#id5)) of the chain to be upgraded is one of the following versions: * 3.4.x: The data is fully compatible with the current version, and the upgrade can be completed by directly replacing the binary * 3.3.x, 3.2.x, 3.1.x, 3.0.x: supports gray-scale upgrade by replacing the binary. If you need to use the new features of the current version, you need to upgrade the data compatible version number. See [Document](#id5) - * 3.0-rc x: The data is incompatible and cannot be upgraded. You can consider gradually migrating your business to the 3.x official version. + * 3.0-rc x: Data is not compatible and cannot be upgraded. Consider gradually migrating your business to the 3.x official version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x **Component compatibility** @@ -45,7 +45,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x versions to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -58,11 +58,11 @@ Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_ ##### Replace Node Binary -Need to be**All Nodes**Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console 
to modify the data compatibility version number to the current version。 +The binaries of **all nodes** need to be gradually replaced with the current version. To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to execute with the logic of the old data compatibility version number. After all node binaries have been replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) -Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey)Set the data compatibility version number. For example, the current version is 3.4.0.。 +Use the [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) to set the data compatibility version number.
For example, if the current version is 3.4.0: ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.4.0 diff --git a/3.x/en/docs/introduction/change_log/3_5_0.md b/3.x/en/docs/introduction/change_log/3_5_0.md index 9d1f80473..89323da2a 100644 --- a/3.x/en/docs/introduction/change_log/3_5_0.md +++ b/3.x/en/docs/introduction/change_log/3_5_0.md @@ -24,18 +24,18 @@ * [Fix an issue with gateway sending corrupted message packets](https://github.com/FISCO-BCOS/FISCO-BCOS/pull/3825) * [Fix processing of abi fields during archive tool re-import](https://github.com/FISCO-BCOS/FISCO-BCOS/pull/3820) * [Fix handling of nonce field during archive tool re-import](https://github.com/FISCO-BCOS/FISCO-BCOS/pull/3811) -* [build _ chain.sh build chain script-l parameter supports resolving domain names](https://github.com/FISCO-BCOS/FISCO-BCOS/pull/3931) +* [The -l parameter of the build_chain.sh build script supports domain name resolution](https://github.com/FISCO-BCOS/FISCO-BCOS/pull/3931) * [Fix boost log deadlock caused by node receiving USR1 / USR2 signal](https://github.com/FISCO-BCOS/FISCO-BCOS/pull/3947) * [Fix the problem of blocking rpc requests when taking Merkle certificates](https://github.com/FISCO-BCOS/FISCO-BCOS/pull/3955) * Historical Version Upgrade - The "data compatibility version number ([compatibility _ version] of the chain that needs to be upgraded"(#id5)) "is the following version. + The data compatibility version number ([compatibility_version](#id5)) of the chain to be upgraded is one of the following versions: * 3.4.x, 3.5.x: The data is fully compatible with the current version, and the upgrade can be completed by directly replacing the binary * 3.3.x, 3.2.x, 3.1.x, 3.0.x: supports gray-scale upgrade by replacing the binary. If you need to use the new features of the current version, you need to upgrade the data compatible version number. See [Document](#id5) - * 3.0-rc x: The data is incompatible and cannot be upgraded.
You can consider gradually migrating your business to the 3.x official version. + * 3.0-rc x: Data is not compatible and cannot be upgraded. Consider gradually migrating your business to the 3.x official version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x @@ -43,7 +43,7 @@ Effect: The opening of the experimental function is controlled by the feature switch - Operation: After the node executable is upgraded, use the console command 'setSystemConfigByKey < feature name > 1' to enable the corresponding experimental function. For more information, see Upgrade Methods in the documentation. + Operation: After upgrading the node executable, run the console command 'setSystemConfigByKey <feature name> 1' to enable the corresponding experimental function; see the Upgrade Method section of this document for details Note: * feature operation is irreversible and cannot be closed after opening @@ -70,7 +70,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading 3.x versions to this version; upgrading from 3.0-rc or 2.x is not supported. ##### Query data compatibility version number (compatibility _ version) @@ -85,12 +85,12 @@ Query, such as the current version returned is 3.0.0 ##### Replace Node Binary Need to be**All Nodes** -Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +Gradually replace the binary with the current version. To avoid affecting the business, the replacement 
process can be done in grayscale, replacing and restarting nodes one by one. During the replacement, the chain continues to execute with the logic of the old data compatibility version number. After all node binaries have been replaced and the nodes restarted, use the console to change the data compatibility version number to the current version. ##### Set the data compatibility version number (compatibility _ version) Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) -Set the data compatibility version number. For example, the current version is 3.5.0.。 +Set the data compatibility version number. For example, if the current version is 3.5.0: ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.5.0 @@ -120,7 +120,7 @@ Run the getSystemConfigByKey command in the console to view the feature status o 0 ``` -Use the setSystemConfigByKey command to enable the rpbft feature and enable rpbft consensus. +Use the setSystemConfigByKey command to enable the rpbft feature and turn on rpbft consensus: ``` [group0]: /apps> setSystemConfigByKey feature_rpbft 1 @@ -130,7 +130,7 @@ Use the setSystemConfigByKey command to enable the rpbft feature and enable rpbf } ``` -Run the getSystemConfigByKey command in the console to check whether the feature of the current rpbft is enabled. A value of 1 indicates that the feature is enabled. +Run the getSystemConfigByKey command in the console to check whether the rpbft feature is currently enabled.
A value of 1 indicates that the feature is enabled ``` [group0]: /apps> getSystemConfigByKey feature_rpbft diff --git a/3.x/en/docs/introduction/change_log/3_6_0.md b/3.x/en/docs/introduction/change_log/3_6_0.md index b92d887ab..8990df579 100644 --- a/3.x/en/docs/introduction/change_log/3_6_0.md +++ b/3.x/en/docs/introduction/change_log/3_6_0.md @@ -18,12 +18,12 @@ * Historical Version Upgrade - The "data compatibility version number ([compatibility _ version] of the chain that needs to be upgraded"(#id5)) "is the following version. + The data compatibility version number ([compatibility_version](#id5)) of the chain that needs to be upgraded is one of the following versions * 3.4.x, 3.5.x, 3.6.x: The data is fully compatible with the current version, and the upgrade can be completed by directly replacing the binary * 3.3.x, 3.2.x, 3.1.x, 3.0.x: supports gray-scale upgrade by replacing the binary. If you need to use the new features of the current version, you need to upgrade the data compatible version number. See [Document](#id5) - * 3.0-rc x: The data is incompatible and cannot be upgraded. You can consider gradually migrating your business to the 3.x official version. + * 3.0-rc x: Data is not compatible and cannot be upgraded. Consider gradually migrating your business to the 3.x official version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x @@ -31,7 +31,7 @@ Effect: The opening of the experimental function is controlled by the feature switch - Operation: After the node executable is upgraded, use the console command 'setSystemConfigByKey < feature name > 1' to enable the corresponding experimental function. For more information, see Upgrade Methods in the documentation.
+ Operation: After upgrading the node executable, run the console command 'setSystemConfigByKey <feature name> 1' to enable the corresponding experimental function. For specific steps, see the Upgrade Method section of the documentation. Note: * feature operation is irreversible and cannot be closed after opening @@ -61,7 +61,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading version 3.x to this version, and does not support upgrading version 3.0-rc or 2.x。 ##### Query data compatibility version number (compatibility _ version) @@ -76,7 +76,7 @@ Query, such as the current version returned is 3.0.0 ##### Replace Node Binary Need to be**All Nodes** -Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 ##### Set the data compatibility version number (compatibility _ version) @@ -104,7 +104,7 @@ The current chain has been upgraded, so far,**The chain continues to run with ne ##### Enable balance asset management through the feature switch -Run the getSystemConfigByKey command in the console to
view the status of the current asset management feature. +Run the getSystemConfigByKey command in the console to view the status of the current asset management feature ``` [group0]: /apps> getSystemConfigByKey feature_balance @@ -121,7 +121,7 @@ Use the setSystemConfigByKey command to enable the feature of asset management a } ``` -Run the getSystemConfigByKey command in the console to check whether the feature of the current asset management is enabled. A value of 1 indicates that the feature is enabled. +Run the getSystemConfigByKey command in the console to check whether the feature of the current asset management is enabled. A value of 1 indicates that the feature is enabled ``` [group0]: /apps> getSystemConfigByKey feature_balance diff --git a/3.x/en/docs/introduction/change_log/3_6_1.md b/3.x/en/docs/introduction/change_log/3_6_1.md index 89466abbf..3d56cc5a4 100644 --- a/3.x/en/docs/introduction/change_log/3_6_1.md +++ b/3.x/en/docs/introduction/change_log/3_6_1.md @@ -10,12 +10,12 @@ * Historical Version Upgrade - The "data compatibility version number ([compatibility _ version] of the chain that needs to be upgraded"(#id5)) "is the following version. + The data compatibility version number ([compatibility_version](#id5)) of the chain that needs to be upgraded is one of the following versions * 3.4.x, 3.5.x, 3.6.x: The data is fully compatible with the current version, and the upgrade can be completed by directly replacing the binary * 3.3.x, 3.2.x, 3.1.x, 3.0.x: supports gray-scale upgrade by replacing the binary. If you need to use the new features of the current version, you need to upgrade the data compatible version number. See [Document](#id5) - * 3.0-rc x: The data is incompatible and cannot be upgraded. You can consider gradually migrating your business to the 3.x official version. + * 3.0-rc x: Data is not compatible and cannot be upgraded.
Consider gradually migrating your business to the 3.x official version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x @@ -23,7 +23,7 @@ Effect: The opening of the experimental function is controlled by the feature switch - Operation: After the node executable is upgraded, use the console command 'setSystemConfigByKey < feature name > 1' to enable the corresponding experimental function. For more information, see Upgrade Methods in the documentation. + Operation: After upgrading the node executable, run the console command 'setSystemConfigByKey <feature name> 1' to enable the corresponding experimental function. For specific steps, see the Upgrade Method section of the documentation. Note: * feature operation is irreversible and cannot be closed after opening @@ -53,7 +53,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading version 3.x to this version, and does not support upgrading version 3.0-rc or 2.x。 ##### Query data compatibility version number (compatibility _ version) @@ -68,7 +68,7 @@ Query, such as the current version returned is 3.0.0 ##### Replace Node Binary Need to be**All Nodes** -Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to
execute with the logic of the old data-compatible version number。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 ##### Set the data compatibility version number (compatibility _ version) diff --git a/3.x/en/docs/introduction/change_log/3_7_0.md b/3.x/en/docs/introduction/change_log/3_7_0.md index 155d385e2..4b9ce375a 100644 --- a/3.x/en/docs/introduction/change_log/3_7_0.md +++ b/3.x/en/docs/introduction/change_log/3_7_0.md @@ -15,12 +15,12 @@ * Historical Version Upgrade - The "data compatibility version number ([compatibility _ version] of the chain that needs to be upgraded"(#id5)) "is the following version. + The data compatibility version number ([compatibility_version](#id5)) of the chain that needs to be upgraded is one of the following versions * 3.4.x, 3.5.x, 3.6.x, 3.7.x: The data is fully compatible with the current version, and the upgrade can be completed by directly replacing the binary * 3.3.x, 3.2.x, 3.1.x, 3.0.x: supports gray-scale upgrade by replacing the binary. If you need to use the new features of the current version, you need to upgrade the data compatible version number. See [Document](#id5) - * 3.0-rc x: The data is incompatible and cannot be upgraded. You can consider gradually migrating your business to the 3.x official version. + * 3.0-rc x: Data is not compatible and cannot be upgraded. Consider gradually migrating your business to the 3.x official version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x @@ -28,7 +28,7 @@ Effect: The opening of the experimental function is controlled by the feature switch - Operation: After the node executable is upgraded, use the console command 'setSystemConfigByKey < feature name > 1' to enable the corresponding experimental function. For more information, see Upgrade Methods in the documentation.
+ Operation: After upgrading the node executable, run the console command 'setSystemConfigByKey <feature name> 1' to enable the corresponding experimental function. For specific steps, see the Upgrade Method section of the documentation. Note: * feature operation is irreversible and cannot be closed after opening @@ -58,7 +58,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading version 3.x to this version, and does not support upgrading version 3.0-rc or 2.x。 ##### Query data compatibility version number (compatibility _ version) @@ -73,12 +73,12 @@ Query, such as the currently returned version is 3.6.0 ##### Replace Node Binary Need to be**All Nodes** -Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 ##### Set the data compatibility version number (compatibility _ version) Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) -Set the data compatibility version number.
For example, the current version is 3.7.0.。 +Set the data compatibility version number. For example, the current version is 3.7.0。 ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.7.0 diff --git a/3.x/en/docs/introduction/change_log/3_7_1.md b/3.x/en/docs/introduction/change_log/3_7_1.md index 123df71b7..d887aaa6c 100644 --- a/3.x/en/docs/introduction/change_log/3_7_1.md +++ b/3.x/en/docs/introduction/change_log/3_7_1.md @@ -6,18 +6,18 @@ #### Modify -* [By modifying boost-Asio version, fix the problem that timer _ remove will core](https://github.com/FISCO-BCOS/FISCO-BCOS/pull/4336) +* [Fix the core dump in timer_remove by updating the boost-asio version](https://github.com/FISCO-BCOS/FISCO-BCOS/pull/4336) * [Modify network disconnect log to INFO level](https://github.com/FISCO-BCOS/FISCO-BCOS/pull/4351) * Historical Version Upgrade - The "data compatibility version number ([compatibility _ version] of the chain that needs to be upgraded"(#id5)) "is the following version. + The data compatibility version number ([compatibility_version](#id5)) of the chain that needs to be upgraded is one of the following versions * 3.4.x, 3.5.x, 3.6.x, 3.7.x: The data is fully compatible with the current version, and the upgrade can be completed by directly replacing the binary * 3.3.x, 3.2.x, 3.1.x, 3.0.x: supports gray-scale upgrade by replacing the binary. If you need to use the new features of the current version, you need to upgrade the data compatible version number. See [Document](#id5) - * 3.0-rc x: The data is incompatible and cannot be upgraded. You can consider gradually migrating your business to the 3.x official version. + * 3.0-rc x: Data is not compatible and cannot be upgraded.
Consider gradually migrating your business to the 3.x official version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x @@ -25,7 +25,7 @@ Effect: The opening of the experimental function is controlled by the feature switch - Operation: After the node executable is upgraded, use the console command 'setSystemConfigByKey < feature name > 1' to enable the corresponding experimental function. For more information, see Upgrade Methods in the documentation. + Operation: After upgrading the node executable, run the console command 'setSystemConfigByKey <feature name> 1' to enable the corresponding experimental function. For specific steps, see the Upgrade Method section of the documentation. Note: * feature operation is irreversible and cannot be closed after opening @@ -55,7 +55,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading version 3.x to this version, and does not support upgrading version 3.0-rc or 2.x。 ##### Query data compatibility version number (compatibility _ version) @@ -70,12 +70,12 @@ Query, such as the currently returned version is 3.6.0 ##### Replace Node Binary Need to be**All Nodes** -Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to
execute with the logic of the old data-compatible version number。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 ##### Set the data compatibility version number (compatibility _ version) Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) -Set the data compatibility version number. For example, the current version is 3.7.0.。 +Set the data compatibility version number. For example, the current version is 3.7.0。 ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.7.0 diff --git a/3.x/en/docs/introduction/change_log/3_8_0.md b/3.x/en/docs/introduction/change_log/3_8_0.md index 5b284d9b2..dba2c89d9 100644 --- a/3.x/en/docs/introduction/change_log/3_8_0.md +++ b/3.x/en/docs/introduction/change_log/3_8_0.md @@ -13,11 +13,11 @@ * Historical Version Upgrade - The "data compatibility version number ([compatibility _ version] of the chain that needs to be upgraded"(#id5)) "is the following version. + The data compatibility version number ([compatibility_version](#id5)) of the chain that needs to be upgraded is one of the following versions * 3.4.x, 3.5.x, 3.6.x, 3.7.x: The data is fully compatible with the current version, and the upgrade can be completed by directly replacing the binary * 3.3.x, 3.2.x, 3.1.x, 3.0.x: supports gray-scale upgrade by replacing the binary. If you need to use the new features of the current version, you need to upgrade the data compatible version number. See [Document](#id5) - * 3.0-rc x: The data is incompatible and cannot be upgraded. You can consider gradually migrating your business to the 3.x official version. + * 3.0-rc x: Data is not compatible and cannot be upgraded.
Consider gradually migrating your business to the 3.x official version * 2.x: data is not compatible, 2.x version is still maintained, you can consider upgrading to the latest version of 2.x @@ -25,7 +25,7 @@ Effect: The opening of the experimental function is controlled by the feature switch - Operation: After the node executable is upgraded, use the console command 'setSystemConfigByKey < feature name > 1' to enable the corresponding experimental function. For more information, see Upgrade Methods in the documentation. + Operation: After upgrading the node executable, run the console command 'setSystemConfigByKey <feature name> 1' to enable the corresponding experimental function. For specific steps, see the Upgrade Method section of the documentation. Note: * feature operation is irreversible and cannot be closed after opening @@ -55,7 +55,7 @@ #### Upgrade Method -This operation only supports upgrading version 3.x to this version, not 3.0-Upgrade of rc or 2.x。 +This operation only supports upgrading version 3.x to this version, and does not support upgrading version 3.0-rc or 2.x。 ##### Query data compatibility version number (compatibility _ version) @@ -70,12 +70,12 @@ Query, such as the currently returned version is 3.6.0 ##### Replace Node Binary Need to be**All Nodes** -Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to
execute with the logic of the old data-compatible version number。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 ##### Set the data compatibility version number (compatibility _ version) Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_and_maintenance/console/console_commands.html#setsystemconfigbykey) -Set the data compatibility version number. For example, the current version is 3.7.0.。 +Set the data compatibility version number. For example, the current version is 3.8.0。 ``` [group0]: /apps> setSystemConfigByKey compatibility_version 3.8.0 diff --git a/3.x/en/docs/introduction/change_log/feature_bugfix_list.md b/3.x/en/docs/introduction/change_log/feature_bugfix_list.md index 4e8706dd4..7fa3cf1d4 100644 --- a/3.x/en/docs/introduction/change_log/feature_bugfix_list.md +++ b/3.x/en/docs/introduction/change_log/feature_bugfix_list.md @@ -18,19 +18,19 @@ This document documents FISCO BCOS 3.0+ List of feature feature switches and bug | | bugfix Name| Default State| Description| |----------------------------|-----------------------------------------------------|------|------------------------| -| Fix the bug that the roller contract does not roll back when rolling back in serial mode.| bugfix_revert | Open: 1| 3.2.3 and 3.5.0 are turned on by default| +| Fix the bug that the contract does not roll back on revert in serial mode| bugfix_revert | Open: 1| 3.2.3 and 3.5.0 are turned on by default| | Fix the problem of incorrect calculation of stateStorage _ hash| bugfix_statestorage_hash | Open: 1| 3.2.4, 3.5.0, 3.6.0 On by Default| | Adapt the call behavior of Ethereum| bugfix_evm_create2_delegatecall_staticcall_codecopy | Open: 1| 3.2.4 and 3.6.0 are turned on by default| | Fix an issue with the order of thrown events| bugfix_event_log_order | Open: 1| 3.2.7 and 3.6.0 are turned on by default| -| Fix the
problem that call does not return an address.| bugfix_call_noaddr_return | Open: 1| 3.2.7 and 3.6.0 are turned on by default| +| Fix the problem that call does not return an address| bugfix_call_noaddr_return | Open: 1| 3.2.7 and 3.6.0 are turned on by default| | Fix the problem that pre-compiled contract hash is different from Ethereum| bugfix_precompiled_codehash | Open: 1| 3.2.7 and 3.6.0 are turned on by default| -| Fix the bug that the roller contract does not return when rolling back in dmc mode.| bugfix_dmc_revert | Open: 1| 3.2.7 and 3.6.0 are turned on by default| -| Fix the compatibility question of inconsistent keyPage hash.| bugfix_keypage_system_entry_hash | Open: 1| 3.6.1 On by default| +| Fix the bug that the contract does not roll back on revert in DMC mode| bugfix_dmc_revert | Open: 1| 3.2.7 and 3.6.0 are turned on by default| +| Fix the compatibility issue of inconsistent keyPage hash| bugfix_keypage_system_entry_hash | Open: 1| 3.6.1 On by default| | InternalCreate Reuse Existing Deployment Contract Logic| bugfix_internal_create_redundant_storage | Open: 1| 3.6.1 On by default| | Fix the problem of restricted asset transfer after opening contract deployment permission| bugfix_internal_create_permission_denied | Open: 1| 3.7.0 On by Default| -| Fix the problem of the shard contract calling contract within the block.| bugfix_sharding_call_in_child_executive | Open: 1| 3.7.0 On by Default| -| Fix the problem of deploying empty abi and deploying the same contract without abi.| bugfix_empty_abi_reset | Open: 1| 3.7.0 On by Default| -| Fix the problem that the contract cannot be called through the contract address of eip55 type.| bugfix_eip55_addr | Open: 1| 3.7.0 On by Default| +| Fix the problem of a sharded contract calling another contract within the block| bugfix_sharding_call_in_child_executive | Open: 1| 3.7.0 On by Default| +| Fix the problem of deploying an empty abi and redeploying the same contract without an abi| bugfix_empty_abi_reset |
Open: 1| 3.7.0 On by Default| +| Fix the problem that the contract cannot be called through the contract address of eip55 type| bugfix_eip55_addr | Open: 1| 3.7.0 On by Default| | Solve the return value problem to the EOA account getCode| bugfix_eoa_as_contract | Open: 1| 3.8.0 On by Default| | Solve the problem that gas consumption is different from serial mode when deploying contracts in DMC mode| bugfix_dmc_deploy_gas_used | Open: 1| 3.8.0 On by Default| | Solve the problem that gas is not deducted when EVM executes status _ code other than 0 and revert| bugfix_evm_exception_gas_used | Open: 1| 3.8.0 On by Default| diff --git a/3.x/en/docs/introduction/change_log/index.rst b/3.x/en/docs/introduction/change_log/index.rst new file mode 100644 index 000000000..487a59c60 --- /dev/null +++ b/3.x/en/docs/introduction/change_log/index.rst @@ -0,0 +1,330 @@ +############################################################## +4. Release Notes +############################################################## + +Tags: "version features" "Release Note" + +------------ + +.. important:: + For release notes of related software and environments, please check the `Release Notes`_ + +Upgrade Guide +------------ +FISCO BCOS iterates by version and supports compatible upgrades between versions, including grayscale upgrades; during a grayscale upgrade, the system can still reach consensus and produce blocks normally。For details, please refer to the `Upgrade Guide <./upgrade.html>`_ 。 + +FISCO BCOS uses feature switches to control whether experimental functions are turned on or off, and users can enable them as needed。Bugs with compatibility impact are fixed through bugfix switches。For details, see the `Feature and Bugfix List <./feature_bugfix_list.html>`_ 。 + + +.. toctree:: + :hidden: + :maxdepth: 0 + + upgrade.md + +v3.8.x +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.8.0 <./3_8_0.html>`_ [`release `_] + +..
admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + - View the Max version FISCO BCOS node binary versions: ``./BcosMaxNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version``, ``./BcosExecutorService --version`` + - View the light node binary version: ``./fisco-bcos-lightnode --version`` + - For light node documentation, please refer to the `Light Node Building Tool`_ + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_8_0.md + +v3.7.x +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.7.0 <./3_7_0.html>`_ [`release `_] + +.. admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + - View the Max version FISCO BCOS node binary versions: ``./BcosMaxNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version``, ``./BcosExecutorService --version`` + - View the light node binary version: ``./fisco-bcos-lightnode --version`` + - For light node documentation, please refer to the `Light Node Building Tool`_ + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_7_0.md + 3_7_1.md + +v3.6.x +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.6.0 <./3_6_0.html>`_ [`release `_] + - `FISCO BCOS v3.6.1 <./3_6_1.html>`_ [`release `_] + +.. admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + - View the Max version FISCO BCOS node binary versions: ``./BcosMaxNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version``, ``./BcosExecutorService --version`` + - View the light node binary version: ``./fisco-bcos-lightnode --version`` + - For light node documentation, please refer to the `Light Node Building Tool`_ + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_6_0.md + 3_6_1.md + +v3.5.x +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.5.0 <./3_5_0.html>`_ [`release `_] + + +.. admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + - View the Max version FISCO BCOS node binary versions: ``./BcosMaxNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version``, ``./BcosExecutorService --version`` + - View the light node binary version: ``./fisco-bcos-lightnode --version`` + - For light node documentation, please refer to the `Light Node Building Tool`_ + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_5_0.md + +v3.4.x +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.4.0 <./3_4_0.html>`_ [`release `_] + + +.. admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + - View the Max version FISCO BCOS node binary versions: ``./BcosMaxNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version``, ``./BcosExecutorService --version`` + - View the light node binary version: ``./fisco-bcos-lightnode --version`` + - For light node documentation, please refer to the `Light Node Building Tool`_ + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_4_0.md + +v3.3.x +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.3.0 <./3_3_0.html>`_ [`release `_] + + +.. admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + - View the Max version FISCO BCOS node binary versions: ``./BcosMaxNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version``, ``./BcosExecutorService --version`` + - View the light node binary version: ``./fisco-bcos-lightnode --version`` + - For light node documentation, please refer to the `Light Node Building Tool`_ + + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_3_0.md + + + +v3.2.x +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.2.6 <./3_2_6.html>`_ [`release `_] + - `FISCO BCOS v3.2.5 <./3_2_5.html>`_ [`release `_] + - `FISCO BCOS v3.2.4 <./3_2_4.html>`_ [`release `_] + - `FISCO BCOS v3.2.3 <./3_2_3.html>`_ [`release `_] + - `FISCO BCOS v3.2.2 <./3_2_2.html>`_ [`release `_] + - `FISCO BCOS v3.2.1 <./3_2_1.html>`_ [`release `_] + - `FISCO BCOS v3.2.0 <./3_2_0.html>`_ [`release `_] + + +.. admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + - View the Max version FISCO BCOS node binary versions: ``./BcosMaxNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version``, ``./BcosExecutorService --version`` + - View the light node binary version: ``./fisco-bcos-lightnode --version`` + - For light node documentation, please refer to the `Light Node Building Tool`_ + + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_2_7.md + 3_2_6.md + 3_2_5.md + 3_2_4.md + 3_2_3.md + 3_2_2.md + 3_2_1.md + 3_2_0.md + +v3.1.x +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.1.2 <./3_1_2.html>`_ [`release `_] + - `FISCO BCOS v3.1.1 <./3_1_1.html>`_ [`release `_] + - `FISCO BCOS v3.1.0 <./3_1_0.html>`_ [`release `_] + + +.. admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + - View the Max version FISCO BCOS node binary versions: ``./BcosMaxNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version``, ``./BcosExecutorService --version`` + + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_1_2.md + 3_1_1.md + 3_1_0.md + +v3.0.x +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.0.1 <./3_0_1.html>`_ [`release `_] + - `FISCO BCOS v3.0.0 <./3_0_0.html>`_ [`release `_] + + +.. admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + - View the Max version FISCO BCOS node binary versions: ``./BcosMaxNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version``, ``./BcosExecutorService --version`` + + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_0_1.md + 3_0_0.md + +v3.0.0-rc4 +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.0.0-rc4 <./3_0_0_rc4.html>`_ [`release `_] + + +.. admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + - View the Max version FISCO BCOS node binary versions: ``./BcosMaxNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version``, ``./BcosExecutorService --version`` + + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_0_0_rc4.md + + +v3.0.0-rc3 +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.0.0-rc3 <./3_0_0_rc3.html>`_ [`release `_] + + - v3.0.0-rc3 does not include the "FISCO BCOS Max" version; the Max version of FISCO BCOS will be available in subsequent releases + + +.. admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_0_0_rc3.md + + +v3.0.0-rc2 +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.0.0-rc2 <./3_0_0_rc2.html>`_ [`release `_] + + - v3.0.0-rc2 does not include the "FISCO BCOS Max" version; the Max version of FISCO BCOS will be available in subsequent releases + + +.. admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_0_0_rc2.md + + + +v3.0.0-rc1 +------------------ + +.. admonition:: FISCO BCOS 3.x Releases + + - `FISCO BCOS v3.0.0-rc1 <./3_0_0_rc1.html>`_ [`release `_] + + - v3.0.0-rc1 does not include the "FISCO BCOS Max" version; the Max version of FISCO BCOS will be available in subsequent releases + +.. admonition:: Viewing Node and Data Versions + + - View the Air version FISCO BCOS node binary version: ``./fisco-bcos --version`` + - View the Pro version FISCO BCOS node binary versions: ``./BcosNodeService --version``, ``./BcosRpcService --version``, ``./BcosGatewayService --version`` + +.. toctree:: + :hidden: + :maxdepth: 0 + + 3_0_0_rc1.md diff --git a/3.x/en/docs/introduction/change_log/upgrade.md b/3.x/en/docs/introduction/change_log/upgrade.md index 89869eb0f..815439fe5 100644 --- a/3.x/en/docs/introduction/change_log/upgrade.md +++ b/3.x/en/docs/introduction/change_log/upgrade.md @@ -2,9 +2,9 @@ FISCO BCOS version iteration, designed to support compatibility upgrades between versions [Compatibility Scheme](../design/compatibility.md), support can be gray scale upgrade, and gray scale upgrade process, the system can be normal consensus, out of the block。 Specific system version upgrade steps are as follows: -1. Upgrade the binary: Stop the nodes that need to be upgraded, and gradually replace the binary of all nodes with the current version.。In order not to affect the business, the replacement process can be carried out in grayscale, replacing and restarting nodes one by one.。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。 +1. Upgrade the binary: Stop the nodes that need to be upgraded, and gradually replace the binary of all nodes with the current version。In order not to affect the business, the replacement process can be carried out in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number。 2.
 2. Upgrade the data compatible version number: After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatible version number to the current version. The steps are as follows:
-- Connect to the node through the console and run the upgrade compatibility command: "'setSystemConfigByKey compatibility _ version 3.x.x"'
+- Connect to the node through the console and run the compatibility upgrade command: `setSystemConfigByKey compatibility_version 3.x.x`
 
 ```
 [group0]: /apps> setSystemConfigByKey compatibility_version 3.x.x
 ```
diff --git a/3.x/en/docs/introduction/function_overview.md b/3.x/en/docs/introduction/function_overview.md
index e323e4472..4d74be5b1 100644
--- a/3.x/en/docs/introduction/function_overview.md
+++ b/3.x/en/docs/introduction/function_overview.md
@@ -3,7 +3,7 @@ Tags: "Features Overview"
 -----
-In order to support the demand for massive services, FISCO BCOS v3.0 Stable Edition has designed the system architecture, processing flow, execution, and storage accordingly, and launched three different forms to meet the differentiated needs of different blockchain deployment scenarios.。The functional overview is as follows:
+To support the demand for massive services, the FISCO BCOS v3.0 stable edition redesigns the system architecture, processing flow, execution, and storage accordingly, and provides three different forms to meet the differentiated needs of different blockchain deployment scenarios. The functional overview is as follows:
 
 |**Overall architecture** | |
 | - | - |
@@ -12,7 +12,7 @@ In order to support the demand for massive services, FISCO BCOS v3.0 Stable Edit
 | distributed storage| Support massive data storage|
 | parallel computing| DAG-based support(directed acyclic graph)、DMC(parallel deterministic contract algorithm)and intra-block sharding techniques|
 | Node Type| Consensus node, observation node, light node|
-| calculation model| Sort-Execute-Verify|
+| calculation model| Sort-Execute-Validate|
 | **System performance** ||
 | Peak TPS| 100,000+ TPS(PBFT)|
 | Transaction confirmation delay| second level|
@@ -55,7 +55,7 @@ In order to support the demand for massive services, FISCO BCOS v3.0 Stable Edit
 |Permission Control| Supports fine-grained permission control|
 | **privacy protection** ||
 |Physical isolation| Data isolation between groups|
-|Scenario-based privacy protection mechanism|Based on [WeDPR](https://github.com/WeBankBlockchain/WeDPR-Lab-Core)Support hidden payment, anonymous voting, anonymous bidding, selective disclosure and other scenarios.|
+|Scenario-based privacy protection mechanism|Based on [WeDPR](https://github.com/WeBankBlockchain/WeDPR-Lab-Core); supports hidden payment, anonymous voting, anonymous bidding, selective disclosure, and other scenarios|
 | **cross-chain protocol** ||
 |SPV|Provides an interface for obtaining SPV attestations|
 |cross-chain protocol|Based on [WeCross](https://github.com/WeBankBlockchain/WeCross)Support isomorphic, heterogeneous cross-chain|
diff --git a/3.x/en/docs/introduction/introduction.md b/3.x/en/docs/introduction/introduction.md
index 9abc270d3..fb73bc87c 100644
--- a/3.x/en/docs/introduction/introduction.md
+++ b/3.x/en/docs/introduction/introduction.md
@@ -4,15 +4,15 @@ Tag: "FISCO BCOS Introduction"
 ---
-FISCO BCOS is a financial-grade, domestic-made secure and controllable blockchain underlying platform led by the open source working group of Shenzhen Financial Blockchain Development Promotion Association (hereinafter referred to as "Golden Chain Alliance").。As one of the earliest open source domestic alliance chain underlying platform, FISCO BCOS in 2017 for the global open source。
+FISCO BCOS is a financial-grade, domestically developed, secure and controllable blockchain underlying platform led by the open source working group of the Shenzhen Financial Blockchain Development Promotion Association (hereinafter the "Golden Chain Alliance"). As one of the earliest open-source domestic consortium chain platforms, FISCO BCOS was open-sourced globally in 2017.
 
-Since the sixth anniversary of open source, the FISCO BCOS open source community has made extraordinary achievements in technological innovation, application industry and open source ecology.。
+In the six years since it was open-sourced, the FISCO BCOS open source community has made remarkable achievements in technological innovation, industry applications, and the open source ecosystem.
 
-FISCO BCOS continues to tackle key core technologies, with single-chain performance exceeding 100,000 TPS。The first DMC algorithm greatly improves performance and introduces three architectural forms to flexibly adapt to business needs.;Full link localization, the use of national secret algorithm and hardware and software system, support for domestic OS, adapted to domestic chips and servers, support for multi-language multi-terminal national secret access.。Have the bottom layer covered+Middleware+Rich peripheral components of application components。
+FISCO BCOS continues to tackle key core technologies, with single-chain performance exceeding 100,000 TPS. Its pioneering DMC algorithm greatly improves performance, and three architectural forms flexibly adapt to business needs. The full technology stack is localized: it adopts Chinese national cryptography (SM) algorithms across hardware and software, supports domestic operating systems, is adapted to domestic chips and servers, and supports SM-based access from multiple languages and terminals. It also provides a rich set of peripheral components covering the underlying platform, middleware, and application layer.
 
-The usability of the underlying platform has been widely used and tested in practice, supporting more than 400 benchmark applications in key areas related to the national economy and people's livelihood, such as government affairs, finance, medical care, dual-carbon and cross-border data circulation, contributing to the development of the real economy and promoting fairness and sustainability.。
+The underlying platform has been widely used and proven in practice, supporting more than 400 benchmark applications in key areas related to the national economy and people's livelihood, such as government affairs, finance, healthcare, dual-carbon initiatives, and cross-border data circulation, contributing to the development of the real economy and promoting fairness and sustainability.
 
-As of December 2023, the domestic open source alliance chain ecosystem built around FISCO BCOS has gathered more than 5,000 institutions, more than 100,000 individual members, as well as 50 certified partners and more than 500 core contributors.。The community has certified 63 FISCO BCOS MVPs, developed 12 special interest groups SIG, and cooperated with hundreds of well-known universities to cultivate more than 80,000 talents in the blockchain industry, which has developed into one of the largest and most active domestic open source alliance chain ecosystems.。
+As of December 2023, the domestic open source consortium chain ecosystem built around FISCO BCOS has gathered more than 5,000 institutions and more than 100,000 individual members, as well as 50 certified partners and more than 500 core contributors. The community has certified 63 FISCO BCOS MVPs, established 12 special interest groups (SIGs), and worked with hundreds of well-known universities to train more than 80,000 blockchain professionals, making it one of the largest and most active domestic open source consortium chain ecosystems.
 
 - [Six-year symbiosis to create rainforest ecology| FISCO BCOS Open Source 6th Anniversary Transcript](https://mp.weixin.qq.com/s/VVxRQaRJrwqZqOIgzpN3bQ)
diff --git a/3.x/en/docs/introduction/key_feature.md b/3.x/en/docs/introduction/key_feature.md
index 7fb503a4f..6217d2dd9 100644
--- a/3.x/en/docs/introduction/key_feature.md
+++ b/3.x/en/docs/introduction/key_feature.md
@@ -4,50 +4,50 @@ Tags: "Key Features"
 
 **Air, Pro, Max: Deployable in three architectural forms**
 
-- **Lightweight Air Edition**: Has the same form as v2.0, all functions in one blockchain node (all-in-one)。The architecture is simple and can be quickly deployed in any environment。You can use it for blockchain entry, development, testing, POC verification, etc.。
-- **Pro Edition**The architecture allows the blockchain core function module to be extended in a multi-group manner while implementing the partition deployment of the access layer and the core module by using the access layer module of the blockchain node as a process.。The architecture implements partition isolation to cope with possible future business expansion and is suitable for production environments with continuous business expansion.。
-- **Large Capacity Max Edition**Based on the Pro version, the architecture provides the ability to switch the core module of the chain between master and standby, and can deploy transaction executors and access distributed storage TiKV through multiple machines to achieve parallel expansion of computing and storage.。A node in this architecture consists of a series of microservices, but it relies on high O & M capabilities and is suitable for scenarios that require massive computing and storage.。
+- **Lightweight Air edition**: has the same form as v2.0, with all functions in a single blockchain node (all-in-one). The architecture is simple and can be deployed quickly in any environment, making it suitable for getting started with blockchain, development, testing, and PoC verification.
+- **Pro edition**: runs the access-layer modules of a blockchain node as separate processes, deploying the access layer and the core modules in separate partitions, while the core blockchain modules can be scaled out in a multi-group manner. The partition isolation of this architecture copes with possible future business growth, making it suitable for production environments with continuously expanding business.
+- **Large-capacity Max edition**: based on the Pro edition, adds active/standby switching for the core modules of the chain, and can deploy transaction executors across multiple machines and access the distributed storage TiKV, achieving parallel scaling of computing and storage. A node in this architecture consists of a set of microservices; it requires strong operation and maintenance capabilities and suits scenarios demanding massive computing and storage.
 
 **Pipeline: Block pipeline to generate blocks continuously and compactly**
 
-- The block generation process can be split into four stages: packaging, consensus, execution, and placement.。In previous designs, the system had to wait for the previous block to complete four stages before entering the next block generation.。This version uses a pipeline design, so that the four stages of adjacent blocks overlap before and after, reducing the waiting time between blocks and improving the speed of continuous block output.。For example, block 103 is being packaged, 102 is in consensus, 101 is being executed, and 100 is falling.。[Related Documents: Two-Stage Parallel Byzantine Consensus](../design/consensus/consensus.md)
+- The block generation process can be split into four stages: packaging, consensus, execution, and commit to storage. In previous designs, the system had to wait for the previous block to complete all four stages before starting to generate the next block. This version adopts a pipelined design in which the four stages of adjacent blocks overlap, reducing the waiting time between blocks and increasing the speed of continuous block production; for example, block 103 is being packaged while 102 is in consensus, 101 is being executed, and 100 is being committed to storage. [Related documentation: Two-stage parallel Byzantine consensus](../design/consensus/consensus.md)
 
 **DMC realizes multi-machine expansion of transaction processing performance**
 
-- In the traditional design, the transaction execution can only be single machine。v3.0 Stable Edition uses an original deterministic multi-contract parallel solution (Deterministic Multi-Contract, referred to as DMC), can automatically process transaction conflicts when the system is running, and schedule multiple transactions to different machines for parallel execution, users can expand the computing instance to achieve parallel expansion of transaction processing performance.。[Related Documentation: Deterministic Multi-Contract Parallelism](../design/parallel/DMC.md)
+- In the traditional design, transaction execution can only run on a single machine. The v3.0 stable version adopts an original deterministic multi-contract parallel solution (Deterministic Multi-Contract, DMC), which automatically handles transaction conflicts at runtime and schedules transactions to different machines for parallel execution; users can add computing instances to scale transaction processing performance in parallel. [Related documentation: Deterministic multi-contract parallelism](../design/parallel/DMC.md)
 
 **+TiKV: Distributed transactional commit, supporting mass storage**
 
-- V3.0 stable version integrates the TiKV storage engine and is developed on the basis of it, supports distributed transactional commit, combines DMC multi-computing instances, gives full play to storage performance, and supports massive data on the chain.。At the same time, this version introduces the KeyPage mechanism, referring to the cache mechanism of memory pages, the key-The value is organized into pages for access, which solves the problem of using key in the past.-When storing data in the way of value, the storage data is fragmented, which improves the locality of data access and is more suitable for large-scale data access.。[Related Documentation: Transaction-Based Storage Module](../design/storage/storage.md)
+- The v3.0 stable version integrates the TiKV storage engine and builds on it to support distributed transactional commit; combined with DMC's multiple computing instances, it fully exploits storage performance and supports massive on-chain data. This version also introduces the KeyPage mechanism: borrowing from the caching of memory pages, key-value data is organized into pages for access, which solves the data fragmentation problem of plain key-value storage, improves the locality of data access, and better suits large-scale data access. [Related documentation: Transaction-based storage module](../design/storage/storage.md)
 
 **Blockchain File System: WYSIWYG Contract Data Management**
 
-- The blockchain file system can be used to manage the resources on the chain. You can manage the contracts on the chain like a file system and call them through the path of the contract. Commands include pwd, cd, ls, tree, mkdir, and ln.。Users can experience the feature through the console。[Related Document: Blockchain Contract File System](../design/contract_directory.md)
+- On-chain resources can be managed through the blockchain file system: contracts on the chain can be managed like files and invoked through their paths, with commands including pwd, cd, ls, tree, mkdir, and ln. Users can experience this feature through the console. [Related documentation: Blockchain contract file system](../design/contract_directory.md)
 
 **SDK basic library: more convenient access to the whole platform**
 
-- The v3.0 stable version builds a general-purpose national secret basic component, which encapsulates the national secret algorithm, national secret communication protocol, domestic cipher machine access protocol and FISCO BCOS blockchain basic data structure, based on which SDKs on different platforms, different operating systems and different programming languages can be quickly developed, greatly improving R & D efficiency.。[Related Documentation: Multilingual SDK](../sdk/index.md)
+- The v3.0 stable version builds a general-purpose basic component for Chinese national cryptography (SM), which encapsulates the SM algorithms, SM communication protocols, domestic cipher machine access protocols, and the basic data structures of the FISCO BCOS blockchain. SDKs for different platforms, operating systems, and programming languages can be developed quickly on top of it, greatly improving development efficiency. [Related documentation: Multi-language SDKs](../sdk/index.md)
 
 **Transaction Parallel Conflict Analysis Tool: Automatically Generate Transaction Conflict Variables**
 
-- To implement parallel transactions in v2.0, you need to manually specify the transaction conflict variable when writing the contract。This version introduces a transaction parallel conflict analysis tool, no need to manually specify transaction conflict variables when writing contracts, just focus on their own code implementation, contract compilation tool automatically generates transaction conflict variables, the corresponding transactions can be automatically executed in parallel.。
+- To execute transactions in parallel in v2.0, transaction conflict variables had to be specified manually when writing contracts. This version introduces a transaction parallel conflict analysis tool: developers no longer need to specify conflict variables manually and can focus on their own code; the contract compilation tool generates the conflict variables automatically, and the corresponding transactions can then be executed in parallel automatically.
 
-**WBC-Liquid: Write a contract with Rust**
+**WBC-Liquid: Writing Contracts with Rust**
 
-- In addition to supporting the Soldity language, this version also supports writing contracts in Rust.。WBC-Liquid is a Rust-based smart contract programming language developed by Microblockchain. With the help of Rust language features, it can achieve more powerful programming functions than Solidity language.。[Related Documentation: Liquid Online Documentation](https://liquid-doc.readthedocs.io/zh_CN/latest/)
+- In addition to the Solidity language, this version also supports writing contracts in Rust. WBC-Liquid is a Rust-based smart contract programming language developed by WeBank Blockchain; leveraging Rust language features, it enables more powerful programming capabilities than the Solidity language. [Related documentation: Liquid online documentation](https://liquid-doc.readthedocs.io/zh_CN/latest/)
 
 **Permission governance framework: multi-party voting governance blockchain**
 
-- This version has a built-in permission governance framework that provides effective permission control directly from the blockchain implementation layer.。After the permission governance function is enabled, multi-party voting authorization is required to modify the blockchain.。Based on the framework, blockchain participants can customize governance policies on the blockchain and update them iteratively through voting.。[Related Documents: Rights Management System Design](../design/committee_design.md)
+- This version has a built-in permission governance framework that provides effective permission control directly at the blockchain implementation layer. Once permission governance is enabled, modifying the blockchain requires authorization by multi-party voting. Based on this framework, blockchain participants can customize governance policies on the chain and update them iteratively through voting. [Related documentation: Permission governance system design](../design/committee_design.md)
 
 **Feature Inheritance and Upgrade**
 
 The stable version of v3.0 also inherits and upgrades many of the important features of v2.0, including:
 
-- PBFT consensus algorithm: immediate consensus algorithm for second-level confirmation of transactions
+- PBFT consensus algorithm: an immediately consistent consensus algorithm achieving second-level confirmation of transactions
 - Solidity: Support up to version 0.8.11
-- CRUD: uses a table structure to store data. This version encapsulates an easier-to-use interface, which is more friendly to business development.
-- AMOP: on-chain messenger protocol, with the help of blockchain P2P network to achieve information transmission, to achieve access to the blockchain data communication between applications
-- Disk encryption: The private key and data of the blockchain node are encrypted and stored in the physical hard disk, and the physical hardware cannot be decrypted even if it is lost.
+- CRUD: uses a table structure to store data; this version encapsulates an easier-to-use interface that is friendlier to business development
+- AMOP: the on-chain messenger protocol, which transmits information over the blockchain P2P network, enabling data communication between applications connected to the blockchain
+- Disk encryption: the private key and data of a blockchain node are stored encrypted on the physical disk and cannot be decrypted even if the physical hardware is lost
 - Cryptographic algorithm: built-in group ring signature and other cryptographic algorithms, can support a variety of secure multi-party computing scenarios
-- Blockchain monitoring: real-time monitoring and data reporting of blockchain status
+- Blockchain monitoring: provides real-time monitoring and data reporting of blockchain status
diff --git a/3.x/en/docs/key_concepts.md b/3.x/en/docs/key_concepts.md
index 8bd751403..7feabb08a 100644
--- a/3.x/en/docs/key_concepts.md
+++ b/3.x/en/docs/key_concepts.md
@@ -1,96 +1,96 @@
 # Key Concepts
 
-Blockchain is a technology formed by the cross-combination of multiple disciplines, and this chapter will explain the basic concepts related to blockchain and provide a popular introduction to the basic theories involved.。If you are already familiar with these basic techniques, you can skip this chapter。
basic techniques, you can skip this chapter。 +Blockchain is a technology formed by the cross-combination of multiple disciplines, and this chapter will explain the basic concepts related to blockchain and provide a popular introduction to the basic theories involved。If you are already familiar with these basic techniques, you can skip this chapter。 ## What is Blockchain? -Blockchain (blockchain) is a concept proposed after Bitcoin, in Satoshi Nakamoto's [paper] on Bitcoin(https://bitcoin.org/bitcoin.pdf)The concept of blockchain is not directly introduced in, but a data structure is described in terms of chain of block.。 +Blockchain (blockchain) is a concept proposed after Bitcoin, in Satoshi Nakamoto's [paper] on Bitcoin(https://bitcoin.org/bitcoin.pdf)The concept of blockchain is not directly introduced in, but a data structure is described in terms of chain of block。 -Chain of block refers to the data organization method in which multiple blocks are concatenated into a chain structure through hash.。Blockchain, on the other hand, uses a cross-combination of multiple technologies to maintain and manage the chain of block data structure to form a comprehensive technical field of non-tamperable distributed ledgers.。 +Chain of block refers to the data organization method in which multiple blocks are concatenated into a chain structure through hash。Blockchain, on the other hand, uses a cross-combination of multiple technologies to maintain and manage the chain of block data structure to form a comprehensive technical field of non-tamperable distributed ledgers。 -Blockchain technology is a mode of generating, accessing and using trusted data in a peer-to-peer network environment through transparent and trusted rules to build a block-chain data structure that is non-forgeable, difficult to tamper with and traceable.。In terms of technical architecture, blockchain is an overall solution consisting of distributed architecture and distributed storage, block chain data 
structure, peer-to-peer network, consensus algorithm, cryptography algorithm, game theory, smart contract and other information technologies.。 +Blockchain technology is a mode of generating, accessing and using trusted data in a peer-to-peer network environment through transparent and trusted rules to build a block-chain data structure that is non-forgeable, difficult to tamper with and traceable。In terms of technical architecture, blockchain is an overall solution consisting of distributed architecture and distributed storage, block chain data structure, peer-to-peer network, consensus algorithm, cryptography algorithm, game theory, smart contract and other information technologies。 -Blockchain technology and ecology originated from Bitcoin. As more industries such as finance, justice, supply chain, culture and entertainment, social management, and the Internet of Things pay attention to this field of technology, they hope to apply its technical value to a wider range of distributed collaboration.。 +Blockchain technology and ecology originated from Bitcoin. 
As more industries such as finance, justice, supply chain, culture and entertainment, social governance, and the Internet of Things pay attention to this technology, they hope to apply its value to a wider range of distributed collaboration scenarios.

### Ledger

As the name implies, the ledger manages data such as accounts and transaction flows, and supports functions such as categorized bookkeeping, reconciliation, and settlement. In multi-party cooperation, the participants want to jointly maintain and share a timely, correct, and secure distributed ledger, so as to eliminate information asymmetry, improve operational efficiency, and keep funds and business secure. Blockchain is widely regarded as a core technology for building such a "distributed shared ledger": through a chained block data structure, multi-party consensus mechanisms, smart contracts, and world-state storage, it delivers a shared ledger that is consistent, trusted, transactionally safe, tamper-resistant, and traceable.

The basic elements of the ledger are blocks, transactions, accounts, and the world state.

#### Block

A block is a data structure constructed in chronological order. The first block of a blockchain is called the "genesis block"; subsequent blocks are identified by their "height", which increases one by one. Each new block embeds the hash of its predecessor and, by running a hash algorithm over its own data, generates a unique fingerprint, forming an interlocking chain structure called a "blockchain". This data structure makes on-chain data traceable and verifiable in order of occurrence: modifying the data in any single block causes verification of the whole chain to fail, so tampering carries a very high cost.

A block consists of a block header and a block body. The header carries basic information such as the block height, hash, sealer signature, and state tree root; the body carries the transaction list and the receipts associated with those transactions. A block's size varies with the size of its transaction list; considering network propagation and other factors, it is generally kept small, from about 1 MB to several MB.

#### Transaction

A transaction can be thought of as a request sent to the blockchain system to deploy a contract, call a contract interface, manage a contract's life cycle, manage assets, exchange value, and so on.
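The hash-linking of block headers described in the Block section above can be sketched in a few lines. This is an illustrative toy, not FISCO BCOS's actual header layout: the field names (`height`, `prev_hash`, `data`) and the JSON serialization are assumptions made for the example.

```python
import hashlib
import json

def block_hash(header: dict) -> str:
    # Fingerprint of the serialized header (real chains hash a canonical binary encoding)
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

def make_block(height: int, prev_hash: str, data: str) -> dict:
    return {"height": height, "prev_hash": prev_hash, "data": data}

def verify_chain(blocks: list) -> bool:
    # Each block must reference the hash of its predecessor
    for prev, cur in zip(blocks, blocks[1:]):
        if cur["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block(0, "0" * 64, "genesis")
b1 = make_block(1, block_hash(genesis), "tx batch 1")
b2 = make_block(2, block_hash(b1), "tx batch 2")
chain = [genesis, b1, b2]

print(verify_chain(chain))   # the intact chain verifies
b1["data"] = "tampered"      # modify the data of one block...
print(verify_chain(chain))   # ...and verification of the whole chain fails
```

Note that tampering with `b1` breaks the `b1 -> b2` link, because `b2` still carries the hash of the original `b1`; this is the interlocking property the text describes.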
The basic data structure of a transaction includes the sender, the recipient, and the transaction data. A user builds a transaction, signs it with their private key, and sends it to the chain (through interfaces such as sendRawTransaction). The transaction is processed under the multi-node consensus mechanism, the relevant smart contract code is executed, and the state data specified by the transaction is generated; the transaction is then packaged into a block and stored together with that state data. At this point the transaction is confirmed, and a confirmed transaction is considered transactional and consistent.

When a transaction is confirmed, a transaction receipt is generated and stored in the same block, mapped one-to-one to the transaction. The receipt records information produced during execution, such as the result code, event logs, and the amount of gas consumed. Users can look up the receipt by transaction hash to determine whether the transaction has completed.

Corresponding to such "write" transactions, there is also a read-only "call" method for reading on-chain data.
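The transaction-to-receipt mapping described above can be sketched as follows. The field names, the placeholder `signature`, and the receipt layout (`status`, `gas_used`, `logs`) are illustrative assumptions; a real chain signs with ECDSA and stores receipts inside the block.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class Transaction:
    sender: str
    recipient: str
    data: str
    signature: str = ""  # placeholder; a real chain carries an ECDSA signature here

def tx_hash(tx: Transaction) -> str:
    # The transaction hash is the key users later query receipts by
    return hashlib.sha256(json.dumps(asdict(tx), sort_keys=True).encode()).hexdigest()

receipts: dict = {}  # stand-in for receipt storage inside blocks, keyed by tx hash

def execute(tx: Transaction, gas_used: int, status: int, logs: list) -> str:
    # After consensus, execution produces a receipt stored alongside the transaction
    h = tx_hash(tx)
    receipts[h] = {"status": status, "gas_used": gas_used, "logs": logs}
    return h

h = execute(Transaction("alice", "contract_0x01", "transfer(bob,10)"),
            gas_used=21000, status=0, logs=["Transfer(alice,bob,10)"])
print(receipts[h]["status"])  # 0: the confirmed transaction completed successfully
```

A user holding only the transaction hash `h` can recover the execution outcome from `receipts[h]`, which is exactly the lookup-by-hash flow the text describes.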
After receiving such a request, the node reads the relevant state according to the request parameters and returns it directly; the request does not enter the consensus process and does not modify any on-chain data.

#### Account

In a blockchain system designed around the account model, an account represents a unique entity on the chain, whether a user or a smart contract.

In a blockchain system using a public-private key scheme, a user creates a key pair and derives from it, via an algorithm such as a hash, a unique address string that represents the user's account; the user manages the assets in that account with the private key. A user account does not necessarily occupy its own storage space on the chain; instead, smart contracts manage the user's on-chain data, so this kind of account is also called an "external account".

For smart contracts, a deployed contract has a unique on-chain address, also called the contract account, which points to the contract's status bit, its binary code, and an index of its state data. While the contract runs, its binary code is loaded from this address, the indexed data is read from world-state storage, the results are written back to world-state storage, and the state index in the contract account is updated. When a contract is destroyed, only the status bit in the contract account is marked invalid; the contract account's actual data is generally not erased.

#### World State

FISCO BCOS adopts the "account model" design: besides storage for blocks and transactions, there is a separate storage space for the results of smart contract execution. The state data generated while contracts execute is confirmed by the consensus mechanism and stored on every node in a distributed fashion.

This state storage lets the blockchain hold rich and varied data: user account information such as balances, smart contract binary code, contract execution results, and other related data. During execution a contract fetches data from state storage to take part in its computation, which is the basis for implementing complex contract logic.

On the other hand, maintaining state data carries substantial storage cost. As the chain keeps running, state data keeps growing, and complex data structures such as the Patricia tree expand its footprint further.

### Consensus Mechanism

The consensus mechanism is a core concept of the blockchain field: no consensus, no blockchain. As a distributed system, a blockchain lets different nodes participate in computation, witness the execution of transactions, and confirm the final results. The process by which these loosely coupled, mutually distrusting participants reach a trust relationship and maintain consistent, continuous collaboration can be abstracted as "consensus", and the algorithms and strategies involved are collectively called the consensus mechanism.

#### Node

A computer that has installed the hardware and software required by the blockchain system and joined the blockchain network can be called a "node". Nodes take part in the system's network communication, logical computation, and the verification and storage of blocks, transactions, state, and other data, and they expose interfaces to clients for submitting transactions and querying data. A node identifies itself with a unique NodeID generated from a public-private key pair, guaranteeing its uniqueness on the network.

By their degree of participation in computation and the data they hold, nodes divide into consensus nodes and observer nodes. Consensus nodes take part in the whole consensus process, packaging blocks as sealers and validating blocks as verifiers. Observer nodes do not take part in consensus; they synchronize, verify, and store the data, and can serve as data service providers.

#### Consensus Algorithm

The core issues a consensus algorithm must address are:

1. Select the role holding bookkeeping rights in the system as the leader to initiate a round of bookkeeping.
2. Have the participants accept the leader's bookkeeping after multiple rounds of verification, using algorithms that guarantee non-repudiation and tamper resistance.
3. Ensure, through data synchronization and distributed consistency protocols, that all participants end up with identical, correct final results.

Common consensus algorithms in the blockchain field include Proof of Work, Proof of Stake, Delegated Proof of Stake, and, commonly in consortium chains, Practical Byzantine Fault Tolerance (PBFT) and Raft.
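The numbered steps above can be sketched as quorum voting in the PBFT style. The 2f+1 threshold for n = 3f+1 nodes is the standard PBFT bound; the round-robin leader choice and the node names here are simplifications for illustration, not the actual protocol messages.

```python
# Simplified sketch of one PBFT-style decision:
# n = 3f + 1 nodes tolerate f Byzantine nodes; a proposal commits once
# it collects votes from at least 2f + 1 distinct nodes.

def leader_for(height: int, nodes: list) -> str:
    # Step 1 (simplified): rotate the leader by block height
    return nodes[height % len(nodes)]

def committed(votes: set, n: int) -> bool:
    # Steps 2-3 (simplified): count distinct validated votes against the quorum
    f = (n - 1) // 3
    return len(votes) >= 2 * f + 1

nodes = ["node0", "node1", "node2", "node3"]               # n = 4, so f = 1
print(leader_for(5, nodes))                                # node1 proposes block 5
print(committed({"node0", "node1", "node2"}, len(nodes)))  # 3 >= 2f+1 = 3: commit
print(committed({"node0", "node3"}, len(nodes)))           # 2 votes: not enough
```

Real PBFT reaches this quorum through pre-prepare, prepare, and commit message phases; the sketch keeps only the counting rule that makes f Byzantine nodes tolerable.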
Other cutting-edge consensus algorithms usually combine random number generation organically with the algorithms above to improve their performance and energy consumption.

The FISCO BCOS consensus module uses a pluggable design, supports multiple consensus algorithms, currently including PBFT and Raft, and will go on to implement consensus algorithms that scale larger and run faster.

### Smart Contracts

The concept of the smart contract was first proposed by Nick Szabo in 1995: a contract defined in digital form that enforces its terms automatically. This implies the contract must be implemented in computer code, because once the parties reach agreement, the rights and obligations it establishes are executed automatically and the result cannot be repudiated.

FISCO BCOS uses smart contracts not only for asset management, rule definition, and value exchange, but also for global configuration, operations governance, permission management, and more.

#### Smart Contract Life Cycle

The life cycle of a smart contract runs through design, development, testing, deployment, operation, upgrade, and destruction.

Developers write, compile, and unit-test smart contract code as needed. Contract development languages may include Solidity, C++, Java, Go, JavaScript, Rust, and others; the choice of language depends on the platform's virtual machine. Once a contract passes testing, a deployment transaction is issued to the chain; after confirmation by the consensus algorithm, the contract takes effect and can be called by subsequent transactions.

When a contract needs to be updated or upgraded, the steps from development to deployment are repeated to release a new contract. The new contract gets a new address and independent storage space and does not overwrite the old one. The new contract can reach the data held by the old contract through the old contract's data interfaces, or the old contract's data can be migrated into the new contract's storage.

Destroying an old contract does not erase all of its data; it only sets the contract's status to "invalid", after which the contract can no longer be called.

#### Smart Contract Virtual Machine

To run digital smart contracts, a blockchain system must provide compilers and executors that can compile, parse, and execute computer code, collectively referred to as the virtual machine architecture. After a contract is written, it is compiled, and a deployment transaction is sent to put it on the blockchain. Once the deployment transaction passes consensus, the system assigns the contract a unique address and saves its binary code. When another transaction later calls the contract, the virtual machine executor loads the code from contract storage, executes it, and outputs the execution result.

In a blockchain system that emphasizes security, transactionality, and consistency, the virtual machine should be sandboxed: it shields sources of nondeterminism such as random numbers, system time, external file systems, and the network, and it resists intrusion by malicious code, so that the same transaction against the same contract yields identical results on every node and execution stays safe.

Currently popular virtual machine mechanisms include the EVM, managed Docker containers, WebAssembly, and others.
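The deploy-then-call flow described above can be sketched as a toy contract store. Deriving the address by hashing the code's name plus a nonce is an assumption made purely for illustration, not the actual address scheme of any virtual machine.

```python
import hashlib

contract_store: dict = {}  # stand-in for on-chain storage: address -> contract code

def deploy(code, nonce: int) -> str:
    # The system assigns a unique address and saves the contract code
    # (hashing the code name plus a nonce is illustrative only)
    address = hashlib.sha256(f"{code.__name__}:{nonce}".encode()).hexdigest()[:40]
    contract_store[address] = code
    return address

def call(address: str, *args):
    # The VM executor loads the code from contract storage and executes it
    code = contract_store[address]
    return code(*args)

def adder(a, b):  # a trivial "contract": here just a Python callable
    return a + b

addr = deploy(adder, nonce=1)
print(call(addr, 2, 3))  # 5
```

The point of the sketch is the separation the text describes: deployment stores code under an address once, and every later call re-loads that code from storage and executes it.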
FISCO BCOS's virtual machine module is modular in design; it already supports the EVM, which is widely welcomed by the community, and will support more virtual machines.

Turing machines and Turing completeness are classic concepts in the field of computing. Most of the blockchains that emerged after 2014 support Turing-complete smart contracts, making blockchain systems far more programmable. On top of the blockchain's basic properties (multi-party consensus, tamper resistance, traceability, security, and so on), business contracts carrying real business logic can also be realized, such as the Ricardian Contract, or implemented directly as smart contracts.

Contract execution must also deal with the "halting problem": determining whether a program will finish processing its input within bounded time, then ending execution and releasing resources. Imagine a contract deployed across the whole network and executed on every node whenever it is called: if the contract loops forever, the resources of the entire system could be exhausted. Handling the halting problem is therefore an important concern for Turing-complete computation in the blockchain field.

## Consortium Chain Concepts

The industry usually divides blockchains into public chains, consortium chains, and private chains. A public chain is one that anyone can join and use at any time, even anonymously. A private chain is owned by a single subject (an institution or a natural person) and is managed and used privately. A consortium chain is usually formed jointly by multiple subjects that reach an agreement or establish a business alliance; members joining the chain need to be vetted and are generally identity-aware. Because of this admission mechanism, consortium chains are also commonly called "permissioned chains".

Because a consortium chain applies admission and identity management across formation, joining, operation, and transacting, on-chain operations can be governed by permissions. Its consensus generally adopts mechanisms based on multi-party, multi-round verification and voting, such as PBFT, rather than the energy-hungry POW mining mechanism, and the network scale is relatively controllable, so transaction latency, transaction consistency and finality, concurrency, and capacity can all be optimized substantially.

While inheriting the advantages of blockchain technology, consortium chains better suit sensitive business scenarios that demand high performance and capacity and stress regulation and compliance, such as finance, justice, and the many businesses tied to the real economy. The consortium chain route balances business compliance and stability with business innovation, and is also the direction that governments and industry encourage.

### Performance

#### Performance Metrics

The most common processing performance metric for a software system is TPS (transactions per second): the number of transactions the system can process and confirm per second. Beyond TPS, performance metrics in the blockchain field include confirmation latency, network scale, and so on.

Confirmation latency is the time from sending a transaction to the blockchain network until it is confirmed, after verification, computation, consensus, and other steps. For example, the Bitcoin network produces a block roughly every 10 minutes, and a transaction is only considered confirmed with high probability after 6 blocks, that is, about one hour. With the PBFT algorithm, a transaction can be confirmed within seconds, and once confirmed it is final, which better fits the needs of finance and similar businesses.

Network scale means how many consensus nodes can work together while a given TPS and confirmation latency are maintained. The industry generally holds that systems using the PBFT consensus algorithm scale to around 100 nodes; growing beyond that lowers TPS and raises confirmation latency. Consensus mechanisms that select a bookkeeping committee via random number algorithms can mitigate this problem.

#### Performance Optimization

Performance optimization has two directions: scale up and scale out. Scaling up means optimizing the hardware and software configuration within limited resources to raise processing power substantially, for example by adopting more efficient algorithms or hardware acceleration. Scaling out means the system architecture is extensible: different users and business flows can be carried on different resources, and adding hardware and software resources appropriately lets the system carry more requests.

Performance metrics are tightly coupled to the software architecture and to hardware configuration such as CPU, memory, storage specification, and network bandwidth; and as TPS rises, pressure on storage capacity grows accordingly, so these must be weighed together.

### Security

Security is a big topic, especially for blockchain systems built on distributed networks. At the system level, attention must go to network attacks, system penetration, and data destruction and leakage; at the business level, to unauthorized operations, logic errors, asset losses caused by system instability, and privacy violations.

Security is determined by the "shortest stave of the barrel", so it calls for a comprehensive protection strategy: multi-faceted, all-round protection that meets demanding security standards, plus security best practices that bring every participant up to the same security level, ensuring the security of the whole network.

#### Access Mechanism

The access mechanism requires that institutions and individuals meet standards of verified identity, credible qualification, and technical reliability before forming or joining the chain. A consortium chain is only initiated after the subjects' information has been reviewed by multiple parties; the vetted subjects' nodes are then added to the network, and public and private keys are issued to the vetted personnel.

After access completes, the information about institutions, nodes, and personnel is registered on the chain or in a reliable information service, so every action on the chain can be traced back to an institution and a person.

#### Permission Control

Permission control on a consortium chain governs which personnel may read and write data of various sensitivity levels. Subdividing it yields permissions such as contract deployment, access to in-contract data, block data synchronization, reading and modifying system parameters, and starting and stopping nodes; more permission control points can be added as the business requires.

Permissions are assigned to roles, following the typical Role-Based Access Control (RBAC) design.
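A minimal sketch of an RBAC check is below. The role names and permission strings are invented for illustration and are not FISCO BCOS's actual permission list.

```python
# Map each role to the set of permissions it holds (names are illustrative only)
ROLE_PERMISSIONS = {
    "chain_manager": {"modify_system_params", "start_stop_node"},
    "operator":      {"deploy_contract", "call_contract"},
    "developer":     {"deploy_contract"},
    "supervisor":    {"read_block_data"},
}

def is_allowed(role: str, permission: str) -> bool:
    # A request passes only if the caller's role carries the requested permission
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("developer", "deploy_contract"))  # True
print(is_allowed("developer", "start_stop_node"))  # False
```

Assigning permissions to roles rather than to individuals keeps the model manageable: granting a person a role grants the whole permission set, and revoking the role revokes it.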
A reference design is to divide roles into operation managers, transaction operators, application developers, operation and maintenance managers, and supervisors. Each role can be further subdivided as needed; a complete model may become large and complex, so design it to the degree of security and control the business scenario actually requires.

#### Privacy Protection

-Business scenarios based on blockchain architecture require all participants to output and share relevant data for joint calculation and verification.。How to protect the privacy-related parts of shared data and how to avoid leaking privacy during operations is a very important issue.。
+Business scenarios built on a blockchain architecture require all participants to output and share relevant data for joint computation and verification. How to protect the privacy-related parts of shared data, and how to avoid leaking privacy during operations, are critical issues.

-Privacy protection is first of all a management issue, which requires that when building a system to carry out business, we should grasp the principle of "minimum authorization and express consent," manage the whole life cycle of data collection, storage, application, disclosure, deletion and recovery, establish daily management and emergency management systems, set up regulatory roles in highly sensitive business scenarios, introduce third-party inspection and auditing, and engage in pre-event and post-event control of all aspects.。
+Privacy protection is first of all a management issue. When building a system to carry out business, follow the principle of "minimum authorization and explicit consent": manage the whole life cycle of data collection, storage, use, disclosure, deletion, and recovery; establish routine and emergency management systems; set up regulatory roles in highly sensitive business scenarios; introduce third-party inspection and auditing; and apply controls before, during, and after events.

-Technically, you can use data desensitization, business isolation or system physical isolation to control the scope of data distribution, but also the introduction of cryptographic methods such as zero-knowledge proof, secure multi-party computing, ring signature, group signature, blind signature, etc., the data for high-strength encryption protection.。
+Technically, data desensitization, business isolation, or physical system isolation can be used to control the scope of data distribution. Cryptographic methods such as zero-knowledge proofs, secure multi-party computation, ring signatures, group signatures, and blind signatures can also be introduced to give the data strong cryptographic protection.

#### Physical Isolation

-This concept is mainly used in the field of privacy protection, "physical isolation" is a complete means to avoid privacy data leakage, physical isolation means that only participants who share data communicate with each other in the network communication layer, participants who do not participate in sharing data can not communicate with each other in the network, do not exchange even a byte of data.。
+This concept is mainly used in privacy protection. "Physical isolation" is a thorough means of avoiding private data leakage: at the network communication layer, only the participants who share data communicate with each other, while participants not involved in the sharing cannot communicate with them at all and do not exchange even a byte of data.

-Relatively speaking, it is logical isolation, where participants can receive data that is not related to them, but the data itself is protected by permission control or encryption, so that participants without authorization or keys cannot access and modify it.。However, with the development of technology, the rights controlled data or encrypted data may still be cracked after several years.。
+By contrast, under logical isolation participants may receive data unrelated to them, but the data itself is protected by permission control or encryption, so participants without authorization or keys cannot access or modify it. However, as technology develops, permission-controlled or encrypted data may still be cracked after several years.

-For highly sensitive data, a "physical isolation" strategy can be used to eliminate the possibility of being cracked at the root.。The corresponding cost is the need to carefully screen the sensitivity level of the data, carefully plan the isolation strategy, and allocate sufficient hardware resources to carry different data。
+For highly sensitive data, a "physical isolation" strategy eliminates the possibility of cracking at the root. The corresponding cost is the need to carefully classify the sensitivity of the data, plan the isolation strategy, and allocate sufficient hardware resources to carry the different data sets.

### Governance and Regulation

#### Alliance chain governance

-Alliance chain governance involves a series of issues such as multi-participant coordination, incentive mechanism, safe operation, supervision and audit, etc. 
The core is to clarify the responsibilities and rights of each participant, workflow, build a smooth development and operation and maintenance system, and ensure the legal compliance of the business, including security issues can be prevented in advance after the emergency treatment.。In order to achieve governance, rules need to be developed and implemented to ensure that all participants agree on and implement them.。
+Alliance chain governance involves a series of issues such as multi-participant coordination, incentive mechanisms, safe operation, and supervision and audit. The core is to clarify the responsibilities, rights, and workflows of each participant, build a smooth development and operations system, and ensure legal compliance of the business, so that security issues can be prevented in advance and handled promptly when they occur. To achieve governance, rules need to be formulated and enforced so that all participants agree on them and carry them out.

 A typical alliance chain governance reference model is that all participants jointly form an alliance chain committee that discusses and decides together, setting roles and assigning tasks as the scenario requires: some institutions are responsible for development, some participate in operation management, and all institutions take part in transactions and operation. Smart contracts implement the management rules and maintain system data, and the committee and regulators can hold certain management authority, such as reviewing and configuring contracts for the business, institutions, and personnel.

-In the alliance chain with a sound governance mechanism, the participants carry out peer-to-peer cooperation in accordance with the rules, including asset transactions, data exchange, greatly improve operational efficiency, promote business innovation, while compliance and security are also guaranteed.。
+In an alliance chain with a sound governance mechanism, participants cooperate peer-to-peer in accordance with the rules, including asset transactions and data exchange, which greatly improves operational efficiency and promotes business innovation while compliance and security remain guaranteed.

#### Rapid deployment

 The general steps of building a blockchain system include: obtaining hardware resources (servers, network, memory, disk storage); configuring the environment (choosing an operating system, opening network ports and related policies, planning bandwidth, allocating storage space); obtaining the blockchain binary or compiling it from source; and then configuring the blockchain system, including the genesis block, runtime parameters, logging, and the interconnection configuration among nodes.

 How to simplify and speed up the above steps, so that building and forming a chain is simple, fast, error-free, and low-cost, needs to be considered from the following aspects:

-First, standardize the target deployment platform, prepare the operating system, dependent software list, network bandwidth and storage capacity, network strategy and other key hardware and software in advance, align versions and parameters, make the platform available, rely on complete.。Popular cloud services, docker and other methods can help build such a standardized platform.。
+First, standardize the target deployment platform: prepare the operating system, dependency list, network bandwidth and storage capacity, network policies, and other key hardware and software in advance, align versions and parameters, and make the platform ready with all dependencies complete. Popular cloud services, docker, and similar approaches can help build such a standardized platform.

-Then, from the user's point of view, optimize the blockchain software construction, configuration and group chain process, provide rapid construction, automatic group chain tools, so that users do not need to pay attention to many details, a few simple steps to run the chain for development debugging, online operation.。
+Then, from the user's point of view, optimize the build, configuration, and chain-forming process, and provide rapid, automated chain-building tools, so that users do not need to attend to many details and can get a chain running for development, debugging, and online operation in a few simple steps.

-FISCO BCOS attaches great importance to the user's deployment experience, provides a one-click deployment command line, helps developers quickly build a development and debugging environment, provides enterprise-level tethering tools, and is oriented to multi-agency joint group chain scenarios. It flexibly configures parameters such as hosts and networks, manages relevant certificates, and facilitates collaboration among multiple enterprises.。Optimized for rapid deployment, reducing the time for users to set up the blockchain to less than a few minutes to half an hour。
+FISCO BCOS attaches great importance to the deployment experience: it provides a one-click deployment command line that helps developers quickly build a development and debugging environment, and enterprise-level chain-building tools oriented to scenarios where multiple institutions jointly form a chain, flexibly configuring hosts, networks, and other parameters and managing the relevant certificates to facilitate collaboration among enterprises. With these optimizations for rapid deployment, setting up a blockchain takes from a few minutes to half an hour.

#### Data governance

-Blockchain emphasizes data layer-by-layer verification and history traceability. A common solution is that since the Genesis block, all data will be stored on all participating nodes (except light nodes), resulting in data expansion and capacity constraints. Especially in scenarios that carry massive services, after a certain period of time, general storage solutions can no longer accommodate data, and massive storage costs are high. Another aspect is security. Full data is permanently stored and may face the risk of historical data leakage.。
+Blockchain emphasizes layer-by-layer data verification and historical traceability. A common approach is that, from the genesis block onward, all data is stored on every participating node (except light nodes), which leads to data expansion and capacity constraints. 
Especially in scenarios that carry massive services, after a certain period general storage solutions can no longer accommodate the data, and storage at such scale is costly. Another aspect is security: since the full data is stored permanently, it may face the risk of historical data leakage.

-Data governance consists of several strategies: tailoring migration, parallel scaling, and distributed storage.。How to choose the need to combine scenario analysis。
+Data governance consists of several strategies: pruning and migration, parallel scaling, and distributed storage. Which to choose depends on an analysis of the scenario.

-For data with strong time characteristics, if the clearing and settlement cycle of a business is one week, then the data before one week does not need to participate in online calculation and verification, and the old data can be migrated from the node to the big data storage to meet the requirements of data query and verifiability and the requirements of business storage life.。
+For data with strong time characteristics, e.g. a business whose clearing and settlement cycle is one week, data older than one week no longer needs to take part in online computation and verification, so old data can be migrated from the nodes to big-data storage, which still satisfies the requirements for data query, verifiability, and the business's data-retention period.

-For a business that continues to expand in scale, such as a sharp increase in the number of users or contract certificates, you can allocate different logical partitions for different users and contracts, each logical partition has an independent storage space, and only carries a certain amount of data.。The design of the partition makes it easier to control the allocation of resources and cost management.。
+For a business that keeps expanding in scale, such as a sharp increase in the number of users or contract records, different logical partitions can be allocated to different users and contracts; each logical partition has independent storage space and carries only a bounded amount of data. Partitioning makes resource allocation and cost management easier to control.

-Combined with data pruning migration and parallel expansion, the capacity cost and security level of data are well controlled, facilitating the development of large-scale business.。
+Combining data pruning and migration with parallel expansion keeps data capacity, cost, and security level well under control, facilitating large-scale business.

#### Operation and Maintenance Monitoring

-The blockchain system is logically consistent in terms of construction and operation, and the hardware and software systems of different nodes are basically the same.。Its standardized features bring convenience to operation and maintenance personnel, and can use common tools, operation and maintenance policies and operation and maintenance processes to build, deploy, configure, and troubleshoot blockchain systems, thereby reducing operation and maintenance costs and improving efficiency.。
+The blockchain system is logically consistent in construction and operation, and the hardware and software of different nodes are basically the same. This standardization is convenient for operation and maintenance (O&M) personnel, who can use common tools, policies, and processes to build, deploy, configure, and troubleshoot the blockchain system, reducing O&M costs and improving efficiency.

-Operations and maintenance personnel on the alliance chain operations will be controlled by the permission system, operations and maintenance personnel have the right to modify the system configuration, start and stop the process, view the operation log, troubleshooting and other permissions, but do not participate in business transactions, can not directly view the user data with a high level of security and privacy, transaction data.。
+O&M operations on the alliance chain are controlled by the permission system: O&M personnel have permissions such as modifying the system configuration, starting and stopping processes, viewing run logs, and troubleshooting, but they do not participate in business transactions and cannot directly view user or transaction data with high security and privacy levels.

-During the operation of the system, various operating indicators can be monitored through the monitoring system, the health of the system can be evaluated, and an alarm notification can be issued when a failure occurs, which is convenient for operation and maintenance to respond quickly and deal with it.。
+While the system is running, a monitoring system can track the various operating indicators, evaluate system health, and issue alarm notifications when failures occur, so that O&M can respond and handle them quickly.

-The monitoring dimensions include basic environment monitoring, such as CPU proportion, system memory proportion and growth, disk IO situation, network connection number and traffic, etc.。
+Monitoring dimensions include basic environment monitoring, such as CPU usage, system memory usage and growth, disk I/O, and the number of network connections and their traffic.

-Blockchain system monitoring includes such things as block height, transaction volume and virtual machine computation, consensus node out-of-block voting, etc.。
+Blockchain system monitoring covers block height, transaction volume, virtual machine computation, consensus nodes' block production and voting, and so on.

-Interface monitoring includes, for example, interface call count, interface call time consumption, interface call success rate, etc.。
+Interface monitoring covers, for example, interface call counts, call latency, and call success rate.

-Monitoring data can be output through logs or network interfaces to facilitate interfacing with the organization's existing monitoring systems, reusing the organization's monitoring capabilities and existing O & M processes.。After receiving the alarm, the operation and maintenance personnel use the operation and maintenance tools provided by the alliance chain to view system information, modify the configuration, start and stop the process, and handle faults.。
+Monitoring data can be output through logs or network interfaces to integrate with the organization's existing monitoring systems, reusing its monitoring capabilities and O&M processes. After receiving an alarm, O&M personnel use the tools provided by the alliance chain to view system information, modify configuration, start or stop processes, and handle the fault.

#### Regulatory audit

-With the development of blockchain technology and business form exploration, it is necessary to provide regulatory support functions on the blockchain technology platform to prevent the blockchain system from being outside the laws, regulations and industry rules and becoming a carrier of money laundering, illegal financing or criminal transactions.。
+As blockchain technology and business forms develop, the blockchain platform needs to provide regulatory support functions, to prevent the blockchain system from operating outside laws, regulations, and industry rules and becoming a carrier of money laundering, illegal financing, or criminal transactions.

-The audit function is mainly used to meet the audit internal control, responsibility identification and event traceability requirements of the blockchain system, which requires effective technical means to carry out accurate audit management in line with the industry standards to which the business belongs.。
+The audit function mainly serves the blockchain system's requirements for audit-based internal control, responsibility identification, and event traceability, and requires effective technical means for precise audit management in line with the standards of the industry to which the business belongs.

 Regulators can join the blockchain system as nodes, or interact with it through its interfaces. They can synchronize all data for audit analysis and track the overall business process, and when anomalies are found they can issue instructions with regulatory authority to the blockchain to control the business, participants, accounts, and so on, achieving "penetrating supervision".

-FISCO BCOS supports regulatory audits in terms of role and authority design, functional interfaces, audit tools, etc.。
+FISCO BCOS supports regulatory audit in its role and permission design, functional interfaces, audit tools, and more.

diff --git a/3.x/en/docs/manual/certificate_list.md b/3.x/en/docs/manual/certificate_list.md
index 7d799afdc..0ecf1a586 100644
--- a/3.x/en/docs/manual/certificate_list.md
+++ b/3.x/en/docs/manual/certificate_list.md
@@ -4,7 +4,7 @@ Tags: "CA black and white list" "development manual" "refused to connect"

----

-This document describes the practical operation of CA black and white lists. It is recommended that you understand [Introduction to CA Black and White Lists] before reading this operation document.(../design/security_control/certificate_list.md)。
+This document describes the practical operation of CA black and white lists. 
It is recommended that you read [Introduction to CA Black and White Lists](../design/security_control/certificate_list.md) before this document.

## Blacklist

@@ -34,7 +34,7 @@ $ curl -X POST --data '{"jsonrpc":"2.0","method":"getPeers","params":[1],"id":1}

## Whitelist

-By configuring the whitelist, you can connect to only the specified nodes and deny connections to nodes outside the whitelist.。
+By configuring the whitelist, a node connects only to the specified nodes and denies connections from nodes outside the whitelist.

**Configuration Method**

@@ -47,7 +47,7 @@ Edit 'config.ini',**No configuration means that the whitelist is closed and a co

 cal.1=f306eb1066ceb9d46e3b77d2833a1bde2a9899cfc4d0433d64b01d03e79927aa60a40507c5739591b8122ee609cf5636e71b02ce5009f3b8361930ecc3a9abb0
 ```

-If the node is not started, start the node directly. If the node is started, use the script 'reload _ whitelist.sh' to refresh the whitelist configuration.。
+If the node has not been started, simply start it. If the node is already running, use the script 'reload_whitelist.sh' to refresh the whitelist configuration.

```shell
# If the node is not started
@@ -66,12 +66,12 @@ $ curl -X POST --data '{"jsonrpc":"2.0","method":"getPeers","params":[1],"id":1}

## Usage scenario: Public CA

-All chains that use CFCA to issue certificates, the CA of the chain is CFCA。This CA is shared。Whitelist feature must be enabled。If you use a common CA, the two chains share the same CA. As a result, the nodes of the two unrelated chains can be connected to each other.。At this point, you need to configure a whitelist to deny connections to nodes in unrelated chains.。
+For all chains whose certificates are issued by CFCA, the CA of the chain is CFCA, and this CA is shared, so the whitelist feature must be enabled. With a common CA, two otherwise unrelated chains share the same CA, and as a result the nodes of the two chains can connect to each other. In this case, configure a whitelist to deny connections from nodes of unrelated chains.

**Chain operation steps**

1. Use the tools to build the chain
-2. Query the NodeID of all nodes.
+2. Query the NodeID of all nodes
3. Configure all NodeIDs in the whitelist of **each** node
4. Start the nodes, or refresh the node whitelist configuration with the script 'reload_whitelist.sh'

@@ -313,7 +313,7 @@ View Node Connections

$ curl -X POST --data '{"jsonrpc":"2.0","method":"getPeers","params":[1],"id":1}' http://127.0.0.1:8545 |jq
```

-Although node1 is configured on the whitelist, node0 cannot establish a connection with node1 because node1 is also configured in the blacklist.
+Although node1 is configured in the whitelist, node0 cannot establish a connection with node1 because node1 is also configured in the blacklist

```json
{

diff --git a/3.x/en/docs/manual/log_description.md b/3.x/en/docs/manual/log_description.md
index b5f9782b9..26eb91868 100644
--- a/3.x/en/docs/manual/log_description.md
+++ b/3.x/en/docs/manual/log_description.md
@@ -4,7 +4,7 @@ Tags: "Log Format" "Log Keywords" "Troubleshooting" "View Log"

----

-All group logs of FISCO BCOS are output to the file 'log _% YYYY% mm% dd% HH.% MM' in the log directory, and the log format is customized, so that users can view the running status of the chain through logs.。
+All group logs of FISCO BCOS are output to files named 'log_%YYYY%mm%dd%HH.%MM' in the log directory, and the log format is customized so that users can inspect the running status of the chain through the logs.

## Log Format

@@ -21,13 +21,13 @@ info|2022-11-21 20:00:35.479505|[SCHEDULER][blk-1]BlockExecutive prepare: fillBl

The fields have the following meanings:

-- `log_level`: Log level. 
Currently, log levels include 'trace', 'debug', 'info', 'warning', 'error', and 'fatal'
+- `log_level`: Log level. Currently, log levels include 'trace', 'debug', 'info', 'warning', 'error', and 'fatal'

- `time`: Log output time, accurate to nanoseconds

-- 'module _ name ': module keyword. For example, the synchronization module keyword is' SYNC 'and the consensus module keyword is' CONSENSUS'
+- `module_name`: Module keyword. For example, the synchronization module keyword is 'SYNC' and the consensus module keyword is 'CONSENSUS'

-- 'content ': logging content
+- `content`: Log content

## Common Log Description

@@ -37,22 +37,22 @@ The fields have the following meanings:

```eval_rst
.. note::
-    - Only consensus nodes periodically output consensus packed logs(The command "tail" can be used in the node directory.-f log/* | grep "${group_id}.*++""View consensus packaging logs for a specified group)
+    - Only consensus nodes periodically output consensus packing logs (run "tail -f log/* | grep '${group_id}.*++'" in the node directory to view the consensus packing logs of a specified group)

-    - Pack logs to check whether the consensus node of a specified group is abnormal.**Abnormal consensus node does not output packed logs**
+    - Packing logs can be used to check whether the consensus nodes of a specified group are abnormal. **An abnormal consensus node does not output packing logs**
```

The following is an example of consensus packed logs:

```bash
info|2022-11-21 20:00:45.530293|[CONSENSUS][PBFT]addCheckPointMsg,reqHash=c2e031c8...,reqIndex=2,reqV=9,fromIdx=3,Idx=1,weight=4,minRequiredWeight=3
```

-- 'reqHash ': hash of the PBFT request
-- 'reqIndex ': block height corresponding to PBFT request
+- `reqHash`: Hash of the PBFT request
+- `reqIndex`: Block height corresponding to the PBFT request
- `reqV`: View corresponding to the PBFT request
- `fromIdx`: Index number of the node that generated the PBFT request
- `Idx`: Index number of the current node
- `weight`: Total consensus weight of the proposal corresponding to the request
-- `minRequiredWeight`: The minimum voting weight required to reach consensus on the proposal corresponding to the request.
+- `minRequiredWeight`: The minimum voting weight required to reach consensus on the proposal corresponding to the request

**Exception Log**

@@ -62,30 +62,30 @@ Network jitter, network disconnect, or configuration error(Genesis block file as

```bash
warning|2022-11-17 00:58:03.621465|[CONSENSUS][PBFT]onCheckPointTimeout: resend the checkpoint message package,index=176432,hash=d411d77d...,committedIndex=176431,consNum=176432,committedHash=ecac3705...,view=1713,toView=1713,changeCycle=0,expectedCheckPoint=176433,Idx=0,unsealedTxs=168,sealUntil=176432,waitResealUntil=176431,nodeId=0318568d...
```

-- 'index ': consensus index number
-- 'hash ': consensus block hash
+- `index`: Consensus index number
+- `hash`: Consensus block hash
- `committedIndex`: Height of the latest block committed to disk
- `consNum`: Height of the next block to reach consensus
- `committedHash`: Hash of the latest block committed to disk
- `view`: Current view
- `toView`: Next view
- `changeCycle`: Current timeout clock cycle
-- `expectedCheckPoint`: The next block to be executed is high.
+- `expectedCheckPoint`: Height of the next block to be executed
- `Idx`: Index number of the current node
-- `sealUntil`: The height of the block that can be packaged to generate the next block. In a system block scenario, the block can be packaged to generate the next block if and only if the disk height exceeds sealUntil.
-- `waitResealUntil`: Same as above, the block height of the next block can be packaged to produce the next block, when there is a view switch.+ In the system block scenario, the next block can only be packaged if and only if the drop height exceeds waitResealUntil.
+- `sealUntil`: The height up to which sealing the next block is deferred. 
In a system block scenario, the block can be packaged to generate the next block if and only if the disk height exceeds sealUntil +- `waitResealUntil`: Same as above, the block height of the next block can be packaged to produce the next block, when there is a view switch+ In the system block scenario, the next block can only be packaged if and only if the drop height exceeds waitResealUntil - `unsealedTxs`: Number of unpackaged transactions in the trading pool - `nodeId`: current consensus node id **Block Drop Log** -If the block consensus is successful or the node is synchronizing blocks from other nodes, the disk drop log will be output.。 +If the block consensus is successful or the node is synchronizing blocks from other nodes, the disk drop log will be output。 ```eval_rst .. note:: - Send transactions to nodes, if the transaction is processed, non-free nodes will output drop logs.(The command "tail" can be used in the node directory.-f log/* | grep "Report""View node out-of-block status)If the log is not output, the node is in an abnormal state. Please check whether the network connection is normal and whether the node certificate is valid. + Send transactions to nodes, if the transaction is processed, non-free nodes will output drop logs(The command "tail -f log /* | grep "Report""View node out-of-block status)If the log is not output, the node is in an abnormal state. 
Please check whether the network connection is normal and whether the node certificate is valid
```
@@ -95,18 +95,18 @@ info|2022-11-21 20:00:45.531121|[CONSENSUS][PBFT][METRIC]^^^^^^^^Report,sealer=3
```
The fields in the log are described as follows:
-- 'sealer ': the index number of the consensus node that generates the proposal
-- 'txs': Number of transactions contained in the block
+- `sealer`: the index number of the consensus node that generated the proposal
+- `txs`: number of transactions contained in the block
- `committedIndex`: height of the latest block committed to disk
- `consNum`: height of the next block to reach consensus
- `committedHash`: hash of the latest committed block
- `view`: current view
- `toview`: next view
- `changeCycle`: current timeout clock cycle
-- `expectedCheckPoint`: The next block to be executed is high.
+- `expectedCheckPoint`: height of the next block to be executed
- `Idx`: index number of the current node
-- `sealUntil`: The height of the block that can be packaged to generate the next block. In a system block scenario, the block can be packaged to generate the next block if and only if the disk height exceeds sealUntil.
-- `waitResealUntil`: Same as above, the block height of the next block can be packaged to produce the next block, when there is a view switch.+ In the system block scenario, the next block can only be packaged if and only if the drop height exceeds waitResealUntil.
+- `sealUntil`: the height that must be committed to disk before the next block can be sealed. In a system-block scenario, the next block can be sealed if and only if the committed height exceeds sealUntil.
+- `waitResealUntil`: same as sealUntil, but takes effect after a view change. In a system-block scenario, the next block can be sealed if and only if the committed height exceeds waitResealUntil.
- `unsealedTxs`: number of unsealed transactions in the transaction pool
- `nodeId`: current consensus node ID
@@ -116,7 +116,7 @@ The fields in the log are described as follows:
```eval_rst
.. note::
-    The command "tail" can be used in the node directory.-f log/* | grep "connected count""Check the network status. If the number of network connections in the log output does not meet expectations, run the-anp | grep fisco-bcos "command to check node connections
+    Run "tail -f log/* | grep 'connected count'" in the node directory to check the network status. If the number of network connections in the log output does not meet expectations, run "netstat -anp | grep fisco-bcos" to check node connections.
```
An example of a log is as follows:
diff --git a/3.x/en/docs/manual/operation_and_maintenance.md b/3.x/en/docs/manual/operation_and_maintenance.md
index c10f2706c..991061659 100644
--- a/3.x/en/docs/manual/operation_and_maintenance.md
+++ b/3.x/en/docs/manual/operation_and_maintenance.md
@@ -4,19 +4,19 @@ Tags: "Operation and Maintenance"
## Deploy
-Alliance chain is a distributed network and distributed system composed of multiple nodes, the node geographic location rate belongs to a certain partition, and the attribution rate belongs to an organization.。The deployment of alliance chain needs to consider many factors such as organization, partition, node, etc.。Here are some basic principles of deployment:
+A consortium chain is a distributed network and distributed system composed of multiple nodes; a node is usually located in a particular partition geographically and usually belongs to a particular organization. Deploying a consortium chain needs to take organizations, partitions, nodes and other factors into account. Here are some basic principles of deployment:
|#|Purpose|Content
|:--|:--|:--
|1|Consensus has fault-tolerant space|The number of nodes satisfies N = 3F+1. The chain needs at least 4 nodes
|2|Partition fault tolerance|The number of consensus nodes per partition should not exceed F
|3|Avoid single points of failure within an organization|At least 2 nodes per organization
-|4|Save resources and increase efficiency|Some nodes in the mechanism are observation nodes.
-|5|Institutional Weight Adjustment|Adjust the number of nodes in the organization and the weight of the consensus node according to the weight agreed by all parties.
+|4|Save resources and increase efficiency|Some nodes in an organization are observer nodes
+|5|Organization weight adjustment|Adjust the number of an organization's nodes and the weight of its consensus nodes according to the weight agreed by all parties
## Log Description
-FISCO BCOS provides a standardized log output format, which can be used to analyze the running status of the system, locate problems, monitor statistics, etc.。
+FISCO BCOS provides a standardized log output format, which can be used to analyze the running status of the system, locate problems, collect monitoring statistics, and so on.
```bash
# Log format:
@@ -29,37 +29,37 @@ info|2022-11-21 20:00:35.479505|[SCHEDULER][blk-1]BlockExecutive prepare: fillBl
where log_level is the log level (from low to high: trace, debug, info, warning, error, fatal), time is the log printing time, [module_name] is the module name (consensus, synchronization, transaction pool, storage, etc.), and content is the specific log content. For general log analysis and problem locating, see [Log Description](./log_description.md).
-The log output level is configured in the config.ini file.
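For reference, the `[log]` section of config.ini is where this level is set; a typical snippet looks like the following sketch (field names follow common FISCO BCOS 3.x defaults; verify against your own config.ini before relying on them):

```ini
[log]
    ; enable log output
    enable=true
    ; directory where log files are written
    log_path=./log
    ; log level: trace / debug / info / warning / error / fatal
    level=info
    ; maximum size of a single log file, in MB
    max_log_file_size=200
```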
In the test environment, it is recommended to set it to the trace or debug level, which can output logs of all levels for easy analysis and positioning.。In a production environment, we recommend that you set it to the info level to reduce the amount of log output (the amount of trace and debug logs is large) and avoid excessive log disk usage.。
+The log output level is configured in the config.ini file. In the test environment, it is recommended to set it to the trace or debug level, which outputs logs of all levels for easy analysis and problem locating. In a production environment, we recommend setting it to the info level to reduce the amount of log output (trace and debug produce large volumes) and avoid excessive log disk usage.
## Monitoring and alarms
-The monitoring of FISCO BCOS includes two parts: blockchain monitoring and system monitoring.。
+The monitoring of FISCO BCOS includes two parts: blockchain monitoring and system monitoring.
-[Blockchain monitoring] FISCO BCOS provides its own system monitoring tool monitor.sh, which can monitor node survival, consensus status, and ledger status.。The monitor.sh tool can connect the output content to the organization's own operation and maintenance monitoring system, so that blockchain monitoring can be connected to the organization's operation and maintenance monitoring platform.。
+[Blockchain monitoring] FISCO BCOS provides its own monitoring tool monitor.sh, which can monitor node liveness, consensus status, and ledger status. The output of monitor.sh can be fed into the organization's own operation and maintenance monitoring system, so that blockchain monitoring plugs into the organization's O & M monitoring platform.
-[System Monitoring] In addition to monitoring the FISCO BCOS node itself, it is also necessary to monitor relevant indicators from the perspective of the system environment.。It is recommended that the operation and maintenance should monitor
the CPU, memory, bandwidth consumption and disk consumption of the node to find out the abnormal system environment in time.。FISCO BCOS3.0 can monitor whether the blockchain is working properly, including monitoring consensus, abnormal synchronization, and disk space. It also provides a simple way to access the user alarm system. You can view the [light _ monitor.sh monitoring tool](../operation_and_maintenance/light_monitor.md)。
+[System Monitoring] In addition to monitoring the FISCO BCOS node itself, relevant indicators should be monitored from the perspective of the system environment. It is recommended that operations staff monitor the node's CPU, memory, bandwidth and disk consumption to detect abnormalities in the system environment in time. FISCO BCOS 3.0 can monitor whether the blockchain is working properly, including consensus, synchronization anomalies, and disk space, and provides a simple way to hook into the user's alarm system; see the [light_monitor.sh monitoring tool](../operation_and_maintenance/light_monitor.md).
## Data Backup and Recovery
FISCO BCOS supports two data backup methods; you can choose the appropriate one according to your needs.
-[Method 1]: Stop the node, package and compress the data directory of the node as a whole and back it up to another location, decompress the backup data when needed, and restore the node。This method is equivalent to a snapshot of the data in a ledger state for subsequent recovery from this state.
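Method 1 described above (stop the node, archive its data directory, restore later) can be sketched in shell; the directory layout and paths here are assumptions for illustration, not the project's official tooling:

```shell
set -e
NODE_DIR=${NODE_DIR:-/tmp/demo-node}        # assumed node directory; point this at your real node
BACKUP_DIR=${BACKUP_DIR:-/tmp/demo-backup}  # assumed backup location

mkdir -p "$NODE_DIR/data"                   # stands in for the node's data directory in this sketch

# 1. stop the node before taking the snapshot (uncomment on a real node)
# "$NODE_DIR/stop.sh"

# 2. archive and compress the data directory as a snapshot
mkdir -p "$BACKUP_DIR"
SNAPSHOT="$BACKUP_DIR/data-snapshot.tar.gz"
tar -czf "$SNAPSHOT" -C "$NODE_DIR" data

# 3. restore: decompress the snapshot back into the (stopped) node directory, then restart
tar -xzf "$SNAPSHOT" -C "$NODE_DIR"
echo "restored from $SNAPSHOT"
```

Because the snapshot captures a historical ledger state, a node restored this way still has to synchronize newer blocks from its peers.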
For details, see [Node Monitoring Configuration].(https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/tutorial/air/build_chain.html?highlight=%E7%9B%91%E6%8E%A7#id4)。
+[Method 1]: Stop the node, package and compress the node's data directory as a whole, and back it up to another location; when needed, decompress the backup data to restore the node. This method amounts to taking a snapshot of the ledger state for later recovery from that state. For details, see [Node Monitoring Configuration](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/tutorial/air/build_chain.html?highlight=%E7%9B%91%E6%8E%A7#id4).
-[Method 2]: According to the data archiving service tool, the data on the chain can be archived and stored.。When you need to restore or add new nodes, you can restore the archived data to realize data backup and recovery. For specific data archiving operations, please refer to [Data Archiving Usage](../operation_and_maintenance/data_archive_tool.md)
+[Method 2]: Using the data archiving service tool, on-chain data can be archived and stored. When you need to restore or add new nodes, the archived data can be restored, achieving data backup and recovery. For specific data archiving operations, refer to [Data Archiving Usage](../operation_and_maintenance/data_archive_tool.md)
-The advantage of method 1 is that there is no need to deploy new services and operations, and the operation and maintenance is simple. The disadvantage is that a historical state is backed up, and the data recovered from this state is not the latest data. After recovery, the updated data of the ledger needs to be synchronized from other nodes.。Method 2 is the opposite of Method 1, which requires the deployment of services, which is more expensive to operate and maintain, but can be restored to the latest ledger state at any time.。
+The advantage of method 1 is that no new services need to be deployed and operations are simple; the disadvantage is that it backs up a historical state, so the recovered data is not the latest, and after recovery the newer ledger data must be synchronized from other nodes. Method 2 is the opposite: it requires deploying a service and is more expensive to operate and maintain, but can restore to the latest ledger state at any time.
## Expansion method
-FISCO BCOS expansion mainly considers two aspects: the expansion of the number of nodes and the expansion of the number of disks.。
+FISCO BCOS expansion mainly covers two aspects: expanding the number of nodes and expanding disk capacity.
[Node number expansion]: FISCO BCOS supports dynamic addition and removal of nodes and can change a node's identity status (consensus, observer, free). Exit and status changes can be done directly through console commands. To add a node, perform the following steps:
1. Prepare certificates for the new node and issue the node certificate with the agency certificate;
2. Prepare the machine of the new node, allocate the RPC and P2P ports, ensure that the ports can be connected, and ensure that the P2P ports can communicate with other nodes;
-3. Generate the configuration of the new node, mainly the network configuration in config.ini。During configuration, we recommend that you copy a copy from another node and modify the network-related configuration items on this basis.;
-4.
Publish the new node to the machine, start the node, verify whether the network connection between the new node and other nodes is established, and eliminate exceptions such as certificate problems and network policy problems.;
+3. Generate the configuration of the new node, mainly the network configuration in config.ini. During configuration, we recommend copying the configuration from another node and modifying the network-related items on that basis;
+4. Publish the new node to the machine, start the node, verify that the network connection between the new node and other nodes is established, and rule out exceptions such as certificate problems and network policy problems;
5. Send a command from the console to add the new node as an observer node;
-6. At this time, the node does not participate in the consensus, it will synchronize the ledger and wait for the block height to reach an agreement with other nodes.;
+6. At this point the node does not participate in consensus; it synchronizes the ledger until its block height catches up with the other nodes;
7. Send a command from the console to change the new node's status to consensus node.
FISCO BCOS supports node expansion in all of the Air, Pro and Max editions, and the steps above are the same. For details, refer to [Air node expansion](../tutorial/air/expand_node.md), [Pro node expansion](../tutorial/pro/expand_node.md), [Max node expansion](../tutorial/max/max_builder.md).
@@ -73,7 +73,7 @@ Air chain data disk expansion: FISCO BCOS uses the rocksdb storage engine by def
4. Migrate the node to the new disk;
5. Restart the node;
6. Send a command from the console to add the node back to the consensus.
- Some cloud platforms provide one-click upgrade, expansion of hard disk and other functions, the above 3-4 steps can replace this function。
+ Some cloud platforms provide functions such as one-click upgrade and hard disk expansion, which can replace steps 3-4 above.
Max chain data disk expansion: We recommend using the TiKV cluster version for Max nodes in production. The TiKV cluster can serve as the backend of the nodes and scales out easily and simply; for specific expansion and contraction, refer to [TiKV Expansion](../tutorial/max/max_builder.md).
@@ -81,28 +81,28 @@ Max Chain Data Disk Expansion:We recommend that you use the TIKV cluster version
FISCO BCOS supports friendly node upgrades and contract-compatible upgrades.
-[Node upgrade]: FISCO BCOS uses compatibility _ version to control the compatibility version of the block chain. Compatibility _ version must be determined in the construction chain. This configuration cannot be changed during subsequent node upgrades.。For example, the compatibility _ version is 3.1.0 when the chain is established, and the compatibility _ version configuration must remain at 3.1.0 after subsequent node upgrades to 3.2.0 and 3.3.0.。The node upgrade steps are as follows:
+[Node upgrade]: FISCO BCOS uses compatibility_version to control the compatibility version of the blockchain. compatibility_version must be determined when the chain is built and cannot be changed during subsequent node upgrades. For example, if compatibility_version is 3.1.0 when the chain is created, it must remain 3.1.0 even after the nodes are later upgraded to 3.2.0 or 3.3.0. The node upgrade steps are as follows:
1. Stop the node;
-2. Back up the FICO of the old version node-bcos binary executable, replaced with new version;
+2. Back up the fisco-bcos binary executable of the old version node and replace it with the new version;
3. Restart the node;
4. Check the consensus and synchronization status to ensure the node operates normally.
For contract upgrades, refer to the document [upgrade of smart contract](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/contract_life_cycle.html#id5).
## Key Management
-FISCO BCOS involves the management of private keys and certificates of chains, institutions, nodes, and SDKs. If a self-signed certificate is used (FISCO BCOS is provided by default), O & M needs to manage all these private keys and certificates and make backups.。The specific management method can be the organization's own management system, or the key escrow service provided by FISCO BCOS (the service needs to be deployed and maintained).。The certificates and private keys involved include:
+FISCO BCOS involves the management of private keys and certificates for chains, organizations, nodes, and SDKs. If self-signed certificates are used (the FISCO BCOS default), O & M needs to manage and back up all of these private keys and certificates. They can be managed with the organization's own management system or with the key escrow service provided by FISCO BCOS (which needs to be deployed and maintained). The certificates and private keys involved include:
-1. The private key and certificate of the chain.
+1. The private key and certificate of the chain
2. The private key and certificate of the institution
-3. The node's private key and certificate.
+3. The node's private key and certificate
4. SDK private key and certificate
-All keys and certificates support the national secret. The generated national secret certificate and private key have their own sm prefix. For example, the normal key and certificate are ca.key and ca.crt, and the national secret private key and certificate are sm _ ca.key and sm _ ca.crt.。
+All keys and certificates support SM (Chinese national cryptography); the generated SM certificates and private keys carry an sm prefix.
For example, the standard key and certificate are ca.key and ca.crt, while the SM private key and certificate are sm_ca.key and sm_ca.crt.
## TLS Communication Certificate Maintenance
-In order to ensure the security of system communication operation and maintenance, FISCO BCOS regularly updates the TLS communication key of nodes to prevent attackers from analyzing the key by intercepting a large number of ciphertexts over a long period of time.。
+To keep system communication secure, FISCO BCOS regularly updates nodes' TLS communication keys to prevent attackers from recovering a key by intercepting large volumes of ciphertext over a long period of time.
Key updates come in two forms: updating all of a node's certificates and keys, or updating only the node's TLS communication certificate. The steps for updating the root certificate are as follows:
1. Back up the original CA certificate and key;
@@ -119,7 +119,7 @@ To update only the node TLS communication certificate, follow these steps:
If a certificate is compromised, the longer it remains in use, the greater the loss. Therefore certificates should be given a validity period, and when a certificate expires or is taken out of use it should be destroyed. The destruction process is as follows:
-1. Check the validity period of the node communication certificate. If the certificate expires, the certificate will be archived and destroyed. If the key is stopped, the user can also take the initiative to destroy the certificate after the certificate is archived.;
+1. Check the validity period of the node communication certificate. If the certificate has expired, archive and then destroy it; if the key is retired, the user can also proactively destroy the certificate after archiving it;
2. Update the node TLS communication key to generate a new communication certificate;
3. Back up the new certificate, restart the node, and enable the new certificate.
diff --git a/3.x/en/docs/operation_and_maintenance/add_new_node.md b/3.x/en/docs/operation_and_maintenance/add_new_node.md
index d683478b3..6f1695164 100644
--- a/3.x/en/docs/operation_and_maintenance/add_new_node.md
+++ b/3.x/en/docs/operation_and_maintenance/add_new_node.md
@@ -6,29 +6,29 @@ Label: "node management" "expansion group" "free node" "new node" "consensus nod
FISCO BCOS introduces [free nodes, observer nodes and consensus nodes](../design/security_control/node_management.html#id6). The three node types can be converted to each other through the console.
-- Consensus node: The node that participates in the consensus and owns all the data of the group (consensus nodes are generated by default when the chain is connected)。
+- Consensus node: a node that participates in consensus and owns all the data of the group (consensus nodes are generated by default when the chain is built).
- Observer node: a node that does not participate in consensus but synchronizes on-chain data in real time.
-- Free node: node that has been started and is waiting to join the group。In a temporary node state, can not get the data on the chain。
+- Free node: a node that has been started and is waiting to join the group. It is in a temporary state and cannot access on-chain data.
-Convert the specified node into a consensus node, an observer node, and a free node.
+The specified node can be converted into a consensus node, observer node or free node with the following console commands:
-- [addSealer: Set the corresponding node as a consensus node based on the node NodeID](./console/console_commands.html#addsealer)
+- [addSealer: Set the corresponding node as a consensus node according to its NodeID](./console/console_commands.html#addsealer)
- [addObserver: Set the corresponding node as an observer node according to its NodeID](./console/console_commands.html#addobserver)
-- [removeNode: Set the corresponding node as a free node based on the node NodeID](./console/console_commands.html#removenode)
+- [removeNode: Set the corresponding node as a free node according to its NodeID](./console/console_commands.html#removenode)
-- [getSealerList: View list of consensus nodes in a group](./console/console_commands.html#getsealerlist)
+- [getSealerList: View the list of consensus nodes in the group](./console/console_commands.html#getsealerlist)
- [getObserverList: View the list of observer nodes in the group](./console/console_commands.html#getobserverlist)
- [getNodeIDList: View the NodeIDs of all other nodes to which the node is connected](./console/console_commands.html#getnodeidlist)
-The following is a detailed description of how the group can expand a new node in combination with a specific operation case.。The expansion operation is divided into two phases, namely**Generate certificates for nodes and launch**、**Add node to group**。
+The following describes in detail, with a concrete example, how to add a new node to the group. The expansion consists of two phases: **generate certificates for the node and start it**, then **add the node to the group**.
This section assumes that the user has already followed [Building the First Blockchain Network](../quick_start/air_installation.md) to build a 4-node consortium chain; the following steps generate a new node and then add it to group 1. If you are using the O & M deployment tool, please refer to [the expansion operation here](./build_chain.md).
-## 1. Generate a certificate for the node and start it.
+## 1. Generate a certificate for the node and start it
-Each node needs to have a set of certificates to establish connections with other nodes on the chain, and to expand a new node, you first need to issue a certificate for it.。
+Each node needs a set of certificates to establish connections with other nodes on the chain; to add a new node, you first need to issue a certificate for it.
### Generate private key certificate for new node
@@ -42,14 +42,14 @@ curl -#LO https://raw.githubusercontent.com/FISCO-BCOS/FISCO-BCOS/master-2.0/too
```eval_rst
.. note::
-    - If you cannot download for a long time due to network problems, try 'curl-#LO https://gitee.com/FISCO-BCOS/FISCO-BCOS/raw/master-2.0/tools/gen_node_cert.sh`
+    - If you cannot download for a long time due to network problems, try "curl -#LO https://gitee.com/FISCO-BCOS/FISCO-BCOS/raw/master-2.0/tools/gen_node_cert.sh"
```
2. Generate a new node private key certificate
```bash
-# -c path of the specified institution certificate and private key
-# -o Output to the specified folder, where the new certificate and private key issued by the agency will exist in node4 / conf
+# -c specifies the path of the institution certificate and private key
+# -o outputs to the specified folder; the certificate and private key newly issued by the agency will be placed in node4/conf
# On success, an "All completed" prompt will be printed
bash gen_node_cert.sh -c ../cert/agency -o node4
```
@@ -130,7 +130,7 @@ The string similar to the following is nodeid, which is the hexadecimal represen
### Join node4 to group 1 using the console
-1. Use addObserver to add node4 as an observation node to group 1.
+1.
Use addObserver to add node4 as an observer node to group 1
```bash
[group:1]> getObserverList
@@ -148,7 +148,7 @@ The string similar to the following is nodeid, which is the hexadecimal represen
]
```
-2. Use addSealer to add node4 as a consensus node to group 1.
+2. Use addSealer to add node4 as a consensus node to group 1
```bash
[group:1]> getSealerList
diff --git a/3.x/en/docs/operation_and_maintenance/browser.md b/3.x/en/docs/operation_and_maintenance/browser.md
index 40cb0aac8..4bd76831e 100644
--- a/3.x/en/docs/operation_and_maintenance/browser.md
+++ b/3.x/en/docs/operation_and_maintenance/browser.md
@@ -1,10 +1,10 @@
-# 15. Blockchain Browser.
+# 15. Blockchain Browser
Tags: "blockchain browser" "graphical"
------
-Blockchain Browser--The WeBASE management platform can visualize the data in the blockchain and display it in real time, making it easy for users to obtain the information in the current blockchain in the form of a Web page.。Browser version adapted to FISCO BCOS 2.0+and 3.0+The blockchain browser mainly displays the specific information of the data on the chain, which includes: overview information, block information, transaction information, etc.。
-This document will mainly introduce the blockchain browser--Features of the WeBASE management platform and how to deploy and upgrade it。
+The blockchain browser, the WeBASE management platform, visualizes blockchain data and displays it in real time, making it easy for users to view current blockchain information in a web page. The browser is adapted to FISCO BCOS 2.0+ and 3.0+. It mainly displays the on-chain data, including overview information, block information, transaction information, etc.
+This document mainly introduces the features of the WeBASE management platform and how to deploy and upgrade it.
### 1. Functional overview
WeBASE Management Platform key features summary:
@@ -14,12 +14,12 @@ WeBASE Management Platform Key Features Summary:
- Private key management
- Application management
- System management
-- System Monitoring
-- Transaction Audit
+- System monitoring
+- Transaction audit
- Event subscription
- Account management
-- Group Management
-- Mobile Management Desk
+- Group management
+- Mobile management console
- Data monitoring dashboard
For a detailed description of each function, please refer to the [WeBASE Management Platform User Manual](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Console-Suit/index.html).
@@ -27,8 +27,8 @@ For detailed description of each function, please refer to [WeBASE Management Pl
### 2. One-click deployment
The WeBASE management platform supports [one-click deployment](https://webasedoc.readthedocs.io/zh_CN/lab-dev/docs/WeBASE/install.html), which quickly builds a WeBASE console environment on a single machine so users can quickly try out the platform.
-One-click deployment will build: node (FISCO-BCOS 3.0+), management platform (WeBASE-Web), Node Management Subsystem (WeBASE-Node-Manager), Node Front Subsystem (WeBASE-Front), signing service (WeBASE-Sign)。
-Among them, the construction of the node is optional, you can choose to use the existing chain or build a new chain through the configuration.。
+One-click deployment builds: a node (FISCO-BCOS 3.0+), the management platform (WeBASE-Web), the node management subsystem (WeBASE-Node-Manager), the node front subsystem (WeBASE-Front), and the signature service (WeBASE-Sign).
+Building the node is optional; you can use an existing chain or build a new chain through configuration.
The one-click deployment architecture is as follows:
![](../../images/webase/img.png)
diff --git a/3.x/en/docs/operation_and_maintenance/build_chain.md b/3.x/en/docs/operation_and_maintenance/build_chain.md
index 2696d7580..81d8a6f1f 100644
--- a/3.x/en/docs/operation_and_maintenance/build_chain.md
+++ b/3.x/en/docs/operation_and_maintenance/build_chain.md
@@ -6,7 +6,7 @@ Tags: "build _ chain" "Build an Air version of the blockchain network"
```eval_rst
.. important::
-    Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
+    For related software and environment release notes, please check `the compatibility documentation <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
```
diff --git a/3.x/en/docs/operation_and_maintenance/committee_usage.md b/3.x/en/docs/operation_and_maintenance/committee_usage.md
index 99c296015..90d66308d 100644
--- a/3.x/en/docs/operation_and_maintenance/committee_usage.md
+++ b/3.x/en/docs/operation_and_maintenance/committee_usage.md
@@ -4,39 +4,39 @@ Tags: "Contract Permissions" "Deployment Permissions" "Permission Control" "Perm
----
-FISCO BCOS 3.x introduces the authority governance system of contract granularity.。The governance committee can manage the deployment of the contract and the interface call permission of the contract by voting.。
+FISCO BCOS 3.x introduces a contract-granularity permission governance system. The governance committee can manage contract deployment and contract interface call permissions by voting.
For the detailed design, please refer to: [Permission Management System Design](../design/committee_design.md)
## 1.
Enable permission governance mode
-Before the blockchain is initialized and started, you must enable and set the permission governance configuration in the configuration to correctly start the permission governance mode.。Reconfiguration after blockchain startup will not work。
+Before the blockchain is initialized and started, you must enable and set the permission governance configuration; otherwise the permission governance mode will not start correctly. Reconfiguring after the blockchain has started has no effect.
-To enable the permission governance mode, set the 'is _ auth _ check' option to 'true' and set the 'auth _ admin _ account' initial committee account address to the correct address.。
+To enable the permission governance mode, set the `is_auth_check` option to `true` and set `auth_admin_account` to the correct initial committee account address.
-**Updated:** After version 3.3, it will be supported to dynamically open the permission mode after the chain is started and deploy the governance committee contract.。Please pay attention to the section of dynamic opening permission mode in this chapter.。
+**Updated:** Since version 3.3, the permission mode can be enabled dynamically after the chain has started, deploying the governance committee contract; see the section on dynamically enabling the permission mode in this chapter.
-Different node deployment modes of FISCO BCOS have slightly different ways to enable permission governance。This section will discuss separately how to turn on permission governance in different node deployment modes.。
+The different node deployment modes of FISCO BCOS enable permission governance in slightly different ways. This section discusses how to turn on permission governance in each deployment mode.
### 1.1 Enabling Permission Governance in the FISCO BCOS Air Edition
-FISCO BCOS Air version of the chain deployment tool details, please refer to: [Air deployment tool](../tutorial/air/build_chain.md)。Take building four nodes as an example to enable permission governance settings.。
+For details of the FISCO BCOS Air chain deployment tool, refer to: [Air deployment tool](../tutorial/air/build_chain.md). The following takes building four nodes as an example of enabling permission governance.
-Chain building deployment tools are-A 'and'-a 'Two modes for enabling permission mode:
+The deployment tool has two options, `-A` and `-a`, for enabling the permission mode:
-**Note:** After version 3.3, the 'build _ chain.sh' script will turn on the permission mode by default, and there will be no more '-A 'option, if you do not specify an account address, an account public-private key pair will be generated by default and placed in the' ca 'directory of the chain.(../develop/account.md)。
+**Note:** Since version 3.3, the `build_chain.sh` script turns on the permission mode by default and no longer has the `-A` option. If you do not specify an account address, an account public-private key pair is generated by default and placed in the chain's `ca` directory. For creating and using accounts, refer to: [Creating and Using Accounts](../develop/account.md).
- `-A`: enables the permission setting and randomly generates an account address using the `get_account.sh` and `get_gm_account.sh` scripts; the generated account's public-private key pair is placed in the chain's `ca` directory. For details about creating and using an account, see [Creating and Using an Account](../develop/account.md)
-- `-a ': will open the permission settings and specify an account address as the only account for initializing the governance committee.**When specifying, you must confirm that the account exists and that the account address is correct, otherwise permission governance will be unavailable because there is no governance committee authority.**。
+- `-a`: enables the permission settings and specifies an account address as the only account for initializing the governance committee. **When specifying it, you must confirm that the account exists and that the address is correct; otherwise permission governance will be unusable because no account holds governance committee authority.**
#### 1.1.1 Examples of enabling permission governance
-Use '-A 'option to enable permission mode, you can see that' Auth Mode 'has been enabled,' Auth init account 'initial account is' 0x976fe0c250181c7ef68a17d3bc34916978da103a '。
+Use the `-A` option to enable the permission mode. You can see that `Auth Mode` is enabled and the `Auth init account` initial account is `0x976fe0c250181c7ef68a17d3bc34916978da103a`.
**Note:** Since version 3.3, the `build_chain.sh` script enables the permission mode by default. If no account address is specified, an account public-private key pair is generated by default and placed in the chain's `ca` directory. For creating and using accounts, refer to: [Creating and Using Accounts](../develop/account.md).
```shell
-## If you use-A option, the permission setting is turned on, and an account address is randomly generated as the only admin account for initializing the governance committee.
+## If the -A option is used, the permission setting is turned on and an account address is randomly generated as the only admin account to initialize the governance committee bash build_chain.sh -l 127.0.0.1:4 -o nodes -A [INFO] Downloading fisco-bcos binary from https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.0.0/fisco-bcos-linux-x86_64.tar.gz ... @@ -71,11 +71,11 @@ ls nodes/ca/accounts 0x976fe0c250181c7ef68a17d3bc34916978da103a.pem 0x976fe0c250181c7ef68a17d3bc34916978da103a.public.pem ``` -Use '-a 'option to enable permission mode, specify the account address as the initial governance member, you can see that' Auth Mode 'has been enabled,' Auth init account 'initial account is' 0x976fe0c250181c7ef68a17d3bc34916978da103a ' +Use the '-a' option to enable permission mode, specify the account address as the initial governance member, you can see that 'Auth Mode' is enabled, and the initial account of 'Auth init account' is' 0x976fe0c250181c7ef68a17d3bc34916978da103a ' ```shell -## If you use-a option, the permission settings are turned on and the account address is specified as the only admin account for initializing the governance committee +## If you use the -a option, turn on permission settings and specify the account address as the only admin account to initialize the governance committee bash build_chain.sh -l 127.0.0.1:4 -o nodes -a 0x976fe0c250181c7ef68a17d3bc34916978da103a [INFO] Generate ca cert successfully! 
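The notes above stress that the chain cannot verify ownership of the address passed to '-a'; it can only check the address's shape (a point repeated later: "Only length and character checks will be done here, not correctness checks"). A minimal Python sketch of that kind of shallow check — `looks_like_account_address` is a hypothetical helper for illustration, not part of the FISCO BCOS tooling:

```python
import re

def looks_like_account_address(addr: str) -> bool:
    """Shallow format check: '0x' followed by 40 hex characters.
    This cannot prove that anyone holds the private key for the
    address -- which is why the docs insist the account must exist."""
    return re.fullmatch(r"0x[0-9a-fA-F]{40}", addr) is not None

# The initial governance account from the examples passes the check:
print(looks_like_account_address("0x976fe0c250181c7ef68a17d3bc34916978da103a"))  # True
# A truncated or unprefixed address fails it:
print(looks_like_account_address("0x976fe0c2"))  # False
```

Passing an address that satisfies this check but whose private key nobody holds would still leave the chain without a usable governance committee.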
@@ -104,7 +104,7 @@ Processing IP:127.0.0.1 Total:4
#### 1.1.2 View node permission configuration
-Either use '-A 'or'-a 'option enables permission governance, which is reflected in the configuration of each node。When the node starts initialization, it will read the configuration and initialize the permission contract.。
+Whether permission governance is enabled with the '-A' or the '-a' option, it is reflected in the configuration of each node. When a node starts up, it reads this configuration and initializes the permission contracts.
Let's take 'nodes / 127.0.0.1 / node0 / config.genesis' as an example:
@@ -121,9 +121,9 @@ Let's take 'nodes / 127.0.0.1 / node0 / config.genesis' as an example:
### 1.2 FISCO BCOS Pro / Max Edition Opens Permission Governance
-FISCO BCOS Pro version of the build chain deployment tool details please refer to: [build Pro version of the blockchain network](../tutorial/pro/installation.md)。Take BcosBuilder as an example to enable permission governance settings.。
+For details of the FISCO BCOS Pro chain deployment tool, see [Build a Pro blockchain network](../tutorial/pro/installation.md). The following takes BcosBuilder as an example to enable permission governance.
-Before enabling the Pro / Max blockchain network permission mode, ensure that [Deploy Pro Blockchain Node] has been completed.(../tutorial/pro/installation.html#id4)All previous steps。
+Before enabling the permission mode of a Pro/Max blockchain network, make sure that all steps before [Deploy Pro blockchain nodes](../tutorial/pro/installation.html#id4) have been completed.
When copying a configuration file, you need to manually configure permissions to initialize the configuration。To copy a configuration file, refer to: [Deploying RPC Services](../tutorial/pro/installation.html#rpc)
@@ -151,18 +151,18 @@ init_auth_address="0x976fe0c250181c7ef68a17d3bc34916978da103a"
...
```
-After completing the configuration items, you can continue to deploy RPC services, GateWay services, and node services.。Continue process reference: [Deploy RPC Service](../tutorial/pro/installation.html#rpc)
+After completing these configuration items, you can continue to deploy the RPC, Gateway, and node services. For the subsequent process, see [Deploy RPC Service](../tutorial/pro/installation.html#rpc)
### 1.3 Dynamic opening permissions
**Note:** This section applies only to binary and chain versions 3.3.0 and later。
-After the blockchain is started, if the previous data of the blockchain has not been configured with the functions of the permission part, you can use the 'initAuth' command in the console to initialize the chain's governance committee, and then use the 'setSystemConfigByKey' to enable permission checking.。Specific steps are as follows:
+After the blockchain has started, if its existing data was not configured with the permission features, you can use the 'initAuth' command in the console to initialize the chain's governance committee and then use 'setSystemConfigByKey' to enable permission checking. The specific steps are as follows:
-**When appointing the first governance member, you must confirm that the account exists and that the account address is correct, otherwise it will result in the inability to govern the authority because there is no governance member authority.**。
+**When appointing the first governance member, you must confirm that the account exists and that the account address is correct; otherwise permission governance will be unavailable because no account holds governance member authority.**
```shell
-# First, you must upgrade the data version number of the chain to 3.3.0 and above. If it is already 3.3.0 or above, skip this step.
+# First, you must upgrade the data version number of the chain to 3.3.0 and above. 
If it is already 3.3.0 or above, skip this step
[group0]: /apps> setSystemConfigByKey compatibility_version 3.3.0
{
"code":0,
@@ -188,7 +188,7 @@ ParticipatesRate: 0% , WinRate: 0%
Governor Address | Weight
index0 : 0xf4a3d4177ce2bad732a63702a751a8e41ea6bce4 | 1
-# At this point, the permission management committee has been initialized and permission checking has not been enabled. Use the setSystemConfigByKey command to enable the permission check.
+# At this point, the governance committee has been initialized but permission checking is not yet enabled. Use the setSystemConfigByKey command to enable the permission check
[group0]: /apps> setSystemConfigByKey auth_check_status 1
{
"code":0,
@@ -212,13 +212,13 @@ This section describes the check that closes the authority, the governance commi
## 2. Console use
-The console has commands dedicated to permission governance and commands to switch console accounts.。You can use the console to manage permissions. For more information, see [Permission command](../operation_and_maintenance/console/console_commands.html#id14)。The permission governance command will only appear if the console is connected to the node with permission governance enabled.。
+The console provides commands dedicated to permission governance and commands for switching console accounts. You can use the console to manage permissions; for more information, see [Permission commands](../operation_and_maintenance/console/console_commands.html#id14). The permission governance commands only appear when the console is connected to a node with permission governance enabled.
Console operation commands include the following three types. For details, see [Permission Operation Commands](../operation_and_maintenance/console/console_commands.html#id14):
- Query status command, which has no permission control and is accessible to all accounts。
-- Governance Committee Special Orders, which can only be used if the account of the Governance Committee is held。
-- Contract administrator-specific commands that can only be accessed by an administrator account with administrative privileges on a contract。
+- Governance committee-specific commands, which can only be used when holding a governance committee account.
+- Contract administrator-specific commands, which can only be used by an administrator account with administrative privileges on the contract.
## 3. Use examples
@@ -226,7 +226,7 @@ First, use the build _ chain.sh script to build a blockchain with permission mod
Reference here [Creating and Using an Account](./account.md)link to create a new account, specifying that the account address of the initialization governance member is 0x1cc06388cd8a12dcf7fb8967378c0aea4e6cf642
-You can use '-A 'option to automatically generate an account。Accounts are distinguished between state and non-state secrets and are automatically generated based on the type of chain。
+You can also use the '-A' option to generate an account automatically. Accounts are divided into SM-crypto and non-SM-crypto types and are generated automatically according to the chain type.
```shell
bash build_chain.sh -l 127.0.0.1:4 -o nodes4 -a 0x1cc06388cd8a12dcf7fb8967378c0aea4e6cf642
@@ -234,9 +234,9 @@ You can use '-A 'option to automatically generate an account。Accounts are dist
### 3.1 Use of governance members
-Use the 'getCommitteeInfo' command to see that there is only one governance committee at initialization, with a weight of 1.
+Use the 'getCommitteeInfo' command to see that at initialization there is only one governance member, with a weight of 1
-And the account used in the current console is the member.
+And the account currently used by the console is this member
```shell
[group0]: /> getCommitteeInfo
@@ -259,7 +259,7 @@ As you can see, a proposal was launched, proposal number 1。
Because the current governance committee has only one member and both the participation threshold and the weight threshold are zero, the proposal initiated is certain to succeed。
-Use the 'getCommitteeInfo' command to see that the weight of the governance committee has indeed been updated.
+Use the 'getCommitteeInfo' command to confirm that the governance member's weight has indeed been updated
```shell
[group0]: /> updateGovernorProposal 0x1cc06388cd8a12dcf7fb8967378c0aea4e6cf642 2
@@ -278,7 +278,7 @@ Against Voters:
You can also add new governance members using 'updateGovernorProposal':
-Only length and character checks will be done here, not correctness checks.。You can see the successful addition of a governance committee with a weight of 1
+Only length and character checks are performed here, not correctness checks. You can see that a governance member with a weight of 1 was added successfully
```shell
[group0]: /> updateGovernorProposal 0xba0cd3e729cfe3ebdf1f74a10ec237bfd3954e1e 1
@@ -296,7 +296,7 @@ Against Voters:
You can also use 'updateGovernorProposal' to delete governance members:
-If the account weight is set to 0, the governance member is deleted.
+If the account weight is set to 0, the governance member is deleted
```shell
[group0]: /> updateGovernorProposal 0xba0cd3e729cfe3ebdf1f74a10ec237bfd3954e1e 0
@@ -352,7 +352,7 @@ At this point, the Commission's participation rate must be greater than 51, the
Use the current account to initiate the 'setDeployAuthTypeProposal' proposal, change the global deployment permission policy, and use the whitelist mode。
-At this point, you can see that the type of the sixth proposal is' setDeployAuthType 'and the status is' notEnoughVotes'. The proposal cannot be passed yet, and the current deployment permission policy is still in the no-policy state.。
+At this point, you can see that the type of the sixth proposal is 'setDeployAuthType' and its status is 'notEnoughVotes'. The proposal cannot be passed yet, and the current deployment permission policy is still in the no-policy state.
```shell
[group0]: /> setDeployAuthTypeProposal white_list
@@ -371,7 +371,7 @@ Against Voters:
There is no deploy strategy, everyone can deploy contracts.
```
-Switch to another committee account and vote on proposal 6, you can see that the vote was successful and the proposal status changed to end.。Deployment policy also becomes whitelist mode。
+Switch to another committee account and vote on proposal 6. You can see that the vote succeeded and the proposal status changed to 'end'. The deployment policy also becomes whitelist mode.
```shell
[group0]: /> loadAccount 0xba0cd3e729cfe3ebdf1f74a10ec237bfd3954e1e
@@ -396,9 +396,9 @@ Deploy strategy is White List Access.
### 3.2 Deployment Permissions
-Continue. The deployment permission of the current chain is in whitelist mode.。
+Continuing, the deployment permission of the current chain is now in whitelist mode.
-The governance committee does not have the permission to deploy, but the governance committee can initiate the deployment permission to open an account.。
+Governance committee members do not themselves have deployment permission, but they can initiate a proposal to grant deployment permission to an account.
You can also initiate a proposal to turn off deployment permissions through the command 'closeDeployAuthProposal'
@@ -456,11 +456,11 @@ At this point, the HelloWorld contract administrator for address 0x33E56a083e135
### 3.3 Contract Administrator Use
-The contract administrator of the current HelloWorld contract 0x33E56a083e135936C1144960a708c43A661706C0 is the '0xab835e87a86f94af10c81278bb9a82ea13d82d39' account.
+The contract administrator of the current HelloWorld contract 0x33E56a083e135936C1144960a708c43A661706C0 is the '0xab835e87a86f94af10c81278bb9a82ea13d82d39' account
The contract administrator can set the interface policy for the current contract:
-The contract administrator's "set" to the HelloWorld contract.(string)"The contract sets the whitelist mode, and after the setting is successful, the administrator does not have permission to call set(string)Interface
+The contract administrator sets whitelist mode for the "set(string)" interface of the HelloWorld contract. After the setting succeeds, the administrator itself has no permission to call the set(string) interface
```shell
[group0]: /> getContractAdmin 0x33E56a083e135936C1144960a708c43A661706C0
@@ -540,12 +540,12 @@ Return values:(May the flame guide thee.)
Initiate a proposal to upgrade the logic of voting calculations。The upgrade proposal vote calculation logic is divided into the following steps:
-1. Write contracts based on interfaces.;
+1. Write the contract based on the interface;
2. Deploy the written contract on the chain and get the address of the contract;
3. Initiate a proposal to upgrade the voting calculation logic, enter the address of the contract as a parameter, and vote on it in the governance committee;
-4. After the vote is passed (the voting calculation logic is still the original logic at this time), the voting calculation logic is upgraded.;Otherwise do not upgrade。
+4. After the vote passes (at this point the vote calculation logic is still the original logic), the vote calculation logic is upgraded; otherwise it is not upgraded.
-The voting calculation logic contract can only be used according to a certain interface implementation.。For contract implementation, see the following interface contract 'VoteComputerTemplate.sol':
+The vote calculation logic contract must be implemented according to a specific interface. For the contract implementation, see the following interface contract 'VoteComputerTemplate.sol':
```solidity
// SPDX-License-Identifier: Apache-2.0
@@ -574,7 +574,7 @@ abstract contract VoteComputerTemplate is BasicAuth {
        address[] memory againstVoters
    ) public view virtual returns (uint8);
-    / / This is a verification interface for computational logic for other governance members to verify the validity of the contract.
+    // Verification interface for the vote calculation logic, used by other governance members to verify the validity of the contract
    function voteResultCalc(
        uint32 agreeVotes,
        uint32 doneVotes,
@@ -653,7 +653,7 @@ ParticipatesRate: 0% , WinRate: 0%
Governor Address | Weight
index0 : 0x4a37eba43c66df4b8394abdf8b239e3381ea4221 | 2
-# Deploy the VoteComputer contract. The first parameter 0x10001 is a fixed address, and the second parameter is the address of the current governance committee member Committee.
+# Deploy the VoteComputer contract. The first parameter 0x10001 is a fixed address, and the second parameter is the address of the current governance Committee contract
[group0]: /apps> deploy VoteComputer 0x10001 0xa0974646d4462913a36c986ea260567cf471db1f
transaction hash: 0x429a7ceccefb3a4a1649599f18b60cac1af040cd86bb8283b9aab68f0ab35ae4
contract address: 0x6EA6907F036Ff456d2F0f0A858Afa9807Ff4b788
diff --git a/3.x/en/docs/operation_and_maintenance/console/console_commands.md b/3.x/en/docs/operation_and_maintenance/console/console_commands.md
index 2e0adfa0a..01e0d7cb8 100644
--- a/3.x/en/docs/operation_and_maintenance/console/console_commands.md
+++ b/3.x/en/docs/operation_and_maintenance/console/console_commands.md
@@ -6,33 +6,33 @@ Tags: "console" "console commands" "command line interactive tools"
```eval_rst
.. important::
-    - "Console" only supports FISCO BCOS version 3.x, based on 'Java SDK <.. / sdk / java _ sdk / index.html >' _ implementation。
-    - You can use the command. "/ start.sh--version "View the current console version
+    - The console only supports FISCO BCOS 3.x, implemented based on `Java SDK <../sdk/java_sdk/index.html>`_.
+    - You can view the current console version with the command "./start.sh --version"
```
## Console Command Structure
Console commands consist of two parts, directives and directive-related parameters:
-- **Directive**: Instructions are operational commands that are executed, including instructions to query blockchain-related information, deploy contracts, and invoke contracts, some of which call JSON-RPC interface, so with JSON-RPC interface with same name。
+- **Directive**: Directives are the commands to be executed, including instructions for querying blockchain-related information, deploying contracts, and invoking contracts. 
Some of these instructions call the JSON-RPC interface, so they have the same name as the JSON-RPC interface.
**Tips: commands can be completed using the tab key, and support the up and down keys to display historical input commands。**
-- **Instruction-related parameters**: The parameters required by the instruction call interface, instructions and parameters, and parameters and parameters are separated by spaces, and JSON.-For detailed explanation of input parameters and obtaining information fields of RPC interface commands with the same name, refer to [JSON-RPC API](../../develop/api.md)。
+- **Instruction-related parameters**: the parameters required by the interface that the command invokes. Commands and parameters, as well as the parameters themselves, are separated by spaces. For detailed explanations of the input parameters and returned fields of commands sharing a name with JSON-RPC interfaces, see [JSON-RPC API](../../develop/api.md).
### Console Common Commands
### Contract Related Orders
-- Common deployment and invocation contracts
+- Commonly used contract deployment and invocation
- Deployment contract: [deploy](./console_commands.html#deploy)
-    - Call Contract: [call](./console_commands.html#call)
+    - Call contract: [call](./console_commands.html#call)
### Other Commands
-- Query the block height: [getBlockNumber](./console_commands.html#getblocknumber)
-- Query the consensus node list: [getSealerList](./console_commands.html#getsealerlist)
+- Query block height: [getBlockNumber](./console_commands.html#getblocknumber)
+- Query consensus node list: [getSealerList](./console_commands.html#getsealerlist)
- Query transaction receipt information: [getTransactionReceipt](./console_commands.html#gettransactionreceipt)
-- Toggle Group: [switch](./console_commands.html#switch)
+- Switch groups: [switch](./console_commands.html#switch)
### Shortcut Keys
@@ -47,16 +47,16 @@ Console commands consist of two parts, directives and directive-related paramete
When a console command is launched, the console obtains the command execution result and displays the execution result on the terminal. The execution result is divided into two categories:
-- **correct result:** The command returns the correct execution result, which is returned as a string or json.。
-- **Error result:** The command returns the execution result of the error, which is returned as a string or json.。
-    - Console command calls JSON-Error code when RPC interface [refer here](../../develop/api.html#rpc)。
-    - Error code when the command of the console calls the Precompiled Service interface [refer to here](../../develop/api.html#id5)。
+- **Correct result:** the command executed successfully; the result is returned as a string or JSON.
+- **Error result:** the command failed; the error is returned as a string or JSON.
+    - For the error codes when a console command calls the JSON-RPC interface, [refer here](../../develop/api.html#rpc).
+    - For the error codes when a console command calls the Precompiled Service interface, [refer here](../../develop/api.html#id5).
## Console Basic Commands
### 1. 
help
-Run help or h to view all commands in the console.。
+Run help or h to view all console commands.
```shell
[group0]: /apps> help
@@ -143,8 +143,8 @@ Run help or h to view all commands in the console.。
**Note:**
-- Help shows the meaning of each command is: command command function description
-- View the instructions for using a specific command and enter the command-h or\--Help View。For example:
+- The help output lists each command in the form: command name followed by a description of its function
+- To view the usage instructions of a specific command, enter the command with -h or --help. For example:
```shell
[group0]: /apps> getBlockByNumber -h
@@ -184,8 +184,8 @@ Deployment contract。(HelloWorld contract and KVTableTest are provided by defaut
**Solidity deployment parameters:**
-- Contract path: the path of the contract file, which supports relative path, absolute path, and default path.。When the user enters a file name, the file is obtained from the default directory, which is: 'contracts / solidity ', for example: HelloWorld。
+- Contract path: the path of the contract file; relative paths, absolute paths, and the default path are supported. When only a file name is entered, the file is looked up in the default directory 'contracts/solidity', for example: HelloWorld.
-- Enable static analysis: optional, the default is off。If enabled, static analysis of parallel field conflict domains is enabled to accelerate parallel contract execution。Static analysis takes a long time. Please be patient。
+- Enable static analysis: optional, off by default. If enabled, static analysis of parallel field conflict domains is performed to accelerate parallel contract execution. Static analysis takes a long time. 
Please be patient。
```shell
# Deploy HelloWorld contract, default path
@@ -207,14 +207,14 @@ contract address: 0x0102e8B6fC8cdF9626fDdC1C3Ea8C1E79b3FCE94
**Note:**
-- To deploy a contract written by a user, you can place the Solidity contract file in the 'contracts / solidity /' directory of the console root directory, and then deploy it.。Press tab to search for contract names in the 'contracts / consolidation /' directory。
-- If the contract to be deployed references another contract or library, the reference format is' import '."./XXX.sol";`。The relevant introduced contracts and library libraries are placed in the 'contracts / consolidation /' directory。
-- If the contract references a library, the name of the library file must start with the string 'Lib' to distinguish between a normal contract and a library file.。library library files cannot be deployed and called separately。
+- To deploy a user-written contract, place the Solidity contract file in the 'contracts/solidity/' directory under the console root directory and then deploy it. Press Tab to search for contract names in the 'contracts/solidity/' directory.
+- If the contract to be deployed references another contract or library, the reference format is `import "./XXX.sol";`. The referenced contracts and libraries should also be placed in the 'contracts/solidity/' directory.
+- If the contract references a library, the library file name must start with the string 'Lib' to distinguish library files from normal contracts. Library files cannot be deployed or called separately.
**Liquid deployment parameters:**
-- Binary file folder path: cargo-Both the wasm file and the ABI file compiled by liquid must be placed in the same path. Absolute paths and relative paths are supported.
-- Deploy BFS path: Path name in BFS file system
+- Binary folder path: the Wasm file and the ABI file compiled by cargo-liquid must be placed in the same path. 
Absolute paths and relative paths are supported
+- Deploy BFS path: the path name in the BFS file system
- Deployment construction parameters: Construction parameters required for deployment
```shell
@@ -227,7 +227,7 @@ currentAccount: 0x52d8001791a646d7e0d63e164731b8b7509c8bda
**deploy with BFS:**
-Supports creating an alias in BFS when deploying a contract, using the parameter'-l 'Link the deployed address of HelloWorld to the / apps / hello / v1 directory:
+You can create an alias in BFS when deploying a contract. Use the '-l' parameter to link the deployed HelloWorld address to the /apps/hello/v1 directory:
```shell
[group0]: /apps> deploy -l ./hello/v1 HelloWorld
@@ -249,9 +249,9 @@ Run call, call contract。
**Solidity call parameters:**
-- Contract path: the path of the contract file, which supports relative path, absolute path, and default path.。When the user enters a file name, the file is obtained from the default directory, which is: `contracts/solidity`。
-- Contract Address: Address obtained from the deployment contract。
-- Contract Interface Name: The name of the contract interface to call。
+- Contract path: the path of the contract file; relative paths, absolute paths, and the default path are supported. When only a file name is entered, the file is looked up in the default directory `contracts/solidity`.
+- Contract address: the address obtained when deploying the contract.
+- Contract interface name: the name of the contract interface to call.
- Parameters: Determined by contract interface parameters。**Parameters are separated by spaces;Array parameters need to be bracketed, such as [1,2,3], the array is a string or byte type, double quotation marks, such as ["alice," "bob"], note that the array parameters do not have spaces;Boolean type is true or false。**
```shell
@@ -283,7 +283,7 @@ Return values:()
Event logs
Event: {}
-# Call the get interface of HelloWorld to obtain the name string and check whether the setting takes effect.
+# Call the get interface of HelloWorld to obtain the name string and check whether the setting took effect
[group0]: /apps> call HelloWorld 0x4721D1A77e0E76851D460073E64Ea06d9C104194 get
---------------------------------------------------------------------------------------------
Return code: 0
@@ -299,8 +299,8 @@ Return values:(Hello, FISCO BCOS 3.0)
**Liquid parameter:**
-- Contract Path: The pathname in the BFS file system that was populated when the contract was deployed。
-- Contract Interface Name: The name of the contract interface to call。
+- Contract path: the path name in the BFS file system that was specified when the contract was deployed.
+- Contract interface name: the name of the contract interface to call.
- Parameters: Determined by contract interface parameters。**Parameters are separated by spaces;Array parameters need to be bracketed, such as [1,2,3], the array is a string or byte type, double quotation marks, such as ["alice," "bob"], note that the array parameters do not have spaces;Boolean type is true or false。**
```shell
@@ -340,7 +340,7 @@ Event: {}
**Call with BFS:**
-You can call a link file created in the BFS directory. The call gesture is similar to calling a normal contract.。
+You can call a link file created in the BFS directory. The calling style is similar to calling a normal contract.
```shell
[group0]: /apps> call ./hello/v1 set "Hello, BFS."
@@ -361,7 +361,7 @@ Event: {}
### 3. 
getCode
-Run getCode to query the contract binary code based on the contract address.。
+Run getCode to query a contract's binary code by its contract address.
Parameters:
- Contract address: 0x contract address(Deploy the contract to get the contract address)。
@@ -376,7 +376,7 @@ Parameters:
Show contract interface and Event list
Parameters:
-- Contract path: the path of the contract file, which supports relative path, absolute path, and default path.。When the user enters a file name, the file is obtained from the default directory, which is: 'contracts / solidity ', for example: TableTest。
+- Contract path: the path of the contract file; relative paths, absolute paths, and the default path are supported. When only a file name is entered, the file is looked up in the default directory 'contracts/solidity', for example: TableTest.
- Contract Name:(Optional)Contract name, which uses the contract file name as the contract name parameter by default
- Contract Address:(Optional)After the contract address is deployed, listAbi initiates a getAbi request to the node
@@ -414,7 +414,7 @@ Method list:
Run getDeployLog to query the group**by the current console**Log information of the deployment contract。Log information includes when the contract was deployed, group ID, contract name, and contract address。Parameters:
-- Number of log lines. Optional. The latest log information is returned based on the expected value. When the actual number of log lines is less than the expected value, the actual number of log lines is returned.。When the expected value is not given, the latest log information is returned as 20 by default.。
+- Number of log lines: optional. The latest log entries are returned according to the expected value entered; when the actual number of entries is smaller than the expected value, the actual number is returned. When no expected value is given, the latest 20 log entries are returned by default.
```shell
[group0]: /apps> getDeployLog 2
@@ -431,8 +431,8 @@ Run getDeployLog to query the group**by the current console**Log information of
### 6. listDeployContractAddress
-Lists all contract addresses deployed with the specified contract name.
-Lists the list of contract addresses generated by deploying a specified contract.
+Lists all contract addresses deployed under the specified contract name
+Lists the contract addresses generated by deploying the specified contract
- contractNameOrPath: Contract name or contract absolute path, which specifies the contract;
- recordNumber: The length of the displayed list of contract addresses, which defaults to 20
@@ -572,8 +572,8 @@ PeersInfo{
Run getBlockByHash to query the block information based on the block hash。
Parameters:
-- Block hash: the hash value of the block starting with 0x。
-- Transaction flag: false by default. Only transaction hash is displayed for transactions in the block. Set to true to display transaction details.。
+- Block hash: the hash value of the block, starting with 0x.
+- Transaction flag: false by default, so only transaction hashes are displayed for transactions in the block; set it to true to display full transaction details.
```shell
[group0]: /apps> getBlockByHash 0x2cc22006edec686f116ac6b41859f7b23fa9b39f8a2baef33f17da46bfd13d42
@@ -655,11 +655,11 @@ Parameters:
### 5. 
getBlockByNumber -Run getBlockByNumber to query the block information based on the block height.。 +Run getBlockByNumber to query the block information based on the block height。 Parameters: - Block height: decimal integer。 -- Transaction flag: false by default. Only transaction hash is displayed for transactions in the block. Set to true to display transaction details.。 +- Transaction flag: False by default, only transaction hash is displayed for transactions in the block, set to true, and transaction specific information is displayed。 ```shell [group0]: /apps> getBlockByNumber 1 @@ -838,10 +838,10 @@ Run setSystemConfigByKey to set system parameters as key-value pairs。Currently **Note:** When the permission governance mode is enabled, this command can only be used by the governance committee and cannot be directly called by the user. For details, see the command 'setSysConfigProposal' - `tx_count_limit`: Maximum number of packaged transactions in a block -- `tx_gas_price`: Transaction gas price. The default unit is wei. Kwei, mwei, gwei, szabo, finney, ether, Kether, Mether, and Gether are supported. +- `tx_gas_price`: Transaction gas price. The default unit is wei. Kwei, mwei, gwei, szabo, finney, ether, Kether, Mether, and Gether are supported - `tx_gas_limit`: Gas limits for trade execution - `consensus_leader_period`: Consensus Select Primary Interval -- `compatibility_version`: Data-compatible version number. After all binaries in the blockchain are upgraded to the latest version, you can upgrade the data-compatible version number to the latest version by using setSystemConfigByKey. +- `compatibility_version`: Data-compatible version number. 
After all binaries in the blockchain are upgraded to the latest version, you can upgrade the data compatibility version number to the latest version by using setSystemConfigByKey. - `auth_check_status`: (effective after 3.3.0) permission check status; if it is 0, all permission checks are turned off, and if it is not 0, all checks are turned on. Parameters: @@ -976,7 +976,7 @@ Run addSealer to add the node as a consensus node. Parameters: - Node nodeId -- node weight +- Node weight ```shell [group0]: /apps> addSealer bb21228b0762433ea6e4cb185e1c54aeb83cd964ec0e831f8732cb2522795bb569d58215dfbeb7d3fc474fdce33dc9a793d4f0e86ce69834eddc707b48915824 2 @@ -1006,9 +1006,9 @@ Parameters: ### 7. removeNode -Run removeNode to make the node exit the group. You can use the addSealer command to re-add the exited node as a consensus node and the addObserver command to add it as an observer node. +Run removeNode to make the node exit the group. You can use the addSealer command to re-add the exited node as a consensus node and the addObserver command to add it as an observer node. -**Note:** When the permission governance mode is enabled, this command can only be used by the governance committee and cannot be called directly by users. For details, see the command 'removeNodeProposal'. +**Note:** When the permission governance mode is enabled, this command can only be used by the governance committee and cannot be called directly by users. For details, see the command 'removeNodeProposal'. Parameters: @@ -1054,11 +1054,11 @@ t_demo **Note:** -- The length of the table name together with its prefix cannot exceed 50 characters. For example, the length of /tables/t_demo cannot exceed 50. -- The field types of the created table are all string types; even if other database field types are specified, they are treated as strings. -- A primary key field must be specified. For example, create a t_demo table.
The primary key field is name. -- The primary key of a table is not the same as the primary key in a relational database; the value of the primary key is not unique here. -- You can specify a field as the primary key, but modifier keywords such as auto-increment, non-null, and index do not take effect. +- The length of the table name together with its prefix cannot exceed 50 characters. For example, the length of /tables/t_demo cannot exceed 50. +- The field types of the created table are all string types; even if other database field types are specified, they are treated as strings. +- A primary key field must be specified. For example, create a t_demo table; the primary key field is name. +- The primary key of a table is not the same as the primary key in a relational database: here the primary key value is not unique, and the primary key value must be passed in when the blockchain's underlying storage processes records. +- You can specify a field as the primary key, but modifier keywords such as auto-increment, non-null, and index do not take effect. ### 2. [alter sql] @@ -1085,14 +1085,14 @@ Alter 't_demo' Ok. **Note:** - The modified table must exist, and currently **only adding new fields is supported** -- The field types of the created table are all string types; even if other database field types are specified, they are treated as strings, and field names cannot be repeated. +- The field types of the created table are all string types; even if other database field types are specified, they are treated as strings, and field names cannot be repeated. ### 3. desc Run the desc statement to query the field information of a table, using mysql statement syntax. ```shell # Queries the field information of the t_demo table.
You can view the primary key name and other field names of the table [group0]: /apps> desc t_demo { "key_field":[ @@ -1118,13 +1118,13 @@ Insert OK: **Note:** - The insert record sql statement must insert a value for the primary key field of the table. -- If the entered value is a string that contains punctuation or spaces, or starts with a number, it must be enclosed in double quotation marks; double quotation marks are not allowed inside the string. +- If the entered value is a string that contains punctuation or spaces, or starts with a number, it must be enclosed in double quotation marks; double quotation marks are not allowed inside the string. ### 5. [select sql] Run the select sql statement to query records, using mysql statement syntax. -Unlike regular SQL, conditional statements of the traversal interface currently only support conditions on the key field. +Unlike regular SQL, conditional statements of the traversal interface currently only support conditions on the key field. ```text # Query records with all fields @@ -1160,10 +1160,10 @@ Insert OK, 1 row affected. **Note:** -- The query record sql statement must provide the primary key field value of the table in the where clause. -- The limit field of relational databases can be used, providing two parameters: offset and number of records (count). -- The where clause only supports the AND keyword. Other OR, IN, LIKE, INNER, JOIN, UNION, subqueries, and multi-table join queries are not supported. -- If the entered value is a string that contains punctuation or spaces, or starts with a number, it must be enclosed in double quotation marks; double quotation marks are not allowed inside the string. +- The query record sql statement must provide the primary key field value of the table in the where clause. +- The limit field of relational databases can be used, providing two parameters: offset and number of records (count). +- The where clause only supports the AND keyword.
Other OR, IN, LIKE, INNER, JOIN, UNION, subqueries, and multi-table join queries are not supported. +- If the entered value is a string that contains punctuation or spaces, or starts with a number, it must be enclosed in double quotation marks; double quotation marks are not allowed inside the string. ### 6. [update sql] @@ -1176,8 +1176,8 @@ Update OK, 1 row affected. **Note:** -- The where clause of the update record sql statement currently only supports conditions on the primary key field value of the table. -- If the entered value is a string that contains punctuation or spaces, or starts with a number, it must be enclosed in double quotation marks; double quotation marks are not allowed inside the string. +- The where clause of the update record sql statement currently only supports conditions on the primary key field value of the table. +- If the entered value is a string that contains punctuation or spaces, or starts with a number, it must be enclosed in double quotation marks; double quotation marks are not allowed inside the string. ### 7. [delete sql] @@ -1190,14 +1190,14 @@ Remove OK, 1 row affected. **Note:** -- The where clause of the delete record sql statement currently only supports conditions on the primary key field value of the table. -- If the entered value is a string that contains punctuation or spaces, or starts with a number, it must be enclosed in double quotation marks; double quotation marks are not allowed inside the string. +- The where clause of the delete record sql statement currently only supports conditions on the primary key field value of the table. +- If the entered value is a string that contains punctuation or spaces, or starts with a number, it must be enclosed in double quotation marks; double quotation marks are not allowed inside the string. ## BFS Operation Commands ### 1.
cd -Similar to the Linux cd command, you can switch the current path; absolute and relative paths are supported. +Similar to the Linux cd command, you can switch the current path; absolute and relative paths are supported. ```shell [group0]: /apps> cd ../tables @@ -1215,9 +1215,9 @@ Similar to the Linux cd command, you can switch the current path and support absolute and relative paths ### 2. ls -Similar to the Linux ls command, you can view the resources in the current path. If it is a directory, all resources in the directory are displayed; if it is a contract, the contract's meta information is displayed. +Similar to the Linux ls command, you can view the resources in the current path. If it is a directory, all resources in the directory are displayed; if it is a contract, the contract's meta information is displayed. -When ls is given no parameter, the current folder is displayed; when ls is given one parameter, absolute and relative paths are supported. +When ls is given no parameter, the current folder is displayed; when ls is given one parameter, absolute and relative paths are supported. ```shell [group0]: /> ls @@ -1232,7 +1232,7 @@ name: Hello, type: contract ### 3. mkdir -Similar to the mkdir command in Linux, creates a new directory under a folder; absolute and relative paths are supported. +Similar to the mkdir command in Linux, creates a new directory under a folder; absolute and relative paths are supported. ```shell [group0]: /> mkdir /apps/test @@ -1250,11 +1250,11 @@ test ### 4.
ln -Similar to the Linux ln command, you can create a link to a contract resource and initiate a call to the actual contract by calling the link directly. +Similar to the Linux ln command, you can create a link to a contract resource and initiate a call to the actual contract by calling the link directly. Similar to the CNS service of version 2.0, relying on the BFS multi-level directory, you can establish a mapping between the contract name plus contract version number and the contract address. -For example, if the contract name is Hello and the contract version number is latest, you can create a soft link '/apps/Hello/latest' in the '/apps' directory. Similarly, users can create multiple versions under '/apps/Hello', for example: '/apps/Hello/newOne', '/apps/Hello/layerTwo', etc. +For example, if the contract name is Hello and the contract version number is latest, you can create a soft link '/apps/Hello/latest' in the '/apps' directory. Similarly, users can create multiple versions under '/apps/Hello', for example: '/apps/Hello/newOne', '/apps/Hello/layerTwo', etc. ```bash # Create a contract soft link with the contract name Hello and the contract version latest @@ -1264,7 +1264,7 @@ For example, if the contract name is Hello and the contract version number is latest "msg":"Success" } -# The link file is created in the /apps/ directory. +# The link file is created in the /apps/ directory. [group0]: /apps> ls ./Hello/latest latest -> 19a6434154de51c7a7406edf312f01527441b561 @@ -1296,7 +1296,7 @@ latest -> 2b5dcbae97f9d9178e8b051b08c9fb4089bae71b ### 5.
tree -Similar to the tree command in Linux, displays the resources under the specified BFS path in a tree structure. The default depth is 3; you can use a parameter to set the depth, up to 5. +Similar to the tree command in Linux, displays the resources under the specified BFS path in a tree structure. The default depth is 3; you can use a parameter to set the depth, up to 5. ```bash [group0]: /apps> tree .. @@ -1344,7 +1344,7 @@ Similar to the Linux pwd command, no parameters, shows the current path. ### 1. getGroupPeers -Run getGroupPeers to view the list of consensus nodes and observer nodes in the group where the node is located. +Run getGroupPeers to view the list of consensus nodes and observer nodes in the group where the node is located. ```shell [group0]: /apps> getGroupPeers @@ -1570,7 +1570,7 @@ Run the getGroupNodeInfo command to get information about a node in the current ## Permission Operation Commands -Permission governance operation commands are divided into: commands for querying permission governance status, commands specific to the governance committee, and commands specific to the contract administrator. +Permission governance operation commands are divided into: commands for querying permission governance status, commands specific to the governance committee, and commands specific to the contract administrator. ### 1. Query permission governance commands This type of command has no permission control and is accessible to all accounts #### 1.1.
getCommitteeInfo -At initialization, a governance committee is deployed, whose address information is automatically generated or specified in build_chain.sh. Only one member is initialized, and the weight of that member is 1 +At initialization, a governance committee is deployed, whose address information is automatically generated or specified in build_chain.sh. Only one member is initialized, and the weight of that member is 1 ```shell [group0]: /apps> getCommitteeInfo @@ -1594,30 +1594,30 @@ index0 : 0x7fb008862ff69353a02ddabbc6cb7dc31683d0f6 | 1 #### 1.2. getProposalInfo -Obtain proposal information for a specific range in batches. If only a single ID is entered, the proposal information for that ID is returned. +Obtain proposal information for a specific range in batches. If only a single ID is entered, the proposal information for that ID is returned. 'proposalType' and 'status' show the type and status of the proposal ProposalType is divided into the following categories: -- setWeight: generated when the governance committee initiates an updateGovernorProposal -- setRate: generated by setRateProposal -- setDeployAuthType: generated by setDeployAuthTypeProposal -- modifyDeployAuth: generated by openDeployAuthProposal and closeDeployAuthProposal -- resetAdmin: generated by resetAdminProposal +- setWeight: generated when the governance committee initiates an updateGovernorProposal +- setRate: generated by setRateProposal +- setDeployAuthType: generated by setDeployAuthTypeProposal +- modifyDeployAuth: generated by openDeployAuthProposal and closeDeployAuthProposal +- resetAdmin: generated by resetAdminProposal - setConfig: generated by setSysConfigProposal - setNodeWeight: generated by addObserverProposal, addSealerProposal, and setConsensusNodeWeightProposal -- removeNode: generated by removeNodeProposal -- unknown: when this type appears, there may be a bug +- removeNode: generated by removeNodeProposal +- unknown: when this type appears,
there may be a bug status is divided into the following categories: -- notEnoughVotes: the proposal is normal, but not enough votes have been collected yet -- finish: proposal execution complete -- failed: proposal failed -- revoke: proposal withdrawn -- outdated: proposal exceeded the voting deadline -- unknown: when this type appears, there may be a bug +- notEnoughVotes: the proposal is normal, but not enough votes have been collected yet +- finish: proposal execution complete +- failed: proposal failed +- revoke: proposal withdrawn +- outdated: proposal exceeded the voting deadline +- unknown: when this type appears, there may be a bug ```shell [group0]: /apps> getProposalInfo 1 @@ -1649,7 +1649,7 @@ Against Voters: #### 1.3. getLatestProposal -To avoid situations where a proposal times out or the proposal ID is forgotten after exiting the console, the getLatestProposal command obtains the latest proposal information of the current committee. +To avoid situations where a proposal times out or the proposal ID is forgotten after exiting the console, the getLatestProposal command obtains the latest proposal information of the current committee. ```shell [group0]: /apps> getLatestProposal @@ -1670,15 +1670,15 @@ Against Voters: Permission policies are divided into: - No permission control: everyone can deploy -- Blacklist: users on the blacklist cannot deploy -- Whitelist: only whitelisted users can deploy +- Blacklist: users on the blacklist cannot deploy +- Whitelist: only whitelisted users can deploy ```shell [group0]: /apps> getDeployAuth There is no deploy strategy, everyone can deploy contracts. ``` -Governance-committee-specific commands must be called from an account among the Governance Committee's Governors.
+Governance-committee-specific commands must be called from an account among the Governance Committee's Governors. If there is only one governance committee member and the proposal is initiated by that member, then the proposal is bound to succeed @@ -1687,11 +1687,11 @@ If there is only one governance committee member and the proposal is initiated b Check if the account has deployment permission ```shell -# The current deployment permission is in whitelist mode. +# The current deployment permission is in whitelist mode. [group0]: /apps> getDeployAuth Deploy strategy is White List Access. -# If no parameter is given, check whether the current account has deployment permission. +# If no parameter is given, check whether the current account has deployment permission. [group0]: /apps> checkDeployAuth Deploy : PERMISSION DENIED Account: 0x7fb008862ff69353a02ddabbc6cb7dc31683d0f6 @@ -1704,7 +1704,7 @@ Account: 0xea9b0d13812f235e4f7eaa5b6131794c9c755e9a #### 1.6. getContractAdmin -Use this command to obtain the administrator of a contract. Only the administrator can control the permissions of the contract. +Use this command to obtain the administrator of a contract. Only the administrator can control the permissions of the contract. ```shell # The admin account of the contract at address 0xCcEeF68C9b4811b32c75df284a1396C7C5509561 is 0x7fb008862ff69353a02ddabbc6cb7dc31683d0f6 @@ -1714,7 +1714,7 @@ Admin for contract 0xCcEeF68C9b4811b32c75df284a1396C7C5509561 is: 0x7fb008862ff6 #### 1.7. checkMethodAuth -Check whether the account has permission to call a contract interface. +Check whether the account has permission to call a contract interface. ```shell # Set the set(string) interface of the contract at address 0x600E41F494CbEEd1936D5e0a293AEe0ab1746c7b to whitelist mode @@ -1724,7 +1724,7 @@ Check whether the account has permission to call a contract interface.
"msg":"Success" } -# If no parameter is selected, check whether the current account has the calling permission. +# If no parameter is selected, check whether the current account has the calling permission [group0]: /apps> checkMethodAuth 0x600E41F494CbEEd1936D5e0a293AEe0ab1746c7b set(string) Method : PERMISSION DENIED Account : 0xea9b0d13812f235e4f7eaa5b6131794c9c755e9a @@ -1765,7 +1765,7 @@ Block address : #### 1.9. getContractStatus -Obtain the status of a contract. Currently, there are only two statuses: frozen and normal access. +Obtain the status of a contract. Currently, there are only two statuses: frozen and normal access ```shell [group0]: /apps> getContractStatus 0x31eD5233b81c79D5adDDeeF991f531A9BBc2aD01 @@ -1783,11 +1783,11 @@ Unavailable ### 2. Special Order of Governance Committee -These orders can only be used by holding the account of the governance committee.。 +These orders can only be used by holding the account of the governance committee。 #### 2.1. updateGovernorProposal -In the case of a new governance committee, add an address and weight.。 +In the case of a new governance committee, add an address and weight。 If you are deleting a governance member, you can set the weight of a governance member to 0 @@ -1829,7 +1829,7 @@ Against Voters: #### 2.3. setDeployAuthTypeProposal -Set the ACL policy for deployment. Only white _ list and black _ list policies are supported. +Set the ACL policy for deployment. Only white _ list and black _ list policies are supported ```shell [group0]: /apps> setDeployAuthTypeProposal white_list @@ -2081,12 +2081,12 @@ Against Voters: Initiate a proposal to upgrade the logic of voting calculations。The upgrade proposal vote calculation logic is divided into the following steps: -1. Write contracts based on interfaces.; +1. Write contracts based on interfaces; 2. Deploy the written contract on the chain and get the address of the contract; 3. 
Initiate a proposal to upgrade the voting calculation logic, passing the contract address as a parameter, and have the governance committee vote on it; -4. After the vote passes (at this point the voting calculation logic is still the original logic), the voting calculation logic is upgraded; otherwise it is not upgraded. +4. After the vote passes (at this point the voting calculation logic is still the original logic), the voting calculation logic is upgraded; otherwise it is not upgraded. -The voting calculation logic contract must be implemented according to a specific interface. For the contract implementation, see the following interface contract 'VoteComputerTemplate.sol': +The voting calculation logic contract must be implemented according to a specific interface. For the contract implementation, see the following interface contract 'VoteComputerTemplate.sol': ```solidity // SPDX-License-Identifier: Apache-2.0 @@ -2115,7 +2115,7 @@ abstract contract VoteComputerTemplate is BasicAuth { address[] memory againstVoters ) public view virtual returns (uint8); - // This is a verification interface for the calculation logic, used by other governance members to verify the validity of the contract. + // This is a verification interface for the calculation logic, used by other governance members to verify the validity of the contract. function voteResultCalc( uint32 agreeVotes, uint32 doneVotes, @@ -2194,7 +2194,7 @@ ParticipatesRate: 0% , WinRate: 0% Governor Address | Weight index0 : 0x4a37eba43c66df4b8394abdf8b239e3381ea4221 | 2 -# Deploy the VoteComputer contract.
The first parameter 0x10001 is a fixed address, and the second parameter is the address of the current governance committee member Committee. [group0]: /apps> deploy VoteComputer 0x10001 0xa0974646d4462913a36c986ea260567cf471db1f transaction hash: 0x429a7ceccefb3a4a1649599f18b60cac1af040cd86bb8283b9aab68f0ab35ae4 contract address: 0x6EA6907F036Ff456d2F0f0A858Afa9807Ff4b788 @@ -2234,7 +2234,7 @@ Agree Voters: --------------------------------------------------------------------------------------------- Against Voters: -# At this point, log in with another simulated governance committee account. +# At this point, log in with another simulated governance committee account. [group0]: /apps> loadAccount 0xea9b0d13812f235e4f7eaa5b6131794c9c755e9a Load account 0xea9b0d13812f235e4f7eaa5b6131794c9c755e9a success! @@ -2402,7 +2402,7 @@ These commands are only accessible to an administrator account that has administ Permission policy for the administrator to set methods -**Special note: the interface permission control of a contract can currently only control write methods.** +**Special note: the interface permission control of a contract can currently only control write methods.** ```shell # Set the set(string) interface of the HelloWorld contract at address 0xCcEeF68C9b4811b32c75df284a1396C7C5509561 to whitelist mode @@ -2412,7 +2412,7 @@ These commands are only accessible to an administrator account that has administ "msg":"Success" } -# This interface is currently in whitelist mode.
Only accounts in the whitelist can call the set interface. [group0]: /apps> call HelloWorld 0xCcEeF68C9b4811b32c75df284a1396C7C5509561 set 123 transaction hash: 0x51e43a93b8e6621e45357ba542112117c3dd3e089b5067e06084e36243458074 --------------------------------------------------------------------------------------------- @@ -2503,7 +2503,7 @@ Return message: Permission denied Run freezeContract to freeze the specified contract. Parameters: -- Contract address: the contract address obtained when deploying the contract; the 0x prefix is not required. +- Contract address: the contract address obtained when deploying the contract; the 0x prefix is not required. ```shell [group0]: /apps> deploy HelloWorld @@ -2539,7 +2539,7 @@ Return message: ContractFrozen Run unfreezeContract to unfreeze the specified contract. Parameters: -- Contract address: the contract address obtained when deploying the contract; the 0x prefix is not required. +- Contract address: the contract address obtained when deploying the contract; the 0x prefix is not required. ```shell [group0]: /apps> call HelloWorld 0xA28AC30A792A59C3CD114A87a75193C6B8278D7E get @@ -2572,11 +2572,11 @@ Return values:(Hello, World!) ### 1. newAccount -Create a new account for sending transactions. By default, the account is saved in the 'account' directory in 'PEM' format. +Create a new account for sending transactions. By default, the account is saved in the 'account' directory in 'PEM' format. ```shell -# The account file is automatically saved in the 'account/ecdsa' directory when the console is connected to a non-national-cryptography blockchain. -# The account file is automatically saved in the 'account/gm' directory when the console is connected to a national-cryptography blockchain.
+# The account file is automatically saved in the 'account/ecdsa' directory when the console is connected to a non-national-cryptography blockchain. +# The account file is automatically saved in the 'account/gm' directory when the console is connected to a national-cryptography blockchain. [group0]: /apps> newAccount AccountPath: account/ecdsa/0x1cc06388cd8a12dcf7fb8967378c0aea4e6cf642.pem Note: This operation does not create an account on the blockchain; it only creates a local account, and deploying a contract with this account will create an account on the blockchain @@ -2589,12 +2589,12 @@ $ -rw-r--r-- 1 octopus staff 258 9 30 16:34 account/ecdsa/0x1cc06388cd8a12dc ### 2. loadAccount -Load a private key file in 'PEM' or 'P12' format. The loaded private key can be used to sign and send transactions. However, if the console uses the public and private keys of a cipher machine, this command cannot be used, because the keys are kept inside the cipher machine. +Load a private key file in 'PEM' or 'P12' format. The loaded private key can be used to sign and send transactions. However, if the console uses the public and private keys of a cipher machine, this command cannot be used, because the keys are kept inside the cipher machine. Parameters: - Private key file path: supports relative paths, absolute paths, and the default path. By default, the account is loaded from the account configuration option 'keyStoreDir' of 'config.toml'. For details about the configuration item 'keyStoreDir', see [here](./sdk/java_sdk/config.html#id9). -- Account format: optional. The file type of the loaded account, either 'pem' or 'p12'. The default value is 'pem'. +- Account format: optional. The file type of the loaded account, either 'pem' or 'p12'.
The default value is 'pem'. ```shell [group0]: /apps> loadAccount 0x6fad87071f790c3234108f41b76bb99874a6d813 @@ -2611,11 +2611,11 @@ View all currently loaded account information 0x726d9f31cf44debf80b08a7e759fa98b360b0736 ``` -**Note: The private key account marked with the '<=' suffix is the account currently used to send transactions; it can be switched using 'loadAccount'.** +**Note: The private key account marked with the '<=' suffix is the account currently used to send transactions; it can be switched using 'loadAccount'.** ### 4. getCurrentAccount -Get the current account address. If the console uses the public and private keys of a cipher machine, the account address derived from the public key inside the cipher machine is displayed. +Get the current account address. If the console uses the public and private keys of a cipher machine, the account address derived from the public key inside the cipher machine is displayed. ```shell [group0]: /apps> getCurrentAccount @@ -2634,7 +2634,7 @@ Create a shard Parameters: -* Shard name: the name of the shard to be created; duplicates are not allowed. +* Shard name: the name of the shard to be created; duplicates are not allowed. ``` [group0]: /apps> makeShard hello_shard @@ -2689,7 +2689,7 @@ listBalanceGovernor: [0x77ed4ea0a43fb76a88ec81a466695a4a704bb30e] Note -* After the feature_balance_precompiled switch is turned on, the chain administrator account is added as an asset management account by default. +* After the feature_balance_precompiled switch is turned on, the chain administrator account is added as an asset management account by default; other accounts can be added through the interface. * Up to 500 accounts can be displayed; more than 500 accounts cannot be registered. @@ -2717,7 +2717,7 @@ listBalanceGovernor: [0x77ed4ea0a43fb76a88ec81a466695a4a704bb30e, 0x7ef1de472584 ### 3.
unregisterBalanceGovernor -Cancels the asset management permission of a registered account; only the chain administrator account has this permission. +Cancels the asset management permission of a registered account; only the chain administrator account has this permission. Parameters @@ -2751,13 +2751,13 @@ balance: 0 wei ### 5. addBalance -Increases the asset balance of an account; only accounts with asset management permission may call this interface. +Increases the asset balance of an account; only accounts with asset management permission may call this interface. Parameters * Account address: the address of the account whose assets are to be increased * Number of assets to add: the amount of assets to add; the default unit is wei -* Unit of asset quantity: optional. The default value is wei; kwei, mwei, gwei, szabo, finney, ether, Kether, Mether, and Gether are supported. +* Unit of asset quantity: optional. The default value is wei; kwei, mwei, gwei, szabo, finney, ether, Kether, Mether, and Gether are supported. ```shell [group0]: /apps> addBalance 0x77ed4ea0a43fb76a88ec81a466695a4a704bb30e 100 wei @@ -2777,13 +2777,13 @@ balance: 100100 wei ### 6. subBalance -Reduces the asset balance of the specified account; only accounts with asset management permission may call this interface. +Reduces the asset balance of the specified account; only accounts with asset management permission may call this interface. Parameters * Account address: the address of the account whose assets are to be reduced * Number of assets to reduce: the amount of assets to reduce; the default unit is wei -* Unit of asset quantity: optional. The default value is wei; kwei, mwei, gwei, szabo, finney, ether, Kether, Mether, and Gether are supported. +* Unit of asset quantity: optional.
The unit of asset quantity; the default value is wei. Kwei, mwei, gwei, szabo, finney, ether, Kether, Mether, and Gether are supported. ```shell [group0]: /apps> subBalance 0x77ed4ea0a43fb76a88ec81a466695a4a704bb30e 100 @@ -2803,14 +2803,14 @@ balance: 99000 wei ### 7. transferBalance -Transfers assets from one account to another; only accounts with asset management permission may call this interface. +Transfers assets from one account to another; only accounts with asset management permission may call this interface. Parameters * Transfer-out account address: the address of the account from which assets are transferred -* Transfer-in account address: the address of the account to which assets are transferred. +* Transfer-in account address: the address of the account to which assets are transferred. * Number of assets to transfer: the amount of assets to transfer; the default unit is wei -* Unit of asset quantity: optional. The default value is wei; kwei, mwei, gwei, szabo, finney, ether, Kether, Mether, and Gether are supported. +* Unit of asset quantity: optional. The default value is wei; kwei, mwei, gwei, szabo, finney, ether, Kether, Mether, and Gether are supported. ```shell [group0]: /apps> getBalance 0x77ed4ea0a43fb76a88ec81a466695a4a704bb30e diff --git a/3.x/en/docs/operation_and_maintenance/console/console_config.md b/3.x/en/docs/operation_and_maintenance/console/console_config.md index c79698857..86ae93247 100644 --- a/3.x/en/docs/operation_and_maintenance/console/console_config.md +++ b/3.x/en/docs/operation_and_maintenance/console/console_config.md @@ -6,22 +6,22 @@ Tags: "console" "Console Configuration" "Command Line Interactive Tools" ```eval_rst .. important:: - - "Console" only supports FISCO BCOS version 3.x, based on 'Java SDK <..
/ sdk / java _ sdk / index.html >' _ implementation。 - - You can use the command. "/ start.sh--version "View the current console version + - "Console" only supports FISCO BCOS 3.x version, based on `Java SDK <../sdk/java_sdk/index.html>`_ implementation。 + - You can view the current console version through the command `./start.sh --version` ``` -[CONSOLE](https://github.com/FISCO-BCOS/console)is an important interactive client tool for FISCO BCOS 3.x, which is available through the [Java SDK](../../sdk/java_sdk/index.md)Establish a connection with a blockchain node to implement read and write access requests for blockchain node data。The console has a wealth of commands, including querying blockchain status, managing blockchain nodes, deploying and invoking contracts, and more.。In addition, the console provides a contract compilation tool that allows users to quickly and easily integrate Solidity and webankblockchain-liquid contract file(Hereinafter referred to as WBC-liquid) the compiled WASM file is converted to a Java contract file.。 +[CONSOLE](https://github.com/FISCO-BCOS/console) is an important interactive client tool for FISCO BCOS 3.x, which establishes a connection with blockchain nodes through the [Java SDK](../../sdk/java_sdk/index.md) to implement read and write access requests for blockchain node data。The console has a wealth of commands, including querying blockchain status, managing blockchain nodes, deploying and invoking contracts, and more。In addition, the console provides a contract compilation tool that allows users to quickly and easily convert Solidity and webankblockchain-liquid (hereinafter referred to as wbc-liquid) contract files, and the WASM files compiled from them, into Java contract files。 -wbc-Please refer to [wbc] for building the liquid compilation environment.-environment configuration of liquid](https://liquid-doc.readthedocs.io/zh_CN/latest/docs/quickstart/prerequisite.html)。 +For details about how to set up the wbc-liquid 
compilation environment, see [wbc-liquid environment configuration](https://liquid-doc.readthedocs.io/zh_CN/latest/docs/quickstart/prerequisite.html)。 ## Console Configuration and Operation ```eval_rst .. important:: - Precondition: To build the FISCO BCOS blockchain, see 'Building the first blockchain network <.. /.. / quick _ start / air _ installation.html >' _ + Precondition: To build the FISCO BCOS blockchain, see `Building the first blockchain network <../../quick_start/air_installation.html>`_ Chain Building Tool Reference: - - 'Air version FISCO BCOS build chain script build _ chain <.. /.. / tutorial / air / build _ chain.html > '_ - - 'Pro version FISCO BCOS chain building tool BcosBuilder <.. /.. / tutorial / pro / pro _ builder.html > '_ + - `Air version FISCO BCOS build chain script build_chain <../../tutorial/air/build_chain.html>`_ + - `Pro version FISCO BCOS chain building tool BcosBuilder <../../tutorial/pro/pro_builder.html>`_ ``` ### 1. Get the console @@ -36,12 +36,12 @@ bash download_console.sh ```eval_rst .. 
note:: - - If you cannot download for a long time due to network problems, try 'curl-#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh && bash download_console.sh` + - If you cannot download for a long time due to network problems, please try `curl -#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh && bash download_console.sh` ``` #### 1.1 Getting consoles for other Solidity versions -Since the default Solc compiler version in the console is 0.8.11, you can use the following command to download the Solidity contract.(Currently supports 0.4.25, 0.5.2, 0.6.10, 0.8.11): +Since the default Solc compiler version in the console is 0.8.11, you can use the following command to download a console for another Solidity version(Currently supports 0.4.25, 0.5.2, 0.6.10, 0.8.11): ```shell # Download version 0.4.25 of the console @@ -58,7 +58,7 @@ bash download_console.sh -s 0.6 # And so on, support specified version(0.4,0.5,0 # In the directory where the script is executed, two new files are generated, similar to the other versions ls solcJ* solcJ-0.6.10.1.jar solcJ-0.6.tar.gz -# will solcJ-0.6.10.1.jar Manually replace the solcJ file in the lib directory of the folder where the console is located.。For example, the console directory is in this directory: +# Manually replace the solcJ file in the console's lib directory with solcJ-0.6.10.1.jar。For example, if the console directory is in the current directory: mv ./console/lib/solcJ-0.8.11.1.jar . 
&& cp solcJ-0.6.10.1.jar ./console/lib/ # At this point, the default Solidity version of the console is switched to version 0.6.10 ``` @@ -76,7 +76,7 @@ The configured console directory structure is as follows: ├── contracts # Contract Directory │ ├── console # Contract abi, bin, java file directory compiled during contract deployment in the console │ ├── sdk # contract abi, bin, java file directory compiled by sol2java.sh script -│ ├── liquid # WBC-Liquid contract storage directory +│ ├── liquid # WBC-Liquid contract storage directory │ └── solidity # Solidity contract storage directory │ └── HelloWorld.sol # Common contract: HelloWorld contract, deployable and callable │ └── KVTableTest.sol # Contracts using the KV storage interface: KVTableTest contract, which can be deployed and invoked @@ -84,14 +84,14 @@ The configured console directory structure is as follows: │-- start.sh # Console Startup Script │-- get_account.sh # Account Generation Script │-- get_gm_account.sh # Account Generation Script, State Secret Edition -│-- contract2java.sh # Solidity/WBC-Liquid contract files are compiled into development tool scripts for java contract files. +│-- contract2java.sh # Development tool script that compiles Solidity/WBC-Liquid contract files into Java contract files ``` ### 2. 
Configure Console - Configuration of blockchain nodes and certificates: - - Copy all files in the node sdk directory to the 'conf' directory。 - - Put the 'config' in the 'conf' directory-example.toml 'file renamed to' config.toml 'file。Configure the 'config.toml' file, where the content of the added comment is modified according to the blockchain node configuration。 + - Copy all files in the node sdk directory to the 'conf' directory。 + - Rename the 'config-example.toml' file in the 'conf' directory to 'config.toml'。Configure the 'config.toml' file, where the content of the added comment is modified according to the blockchain node configuration。 The sample configuration file is as follows: @@ -142,7 +142,7 @@ Configuration item detailed description [refer here](../sdk/java_sdk/config.md) Console Description - - When the console configuration file configures multiple node connections in a group, some nodes in the group may exit the group during operation, so the information returned by the console polling node query may be inconsistent, which is a normal phenomenon.。We recommend that you use the console to configure a node or ensure that the configured node is always in the group, so that the information in the group queried during the synchronization time is consistent.。 + - When the console configuration file configures multiple node connections in a group, some nodes in the group may exit the group during operation, so the information returned by the console's polling queries may be inconsistent; this is normal。We recommend that you configure a single node for the console or ensure that the configured nodes are always in the group, so that the group information queried during the synchronization period is consistent。 ``` ### 3. 
Start the console @@ -181,7 +181,7 @@ console version: 3.0.0 ##### 4.2.1 Console Load Private Key -The console provides the account generation script get _ account.sh(Please refer to [Account Management Document] for script usage.(../../develop/account.md)The generated account file is in the accounts directory, and the account file loaded by the console must be placed in that directory。 +The console provides the account generation script get_account.sh (for script usage, please refer to the [Account Management Document](../../develop/account.md)). The generated account file is in the accounts directory, and the account file loaded by the console must be placed in that directory。 There are several ways to start the console: ```shell @@ -199,7 +199,7 @@ Starts with the default group number specified by the console profile。 ./start.sh ``` -**注意**: When the console starts without specifying a private key account, it will try to load an available private key account from the 'account' directory for sending transactions. If the load fails, a new 'PEM' account file will be created and saved in the 'account' directory.。 +**Note**: When the console starts without specifying a private key account, it will try to load an available private key account from the 'account' directory for sending transactions. 
If the load fails, a new 'PEM' account file will be created and saved in the 'account' directory。 ##### 4.2.3 Start by specifying the group name @@ -211,7 +211,7 @@ Start with the group name specified on the command line。 ##### 4.2.4 Start using PEM format private key file -- Start with the account of the specified pem file, enter the parameters: group number,-pem, pem file path +- Start with the account from the specified pem file; input parameters: group number, -pem, pem file path ```shell ./start.sh group0 -pem account/ecdsa/0x2dbb332a844e0e076f97c90ff5078ea7dd2de910.pem @@ -219,7 +219,7 @@ Start with the group name specified on the command line。 ##### 4.2.5 Start using PKCS12 format private key file -- Use the specified p12 file account, you need to enter a password, enter parameters: group number,-p12, p12 file path +- Start with the account from the specified p12 file (a password must be entered); input parameters: group number, -p12, p12 file path ```shell ./start.sh group0 -p12 account/ecdsa/0x2dbb332a844e0e076f97c90ff5078ea7dd2de910.pem @@ -237,11 +237,11 @@ It may be the Java version. 
Refer to the solution: [https://stackoverflow.com/qu ## Java Contract Generation Tool -The console provides a specialized tool for generating Java contracts, making it easy for developers to integrate Solidity and WBC.-The liquid contract file is compiled into a Java contract file.。 +The console provides a special tool for generating Java contracts, which allows developers to compile Solidity and wbc-liquid contract files into Java contract files。 -The current contract generation tool supports automatic compilation of Solidity and generation of Java files, support for specifying wbc-Liquid compiles the WASM file and the ABI file to generate the Java file.。 +The current contract generation tool supports automatically compiling Solidity contracts into Java files, and generating Java files from the WASM and ABI files compiled by a specified wbc-liquid contract。 -**Note:** The Solidity contract generation tool is directly related to the Solc version number. For the corresponding Solidity contract, use the console with the corresponding Solc。Please refer to 1.1 above for consoles of other Solidity versions.。 +**Note:** The Solidity contract generation tool is directly tied to the Solc version number. For a given Solidity contract, use the console shipped with the corresponding Solc。Please refer to 1.1 above for consoles of other Solidity versions。 ### Solidity Contract Use @@ -264,10 +264,10 @@ usage: contract2java.sh [OPTIONS...] Detailed parameters: - `package`: Generate the package name of the 'Java' file。 -- `sol`: (Optional)The path of the 'solidity' file. Two methods are supported: file path and directory path. When the parameter is a directory, all the 'solidity' files in the directory are compiled and converted.。The default directory is' contracts / solidity'。 -- `output`: (Optional)The directory where the 'Java' file is generated. By default, it is generated in the 'contracts / sdk / java' directory.。 +- `sol`: (Optional)The path of the 'solidity' file. 
Two methods are supported: file path and directory path. When the parameter is a directory, all the 'solidity' files in the directory are compiled and converted。The default directory is 'contracts/solidity'。 +- `output`: (Optional)The directory where the 'Java' file is generated. By default, it is generated in the 'contracts/sdk/java' directory。 -### wbc-The liquid contract uses +### wbc-liquid contract usage ```shell $ bash contract2java.sh liquid -h @@ -285,10 +285,10 @@ usage: contract2java.sh [OPTIONS...] Detailed parameters: -- 'abi ': (Required) WBC-Path to the 'ABI' file of the Liquid contract, which is generated in the target folder after using the 'cargo liquid build' command。 -- 'bin ': (Required) WBC-Path to the 'wasm bin' file of the Liquid contract, which is generated in the target folder after using the 'cargo liquid build' command。 -- 'package ': (Optional) Generate the package name of the' Java 'file, which is' org 'by default.。 -- `sm-bin ': (Required) WBC-The path to the 'wasm sm bin' file of the Liquid contract.-Generated in the target folder after the g 'command。 +- 'abi': (Required) The path of the WBC-Liquid contract 'ABI' file, which is generated in the target folder after using the 'cargo liquid build' command。 +- 'bin': (Required) The path of the WBC-Liquid contract 'wasm bin' file, which is generated in the target folder after using the 'cargo liquid build' command。 +- 'package': (Optional) The package name of the generated 'Java' file, which is 'org' by default。 +- 'sm-bin': (Required) The path of the WBC-Liquid contract 'wasm sm bin' file, which is generated in the target folder after using the 'cargo liquid build -g' command。 #### Usage @@ -298,7 +298,7 @@ $ cd ~/fisco/console # Java code for generating Solidity contracts $ bash contract2java.sh solidity -p org.com.fisco -# Generate WBC-Java code for the Liquid contract +# Generate Java code for the WBC-Liquid contract $ bash contract2java.sh liquid -p org.com.fisco -b 
./contracts/liquid/asset/asset.wasm -a ./contracts/liquid/asset/asset.abi -s ./contracts/liquid/asset/asset_gm.wasm ``` @@ -320,7 +320,7 @@ After running successfully, the java, abi, and bin directories will be generated | |-- HelloWorld.java # Solidity Compiled HelloWorld Java File | |-- KVTable.java # Solidity Compiled KV Storage Interface Contract Java File | |-- KVTableTest.java # Solidity compiled KVTableTest Java file -| |-- Asset.java # wbc-The asset file generated by liquid +| |-- Asset.java # Asset files generated by wbc-liquid ``` -The 'org / com / fisco /' package path directory is generated in the Java directory。The Java contract files' HelloWorld.java ',' KVTableTest.java ',' KVTable.java 'and' Asset.java 'will be generated in the package path directory.。where 'HelloWorld.java', 'KVTableTest.java' and 'Asset.java' are the Java contract files required by the Java application。 +The 'org/com/fisco/' package path directory is generated in the Java directory。The Java contract files 'HelloWorld.java', 'KVTableTest.java', 'KVTable.java' and 'Asset.java' will be generated in the package path directory。Among them, 'HelloWorld.java', 'KVTableTest.java' and 'Asset.java' are the Java contract files required by the Java application。 diff --git a/3.x/en/docs/operation_and_maintenance/console/console_error.md b/3.x/en/docs/operation_and_maintenance/console/console_error.md index 7682d03cf..e65fbc422 100644 --- a/3.x/en/docs/operation_and_maintenance/console/console_error.md +++ b/3.x/en/docs/operation_and_maintenance/console/console_error.md @@ -6,8 +6,8 @@ Tags: "console" "Console Configuration" "Command Line Interactive Tools" ```eval_rst .. important:: - - "Console" only supports FISCO BCOS 3.x version, based on 'Java SDK <.. /.. / sdk / java _ sdk / index.html >' _ implementation。 - - You can use the command. 
"/ start.sh--version "View the current console version + - "Console" only supports FISCO BCOS 3.x version, based on `Java SDK <../../sdk/java_sdk/index.html>`_ implementation。 + - You can view the current console version through the command `./start.sh --version` ``` Possible errors in console startup: @@ -45,14 +45,14 @@ Connection node timeout, possible cause: 3. Check whether the network is connected - You can use tools such as' ping 'and' telnet 'to determine if the console is not connected to the server network where the node is located. + You can use tools such as 'ping' and 'telnet' to determine whether the console can reach the network of the server where the node is located For the **SSL handshake failed** problem: 1. Check whether the 'sdk' certificate is correct: - - The 'sdk' certificate location of the 'Air' installation package: `nodes/IP/sdk` - - The 'sdk' certificate location of the 'Pro' version installation package: `generated/rpc/chainID/IP/serviceName/sdk`(Remarks: chainID:Chain ID, IP:Node IP, serviceName:Service name, specified when setting up the environment) + - 'sdk' certificate location of the 'Air' installation package: `nodes/IP/sdk` + - 'sdk' certificate location of the 'Pro' version installation package: `generated/rpc/chainID/IP/serviceName/sdk`(Remarks: chainID:Chain ID, IP:Node IP, serviceName:Service name, specified when setting up the environment) Copy the certificates in the 'sdk/*' directory to the console configuration directory 'console/conf' @@ -73,11 +73,11 @@ Connection node timeout, possible cause: sm_ssl=false ``` - The two configurations should be consistent, set to 'true' in the national secret environment and 'false' in the non-national secret environment. 
+ The two configurations should be consistent, set to 'true' in the national secret environment and 'false' in the non-national secret environment ## `there has no connection available for the group, maybe all connections disconnected or the group does not exist` -The group id used by the console does not exist. There are two ways to start the console.: +The group id used by the console does not exist. There are two ways to start the console: Specify Group: `bash start.sh groupId` Default startup: 'bash start.sh ', the group id used at this time is the group configured in the' config.toml 'file: @@ -87,7 +87,7 @@ Default startup: 'bash start.sh ', the group id used at this time is the group c defaultGroup="group0" # Console default group to connect ``` -The group ID of the node. Check the node configuration file 'config.genesis'.: +To find the group ID of the node, check the node configuration file 'config.genesis': ```shell // config.genesis diff --git a/3.x/en/docs/operation_and_maintenance/console/index.md b/3.x/en/docs/operation_and_maintenance/console/index.md index df7a530ac..41c8337c8 100644 --- a/3.x/en/docs/operation_and_maintenance/console/index.md +++ b/3.x/en/docs/operation_and_maintenance/console/index.md @@ -6,24 +6,24 @@ Tags: "console" "console" "command line interactive tools" " ```eval_rst .. important:: - - "Console" only supports FISCO BCOS 3.0+Version, based on 'Java SDK <.. / sdk / java _ sdk / index.html >' _。 - - You can use the command. "/ start.sh--version "View the current console version + - "Console" only supports FISCO BCOS 3.0+ versions, based on `Java SDK <../sdk/java_sdk/index.html>`_ implementation。 + - You can view the current console version through the command 
`./start.sh --version` ``` -[CONSOLE](https://github.com/FISCO-BCOS/console)is an important interactive client tool for FISCO BCOS 3.0, which is available through the [Java SDK](../../sdk/java_sdk/index.md)Establish a connection with a blockchain node to implement read and write access requests for blockchain node data。The console has a wealth of commands, including querying blockchain status, managing blockchain nodes, deploying and invoking contracts, and more.。In addition, the console provides a contract compilation tool that allows users to quickly and easily integrate Solidity and webankblockchain-The compiled WASM file of the liquid contract file is converted into a Java contract file.。 +[CONSOLE](https://github.com/FISCO-BCOS/console) is an important interactive client tool for FISCO BCOS 3.0, which establishes a connection with blockchain nodes through the [Java SDK](../../sdk/java_sdk/index.md) to implement read and write access requests for blockchain node data。The console has a wealth of commands, including querying blockchain status, managing blockchain nodes, deploying and invoking contracts, and more。In addition, the console provides a contract compilation tool that allows users to quickly and easily convert WASM files compiled from Solidity and webankblockchain-liquid contract files into Java contract files。 ```eval_rst .. 
important:: - Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ + Related Software and Environment Release Notes: please check `here <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ -The command line interactive console is a tool for developers to query and manage nodes.。 +The command line interactive console is a tool for developers to query and manage nodes。 -The console has a wealth of commands, including querying blockchain status, managing blockchain nodes, deploying and invoking contracts, and more.。 +The console has a wealth of commands, including querying blockchain status, managing blockchain nodes, deploying and invoking contracts, and more。 ```eval_rst .. important:: - "> = v3.x" console must be used to access FISCO BCOS 3.x blockchain. You cannot use "2.x" or "1.x" console。 + A ">= v3.x" console must be used to access a FISCO BCOS 3.x blockchain; a "2.x" or "1.x" console cannot be used。 ``` Use manual: diff --git a/3.x/en/docs/operation_and_maintenance/data_archive_tool.md b/3.x/en/docs/operation_and_maintenance/data_archive_tool.md index 05f67d263..cbe153029 100644 --- a/3.x/en/docs/operation_and_maintenance/data_archive_tool.md +++ b/3.x/en/docs/operation_and_maintenance/data_archive_tool.md @@ -6,11 +6,11 @@ Tags: "data archiving" "data clipping" ## Introduction -The data archiving tool is used to archive node data, query archive data, and re-import archive data. It supports RocksDB and TiKV modes.。 +The data archiving tool is used to archive node data, query archive data, and re-import archive data. It supports RocksDB and TiKV modes。 ## Node Configuration -If you need to use the archive function, you need to configure the IP address and port of the archive service in the 'config.ini' file of the node. We recommend that you only use '127.0.0.1' for the IP address. 
After the data archive tool archives the data of the node, it deletes the archived data in the node through this port. +If you need to use the archive function, you need to configure the IP address and port of the archive service in the 'config.ini' file of the node. We recommend that you only use '127.0.0.1' for the IP address. After the data archive tool archives the data of the node, it deletes the archived data in the node through this port。 ```ini [storage] @@ -23,7 +23,7 @@ If you need to use the archive function, you need to configure the IP address an ### Data Archiving Tools -Data archiving tool in the source 'tools / archive-tool / archiveTool.cpp ', compile-time setting parameter' cmake-DTOOLS = ON.. ', the compiled binary is in' build / tools / archive-tool / archiveTool '. Note that the' config.ini 'and' config.genesis' files of the node are required in the running directory of the archive tool. That is, the archive tool must be executed in the node directory. The instructions for using the archive tool are as follows: +The data archiving tool is located in the source code at 'tools/archive-tool/archiveTool.cpp'; the parameter 'cmake -DTOOLS=ON ..' is set at compile time, and the compiled binary is located at 'build/tools/archive-tool/archiveTool'. Note that the running directory of the archiving tool must contain the node's 'config.ini' and 'config.genesis' files, that is, the archiving tool must be executed in the node directory. The instructions for using the archiving tool are as follows: ```bash $ ./tools/archive-tool/archiveTool -h @@ -51,7 +51,7 @@ archive tool used to archive/reimport the data of FISCO BCOS v3: ### Archive Data -`-a 'option indicates that the data archiving operation is performed, the parameter is' [start block] [end block] ', where the end block will not be archived。`-e 'option specifies the IP and port of the node to delete the archive data service, for example' 127.0.0.1:8181`。Suppose the archive block [1,255)To the local '. 
/ archive' rocksdb database, the node archive service address is' 127.0.0.1:8181 ', the corresponding operation is as follows: +The '-a' option indicates that the data archiving operation is performed. The parameter is '[start block] [end block]', where the end block will not be archived。The '-e' option specifies the IP and port of the node's archive-data deletion service, for example '127.0.0.1:8181'。Suppose we archive blocks [1,255) to the local './archive' rocksdb database and the node archive service address is '127.0.0.1:8181'; the corresponding operation is as follows: ```bash # Archive [1,255)255 of which will not be archived, data archived to rocksdb, rocksdb path is. / archive @@ -93,7 +93,7 @@ Content-Length: 76 ### Archive data re-import -`-The r 'option indicates that the data archive operation is performed, and the parameter is' [start block] [end block] ', where the end block will not be imported。`-The 'p' option indicates to import from rocksdb. The parameter is the rocksdb path. If you need to import from TiKV, use the '--pd 'parameter。If the re-imported node is RocksDB, stop the node before re-importing.。An example operation is as follows: +The '-r' option indicates that the archive re-import operation is performed. The parameter is '[start block] [end block]', where the end block will not be imported。The '-p' option indicates importing from rocksdb; the parameter is the rocksdb path. If you need to import from TiKV, use the '--pd' parameter。If the re-imported node uses RocksDB, stop the node before re-importing。An example operation is as follows: ```bash # Archive [1,255)255 of which will not be archived, data archived to rocksdb, rocksdb path is. / archive @@ -108,7 +108,7 @@ reimport from archive database success, block range [1,255) ## Archive Data Query -The archive data query tool supports querying archived data. 
The tool is located in 'FISCO-BCOS/tools/archive-tool/archive-reader`。The tool is written using rust and compiled using the following methods to support TiKV and RocksDB。 +The archive data query tool supports querying archived data. The tool is located at 'FISCO-BCOS/tools/archive-tool/archive-reader'。The tool is written in Rust and is compiled as follows to support TiKV and RocksDB。 ```bash cd tools/archive-tool/archive-reader diff --git a/3.x/en/docs/operation_and_maintenance/data_index.md b/3.x/en/docs/operation_and_maintenance/data_index.md index 8681c1e1f..5de205e1b 100644 --- a/3.x/en/docs/operation_and_maintenance/data_index.md +++ b/3.x/en/docs/operation_and_maintenance/data_index.md @@ -1,57 +1,57 @@ -# 11. Common components of data governance. +# 11. Common components of data governance -Tag: "WeBankBlockchain-Data "" Data Governance "" Generic Components "" Data Export "" Data Warehouse "" Data Reconciliation " +Tags: "WeBankBlockchain-Data" "Data Governance" "Common Components" "Data Export" "Data Warehouse" "Data Reconciliation" --- ## Component positioning -The full name of the data governance common component is WeBankBlockchain-Data governance is a set of stable, efficient, and secure blockchain data governance component solutions that can seamlessly adapt to the underlying platform of the FISCO BCOS blockchain.。 +The full name of the data governance generic component is WeBankBlockchain-Data, a stable, efficient and secure blockchain data governance component solution that seamlessly adapts to the FISCO BCOS blockchain underlying platform。 It consists of the Data Export component(Data-Export), Data Warehouse Components(Data-Stash)Data Reconciliation Component(Data-Reconcile)These three independent, pluggable, flexible assembly components, out of the box, flexible and convenient, easy to secondary development。 -These three components provide key capabilities in data 
governance such as blockchain data mining, tailoring, scaling, trusted storage, extraction, analysis, auditing, reconciliation, and supervision from three aspects: the underlying data storage layer, the smart contract data parsing layer, and the application layer.。 -WeBankBlockchain-Data has been in the financial, public welfare, agricultural and animal husbandry products traceability, judicial deposit, retail and other industries landing and use.。 +These three components provide key capabilities in data governance such as blockchain data mining, tailoring, scaling, trusted storage, extraction, analysis, auditing, reconciliation, and supervision from three aspects: the underlying data storage layer, the smart contract data parsing layer, and the application layer。 +WeBankBlockchain-Data has been implemented and used in finance, public welfare, traceability of agricultural and animal husbandry products, judicial deposit certificate, retail and other industries。 ## Design Objectives -Between the bottom layer of blockchain and blockchain applications, there is a gap between blockchain technology, business and products, and there are many challenges such as the difficulty of expanding blockchain data, the difficulty of querying and analyzing data on the chain, and the lack of universal product or component support in typical scenarios.。 +Between the bottom layer of blockchain and blockchain applications, there is a gap between blockchain technology, business and products, and there are many challenges such as the difficulty of expanding blockchain data, the difficulty of querying and analyzing data on the chain, and the lack of universal product or component support in typical scenarios。 Friends in the community often ask: The disk of the blockchain node server is almost full, what should I do?? How to query data in smart contracts in batches? -I would like to inquire how to check all transactions sent by an account.? 
+How can I check all transactions sent by an account?
What is the blockchain reconciliation solution for WeBank and is there a universal solution?
...
Why do powerful blockchains still have these problems?
-First of all, with the "explosive" growth of blockchain data, the chain has accumulated hundreds of millions of transactions, several tons of data, node servers gradually can not meet the storage needs of transaction data, simply expand the node storage space not only high development costs, high hardware costs, but also in the process of data expansion due to high technical requirements, easy to cause systemic risks, and can not solve the problem once and for all。On the other hand, a large amount of transaction cold data is not only a waste of space, but also affects the performance of blockchain nodes to block and execute transactions.。
+First of all, with the "explosive" growth of blockchain data, a chain can accumulate hundreds of millions of transactions and terabytes of data, and node servers gradually cannot meet the storage needs of transaction data. Simply expanding node storage space incurs high development and hardware costs, and because expansion is technically demanding it can easily introduce systemic risks without solving the problem once and for all. On the other hand, a large amount of cold transaction data not only wastes space but also degrades the performance of blockchain nodes in producing blocks and executing transactions.
-Secondly, due to the specific chain storage structure of the blockchain, the data on the chain can only be obtained and called through the smart contract interface, which is not only inefficient, but also with the increase of the data on the chain, its query and computing performance gradually decreases, unable to meet the demands of big data analysis and complex queries, such as the need to retrieve all contracts that have been deployed on the chain.。Data export solutions based on specific scenarios require specific development for smart contracts due to the large differences in smart contracts, which are costly and cannot be reused.。
+Secondly, because of the blockchain's chained storage structure, on-chain data can only be obtained and invoked through smart contract interfaces. This is not only inefficient; as on-chain data grows, query and computing performance gradually decreases and cannot meet the demands of big data analysis and complex queries, such as retrieving all contracts that have ever been deployed on the chain. Data export solutions built for specific scenarios require bespoke development for each smart contract because contracts differ widely, so they are costly and cannot be reused.
-Finally, blockchain-based trusted data lacks common products and reusable components, and there are similar needs between some scenarios, such as business reconciliation, blockchain browser, business analysis, regulatory audit, etc.。There is a lot of duplication of development between different projects, which is time-consuming and laborious, while developers of blockchain applications need to go through a steep learning curve to complete their work goals, which may also introduce various risks in development and testing.。
+Finally, blockchain-based trusted data lacks common products and reusable components, even though many scenarios share similar needs, such as business reconciliation, blockchain browsers, business analysis, and regulatory audit. There is much duplicated development across projects, which is time-consuming and laborious, while blockchain application developers must climb a steep learning curve to reach their goals, which may also introduce risks in development and testing.
-WeBankBlockchain-Starting from the underlying data storage layer, smart contract data parsing layer, and application layer, Data provides key capabilities in data governance such as blockchain data mining, tailoring, scaling, trusted storage, extraction, analysis, auditing, reconciliation, and supervision to meet the needs of the entire data governance process development scenario, as shown in the following figure:
+WeBankBlockchain-Data provides key data governance capabilities such as blockchain data mining, tailoring, scaling, trusted storage, extraction, analysis, auditing, reconciliation, and supervision from multiple levels, including the underlying data storage layer, the smart contract data parsing layer, and the application layer, to cover development scenarios across the entire data governance process, as shown in the following figure:

![](../../../../2.x/images/governance/data/data-comp-design.png)

-The blockchain data passes through the multi-party consensus of the blockchain consensus node and is not modified once generated.。
+Blockchain data goes through multi-party consensus among the blockchain consensus nodes and cannot be modified once generated.
-In the operation and maintenance layer, the historical block data of the blockchain can be exported by the data warehouse component in whole or in part to the local。As a trusted storage image, the exported data is only valid locally, and modifications will not affect the consensus on the chain.。We recommend that users establish management methods to limit changes to local data.。
+At the operation and maintenance layer, historical block data can be exported, in whole or in part, to local storage by the data warehouse component. As a trusted storage mirror, the exported data is only valid locally, and local modifications will not affect the consensus on the chain. We recommend that users establish management policies to restrict changes to local data.
-In the application data layer, the data export component supports exporting source data, preliminary parsing, and contract-based parsing of multidimensional data.。All participants can deploy their own export service as a trusted data source for local queries or analytics。
+In the application data layer, the data export component supports exporting source data, preliminary parsing, and contract-based parsing of multidimensional data. All participants can deploy their own export service as a trusted data source for local queries or analytics.
-At the business layer, the business reconciliation component supports internal and external reconciliation of off-chain business data within the organization.。
+At the business layer, the business reconciliation component supports internal and external reconciliation of off-chain business data within the organization.

## Component Introduction

-Currently, WeBankBlockchain-Data by Data Warehouse Component(Data-Stash)Data Export Components(Data-Export)Data Reconciliation Component(Data-Reconcile)It consists of three independent, pluggable, and flexibly assembled components. More functions and solution sub-components will be provided according to business and scenario requirements.。
+Currently, WeBankBlockchain-Data consists of three independent, pluggable, and flexibly assembled components: the data warehouse component (Data-Stash), the data export component (Data-Export), and the data reconciliation component (Data-Reconcile). More functions and solution sub-components will be provided according to business and scenario requirements.

![](../../../../2.x/images/governance/data/data-gov.png)

### WeBankBlockchain-Data-Stash Data Warehouse Component

Provides FISCO BCOS node data expansion, backup, and tailoring capabilities.
-The binlog protocol can be used to synchronize the data of the underlying nodes of the blockchain. It supports resumable transmission, data trust verification, and fast synchronization mechanism.。
+The binlog protocol can be used to synchronize the data of the underlying nodes of the blockchain.
It supports resumable transmission, data trust verification, and a fast synchronization mechanism.

![](../../../../2.x/images/governance/data/Data-Stash.png)

@@ -87,35 +87,35 @@ Please refer to

## Usage Scenarios

-Enterprise-level blockchain applications involve multiple roles, such as business roles, operators, development roles, and operation and maintenance roles.。For blockchain data, each specific role has different data governance demands。WeBankBlockchain-Data abstracts and designs the corresponding components from the three dimensions of data maintenance, application data processing and business data application of the underlying nodes of the blockchain to meet the needs of different roles for data governance.。
+Enterprise-level blockchain applications involve multiple roles, such as business, operations, development, and operation and maintenance roles. For blockchain data, each role has different governance demands. WeBankBlockchain-Data abstracts and designs the corresponding components to meet these needs from three dimensions: node data maintenance, application data processing, and business data application.

### Scenario 1: Node data maintenance

-Data Warehouse Components Data-Stash is a lightweight, high-security, and high-availability component for blockchain node data processing, mainly for operation and maintenance personnel and developers.。
+The data warehouse component Data-Stash is a lightweight, high-security, high-availability component for blockchain node data processing, aimed mainly at operation and maintenance personnel and developers.
-Data Backup: Data-Stash can back up the data of blockchain nodes in real time through the Binlog protocol, and the blockchain nodes can cut and separate hot and cold data according to the actual situation, which solves the problem of node expansion and reduces development and hardware costs on the basis of ensuring data security and credibility.。While solving the problem of node expansion, it can make the node "light load," which can not only reduce the cost of node space, but also effectively improve the performance of node execution transactions.。
+Data backup: Data-Stash can back up blockchain node data in quasi-real time through the binlog protocol, and nodes can then prune and separate hot and cold data as needed, which solves the node storage expansion problem and reduces development and hardware costs while keeping the data secure and trustworthy. Besides solving the expansion problem, it keeps nodes "lightly loaded," which not only reduces storage costs but also effectively improves transaction execution performance.
-Data synchronization: For new nodes that join the blockchain network, you can use Data-Stash, with the cooperation of the Fisco Sync tool, quickly synchronizes data in the blockchain network, ensures that nodes participate in the "work" of the blockchain network as quickly as possible, and reduces the time waste caused by new nodes waiting for data synchronization.。
+Data synchronization: New nodes joining the blockchain network can use Data-Stash, together with the Fisco Sync tool, to synchronize network data quickly, ensuring that nodes can take part in the "work" of the network as soon as possible and reducing the time wasted while new nodes wait for data synchronization.

### Scenario 2: Application Data Processing

-Data Export Components Data-Export provides standard exported blockchain data and customized data automatically generated based on intelligent analysis of smart contract code, stored in storage media such as MySQL and ElasticSearch, mainly for developers.。
+The data export component Data-Export provides standard exported blockchain data as well as customized data generated automatically from analysis of smart contract code, stored in media such as MySQL and Elasticsearch; it is aimed mainly at developers.
-Complex query and analysis: The existing blockchain is not very friendly to query functions, and on-chain calculations are very valuable, Data-Export supports exporting blockchain data stored on the chain to a distributed storage system under the chain。Developers can deploy contract accounts, events, functions and other data based on the exported basic data of the blockchain system, perform secondary development, customize the logic of complex queries and data analysis, and quickly realize business requirements.。For example, developers can perform statistics and correlation query analysis on transaction details based on business logic, develop various anti-money laundering and audit supervision reports, and so on.。
+Complex query and analysis: Existing blockchains are not query-friendly, and on-chain computation is expensive. Data-Export supports exporting on-chain data to an off-chain distributed storage system. Based on the exported basic data, such as contract accounts, events, and functions, developers can carry out secondary development, customize complex query and data analysis logic, and quickly realize business requirements. For example, developers can run statistics and correlation queries on transaction details according to business logic and develop various anti-money-laundering and audit supervision reports.
-Blockchain Data Visualization: Data-Export automatically generates Grafana configuration files, enabling blockchain data visualization without development。Blockchain data visualization can not only be used as a tool for blockchain data inventory, data viewing, and operational analysis, but also can be used in the application development, debugging, and testing phases to improve R & D experience and efficiency in a visible and accessible way.。In addition, data-Export also provides Restful APIs for external system integration。The operation and maintenance personnel can monitor the status of the business system in real time through Grafana, and the business personnel can obtain the real-time progress of the business on the integrated business background system.。
+Blockchain data visualization: Data-Export automatically generates Grafana configuration files, enabling blockchain data visualization without extra development. Visualization serves not only as a tool for data inventory, data viewing, and operational analysis, but also helps in the application development, debugging, and testing phases, improving R&D experience and efficiency in a visible, accessible way. In addition, Data-Export provides RESTful APIs for external system integration. Operation and maintenance staff can monitor business system status in real time through Grafana, and business staff can track real-time business progress on an integrated back-office system.
-The data export subsystem of the blockchain middleware platform WeBASE has integrated Data-Export, meanwhile, data-Export can also be independently integrated with the underlying blockchain to flexibly support business needs, and has so far been stable and safe in dozens of production systems.。
+The data export subsystem of the blockchain middleware platform WeBASE has integrated Data-Export; Data-Export can also be integrated independently with the underlying blockchain to flexibly support business needs, and has so far run stably and safely in dozens of production systems.
-Now, data-Export, as a key component of blockchain data governance, is released in open source form and perfected by community partners to adapt to more usage scenarios and create more features.。
+Today, as a key component of blockchain data governance, Data-Export is released as open source and will be improved together with community partners to adapt to more usage scenarios and create more features.

### Scenario 3: Business Data Application

-At the business level, data reconciliation is one of the most common scenarios in blockchain trading systems.。Based on the development and practical experience of several blockchain DAPP applications, we encapsulated and developed the data reconciliation component Data-Reconcile provides a universal data reconciliation solution based on the blockchain smart contract ledger, and provides a set of dynamically extensible reconciliation framework that supports customized development, mainly for developers, and provides services for business personnel.。
+At the business level, data reconciliation is one of the most common scenarios in blockchain transaction systems. Based on development and practical experience from several blockchain DApp applications, we packaged and developed the data reconciliation component Data-Reconcile, which provides a universal reconciliation solution based on the blockchain smart contract ledger, together with a dynamically extensible reconciliation framework that supports customized development; it is aimed mainly at developers and serves business staff.
-Internal Enterprise Reconciliation: Data-Reconcile supports reconciliation between internal enterprise systems, such as between data on the blockchain and off-chain business systems。Developers can take advantage of Data-Reconcile quickly conducts secondary development and compares business system data with on-chain data to ensure the reliability and operational security of internal business system data.。
+Internal enterprise reconciliation: Data-Reconcile supports reconciliation between internal enterprise systems, for example between data on the blockchain and off-chain business systems. Developers can use Data-Reconcile to quickly carry out secondary development, checking and comparing business system data against on-chain data to ensure the reliability and operational security of internal business data.
-Inter-Enterprise Reconciliation: Data-Reconcile helps developers quickly build cross-agency reconciliation applications。For example, during settlement, Enterprise A regularly exports its own business system transaction data as reconciliation files and sends them to the file storage center.。B Enterprises can use Data-Reconcile regularly pulls A enterprise reconciliation files and cooperates with Data-Export, reconciling with on-chain data within the enterprise。Data-Reconcile improves the efficiency of reconciliation while ensuring the credibility of reconciliation results, enabling quasi-real-time reconciliation.。
+Inter-enterprise reconciliation: Data-Reconcile helps developers quickly build cross-organization reconciliation applications. For example, during settlement, enterprise A regularly exports its business system transaction data as reconciliation files and sends them to a file storage center. Enterprise B can use Data-Reconcile to regularly pull enterprise A's reconciliation files and, together with Data-Export, reconcile them against the on-chain data it holds. Data-Reconcile improves reconciliation efficiency while ensuring the credibility of reconciliation results, enabling quasi-real-time reconciliation.
-In summary, WeBankBlockchain-Data is a stable, efficient and secure three-dimensional blockchain data governance solution.
It aims to provide a series of independent, pluggable and flexibly assembled components to deal with and handle the massive data of the blockchain, bringing users a more convenient, simple, low-cost and lightweight user experience, thus promoting the development of blockchain data governance.。
+To sum up, WeBankBlockchain-Data is a stable, efficient, and secure three-dimensional blockchain data governance solution. It aims to provide a series of independent, pluggable, and flexibly assembled components to handle the blockchain's massive data, bringing users a more convenient, simple, low-cost, and lightweight experience, thus promoting the development of blockchain data governance.

diff --git a/3.x/en/docs/operation_and_maintenance/governance_index.md b/3.x/en/docs/operation_and_maintenance/governance_index.md
index 12c254fcf..e38cc24f2 100644
--- a/3.x/en/docs/operation_and_maintenance/governance_index.md
+++ b/3.x/en/docs/operation_and_maintenance/governance_index.md
@@ -1,35 +1,35 @@
-# 12. Multi-party collaborative governance components.
+# 12. Multi-party collaborative governance components

-Tag: "WeBankBlockchain-Governance "" Blockchain Multi-Party Collaboration Governance "" Common Components "" Account Governance "" Permission Governance "" Private Key Management "" Certificate Management ""
+Tags: "WeBankBlockchain-Governance" "Blockchain Multi-Party Collaboration Governance" "Common Components" "Account Governance" "Permission Governance" "Private Key Management" "Certificate Management"

----

## Component positioning

-After more than 10 years of development, the basic technical framework of blockchain has been gradually improved, the business carried on the chain is becoming more and more abundant, and more and more participants are participating.。Whether multi-party collaboration can be carried out smoothly, whether business frictions can be effectively resolved, and whether past governance strategies and practices can meet the needs of rapid development in the future...... The industry's focus is gradually focusing on these more challenging challenges.。
+After more than 10 years of development, the basic technical framework of blockchain has gradually matured, the business carried on chains has become richer and richer, and more and more parties are participating. Whether multi-party collaboration can proceed smoothly, whether business frictions can be resolved effectively, and whether past governance strategies and practices can meet the needs of rapid future development... The industry's attention is gradually turning to these harder challenges.

In January 2021, on the basis of years of technical research and application practice, WeBank Blockchain released the [White Paper on Blockchain-Oriented Multi-Party Collaborative Governance Framework](https://mp.weixin.qq.com/s?__biz=MzU0MDY4MDMzOA==&mid=2247486381&idx=1&sn=caae41a2241e3b1c2cd58181ef73a1bc&chksm=fb34c250cc434b46b2c1b72299c2eb71e1bd6b7597c341423c5d262f18a6e0af1628e0ba4037&scene=21#wechat_redirect), introducing MCGF (Multilateral Collaborative Governance Framework).
-As a reference architecture for blockchain governance, MCGF comprehensively covers the design specifications, participation roles, core system architecture, functional processes, and application scenarios of blockchain governance.。
+As a reference architecture for blockchain governance, MCGF comprehensively covers the design specifications, participating roles, core system architecture, functional processes, and application scenarios of blockchain governance.
-Its open framework can be adapted to a variety of heterogeneous blockchain underlying networks, and combines management and technical strategies to coordinate on-chain and off-chain governance.。At the system level, MCGF supports governance through a variety of tools, components and services.。Finally, MCGF designs visual, interactive, multi-terminal perception and operation methods for all participants to provide an excellent user experience.。
+Its open framework can be adapted to a variety of heterogeneous underlying blockchain networks and combines management and technical strategies to coordinate on-chain and off-chain governance. At the system level, MCGF supports governance through a variety of tools, components, and services. Finally, MCGF designs visual, interactive, multi-terminal perception and operation methods for all participants to provide an excellent user experience.
-Blockchain itself pursues multi-party collaboration, and the development of its system and technology cannot be achieved without the support of the community.。Adhering to the consistent concept of open source and openness, we sincerely invite partners from various industries to work together to build a blockchain governance system and jointly explore the way of blockchain governance.。
+Blockchain itself pursues multi-party collaboration, and the development of its system and technology cannot be achieved without community support. Adhering to our consistent open-source and open philosophy, we sincerely invite partners from all industries to build the blockchain governance system together and jointly explore the way of blockchain governance.
-We will gradually open source the content of MCGF one by one to benefit the community.。This open source list includes a set of out-of-the-box blockchain governance generic components (WeBankBlockchain-Governance)。These components are the implementation basis and atomic building blocks of the MCGF framework, reusable and customizable.。
+We will gradually open source the contents of MCGF to benefit the community. This open-source list includes a set of out-of-the-box common blockchain governance components (WeBankBlockchain-Governance). These components are the implementation basis and atomic building blocks of the MCGF framework, both reusable and customizable.

They are embedded and run in all parts of the entire MCGF framework, just like the wheels, gears, transmission groups, and sensors on a high-speed car, working together to help build a governance framework and improve development efficiency. We welcome the community to build and develop more and better high-availability components.

## Design Objectives

-In a federated chain based on distributed collaboration, the participants collaborate in a form that is loosely coupled and does not fully trust each other。
-In the alliance chain, a variety of mechanisms are designed to help participants build trust and reach consensus, with private keys, certificates, accounts, and permission management all key supporting technologies.。
+In the consortium chain, a variety of mechanisms are designed to help participants build trust and reach consensus; private keys, certificates, accounts, and permission management are all key supporting technologies.
-However, the above technology is more complex, in the application effect, but also need more reusable, easy to land tools or components.。
+However, these technologies are complex, and to be effective in applications they need more reusable, easy-to-adopt tools or components.

We also often hear about issues in the development, use, and governance of consortium chains:

-The concept of private key is complex, and its algorithm types, storage files, and generation methods are numerous, which is difficult to understand and master.?
+The concept of the private key is complex, and its algorithm types, storage file formats, and generation methods are numerous; how can they be understood and mastered?
The key on the blockchain node is stored in clear text on the hard disk, which poses a great operational risk; is there a solution for secure storage?
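The clear-text key question above is what the private key management component addresses. The sketch below illustrates only the general idea behind encrypted key custody (derive an encryption key from a passphrase and persist ciphertext plus an integrity tag, never the raw key). It is a hypothetical, stdlib-only Python illustration, not the Governance-Key API, and its hash-based keystream is a stand-in for a vetted cipher such as AES-GCM or a hardware security module.

```python
import hashlib
import hmac
import os

def _keystream(enc_key: bytes, length: int) -> bytes:
    # Expand enc_key into `length` keystream bytes. Illustration only:
    # a real deployment should use AES-GCM or an HSM, not this construction.
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(enc_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def seal_key(private_key: bytes, passphrase: str) -> dict:
    """Encrypt-then-MAC a private key under a passphrase-derived key."""
    salt = os.urandom(16)
    master = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000, dklen=64)
    enc_key, mac_key = master[:32], master[32:]
    ciphertext = bytes(a ^ b for a, b in zip(private_key, _keystream(enc_key, len(private_key))))
    tag = hmac.new(mac_key, salt + ciphertext, hashlib.sha256).hexdigest()
    # Only salt, ciphertext, and tag are persisted; the raw key never touches disk.
    return {"salt": salt, "ciphertext": ciphertext, "tag": tag}

def unseal_key(blob: dict, passphrase: str) -> bytes:
    """Verify the integrity tag, then decrypt; a wrong passphrase raises."""
    master = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), blob["salt"], 100_000, dklen=64)
    enc_key, mac_key = master[:32], master[32:]
    expected = hmac.new(mac_key, blob["salt"] + blob["ciphertext"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, blob["tag"]):
        raise ValueError("wrong passphrase or tampered key file")
    return bytes(a ^ b for a, b in zip(blob["ciphertext"], _keystream(enc_key, len(blob["ciphertext"]))))
```

A governance policy can then require that only the sealed blob is ever persisted, with the passphrase held by the key owner or split by sharding; automating this kind of custody is the role such a component plays.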
@@ -45,32 +45,32 @@ Certificate management not only involves the generation of certificates, but als

...

-Analyzing and summarizing the above problems, it is not difficult to see that there are high thresholds for the management and use of private keys, accounts, permissions, and certificates: developers need to repeatedly and tediously solve the same problem in different scenarios, and users are prone to confusion and discomfort during use, and may even bring security risks and risks to the system due to imperfect governance solutions.。
+Analyzing and summarizing the above problems, it is not difficult to see that the management and use of private keys, accounts, permissions, and certificates all have high thresholds: developers must repeatedly and tediously solve the same problems in different scenarios, users are prone to confusion and discomfort, and imperfect governance solutions may even bring security risks to the system.

In order to solve the above problems, we have developed common blockchain governance components, aiming to provide lightweight, decoupled, out-of-the-box, simple, easy-to-use, one-stop blockchain governance capabilities.

-- **lightweight decoupling**。All governance components are decoupled from the specific business。Lightweight integration, pluggable without invading the underlying。Through the class library, smart contract, SDK and other ways to provide.。Users can deploy and control governance processes even using the chain console。
-- **General scenario**。All governance components are aimed at all "just-in-time" scenarios in alliance chain governance, such as the first open source account reset, contract permissions, private key and certificate lifecycle management, accounts, contracts, private keys and certificates are the cornerstones of alliance chain technology and upper-level governance.。
-- **One-stop shop**。The common components of chain governance are committed to providing a one-stop experience.。Take the private key management component as an example, it supports a variety of private key generation methods and formats, covers almost all mainstream scenarios, provides file-based, multi-database and other managed methods, and supports private key derivation, sharding and other encryption methods.。
+- **Lightweight decoupling**. All governance components are decoupled from specific business logic. They integrate lightly and are pluggable without intruding on the underlying platform, and are provided as class libraries, smart contracts, SDKs, and in other forms. Users can even deploy and control governance processes from the chain console.
+- **General scenarios**. The governance components target the essential scenarios of consortium chain governance, such as the account reset, contract permission, and private key and certificate lifecycle management capabilities of this first open-source release; accounts, contracts, private keys, and certificates are the cornerstones of consortium chain technology and upper-level governance.
+- **One-stop shop**. The common chain governance components are committed to providing a one-stop experience. Take the private key management component as an example: it supports a variety of private key generation methods and formats, covers almost all mainstream scenarios, provides file-based, multi-database, and other custody methods, and supports private key derivation, sharding, and other cryptographic techniques.
- **Simple and easy to use**. Committed to providing a simple user experience, so that users can easily get started.

-WeBankBlockchain-Government is positioned as a blockchain governance component, not only to provide tools at the development level, but also to provide blockchain participants with reference cases at the practical level to help improve the governance level of the blockchain industry as a whole.。
+WeBankBlockchain-Governance is positioned as a set of blockchain governance components. It aims not only to provide tools at the development level, but also to give blockchain participants reference cases at the practical level, helping to raise the governance level of the blockchain industry as a whole.

## Component Introduction

-This open source blockchain governance generic component consists of the private key management component (Governance-Key), Account Governance Component (Governance-Account), permission governance components (Governance-Authority), Certificate Management Components (Governance-Cert) and other components.。
+This open-source release of the common blockchain governance components consists of the private key management component (Governance-Key), the account governance component (Governance-Account), the permission governance component (Governance-Authority), the certificate management component (Governance-Cert), and others.

![](../../../../2.x/images/governance/MCGF/MCGF_overview.png)

-Each governance component provides detailed usage documentation。Among them, the account governance component and permission governance component also provide contract code, Java language SDK, contract integration demo and Java version SDK use demo, so that users can freely and flexibly use and integrate based on their own business scenarios.。
+Each governance component provides detailed usage documentation. The account governance and permission governance components also provide contract code, a Java SDK, a contract integration demo, and a Java SDK usage demo, so that users can use and integrate them freely and flexibly in their own business scenarios.

### WeBankBlockchain-Governance-Account Account Governance Component

-Based on the development of smart contracts, it provides full life cycle management of blockchain user accounts, such as account registration, private key reset, freezing, and unfreezing, and supports multiple governance policies such as administrators, threshold voting, and multi-signature system.。
+Built on smart contracts, it provides full lifecycle management of blockchain user accounts, such as account registration, private key reset, freezing, and unfreezing, and supports multiple governance policies such as administrator, threshold voting, and multi-signature schemes.
-In the existing blockchain design, once the private key is lost, it is impossible to re-operate the corresponding identity.。As a result, the account governance component adheres to the concept of "account as the core" and proposes a two-tier account system to solve the pain point of strong binding of private keys and accounts, thus realizing the ability to replace the private key of accounts, which means that even if the private key is lost, the account can be recovered.。
+In existing blockchain designs, once a private key is lost, the corresponding identity can no longer be operated. The account governance component therefore adheres to the "account as the core" concept and proposes a two-tier account system that removes the pain point of strong binding between private keys and accounts, making it possible to replace an account's private key; even if the private key is lost, the account can be recovered.
-In the account governance component, accounts no longer use public key addresses, but a two-tier account system of public key accounts plus internal random accounts.。
+In the account governance component, accounts no longer use public key addresses directly, but a two-tier system of public key accounts mapped to internal random accounts.
-The account governance component provides a variety of blockchain account governance rules, account life cycle management and other overall solutions, including creating governance accounts, selecting a variety of governance rules, authorizing governance permissions, creating accounts, freezing accounts, unfreezing accounts, replacing private keys, closing accounts and other account life cycle management functions.。
+The account governance component provides overall solutions covering a variety of blockchain account governance rules and full account lifecycle management, including creating governance accounts, selecting governance rules, authorizing governance permissions, and creating, freezing, unfreezing, and closing accounts, as well as replacing private keys.

![](../../../../2.x/images/governance/MCGF/governance_account.png)

@@ -81,15 +81,15 @@ Please refer to

- [Quick Start](https://governance-doc.readthedocs.io/zh_CN/latest/docs/WeBankBlockchain-Governance-Acct/quickstart.html)

### WeBankBlockchain-Governance-Authority Permission Governance Component

-A generic component that provides access control at the granularity of blockchain accounts, contracts, functions, etc. based on smart contracts.。
+A generic component, based on smart contracts, that provides access control at the granularity of blockchain accounts, contracts, and functions.
-With the emergence of blockchain application development cases based on smart contracts, the need for the control and grouping of smart contract permissions in various application development scenarios is becoming more and more urgent.。The permission governance component provides permission control at the granularity of blockchain accounts and contract functions based on smart contracts.。
+With the emergence of smart-contract-based blockchain applications, the need to control and group smart contract permissions in various development scenarios is becoming more and more urgent. The permission governance component provides permission control at the granularity of blockchain accounts and contract functions, based on smart contracts.
-The permission governance component supports intercepting illegal calls to contract functions and also supports permission grouping - by configuring the association between functions and groups, you can easily control the permissions of the grouping.。Permission control can be achieved by simply introducing the permission contract address into the business code and accessing the judgment interface of the permission contract in the function that requires permission control.。
+The permission governance component supports intercepting illegal calls to contract functions and also supports permission grouping: by configuring the association between functions and groups, you can easily control permissions per group. Permission control can be achieved simply by introducing the permission contract address into the business code and calling the permission contract's check interface in each function that requires access control.
-The administrator only needs to operate the permission management contract without adjusting the business contract, and the modification of the permission can take effect in real time.。Permission control supports on-demand configuration of blacklist mode and whitelist mode。
+The administrator only needs to operate the permission management contract, without adjusting the business contract, and permission changes take effect in real time. Permission control supports on-demand configuration of blacklist and whitelist modes.
-In addition, the permission governance component supports multiple permission governance rules, such as one vote pass, threshold vote, and so on.。
+In addition, the permission governance component supports multiple permission governance rules, such as one-vote pass and threshold voting.

![](../../../../2.x/images/governance/MCGF/governance_authority.png)

@@ -100,13 +100,13 @@ Please refer to

- [Quick Start](https://governance-doc.readthedocs.io/zh_CN/latest/docs/WeBankBlockchain-Governance-Auth/quickstart.html)

### WeBankBlockchain-Governance-Key Private Key Management Component

-Provides a common solution for the full life cycle management of private keys such as
private key generation, storage, encryption and decryption, signing, and verification.。 +Provides a common solution for the full life cycle management of private keys such as private key generation, storage, encryption and decryption, signing, and verification。 -The private key management component provides the ability to generate, save, host, and use private keys, covering the entire life cycle of private key use.。 +The private key management component provides the ability to generate, save, host, and use private keys, covering the entire life cycle of private key use。 -This component supports a variety of standard protocols. In terms of private key generation, it supports random number generation, mnemonic generation, and derivative generation.;As far as saving is concerned, it supports threshold sharding restore, and also supports exporting in pkcs12 (p12), keystore, pem and other formats.;In terms of hosting, multiple trust models can be adapted to meet the diverse needs of enterprise users.;In terms of usage, support for private key signature, public key encryption, etc.。 +This component supports a variety of standard protocols. 
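The root-key derivation idea this section describes — a user keeps one root private key, and different scenarios each get their own derived sub-private key — can be sketched roughly as follows. This is an illustrative Python sketch using HMAC-SHA512 (in the spirit of BIP32-style derivation), not the actual Governance-Key API; the function names are hypothetical.

```python
import hashlib
import hmac
import secrets

def generate_root_key() -> bytes:
    # Generate a random 32-byte root private key (the random-number generation path).
    return secrets.token_bytes(32)

def derive_sub_key(root_key: bytes, scenario: str) -> bytes:
    # Derive a scenario-specific sub-key from the root key; the user only
    # needs to keep the root key, and each scenario gets its own sub-key.
    digest = hmac.new(root_key, scenario.encode("utf-8"), hashlib.sha512).digest()
    return digest[:32]  # use the left half as the child key material

root = generate_root_key()
sub_a = derive_sub_key(root, "deposit-app")
sub_b = derive_sub_key(root, "points-app")
```

Because derivation is deterministic, a sub-key can always be re-derived as long as the root key is kept; a real component would additionally cover mnemonic encoding, threshold sharding, and export formats such as keystore/p12/pem.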
In terms of private key generation, it supports random number generation, mnemonic generation, and derivative generation;As far as saving is concerned, it supports threshold sharding restore, and also supports exporting in pkcs12 (p12), keystore, pem and other formats;In terms of hosting, multiple trust models can be adapted to meet the diverse needs of enterprise users;In terms of usage, support for private key signature, public key encryption, etc。 -The private key management component also provides full support for state secrets.。 +The private key management component also provides full support for state secrets。 ![](../../../../2.x/images/governance/MCGF/governance_key.png) @@ -116,12 +116,12 @@ Please refer to - [Documentation](https://governance-doc.readthedocs.io/zh_CN/latest/docs/WeBankBlockchain-Governance-Key/index.html) - [Quick Start](https://governance-doc.readthedocs.io/zh_CN/latest/docs/WeBankBlockchain-Governance-Key/corequickstart.html) -### WeBankBlockchain-Governance-Cert Certificate Management Components -Provides a common solution for the full lifecycle management of certificates such as certificate generation, validation, and sub-certificate requests.。 +### WeBankBlockchain-Governance-Cert Certificate Management Component +Provides a common solution for the full lifecycle management of certificates such as certificate generation, validation, and sub-certificate requests。 -The certificate management component provides the ability to issue, verify, reset, revoke, export and host multi-level certificates in the X509 standard, covering the full life cycle of certificates, and supports a variety of signature algorithms, such as SHA256WITHRSA, SHA256WITHECDSA, SM3WITHSM2 and other signature algorithms, as well as state secret support.。 +The certificate management component provides the ability to issue, verify, reset, revoke, export and host multi-level certificates in the X509 standard, covering the full life cycle of certificates, and supports a 
variety of signature algorithms, such as SHA256WITHRSA, SHA256WITHECDSA, SM3WITHSM2 and other signature algorithms, as well as state secret support。 -Components include cert-toolkit and cert-mgr two modules, cert-toolkit provides basic capabilities such as certificate generation. It can be used as an independent toolkit.-mgr based on cert-toolkit, which provides the ability to host certificates and standardizes the issuance process.。 +The certificate management component includes two modules, cert-toolkit and cert-mgr. cert-toolkit provides basic capabilities such as certificate generation and can be used as an independent toolkit; cert-mgr, built on cert-toolkit, provides certificate hosting and standardizes the issuance process。 ![](../../../../2.x/images/governance/MCGF/governance_cert.png) @@ -136,48 +136,48 @@ Please refer to ###Private key management scenario Private key is indispensable in the design system of block chain。But the private key itself is difficult to understand, difficult to use, more difficult to keep, the management cost is huge, seriously weakened the use of blockchain experience。 -An effective tool for private key management in the actual scenario of the existing blockchain is still missing.。Private key management is generally difficult, high learning costs, poor user experience and other issues.。 +An effective tool for private key management in the actual scenario of the existing blockchain is still missing。Private key management generally suffers from high difficulty, steep learning costs, poor user experience and other issues。 -The private key management component provides a series of rich and independent private key management methods, and users can choose the appropriate solution according to their needs.。 +The private key management component provides a series of rich and independent private key management methods, and users can choose the appropriate solution according to their needs。 -**Private key generation**: Users can use mnemonic methods to generate。On the one hand, mnemonic words are composed of words, which are relatively easy to remember and 
reduce the difficulty of memorizing and expressing。On the other hand, if you use separate private keys for different scenarios, it will increase the cost of memory and the risk of loss, at this time you can use the private key derivation function, users only need to keep the root private key, in different scenarios the root private key will derive different sub-private keys.。 +**Private key generation**: Users can use mnemonic methods to generate。On the one hand, mnemonic words are composed of words, which are relatively easy to remember and reduce the difficulty of memorizing and expressing。On the other hand, if you use separate private keys for different scenarios, it will increase the cost of memory and the risk of loss, at this time you can use the private key derivation function, users only need to keep the root private key, in different scenarios the root private key will derive different sub-private keys。 -**Private key hosting**After obtaining the private key, you can choose to export it to a format such as keystore or pkcs12 after password encryption, or you can hand it over to an enterprise organization for hosting.;You can also choose to split into several sub-slices and distribute them to different devices for storage.。 +**Private key hosting**After obtaining the private key, you can choose to export it to a format such as keystore or pkcs12 after password encryption, or you can hand it over to an enterprise organization for hosting;You can also choose to split into several sub-slices and distribute them to different devices for storage。 -**Private key usage**After obtaining the private key, the user can use the private key to sign transactions, use the public key to encrypt the private key to decrypt, etc.。 +**Private key usage**After obtaining the private key, the user can use the private key to sign transactions, use the public key to encrypt the private key to decrypt, etc。 ### Account Governance Scenarios -The private key itself is easy to lose and 
leak.。Economic losses due to loss of private keys are common。Driven by huge economic interests, security attacks and thefts of private keys are also emerging.。How to reset the user's private key and protect the user's asset security is the bottom line of blockchain promotion。 +The private key itself is easy to lose and leak。Economic losses due to loss of private keys are common。Driven by huge economic interests, security attacks and thefts of private keys are also emerging。How to reset the user's private key and protect the user's asset security is the bottom line of blockchain promotion。 -The account governance component is designed to provide a self-consistent account governance mechanism based on smart contracts to achieve the effect of private key changes without changing identity.。The account governance component supports both the meta-governance of the Alliance Chain Governance Committee and governance scenarios based on the specific business applications of the Alliance Chain.。 +The account governance component is designed to provide a self-consistent account governance mechanism based on smart contracts to achieve the effect of private key changes without changing identity。The account governance component supports both the meta-governance of the Alliance Chain Governance Committee and governance scenarios based on the specific business applications of the Alliance Chain。 -Alliance Chain Governance Board Account Governance: There is a unique risk in traditional centralized solutions。In the alliance chain, a polycentric governance committee is often used to avoid a single point of risk。Members of the Alliance Chain Governance Committee can rely on governance contracts to perform management functions and vote and vote on matters.。 +Alliance Chain Governance Board Account Governance: There is a unique risk in traditional centralized solutions。In the alliance chain, a polycentric governance committee is often used to avoid a single point of risk。Members of the 
Alliance Chain Governance Committee can rely on governance contracts to perform management functions and vote and vote on matters。 -However, there is still a risk of disclosure or loss of private keys associated with committee members。The account governance component can be applied to the account governance of the Alliance Chain Governance Committee, and the accounts of the Alliance Chain Governance Committee members are also managed by the account governance component.。 +However, there is still a risk of disclosure or loss of private keys associated with committee members。The account governance component can be applied to the account governance of the Alliance Chain Governance Committee, and the accounts of the Alliance Chain Governance Committee members are also managed by the account governance component。 -Blockchain depository business account governance: Users can use the current private key to open an account in the account governance component to generate an internal identity.。The business system can rely on this internal identity, for example, in a depository business contract, the record of the data is bound to that internal identity.。 +Blockchain depository business account governance: Users can use the current private key to open an account in the account governance component to generate an internal identity。The business system can rely on this internal identity, for example, in a depository business contract, the record of the data is bound to that internal identity。 -When you need to modify the private key, you can modify the private key by voting through the associated account or governance committee, and apply for binding the old identity with the new private key, so that you can continue to operate the old identity with the new private key, while the old private key is invalidated.。 +When you need to modify the private key, you can modify the private key by voting through the associated account or governance committee, and apply for binding the old 
identity with the new private key, so that you can continue to operate the old identity with the new private key, while the old private key is invalidated。 ### Permission governance scenario -In application development, the lack of a security mechanism will inevitably have serious consequences.。On the one hand, blockchain applications need to refine security access control to the level of contract function granularity.;On the other hand, grouping permissions for different users to prevent loopholes such as transaction overreach and avoid being attacked by hackers is also a rigid need for blockchain application security.。 +In application development, the lack of a security mechanism will inevitably have serious consequences。On the one hand, blockchain applications need to refine security access control to the level of contract function granularity;On the other hand, grouping permissions for different users to prevent loopholes such as transaction overreach and avoid being attacked by hackers is also a rigid need for blockchain application security。 -The permission governance component provides business permission governance tools, including grouping information for different accounts and permissions for different groups.。Permission configuration meets various requirements, allowing developers to quickly integrate permission control functions for their smart contract applications.。Typical functions are as follows: +The permission governance component provides business permission governance tools, including grouping information for different accounts and permissions for different groups。Permission configuration meets various requirements, allowing developers to quickly integrate permission control functions for their smart contract applications。Typical functions are as follows: - **Account Grouping**You can group account addresses and set permissions for the group to reuse the group。 - **Black and White List Mode**: Supports two permission modes of black and white 
lists. Administrators or governance committees can set a function to be accessed only by members of a group, or only allow accounts outside the group to access。 -- **Cross-Contract**Allows you to configure permissions across contracts. For example, you can set a group member to be prohibited by functions in multiple contracts at the same time.。 -- **Lightweight Access**The business contract does not need to know these complex permission configurations, but only needs to call the interception interface of the permission contract in its own function. When the user calls the function, the business contract will automatically submit the call information context to the permission system for judgment and interception.。 +- **Cross-Contract**Allows you to configure permissions across contracts. For example, you can set a group member to be prohibited by functions in multiple contracts at the same time。 +- **Lightweight Access**The business contract does not need to know these complex permission configurations, but only needs to call the interception interface of the permission contract in its own function. 
When the user calls the function, the business contract will automatically submit the call information context to the permission system for judgment and interception。 ### Certificate Management Scenarios -Certificate is the cornerstone of network security in the enterprise authentication management of the alliance chain。The disadvantages of certificate operation and use experience will endanger the participants of the entire alliance chain network, affecting mutual trust and business security.。 +Certificate is the cornerstone of network security in the enterprise authentication management of the alliance chain。The disadvantages of certificate operation and use experience will endanger the participants of the entire alliance chain network, affecting mutual trust and business security。 -For example, FISCO BCOS network adopts CA-oriented admission mechanism, uses the certificate format of x509 protocol, supports any multi-level certificate structure, and ensures information confidentiality, authentication, integrity and non-repudiation.。 +For example, FISCO BCOS network adopts CA-oriented admission mechanism, uses the certificate format of x509 protocol, supports any multi-level certificate structure, and ensures information confidentiality, authentication, integrity and non-repudiation。 -The certificate management component provides a solution for certificate lifecycle management, standardizes the certificate issuance process, supports certificate hosting, and supports multiple signature algorithms for personal or enterprise use.。Take certificate management and toolkit usage as an example: +The certificate management component provides a solution for certificate lifecycle management, standardizes the certificate issuance process, supports certificate hosting, and supports multiple signature algorithms for personal or enterprise use。Take certificate management and toolkit usage as an example: -**On-chain node admission certificate management**: The issuance of 
certificates for nodes on the chain is completed by the certificate management component, which can be integrated or deployed independently, and the service is managed by the authority.。 +**On-chain node admission certificate management**: The issuance of certificates for nodes on the chain is completed by the certificate management component, which can be integrated or deployed independently, and the service is managed by the authority。 -During chain initialization, the deployer can call the interface to complete the generation of the root certificate。The new authority or node can query the root certificate and submit a sub-certificate request through the query interface provided by the certificate management component.。The root certificate manager can choose to issue sub-certificates from the list of requests through the query。Through the certificate management component for certificate management, you can standardize the issuance process, improve efficiency.。 +During chain initialization, the deployer can call the interface to complete the generation of the root certificate。The new authority or node can query the root certificate and submit a sub-certificate request through the query interface provided by the certificate management component。The root certificate manager can choose to issue sub-certificates from the list of requests through the query。Through the certificate management component for certificate management, you can standardize the issuance process, improve efficiency。 -**Certificate Toolkit Use**: cert in the certificate management component-The toolkit can be referenced in the project as a standalone JAVA toolkit instead of the command line to complete the generation and issuance of certificates.。Enterprise or personal projects can integrate certificate management components as a certificate issuance toolkit。 +**Certificate Toolkit Use**The cert-toolkit in the certificate management component can be referenced in the project as an independent JAVA 
toolkit instead of the command line to complete the generation and issuance of certificates。Enterprise or personal projects can integrate certificate management components as a certificate issuance toolkit。 diff --git a/3.x/en/docs/operation_and_maintenance/light_monitor.md b/3.x/en/docs/operation_and_maintenance/light_monitor.md index 37aef575b..04e256e02 100644 --- a/3.x/en/docs/operation_and_maintenance/light_monitor.md +++ b/3.x/en/docs/operation_and_maintenance/light_monitor.md @@ -6,12 +6,12 @@ Tags: "monitor" "monitor" ## light_monitor.sh -`FISCO-BCOS 3.0 'blockchain lightweight monitoring tool can monitor whether the blockchain is working properly, and also provides a simple way to access the user alarm system. +The `FISCO-BCOS 3.0` blockchain lightweight monitoring tool can monitor whether the blockchain is working properly, and also provides a simple way to access the user alarm system -- Monitor whether the consensus is normal. -- Monitor whether block synchronization is normal. -- Monitor disk space. -- Connect to the alarm system and send alarm information. +- Monitor whether consensus is normal +- Monitor whether block synchronization is normal +- Monitor disk space +- Connect to the alarm system and send alarm information ### Usage @@ -41,7 +41,7 @@ Example: - `-p`: rpc port - `-t`: The threshold of the block synchronization alarm. If the block height difference between consensus nodes exceeds the threshold, consensus or block synchronization is abnormal. The default value is' 30' - `-d`: Directory to be monitored for disk capacity -- `-T`: The disk alarm threshold. If the percentage of disk space is less than this value, an alarm is triggered. 
The default value is 5% - `-h`: Help Information #### Status Description @@ -70,7 +70,7 @@ Check whether the network connection is normal。 Insufficient disk space, remaining '${disk_space_left_percent}% 'of space -To continuously monitor the status of blockchain nodes, configure 'light _ monitor.sh' to 'crontab' for periodic execution. +To continuously monitor the status of blockchain nodes, configure 'light _ monitor.sh' to 'crontab' for periodic execution ```shell # Execute once per minute to check whether the node is started normally, normal consensus, and whether there is critical error printing @@ -79,7 +79,7 @@ To continuously monitor the status of blockchain nodes, configure 'light _ monit 'light _ monitor.log 'saves the output of' light _ monitor.sh' -**You need to modify the path in the example based on the actual deployment.** +**You need to modify the path in the example based on the actual deployment** ### docking alarm system @@ -99,7 +99,7 @@ alarm() { } ``` - 'light _ monitor.sh 'The function is called at all critical errors triggered by the execution, and the error message is used as an input parameter. The user can call the API of the monitoring platform to send the error message to the alarm platform. + 'light _ monitor.sh 'The function is called at all critical errors triggered by the execution, and the error message is used as an input parameter. The user can call the API of the monitoring platform to send the error message to the alarm platform - Example @@ -128,7 +128,7 @@ alarm() { ## **Node Monitoring** -`FISCO-BCOS 3.0 'blockchain monitoring tool, you can monitor the blockchain block height and other indicators, displayed in the graphical interface. 
+'FISCO-BCOS 3.0 'blockchain monitoring tool, you can monitor the blockchain block height and other indicators, displayed in the graphical interface The components involved include grafana(Used to show indicators),prometheus(Used to collect indicator information),mtail(Used to analyze blockchain log information acquisition metrics). @@ -138,7 +138,7 @@ The monitoring tool can choose whether to deploy with the block chain when build ### **'m 'Node Monitoring Options [**Optional**]** -Optional parameter. When the blockchain node is enabled for node monitoring, the-m 'option to deploy nodes with monitoring. If this option is not selected, only nodes without monitoring are deployed。 +Optional parameter. When node monitoring is enabled for blockchain nodes, you can use the '-m' option to deploy nodes with monitoring. If this option is not selected, only nodes without monitoring are deployed。 An example of deploying an Air version blockchain with monitoring enabled is as follows: @@ -170,7 +170,7 @@ Processing IP:127.0.0.1 Total:4 [INFO] output dir : nodes [INFO] All completed. Files in nodes ``` -Prompt All completed.Files in nodes, indicating that the block chain node file has been generated. +Prompt All completed.Files in nodes, indicating that the block chain node file has been generated ### Use process @@ -181,7 +181,7 @@ Prompt All completed.Files in nodes, indicating that the block chain node file h ```shell bash nodes/127.0.0.1/start_all.sh ``` -Successful startup will output the following information。Otherwise use 'netstat-an |grep tcp 'check machine' 30300 ~ 30303,20200 ~ 20203 'ports are occupied。 +Successful startup will output the following information。Otherwise use 'netstat -an|grep tcp 'check machine' 30300 ~ 30303,20200 ~ 20203 'ports are occupied。 ```shell try to start node0 @@ -204,4 +204,4 @@ sh nodes/monitor/start_monitor.sh #### Step 3. 
Log in to grafana according to the prompt and view the indicators -The URL startup script prints the corresponding address. The default username and password are admin / admin.([github source code](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/tools/template/Dashboard.json))and configure the prometheus source(http://ip:9090/)You can view the real-time display of each indicator.。 \ No newline at end of file +The startup script prints the corresponding URL. The default username and password are admin/admin. Import the dashboard template ([github source code](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/tools/template/Dashboard.json)) and configure the prometheus source (http://ip:9090/) to view the real-time display of each indicator。 \ No newline at end of file diff --git a/3.x/en/docs/operation_and_maintenance/log/index.md b/3.x/en/docs/operation_and_maintenance/log/index.md index 2b21dd73c..12fa90573 100644 --- a/3.x/en/docs/operation_and_maintenance/log/index.md +++ b/3.x/en/docs/operation_and_maintenance/log/index.md @@ -3,9 +3,9 @@ Tags: "Log Description" "Log Audit" Use manual: -FISCO BCOS outputs the key steps in the blockchain node to the 'log _% YYY% mm% dd% HH.% MM' file in the 'nodeX / log /' directory, and the log format is customized, so that users can view the running status of the blockchain through logs.。 +FISCO BCOS outputs the key steps in the blockchain node to the `log_%YYYY%mm%dd%HH.%MM` file in the `nodeX/log/` directory, and the log format is customized, so that users can view the running status of the blockchain through logs。 -Users can quickly understand the status of blockchain nodes and the basis of transaction execution errors through log files, and can use other audit tools to capture logs to obtain the actual status 
of the blockchain。 ```eval_rst .. toctree:: diff --git a/3.x/en/docs/operation_and_maintenance/log/log_description.md b/3.x/en/docs/operation_and_maintenance/log/log_description.md index cb71ab50f..318c9f3a7 100644 --- a/3.x/en/docs/operation_and_maintenance/log/log_description.md +++ b/3.x/en/docs/operation_and_maintenance/log/log_description.md @@ -4,7 +4,7 @@ Tags: "Log Format" "Log Keywords" "Troubleshooting" "View Log" ---- -All group logs of FISCO BCOS are output to the file 'log _% YYYY% mm% dd% HH.% MM' in the log directory, and the log format is customized, so that users can view the running status of the chain through logs.。 +All group logs of FISCO BCOS are output to the file 'log _% YYYY% mm% dd% HH.% MM' in the log directory, and the log format is customized, so that users can view the running status of the chain through logs。 ## Log Format @@ -21,13 +21,13 @@ info|2022-11-21 20:00:35.479505|[SCHEDULER][blk-1]BlockExecutive prepare: fillBl The fields have the following meanings: -- `log_level`: Log level. Currently, log levels include 'trace', 'debug', 'info', 'warning', 'error', and 'fatal'. +- `log_level`: Log level. Currently, log levels include 'trace', 'debug', 'info', 'warning', 'error', and 'fatal' - `time`: Log output time, accurate to nanoseconds -- 'module _ name ': module keyword. For example, the synchronization module keyword is' SYNC 'and the consensus module keyword is' CONSENSUS' +- 'module _ name': module keyword. For example, the synchronization module keyword is' SYNC 'and the consensus module keyword is' CONSENSUS' -- 'content ': logging content +- 'content': logging content ## Common Log Description @@ -37,22 +37,22 @@ The fields have the following meanings: ```eval_rst .. 
note:: - Only consensus nodes periodically output consensus packed logs(The command "tail" can be used in the node directory.-f log/* | grep "${group_id}.*++""View consensus packaging logs for a specified group) + - Only consensus nodes periodically output consensus packed logs (use the command ``tail -f log/* | grep "${group_id}.*++"`` in the node directory to view the consensus packaging logs of a specified group) - - Pack logs to check whether the consensus node of a specified group is abnormal.**Abnormal consensus node does not output packed logs** + - Packed logs can be used to check whether the consensus node of a specified group is abnormal: **an abnormal consensus node does not output packed logs** ``` The following is an example of consensus packed logs: ```bash info|2022-11-21 20:00:45.530293|[CONSENSUS][PBFT]addCheckPointMsg,reqHash=c2e031c8...,reqIndex=2,reqV=9,fromIdx=3,Idx=1,weight=4,minRequiredWeight=3 ``` -- 'reqHash ': hash of the PBFT request -- 'reqIndex ': block height corresponding to PBFT request +- `reqHash`: the hash of the PBFT request +- `reqIndex`: the block height corresponding to the PBFT request - `reqV`: View corresponding to PBFT request - `fromIdx`: The node index number that generated the PBFT request - `Idx`: Current Node Index Number - `weight`: Total consensus weight of the proposal corresponding to the request -- `minRequiredWeight`: The minimum voting weight required to reach consensus on the proposal corresponding to the request. 
+- `minRequiredWeight`: The minimum voting weight required to reach consensus on the proposal corresponding to the request **Exception Log** @@ -62,30 +62,30 @@ Network jitter, network disconnect, or configuration error(Genesis block file as ```bash warning|2022-11-17 00:58:03.621465|[CONSENSUS][PBFT]onCheckPointTimeout: resend the checkpoint message package,index=176432,hash=d411d77d...,committedIndex=176431,consNum=176432,committedHash=ecac3705...,view=1713,toView=1713,changeCycle=0,expectedCheckPoint=176433,Idx=0,unsealedTxs=168,sealUntil=176432,waitResealUntil=176431,nodeId=0318568d... ``` -- 'index ': consensus index number -- 'hash ': consensus block hash +- `index`: Consensus index number +- `hash`: Consensus block hash - `committedIndex`: Falling block block height - `consNum`: Next consensus block high - `committedHash`: Drop Block Hash - `view`: Current View - `toview`: Next View - `changeCycle`: Current Timeout Clock Cycle -- `expectedCheckPoint`: The next block to be executed is high. +- `expectedCheckPoint`: Height of the next block to be executed - `Idx`: The index number of the current node -- `sealUntil`: The height of the block that can be packaged to generate the next block. In a system block scenario, the block can be packaged to generate the next block if and only if the disk height exceeds sealUntil. -- `waitResealUntil`: Same as above, the block height of the next block can be packaged to produce the next block, when there is a view switch.+ In the system block scenario, the next block can only be packaged if and only if the drop height exceeds waitResealUntil. +- `sealUntil`: The height of the block that can be packaged to generate the next block. 
In a system block scenario, the next block can be sealed if and only if the committed (on-disk) block height exceeds sealUntil +- `waitResealUntil`: Similar to sealUntil, but applies after a view change; in a system block scenario, the next block can be sealed if and only if the committed block height exceeds waitResealUntil - `unsealedTxs`: Number of unpackaged transactions in the trading pool - `nodeId`: current consensus node id **Block Drop Log** -If the block consensus is successful or the node is synchronizing blocks from other nodes, the disk drop log will be output.。 +If block consensus succeeds or the node is synchronizing blocks from other nodes, a block commit (disk flush) log is output. ```eval_rst .. note:: - Send transactions to nodes, if the transaction is processed, non-free nodes will output drop logs.(The command "tail" can be used in the node directory.-f log/* | grep "Report""View node out-of-block status)If the log is not output, the node is in an abnormal state. Please check whether the network connection is normal and whether the node certificate is valid. + Send transactions to a node; if the transactions are processed, non-free nodes output commit logs (run `tail -f log/* | grep "Report"` in the node directory to view the node's block commit status). If no such log is output, the node is in an abnormal state. 
Please check whether the network connection is normal and whether the node certificate is valid ``` @@ -95,18 +95,18 @@ info|2022-11-21 20:00:45.531121|[CONSENSUS][PBFT][METRIC]^^^^^^^^Report,sealer=3 ``` The fields in the log are described as follows: -- 'sealer ': the index number of the consensus node that generates the proposal -- 'txs': Number of transactions contained in the block +- `sealer`: Index number of the consensus node that generated the proposal +- `txs`: Number of transactions contained in the block - `committedIndex`: Falling block block height - `consNum`: Next consensus block high - `committedHash`: Drop Block Hash - `view`: Current View - `toview`: Next View - `changeCycle`: Current Timeout Clock Cycle -- `expectedCheckPoint`: The next block to be executed is high. +- `expectedCheckPoint`: Height of the next block to be executed - `Idx`: The index number of the current node -- `sealUntil`: The height of the block that can be packaged to generate the next block. In a system block scenario, the block can be packaged to generate the next block if and only if the disk height exceeds sealUntil. -- `waitResealUntil`: Same as above, the block height of the next block can be packaged to produce the next block, when there is a view switch.+ In the system block scenario, the next block can only be packaged if and only if the drop height exceeds waitResealUntil. +- `sealUntil`: The height of the block that can be packaged to generate the next block. 
In a system block scenario, the next block can be sealed if and only if the committed (on-disk) block height exceeds sealUntil +- `waitResealUntil`: Similar to sealUntil, but applies after a view change; in a system block scenario, the next block can be sealed if and only if the committed block height exceeds waitResealUntil - `unsealedTxs`: Number of unpackaged transactions in the trading pool - `nodeId`: current consensus node id @@ -116,7 +116,7 @@ The fields in the log are described as follows: ```eval_rst .. note:: - The command "tail" can be used in the node directory.-f log/* | grep "connected count""Check the network status. If the number of network connections in the log output does not meet expectations, run the-anp | grep fisco-bcos "command to check node connections + Run `tail -f log/* | grep "connected count"` in the node directory to check the network status. If the number of network connections in the log output does not meet expectations, run `netstat -anp | grep fisco-bcos` to check node connections ``` An example of a log is as follows: diff --git a/3.x/en/docs/operation_and_maintenance/log/system_log_audit.md b/3.x/en/docs/operation_and_maintenance/log/system_log_audit.md index 58af26f50..fa1638eab 100644 --- a/3.x/en/docs/operation_and_maintenance/log/system_log_audit.md +++ b/3.x/en/docs/operation_and_maintenance/log/system_log_audit.md @@ -4,13 +4,13 @@ Tags: "system transaction" "log audit" --- -This article describes the log content of the key steps in the FISCO BCOS blockchain node. It is intended that users can quickly understand the status of the blockchain node and the basis of transaction execution errors.。 +This article describes the logs output at key steps of a FISCO BCOS blockchain node, so that users can quickly understand the node's status and diagnose transaction execution errors. ## 1. 
Block Status Log ### 1.1 Block Packing Successfully -The consensus node receives the transaction and broadcasts the actively packaged consensus proposal to other nodes for publicity within the minimum packaging time. The proposal contains all the transactions that have been sorted for execution.。 +Within the minimum sealing interval, the consensus node receives transactions and broadcasts the consensus proposal it has sealed to the other nodes; the proposal contains all transactions that have been ordered for execution. - Log Keywords:++++++++++++++++ Generate proposal - Log level: INFO @@ -30,11 +30,11 @@ info|2022-11-24 10:32:35.034810|[CONSENSUS][SEALER]++++++++++++++++ Generate pro ### 1.2 Block Start Execution -When consensus receives enough Pre on a block proposal-Commit proposal, which will call the Scheduler module to request the execution of the block。 +When the consensus module receives enough Pre-Commit messages for a block proposal, it calls the Scheduler module to request execution of the block. -When the synchronization module synchronizes blocks from other nodes to the local, it also calls the Scheduler module to request the execution of blocks to verify the validity of the blocks.。 +When the synchronization module synchronizes blocks from other nodes, it also calls the Scheduler module to request execution of the blocks in order to verify their validity. -- Log Keywords: ExecuteBlock request +- Log keyword: ExecuteBlock request - Log level: INFO - Log example: @@ -45,15 +45,15 @@ info|2022-11-24 10:32:35.046634|[SCHEDULER][METRIC][blk-32]ExecuteBlock request, - Log interpretation: - blk-32: The block height executed is 32 - gasLimit: gas limit of the current execution block - - verify: Whether to verify. If the current node is the leader node, no verification is required. - - signatureSize: Number of signatures. If the current node is the leader node, the signature does not need to be verified. 
+ - verify: Whether to verify the block. If the current node is the leader, no verification is required + - signatureSize: Number of signatures. If the current node is the leader, the signatures do not need to be verified - tx count: The number of intra-block transactions, which is greater than 0 if the block is synchronized from another node - - meta tx count: The number of transaction meta-information in the block, which is greater than 0 if the request was initiated by the consensus module. + - meta tx count: The number of transaction meta-information entries in the block, which is greater than 0 if the request was initiated by the consensus module - version: Version number of the currently executed block ### 1.3 Block Execution Success -This log is output when the consensus module or synchronization module requests the Scheduler module to execute the block.。 +This log is output when the consensus or synchronization module requests the Scheduler module to execute a block. - Log Keywords: asyncExecuteBlock success - Log level: INFO @@ -76,7 +76,7 @@ info|2022-11-24 10:32:35.048912|[CONSENSUS][Core][METRIC]asyncExecuteBlock succe ### 1.4 Block Drop -After the consensus or synchronization module passes the verification after executing the block, it will initiate a call to the Scheduler to take the initiative to drop the disk.。 +After the consensus or synchronization module executes and verifies the block, it calls the Scheduler to commit the block to disk. - Log Keywords: ^ ^ ^ ^ ^ ^ ^ ^ Report - Log level: INFO @@ -134,7 +134,7 @@ When the initiating node joins the consensus node, joins the observation node, m info|2022-11-24 12:27:04.210708|[EXECUTOR][PRECOMPILED][ConsensusPrecompiled]addSealer,nodeID=97af395f31cd52868162c790c2248e23f65c85a64cd0581d323515f6afffc0138279292a55f7bd706f8f1602f142b12a3407a45334eb0cf7daeb064dcec69369 ``` -- Log Interpretation +- Log interpretation - addSealer: The example above is a 
request to add to a consensus node, in addition to the following types: - addObserver: Add to Watch Node - setWeight: Set consensus node weights @@ -145,7 +145,7 @@ ### 3.1 Deployment contract permission writing -The governance committee has approved the write proposal for setting the deployment contract type and deployment contract permissions, which will be output in the node log.。 +When the governance committee approves a write proposal that sets the deployment contract type or deployment contract permissions, this is output in the node log. - Log Keywords: AuthManagerPrecompiled - Log level: INFO @@ -165,13 +165,13 @@ info|2022-11-24 12:47:39.784532|[EXECUTOR][PRECOMPILED][AuthManagerPrecompiled]s ``` - Log Interpretation 2 - - Set the deployment permissions of an account + - Set deployment permissions for an account - account: Account address - isClose: Whether to turn off deployment permissions ### 3.2 Contract permission writing -The contract administrator sets the ACL type of a contract interface and sets an account to write the ACL of the contract interface, which will be output in the node log.。 +When the contract administrator sets the ACL type of a contract interface, or writes an account's ACL for that interface, this is output in the node log. - Log Keywords: ContractAuthMgrPrecompiled - Log level: INFO @@ -182,7 +182,7 @@ info|2022-11-24 12:47:04.345608|[EXECUTOR][PRECOMPILED][blk-31909][ContractAuthM ``` - Log Interpretation 1 - - Set the permission type of the contract interface func + - Set the permission type of the contract interface func - blk: Block height - path: Contract Address - func: Contract Interface Selector @@ -195,7 +195,7 @@ isClose=false ``` - Log Interpretation 2 - - set the func permission of an account on a contract. 
+ - Set the func permission of an account on a contract - blk: Block height - path: Contract Address - func: Contract Interface Selector @@ -204,7 +204,7 @@ isClose=false ### 3.3 Contract Status Write -The contract administrator sets the status type of the contract to which it belongs, which will be output in the node log.。 +When the contract administrator sets the status type of a contract it administers, this is output in the node log. - Log Keywords: ContractAuthMgrPrecompiled - Log level: INFO diff --git a/3.x/en/docs/operation_and_maintenance/node_management.md b/3.x/en/docs/operation_and_maintenance/node_management.md index 6f58823a9..f23b4e027 100644 --- a/3.x/en/docs/operation_and_maintenance/node_management.md +++ b/3.x/en/docs/operation_and_maintenance/node_management.md @@ -15,12 +15,12 @@ FISCO BCOS introduces [free nodes, observer nodes and consensus nodes](../design The console provides the **[addSealer](./console/console_commands.html#addsealer)**, **[addObserver](./console/console_commands.html#addobserver)** and **[removeNode](./console/console_commands.html#removenode)** commands to convert a specified node into a consensus node, an observer node, or a free node, and the **[getSealerList](./console/console_commands.html#getsealerlist)**, **[getObserverList](./console/console_commands.html#getobserverlist)** and **[getNodeIDList](./console/console_commands.html#getnodeidlist)** commands to view the list of consensus nodes, the list of observer nodes, and the list of all nodes in the current group. -- addSealer: Sets the corresponding node as a consensus node based on the node NodeID; -- addObserver: Set the corresponding node as the observation node based on the node NodeID.; -- removeNode: Sets the corresponding node as a free node based on the node's NodeID.; -- getSealerList: View the list of consensus nodes in a group; -- getObserverList: View the list of observation nodes in a group; -- getNodeIDList: View the NodeIDs of all 
other nodes to which the node is connected。 +- addSealer: Sets the corresponding node as a consensus node based on its NodeID; +- addObserver: Sets the corresponding node as an observer node based on its NodeID; +- removeNode: Sets the corresponding node as a free node based on its NodeID; +- getSealerList: Views the list of consensus nodes in a group; +- getObserverList: Views the list of observer nodes in a group; +- getNodeIDList: Views the NodeIDs of all other nodes to which the node is connected. Example: Convert the specified node into a consensus node, an observer node, and a free node. The main operation commands are as follows: @@ -30,7 +30,7 @@ Convert the specified node into a consensus node, an observer node, and a free n Before node admission operations, ensure that: - - The node ID of the operation node exists. You can run cat conf / node.nodeid in the node directory to obtain the node ID. + - The node ID of the operated node exists. It can be obtained by running `cat conf/node.nodeid` in the node directory - The consensus of all nodes in the blockchain that the node joins is normal: the node with normal consensus will output+++Log ``` @@ -78,29 +78,29 @@ $ bash start.sh ## Operation Case -The group expansion operation and node withdrawal operation are described in detail below in combination with specific operation cases.。The expansion operation is divided into two phases, namely**Join Node to Network**、**Add node to group**。The exit operation is also divided into two phases for the**Exit node from group**、**Exit node from network**。 +The group expansion and node withdrawal operations are described in detail below with concrete cases. The expansion operation has two phases: **joining the node to the network** and **adding the node to the group**. The exit operation likewise has two phases: **exiting the node from the group** and **exiting the node from the network**. ### Operation mode -- Modify node configuration: 
The node restarts after modifying its own configuration. The operations involved include**Join / exit of network, inclusion / removal of CA blacklist**。 -- Transaction consensus on-chain: The node sends on-chain transactions to modify the configuration items that require group consensus. The operations involved include**Modification of node type**。Currently, the way to send transactions is the precompiled service interface provided by the console and SDK.。 -- RPC query: Use the curl command to query information on the chain. The operations involved include**Querying Group Nodes**。 +- Modify node configuration: The node modifies its own configuration and restarts for the change to take effect. Operations involved include **joining/exiting the network and adding to/removing from the CA blacklist**. +- On-chain transaction consensus: The node sends on-chain transactions to modify configuration items that require group consensus. Operations involved include **modification of the node type**. Currently, transactions are sent through the precompiled service interfaces provided by the console and SDK. +- RPC query: Use the curl command to query information on the chain. 
The operations involved include **querying group nodes**. ### Operation steps -This section will take the following figure as an example to describe the above expansion operation and network withdrawal operation.。The dotted line indicates that the nodes can communicate with each other, and the solid line indicates that the nodes have a group relationship on the basis of communication, and different colors distinguish different group relationships.。The following figure shows a network with three groups, where group Group3 has three nodes。Whether Group3 has intersection nodes with other groups does not affect the generality of the following operations。 +This section uses the following figure as an example to describe the expansion and withdrawal operations above. Dotted lines indicate that nodes can communicate with each other; solid lines indicate that, on top of communication, the nodes have a group relationship, with different colors distinguishing different groups. The figure shows a network with three groups, where group Group3 has three nodes. Whether Group3 shares nodes with other groups does not affect the generality of the following operations. ![](../../images/node_management/multi_ledger_example.png) -< center > Group Examples < / center > +
Group Examples
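The node descriptions in this example abbreviate each nodeID to its first four bytes, taken from the node's `conf/node.nodeid` file. A small sketch of how such a prefix can be derived; the directory and nodeID value here are mocked up purely for illustration:

```shell
# Mock a node directory with a node.nodeid file (the real file is generated
# by gen_node_cert.sh); this nodeID value is made up for illustration only.
mkdir -p mock_node2/conf
printf 'b231b30980f5a3c41122334455667788' > mock_node2/conf/node.nodeid

# The first 4 bytes of the nodeID correspond to the first 8 hex characters.
nodeid=$(cat mock_node2/conf/node.nodeid)
prefix=$(printf '%s' "$nodeid" | cut -c1-8)
echo "first four bytes of nodeID: $prefix"

# Clean up the mock directory.
rm -rf mock_node2
```

On a real node, the same `cat conf/node.nodeid` command mentioned elsewhere in this document yields the full nodeID.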
For example, the related node information of Group3 is as follows: -Node 1 has a directory name of 'node0' and an IP port of 127.0.0.1:30400, nodeID first four bytes for b231b309... +Node 1 has the directory name 'node0', IP and port 127.0.0.1:30400, and the first four bytes of its nodeID are b231b309... -Node 2 has a directory name of 'node1' and an IP port of 127.0.0.1:30401, nodeID first four bytes for aab37e73... +Node 2 has the directory name 'node1', IP and port 127.0.0.1:30401, and the first four bytes of its nodeID are aab37e73... -Node 3 has a directory name of 'node2' and an IP port of 127.0.0.1:30402, the first four bytes of nodeID are d6b01a96... +Node 3 has the directory name 'node2', IP and port 127.0.0.1:30402, and the first four bytes of its nodeID are d6b01a96... #### Node A joins the network @@ -110,18 +110,18 @@ Node 3 was not originally in the network and now joins the network。 Operation sequence: -1. Enter the nodes directory at the same level, pull down and execute 'gen _ node _ cert.sh' to generate the node directory. The directory name is node2. There is a 'conf /' directory in node2.; +1. Enter the directory at the same level as the nodes directory, download and execute 'gen_node_cert.sh' to generate the node directory; the directory name is node2, and node2 contains a 'conf/' directory; ``` # Get Script $ curl -#LO https://raw.githubusercontent.com/FISCO-BCOS/FISCO-BCOS/master-2.0/tools/gen_node_cert.sh && chmod u+x gen_node_cert.sh -# Execution,-c is the ca path provided by the generated node, agency is the organization name,-o is the directory name of the node to be generated (if it is a state secret node, use the-g parameters) +# Execute: -c is the CA path provided for the generated node, agency is the organization name, and -o is the directory name of the node to be generated (for an SM (national cryptography) node, add the -g parameter) $ ./gen_node_cert.sh -c nodes/cert/agency -o node2 ``` ```eval_rst .. 
note:: - - If you cannot download for a long time due to network problems, try 'curl-#LO https://gitee.com/FISCO-BCOS/FISCO-BCOS/raw/master-2.0/tools/gen_node_cert.sh` + - If the download keeps failing due to network problems, try `curl -#LO https://gitee.com/FISCO-BCOS/FISCO-BCOS/raw/master-2.0/tools/gen_node_cert.sh` ``` 2. Copy node2 to 'nodes / 127.0.0.1 /' and the same level as other node directories ('node0' and 'node1'); @@ -172,7 +172,7 @@ $ ./node2/start.sh 7. Confirm that the connection between node 3 and node 1 and node 2 has been established, and the operation of joining the network is completed。 ``` -# Before you open the DEBUG level log, view the number of nodes connected to the node (node2) and the information about the connected nodes (nodeID). +# With DEBUG-level logging enabled, view the number of nodes connected to this node (node2) and the connected nodes' information (nodeID) # The following log shows that node node2 has established a connection with two nodes (the first 4 bytes of nodeID of the node are b231b309 and aab37e73) $ tail -f node2/log/log* | grep P2P debug|2019-02-21 10:30:18.694258| [P2P][Service] heartBeat ignore connected,endpoint=127.0.0.1:30400,nodeID=b231b309... @@ -182,11 +182,11 @@ info|2019-02-21 10:30:18.694294| [P2P][Service] heartBeat connected count,size=2 ```eval_rst .. 
note:: - - If whitelists are enabled, ensure that all nodes have been configured in the whitelists in the config.ini of all nodes, and refresh the whitelist configuration to the nodes correctly.。Reference to CA Black and White List; - - The rest of the configuration of config.ini copied from node 1 remains unchanged; - - In theory, nodes 1 and 2 do not need to modify their own P2P node connection list to complete the expansion of node 3.; - - The group selected in step 5 is recommended to be the group that node 3 needs to join later.; - - It is recommended that you add the information of node 3 to the P2P node connection list of config.ini of nodes 1 and 2 and restart nodes 1 and 2 to maintain the fully interconnected state of all nodes in the network.。 + - If whitelists are enabled, make sure all nodes are configured in the whitelists in every node's config.ini, and correctly refresh the whitelist configuration to the nodes. See CA Black and White List; + - The rest of the config.ini configuration copied from node 1 remains unchanged; + - In theory, nodes 1 and 2 do not need to modify their own P2P node connection lists to complete the expansion with node 3; + - The group selected in step 5 should preferably be the group that node 3 will join later; + - It is recommended to add node 3's information to the P2P node connection lists in the config.ini of nodes 1 and 2 and restart those nodes, keeping all nodes in the network fully interconnected. ``` #### Node A exits the network @@ -213,8 +213,8 @@ nohup: appending output to ‘nohup.out’ ```eval_rst .. note:: - **Node 3 needs to exit the group before exiting the network. The exit order is guaranteed by the user, and the system no longer checks**; - - The network connection is initiated by the node. If step 2 is missing, node 3 can still sense the P2P connection request initiated by node 1 and node 2 and establish a connection. 
You can use the CA blacklist to avoid this situation.。 - - If the whitelist is enabled, you must delete the whitelist configuration of the exit node from the config.ini of all nodes and correctly swipe the new whitelist configuration into the node.。Reference to CA Black and White List。 + - The network connection is initiated by the node. If step 2 is missing, node 3 can still sense the P2P connection requests initiated by nodes 1 and 2 and establish connections; the CA blacklist can be used to avoid this. + - If the whitelist is enabled, delete the exiting node's whitelist configuration from every node's config.ini and correctly refresh the new whitelist configuration to the nodes. See CA Black and White List. ``` #### Node A joins the group @@ -226,15 +226,15 @@ Group Group3 The original node 1 and node 2, the two nodes in turn out of the bl Operation sequence: 1. Node 3 joins the network; -2. Use the console addSealer to set node 3 as based on the nodeID of node 3.**consensus node**; -3. Use the console getSealerList to check whether the consensus node of group3 contains the nodeID of node 3. If yes, join the group.。 +2. Based on node 3's nodeID, use the console addSealer command to set node 3 as a **consensus node**; +3. Use the console getSealerList command to check whether the consensus node list of group3 contains node 3's nodeID; if so, joining the group is complete. ```eval_rst .. note:: - - The NodeID of node 3 can be obtained by using 'cat nodes / 127.0.0.1 / node2 / conf / node.nodeid'.; + - The NodeID of node 3 can be obtained with 'cat nodes/127.0.0.1/node2/conf/node.nodeid'; - When node 3 is started for the first time, the initial list of configured group nodes is written to the group node system table. After block synchronization,**The group node system table of each group node is consistent**; - - **Node 3 needs to complete the network access before performing the operation of joining the group. 
The system checks the operation sequence.**; - - **The group fixed profile of node 3 must be the same as that of nodes 1 and 2.**。 + - **Node 3 must complete network access before joining the group; the system checks this operation sequence**; + - **The fixed group configuration file of node 3 must be the same as that of nodes 1 and 2**. ``` #### A node exits the group @@ -245,12 +245,12 @@ Group Group3 The original node 1, node 2, and node 3, the three nodes in turn ou Operation sequence: -1. Use the console removeNode to set node 3 as based on the NodeID of node 3.**free node**; -2. Use the console getSealerList to check whether the consensus node of group3 contains the nodeID of node 3. If it has disappeared, the exit group operation is complete.。 +1. Based on node 3's NodeID, use the console removeNode command to set node 3 as a **free node**; +2. Use the console getSealerList command to check whether the consensus node list of group3 still contains node 3's nodeID; if it no longer does, the group exit operation is complete. Supplementary note: ```eval_rst .. 
note:: - - Node 3 can perform an exit operation as a consensus node or an observer node。 + - Node 3 can perform the exit operation either as a consensus node or as an observer node. ``` diff --git a/3.x/en/docs/operation_and_maintenance/operation_and_maintenance.md b/3.x/en/docs/operation_and_maintenance/operation_and_maintenance.md index cfe67cb5c..90eb0fb52 100644 --- a/3.x/en/docs/operation_and_maintenance/operation_and_maintenance.md +++ b/3.x/en/docs/operation_and_maintenance/operation_and_maintenance.md @@ -4,19 +4,19 @@ Tags: "Operation and Maintenance" ## Deploy -Alliance chain is a distributed network and distributed system composed of multiple nodes, the node geographic location rate belongs to a certain partition, and the attribution rate belongs to an organization.。The deployment of alliance chain needs to consider many factors such as organization, partition, node, etc.。Here are some basic principles of deployment: +A consortium chain is a distributed network and distributed system composed of multiple nodes; geographically each node is located in a certain zone, and organizationally it belongs to an institution. Deploying a consortium chain therefore needs to consider organizations, zones, nodes, and other factors. Here are some basic deployment principles: ||Purpose|Content |:--|:--|:-- |1|Consensus has fault-tolerant space|The number of nodes satisfies N = 3F+1. The chain needs at least 4 nodes |2|partition fault tolerance|Number of consensus nodes per partition should not exceed F |3|Avoiding single points of failure within the mechanism|At least 2 nodes per institution -|4|Save resources and increase efficiency|Some nodes in the mechanism are observation nodes. -|5|Institutional Weight Adjustment|Adjust the number of nodes in the organization and the weight of the consensus node according to the weight agreed by all parties. 
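The N = 3F+1 rule in the deployment table can be checked with simple shell arithmetic. A sketch (the `max_faulty` helper is ours, for illustration, not a FISCO BCOS tool) of the fault-tolerance bound for a few chain sizes:

```shell
# PBFT fault tolerance: with N consensus nodes, at most F faulty nodes are
# tolerated, where N >= 3F + 1, i.e. F = (N - 1) / 3 (integer division).
max_faulty() {
  echo $(( ($1 - 1) / 3 ))
}

for n in 4 7 10; do
  echo "N=$n tolerates F=$(max_faulty "$n") faulty node(s)"
done
```

This is why the table requires at least 4 nodes: with N=3 the bound gives F=0, so a single faulty node would halt consensus.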
+|4|Save resources and increase efficiency|Make some of an institution's nodes observer nodes +|5|Institutional weight adjustment|Adjust the number of nodes per institution and the consensus node weights according to the weights agreed by all parties ## Log Description -FISCO BCOS provides a standardized log output format, which can be used to analyze the running status of the system, locate problems, monitor statistics, etc.。 +FISCO BCOS provides a standardized log output format, which can be used to analyze the running status of the system, locate problems, collect monitoring statistics, and so on. ```bash # Log format: @@ -29,37 +29,37 @@ info|2022-11-21 20:00:35.479505|[SCHEDULER][blk-1]BlockExecutive prepare: fillBl where log _ level is the log level, from small to large, including trace, debug, info, warning, error, and fatal, time indicates the log printing time, [module _ name] indicates the module name, including consensus, synchronization, transaction pool, storage, etc., and content is the specific log content。General log analysis and problem location, you can view [Log Description](./log/index.md)。 -The log output level is configured in the config.ini file. In the test environment, it is recommended to set it to the trace or debug level, which can output logs of all levels for easy analysis and positioning.。In a production environment, we recommend that you set it to the info level to reduce the amount of log output (the amount of trace and debug logs is large) and avoid excessive log disk usage.。 +The log output level is configured in the config.ini file. 
In the test environment, it is recommended to set it to the trace or debug level, which outputs logs of all levels for easier analysis and troubleshooting. In a production environment, we recommend the info level, which reduces log volume (trace and debug logs are large) and avoids excessive disk usage. ## monitoring alarm -The monitoring of FISCO BCOS includes two parts: blockchain monitoring and system monitoring.。 +FISCO BCOS monitoring consists of two parts: blockchain monitoring and system monitoring. -[Blockchain monitoring] FISCO BCOS provides its own system monitoring tool monitor.sh, which can monitor node survival, consensus status, and ledger status.。The monitor.sh tool can connect the output content to the organization's own operation and maintenance monitoring system, so that blockchain monitoring can be connected to the organization's operation and maintenance monitoring platform.。 +[Blockchain monitoring] FISCO BCOS provides its own monitoring tool monitor.sh, which monitors node liveness, consensus status, and ledger status. The output of monitor.sh can be fed into an organization's own operation and maintenance monitoring system, connecting blockchain monitoring to the organization's O&M platform. -[System Monitoring] In addition to monitoring the FISCO BCOS node itself, it is also necessary to monitor relevant indicators from the perspective of the system environment.。It is recommended that the operation and maintenance should monitor the CPU, memory, bandwidth consumption and disk consumption of the node to find out the abnormal system environment in time.。FISCO BCOS3.0 can monitor whether the blockchain is working properly, including monitoring consensus, abnormal synchronization, and disk space. It also provides a simple way to access the user alarm system. 
You can view the [light _ monitor.sh monitoring tool](../operation_and_maintenance/light_monitor.md)。 +[System Monitoring] In addition to monitoring the FISCO BCOS node itself, it is also necessary to monitor relevant indicators from the perspective of the system environment。It is recommended that operations staff monitor the node's CPU, memory, bandwidth, and disk consumption to detect abnormal system conditions in time。FISCO BCOS 3.0 can monitor whether the blockchain is working properly, including monitoring consensus, abnormal synchronization, and disk space. It also provides a simple way to connect to the user's alarm system. You can view the [light_monitor.sh monitoring tool](../operation_and_maintenance/light_monitor.md)。 ## Data Backup and Recovery FISCO BCOS supports two data backup methods, you can choose the appropriate method according to your needs。 -[Method 1]: Stop the node, package and compress the data directory of the node as a whole and back it up to another location, decompress the backup data when needed, and restore the node。This method is equivalent to a snapshot of the data in a ledger state for subsequent recovery from this state. For details, see [Node Monitoring Configuration].(https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/tutorial/air/build_chain.html?highlight=%E7%9B%91%E6%8E%A7#id4)。 +[Method 1]: Stop the node, package and compress its data directory as a whole, back it up to another location, then decompress the backup and restore the node when needed。This method is equivalent to taking a snapshot of the ledger state for later recovery from that state. 
For details, see [Node Monitoring Configuration](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/tutorial/air/build_chain.html?highlight=%E7%9B%91%E6%8E%A7#id4)。 -[Method 2]: According to the data archiving service tool, the data on the chain can be archived and stored.。When you need to restore or add new nodes, you can restore the archived data to realize data backup and recovery. For specific data archiving operations, please refer to [Data Archiving Usage](../operation_and_maintenance/data_archive_tool.md) +[Method 2]: Using the data archive service tool, on-chain data can be archived and stored。When you need to restore or add new nodes, you can restore the archived data, realizing data backup and recovery. For specific data archiving operations, please refer to [Data Archiving Usage](../operation_and_maintenance/data_archive_tool.md) -The advantage of method 1 is that there is no need to deploy new services and operations, and the operation and maintenance is simple. The disadvantage is that a historical state is backed up, and the data recovered from this state is not the latest data. After recovery, the updated data of the ledger needs to be synchronized from other nodes.。Method 2 is the opposite of Method 1, which requires the deployment of services, which is more expensive to operate and maintain, but can be restored to the latest ledger state at any time.。 +The advantage of Method 1 is that no new services need to be deployed, so operation and maintenance stay simple. The disadvantage is that it backs up a historical state, so the data recovered from it is not the latest. 
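The tar-snapshot flow of Method 1 can be sketched as follows (a minimal sketch: the `node0` directory layout and file names are illustrative assumptions, and a real node must be stopped first so its database files are consistent):

```shell
# Sketch of Method 1: snapshot a stopped node's data directory, then restore it.
mkdir -p node0/data
echo "block-data" > node0/data/ledger.db        # stand-in for the node's ledger files
tar -czf node0-data-backup.tar.gz node0/data    # package and compress the data directory
rm -rf node0/data                               # later: data is lost or must be reset
tar -xzf node0-data-backup.tar.gz               # decompress the backup to restore the snapshot
cat node0/data/ledger.db                        # snapshot state is back; restart the node afterwards
```

The real procedure must run against a stopped node so the storage engine's files are not mid-write.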
After recovery, the updated data of the ledger needs to be synchronized from other nodes。Method 2 is the opposite: it requires deploying a service, so it costs more to operate and maintain, but the ledger can be restored to its latest state at any time。 ## Expansion method -FISCO BCOS expansion mainly considers two aspects: the expansion of the number of nodes and the expansion of the number of disks.。 +FISCO BCOS expansion mainly considers two aspects: expanding the number of nodes and expanding disk capacity。 [node number expansion]: FISCO BCOS supports dynamic addition and removal of nodes, and can change the identity status of nodes (consensus, observation, free)。Reject and change status can be done directly through console commands。To add a node, you need to perform the following steps: 1. Prepare certificates for new nodes and issue node certificates with agency certificates; 2. Prepare the machine of the new node, allocate the RPC and P2P ports, ensure that the ports can be connected, and ensure that the P2P ports can communicate with other nodes; -3. Generate the configuration of the new node, mainly the network configuration in config.ini。During configuration, we recommend that you copy a copy from another node and modify the network-related configuration items on this basis.; -4. Publish the new node to the machine, start the node, verify whether the network connection between the new node and other nodes is established, and eliminate exceptions such as certificate problems and network policy problems.; +3. Generate the configuration of the new node, mainly the network configuration in config.ini。During configuration, we recommend copying the configuration from another node and then modifying the network-related items; +4. 
Deploy the new node to the machine, start the node, verify that the network connections between the new node and the other nodes are established, and rule out exceptions such as certificate problems and network policy problems; 5. Send a command from the console to add the new node as an observation node; -6. At this time, the node does not participate in the consensus, it will synchronize the ledger and wait for the block height to reach an agreement with other nodes.; +6. At this point the node does not participate in consensus; it synchronizes the ledger until its block height catches up with the other nodes; 7. Send a command from the console to change the new node status to the consensus node。 FISCO BCOS supports node expansion regardless of air, pro or max. The above steps are the same. For details, please refer to [Air node expansion](../tutorial/air/expand_node.md)[Pro node expansion](../tutorial/pro/expand_node.md)[Max Node Expansion](../tutorial/max/max_builder.md). @@ -73,7 +73,7 @@ Air chain data disk expansion: FISCO BCOS uses the rocksdb storage engine by def 4. Migrate node to new disk; 5. Restart the node; 6. Send a command from the console to add the node to the consensus。 - Some cloud platforms provide one-click upgrade, expansion of hard disk and other functions, the above 3-4 steps can replace this function。 + Some cloud platforms provide functions such as one-click upgrade and hard disk expansion; such a function can replace steps 3-4 above。 Max Chain Data Disk Expansion:We recommend that you use the TIKV cluster version for Max nodes in the production environment. 
The TiKV cluster version can be used as the backend of the nodes to easily and simply scale out。For specific expansion and contraction, please refer to [TIKV Expansion](../tutorial/max/max_builder.md)。 @@ -81,28 +81,28 @@ Max Chain Data Disk Expansion:We recommend that you use the TIKV cluster version FISCO BCOS supports node-friendly, contract-compatible upgrades。 -[Node upgrade]: FISCO BCOS uses compatibility _ version to control the compatibility version of the block chain. Compatibility _ version must be determined in the construction chain. This configuration cannot be changed during subsequent node upgrades.。For example, the compatibility _ version is 3.1.0 when the chain is established, and the compatibility _ version configuration must remain at 3.1.0 after subsequent node upgrades to 3.2.0 and 3.3.0.。The node upgrade steps are as follows: +[Node upgrade]: FISCO BCOS uses compatibility_version to control the compatibility version of the blockchain. compatibility_version must be determined when the chain is built, and this configuration cannot be changed during subsequent node upgrades。For example, if compatibility_version is 3.1.0 when the chain is established, the compatibility_version configuration must remain 3.1.0 after the nodes are later upgraded to 3.2.0 and 3.3.0。The node upgrade steps are as follows: 1. Stop Node; -2. Back up the FICO of the old version node-bcos binary executable, replaced with new version; +2. Back up the fisco-bcos binary executable of the old version node and replace it with the new version; 3. Restart the node; 4. Check the consensus and synchronization to ensure the normal operation of the node。 Contract upgrade can refer to the document [upgrade of smart contract](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/contract_life_cycle.html#id5)。 ## Key Management -FISCO BCOS involves the management of private keys and certificates of chains, institutions, nodes, and SDKs. 
If a self-signed certificate is used (FISCO BCOS is provided by default), O & M needs to manage all these private keys and certificates and make backups.。The specific management method can be the organization's own management system, or the key escrow service provided by FISCO BCOS (the service needs to be deployed and maintained).。The certificates and private keys involved include: +FISCO BCOS involves the management of private keys and certificates of chains, institutions, nodes, and SDKs. If self-signed certificates are used (provided by FISCO BCOS by default), O&M needs to manage all these private keys and certificates and make backups。The specific management method can be the organization's own management system, or the key escrow service provided by FISCO BCOS (the service needs to be deployed and maintained)。The certificates and private keys involved include: -1. The private key and certificate of the chain. +1. The private key and certificate of the chain 2. Private key and certificate of the institution -3. The node's private key and certificate. +3. The node's private key and certificate 4. SDK private key and certificate -All keys and certificates support the national secret. The generated national secret certificate and private key have their own sm prefix. +All keys and certificates support the Chinese SM (national secret) algorithms. The generated SM certificate and private key carry an sm prefix. 
For example, the normal key and certificate are ca.key and ca.crt, and the SM (national secret) private key and certificate are sm_ca.key and sm_ca.crt。 ## TLS Communication Certificate Maintenance -In order to ensure the security of system communication operation and maintenance, FISCO BCOS regularly updates the TLS communication key of nodes to prevent attackers from analyzing the key by intercepting a large number of ciphertexts over a long period of time.。 +To keep system communication secure, FISCO BCOS regularly updates the TLS communication keys of nodes to prevent attackers from deriving a key by intercepting a large amount of ciphertext over a long period of time。 Key update is divided into two methods: updating all certificates and keys of the node and updating only the TLS communication certificate of the node. The steps for updating the root certificate are as follows: 1. Back up the original CA certificate and key; @@ -119,7 +119,7 @@ To update only the node TLS communication certificate, follow these steps: If the certificate is compromised, the longer the certificate is used, the greater the loss。 Therefore, the use of the certificate, should set the validity period, when the certificate exceeds the validity period or stop using, the certificate is destroyed, the specific destruction process is as follows: -1. Check the validity period of the node communication certificate. If the certificate expires, the certificate will be archived and destroyed. If the key is stopped, the user can also take the initiative to destroy the certificate after the certificate is archived.; +1. Check the validity period of the node communication certificate. Expired certificates are archived and then destroyed; if a key is retired, the user can also proactively destroy the certificate after archiving it; 2. 
Update the node TLS communication key to generate a new communication certificate; 3. Back up the new certificate, restart the node, and enable the new certificate。 diff --git a/3.x/en/docs/operation_and_maintenance/storage_tool.md b/3.x/en/docs/operation_and_maintenance/storage_tool.md index 4b0932534..b6f302a54 100644 --- a/3.x/en/docs/operation_and_maintenance/storage_tool.md +++ b/3.x/en/docs/operation_and_maintenance/storage_tool.md @@ -11,17 +11,17 @@ The storage read / write tool is used to read and write storage data and support - Read a single key in a table - Read full table data - Modify a single KV in a table -- Compare the data of two nodes -- State data and chain data size in statistics node database +- Compare data of two nodes +- Statistics of state data and chain data in the node database ## 使用 -The tool needs to be set when compiling the source code.-DTOOLS = on 'Compile option, the tool needs to run under the node directory to read the node's configuration file。 +The '-DTOOLS=on' compilation option must be set when compiling the tool from source. The tool must run in the node directory so it can read the node's configuration file。 ```bash # Open TOOLS option at compile time cmake -DTOOLS=on .. 
-# The compiled tools are located in build / tools / storage-tool/storageTool +# The compiled tool is located at build/tools/storage-tool/storageTool $ ./tools/storage-tool/storageTool -h storage tool used to read/write the data of storage: -h [ --help ] help of storage tool @@ -43,7 +43,7 @@ storage tool used to read/write the data of storage: ### Read a single piece of data -`-r 'option is used to read a single piece of data in the table, the parameter is' [table name] [key] '。Can be combined '-H 'option to use,'-The H 'option will decode the key in the parameter with hex and use it to read the data.。Examples are as follows: +The '-r' option is used to read a single piece of data from a table; the parameter is '[table name] [key]'。It can be combined with the '-H' option, which hex-decodes the key in the parameter before using it to read the data。Examples are as follows: ```bash $ ./storageTool -r s_current_state current_number @@ -55,7 +55,7 @@ read s_current_state, key is current_number ### Traverse Table -`-I 'option is used to traverse the data in the table. The parameter is' [table name] '. The data in the table will be written to the file' [table name] .txt 'in the current directory. If there is' / 'in the table name, it will be replaced with an underscore.。Can be combined '-H 'option to use,'-The H 'option will hex encode the read data for easy viewing of binary data。Examples are as follows: +The '-i' option is used to traverse the data in a table. The parameter is '[table name]'. 
The data in the table is written to a '[table name].txt' file in the current directory; any '/' in the table name is replaced with an underscore。It can be combined with the '-H' option, which hex-encodes the data it reads so that binary data is easy to view。Examples are as follows: ```bash $ ./storageTool -i s_current_state @@ -72,7 +72,7 @@ db path : data, table : s_current_state ### Modify data -`-w 'option is used to modify a certain piece of data in the table, the parameter is' [table name] [key] [value] ', if the value is an empty string, this data is deleted, you can combine'-H 'option to use,'-The H 'option will decode the value in the parameter using hex and write。Examples are as follows: +The '-w' option is used to modify a piece of data in a table. The parameter is '[table name] [key] [value]'. If the value is an empty string, the entry is deleted. It can be combined with the '-H' option, which hex-decodes the value in the parameter before writing it。Examples are as follows: ```bash # Read @@ -111,7 +111,7 @@ get row not found,,table=s_current_state,key=current_number ### statistical function -`-s' option is used in statistical storage**Non-state data**Size of storage occupied, '-S 'option for statistical storage**Status Data**Size of storage occupied。The statistical result is the size of the data in memory, and the sum will be larger than the actual size of RocksDB, because RocksDB will have compression。Examples are as follows: +The '-s' option reports the storage size occupied by **non-state data**, and the '-S' option reports the storage size occupied by **state data**。The statistics reflect the in-memory size of the data, so the total is larger than the actual RocksDB size, because RocksDB compresses data。Examples are as follows: ```bash $ du -sh ./data/ @@ -136,7 +136,7 @@ s_hash_2_receipt size is 16.5311GB ### Contrast function -`-The C 'option is used to compare the data of two nodes, which will be compared table by table, 
where the difference in' importTime 'is ignored during the transaction comparison, and the difference in the signature list is ignored during the block header comparison.。Examples are as follows: +The '-C' option compares the data of two nodes table by table; differences in 'importTime' are ignored when comparing transactions, and differences in the signature list are ignored when comparing block headers。Examples are as follows: ```bash $ ./storageTool -C rocksdb ../node1/data/ diff --git a/3.x/en/docs/operation_and_maintenance/stress_testing.md b/3.x/en/docs/operation_and_maintenance/stress_testing.md index 8e277caa0..bde9649fd 100644 --- a/3.x/en/docs/operation_and_maintenance/stress_testing.md +++ b/3.x/en/docs/operation_and_maintenance/stress_testing.md @@ -6,11 +6,11 @@ Tags: "Stress Test" "Java SDK Demo" ## Stress testing via Java SDK demo -Java SDK Demo is based on [Java SDK](./sdk/java_sdk/index.md)Benchmark test collection for stress testing FISCO BCOS nodes。Java SDK Demo provides contract compilation, which can convert Solidity contract files into Java contract files, and also provides sample stress test programs for transfer contracts, CRUD contracts, and AMOP functions.。 +Java SDK Demo is a benchmark collection based on the [Java SDK](./sdk/java_sdk/index.md) for stress testing FISCO BCOS nodes。It provides contract compilation, which can convert Solidity contract files into Java contract files, and also provides sample stress test programs for transfer contracts, CRUD contracts, and AMOP functions。 ### Step 1. Install the JDK -The test program in the Java SDK demo can be run in an environment where JDK 1.8 ~ JDK 14 is deployed. Before executing the test program, make sure that the required JDK version is installed.。Take the example of installing OpenJDK 11 on an Ubuntu system: +The test program in the Java SDK demo can be run in an environment where JDK 1.8 ~ JDK 14 is deployed. 
Before executing the test program, make sure that the required JDK version is installed。Take the example of installing OpenJDK 11 on an Ubuntu system: ```shell # Install open JDK 11 @@ -36,12 +36,12 @@ $ bash gradlew build ```eval_rst .. note:: - When the network cannot access GitHub, call the://gitee.com/FISCO-BCOS/java-sdk-Download the source code at demo。 + When the network cannot access GitHub, download the source code from https://gitee.com/FISCO-BCOS/java-sdk-demo。 ``` ### Step 3. Configure Demo -Before using the Java SDK demo, you must first configure the Java SDK, including certificate copy and port configuration. For more information, see [here](./sdk/java_sdk/quick_start.md)For more information, see [SDK Connection Certificate Configuration].(../cert_config.md)。Take FISCO BCOS 3.x Air as an example: +Before using the Java SDK demo, you must first configure the Java SDK, including certificate copy and port configuration. For more information, see [here](./sdk/java_sdk/quick_start.md) and [SDK Connection Certificate Configuration](../cert_config.md)。Take FISCO BCOS 3.x Air as an example: ```shell # Copy Certificate(Assuming that the SDK certificate is located in the ~ / fisco / nodes / 127.0.0.1 / sdk directory, change the path according to the actual situation) @@ -49,13 +49,13 @@ Before using the Java SDK demo, you must first configure the Java SDK, including # Copy Configuration File # Note: - # The default RPC port of the FISCO BCOS blockchain system is 20200. If you modify this port, modify the [network.peers] configuration option in config.toml. + # The default RPC port of the FISCO BCOS blockchain system is 20200. If you modify this port, modify the [network.peers] configuration option in config.toml $ cp conf/config-example.toml conf/config.toml ``` ### Step 4. 
Perform the sample stress test procedure -Java SDK Demo provides a series of stress testing programs, including serial transfer contract stress testing and parallel transfer contract stress testing. +Java SDK Demo provides a series of stress testing programs, including serial transfer contract stress testing and parallel transfer contract stress testing。 **Note: The following stress test programs are all EVM node execution environments. For more information about node configuration, see [Node Configuration](../tutorial/air/config.md)** @@ -63,7 +63,7 @@ Java SDK Demo provides a series of stress testing programs, including serial tra # Enter dist directory $ cd dist -# multi-contract-Intra-Contract Parallel Transfer Contract: +# Multiple contracts - intra-contract parallel transfers: # groupId: Group ID of pressure test # userCount: Number of accounts created, recommended (4 to 32) # count: Total transaction volume of pressure measurement @@ -71,7 +71,7 @@ $ cd dist java -cp 'conf/:lib/*:apps/*' org.fisco.bcos.sdk.demo.perf.PerformanceDMC [groupId] [userCount] [count] [qps] -# multi-contract-Cross-Contract Parallel Transfer +# Multiple contracts - cross-contract parallel transfers # groupId: Group ID of pressure test # userCount: Number of accounts created, recommended (4 to 32) # count: Total transaction volume of pressure measurement @@ -182,14 +182,14 @@ Example test environment: - Hardware Condition: Apple M1 Max(10 cores CPU),32GB LPDDR5 RAM,1T SSD - System version: macOS 12.0.1 -- Compilation environment: clang-1300.0.29.3,cmake 3.22.1 +- Compile environment: clang-1300.0.29.3, cmake 3.22.1 - FISCO BCOS version: master branch, Git Commit: c0e9dadb6e7ad1bbaf3128a27803226fb7ba6a9a, build type: Darwin / appleclang / release #### Pressure Test Material Preparation Build the four-node environment of FISCO BCOS Air version. 
Refer to the link: [Build the FISCO BCOS Alliance Chain of Air Version](../quick_start/air_installation.md) -If the source code is compiled to generate fisco-bcos binary, add '-DCMAKE _ BUILD _ TYPE = Release ', compile**Release**Binary version for higher performance and better pressure test performance。 +If you compile the fisco-bcos binary from source, add '-DCMAKE_BUILD_TYPE=Release' to build the **Release** binary, which has higher performance and gives better stress test results。 ```shell # linux @@ -240,7 +240,7 @@ vim ~/fisco/nodes/127.0.0.1/node0/config.genesis node.3=027f25b1597d363babe412962b05995d56033636cb061737beeb7e9a6c811e19f1fccf763acc8271ba542eb4fe5d798e9b06ce0e28ef73285e7b86778ad879ca: 1 ``` -If the node has already been started, you need to delete the data in the data directory of the node and restart the node to modify the creation block configuration.。 +If the node has already been started, you need to delete the contents of the node's data directory and restart the node for the modified genesis block configuration to take effect。 ```shell # Modify the config.ini configuration of node0 @@ -294,12 +294,12 @@ try to start node3 node3 start successfully ``` -Continue to prepare the pressure measurement program, this example uses' java-sdk-demo 'for stress testing, please refer to the first section of this article for details: [link](./stress_testing.html#jdk)I will not repeat it here.。 +Continue by preparing the stress test program; this example uses 'java-sdk-demo'. For details, refer to the first section of this article: [link](./stress_testing.html#jdk); it is not repeated here。 Configure the Java SDK demo to send transactions to four nodes: ```shell -# Enter Java-sdk-Configuration item of demo +# Edit the java-sdk-demo configuration file vim ~/fiso/java-sdk-demo/dist/conf/config.toml ``` @@ -313,7 +313,7 @@ peers=["127.0.0.1:20200", "127.0.0.1:20201", "127.0.0.1:20202", "127.0.0.1:20203 After configuring the Java SDK demo, change the log level of the Java SDK to ERROR: ```shell 
-# Enter Java-sdk-log configuration entry of demo +# Edit the java-sdk-demo log configuration files vim ~/fiso/java-sdk-demo/dist/conf/log4j2.xml vim ~/fiso/java-sdk-demo/dist/conf/clog.ini ``` @@ -345,7 +345,7 @@ Modify the log level: #### Start Stress Test -Back to Java-sdk-demo 'After compiling the environment, run the following command to start the pressure test: +Return to the compiled 'java-sdk-demo' environment and run the following command to start the stress test: The stress test here is to deploy 32 Account Solidity contracts to the group group and send 500,000 transactions with a QPS of 20,000。 @@ -360,5 +360,5 @@ Stress test results: ![](../../images/develop/stress_test.png) -- A total of 500,000 transactions were initiated, the time to send to the node was 32 seconds, and the time to collect all transaction receipts was 51 seconds。 +- A total of 500,000 transactions were initiated, the time to send them to the node was 32 seconds, and the time to collect all transaction receipts was 51 seconds。 - TPS is 9712.321050484645 (transactions / sec) diff --git a/3.x/en/docs/operation_and_maintenance/upgrade.md b/3.x/en/docs/operation_and_maintenance/upgrade.md index ba01cc5f8..2fab25ffe 100644 --- a/3.x/en/docs/operation_and_maintenance/upgrade.md +++ b/3.x/en/docs/operation_and_maintenance/upgrade.md @@ -1,15 +1,15 @@ # 10. 
Version Upgrade Guide ---------- -This document mainly discusses the upgrade of FISCO BCOS from three aspects to answer the upgrade requirements of community users in the actual application of FISCO BCOS.。From easy to difficult, from near to far, the idea is divided into the following three parts: -- Part 1, How to implement an upgrade between FISCO BCOS 3.x versions; -- Part 2, How to Upgrade Between FISCO BCOS Air, Pro, and Max; -- Part III, How to Upgrade from FISCO BCOS 2.0 to 3.0。 +This document discusses FISCO BCOS upgrades from three aspects to answer the upgrade needs community users meet in practice。From easy to difficult, from near to far, it is divided into the following three parts: +- Part 1, how to implement an upgrade between FISCO BCOS 3.x versions; +- Part 2, how to implement an upgrade between FISCO BCOS Air, Pro and Max; +- Part 3, how to upgrade from FISCO BCOS 2.0 to 3.0。 ## 1. Upgrade between FISCO BCOS 3.x versions -Each version of FISOC BCOS will add new features to the original version.。There are two upgrade methods, which can be selected according to the upgrade requirements: +Each version of FISCO BCOS adds new features on top of the previous version。There are two upgrade methods, which can be selected according to the upgrade requirements: 1. Improve system stability and performance: only upgrade node executable programs 2. Use new features: upgrade node executable, upgrade chain data version @@ -17,9 +17,9 @@ Each version of FISOC BCOS will add new features to the original version.。Ther - Upgrade effect: fix bugs, and bring stability, performance improvement -- Steps: Stop the node service step by step. 
After the node executable is upgraded to the current version, restart the node service +- Operation steps: gradually stop the node service, upgrade the node executable to the current version, and restart the node service -- Note: It is recommended to gradually replace the executable program for gray scale upgrade. Before the upgrade, back up all the ledger data of the original node. If the upgrade fails due to an error, the original data can be rolled back to the state before the upgrade. +- Note: It is recommended to replace the executable program gradually as a gray-scale upgrade, and to back up all the ledger data of the original node before upgrading. If the upgrade fails due to an operation error, the original data can be rolled back to the state before the upgrade Version supported for upgrade: v3.0.0+ @@ -27,9 +27,9 @@ Version supported for upgrade: v3.0.0+ - Upgrade effect: can use the latest features of the current version -- Operation steps: first complete the upgrade of all node executable programs, and then refer to the following steps, by sending the transaction upgrade chain data version to the new version v3.x.x +- Operation steps: first complete the upgrade of all node executables, then follow the steps below to upgrade the chain data version to the new version v3.x.x by sending a transaction -- Note: Be sure to back up all the ledger data of the original node. If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade. +- Note: Be sure to back up all the ledger data of the original node. If the upgrade fails due to an error, the original data can be rolled back to the status before the upgrade The detailed steps for upgrading a data-compatible version number are as follows: @@ -45,7 +45,7 @@ Use [console](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/operation_ #### b. 
Replace Node Binary -Need to be**All Nodes**Gradually replace the binary with the current version。In order not to affect the business, the replacement process can be done in grayscale, replacing and restarting nodes one by one。During the replacement process, the current chain continues to execute with the logic of the old data-compatible version number.。After the binary replacement of all nodes is completed and restarted, you need to use the console to modify the data compatibility version number to the current version。 +The binaries of **all nodes** need to be gradually replaced with the current version。To avoid affecting the business, the replacement can be done in grayscale, replacing and restarting nodes one by one。During the replacement, the chain continues to execute with the logic of the old data-compatible version number。After all nodes' binaries have been replaced and restarted, use the console to change the data compatibility version number to the current version。 #### c. 
Turn on the new version feature @@ -53,13 +53,13 @@ Starting from versions 3.2.4 and 3.6.0, FISCO BCOS provides the ability to enabl | Feature Name| Type| Role| Minimum Version | --- | --- | --- | --- | -| bugfix_revert | Problem Fixes| Fix when using serial mode(is_serial=true)The problem that the written state data is not revoked after the smart contract is rolled back.| 3.2.4 3.6.0 | +| bugfix_revert | Problem Fixes| Fixes the issue where, in serial mode (is_serial=true), state data written before a smart contract revert is not rolled back| 3.2.4 3.6.0 | | bugfix_statestorage_hash | Problem Fixes| | | | bugfix_evm_create2_delegatecall_staticcall_codecopy | Problem Fixes| | | | bugfix_event_log_order | Problem Fixes| | | | bugfix_call_noaddr_return | Problem Fixes| | | | bugfix_precompiled_codehash | Problem Fixes| | | -| bugfix_dmc_revert | Problem Fixes| Fix when using DMC mode(is _ serial = false, feature _ sharding is not enabled)The problem that the written state data is not revoked after the smart contract is rolled back.| | +| bugfix_dmc_revert | Problem Fixes| Fixes the issue where, in DMC mode (is_serial=false, feature_sharding not enabled), state data written before a smart contract revert is not rolled back| | | bugfix_keypage_system_entry_hash | Problem Fixes| | | | bugfix_internal_create_redundant_storage | feature_dmc2serial | New Features| | 3.2.4 | @@ -75,7 +75,7 @@ Starting from versions 3.2.4 and 3.6.0, FISCO BCOS provides the ability to enabl Enable problem fixes or new features, execute in the console ``` -setSystemConfigByKey < special name > +setSystemConfigByKey <feature name> ``` #### d. Set the data compatibility version number (compatibility _ version) @@ -99,12 +99,12 @@ Set successfully, query again, the current version has been upgraded to 3.1.0 3.1.0 ``` -The current chain has been upgraded. 
At this point, the chain continues to run with new logic and supports new version features.。 +The current chain has been upgraded. At this point, the chain continues to run with the new logic and supports the new version's features. After setting the chain version number, chains of 3.6.x and above will automatically enable all bugfixes whose minimum version is at or below the chain's version. ## 2. Upgrade between FISCO BCOS Air, Pro and Max -Through deep research on the demands of users in different scenarios, FISCO BCOS has launched three different deployment types: Air, Pro and Max. Users can customize their choices according to their needs.。However, in the actual application process, with the change of business and needs, users have the need to upgrade Pro and Pro to upgrade Max.。Based on this background, this section explores upgrade operations between the three deployment types。 +Through in-depth research on the demands of users in different scenarios, FISCO BCOS offers three deployment types: Air, Pro and Max, and users can choose according to their needs. In practice, however, as business and requirements change, users may need to upgrade Air to Pro, or Pro to Max. Against this background, this section explores upgrade operations between the three deployment types. **Note**: Make sure to back up the data before upgrading so that the chain can be rolled back to its pre-upgrade state. ![](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/_images/fisco_bcos_version.png) @@ -112,106 +112,106 @@ Through deep research on the demands of users in different scenarios, FISCO BCOS ### 2.1 Air to Pro Upgrade To implement the Air-to-Pro upgrade, you need to understand the differences in technical architecture and deployment between the two. -- Similarities: The underlying storage database between Air and Pro is based on RocksDB.
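The auto-enable rule just described (bugfixes whose minimum version is at or below the chain's compatibility_version are switched on) boils down to a simple version comparison, sketched below. This is an illustration, not FISCO BCOS source code; the feature names come from the table in this patch, and treating `bugfix_dmc_revert` as also requiring 3.2.4 is an assumption, since its minimum-version cell is blank.

```python
def parse_version(v: str) -> tuple:
    """Turn a version string like '3.6.0' into (3, 6, 0) for tuple comparison."""
    return tuple(int(x) for x in v.split("."))

# Hypothetical subset of the bugfix table: feature name -> minimum version.
BUGFIX_MIN_VERSIONS = {
    "bugfix_revert": "3.2.4",
    "bugfix_dmc_revert": "3.2.4",  # assumption: table leaves this cell blank
}

def auto_enabled_bugfixes(compatibility_version: str) -> list:
    """Bugfixes whose minimum version is <= the chain's compatibility_version."""
    cur = parse_version(compatibility_version)
    return sorted(
        name for name, min_v in BUGFIX_MIN_VERSIONS.items()
        if parse_version(min_v) <= cur
    )

print(auto_enabled_bugfixes("3.1.0"))  # no bugfix implied at 3.1.0
print(auto_enabled_bugfixes("3.6.0"))  # both bugfixes implied
```

Tuple comparison makes "3.10.0" sort correctly after "3.9.0", which a plain string comparison would get wrong.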
-- Difference: Air uses all-in-One's encapsulation mode, Pro by group layer+The architecture of the access layer provides services +- Similarities: the underlying storage database of both Air and Pro is RocksDB +- Difference: Air uses an all-in-one encapsulation mode, while Pro provides services through a group-layer plus access-layer architecture Based on the similarities and differences between Air and Pro, there are two options for upgrading Air to Pro: -- Expand a Pro node based on an existing Air node -- Based on the current Air node, upgrade its refactoring to a Pro node +- Expand a Pro node based on the existing Air node +- Refactor and upgrade the current Air node into a Pro node The specific schemes are described as follows. #### Solution 1: Air Blockchain Expansion Pro Node -Based on the existing Air node, expand a pro node on its basis, so that you can save the previous Air on-chain data, expand the Pro node and meet the business needs.。The advantage of this solution is that it is simple to operate and does not require changes to the existing organizational structure.。The detailed steps are as follows: +By expanding a Pro node on the basis of the existing Air node, you can keep the previous Air on-chain data while meeting business needs. The advantage of this solution is that it is simple to operate and does not require changes to the existing organizational structure. The detailed steps are as follows: 1. Use the BcosBuilder deployment tool to deploy the Tars service. For specific steps, please refer to [Building a Pro Blockchain Network](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/tutorial/pro/installation.html) and [Pro Expansion Node](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/tutorial/pro/expand_node.html); -2. Before deploying the pro node, configure config in the pro / conf directory-node-expand-example.toml file, configure information such as the path of the Air Genesis block file; +2.
Before deploying the Pro node, edit the config-node-expand-example.toml file in the pro/conf directory and configure information such as the path of the Air genesis block file; 3. Download the binary, deploy the Pro node according to the configuration file, and replace the ca file of Pro generated by expansion with the root certificate of Air. For specific expansion steps, please refer to [Pro Node Expansion](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/tutorial/pro/expand_node.html); 4. Start the Pro node, connect the Pro node through the console, and add the Pro node to the Air blockchain network; -5. After the pro node is synchronized to the latest block, the user can choose whether to offline the Air node service.。 +5. After the Pro node has synchronized to the latest block, the user can choose whether to take the Air node service offline. -Through the above steps, you can expand the Pro version of the node on the basis of the original Air, and then realize the upgrade from Air to Pro, users can also further expand the Pro network in the process of business use.。 +Through the above steps, a Pro node is expanded on the basis of the original Air chain, realizing the upgrade from Air to Pro; users can also further expand the Pro network during business use. #### Scheme 2: Air blockchain reconstruction and upgrade -The underlying storage mode of Air is the same as that of Pro, and the blockchain data of Air is still available after upgrading to Pro version。Therefore, in theory, you only need to refactor the Air directory structure to the Pro directory structure, deploy the relevant Pro services, and then reuse the previous Air data to the corresponding Pro directory.。The detailed steps for this scenario are as follows: +The underlying storage mode of Air is the same as that of Pro, and the Air blockchain data is still available after upgrading to the Pro version. Therefore, in theory, you only need to refactor the Air directory structure to
the Pro directory structure, deploy the relevant Pro services, and then reuse the previous Air data in the corresponding Pro directory. The detailed steps for this scheme are as follows: -1. Stop the Air node and save the data and chain configuration information on the chain: backup the data of the Air node, such as the data directory, the creation block, the config.ini configuration file, and the certificate.; -2. According to the directory structure of the Pro version, the directory file structure of the Air chain is reconstructed into the directory structure of Pro.; -3. Deploy the Pro version of RPC, getawat, node and other services through the BcosBuilder deployment tool.; +1. Stop the Air node and back up its on-chain data and chain configuration, such as the data directory, the genesis block, the config.ini configuration file, and the certificates; +2. Refactor the directory and file structure of the Air chain into the Pro directory structure; +3. Deploy the Pro version RPC, gateway, node and other services through the BcosBuilder deployment tool; 4. Deploy the Pro chain step by step through the BcosBuilder deployment tool, and replace the genesis block, config.ini, certificates, and files in the chain data directory with the corresponding files of the previous Air chain; 5.
Import the previously backed-up Air data into the data directory of the Pro node, and start the node. -Through the above operation, the user can replace the original Air network with the Pro network, which is more complicated than the operation of Scheme 1, requiring the user to have a deeper understanding of the organizational structure of Air in Pro.。 +Through the above operations, the user can replace the original Air network with a Pro network. This is more complicated than Scheme 1 and requires a deeper understanding of the organizational structures of both Air and Pro. ### 2.2 Pro to Max Upgrade -Because Pro and Max use different storage architectures: Pro uses a stand-alone RocksDB with outstanding performance, while Max uses a distributed storage architecture tikvDB.。So in Pro upgrade to Max, the ledger data cannot be reused。In this case, only the expansion scheme can achieve the Pro version upgrade Max。Specific programs are as follows: +Because Pro and Max use different storage architectures (Pro uses a stand-alone RocksDB with outstanding performance, while Max uses the distributed storage architecture TiKV), the ledger data cannot be reused when upgrading from Pro to Max. In this case, only the expansion scheme can achieve the Pro-to-Max upgrade. The specific scheme is as follows: ### Solution: Pro blockchain scaling Max node The specific steps for scaling the Pro blockchain with Max nodes are as follows: 1. Use the BcosBuilder deployment tool to install dependencies, install and configure the Tars service; 2. Download and install tiup, then deploy and start TiKV; -3. Download the relevant binaries of Max and modify the sample configuration file config in the BcosBuilder / max / conf directory-node-expand-example.toml, configure the genesis block path(Genesis Block File for Pro Blockchain)and other related information; +3.
Download the relevant Max binaries and modify the sample configuration file config-node-expand-example.toml in the BcosBuilder/max/conf directory, configuring the genesis block path (the genesis block file of the Pro blockchain) and other related information; 4. Deploy the Max node service, expand the Max node, and replace the generated ca file of the max node directory with the root certificate of the original Pro chain after the expansion is completed. For specific steps, please refer to [Building a Max Blockchain Network](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/tutorial/max/installation.html); -5. Start the Max node, connect the Max node through the console, add the Max node to the original Pro blockchain network, and check whether the chain is running normally。After the Max node is synchronized to the latest fast, you can choose whether to offline the Pro node.。 +5. Start the Max node, connect to it through the console, add it to the original Pro blockchain network, and check whether the chain is running normally. After the Max node has synchronized to the latest block, you can choose whether to take the Pro node offline. Through the above steps, the Pro network is expanded with Max nodes and the original Pro network service is upgraded. Users can experience the functions of the large-capacity Max version, and can likewise continue to add Max nodes in subsequent use. ## 3.
FISCO BCOS 2.0 Upgrade to 3.0 Guidelines FISCO BCOS 3.0 has been upgraded and optimized on the basis of 2.0, and has made major breakthroughs in scalability, performance and ease of use, including: -- Pipelined: Block pipelining for continuous and compact generation of blocks; -- Omni-directional parallel computing: parallel mechanisms such as intra-block sharding, DMC, and DAG to achieve powerful processing performance; -- Blockchain File System: WYSIWYG Contract Data Management; +- Pipelining: block pipelining generates blocks continuously and compactly; +- Omni-directional parallel computing: parallel mechanisms such as intra-block sharding, DMC and DAG achieve powerful processing performance; +- Blockchain file system: WYSIWYG contract data management; - Permission governance framework: Built-in permission governance framework, multi-party voting governance blockchain; - Distributed storage TiKV: Distributed transactional commit, supporting mass storage; -Due to FISCO BCOS 3.0+Version 2.0 has undergone a number of major refactorings and is not perfectly backwards compatible.。However, some users who have already launched version 2.x have an upgrade value of 3.0+demand of。To solve this problem, several feasible upgrade options are listed below, each with different advantages and disadvantages, suitable for different business scenarios。 +FISCO BCOS 3.0+ has undergone a number of major refactorings relative to 2.0 and is not fully backwards compatible. However, some users who have already launched a 2.x chain need to upgrade to 3.0+. To address this, several feasible upgrade options are listed below, each with its own advantages and disadvantages, suitable for different business scenarios. -**注意**Before upgrading, make sure to back up your data to ensure that it can be rolled back.。 +**Note**: Before upgrading, make sure to back up your data so that the chain can be rolled back. ### Scenario 1: Data Replay -Data replay is currently
one of the most common blockchain data migration methods because it is simple, intuitive, and has relatively few side effects.。 +Data replay is currently one of the most common blockchain data migration methods because it is simple, intuitive, and has relatively few side effects. Implementation method: 1. Use the latest chain-building script to build a new 3.0+ chain; -2. Use the data export component to quickly export data on the blockchain to obtain the execution history and detailed data of all methods and contracts on the chain.; -3. Write a program or script to replay the exported data according to the block height.。Enables contracts and transactions on the old chain of the production environment to be re-executed on the new chain at the original runtime.。 +2. Use the data export component to quickly export on-chain data, obtaining the execution history and detailed data of all methods and contracts on the chain; +3. Write a program or script to replay the exported data in block-height order, so that contracts and transactions on the old production chain are re-executed on the new chain as they originally ran. Advantages of this scheme: -- Simple, intuitive, small side effects, even if the upgrade failure does not affect the old chain operation +- Simple, intuitive, few side effects; even if the upgrade fails, the old chain is unaffected The disadvantages of this scheme are: -- Long replay time: not suitable for long-running massive data scenarios, if the transaction block height exceeds 3 million, the replay time will exceed 1 month (assuming that the consensus and out-of-block time period is 1s); -- There is no guarantee of the order of execution of the same block: even when replaying, transactions are submitted and sent in the order of the block, but the order of execution of transactions within the same block is still random; -- There is no guarantee that transactions will fall accurately on the
original block height: due to the random nature of the cycle and order of the block packaging consensus, it is difficult to guarantee the number and order of transactions under the specific block height when replaying.。 +- Long replay time: not suitable for long-running, massive-data scenarios. If the block height exceeds 3 million, replay will take more than a month (assuming a consensus and block interval of 1s); +- No guarantee of intra-block execution order: even though transactions are submitted and sent in block order during replay, the execution order of transactions within the same block is still random; +- No guarantee that transactions land exactly at the original block height: due to the randomness of the packaging-consensus cycle and ordering, it is difficult to guarantee the number and order of transactions at a specific block height during replay. -Therefore, this solution is suitable for scenarios where the consistency requirements for contract data are not strict, but there are certain requirements for data integrity.;Not suitable for scenarios with very strict data consistency and massive data。 +Therefore, this solution is suitable for scenarios where contract-data consistency requirements are not strict but there are certain data-integrity requirements; it is not suitable for scenarios with very strict data consistency or massive data. ### Scheme 2: Application layer adaptation -The core design of this scheme does not need to change the history chain and data, but provides a layer of data adaptation layer between the old and new chains to shield the details of the chain, so it can reduce the data operation on the chain.。The completeness and accuracy of historical data is also the biggest advantage of this program.。 +The core of this scheme is that the historical chain and its data need not change; instead, it provides a data adaptation layer
between the old and new chains to shield chain details, reducing on-chain data operations. The completeness and accuracy of historical data is also the biggest advantage of this scheme. Implementation method: -Users can develop a data adaptation application that is compatible with the old and new chains, and route different data by using features such as data ID or date as a sign of route differentiation.。At the same time, the new chain copies the old chain of smart contracts。 +Users can develop a data adaptation application compatible with both the old and new chains, routing data by attributes such as data ID or date. At the same time, the new chain replicates the smart contracts of the old chain. Advantages of this scheme: - Good historical data integrity and accuracy The disadvantages of this scheme are: -- Data isolation: The data between the new chain and the old chain is physically isolated, so this solution cannot be adopted in scenarios where the new and old data are dependent.; -- High maintenance cost: the old and new chains must be maintained, and the maintenance cost of hardware is high; -- High development costs: a separate data routing adaptation layer must be developed, which is costly to develop。 +- Data isolation: data on the new and old chains is physically isolated, so this solution cannot be adopted in scenarios where new and old data depend on each other; +- High maintenance cost: both the old and new chains must be maintained, and the hardware maintenance cost is high; +- High development cost: an independent data-routing adaptation layer must be developed. -Therefore, this solution is suitable for scenarios where there is no dependency between parts of the data.;And some scenarios where data is strongly dependent, such as points, payments and settlements, are not applicable.。In addition, this solution requires the
development of additional data adaptation procedures, the difficulty of which depends on the specific contract and scenario, but the subsequent maintenance costs are very high.。 +Therefore, this solution is suitable for scenarios where parts of the data have no dependency on each other; it is not applicable where data is strongly dependent, such as points, payments and settlements. In addition, this solution requires developing an additional data adaptation program, whose difficulty depends on the specific contracts and scenarios, and the subsequent maintenance cost is high. ### Scheme III: Cross-chain Scheme -This scheme is similar to the second scheme, the new chain and the old chain as two chains, the transaction involves the old chain data using the cross-chain platform to initiate cross-chain transaction requests, this scheme also does not need to change the historical chain and data, compared to the second scheme does not need to develop a set of independent data routing adaptation layer.。 +This scheme is similar to Scheme 2: the new chain and the old chain run as two chains, and transactions involving old-chain data initiate cross-chain transaction requests through a cross-chain platform. This scheme likewise does not change the historical chain or its data, and unlike Scheme 2 it does not require developing an independent data-routing adaptation layer. Implementation method: -First, use the newly created chain script to build a new chain, connect the two chains through the cross-chain platform WeCross, the new business is processed by the new chain, the transaction involves historical data cross-chain platform to initiate cross-chain transaction requests, routed to the two chains to process, the processing results are stored in the two chains.。 +First, build a new chain with the chain-building script and connect the two chains through the cross-chain platform WeCross. New business is processed
by the new chain; transactions involving historical data initiate cross-chain transaction requests through the cross-chain platform, which routes them to the two chains for processing, and the processing results are stored on both chains. Advantages of this scheme: - Low development costs, no need to develop data adaptation applications; @@ -219,8 +219,8 @@ Advantages of this program: Disadvantages of this scheme: -- Slow transaction execution: cross-chain transaction processing is slow, performance is not high, every transaction in the early stage is a cross-chain transaction; -- High maintenance cost: the old and new chains must be maintained, and the maintenance cost of hardware is high。 +- Slow transaction execution: cross-chain transaction processing is slow and performance is low, and in the early stage every transaction is a cross-chain transaction; +- High maintenance cost: both the old and new chains must be maintained, and the hardware maintenance cost is high. -In this scheme, the performance of the new chain is low in the early stage of construction, and the frequent cross-link routing requires higher network requirements.。 +In this scheme, the performance of the new chain is low in the early stage, and frequent cross-chain routing places higher demands on the network. diff --git a/3.x/en/docs/operation_and_maintenance/webase.md b/3.x/en/docs/operation_and_maintenance/webase.md index ef66bc93f..faec5620e 100644 --- a/3.x/en/docs/operation_and_maintenance/webase.md +++ b/3.x/en/docs/operation_and_maintenance/webase.md @@ -6,12 +6,12 @@ Tags: "WeBASE" "Middleware Platform" "Node Management" "System Monitoring" "Syst ```eval_rst ..
important:: - Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ + Related software and environment release notes: please check `here <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ ``` -WeBank's open source self-developed blockchain middleware platform - [WeBASE(WeBank Blockchain Application Software Extension)](https://webasedoc.readthedocs.io/zh_CN/latest/) It is a middleware platform built between blockchain applications and FISCO BCOS nodes.。WeBASE shields the complexity of the underlying blockchain, reduces the threshold for blockchain use, and greatly improves the development efficiency of blockchain applications, including subsystems such as node front, node management, transaction links, data export, and web management platforms.。Users can select subsystems for deployment according to their business needs, and can further experience the rich interactive experience, visual smart contract development environment IDE。 +WeBank's open-source, self-developed blockchain middleware platform [WeBASE (WeBank Blockchain Application Software Extension)](https://webasedoc.readthedocs.io/zh_CN/latest/) sits between blockchain applications and FISCO BCOS nodes. WeBASE shields the complexity of the underlying blockchain, lowers the barrier to using blockchain, and greatly improves the development efficiency of blockchain applications. It includes subsystems such as the node front service, node management, transaction links, data export, and a web management platform. Users can select subsystems to deploy according to their business needs, and can further experience the richly interactive, visual smart contract development IDE. -The WeBASE Management Platform is a set of management FISCO consisting of four WeBASE subsystems-Toolset for the BCOS Alliance Chain。For more information, please refer to [WeBASE Management Platform User Manual](https://webasedoc.readthedocs.io/zh_CN/latest/) 。
+The WeBASE Management Platform is a toolset of four WeBASE subsystems for managing FISCO BCOS consortium chains. For more information, please refer to the [WeBASE Management Platform User Manual](https://webasedoc.readthedocs.io/zh_CN/latest/). ## Main functions @@ -34,12 +34,12 @@ For building, please refer to [WeBASE One - click Deployment Document](https://w ### [WeBASE Quick Start](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Install/developer.html) -Developers only need to build nodes and node pre-services.(WeBASE-Front)via WeBASE-Front contract editor for contract editing, compilation, deployment, debugging。Build can refer to ["WeBASE Quick Start Document"](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Install/developer.html)。 +Developers only need to build nodes and the node front service (WeBASE-Front); contracts can then be edited, compiled, deployed, and debugged through WeBASE-Front's contract editor. For setup, refer to the ["WeBASE Quick Start Document"](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE-Install/developer.html). ![](../../images/webase/webase-front.png) ### [WeBASE Console](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE/install.html) -Through the WeBASE one-click script, you can build a basic environment of WeBASE, which can facilitate users to experience the core functions of WeBASE, such as block browsing, node viewing, contract IDE, system management, node monitoring, transaction audit, and private key management.。For building, please refer to [WeBASE One - click Deployment Document](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE/install.html)。![](../../images/webase/webase-web.png) +Through the WeBASE one-click script, you can build a basic WeBASE environment, making it easy for users to experience WeBASE's core functions, such as block browsing, node viewing, the contract IDE, system management, node monitoring, transaction audit, and private key management. For building, please refer to the [WeBASE One - click
Deployment Document](https://webasedoc.readthedocs.io/zh_CN/latest/docs/WeBASE/install.html). ![](../../images/webase/webase-web.png) ### [WeBASE Other](https://webasedoc.readthedocs.io/zh_CN/latest) diff --git a/3.x/en/docs/quick_start/air_installation.md b/3.x/en/docs/quick_start/air_installation.md index 6206f5a8b..718016af8 100644 --- a/3.x/en/docs/quick_start/air_installation.md +++ b/3.x/en/docs/quick_start/air_installation.md @@ -6,22 +6,22 @@ Tags: "Building a Blockchain Network" "Blockchain Tutorial" "HelloWorld" "Consol ```eval_rst .. important:: - Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ + Related software and environment release notes: please check `here <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ ``` This chapter describes the necessary installation and configuration required to use the FISCO BCOS underlying blockchain system. It helps users master the FISCO BCOS deployment process by deploying a 4-node FISCO BCOS consortium chain on a single machine. Please refer to [Hardware and System Requirements](./hardware_requirements.md) to operate on supported hardware and platforms. ```eval_rst .. note:: - - For the system architecture of FISCO BCOS 3.x, please refer to 'here <.. / design / architecture.html >' _ - - FISCO BCOS 3.x Air version to build and use the tutorial, please refer to 'here <.. / tutorial / air / index.html >' _ - - FISCO BCOS 3.x Pro version to build and use the tutorial, please refer to 'here <.. / tutorial / pro / index.html >' _ - - FISCO BCOS 3.x Max version to build and use the tutorial, please refer to 'here <..
/ tutorial / max / index.html >' _ + - For the system architecture of FISCO BCOS 3.x, please refer to `here <../design/architecture.html>`_ + - For the FISCO BCOS 3.x Air version build and usage tutorial, please refer to `here <../tutorial/air/index.html>`_ + - For the FISCO BCOS 3.x Pro version build and usage tutorial, please refer to `here <../tutorial/pro/index.html>`_ + - For the FISCO BCOS 3.x Max version build and usage tutorial, please refer to `here <../tutorial/max/index.html>`_ ``` ## 1. Build an Air-version FISCO BCOS consortium chain -This section takes building a FISCO BCOS chain of a single group as an example, and uses the 'development and deployment tool build _ chain.sh' script to build a 4-node FISCO BCOS chain of the Air version locally, taking the Ubuntu 18.04 64-bit system as an example.。 +This section takes building a single-group FISCO BCOS chain as an example, using the development and deployment tool `build_chain.sh` to build a 4-node Air-version FISCO BCOS chain locally, on an Ubuntu 18.04 64-bit system. ### Step 1. Install dependencies @@ -78,8 +78,8 @@ bash build_chain.sh -l 127.0.0.1:4 -p 30300,20200 ```eval_rst .. note:: - 其中-The p option specifies the starting port, which is the p2p listening port and the rpc listening port respectively - - Air build script build _ chain.sh introduction document 'refer here <..
/ tutorial / air / build _ chain.html >' _ + - The -p option specifies the starting ports, which are the p2p listening port and the rpc listening port respectively + - For an introduction to the Air version build script build_chain.sh, refer `here <../tutorial/air/build_chain.html>`_ ``` After the command succeeds, it will output 'All completed': @@ -124,7 +124,7 @@ writing RSA key ```shell bash nodes/127.0.0.1/start_all.sh ``` -Successful startup will output the following information。Otherwise use 'netstat-an |grep tcp 'check machine' 30300 ~ 30303,20200 ~ 20203 'ports are occupied。 +A successful startup outputs the following information. Otherwise, use `netstat -an | grep tcp` to check whether the machine's 30300~30303 and 20200~20203 ports are occupied. ```shell try to start node0 @@ -175,11 +175,11 @@ info|2022-08-15 19:39:29.270427|[P2PService][Service][METRIC]heartBeat,connected ## 2. Configure and use the console -The console provides functions such as deploying contracts to FISCO BCOS nodes, initiating contract calls, and querying chain status.。 +The console provides functions such as deploying contracts to FISCO BCOS nodes, initiating contract calls, and querying chain status. ### Step 1. Install the console dependencies -Console running depends on Java environment(We recommend Java 14.)and the installation command is as follows: +Running the console depends on a Java environment (Java 14 is recommended); the installation command is as follows:
note:: - - If you cannot download for a long time due to network problems, please try cd ~ / fisco & & curl-#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh && bash download_console.sh + -If you cannot download for a long time due to network problems, please try cd ~ / fisco & & curl-#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh && bash download_console.sh ``` ### Step 3. Configure the console @@ -210,12 +210,12 @@ cp -n console/conf/config-example.toml console/conf/config.toml ```eval_rst .. note:: - If the node does not use the default port, replace 20200 in the file with the corresponding rpc port of the node. You can use the "[rpc] .listen _ port" configuration item of the node config.ini to obtain the rpc port of the node.。 + If the node does not use the default port, replace 20200 in the file with the corresponding rpc port of the node. You can use the "[rpc] .listen _ port" configuration item of the node config.ini to obtain the rpc port of the node。 ``` -- Configure Console Certificates +- Configure console certificates -SSL connection is enabled by default between the console and the node. The console needs to configure a certificate to connect to the node.。The SDK certificate is generated at the same time as the node is generated. You can directly copy the generated certificate for the console to use: +SSL connection is enabled by default between the console and the node. The console needs to configure a certificate to connect to the node。The SDK certificate is generated at the same time as the node is generated. You can directly copy the generated certificate for the console to use: ```shell cp -r nodes/127.0.0.1/sdk/* console/conf @@ -226,7 +226,7 @@ cp -r nodes/127.0.0.1/sdk/* console/conf ```eval_rst .. note:: - Please make sure that the 30300 ~ 30303, 20200 ~ 20203 ports of the machine are not occupied。 - - For console configuration methods and commands, please refer to 'here <.. 
/ operation _ and _ maintenance / console / console _ config.html >' _
+ - For console configuration methods and commands, please refer to `here <../operation_and_maintenance/console/console_config.html>`_
```
- Start
@@ -313,7 +313,7 @@ contract HelloWorld {
### Step 2. Deploy the HelloWorld contract
-To facilitate the user's quick experience, the HelloWorld contract is built into the console and located in the console directory 'contracts / consolidation / HelloWorld.sol'.
+To facilitate a quick experience, the HelloWorld contract is built into the console, located in the console directory 'contracts/solidity/HelloWorld.sol'.
```shell
# Enter the following command in the console to return the contract address if the deployment is successful
@@ -370,4 +370,4 @@ Event: {}
[group0]: /> exit
```
-At this point, we have completed the first FISCO-Building the BCOS chain, configuring and using the console, and deploying and invoking the first contract。关于**Pro version FISCO BCOS build, configuration and use please refer to [here](../tutorial/pro/index.md)。**
+At this point, we have completed building the first FISCO BCOS chain, configuring and using the console, and deploying and invoking the first contract. **For the build, configuration, and use of the Pro version of FISCO BCOS, please refer to [here](../tutorial/pro/index.md).**
diff --git a/3.x/en/docs/quick_start/hardware_requirements.md b/3.x/en/docs/quick_start/hardware_requirements.md
index 761746bce..6152d1014 100644
--- a/3.x/en/docs/quick_start/hardware_requirements.md
+++ b/3.x/en/docs/quick_start/hardware_requirements.md
@@ -6,18 +6,18 @@ Tags: "hardware requirements" "operating system" "development manual" "memory re
```eval_rst
..
important::
- Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
+ Related software and environment release notes! Please check `here <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
```
## Hardware Requirements
```eval_rst
.. note::
- - FISCO BCOS supports CPU for x86 _ 64 and aarch64 (ARM) architectures
- - Because multiple groups of nodes share network bandwidth, CPU, and memory resources, it is not recommended to configure too many nodes on a machine to ensure service stability.。
+ - FISCO BCOS supports x86_64 and aarch64 (ARM) architecture CPUs
+ - Because multiple groups of nodes share network bandwidth, CPU, and memory resources, it is not recommended to configure too many nodes on one machine, so as to ensure service stability.
```
-The following table shows the recommended configurations for a single group and a single node. The resource consumption of nodes is linearly related to the number of groups. You can reasonably configure the number of nodes according to the actual business needs and machine resources.。
+The following table shows the recommended configuration for a single node in a single group. The resource consumption of a node is linearly related to the number of groups; you can configure the number of nodes according to actual business needs and machine resources.
| **Configuration** | **Minimum Configuration** | **Recommended Configuration** |
diff --git a/3.x/en/docs/quick_start/solidity_application.md b/3.x/en/docs/quick_start/solidity_application.md
index 08d7c5b34..39451a614 100644
--- a/3.x/en/docs/quick_start/solidity_application.md
+++ b/3.x/en/docs/quick_start/solidity_application.md
@@ -6,10 +6,10 @@ Tags: "Develop First Application" "Solidity" "Contract Development" "Blockchain
```eval_rst
..
important::
- Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
+ Related software and environment release notes! Please check `here <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
```
-This chapter will introduce the whole process of developing a business application scenario based on FISCO BCOS blockchain, from business scenario analysis to contract design and implementation, then contract compilation and how to deploy to the blockchain, and finally the implementation of an application module, through the [Java SDK] we provide.(../develop/sdk/java_sdk/index.md)Enables call access to contracts on the blockchain。
+This chapter introduces the whole process of developing a business application on the FISCO BCOS blockchain: from business scenario analysis to contract design and implementation, then contract compilation and deployment to the blockchain, and finally the implementation of an application module that calls contracts on the blockchain through the [Java SDK](../develop/sdk/java_sdk/index.md) we provide.
This tutorial requires users to be familiar with the Linux operating environment, have basic skills in Java development, be able to use the Gradle tool, and be familiar with [Solidity syntax](https://solidity.readthedocs.io/en/latest/)。
@@ -17,15 +17,15 @@ If you have not built a blockchain network or downloaded the console, please com
## 1.
Understand application requirements
-Blockchain naturally has tamper-proof, traceability and other characteristics, these characteristics determine that it is more likely to be favored by the financial sector.。In this example, a simple development example of asset management will be provided, and the following features will eventually be implemented:
+Blockchain naturally has characteristics such as tamper resistance and traceability, which make it especially likely to be favored by the financial sector. In this example, a simple asset-management development example is provided, which eventually implements the following features:
- Ability to register assets on the blockchain
- Ability to transfer different accounts
-- Can query the asset amount of the account
+- Ability to query the asset amount of an account
## Design and Development of Smart Contracts
-When developing applications on the blockchain, combined with business requirements, it is first necessary to design the corresponding smart contract, determine the data that the contract needs to store, determine the interface provided by the smart contract on this basis, and finally give the specific implementation of each interface.。
+When developing an application on the blockchain, based on the business requirements, you first need to design the corresponding smart contract: determine the data the contract needs to store, define the interfaces the smart contract provides on that basis, and finally give the concrete implementation of each interface.
### Step 1: Designing Smart Contracts
@@ -36,7 +36,7 @@ FISCO BCOS provides [contract KV storage interface](../develop/precompiled/use_k
- account: Primary Key, Asset Account(string type)
- asset_value: Amount of assets(uint256 type)
-where account is the primary key, which is the field that needs to be passed in when operating the 't _ asset' table, and the blockchain queries the matching records in the table based on the
primary key field.。't _ asset 'means for example the following:
+where account is the primary key, the field that must be passed in when operating the 't_asset' table; the blockchain queries matching records in the table based on the primary key field. The 't_asset' table looks, for example, like the following:
| account | asset_value |
|---------|-------------|
@@ -131,7 +131,7 @@ contract Asset {
account : Asset Account
Return value:
- Parameter 1: successful return 0, account does not exist return-1
+ Parameter 1: returns 0 on success, -1 if the account does not exist
Parameter 2: Valid when the first parameter is 0, asset amount
*/
function select(string memory account) public view returns (bool, uint256) {
@@ -153,7 +153,7 @@ contract Asset {
Return value:
0 Asset registration successful
-1 Asset account already exists
- -2 Other errors
+ -2 Other errors
*/
function register(string memory account, uint256 asset_value)
public
@@ -195,11 +195,11 @@ contract Asset {
amount: transfer amount
Return value:
0 Asset transfer successful
- -1 Transfer asset account does not exist
+ -1 The source asset account does not exist
-2 Receiving asset account does not exist
- -3 Insufficient amount
- -4 Amount overflow
- -5 Other errors
+ -3 Insufficient amount
+ -4 Amount overflow
+ -5 Other errors
*/
function transfer(
string memory from_account,
@@ -330,20 +330,20 @@ contract Asset {
}
```
-The Table.sol referenced by Asset.sol is already in the "~ / fisco / console / contracts / consolidation" directory。The interface in the system contract file is implemented by the FISCO BCOS underlying layer.。When a business contract needs to operate a KV storage interface, the interface contract file needs to be introduced.。Table.sol contract detailed interface reference [here](../develop/precompiled/precompiled_contract_api.md)。
+The Table.sol referenced by Asset.sol is already in the "~/fisco/console/contracts/solidity" directory. The interface in the
system contract file is implemented by the FISCO BCOS underlying layer. When a business contract needs to use the KV storage interface, this interface contract file must be imported. For the detailed interfaces of the Table.sol contract, refer to [here](../develop/precompiled/precompiled_contract_api.md).
Run the "ls" command and make sure that "Asset.sol" and "Table.sol" are in the directory "~ / fisco / console / contracts / consolidation"。
## 3. Compile Smart Contracts
-Smart contracts for '.sol' need to be compiled into ABI and BIN files to be deployed on the blockchain network。With these two files, you can deploy and invoke contracts with the Java SDK.。However, this call is relatively cumbersome and requires the user to pass parameters and parse the results based on the contract ABI.。To this end, the console provides a compilation tool that not only compiles ABI and BIN files, but also automatically generates a contract Java class with the same name as the compiled smart contract.。This Java class is generated based on the ABI to help users parse the parameters and provide methods with the same name.。When an application needs to deploy and invoke a contract, you can call the corresponding method of the contract class and pass in the specified parameters.。Using this contract Java class to develop applications can greatly simplify the user's code。
+Smart contracts in '.sol' files need to be compiled into ABI and BIN files before they can be deployed on the blockchain network. With these two files, you can deploy and invoke contracts with the Java SDK. However, such calls are relatively cumbersome, requiring the user to pass parameters and parse results according to the contract ABI. To this end, the console provides a compilation tool that not only produces the ABI and BIN files but also automatically generates a Java contract class with the same name as the compiled smart contract. This Java class is generated from the ABI to help users parse parameters, and it provides methods with the same
name. When an application needs to deploy and invoke a contract, you can call the corresponding method of the contract class and pass in the specified parameters. Developing applications with this Java contract class greatly simplifies the user's code.
```shell
-# Assuming that you have completed the download operation of the console, if not, please check the development source code steps in Section 2 of this article.
+# Assuming the console has been downloaded; if not, please see the source-code steps in Section 2 of this article
# Switch to fisco / console / directory
cd ~/fisco/console/
-# Available via bash contract2java.sh solidity-H command to view the script solidity usage,-s specify sol file
+# Use the bash contract2java.sh solidity -h command to view the usage of the solidity script; -s specifies the sol file
bash contract2java.sh solidity -s contracts/solidity/Asset.sol -p org.fisco.bcos.asset.contract
```
@@ -351,9 +351,9 @@ After running successfully, the java, abi, and bin directories will be generated
```shell
# Omission of other irrelevant documents
-|-- abi # The generated abi directory, which stores the abi files generated by the compilation of the solidity contract.
+|-- abi # The generated abi directory, which stores the abi files generated by compiling the Solidity contract
| |-- Asset.abi
-|-- bin # The generated bin directory, which stores the bin file generated by compiling the Solidity contract.
+|-- bin # The generated bin directory, which stores the bin file generated by compiling the Solidity contract
| |-- Asset.bin
|-- java # Store the compiled package path and Java contract file
| |-- org
@@ -364,7 +364,7 @@ After running successfully, the java, abi, and bin directories will be generated
| |--Asset.java # Java files generated by the Asset.sol contract
```
-The 'org / fisco / bcos / asset / contract /' package path directory is generated in the java directory, which contains the 'Asset.java' file, which is the file required by the Java application to call the 'Asset.sol' contract.。
+The 'org/fisco/bcos/asset/contract/' package path is generated under the java directory; it contains the 'Asset.java' file, which is the file a Java application needs in order to call the 'Asset.sol' contract.
'Asset.java 'main interface:
@@ -387,7 +387,7 @@ public class Asset extends Contract {
}
```
-The load and deploy functions are used to construct the Asset object, and the other interfaces are used to call the corresponding solidity contract interfaces.。
+The load and deploy functions are used to construct the Asset object, while the other interfaces are used to call the corresponding Solidity contract interfaces.
## 4. Create a blockchain application project
@@ -404,9 +404,9 @@ First, we need to install the JDK and the integrated development environment
### Step 2.
Create a Java project
-Create a gradle project in the IntelliJ IDE, select Gradle and Java, and enter the project name "asset-app-3.0``。
+Create a Gradle project in the IntelliJ IDE: select Gradle and Java, and enter the project name "asset-app-3.0".
-Note: (This step is not a required step) The source code for this project can be obtained and referenced in the following way.。
+Note (this step is optional): the source code of this project can be obtained and referenced in the following way.
```bash
$ cd ~/fisco
$ unzip asset-app-3.0-solidity.zip && mv asset-app-demo-main asset-app-3.0
```
```eval_rst
.. note::
- - If you cannot download for a long time due to network problems, please try to append '185.199.108.133 raw.githubusercontent.com' to '/ etc / hosts', or try 'curl-o asset-app-3.0-solidity.zip -#LO https://gitee.com/FISCO-BCOS/asset-app-demo/repository/archive/main.zip`
+ - If you cannot download for a long time due to network problems, please try appending '185.199.108.133 raw.githubusercontent.com' to '/etc/hosts', or try 'curl -#Lo asset-app-3.0-solidity.zip https://gitee.com/FISCO-BCOS/asset-app-demo/repository/archive/main.zip'
```
### Step 3. Introducing the FISCO BCOS Java SDK
-Modify the "build.gradle" file, "repositories" to set the maven source, introduce the Spring framework, and add a reference to the FISCO BCOS Java SDK under "dependencies" (note java-sdk version number)。
+Modify the "build.gradle" file: set the Maven source in "repositories", introduce the Spring framework, and add the FISCO BCOS Java SDK dependency under "dependencies" (note the java-sdk version).
```groovy
repositories {
@@ -454,7 +454,7 @@ dependencies {
### Step 4.
Configure the SDK certificate
-in the "asset-app-3.0 / src / test / resources "Create the configuration file" applicationContext.xml "in the directory and write the configuration content。
+Create the configuration file "applicationContext.xml" in the "asset-app-3.0/src/test/resources" directory and write the configuration content.
The contents of applicationContext.xml are as follows:
@@ -539,12 +539,12 @@ The contents of applicationContext.xml are as follows:
```
-**Note:** If rpc listen _ ip is set to 127.0.0.1 or 0.0.0.0 and listen _ port is set to 20200, the 'applicationContext.xml' configuration does not need to be modified.。If the blockchain node configuration is changed, you must also modify the 'peers' configuration option under the 'network' attribute of the configuration 'applicationContext.xml' to configure the 'listen _ ip' of the '[rpc]' configuration of the connected node.:listen_port`。
+**Note:** If the rpc listen_ip is set to 127.0.0.1 or 0.0.0.0 and listen_port is set to 20200, the 'applicationContext.xml' configuration does not need to be modified. If the blockchain node configuration is changed, you must also modify the 'peers' option under the 'network' attribute in 'applicationContext.xml' so that it matches the 'listen_ip:listen_port' of the connected node's '[rpc]' configuration.
-In the above configuration file, we specified the value of "certPath" for the bit where the certificate is stored as "conf"。Next, we need to put the certificate used by the SDK to connect to the node into the specified "conf" directory.。
+In the above configuration file, we set "certPath", the location where certificates are stored, to "conf". Next, we need to put the certificate the SDK uses to connect to the node into the specified "conf" directory.
```shell
-# Suppose we take the asset-app-3.0 Put it in the ~ / fisco directory to enter the ~ / fisco directory
+# Suppose we put asset-app-3.0 in the ~/
fisco directory and enter the ~/fisco directory
$ cd ~/fisco
# Create a folder to place the certificate(The default unzipped project exists)
$ mkdir -p asset-app-3.0/src/test/resources
$ mkdir asset-app-3.0/src/test/resources/conf
# Copy the node certificate to the project resource directory
$ cp -r nodes/127.0.0.1/sdk/* asset-app-3.0/src/test/resources/conf
-# If you run the IDE directly, copy the certificate to the resources path.
+# If you run from the IDE directly, copy the certificate to the resources path
$ mkdir -p asset-app-3.0/src/main/resources asset-app-3.0/src/main/resources/conf
$ cp -r nodes/127.0.0.1/sdk/* asset-app-3.0/src/main/resources/conf
```
## 5. Business Logic Development
-We've covered how to introduce and configure the Java SDK in our own projects, and this section describes how to invoke contracts through Java programs, also with example asset management instructions.。
+We have covered how to introduce and configure the Java SDK in a project; this section describes how to invoke contracts from Java programs, again using the asset-management example.
-### Step 1. Introduce 3 compiled Java contracts into the project.
+### Step 1. Introduce the Java contract compiled in Section 3 into the project
```shell
cd ~/fisco
cp console/contracts/sdk/java/org/fisco/bcos/asset/contract/Asset.java asset-app
```
### Step 2.
Develop business logic
-in path 'asset-app-3.0 / src / main / java / org / fisco / bcos / asset / client 'directory, create the' AssetClient.java 'class, and deploy and invoke the contract by calling' Asset.java '
+Create the 'AssetClient.java' class in the 'asset-app-3.0/src/main/java/org/fisco/bcos/asset/client' directory, and deploy and invoke the contract by calling 'Asset.java'
The 'AssetClient.java' code is as follows:
@@ -779,7 +779,7 @@ Let's look at the call to the FISCO BCOS Java SDK using the AssetClient example:
- Initialization
-The main function of the initialization code is to construct the Client and CryptoKeyPair objects, which are created in the corresponding contract class object.(Call the deploy or load function of the contract class)need to use。
+The initialization code mainly constructs the Client and CryptoKeyPair objects, which are needed when creating the corresponding contract class object (by calling the deploy or load function of the contract class).
```java
/ / Initialize in the initialize function
@@ -794,9 +794,9 @@ client.getCryptoSuite().setCryptoKeyPair(cryptoKeyPair);
logger.debug("create client for group, account address is " + cryptoKeyPair.getAddress());
```
-- Constructing a Contract Class Object
+- Construct contract class objects
-The contract object can be initialized using the deploy or load function, which is used differently, the former for the initial deployment of the contract and the latter when the contract has been deployed and the contract address is known.。
+The contract object can be initialized with either the deploy or the load function; the former is used for the initial deployment of a contract, the latter when the contract has already been deployed and its address is known.
```java
/ / Deployment contract
@@ -805,7 +805,7 @@ Asset asset = Asset.deploy(client, cryptoKeyPair);
Asset asset = Asset.load(contractAddress, client, cryptoKeyPair);
```
-- interface invocation
+- Interface calls
Use the contract object to call the corresponding interface and process the returned result。
@@ -818,7 +818,7 @@ TransactionReceipt receipt = asset.register(assetAccount, amount);
TransactionReceipt receipt = asset.transfer(fromAssetAccount, toAssetAccount, amount);
```
-in the "asset-app-3.0 / tool "Add a script to call AssetClient" asset _ run.sh "。
+Add a script "asset_run.sh" that calls AssetClient in the "asset-app-3.0/tool" directory.
```shell
#!/bin/bash
@@ -863,7 +863,7 @@ function usage()
java -Djdk.tls.namedGroups="secp256k1" -cp 'apps/*:conf/:lib/*' org.fisco.bcos.asset.client.AssetClient $@
```
-Next, configure the log。in the "asset-app-3.0 / src / test / resources "directory to create" log4j.properties "
+Next, configure the log. Create "log4j.properties" in the "asset-app-3.0/src/test/resources" directory
```properties
### set log levels ###
@@ -885,18 +885,18 @@ log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%p] [%-d{yyyy-MM-dd HH:mm:ss}] %C{1}.%M(%L) | %m%n
```
-Next, specify the replication and compilation tasks by configuring the Jar command in gradle。And introduce the log library, in the "asset-app-3.0 / src / test / resources "directory, create an empty" contract.properties "file for the application to store the contract address at runtime。
+Next, specify the copy and compile tasks by configuring the Jar command in Gradle, and introduce the log library.
In the "asset-app-3.0/src/test/resources" directory, create an empty "contract.properties" file for the application to store the contract address at runtime.
-So far, we have completed the development of this application。Finally, we get the asset-app-The directory structure for 3.0 is as follows:
+So far, we have completed the development of this application. The resulting asset-app-3.0 directory structure is as follows:
```shell
-|-- build.gradle / / gradle Configuration File
+|-- build.gradle // gradle configuration file
|-- gradle
| |-- wrapper
-| |-- gradle-Wrapper.jar / / is used to download the relevant code implementation of Gradle.
-| |-- gradle-The configuration information used by wrapper.properties / / wrapper, such as the version of gradle.
-|-- gradlew / / Shell script for executing the wrapper command under Linux or Unix
-|-- gradlew.bat / / Batch script for executing the wrapper command under Windows
+| |-- gradle-wrapper.jar // code implementation used to download Gradle
+| |-- gradle-wrapper.properties // configuration used by the wrapper, such as the gradle version
+|-- gradlew // shell script for executing the wrapper command under Linux or Unix
+|-- gradlew.bat // batch script for executing the wrapper command under Windows
├── LICENSE
├── README.md
|-- src
@@ -906,9 +906,9 @@ So far, we have completed the development of this application。Finally, we get
| | | |-- fisco
| | | |-- bcos
| | | |-- asset
-| | | |-- client / / Place the client call class
+| | | |-- client // place client call classes
| | | |-- AssetClient.java
-| | | |-- contract / / Place Java contract classes
+| | | |-- contract // place Java contract classes
| | | |-- Asset.java
| | |-- resources
| | |-- conf
@@ -917,34 +917,34 @@ So far, we have completed the development of this application。Finally, we get
| | |-- sdk.crt
| | |-- sdk.key
| | |-- sdk.nodeid
-| | |-- applicationContext.xml / / project configuration file
-| | |-- contract.properties / / File that stores the deployment
contract address
-| | |-- log4j.properties / / log configuration file
-| | |-- contract / / Store the solidity contract file
+| | |-- applicationContext.xml // project configuration file
+| | |-- contract.properties // file that stores the deployed contract address
+| | |-- log4j.properties // log configuration file
+| | |-- contract // stores the Solidity contract files
| | |-- Asset.sol
| | |-- Table.sol
| |-- test
-| | |-- resources / / stores the code resource file
+| | |-- resources // stores the code resource files
| | |-- conf
| | |-- ca.crt
| | |-- cert.cnf
| | |-- sdk.crt
| | |-- sdk.key
| | |-- sdk.nodeid
-| | |-- applicationContext.xml / / project configuration file
-| | |-- contract.properties / / File that stores the deployment contract address
-| | |-- log4j.properties / / log configuration file
-| | |-- contract / / Store the solidity contract file
+| | |-- applicationContext.xml // project configuration file
+| | |-- contract.properties // file that stores the deployed contract address
+| | |-- log4j.properties // log configuration file
+| | |-- contract // stores the Solidity contract files
| | |-- Asset.sol
| | |-- KVTable.sol
| |-- tool
- |-- asset _ run.sh / / project run script
+ |-- asset_run.sh // project run script
```
## 6.
Run the application
-So far, we have introduced all the processes and functions of developing asset management applications using blockchain, and then we can run the project to test whether the functions are normal.。
+So far, we have covered the whole process and features of developing an asset-management application on the blockchain; now we can run the project to test whether the features work correctly.
- Compile
```shell
$ cd ~/fisco/asset-app-3.0
$ ./gradlew build
```
-After successful compilation, the 'dist' directory will be generated in the project root directory。There is an 'asset _ run.sh' script in the dist directory to simplify project running。Now start to verify the requirements set at the beginning of this article.。
+After a successful compilation, the 'dist' directory is generated in the project root. The 'dist' directory contains an 'asset_run.sh' script to simplify running the project. Now let's verify the requirements set at the beginning of this article.
-- Deploying the 'Asset.sol' Contract
+- Deploy the 'Asset.sol' contract
```shell
# Enter dist directory
$ bash asset_run.sh deploy
deploy Asset success, contract address is 0xc8ead4b26b2c6ac14c9fd90d9684c9bc2cc40085
```
-- Registered Assets
+- Register assets
```shell
$ bash asset_run.sh register Alice 100000
register asset account success => asset: Alice, value: 100000
$ bash asset_run.sh register Bob 100000
register asset account success => asset: Bob, value: 100000
```
-- Query Assets
+- Query assets
```shell
$ bash asset_run.sh query Alice
asset account Alice, value 100000
$ bash asset_run.sh query Bob
asset account Bob, value 100000
```
-- Asset Transfer
+- Asset transfer
```shell
$ bash asset_run.sh transfer Alice Bob 50000
transfer success => from_asset: Alice, to_asset: Bob, amount: 50000
$ bash asset_run.sh query Alice
asset account Alice, value 50000
@@ -995,4 +995,4 @@ $ bash asset_run.sh query Bob
asset account Bob, value 150000
```
-**To summarize:** So far, we have built a solid application based on FISCO BCOS consortium blockchain through contract development, contract compilation, SDK configuration and business
development.。
+**To summarize:** Through contract development, contract compilation, SDK configuration, and business-logic development, we have built a Solidity application based on the FISCO BCOS consortium blockchain.
diff --git a/3.x/en/docs/quick_start/wbc_liquid_application.md b/3.x/en/docs/quick_start/wbc_liquid_application.md
index 69cc6324d..e452116a1 100644
--- a/3.x/en/docs/quick_start/wbc_liquid_application.md
+++ b/3.x/en/docs/quick_start/wbc_liquid_application.md
@@ -1,19 +1,19 @@
-# Development of the first WBC-Liquid Blockchain Applications
+# 4. Develop the first WBC-Liquid blockchain application
-Tags: "Develop first app" "WBC-Liquid "" Contract Development "" Blockchain Application "" WASM ""
+Tags: "Develop first app" "WBC-Liquid" "Contract development" "Blockchain app" "WASM"
---
```eval_rst
.. important::
- Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
+ Related software and environment release notes! Please check `here <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
```
-This chapter will introduce the whole process of developing a business application scenario based on FISCO BCOS blockchain, from business scenario analysis to contract design and implementation, then contract compilation and how to deploy to the blockchain, and finally the implementation of an application module, through the [Java SDK] we provide.(../develop/sdk/java_sdk/index.md)Enables call access to contracts on the blockchain。
+This chapter introduces the whole process of developing a business application on the FISCO BCOS blockchain: from business scenario analysis to contract design and implementation, then contract compilation and deployment to the blockchain, and finally the implementation of an application module that calls contracts on the blockchain through the [Java SDK](../develop/sdk/java_sdk/index.md) we provide.
-This tutorial requires users to be familiar with the Linux operating
environment, have basic skills in Java development, be able to use Gradle tools, and be familiar with webankblockchain.-liquid syntax (hereinafter referred to as WBC-Liquid), and a [WBC-Liquid's environment configuration](https://liquid-doc.readthedocs.io/zh_CN/latest/docs/quickstart/prerequisite.html)。
+This tutorial requires users to be familiar with the Linux operating environment, have basic Java development skills, be able to use the Gradle tool, be familiar with the webankblockchain-liquid syntax (hereinafter referred to as WBC-Liquid), and have completed the [WBC-Liquid environment configuration](https://liquid-doc.readthedocs.io/zh_CN/latest/docs/quickstart/prerequisite.html).
-Developing the WBC-Liquid application, need to build**WASM**Configure the blockchain network。The steps are as follows:
+To develop WBC-Liquid applications, you need to build a blockchain network with **WASM** enabled. The steps are as follows:
1. If you have not built a blockchain network or downloaded the console, please complete the tutorial [Building the First Blockchain Network](./air_installation.md)and back to this tutorial。(Ignore this step if built);
2. Enable the node wasm configuration item: modify the '[executor]' of the node creation block configuration file 'config.genesis' to 'is _ wasm = true';
3. Delete data and restart the node;
@@ -31,15 +31,15 @@ At this point, the blockchain network has opened the WASM configuration。
## 1.
Understand application requirements
-Blockchain naturally has tamper-proof, traceability and other characteristics, these characteristics determine that it is more likely to be favored by the financial sector.。In this example, a simple development example of asset management will be provided, and the following features will eventually be implemented:
+Blockchain naturally has characteristics such as tamper resistance and traceability, which make it especially likely to be favored by the financial sector. In this example, a simple asset-management development example is provided, which eventually implements the following features:
- Ability to register assets on the blockchain
- Ability to transfer different accounts
-- Can query the asset amount of the account
+- Ability to query the asset amount of an account
## Design and Development of Smart Contracts
-When developing applications on the blockchain, combined with business requirements, it is first necessary to design the corresponding smart contract, determine the data that the contract needs to store, determine the interface provided by the smart contract on this basis, and finally give the specific implementation of each interface.。
+When developing an application on the blockchain, based on the business requirements, you first need to design the corresponding smart contract: determine the data the contract needs to store, define the interfaces the smart contract provides on that basis, and finally give the concrete implementation of each interface.
### Step 1: Designing Smart Contracts
@@ -50,7 +50,7 @@ For this application, you need to design a table for storing asset management.
T - account: Primary Key, Asset Account(string type) - asset_value: Amount of assets(uint256 type) -where account is the primary key, which is the field that needs to be passed in when operating the table, and the blockchain queries the matching records in the table based on the primary key field.。Storage representation for example below: +where account is the primary key, which is the field that needs to be passed in when operating the table, and the blockchain queries the matching records in the table based on the primary key field。Storage representation for example below: | account | asset_value | |---------|-------------| @@ -77,9 +77,9 @@ pub fn transfer(&mut self, from: String, to: String, value: u128) -> i16 #### Create Create an Asset smart contract project based on our first step of storage and interface design。 -Execute the following command in the terminal to create the WBC-liquid smart contract project +Run the following command in the terminal to create a WBC-Liquid smart contract project: -**Special attention:** For the convenience of users, the console has prepared the 'asset' example under the 'console / contracts / liquid' path. The following process is to create a new WBC-Liquid Contract Process。 +**Special attention:** For the convenience of users, the console has prepared the 'asset' example under the 'console / contracts / liquid' path. The following process is the process of creating a new WBC-Liquid contract。 ```shell # Create working directory ~ / fisco @@ -94,7 +94,7 @@ cd ~/fisco/console/ # Enter the console / contracts directory cd ~/fisco/console/contracts/liquid -# Create a new contract(The console has prepared the asset directory. If you want to download the console, skip this step.) +# Create a new contract(The console has prepared the asset directory. 
If you want to download the console, skip this step) cargo liquid new contract asset ``` @@ -114,13 +114,13 @@ asset/ The function of each file is as follows: -- '.gitignore ': hidden file to tell version management software [Git](https://git-scm.com/)Which files or directories do not need to be added to version management。WBC-By default, Liquid excludes some unimportant issues (such as temporary files generated during compilation) from version management. If you do not need to use Git management to manage project versions, you can ignore this file.; +- '.gitignore': hidden file that tells the version management software [Git](https://git-scm.com/)which files or directories do not need to be added to version management。By default, WBC-Liquid excludes some unimportant files (such as temporary files generated during compilation) from version management. If you do not need to use Git to manage project versions, you can ignore this file; -- '.liquid / ': hidden directory, used to implement WBC-The 'abi _ gen' subdirectory contains the implementation of the ABI generator. The compilation configuration and code logic in this directory are fixed.; +- '.liquid /': hidden directory used to implement the internal functions of WBC-Liquid smart contracts, in which the 'abi _ gen' subdirectory contains the implementation of the ABI generator. The compilation configuration and code logic in this directory are fixed; modifying them may prevent the ABI from being generated normally; -- 'Cargo.toml ': project configuration list, mainly including project information, external library dependencies, compilation configuration, etc. Generally, there is no need to modify the file, unless there are special requirements (such as referencing additional third-party libraries, adjusting optimization levels, etc.); +- 'Cargo.toml': project configuration list, mainly including project information, external library dependencies, compilation configuration, etc.
Generally, the file does not need to be modified unless there are special requirements (such as referencing additional third-party libraries, adjusting optimization levels, etc.); -- `src/lib.rs`:WBC-Liquid smart contract project root file, where the contract code is stored。After the smart contract project is created, the 'lib.rs' file is automatically populated with some template code, which we can use for further development.。 +- 'src / lib.rs': WBC-Liquid smart contract project root file, where the contract code is stored。After the smart contract project is created, the 'lib.rs' file is automatically populated with some template code, which we can use for further development。 Once we have copied the code from Asset liquid into the 'lib.rs' file, we can proceed to the next steps。 @@ -323,11 +323,11 @@ mod asset { Run the following command in the asset project root directory to start the build: ```shell -# Compile the secret version of the wasm binary file. +# Compile the national cryptography (SM) version of the wasm binary file cargo liquid build -g ``` -This command directs the Rust language compiler to 'wasm32-unknown-'Unknown 'compiles the smart contract code for the target, and finally generates the Wasm format bytecode and ABI.。`-g 'Build smart contracts that can run on the underlying platform of the State Secret FISCO BCOS blockchain。After the command is executed, the following content is displayed: +This command directs the Rust compiler to compile the smart contract code with 'wasm32-unknown-unknown' as the target, and finally generates the Wasm-format bytecode and ABI。'-g' builds a smart contract that can run on the underlying platform of FISCO BCOS blockchains with national cryptography (SM) enabled。After the command is executed, the following content is displayed: ```shell [1/4] 🔍 Collecting crate metadata @@ -340,7 +340,7 @@ Binary: ~/fisco/console/contracts/liquid/asset/target/asset_gm.wasm ABI: ~/fisco/console/contracts/liquid/asset/target/asset.abi ``` -Among them,
"Binary:"followed by the absolute path of the generated bytecode file," ABI:after the absolute path for the generated ABI file。To simplify the adaptation of FISCO BCOS SDKs in various languages, WBC-Liquid uses an ABI format compatible with the Solidity ABI specification. +Among them, "Binary:"followed by the absolute path of the generated bytecode file," ABI:after the absolute path for the generated ABI file。To simplify the adaptation of FISCO BCOS SDKs in various languages, WBC-Liquid uses the ABI format compatible with the Solidity ABI specification Then generate the non-secret Binary, ABI file: @@ -348,22 +348,22 @@ Then generate the non-secret Binary, ABI file: cargo liquid build ``` -Note: Without '-g`。 +Note: Without '-g'。 After executing the command, the generated information is the same as the above, enter 'target', see the new Binary, ABI, and just 'asset _ gm.wasm' ## 3. Compile Smart Contracts -The "Liquid" smart contract needs to be compiled into ABI and WASM files before it can be deployed to the blockchain network. With these two files, you can deploy and call the contract with the Java SDK.。For details about how to build and compile the Liquid project environment, see: [Deploying the Liquid Compilation Environment](https://liquid-doc.readthedocs.io/zh_CN/latest/docs/quickstart/prerequisite.html) [Liquid Development Guide](https://liquid-doc.readthedocs.io/zh_CN/latest/docs/dev_testing/development.html)。 +The "Liquid" smart contract needs to be compiled into ABI and WASM files before it can be deployed to the blockchain network. 
With these two files, you can deploy and call the contract with the Java SDK。For details about how to build and compile the Liquid project environment, see: [Deploying the Liquid Compilation Environment](https://liquid-doc.readthedocs.io/zh_CN/latest/docs/quickstart/prerequisite.html) [Liquid Development Guide](https://liquid-doc.readthedocs.io/zh_CN/latest/docs/dev_testing/development.html)。 -The Java generation tool provided by the console can compile the ABI and WASM files from the 'cargo liquid build' and automatically generate a contract Java class with the same name as the compiled smart contract.。This Java class is generated based on the ABI to help users parse the parameters and provide methods with the same name.。When an application needs to deploy and invoke a contract, you can call the corresponding method of the contract class and pass in the specified parameters.。Using this contract Java class to develop applications can greatly simplify the user's code。We use the console console script 'contract2java.sh' to generate the Java file。 +The Java generation tool provided by the console takes the ABI and WASM files produced by 'cargo liquid build' and automatically generates a contract Java class with the same name as the compiled smart contract。This Java class is generated based on the ABI to help users parse the parameters, and it provides methods with the same names as the contract interfaces。When an application needs to deploy and invoke a contract, it can call the corresponding method of the contract class and pass in the specified parameters。Using this contract Java class to develop applications can greatly simplify the user's code。We use the console script 'contract2java.sh' to generate the Java file。 ```shell -# Assuming that you have completed the download operation of the console, if not, please check the development source code steps in Section 2 of this article.
+# Assuming that you have completed the download operation of the console, if not, please check the development source code steps in Section 2 of this article # Switch to fisco / console / directory cd ~/fisco/console/ -# Compile Contract(Specify the path of the BINARY and abi files. You can specify the path based on the actual project path.)As follows: +# Compile Contract(Specify the path of the BINARY and abi files. You can specify the path based on the actual project path)As follows: bash contract2java.sh liquid -a ~/fisco/console/contracts/liquid/asset/target/asset.abi -b ~/fisco/console/contracts/liquid/asset/target/asset.wasm -s ~/fisco/console/contracts/liquid/asset/target/asset_gm.wasm -p org.fisco.bcos.asset.liquid.contract # Script Usage: @@ -404,7 +404,7 @@ public class Asset extends Contract { } ``` -The load and deploy functions are used to construct the Asset object, and the other interfaces are used to call the corresponding contract interfaces, respectively.。 +The load and deploy functions are used to construct the Asset object, and the other interfaces are used to call the corresponding contract interfaces, respectively。 ## 4. Create a blockchain application project @@ -412,7 +412,7 @@ The load and deploy functions are used to construct the Asset object, and the ot First, we need to install the JDK and the integrated development environment -- Java: JDK 11 (supported from JDK 1.8 to JDK 14) +- Java: JDK 11 (supported from JDK1.8 to JDK 14) First, download and install JDK11 on the official website, and modify the JAVA _ HOME environment variable by yourself @@ -422,9 +422,9 @@ First, we need to install the JDK and the integrated development environment ### Step 2. 
Create a Java project -Create a gradle project in the IntelliJ IDE, select Gradle and Java, and enter the project name "asset-app-liquid``。 +Create a gradle project in IntelliJ IDE, select Gradle and Java, and enter the project name "asset-app-liquid"。 -Note: (This step is not a required step) The source code for this project can be obtained and referenced in the following way.。 +Note: (This step is not a required step) The source code for this project can be obtained and referenced in the following way。 ```shell $ cd ~/fisco @@ -437,12 +437,12 @@ $ unzip asset-app-3.0-liquid.zip && mv asset-app-demo-main-liquid asset-app-liq ```eval_rst .. note:: -- If you cannot download for a long time due to network problems, please try to append '185.199.108.133 raw.githubusercontent.com' to '/ etc / hosts', or try 'curl-o asset-app-3.0-liquid.zip -#LO https://gitee.com/FISCO-BCOS/asset-app-demo/repository/archive/main-liquid.zip` +-If you cannot download for a long time due to network problems, please try to append '185.199.108.133 raw.githubusercontent.com' to '/ etc / hosts', or try 'curl -o asset-app-3.0-liquid.zip-#LO https://gitee.com/FISCO-BCOS/asset-app-demo/repository/archive/main-liquid.zip` ``` ### Step 3. Introducing the FISCO BCOS Java SDK -Modify the "build.gradle" file, introduce the Spring framework, and add a reference to the FISCO BCOS Java SDK under "dependencies" (note java-sdk version number)。 +Modify the "build.gradle" file, introduce the Spring framework, and add a reference to the FISCO BCOS Java SDK under "dependencies" (note the java-sdk version)。 ```groovy repositories { @@ -469,7 +469,7 @@ dependencies { ``` ### Step 4. 
Configure the SDK certificate -in the "asset-app-create a configuration file "applicationContext.xml" in the liquid / src / test / resources directory and write the configuration content。 +Create the configuration file "applicationContext.xml" in the "asset-app-liquid / src / test / resources" directory and write the configuration content。 The contents of applicationContext.xml are as follows: @@ -554,27 +554,27 @@ The contents of applicationContext.xml are as follows: ``` -**Note:** The liquid contract. The node needs to be enabled.**wasm**Options。If rpc listen _ ip is set to 127.0.0.1 or 0.0.0.0 and listen _ port is set to 20200, the 'applicationContext.xml' configuration does not need to be modified.。If the blockchain node configuration is changed, you must also modify the 'peers' configuration option under the 'network' attribute of the configuration 'applicationContext.xml' to configure the 'listen _ ip' of the '[rpc]' configuration of the connected node.:listen_port`。 +**Note:** The liquid contract. 
The node must have the **wasm** option enabled。If rpc listen _ ip is set to 127.0.0.1 or 0.0.0.0 and listen _ port is set to 20200, the 'applicationContext.xml' configuration does not need to be modified。If the blockchain node configuration is changed, you must also modify the 'peers' configuration option under the 'network' attribute of 'applicationContext.xml' to match the 'listen _ ip:listen_port' of the '[rpc]' configuration of the connected node。 -In the above configuration file, we specified the value of "certPath" for the bit where the certificate is stored as "conf"。Next, we need to put the certificate used by the SDK to connect to the node into the specified "conf" directory.。 +In the above configuration file, we set "certPath", the directory where certificates are stored, to "conf"。Next, we need to put the certificate used by the SDK to connect to the node into the specified "conf" directory。 ```shell -# Suppose we take the asset-app-put liquid in the ~ / fisco directory to enter the ~ / fisco directory +# Suppose we put asset-app-liquid in the ~ / fisco directory and enter the ~ / fisco directory $ cd ~/fisco # Create a folder to place the certificate $ mkdir -p asset-app-liquid/src/test/resources/conf # Copy the node certificate to the project resource directory $ cp -r nodes/127.0.0.1/sdk/* asset-app-liquid/src/test/resources/conf -# If you run the IDE directly, copy the certificate to the resources path. +# If you run the IDE directly, copy the certificate to the resources path $ mkdir -p asset-app-liquid/src/main/resources/conf $ cp -r nodes/127.0.0.1/sdk/* asset-app-liquid/src/main/resources/conf ``` ## 5.
Business Logic Development -We've covered how to introduce and configure the Java SDK in our own projects, and this section describes how to invoke contracts through Java programs, also with example asset management instructions.。 +We've covered how to introduce and configure the Java SDK in our own projects, and this section describes how to invoke contracts through Java programs, also with example asset management instructions。 -### The first step is to introduce 3 compiled Java contracts into the project. +### The first step is to introduce 3 compiled Java contracts into the project ```shell cd ~/fisco @@ -582,7 +582,7 @@ cd ~/fisco cp console/contracts/sdk/java/org/fisco/bcos/asset/liquid/contract/Asset.java asset-app-liquid/src/main/java/org/fisco/bcos/asset/liquid/contract/Asset.java ``` -### The second step is to develop business logic. +### The second step is to develop business logic Create the 'AssetClient.java' class in the '/ src / main / java / org / fisco / bcos / asset / liquid / client' directory to deploy and invoke the contract by calling 'Asset.java' @@ -803,7 +803,7 @@ Let's look at the call to the FISCO BCOS Java SDK using the AssetClient example: - Initialization -The main function of the initialization code is to construct the Client and CryptoKeyPair objects, which are created in the corresponding contract class object.(Call the deploy or load function of the contract class)need to use。 +The main function of the initialization code is to construct the Client and CryptoKeyPair objects, which are created in the corresponding contract class object(Call the deploy or load function of the contract class)need to use。 ```java / / Initialize in the initialize function @@ -818,9 +818,9 @@ client.getCryptoSuite().setCryptoKeyPair(cryptoKeyPair); logger.debug("create client for group, account address is " + cryptoKeyPair.getAddress()); ``` -- Constructing a Contract Class Object +- Construct contract class objects -The contract object can be 
initialized using the deploy or load function, which is used differently, the former for the initial deployment of the contract and the latter when the contract has been deployed and the contract address is known.。 +The contract object can be initialized using either the deploy or the load function: the former is used for the initial deployment of the contract, the latter when the contract has already been deployed and its address is known。 ```java / / Deployment contract @@ -829,7 +829,7 @@ Asset asset = Asset.deploy(client, cryptoKeyPair, assetPath); Asset asset = Asset.load(contractAddress, client, cryptoKeyPair); ``` -- interface invocation +- Interface calls Use the contract object to call the corresponding interface and process the returned result。 @@ -842,7 +842,7 @@ TransactionReceipt receipt = asset.register(assetAccount, amount); TransactionReceipt receipt = asset.transfer(fromAssetAccount, toAssetAccount, amount); ``` -in the "asset-app-liquid / tool "directory to add a script that calls AssetClient" asset _ run.sh "。 +In the "asset-app-liquid / tool" directory, add a script "asset _ run.sh" that calls AssetClient。 ```shell #!/bin/bash @@ -887,7 +887,7 @@ function usage() java -Djdk.tls.namedGroups="secp256k1" -cp 'apps/*:conf/:lib/*' org.fisco.bcos.asset.liquid.client.AssetClient $@ ``` -Next, configure the log。in the "asset-app-liquid / src / test / resources "Create" log4j.properties " +Next, configure the log。Create "log4j.properties" in the "asset-app-liquid / src / test / resources" directory ```properties ### set log levels ### @@ -909,18 +909,18 @@ log4j.appender.stdout.layout=org.apache.log4j.PatternLayout log4j.appender.stdout.layout.ConversionPattern=[%p] [%-d{yyyy-MM-dd HH:mm:ss}] %C{1}.%M(%L) | %m%n ``` -Next, specify the replication and compilation tasks by configuring the Jar command in gradle。And introduce the log library, in the "asset-app-create an empty "contract.properties" file in the "liquid / src / test / resources"
directory to store the contract address at runtime.。 +Next, specify the replication and compilation tasks by configuring the Jar command in gradle。Then introduce the log library, and create an empty "contract.properties" file in the "asset-app-liquid / src / test / resources" directory to store the contract address at runtime。 -So far, we have completed the development of this application。Finally, we get the asset-app-The directory structure of liquid is as follows: +So far, we have completed the development of this application。Finally, we get the asset-app-liquid directory structure as follows: ```shell -|-- build.gradle / / gradle Configuration File +|-- build.gradle // gradle configuration file |-- gradle | |-- wrapper -| |-- gradle-Wrapper.jar / / is used to download the relevant code implementation of Gradle. -| |-- gradle-The configuration information used by wrapper.properties / / wrapper, such as the version of gradle. -|-- gradlew / / Shell script for executing the wrapper command under Linux or Unix -|-- gradlew.bat / / Batch script for executing the wrapper command under Windows +| |-- gradle-wrapper.jar // Code implementation used to download Gradle +| |-- gradle-wrapper.properties // Configuration information used by the wrapper, such as the gradle version +|-- gradlew // Shell script for executing the wrapper command under Linux or Unix +|-- gradlew.bat // Batch script for executing the wrapper command under Windows |-- src | |-- main | | |-- java @@ -929,9 +929,9 @@ So far, we have completed the development of this application。Finally, we get | | | |-- bcos | | | |-- asset | | | |-- liquid -| | | |-- client / / Place the client call class +| | | |-- client // Place the client call class | | | |-- AssetClient.java -| | | |-- contract / / Place Java contract classes +| | | |-- contract // Place the Java contract classes | | | |-- Asset.java | | |-- resources | | |-- conf @@ -940,36 +940,36 @@ So far, we have completed the development of this application。Finally, we get
| | |-- sdk.crt | | |-- sdk.key | | |-- sdk.nodeid -| | |-- applicationContext.xml / / project configuration file -| | |-- contract.properties / / File that stores the deployment contract address -| | |-- log4j.properties / / log configuration file -| | |-- contract / / Store WBC-Liquid Contract Files +| | |-- applicationContext.xml // project configuration file +| | |-- contract.properties // File that stores the deployed contract address +| | |-- log4j.properties // log configuration file +| | |-- contract // Stores WBC-Liquid contract files | | |-- asset | | |-- src -| | |-- lib.rs WBC-Liquid File +| | |-- lib.rs // WBC-Liquid contract file | |-- test -| |-- resources / / stores the code resource file +| |-- resources // Stores the code resource files | |-- conf | |-- ca.crt | |-- cert.cnf | |-- sdk.crt | |-- sdk.key | |-- sdk.nodeid -| |-- applicationContext.xml / / project configuration file -| |-- contract.properties / / File that stores the deployment contract address -| |-- log4j.properties / / log configuration file -| |-- contract / / Store WBC-Liquid Contract Files +| |-- applicationContext.xml // project configuration file +| |-- contract.properties // File that stores the deployed contract address +| |-- log4j.properties // log configuration file +| |-- contract // Stores WBC-Liquid contract files | |-- asset | |-- src -| |-- lib.rs WBC-Liquid File +| |-- lib.rs // WBC-Liquid contract file | |-- tool - |-- asset _ run.sh / / project run script + |-- asset _ run.sh // project run script ``` ## 6.
Run the application -So far, we have introduced all the processes and functions of developing asset management applications using blockchain, and then we can run the project to test whether the functions are normal.。 +So far, we have introduced all the processes and functions of developing asset management applications using blockchain, and then we can run the project to test whether the functions are normal。 - Compile @@ -980,7 +980,7 @@ $ cd ~/fisco/asset-app-liquid $ ./gradlew build ``` -After successful compilation, the 'dist' directory will be generated in the project root directory。There is an 'asset _ run.sh' script in the dist directory to simplify project running。Now start to verify the requirements set at the beginning of this article.。 +After successful compilation, the 'dist' directory will be generated in the project root directory。There is an 'asset _ run.sh' script in the dist directory to simplify project running。Now start to verify the requirements set at the beginning of this article。 - Deploy the 'Asset.liquid' contract @@ -991,7 +991,7 @@ $ bash asset_run.sh deploy deploy Asset success, contract address is /asset/liquid180 ``` -- Registered Assets +- Registered assets ```shell $ bash asset_run.sh register Alice 100000 @@ -1000,7 +1000,7 @@ $ bash asset_run.sh register Bob 100000 register asset account success => asset: Bob, value: 100000 ``` -- Query Assets +- Query assets ```shell $ bash asset_run.sh query Alice @@ -1009,7 +1009,7 @@ $ bash asset_run.sh query Bob asset account Bob, value 100000 ``` -- Asset Transfer +- Asset transfer ```shell $ bash asset_run.sh transfer Alice Bob 50000 @@ -1020,4 +1020,4 @@ $ bash asset_run.sh query Bob asset account Bob,, value 150000 ``` -**To summarize:** At this point, we passed the WBC-Liquid contract development, contract compilation, SDK configuration and business development build a WBC based on FISCO BCOS consortium blockchain.-Liquid Applications。 +**To summarize:** So far, we have built a WBC-Liquid 
application based on the FISCO BCOS consortium blockchain through WBC-Liquid contract development, contract compilation, SDK configuration and business development。 diff --git a/3.x/en/docs/sdk/c_sdk/api.md b/3.x/en/docs/sdk/c_sdk/api.md index 20df11a0b..457da86cd 100644 --- a/3.x/en/docs/sdk/c_sdk/api.md +++ b/3.x/en/docs/sdk/c_sdk/api.md @@ -1,10 +1,10 @@ # Interface List -Tag: "c-sdk`` ``API`` +Tags: `c-sdk` `API` ---------- -This section describes'c-API list of the SDK, module list: +This section describes the API list and module list of 'c-sdk': - [Basic Operation](../c_sdk/api.html#id2) - [Error Handling](../c_sdk/api.html#id3) @@ -19,18 +19,18 @@ This section describes'c-API list of the SDK, module list: ## 1. Basic operation -This section describes'c-basic operations of the sdk, including creating, starting, stopping, and releasing sdk objects。 +This section describes the basic operations of 'c-sdk', including the creation, start, stop, and release of 'sdk' objects。 ### `bcos_sdk_version` - Prototype: - `const char* bcos_sdk_version()` - Function: - - get c-sdk version and build information + - obtain the version and build information of the c-sdk - Parameters: - None - Return: - - String types, including c-sdk version and build information, example: + - String type, including the version and build information of c-sdk, example: ```shell FISCO BCOS C SDK Version : 3.1.0 @@ -40,8 +40,8 @@ This section describes'c-basic operations of the sdk, including creating, starti Git Commit : dbc82415510a0e59339faebcd72e540fe408d2d0 ``` -- 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage.
+- Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage ### `bcos_sdk_create` @@ -52,9 +52,9 @@ This section describes'c-basic operations of the sdk, including creating, starti - Parameters: - config: configuration object, refer to [configuration object](./config.html#id2) - Return: - - return 'sdk' pointer - - Failed to return 'NULL', you can call 'bcos _ sdk _ get _ last _ error' to get the error information, refer to the 'bcos _ sdk _ get _ last _ error' interface introduction -- 注意: + - returns the 'sdk' pointer + - On failure, 'NULL' is returned. You can call 'bcos _ sdk _ get _ last _ error' to obtain the error information. For more information, see 'bcos _ sdk _ get _ last _ error' +- Attention: - The created 'sdk' object needs to be released by the 'bcos _ sdk _ destroy' interface to avoid memory leakage ### `bcos_sdk_create_by_config_file` @@ -66,9 +66,9 @@ This section describes'c-basic operations of the sdk, including creating, starti - Parameters: - config_file: configuration files, refer to [Configuration Files](./config.html#id3) - Return: - - return 'sdk' pointer - - Failed to return 'NULL', you can call 'bcos _ sdk _ get _ last _ error' to get the error information, refer to the 'bcos _ sdk _ get _ last _ error' interface introduction -- 注意: + - returns the 'sdk' pointer + - On failure, 'NULL' is returned. You can call 'bcos _ sdk _ get _ last _ error' to obtain the error information. For more information, see 'bcos _ sdk _ get _ last _ error' +- Attention: - The created 'sdk' object needs to be released by the 'bcos _ sdk _ destroy' interface to avoid memory leakage ### `bcos_sdk_start` @@ -76,18 +76,18 @@ This section describes'c-basic operations of the sdk, including creating, starti - Prototype: - `void bcos_sdk_start(void* sdk)` - Function: - - start 'sdk' + - start the 'sdk' - Parameters: - sdk: 'sdk 'pointer - Return: - - None.
You can use 'bcos _ sdk _ get _ last _ error' to check whether the startup is successful. For details, see 'bcos _ sdk _ get _ last _ error'. + - None. You can use 'bcos _ sdk _ get _ last _ error' to check whether the startup is successful. For details, see 'bcos _ sdk _ get _ last _ error' ### `bcos_sdk_stop` - Prototype: - `void bcos_sdk_stop(void* sdk)` - Function: - - stop 'sdk' + - Stop 'sdk' - Parameters: - sdk: 'sdk 'pointer - Return: @@ -98,7 +98,7 @@ This section describes'c-basic operations of the sdk, including creating, starti - Prototype: - `void bcos_sdk_destroy(void* sdk)` - Function: - - stop and release the 'sdk' + - Stop and release the 'sdk' - Parameters: - sdk: 'sdk 'pointer - Return: None @@ -115,7 +115,7 @@ This section describes'c-basic operations of the sdk, including creating, starti ## 2. Error handling -This section describes'c-error handling interface of sdk '。 +This section describes the error handling interface of 'c-sdk'。 **Note: These interfaces are only valid for synchronous calling interfaces; the error message for an asynchronous interface is returned in its callback**。 ### `bcos_sdk_is_last_opr_success` - Prototype: - `int bcos_sdk_is_last_opr_success()` - Function: - - Whether the last operation was successful. The result returned by 'bcos _ sdk _ get _ last _ error' is not 0.。 + - Whether the last operation was successful, which is equivalent to checking that 'bcos _ sdk _ get _ last _ error' returns 0。 - Parameters: - None - Return: @@ -136,7 +136,7 @@ This section describes'c-error handling interface of sdk '。 - Prototype: - `int bcos_sdk_get_last_error()` - Function: - - Obtain the return status of the previous operation.
If the operation fails, you can call 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error description - Parameters: - None - Return: @@ -147,29 +147,29 @@ This section describes'c-error handling interface of sdk '。 - Prototype: - `const char* bcos_sdk_get_last_error_msg()` - Function: - - Obtain the description of the error message of the previous operation, and use it with 'bcos _ sdk _ get _ last _ error' + - Obtain the error description of the last operation, and use it with 'bcos _ sdk _ get _ last _ error' - Parameters: - None - Return: Error Description Information ## 3. RPC interface -This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to interact with nodes。 +This section describes how to call the 'rpc' interface of 'FISCO-BCOS 3.0' in 'c-sdk' to interact with nodes。 ### `bcos_rpc_call` - Prototype: - `void bcos_rpc_call(void* sdk, const char* group, const char* node, const char* to, const char* data, bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Call contract, query operation, no consensus + - Call contracts, query operations, no consensus required - Parameters: - `sdk`: 'sdk 'pointer - `group`: Group ID - - `node`: node name, the name of the node to which the request is sent.(The node name can be obtained by 'getGroupInfo')When the value is NULL or an empty string, a node is randomly selected according to the principle of the highest block height in the group. 
+ - `node`: node name, the name of the node to which the request is sent(The node name can be obtained by 'getGroupInfo')When the value is NULL or an empty string, a node is randomly selected according to the principle of the highest block height in the group - `to`: Contract Address - `data`: Encoded parameters - - 'ABI 'encoding when calling' solidity 'contract - - Encode 'liquid' when calling 'liquid' contract + - 'ABI' encoding when calling a 'solidity' contract + - 'liquid' encoding when calling a 'liquid' contract - `callback`: callback function, function prototype: ```shell @@ -187,11 +187,11 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i void* data; / / return data, valid when error = 0 size_t size; / / Return data size, valid when error = 0 - void* context; / / The callback context. The 'context' parameter passed in when the interface is called. + void* context; / / The callback context. The 'context' parameter passed in when the interface is called }; ``` - **!!!注意: The callback data 'data' is only valid in the callback thread. In multi-thread scenarios, users need to copy the data themselves to ensure thread safety.** + **!!!Note: The callback data 'data' is only valid in the callback thread. In multi-thread scenarios, users need to copy the data themselves to ensure thread safety** - `context`: Callback context, returned in the 'context' field of callback 'bcos _ sdk _ c _ struct _ response' - Return: - None @@ -201,11 +201,11 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Prototype: - `void bcos_rpc_send_transaction(void* sdk, const char* group, const char* node, const char* data, int proof, bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Sending transactions requires blockchain consensus + - Send transactions that require blockchain consensus - Parameters: - `sdk`: 'sdk 'pointer - `group`: Group ID - - `node`: The node name.
For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node' - `data`: signed transaction, hex c style string - `proof`: Whether to return the transaction receipt proof, 0: do not return, 1: return - `callback`: Refer to the description of 'callback' for the 'bcos _ rpc _ call' interface @@ -215,13 +215,13 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i ### `bcos_rpc_get_transaction` -- function prototype: `void bcos_rpc_get_transaction(void* sdk, const char* group, const char* node, const char* tx_hash,int proof, bcos_sdk_c_struct_response_cb callback, void* context)` +- Function prototype: `void bcos_rpc_get_transaction(void* sdk, const char* group, const char* node, const char* tx_hash,int proof, bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Get transaction based on transaction hash + - Get transactions based on transaction hash - Parameters: - `sdk`: 'sdk 'pointer - `group`: Group ID - - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node' - `tx_hash`: Transaction Hash - `proof`: Return Proof of Transaction, 0: No Return, 1: Return - `callback`: Refer to the description of 'callback' for the 'bcos _ rpc _ call' interface @@ -242,7 +242,7 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Parameters: - `sdk`: 'sdk 'pointer - `group`: Group ID - - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. 
For more information, see the description of the 'bcos_rpc_call' interface for the 'node' - `tx_hash`: Transaction Hash - `proof`: Return transaction receipt proof, 0: No return, 1: Return - `callback`: Refer to the description of 'callback' for the 'bcos_rpc_call' interface @@ -263,10 +263,10 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Parameters: - `sdk`: 'sdk' pointer - `group`: Group ID - - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. For more information, see the description of the 'bcos_rpc_call' interface for the 'node' - `block_hash`: Block Hash - `only_header`: Whether to get only the block header, 1: Yes, 0: No - - `only_tx_hash`: Whether to get only the transaction hash of the block, 1.: Yes, 0: 否 + - `only_tx_hash`: Whether to get only the transaction hash of the block, 1: Yes, 0: No - `callback`: Refer to the description of 'callback' for the 'bcos_rpc_call' interface - `context`: Refer to the description of 'context' for the 'bcos_rpc_call' interface - Return: @@ -281,14 +281,14 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i ``` - Function: - - Get block based on block height + - Get a block by block height - Parameters: - `sdk`: 'sdk' pointer - `group`: Group ID - - `node`: The node name.
For more information, see the description of the 'bcos_rpc_call' interface for the 'node' - `block_number`: Block height - `only_header`: Whether to get only the block header, 1: Yes, 0: No - - `only_tx_hash`: Whether to get only the transaction hash of the block, 1.: Yes, 0: 否 + - `only_tx_hash`: Whether to get only the transaction hash of the block, 1: Yes, 0: No - `callback`: Refer to the description of 'callback' for the 'bcos_rpc_call' interface - `context`: Refer to the description of 'context' for the 'bcos_rpc_call' interface - Return: @@ -303,7 +303,7 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Parameters: - `sdk`: 'sdk' pointer - `group`: Group ID - - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. For more information, see the description of the 'bcos_rpc_call' interface for the 'node' - `block_number`: Block height - `callback`: Refer to the description of 'callback' for the 'bcos_rpc_call' interface - `context`: Refer to the description of 'context' for the 'bcos_rpc_call' interface @@ -314,24 +314,24 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Prototype: - `int64_t bcos_rpc_get_block_limit(void* sdk, const char* group)` - Function: - - Gets the block height limit, which needs to be used when creating signed transactions. + - Get the block height limit, which is required when creating signed transactions - Parameters: - `sdk`: 'sdk' pointer - `group`: Group ID - Return: - - '> 0' returns the 'block limit' value - - '< = 0' indicates that the group failed to be queried.
+ - '>0' returns the 'block limit' value + - '<=0' indicates that querying the group failed ### `bcos_rpc_get_block_number` - Prototype: - `void bcos_rpc_get_block_number(void* sdk, const char* group, const char* node,bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Get Group Block High + - Get the block height of the group - Parameters: - `sdk`: 'sdk' pointer - `group`: Group ID - - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. For more information, see the description of the 'bcos_rpc_call' interface for the 'node' - `callback`: Refer to the description of 'callback' for the 'bcos_rpc_call' interface - `context`: Refer to the description of 'context' for the 'bcos_rpc_call' interface - Return: @@ -342,11 +342,11 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Prototype: - `void bcos_rpc_get_code(void* sdk, const char* group, const char* node, const char* address,bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - According to the contract address, check the contract code. + - Query the contract code by contract address - Parameters: - `sdk`: 'sdk' pointer - `group`: Group ID - - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. For more information, see the description of the 'bcos_rpc_call' interface for the 'node' - `address`: Contract Address - `callback`: Refer to the description of 'callback' for the 'bcos_rpc_call' interface - `context`: Refer to the description of 'context' for the 'bcos_rpc_call' interface @@ -361,7 +361,7 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Parameters: - `sdk`: 'sdk' pointer - `group`: Group ID - - `node`: The node name.
For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. For more information, see the description of the 'bcos_rpc_call' interface for the 'node' - `callback`: Refer to the description of 'callback' for the 'bcos_rpc_call' interface - `context`: Refer to the description of 'context' for the 'bcos_rpc_call' interface - Return: @@ -372,11 +372,11 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Prototype: - `void bcos_rpc_get_observer_list(void* sdk, const char* group, const char* node,bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Obtain the list of group observation nodes + - Get the list of group observer nodes - Parameters: - `sdk`: 'sdk' pointer - `group`: Group ID - - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. For more information, see the description of the 'bcos_rpc_call' interface for the 'node' - `callback`: Refer to the description of 'callback' for the 'bcos_rpc_call' interface - `context`: Refer to the description of 'context' for the 'bcos_rpc_call' interface - Return: @@ -391,7 +391,7 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Parameters: - `sdk`: 'sdk' pointer - `group`: Group ID - - `node`: The node name.
For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node' - `callback`: Refer to the description of 'callback' for the 'bcos _ rpc _ call' interface - `context`: Refer to the description of 'context' for the 'bcos _ rpc _ call' interface - Return: @@ -402,11 +402,11 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Prototype: - `void bcos_rpc_get_sync_status(void* sdk, const char* group, const char* node,bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Obtain the block synchronization status of a group + - Get the block synchronization status of the group - Parameters; - `sdk`: 'sdk 'pointer - `group`: Group ID - - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node' - `callback`: Refer to the description of 'callback' for the 'bcos _ rpc _ call' interface - `context`: Refer to the description of 'context' for the 'bcos _ rpc _ call' interface - Return: @@ -417,11 +417,11 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Prototype: - `void bcos_rpc_get_consensus_status(void* sdk, const char* group, const char* node,bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Get the consensus state of a node + - Get the consensus status of the node - Parameters: - `sdk`: 'sdk 'pointer - `group`: Group ID - - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. 
For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node' - `callback`: Refer to the description of 'callback' for the 'bcos _ rpc _ call' interface - `context`: Refer to the description of 'context' for the 'bcos _ rpc _ call' interface - Return: @@ -432,11 +432,11 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Prototype: - `void bcos_rpc_get_system_config_by_key(void* sdk, const char* group, const char* node,const char* key,bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Get System Configuration + - Get system configuration - Parameters: - `sdk`: 'sdk 'pointer - `group`: Group ID - - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node' - `key`: Configure 'key' - `callback`: Refer to the description of 'callback' for the 'bcos _ rpc _ call' interface - `context`: Refer to the description of 'context' for the 'bcos _ rpc _ call' interface @@ -448,11 +448,11 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Prototype: - `void bcos_rpc_get_total_transaction_count(void* sdk, const char* group, const char* node, bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Obtain the total amount of transactions at the current block height + - Get the total amount of transactions at the current block height - Parameters: - `sdk`: 'sdk 'pointer - `group`: Group ID - - `node`: The node name. For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node'. + - `node`: The node name. 
For more information, see the description of the 'bcos _ rpc _ call' interface for the 'node' - `callback`: Refer to the description of 'callback' for the 'bcos _ rpc _ call' interface - `context`: Refer to the description of 'context' for the 'bcos _ rpc _ call' interface - Return: @@ -463,7 +463,7 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Prototype: - `void bcos_rpc_get_group_peers(void* sdk, const char* group, bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Obtaining Network Connection Information for a Group + - Get the network connection information of the group - Parameters: - `sdk`: 'sdk 'pointer - `group`: Group ID @@ -491,7 +491,7 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Prototype: - `void bcos_rpc_get_group_list(void* sdk, bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Get Group List + - Get group list - Parameters: - `sdk`: 'sdk 'pointer - `callback`: Refer to the description of 'callback' for the 'bcos _ rpc _ call' interface @@ -531,7 +531,7 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i - Prototype: - `void bcos_rpc_get_group_node_info(void* sdk, const char* group, const char* node,bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Obtaining Node Information of a Group + - Get the node information of the group - Parameters: - sdk: 'sdk 'pointer - group: Group ID @@ -543,7 +543,7 @@ This section describes how to-sdk 'call' FISCO-'rpc 'interface of BCOS 3.0' to i ## 4. 
AMOP interface -This section describes the-sdk'Using FISCO-BCOS 3.0 'AMOP' function interface。 +This section describes the 'c-sdk' interfaces for the FISCO-BCOS 3.0 'AMOP' function. ### `bcos_amop_subscribe_topic` @@ -575,18 +575,18 @@ This section describes the-sdk'Using FISCO-BCOS 3.0 'AMOP' function interface。 ``` Field Meaning: - - endpoint: The network connection tag of the received message. It is required when the reply message calls' bcos _ amop _ send _ response '. + - endpoint: The network connection tag of the received message. It is required when 'bcos_amop_send_response' is called to reply to the message - seq: Message tag, required when 'bcos_amop_send_response' is called to reply to the message - - resp: Refer to the description of 'bcos _ sdk _ c _ struct _ response' for the 'callback' interface of 'bcos _ rpc _ call'. + - resp: Refer to the description of 'bcos_sdk_c_struct_response' in the 'callback' of the 'bcos_rpc_call' interface - - `context`: callback context. For more information, see the description of 'context' in the 'bcos _ rpc _ call' interface. + - `context`: callback context. For more information, see the description of 'context' in the 'bcos_rpc_call' interface ### `bcos_amop_set_subscribe_topic_cb` - Prototype: - `void bcos_amop_set_subscribe_topic_cb(void* sdk, bcos_sdk_c_amop_subscribe_cb cb, void* context)` - Function: - - Set a callback function for 'topic'.
If no callback function is set for the received 'topic' message, the default callback function is called - Parameters: - `sdk`: 'sdk 'pointer - `cb`: 'topic 'callback function, refer to the description of' bcos _ amop _ subscribe _ topic _ with _ cb 'interface for' cb' @@ -643,28 +643,28 @@ This section describes the-sdk'Using FISCO-BCOS 3.0 'AMOP' function interface。 - Send reply message - Parameters: - `sdk`: 'sdk 'pointer - - `peer`: The network connection tag of the received message. For more information, see the field 'endpoint' of the 'bcos _ amop _ subscribe _ topic _ with _ cb' callback function 'cb'. - - `seq`: Message tag. For details, see the field 'seq' of the 'bcos _ amop _ subscribe _ topic _ with _ cb' callback function 'cb'. + - `peer`: The network connection tag of the received message. For more information, see the field 'endpoint' of the 'bcos _ amop _ subscribe _ topic _ with _ cb' callback function 'cb' + - `seq`: Message tag. For details, see the field 'seq' of the 'bcos _ amop _ subscribe _ topic _ with _ cb' callback function 'cb' - `data`: Message content - `size`: Message length ## 5. 
EventSub interface -This section describes the-sdk'Using FISCO-Interface for BCOS 3.0 'EventSub' Event Subscription Function。 +This section describes the 'c-sdk' interfaces for the FISCO-BCOS 3.0 'EventSub' event subscription function. ### `bcos_event_sub_subscribe_event` - Prototype: - `const char* bcos_event_sub_subscribe_event(void* sdk, const char* group, const char* params,bcos_sdk_c_struct_response_cb callback, void* context)` - Function: - - Contract Event Subscription + - Subscribe to contract events - Parameters: - `sdk`: 'sdk' pointer - `group`: Request Group ID - `params`: request parameter, c-style JSON string - addresses: String array, a list of contract addresses whose events are subscribed to; empty means all contracts - - fromBlock: Shaping, initial block,-1 means starting from the current highest block - - toBlock: Shaping, ending blocks,-1 indicates that the block height is not limited, and it continues to wait for new blocks when it is already the highest block. + - fromBlock: Integer, the starting block; -1 means start from the current highest block + - toBlock: Integer, the end block; -1 means the block height is unlimited, and the subscription keeps waiting for new blocks once it reaches the highest block - topics: String array, a list of subscribed topics.
When empty, all topics are represented Example: @@ -680,21 +680,21 @@ This section describes the-sdk'Using FISCO-Interface for BCOS 3.0 'EventSub' Eve - `context`: Callback Context - Return: - - Task ID of the contract event subscription, C-style string + - Task ID of the contract event subscription, a C-style string ### `bcos_event_sub_unsubscribe_event` - Prototype: - `void bcos_event_sub_unsubscribe_event(void* sdk, const char* id)` - Function: - - Cancel Contract Event Subscription + - Cancel a contract event subscription - Parameters: - `sdk`: 'sdk' pointer - `id`: The task ID of the contract event subscription, i.e. the return value of 'bcos_event_sub_subscribe_event' ## 6. Tool class -This summary introduces'c-The basic tools of the SDK, including the 'KeyPair' signature object, the 'ABI' codec, and the construction of signature transactions.。 +This section describes the basic utilities of the 'c-sdk', including the 'KeyPair' signature object, the 'ABI' codec, and signed transaction construction. ### 6.1 'KeyPair' Signature Object @@ -702,72 +702,72 @@ This summary introduces'c-The basic tools of the SDK, including the 'KeyPair' si - Prototype: - `void* bcos_sdk_create_keypair(int crypto_type)` - Function: - - Create a 'KeyPair' object + - Creates a 'KeyPair' object - Parameters: - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - Return: - - 'KeyPair 'object pointer - - Failed to return 'NULL'. Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: - - When the 'KeyPair' object is no longer in use, you need to call the 'bcos _ sdk _ destroy _ keypad' interface to release it to avoid memory leakage. + - 'KeyPair' object pointer + - 'NULL' is returned on failure.
Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description + - Attention: + - When the 'KeyPair' object is no longer in use, you need to call the 'bcos _ sdk _ destroy _ keypair' interface to release it to avoid memory leakage - `bcos_sdk_create_keypair_by_private_key` - Prototype: - `void* bcos_sdk_create_keypair_by_private_key(int crypto_type, void* private_key, unsigned length)` - Function: - - Loading the Private Key Creating the 'KeyPair' Object + - Load the private key to create the 'KeyPair' object - Parameters: - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - private_key: Private key, byte array format - length: Array length - Return: - - 'KeyPair 'object pointer + - 'KeyPair' object pointer - Failed to return 'NULL' Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: - - When the 'KeyPair' object is no longer in use, you need to call the 'bcos _ sdk _ destroy _ keypad' interface to release it to avoid memory leakage. + - Attention: + - When the 'KeyPair' object is no longer in use, you need to call the 'bcos _ sdk _ destroy _ keypair' interface to release it to avoid memory leakage - `bcos_sdk_create_keypair_by_hex_private_key` - Prototype: - `void* bcos_sdk_create_keypair_by_hex_private_key(int crypto_type, const char* private_key)` - Function: - - Loading the Private Key Creating the 'KeyPair' Object + - Load the private key to create the 'KeyPair' object - Parameters: - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - private_key: Private key, hexadecimal c-style string format - Return: - - 'KeyPair 'object pointer - - Failed to return 'NULL'. 
Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: - - When the 'KeyPair' object is no longer in use, you need to call the 'bcos _ sdk _ destroy _ keypad' interface to release it to avoid memory leakage. + - 'KeyPair' object pointer + - 'NULL' is returned. Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description + - Attention: + - When the 'KeyPair' object is no longer in use, you need to call the 'bcos _ sdk _ destroy _ keypair' interface to release it to avoid memory leakage - `bcos_sdk_get_keypair_type` - Prototype: - `int bcos_sdk_get_keypair_type(void* key_pair)` - Function: - - Gets the 'KeyPair' object type + - Get 'KeyPair' object type - Parameters: - key_pair: 'KeyPair 'object pointer - Return: - - type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) + - Type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - `bcos_sdk_get_keypair_address` - Prototype: - `const char* bcos_sdk_get_keypair_address(void* key_pair)` - Function: - - Obtain the account address corresponding to the 'KeyPair' object + - Get the account address corresponding to the 'KeyPair' object - Parameters: - key_pair: 'KeyPair 'object pointer - Return: - account address, hex c style string - - 注意: - - When the returned string is not used, use 'bcos _ sdk _ c _ free' to release it to avoid memory leakage. 
+ - Note: + - When the returned string is no longer needed, release it with 'bcos_sdk_c_free' to avoid memory leaks - `bcos_sdk_get_keypair_public_key` - Prototype: - `const char* bcos_sdk_get_keypair_public_key(void* key_pair)` - Function: - - Gets the public key string of the 'KeyPair' object + - Get the public key string of the 'KeyPair' object - Parameters: - key_pair: 'KeyPair' object pointer - Return: - - Public key, hex c-style string - - 注意: - - When the returned string is not used, use 'bcos _ sdk _ c _ free' to release it to avoid memory leakage. + - Public key, hex C-style string + - Note: + - When the returned string is no longer needed, release it with 'bcos_sdk_c_free' to avoid memory leaks - `bcos_sdk_get_keypair_private_key` - Prototype: - `const char* bcos_sdk_get_keypair_private_key(void* key_pair)` @@ -776,9 +776,9 @@ This summary introduces'c-The basic tools of the SDK, including the 'KeyPair' si - Parameters: - key_pair: 'KeyPair' object pointer - Return: - - Private key, hex c-style string - - 注意: - - When the returned string is not used, use 'bcos _ sdk _ c _ free' to release it to avoid memory leakage.
+ - Private key, hex C-style string + - Note: + - When the returned string is no longer needed, release it with 'bcos_sdk_c_free' to avoid memory leaks - `bcos_sdk_destroy_keypair` - Prototype: - `void bcos_sdk_destroy_keypair(void* key_pair)` @@ -795,155 +795,155 @@ This summary introduces'c-The basic tools of the SDK, including the 'KeyPair' si - Prototype: - `const char* bcos_sdk_abi_encode_constructor(const char* abi, const char* bin, const char* params, int crypto_type)` - Function: - - encoding constructor parameters + - Encode constructor parameters - Parameters: - abi: Contract ABI, JSON string - bin: Contract BIN, hex c-style string - params: Constructor parameters, JSON string - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - Return: - - Encoded parameter, hex c-style string - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - Encoded parameters, hex C-style string + - Note: + - The returned string must be released by calling 'bcos_sdk_c_free' to avoid memory leaks - `bcos_sdk_abi_encode_method` - Prototype: - `const char* bcos_sdk_abi_encode_method(const char* abi, const char* method_name, const char* params, int crypto_type)` - Function: - - encoding interface parameters + - Encode interface parameters - Parameters: - abi: Contract ABI, JSON string - method_name: Interface Name - params: interface parameters, JSON string - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - Return: - - Encoded parameter, hex c-style string - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage.
+ - Encoded parameter, hex c style string + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_abi_encode_method_by_method_id` - Prototype: - `const char* bcos_sdk_abi_encode_method_by_method_id(const char* abi, const char* method_id, const char* params, int crypto_type)` - Function: - - Encode parameters based on methodID + - Encode parameters according to methodID - Parameters: - abi: Contract ABI, JSON string - method_id: methodID - params: constructor parameter, JSON string - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - Return: - - Encoded parameter, hex c-style string - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - Encoded parameter, hex c style string + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_abi_encode_method_by_method_sig` - Prototype: - `const char* bcos_sdk_abi_encode_method_by_method_sig(const char* method_sig, const char* params, int crypto_type)` - Function: - - Encode parameters according to interface signature + - encode parameters according to the signature of the interface - Parameters: - method_sig: interface signature - params: constructor parameter, JSON string - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - Return: - - Encoded parameter, hex c-style string - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. 
+ - Encoded parameter, hex c style string + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_abi_decode_method_input` - Prototype: - `const char* bcos_sdk_abi_decode_method_input(const char* abi, const char* method_name, const char* data, int crypto_type)` - Function: - - Parsing input parameters based on interface name + - Parse input parameters based on interface name - Parameters: - abi: Contract ABI, JSON string - method_name: Interface Name - data: Encoded parameters, hexadecimal c-style strings - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - Return: - - Parsed parameter, hex c-style JSON string - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - parsed parameter, hex c style json string + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_abi_decode_method_input_by_method_id` - Prototype: - `const char* bcos_sdk_abi_decode_method_input_by_method_id(const char* abi, const char* method_id, const char* data, int crypto_type)` - Function: - - Parsing input parameters based on methodID + - Parse input parameters based on methodID - Parameters: - abi: Contract ABI - method_id: methodID - data: ABI-encoded parameters, hexadecimal c-style strings - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - Return: - - Parsed parameter, hex c-style JSON string - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. 
+ - Parsed parameters, c-style JSON string + - Note: + - The returned string must be released by calling 'bcos_sdk_c_free' to avoid memory leaks - `bcos_sdk_abi_decode_method_input_by_method_sig` - Prototype: - `const char* bcos_sdk_abi_decode_method_input_by_method_sig(const char* method_sig, const char* data, int crypto_type)` - Function: - - Parsing input parameters according to interface signature + - Parse input parameters according to the interface signature - Parameters: - method_sig: interface signature - data: Encoded parameters, hexadecimal c-style strings - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - Return: - - Parsed parameter, hex c-style JSON string - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - Parsed parameters, c-style JSON string + - Note: + - The returned string must be released by calling 'bcos_sdk_c_free' to avoid memory leaks - `bcos_sdk_abi_decode_method_output` - Prototype: - `const char* bcos_sdk_abi_decode_method_output(const char* abi, const char* method_name, const char* data, int crypto_type)` - Function: - - Resolve the return parameter based on the interface name + - Parse return parameters based on the interface name - Parameters: - abi: Contract ABI - method_name: Interface Name - data: Encoded return, hex c-style string - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - Return: - - Parsed return, hex c-style JSON string - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage.
+ - Parsed return, hex c style JSON string + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_abi_decode_method_output_by_method_id` - Prototype: - `const char* bcos_sdk_abi_decode_method_output_by_method_id(const char* abi, const char* method_id, const char* data, int crypto_type)` - Function: - - Parsing return parameters based on methodID + - Parse return parameters based on methodID - Parameters: - abi: Contract ABI - method_id: methodID - data: Encoded return, hex c-style string - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - Return: - - Parsed return, hex c-style JSON string - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - Parsed return, hex c style JSON string + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_abi_decode_event` - Prototype: - `const char* bcos_sdk_abi_decode_event(const char* abi, const char* event_name, const char* data, int crypto_type)` - Function: - - Parsing the event parameter based on the event name + - parse the event parameter based on the event name - Parameters: - abi: Contract ABI - event_name: event name - data: Encoded return, hex c-style string - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - Return: - - Parsed event parameter, hexadecimal c-style JSON string - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. 
+ - Parsed event parameter, hex C-style JSON string + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_abi_decode_event_by_topic` - Prototype: - `const char* bcos_sdk_abi_decode_event_by_topic(const char* abi, const char* topic, const char* data, int crypto_type)` - Function: - - Parse the event parameter according to the topic + - Parse the event parameter based on the topic - Parameters: - abi: Contract ABI - topic: event topic - data: Encoded return, hex c-style string - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - Return: - - Parsed event parameter, hexadecimal c-style JSON string - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - Parsed event parameter, hex C-style JSON string + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage ### 6.3 Transaction construction (without type) - `bcos_sdk_get_group_wasm_and_crypto` @@ -965,14 +965,14 @@ This summary introduces'c-The basic tools of the SDK, including the 'KeyPair' si - Prototype: - `const char* bcos_sdk_get_group_chain_id(void* sdk, const char* group_id)` - Function: - - Gets the chain ID of the group, which is used when constructing transactions. + - Get the chain ID of the group, which is used when constructing the transaction - Parameters: - `sdk`: sdk object, 'bcos _ sdk _ create' or 'bcos _ sdk _ create _ by _ config _ file' - `group_id`: Group ID - Return: - Chain ID of the group - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage.
+ - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_create_transaction_data` - Prototype: @@ -983,45 +983,45 @@ This summary introduces'c-The basic tools of the SDK, including the 'KeyPair' si - Creates a 'TransactionData' object, which is an unsigned transaction object - Parameters: - `group_id`: Group ID - - `chain_id`: The chain ID. You can call the 'bcos _ sdk _ get _ group _ chain _ id' operation to obtain the chain ID of the group. + - `chain_id`: The chain ID. You can call the 'bcos _ sdk _ get _ group _ chain _ id' operation to obtain the chain ID of the group - `to`: Called contract address, set to empty string when contract is deployed"" - `data`: ABI encoded parameters, hexadecimal c-style string, refer to [ABI codec](../c_sdk/api.html#abi) - - `abi`: The ABI of the contract, which is a JSON string with optional parameters. You can enter the ABI of the contract when deploying the contract. By default, an empty string is entered."" + - `abi`: The ABI of the contract, which is a JSON string with optional parameters. You can enter the ABI of the contract when deploying the contract. By default, an empty string is entered"" - `block_limit`: The block limit. You can call the 'bcos _ rpc _ get _ block _ limit' interface to obtain the block limit - Return: - - 'TransactionData 'object pointer - - Failed to return 'NULL'. Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: - - The 'TransactionData' object needs to be released by calling the 'bcos _ sdk _ destroy _ transaction _ data' interface to avoid memory leakage. + - 'TransactionData' object pointer + - On failure, 'NULL' is returned.
Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description + - Attention: + - The 'TransactionData' object needs to be released by calling the 'bcos _ sdk _ destroy _ transaction _ data' interface to avoid memory leakage - `bcos_sdk_calc_transaction_data_hash` - Prototype: - `const char* bcos_sdk_calc_transaction_data_hash(int crypto_type, void* transaction_data)` - Function: - - Calculating the 'TransactionData' Object Hash + - Calculates the 'TransactionData' object hash - Parameters: - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - `transaction_data`: 'TransactionData 'object pointer - Return: - - 'TransactionData 'object hash - - Failed to return 'NULL'. Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: + - 'TransactionData' object hash + - On failure, 'NULL' is returned. Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description + - Attention: - **The hash of the 'TransactionData' object, which is also the hash of the transaction** - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage.
+ - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_sign_transaction_data_hash` - Prototype: - `const char* bcos_sdk_sign_transaction_data_hash(void* keypair, const char* transcation_hash)` - Function: - - Transaction Hash Signature + - Transaction hash signature - Parameters: - keypair:'KeyPair 'object, reference [' KeyPair 'signature object](../c_sdk/api.html#keypair) - transcation_hash: Transaction hash, generated by the 'bcos _ sdk _ calc _ transaction _ data _ hash' interface - Return: - Transaction signature, string type - - Failed to return 'NULL', call 'bcos _ sdk _ get _ last _ error', 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - On failure, 'NULL' is returned. Call 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_create_signed_transaction_with_signed_data` - Prototype: @@ -1031,17 +1031,17 @@ This summary introduces'c-The basic tools of the SDK, including the 'KeyPair' si ``` - Function: - - Create a signed transaction + - Create a signed transaction - Parameters: - transaction_data: 'TransactionData 'object - signed_transaction_data: Signature of transaction hash, hexadecimal C-style string, generated by the 'bcos _ sdk _ sign _ transaction _ data _ hash' interface - transaction_data_hash: Transaction hash, hexadecimal C-style string, generated by the 'bcos _ sdk _ calc _ transaction _ data _ hash' interface - attribute: Additional transaction attributes, to be expanded, default to 0 - Return: - - signed transaction, hex c style string - - Failed to return 'NULL', call 'bcos _ sdk _ get _ last _ error', 'bcos _ sdk _ get _ last _ error _ msg' to
obtain the error code and error description - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - Signed transaction, hex C-style string + - On failure, 'NULL' is returned. Call 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_create_signed_transaction` - Prototype: @@ -1051,24 +1051,24 @@ This summary introduces'c-The basic tools of the SDK, including the 'KeyPair' si ``` - Function: - - Create a signed transaction + - Create a signed transaction - Parameters: - key_pair: 'KeyPair 'object, reference [' KeyPair 'signature object](../c_sdk/api.html#keypair) - group_id: Group ID - - chain_id: The chain ID. You can call the 'bcos _ sdk _ get _ group _ chain _ id' operation to obtain the chain ID of the group. + - chain_id: The chain ID. You can call the 'bcos _ sdk _ get _ group _ chain _ id' operation to obtain the chain ID of the group - to: Called contract address, set to empty string when contract is deployed"" - data: ABI encoded parameters, refer to [ABI codec](../c_sdk/api.html#abi) - - abi: The ABI of the contract. This parameter is optional. You can enter the ABI of the contract when you deploy the contract. The default value is an empty string."" + - abi: The ABI of the contract. This parameter is optional. You can enter the ABI of the contract when you deploy the contract. The default value is an empty string"" - block_limit: The block limit.
You can call the 'bcos _ rpc _ get _ block _ limit' interface to obtain the - attribute: Additional transaction attributes, to be expanded, default to 0 - tx_hash: return value, transaction hash, hex c-style string - signed_tx: return value, signed transaction, hex c-style string - Return: - - Call the 'bcos _ sdk _ get _ last _ error' interface to determine whether it is successful. 0 indicates success, and other values indicate error codes. - - 注意: - - The returned 'tx _ hash' and 'signed _ tx' must be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - Call the 'bcos _ sdk _ get _ last _ error' interface to determine whether it is successful. 0 indicates success, and other values indicate error codes + - Attention: + - The returned 'tx _ hash' and 'signed _ tx' must be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - **Description**: - - 'bcos _ sdk _ create _ signed _ transaction 'is equivalent to a combination of the following interfaces. When the transaction creation, transaction hash, and transaction signature processes need to be processed separately, use the following interfaces: + - 'bcos _ sdk _ create _ signed _ transaction' is equivalent to a combination of the following interfaces. When the transaction creation, transaction hash, and transaction signature processes need to be processed separately, use the following interfaces: - `bcos_sdk_create_transaction_data`: Create 'TransactionData' - `bcos_sdk_calc_transaction_data_hash`: Calculate Transaction Hash - `bcos_sdk_sign_transaction_data_hash`: Transaction Hash Signature @@ -1088,39 +1088,39 @@ This summary introduces'c-The basic tools of the SDK, including the 'KeyPair' si - Prototype: - `void* bcos_sdk_create_transaction_builder_service(void* sdk, const char* group_id)` - Function: - - Create a 'TransactionBuilderService' object to simplify the construction of signature transactions. 
You can compare the differences between 'bcos _ sdk _ create _ transaction _ data _ with _ tx _ builder _ service' and 'bcos _ sdk _ create _ transaction _ data'. + - Create a 'TransactionBuilderService' object to simplify the construction of signature transactions. You can compare the differences between 'bcos _ sdk _ create _ transaction _ data _ with _ tx _ builder _ service' and 'bcos _ sdk _ create _ transaction _ data' - Parameters: - sdk: sdk object pointer - group_id: Group ID - Return: - - 'TransactionBuilderService 'object pointer - - Failed to return 'NULL', call 'bcos _ sdk _ get _ last _ error', 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: - - 'TransactionBuilderService 'object needs to be destroyed using' bcos _ sdk _ destroy _ transaction _ builder _ service 'to avoid memory leakage + - 'TransactionBuilderService' object pointer + - On failure, 'NULL' is returned. Call 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description + - Attention: + - The 'TransactionBuilderService' object needs to be destroyed using 'bcos _ sdk _ destroy _ transaction _ builder _ service' to avoid memory leakage - `bcos_sdk_destroy_transaction_builder_service` - Prototype: - `bcos_sdk_destroy_transaction_builder_service(void* service)` - Function: - - Destroying the 'TransactionBuilderService' Object + - Destroy the 'TransactionBuilderService' object - Parameters: - - 'TransactionBuilderService 'object pointer + - 'TransactionBuilderService' object pointer - Return: - None - `bcos_sdk_create_transaction_data_with_tx_builder_service` - Prototype: - `void* bcos_sdk_create_transaction_data_with_tx_builder_service(void* tx_builder_service, const char* to, const char* data, const char* abi)` - Function: - - Create a 'TransactionData' object + - Creates a 'TransactionData' object - Parameters: - tx_builder_service: 'TransactionBuilderService 'object pointer - to: Called contract
address, set to empty string when contract is deployed"" - data: ABI encoded parameters, refer to [ABI codec](../c_sdk/api.html#abi) - - abi: The ABI of the contract. This parameter is optional. You can enter the ABI of the contract when you deploy the contract. The default value is an empty string."" + - abi: The ABI of the contract. This parameter is optional. You can enter the ABI of the contract when you deploy the contract. The default value is an empty string"" - Return: - - 'TransactionData 'object pointer + - 'TransactionData' object pointer - Failed to return 'NULL'. Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: + - Attention: - The created 'TransactionData' object needs to be released by the 'bcos _ sdk _ destroy _ transaction _ data' interface to avoid memory leakage - `bcos_sdk_create_signed_transaction_with_tx_builder_service` @@ -1131,22 +1131,22 @@ This summary introduces'c-The basic tools of the SDK, including the 'KeyPair' si ``` - Function: - - Create a signed transaction + - Create a signed transaction - Parameters: - tx_builder_service: 'TransactionBuilderService 'object pointer - key_pair: 'KeyPair 'object, reference [' KeyPair 'signature object](../c_sdk/api.html#keypair) - to: Called contract address, set to empty string when contract is deployed"" - data: ABI encoded parameters, refer to [ABI codec](../c_sdk/api.html#abi) - - abi: The ABI of the contract. This parameter is optional. You can enter the ABI of the contract when you deploy the contract.
The default value is an empty string"" - attribute: Additional transaction attributes, to be expanded, default to 0 - tx_hash: return value, transaction hash, hex c-style string - signed_tx: return value, signed transaction, hex c-style string - - 注意: - - The returned 'tx _ hash' and 'signed _ tx' must be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - Attention: + - The returned 'tx _ hash' and 'signed _ tx' must be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage ### 6.4 Transaction Structure (With Type) -- **c-sdk `3.3.0-tx-struct 'feature branch, adding support for transaction structures**。 -That is, the return value and input parameters support the transaction structure, which is as follows. +- **c-sdk '3.3.0-tx-struct' feature branch, added support for transaction structures**. +That is, the return value and input parameters support the transaction structure, which is as follows: ```c // transaction bytes struct bcos_sdk_c_bytes @@ -1190,16 +1190,16 @@ struct bcos_sdk_c_transaction - Create a 'bcos _ sdk _ c _ transaction _ data' transaction structure, which is an unsigned transaction object - Parameters: - `group_id`: Group ID - - `chain_id`: The chain ID. You can call the 'bcos _ sdk _ get _ group _ chain _ id' operation to obtain the chain ID of the group. + - `chain_id`: The chain ID. You can call the 'bcos _ sdk _ get _ group _ chain _ id' operation to obtain the chain ID of the group - `to`: Called contract address, set to empty string when contract is deployed"" - `input`: ABI-encoded parameter, hexadecimal C-style string, hex string - - `abi`: The ABI of the contract, which is a JSON string with optional parameters. You can enter the ABI of the contract when deploying the contract. By default, an empty string is entered."" + - `abi`: The ABI of the contract, which is a JSON string with optional parameters. You can enter the ABI of the contract when deploying the contract.
By default, an empty string is entered"" - `block_limit`: The block limit. You can call the 'bcos _ rpc _ get _ block _ limit' interface to obtain the block limit - Return: - 'bcos _ sdk _ c _ transaction _ data' transaction structure pointer - - Failed to return 'NULL'. Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: - - 'bcos _ sdk _ c _ transaction _ data 'transaction structure. You need to call the' bcos _ sdk _ destroy _ transaction _ data _ struct 'interface to release the transaction structure to avoid memory leakage. + - On failure, 'NULL' is returned. Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description + - Attention: + - The 'bcos _ sdk _ c _ transaction _ data' transaction structure needs to be released by calling the 'bcos _ sdk _ destroy _ transaction _ data _ struct' interface to avoid memory leakage - `bcos_sdk_create_transaction_data_struct_with_bytes` - Prototype: @@ -1210,32 +1210,32 @@ struct bcos_sdk_c_transaction - Create a 'bcos _ sdk _ c _ transaction _ data' transaction structure, which is an unsigned transaction object - Parameters: - `group_id`: Group ID - - `chain_id`: The chain ID. You can call the 'bcos _ sdk _ get _ group _ chain _ id' operation to obtain the chain ID of the group. + - `chain_id`: The chain ID. You can call the 'bcos _ sdk _ get _ group _ chain _ id' operation to obtain the chain ID of the group - `to`: Called contract address, set to empty string when contract is deployed"" - `bytes_input`: ABI-encoded parameter, byte array of bytes - `bytes_input_length`: length of byte array - - `abi`: The ABI of the contract, which is a JSON string with optional parameters.
You can enter the ABI of the contract when deploying the contract. By default, an empty string is entered"" - `block_limit`: The block limit. You can call the 'bcos _ rpc _ get _ block _ limit' interface to obtain the block limit - Return: - 'bcos _ sdk _ c _ transaction _ data' transaction structure pointer - - Failed to return 'NULL'. Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: - - 'bcos _ sdk _ c _ transaction _ data 'transaction structure. You need to call the' bcos _ sdk _ destroy _ transaction _ data _ struct 'interface to release the transaction structure to avoid memory leakage. + - On failure, 'NULL' is returned. Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description + - Attention: + - The 'bcos _ sdk _ c _ transaction _ data' transaction structure needs to be released by calling the 'bcos _ sdk _ destroy _ transaction _ data _ struct' interface to avoid memory leakage - `bcos_sdk_calc_transaction_data_struct_hash` - Prototype: - `const char* bcos_sdk_calc_transaction_data_struct_hash(int crypto_type, struct bcos_sdk_c_transaction_data* transaction_data)` - Function: - - Calculate the 'bcos _ sdk _ c _ transaction _ data' transaction structure hash + - Calculate the hash of the 'bcos _ sdk _ c _ transaction _ data' transaction structure - Parameters: - crypto_type: type, ECDSA: BCOS_C_SDK_ECDSA_TYPE(0), SM: BCOS_C_SDK_SM_TYPE(1) - `transaction_data`: 'bcos _ sdk _ c _ transaction _ data' transaction structure pointer - Return: - - 'bcos _ sdk _ c _ transaction _ data' Transaction Structure Hash - - Failed to return 'NULL'. Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: + - 'bcos _ sdk _ c _ transaction _ data' transaction structure hash + - On failure, 'NULL' is returned.
Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description + - Attention: - **The hash of the 'bcos _ sdk _ c _ transaction _ data' transaction structure, which is also the hash of the transaction** - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_create_transaction_struct` - Prototype: @@ -1253,10 +1253,10 @@ struct bcos_sdk_c_transaction - attribute: Additional transaction attributes, to be expanded, default to 0 - extra_data: Transaction additional data, filling in the empty string "" is enough - Return: - - 'bcos _ sdk _ c _ transaction 'signed transaction structure pointer - - Failed to return 'NULL', call 'bcos _ sdk _ get _ last _ error', 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: - - The transaction structure signed by 'bcos _ sdk _ c _ transaction'. You need to call the 'bcos _ sdk _ destroy _ transaction _ struct' interface to release it to avoid memory leakage. + - 'bcos _ sdk _ c _ transaction' signed transaction structure pointer + - On failure, 'NULL' is returned.
Call 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description + - Attention: + - The transaction structure signed by 'bcos _ sdk _ c _ transaction', which needs to be released by calling the 'bcos _ sdk _ destroy _ transaction _ struct' interface to avoid memory leakage - `bcos_sdk_create_encoded_transaction` - Prototype: @@ -1276,34 +1276,34 @@ struct bcos_sdk_c_transaction - attribute: Additional transaction attributes, to be expanded, default to 0 - extra_data: Transaction additional data, filling in the empty string "" is enough - Return: - - Signed transaction string - - Failed to return 'NULL', call 'bcos _ sdk _ get _ last _ error', 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description - - 注意: - - The returned signed transaction string. You need to call 'bcos _ sdk _ c _ free' to release it to avoid memory leakage. + - Signed transaction string + - On failure, 'NULL' is returned. Call 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description + - Attention: + - The returned signed transaction string, which needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_encode_transaction_data_struct` - Prototype: - `const char* bcos_sdk_encode_transaction_data_struct(struct bcos_sdk_c_transaction_data* transaction_data)` - Function: - - Encode the 'bcos _ sdk _ c _ transaction _ data' transaction structure as a hex string + - Encode the 'bcos _ sdk _ c _ transaction _ data' transaction structure as a hex string - Parameters: - `transaction_data`: 'bcos _ sdk _ c _ transaction _ data' transaction structure pointer - Return: - - hex string after 'transaction _ data' transaction structure encoding - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage.
+ - Hex string encoding of the 'transaction _ data' transaction structure + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_encode_transaction_data_struct_to_json` - Prototype: - `const char* bcos_sdk_encode_transaction_data_struct_to_json(struct bcos_sdk_c_transaction_data* transaction_data)` - Function: - - Encode the 'bcos _ sdk _ c _ transaction _ data' transaction structure as a json string + - Encode the 'bcos _ sdk _ c _ transaction _ data' transaction structure as a JSON string - Parameters: - `transaction_data`: 'bcos _ sdk _ c _ transaction _ data' transaction structure pointer - Return: - json string after 'transaction _ data' transaction structure encoding - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_decode_transaction_data_struct` - Prototype: @@ -1314,20 +1314,20 @@ struct bcos_sdk_c_transaction - `transaction_data_hex_str`: encoded hex string - Return: - 'bcos _ sdk _ c _ transaction _ data' transaction structure pointer - - 注意: - - 'bcos _ sdk _ c _ transaction _ data 'transaction structure. You need to call the' bcos _ sdk _ destroy _ transaction _ data _ struct 'interface to release the transaction structure to avoid memory leakage. + - Attention: + - 'bcos _ sdk _ c _ transaction _ data' transaction structure.
You need to call the 'bcos _ sdk _ destroy _ transaction _ data _ struct' interface to release the transaction structure to avoid memory leakage - `bcos_sdk_decode_transaction_data_struct_with_json` - Prototype: - `struct bcos_sdk_c_transaction_data* bcos_sdk_decode_transaction_data_struct_with_json(const char* transaction_data_json_str)` - Function: - - Decode the encoded json string into a 'bcos _ sdk _ c _ transaction _ data' transaction structure + - Decode the encoded JSON string into the 'bcos _ sdk _ c _ transaction _ data' transaction structure - Parameters: - `transaction_data_json_str`: encoded json string - Return: - 'bcos _ sdk _ c _ transaction _ data' transaction structure pointer - - 注意: - - 'bcos _ sdk _ c _ transaction _ data 'transaction structure. You need to call the' bcos _ sdk _ destroy _ transaction _ data _ struct 'interface to release the transaction structure to avoid memory leakage. + - Attention: + - The 'bcos _ sdk _ c _ transaction _ data' transaction structure needs to be released by calling the 'bcos _ sdk _ destroy _ transaction _ data _ struct' interface to avoid memory leakage - `bcos_sdk_destroy_transaction_data_struct` - Prototype: @@ -1343,25 +1343,25 @@ struct bcos_sdk_c_transaction - Prototype: - `const char* bcos_sdk_encode_transaction_struct(struct bcos_sdk_c_transaction* transaction)` - Function: - - Encode the transaction structure of the 'bcos _ sdk _ c _ transaction' signature as a hex string + - Encode the signed 'bcos _ sdk _ c _ transaction' structure as a hex string - Parameters: - `transaction`: 'bcos _ sdk _ c _ transaction 'signed transaction structure pointer - Return: - - hex string after the transaction structure of the 'transaction' signature is encoded - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage.
+ - Hex string encoding of the signed 'transaction' structure + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_encode_transaction_struct_to_json` - Prototype: - `const char* bcos_sdk_encode_transaction_struct_to_json(struct bcos_sdk_c_transaction* transaction)` - Function: - - Encode the 'bcos _ sdk _ c _ transaction' signed transaction structure as a json string + - Encode the 'bcos _ sdk _ c _ transaction' signed transaction structure as a JSON string - Parameters: - `transaction`: 'bcos _ sdk _ c _ transaction 'signed transaction structure pointer - Return: - json string after the transaction structure of the 'transaction' signature is encoded - - 注意: - - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage. + - Attention: + - The returned string needs to be released by calling 'bcos _ sdk _ c _ free' to avoid memory leakage - `bcos_sdk_decode_transaction_struct` - Prototype: @@ -1371,9 +1371,9 @@ struct bcos_sdk_c_transaction - Parameters: - `transaction_hex_str`: encoded hex string - Return: - - 'bcos _ sdk _ c _ transaction 'signed transaction structure pointer - - 注意: - - The transaction structure signed by 'bcos _ sdk _ c _ transaction'. You need to call the 'bcos _ sdk _ destroy _ transaction _ struct' interface to release it to avoid memory leakage.
+ - 'bcos _ sdk _ c _ transaction' signed transaction structure pointer + - Attention: + - The transaction structure signed by 'bcos _ sdk _ c _ transaction', which needs to be released by calling the 'bcos _ sdk _ destroy _ transaction _ struct' interface to avoid memory leakage - `bcos_sdk_decode_transaction_struct_with_json` - Prototype: @@ -1383,15 +1383,15 @@ struct bcos_sdk_c_transaction - Parameters: - `transaction_json_str`: encoded json string - Return: - - 'bcos _ sdk _ c _ transaction 'signed transaction structure pointer - - 注意: - - The transaction structure signed by 'bcos _ sdk _ c _ transaction'. You need to call the 'bcos _ sdk _ destroy _ transaction _ struct' interface to release it to avoid memory leakage. + - 'bcos _ sdk _ c _ transaction' signed transaction structure pointer + - Attention: + - The transaction structure signed by 'bcos _ sdk _ c _ transaction', which needs to be released by calling the 'bcos _ sdk _ destroy _ transaction _ struct' interface to avoid memory leakage - `bcos_sdk_destroy_transaction_struct` - Prototype: - `void bcos_sdk_destroy_transaction_struct(struct bcos_sdk_c_transaction* transaction)` - Function: - - Release the 'bcos _ sdk _ c _ transaction' signed transaction structure + - Release the transaction structure signed by 'bcos _ sdk _ c _ transaction' - Parameters: - `transaction_data`: 'bcos _ sdk _ c _ transaction 'signed transaction structure pointer - Return: diff --git a/3.x/en/docs/sdk/c_sdk/appendix.md b/3.x/en/docs/sdk/c_sdk/appendix.md index 093d83073..8f1428517 100644 --- a/3.x/en/docs/sdk/c_sdk/appendix.md +++ b/3.x/en/docs/sdk/c_sdk/appendix.md @@ -1,13 +1,13 @@ # APPENDIX -Tag: "c-sdk "" ABI Codec "" Signature Transaction Construction " +Tags: "c-sdk" "ABI Codec" "Signature Transaction Construction" ---------- -This summary introduces'c-the usage details of some tool classes of the sdk '.: +This summary introduces the details of using some of the 'c-sdk' tool classes: - ABI Codec -- signature 
transaction construction +- Signature transaction construction ## ABI Codec @@ -15,14 +15,14 @@ To be added ## Constructing Signature Transactions -There are two ways to construct a signature transaction, depending on whether the transaction construction and signature are distinguished together.: +There are two ways to construct a signature transaction, depending on whether transaction construction and signing are performed together or separately: -- Direct construction of signature transactions: You can load the signature private key and construct the signature transaction directly. -- Separation of Transaction Construction and Signature: In this case, the private key is hosted by other services due to security and other factors, the transaction construction is done locally, and the transaction signature needs to be done by the signing service. +- Direct construction of signature transactions: You can load the signature private key and construct the signature transaction directly +- Separation of transaction construction and signature: In this case, the private key is hosted by other services due to security and other factors, the transaction construction is done locally, and the transaction signature needs to be done by the signing service ### Direct construction of signature transactions -- Constructing a Signature Object +- Construct a signature object Reference [KeyPair](./api.html#keypair)section, Creating the 'KeyPair' Object @@ -32,7 +32,7 @@ bcos_sdk_create_keypair_by_private_key: Loading the Private Key Creating a KeyPa bcos_sdk_create_keypair_by_hex_private_key: Loading a private key in hexadecimal string format Creating a KeyPair object ``` -- Constructing Signature Transactions +- Construct a signature transaction Reference [Transaction Construction](./api.html#id5)Section, Constructing Signature Transactions diff --git a/3.x/en/docs/sdk/c_sdk/assemble_transaction.md b/3.x/en/docs/sdk/c_sdk/assemble_transaction.md index 37b7435ec..50e8cd3c3 100644 ---
a/3.x/en/docs/sdk/c_sdk/assemble_transaction.md +++ b/3.x/en/docs/sdk/c_sdk/assemble_transaction.md @@ -1,17 +1,17 @@ # Transaction Construction and Sending -Tag: "c-sdk "" 'assembly transaction " +Tags: "c-sdk" "assembly transaction" ---- ```eval_rst .. important:: - FISCO BCOS supports V1 transactions after version 3.6.0 and V2 transactions after version 3.7.0. Please confirm the node version sent before using it.。Please refer to: 'v3.6.0 <.. / introduction / change _ log / 3 _ 6 _ 0.html >' for version 3.6.0 features + FISCO BCOS supports V1 transactions since version 3.6.0 and V2 transactions since version 3.7.0. Please confirm the version of the target node before sending. For the 3.6.0 features, please refer to `v3.6.0 <../introduction/change_log/3_6_0.html>`_ ``` ```eval_rst .. note:: - The data structure of the transaction can refer to 'here <. / transaction _ data _ struct.html >' _ + The data structure of the transaction can be found `here <./transaction_data_struct.html>`_ ``` FISCO BCOS supports V1 transactions after version 3.6.0 and V2 transactions after version 3.7.0. The following five fields are added: @@ -19,17 +19,17 @@ FISCO BCOS supports V1 transactions after version 3.6.0 and V2 transactions afte ```c++ string value; // v1 new transaction field, native transfer amount string gasPrice; // v1 new transaction field, unit price of gas during execution (gas/wei) -long gasLimit; / / The upper limit of the gas used when the transaction is executed.
+long gasLimit; // upper limit of the gas used when the transaction is executed string maxFeePerGas; // v1 new transaction field, EIP-1559 reserved field string maxPriorityFeePerGas; // v1 new transaction field, EIP-1559 reserved field vector extension; // v2 new field for additional storage ``` -In order to meet the requirements of adding transaction fields in the future, the C SDK supports a new transaction service that can support flexible assembly, which is convenient for users and developers to use flexibly.。 +To meet the need to add transaction fields in the future, the C SDK provides a new transaction service that supports flexible assembly, making it convenient for users and developers. ## 1. Transaction Structure Definition -After 3.6.0, support for the use of transaction structures has been added, i.e. return values and inputs all support the use of transaction structures.。The structure is as follows: +After 3.6.0, support for the use of transaction structures has been added, i.e.
return values and inputs all support the use of transaction structures. The structure is as follows: ```c // basic bytes type @@ -105,19 +105,19 @@ struct bcos_sdk_c_transaction_v2 The SDK needs to assemble `TransactionData` first, then assemble the transaction data structure `Transaction`, and finally encode the transaction structure and send it to the blockchain node. The specific steps are as follows: -- The actual parameters of the transaction call contract, encoded using ABI / Scale as the 'input' field; +- The actual parameters of the contract call, ABI/Scale-encoded, as the `input` field; - Pass in the `blockLimit` field, which is usually the current block height + 600; -- The 'nonce' field, which is a random hexadecimal string.; +- Pass in the `nonce` field, usually a random hexadecimal string; - Pass in other parameters to construct the `TransactionData` structure object; - Hash the `TransactionData` object; -- Use the key to perform the signature calculation on the hash value (byte array) calculated in the previous step to obtain the signature; +- Use the key to sign the hash value (byte array) calculated in the previous step to obtain the signature; - Pass in other parameters to construct the `Transaction` structure object; - Encode the `Transaction` structure object using `Tars` encoding; - Obtain the final raw transaction data and send it to the chain. -## 3. Transaction structure calculation interface. +## 3.
Transaction structure calculation interface -The following is an example of the 'v2' version of the transaction, using the process of transaction assembly as a timeline to introduce the calculation interface of the transaction structure.。 +The following uses a `v2` transaction as an example and introduces the transaction structure calculation interfaces along the timeline of transaction assembly. ### 3.1 Constructing the TransactionData structure @@ -128,22 +128,22 @@ Interface 'bcos _ sdk _ create _ transaction _ data _ struct _ v2' - Parameters: - `version`: The transaction version. Pass in the version corresponding to the transaction fields used. The default value is 2 - `group_id`: Group ID - - `chain_id`: The chain ID. You can call the 'bcos _ sdk _ get _ group _ chain _ id' operation to obtain the chain ID of the group. + - `chain_id`: The chain ID. You can call the `bcos_sdk_get_group_chain_id` interface to obtain the chain ID of the group - `to`: The called contract address; set it to the empty string "" when deploying a contract - - `input`: The ABI-encoded parameter, which is a bytes array. You need to pass in the bytes pointer and length. - - `abi`: The ABI of the contract, which is a JSON string with optional parameters. You can enter the ABI of the contract when deploying the contract. By default, an empty string is entered."" + - `input`: The ABI-encoded parameters, as a bytes array. You need to pass in the bytes pointer and length + - `abi`: The contract ABI, an optional JSON string parameter. You can pass in the contract ABI when deploying a contract; by default, pass the empty string "" - `block_limit`: The block limit.
You can call the `bcos_rpc_get_block_limit` interface to obtain it - `value`: The transfer amount of the transaction - `gas_price`: The gas price given for the transaction - `gas_limit`: The maximum amount of gas used by the transaction - - `max_fee_per_gas`: Trade a given EIP-1559 Field - - `max_priority_fee_per_gas`: Trade a given EIP-1559 Field - - `extension`: The bytes type of the transaction that can be stored additionally. You need to pass in the bytes pointer and length. + - `max_fee_per_gas`: EIP-1559 reserved field for the transaction + - `max_priority_fee_per_gas`: EIP-1559 reserved field for the transaction + - `extension`: Additional bytes-type data stored with the transaction. You need to pass in the bytes pointer and length - Return: - `bcos_sdk_c_transaction_data_v2` transaction structure pointer - - Failed to return 'NULL'. Use 'bcos _ sdk _ get _ last _ error' and 'bcos _ sdk _ get _ last _ error _ msg' to obtain the error code and error description -- 注意: - - 'bcos _ sdk _ c _ transaction _ data _ v2 'transaction structure. You need to call the' bcos _ sdk _ destroy _ transaction _ data _ struct _ v2 'interface to release the transaction structure to avoid memory leakage. + - On failure, `NULL` is returned. Use `bcos_sdk_get_last_error` and `bcos_sdk_get_last_error_msg` to obtain the error code and error description +- Note: + - The `bcos_sdk_c_transaction_data_v2` transaction structure needs to be released by calling the `bcos_sdk_destroy_transaction_data_struct_v2` interface to avoid memory leaks ### 3.2 Calculating TransactionData Structure Hash @@ -153,53 +153,53 @@ Interface 'bcos _ sdk _ calc _ transaction _ data _ struct _ hash _ v2' - Parameters: - `crypto_type`: Hash type; 0 is keccak256, 1 is SM3 - `bcos_sdk_c_transaction_data_v2`: transactionData pointer -- Return: Transaction hash, Hex String -- Note: When calculating the hash, different judgments will be made according to the version number of the transaction.
The hash calculation error will cause the transaction chain check to fail, so please set the transaction version number correctly。 +- Return: Transaction hash, in Hex String form +- Note: When calculating the hash, different logic is applied according to the transaction version number. An incorrect hash will cause the on-chain transaction check to fail, so please set the transaction version number correctly. ### 3.3 Constructing the Transaction Structure Using Signatures and Transaction Hashes -After calculating the transaction hash using the interface of 3.2, you can use C-The interface of the SDK, or the external signing service calculates the signature to the transaction hash。 +After calculating the transaction hash using the interface in 3.2, you can use the C-SDK interface or an external signing service to compute the signature of the transaction hash. **Note: The signature of FISCO BCOS is constructed as follows:** -- If it is an ECDSA signature, the bytes of the signature are constructed as R||S||V, where V is the value of the standard ECDSA, the value range is [0,1] -- If it is an SM2 signature, the bytes of the signature are constructed as R||S||PK, where PK is the public key of the private key +- If it is an ECDSA signature, the signature bytes are constructed as R||S||V, where V is the standard ECDSA recovery value, in the range [0,1] +- If it is an SM2 signature, the signature bytes are constructed as R||S||PK, where PK is the public key corresponding to the private key Interface `bcos_sdk_create_encoded_transaction_v2` -- Function: Use signature, transaction hash to construct the encoded transaction, which can be sent directly to the chain.
+- Function: Use the signature and transaction hash to construct the encoded transaction, which can be sent directly to the chain - Parameters: - `bcos_sdk_c_transaction_data_v2` : TransactionData object pointer - - 'signature ': the signature of the transaction hash, in Hex String format + - `signature`: the signature of the transaction hash, in Hex String format - `transaction_data_hash` : Transaction hash, Hex String format - `attribute`: Transaction attribute, reserved for expansion, defaults to 0 - `extra_data`: Additional transaction data; extra values can be saved, or fill in the empty string "" -- Return: The encoded transaction data structure, in Hex String format, which can be used directly on the chain. +- Return: The encoded transaction, in Hex String format, which can be sent directly to the chain Interface `bcos_sdk_encode_transaction_struct_to_hex_v2` -- Function: You can additionally build a Transaction structure, encode it, and send it directly to the chain +- Function: You can also build a Transaction structure yourself, encode it, and send it directly to the chain - Parameters: - `bcos_sdk_c_transaction_v2` : Transaction object pointer -- Return: The encoded transaction data structure, in Hex String format, which can be used directly on the chain. +- Return: The encoded transaction, in Hex String format, which can be sent directly to the chain ### 3.4 Parsing Encoded Transactions Interface `bcos_sdk_decode_transaction_struct_from_hex_v2`: -- Function: You can parse the transaction Hex String encoded by 'Tars' and construct the Transaction structure.
-- Parameter: 'transaction _ hex _ str', the transaction structure encoded by Tars, Hex String +- Function: Parses a transaction Hex String encoded with `Tars` and constructs the Transaction structure +- Parameter: `transaction_hex_str`, the `Tars`-encoded transaction structure, as a Hex String - Return: `bcos_sdk_c_transaction_v2`: transaction object pointer ### 3.5 Release the TransactionData structure -Due to 'C-The SDK 'uses pointers when constructing transaction structures.。Therefore, according to C's standard practice, each time after using the transaction structure should actively call the release structure interface to avoid memory leaks.。 +The `C-SDK` uses pointers when constructing transaction structures. Therefore, following standard C practice, you should actively call the corresponding release interface after each use of a transaction structure to avoid memory leaks. Interface `bcos_sdk_destroy_transaction_data_struct_v2` - Function: Release the constructed transactionData object. - Parameters: `bcos_sdk_c_transaction_data_v2` transactionData pointer -- Note: After calling the interface, you should not use the pointer again, nor should you use the same pointer more than once to call the interface。 +- Note: The pointer should not be used again after calling this interface, nor should the interface be called more than once with the same pointer. ### 3.6 Release Transaction Structure @@ -207,4 +207,4 @@ Interface 'bcos _ sdk _ destroy _ transaction _ struct _ v2' - Function: Release the constructed transaction object. - Parameters: `bcos_sdk_c_transaction_v2` transaction pointer -- Note: After calling the interface, you should not use the pointer again, nor should you use the same pointer more than once to call the interface。 +- Note: The pointer should not be used again after calling this interface, nor should the interface be called more than once with the same pointer. diff --git a/3.x/en/docs/sdk/c_sdk/compile.md
b/3.x/en/docs/sdk/c_sdk/compile.md index 8814f31af..da80ff122 100644 --- a/3.x/en/docs/sdk/c_sdk/compile.md +++ b/3.x/en/docs/sdk/c_sdk/compile.md @@ -1,6 +1,6 @@ # source code compilation -Tag: "c-sdk "" source code compilation " +Tags: "c-sdk" "source code compilation" ---------- @@ -30,10 +30,10 @@ export CXXFLAGS="${CXXFLAGS} -fPIC" cd bcos-c-sdk mkdir build && cd build -cmake ../ -DBUILD_SAMPLE=ON # Centos uses cmake3, BUILD _ SAMPLE to compile the sample program of the sample directory. +cmake ../ -DBUILD_SAMPLE=ON # CentOS uses cmake3; BUILD_SAMPLE compiles the sample programs in the sample directory ``` -Compile to generate 'libbcos-c-sdk.so` +Compilation generates `libbcos-c-sdk.so` ```shell -rw-r--r-- 1 root root 548896 12 9 17:27 libbcos-c-sdk.so @@ -47,7 +47,7 @@ mkdir build && cd build cmake ../ -DBUILD_SAMPLE=ON # BUILD_SAMPLE compiles the sample programs in the sample directory ``` -Compile to generate 'libbcos-c-sdk.dylib` +Compilation generates `libbcos-c-sdk.dylib` ```shell -rw-r--r-- 1 root root 548896 12 9 17:27 libbcos-c-sdk.dylib diff --git a/3.x/en/docs/sdk/c_sdk/config.md b/3.x/en/docs/sdk/c_sdk/config.md index 2effd035d..287d73107 100644 --- a/3.x/en/docs/sdk/c_sdk/config.md +++ b/3.x/en/docs/sdk/c_sdk/config.md @@ -1,14 +1,14 @@ # Configuration Introduction -Tag: "c-sdk`` ``config`` +Tags: "c-sdk" "config" ---------- -`bcos-c-sdk 'supports the initialization of configuration objects and configuration files.: +`bcos-c-sdk` supports both configuration-object and configuration-file initialization: -- Configuration Object Initialization: +- Configuration object initialization: - `void* bcos_sdk_create(struct bcos_sdk_c_config* config)` -- Configuration file initialization: +- Configuration file initialization: - `void* bcos_sdk_create_by_config_file(const char* config_file)` This section describes the configuration object `struct bcos_sdk_c_config` and the configuration file `config_file`. @@ -74,7 +74,7 @@ struct bcos_sdk_c_sm_cert_config
### `bcos_sdk_c_endpoint` - Function - - connection 'ip:port` + - Connect to `ip:port` - Field - `host`: node `rpc` connection, supports `ipv4` and `ipv6` formats **Note: initialize with `strdup` or `malloc` so that it can be released with `free`** @@ -83,29 +83,29 @@ struct bcos_sdk_c_sm_cert_config ### `bcos_sdk_c_cert_config` - Function: - - 'ssl 'connection certificate configuration, valid when' ssl _ type 'is' ssl' + - `ssl` connection certificate configuration; valid when `ssl_type` is `ssl` - Field: - - `ca_cert`: The root certificate can be configured in two ways: file path and file content. For more information, see the 'is _ cert _ path' field.**注意: Use 'strdup' or 'malloc' initialization to ensure that you can use 'free' release** - - `node_cert`: The 'sdk' certificate supports both file path and file content. For more information, see the 'is _ cert _ path' field.**注意: Use 'strdup' or 'malloc' initialization to ensure that you can use 'free' release** - - `node_key`: The 'sdk' private key, which supports both file path and file content. For details, see the 'is _ cert _ path' field.**注意: Use 'strdup' or 'malloc' initialization to ensure that you can use 'free' release** + - `ca_cert`: The root certificate, configurable by file path or file content. For more information, see the `is_cert_path` field **Note: initialize with `strdup` or `malloc` so that it can be released with `free`** + - `node_cert`: The `sdk` certificate, configurable by file path or file content. For more information, see the `is_cert_path` field **Note: initialize with `strdup` or `malloc` so that it can be released with `free`** + - `node_key`: The `sdk` private key, configurable by file path or file content.
For details, see the `is_cert_path` field **Note: initialize with `strdup` or `malloc` so that it can be released with `free`** ### `bcos_sdk_c_sm_cert_config` - Function: - - Configuration of 'ssl' connection certificate, valid when 'ssl _ type' is' sm _ ssl' + - Configuration of the `ssl` connection certificate; valid when `ssl_type` is `sm_ssl` - Field: - - `ca_cert`: The national secret root certificate supports two methods: file path and file content.**注意: Use 'strdup' or 'malloc' initialization to ensure that you can use 'free' release** + - `ca_cert`: The national cryptography (SM) root certificate, configurable by file path or file content **Note: initialize with `strdup` or `malloc` so that it can be released with `free`** - `node_cert`: The `sdk` SM signature certificate, configurable by file path or file content; see the `is_cert_path` field **Note: initialize with `strdup` or `malloc` so that it can be released with `free`** - `node_key`: The `sdk` SM signature private key, configurable by file path or file content; see the `is_cert_path` field **Note: initialize with `strdup` or `malloc` so that it can be released with `free`** - - `en_node_key`: The 'sdk' encryption certificate supports both file path and file content. For more information, see the 'is _ cert _ path' field.**注意: Use 'strdup' or 'malloc' initialization to ensure that you can use 'free' release** + - `en_node_key`: The `sdk` SM encryption private key, configurable by file path or file content.
For more information, see the `is_cert_path` field **Note: initialize with `strdup` or `malloc` so that it can be released with `free`** + - `en_node_crt`: The `sdk` SM encryption certificate, configurable by file path or file content. For more information, see the `is_cert_path` field **Note: initialize with `strdup` or `malloc` so that it can be released with `free`** ### `bcos_sdk_c_config` - Field: - - `thread_pool_size`: Thread pool size, which is used to process network messages. + - `thread_pool_size`: Thread pool size, used to process network messages - `message_timeout_ms`: Message timeout - `peers`: connection list, **Note: initialize with `malloc` so that it can be released with `free`** - `peers_count`: Connection list size @@ -117,9 +117,9 @@ struct bcos_sdk_c_sm_cert_config ## Configuration File -The fields in the configuration file correspond to the fields in the configuration object. +The fields in the configuration file correspond to the fields in the configuration object -- 'ssl 'connection profile +- `ssl` connection configuration file Sample Configuration: [github](https://github.com/FISCO-BCOS/bcos-c-sdk/blob/v3.0.1/sample/config/config_sample.ini) [gitee](https://gitee.com/FISCO-BCOS/bcos-c-sdk/blob/v3.0.1/sample/config/config_sample.ini) @@ -188,7 +188,7 @@ Sample Configuration: [github](https://github.com/FISCO-BCOS/bcos-c-sdk/blob/v3.
## Initialization example -- How to configure objects: `bcos-c-sdk/sample/rpc/rpc.c` +- Configuration object mode: `bcos-c-sdk/sample/rpc/rpc.c` - [github link](https://github.com/FISCO-BCOS/bcos-c-sdk/blob/v3.0.1/sample/rpc/rpc.c#L66) - [gitee link](https://gitee.com/FISCO-BCOS/bcos-c-sdk/blob/v3.0.1/sample/rpc/rpc.c#L66) diff --git a/3.x/en/docs/sdk/c_sdk/dev.md b/3.x/en/docs/sdk/c_sdk/dev.md index 134d8cbc5..39b20ca09 100644 --- a/3.x/en/docs/sdk/c_sdk/dev.md +++ b/3.x/en/docs/sdk/c_sdk/dev.md @@ -1,10 +1,10 @@ # Development Example -Tag: "c-sdk "" Example " +Tags: "c-sdk" "Example" ---------- -`bcos-c-The sdk / sample 'directory provides some examples of sdk use: +The `bcos-c-sdk/sample` directory provides some examples of SDK usage: ```shell bcos-c-sdk/sample/ diff --git a/3.x/en/docs/sdk/c_sdk/dylibs.md b/3.x/en/docs/sdk/c_sdk/dylibs.md index dcc012455..ed8550c1c 100644 --- a/3.x/en/docs/sdk/c_sdk/dylibs.md +++ b/3.x/en/docs/sdk/c_sdk/dylibs.md @@ -1,10 +1,10 @@ # dynamic library download -Tag: "c-sdk`` ``dynamic library`` +Tags: "c-sdk" "dynamic library" ---------- - `bcos-c-sdk 'has provided dynamic libraries for various platforms, and users can download and use them directly.: + `bcos-c-sdk` provides prebuilt dynamic libraries for various platforms, which users can download and use directly: ## v3.7.0 diff --git a/3.x/en/docs/sdk/c_sdk/env.md b/3.x/en/docs/sdk/c_sdk/env.md index 2e8a533d4..f191ca94a 100644 --- a/3.x/en/docs/sdk/c_sdk/env.md +++ b/3.x/en/docs/sdk/c_sdk/env.md @@ -1,6 +1,6 @@ # Environmental Requirements -Tag: "c-sdk`` +Tags: "c-sdk" ---------- diff --git a/3.x/en/docs/sdk/c_sdk/faq.md b/3.x/en/docs/sdk/c_sdk/faq.md index ee8edfa82..f4b19c0fe 100644 --- a/3.x/en/docs/sdk/c_sdk/faq.md +++ b/3.x/en/docs/sdk/c_sdk/faq.md @@ -1,10 +1,10 @@ # FAQ -Tag: "c-sdk`` ``FAQ`` +Tags: "c-sdk" "FAQ" ---------- -This summary lists some uses'c-sdk 'some common problems: +This section lists some common problems when using `c-sdk`: ## 1.
Send transaction return exception: `transaction hash mismatching` @@ -24,9 +24,9 @@ The node detects that the transaction 'hash' field carried in the 'sdk' sending - Scenario -This problem does not occur in general scenarios. It may only occur when users use the 'sdk' tool to assemble (call the interface to create, calculate hash, and sign) transactions. +This problem does not occur in general scenarios. It may occur only when users use the `sdk` tool interfaces to assemble transactions (calling the interfaces to create, hash, and sign) -- Resolve +- Resolution Assemble the transaction with the correct interface: @@ -38,6 +38,6 @@ Assemble the transaction with the correct interface: Example: [c-sdk example](https://github.com/FISCO-BCOS/bcos-c-sdk/blob/v3.2.0/sample/tx/hello_sample.c#L308) -The 'sdk' of each language encapsulates the above interfaces.: +The SDK for each language encapsulates the above interfaces: - ['Java SDK' link](https://github.com/FISCO-BCOS/bcos-sdk-jni/blob/v3.2.0/src/main/java/org/fisco/bcos/sdk/jni/utilities/tx/TransactionBuilderJniObj.java#L21) - - other language sdk, please refer to the specific documentation, or source code + - For SDKs in other languages, please refer to the corresponding documentation or source code diff --git a/3.x/en/docs/sdk/c_sdk/index.md b/3.x/en/docs/sdk/c_sdk/index.md index 1733c35d1..02c02379a 100644 --- a/3.x/en/docs/sdk/c_sdk/index.md +++ b/3.x/en/docs/sdk/c_sdk/index.md @@ -1,10 +1,10 @@ # 3.
C SDK -Tag: "c-sdk "" blockchain application " +Tags: "c-sdk" "blockchain application" ---------- -`c-sdk 'is FISCO-The sdk of version c implemented by BCOS 3.0 provides a c - style interface to access the block chain and supports basic functions such as rpc, amop and contract event subscription.。users can use it to develop blockchain applications in the c language, and can also facilitate other developers to package sdks in other languages based on the c sdk, and quickly develop sdks in other languages.。 +`c-sdk` is the C-language SDK of FISCO BCOS 3.0. It provides C-style interfaces for accessing the blockchain and supports basic functions such as RPC, AMOP, and contract event subscription. Users can use it to develop blockchain applications in C, and other developers can also wrap SDKs in other languages on top of the C SDK to quickly build SDKs in those languages. ```eval_rst .. toctree:: diff --git a/3.x/en/docs/sdk/c_sdk/transaction_data_struct.md b/3.x/en/docs/sdk/c_sdk/transaction_data_struct.md index ece532611..e482a7021 100644 --- a/3.x/en/docs/sdk/c_sdk/transaction_data_struct.md +++ b/3.x/en/docs/sdk/c_sdk/transaction_data_struct.md @@ -1,27 +1,27 @@ # Transaction and Receipt Data Structure and Assembly Process -Tag: "java-sdk "" 'Assembly Transaction ""' Data Structure "" 'Transaction "' Transaction Receipt" ' +Tags: "java-sdk" "assembly transaction" "data structure" "transaction" "transaction receipt" --- ## 1. Transaction data structure interpretation -The transaction of 3.0 is defined in FISCO-BCOS warehouse in 'bcos-tars-protocol/bcos-tars-defined in protocol / tars / Transaction.tars', visible link: [Transaction.tars](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/bcos-tars-protocol/bcos-tars-protocol/tars/Transaction.tars)。The data structure is as follows: +The 3.0 transaction is defined in `bcos-tars-protocol/bcos-tars-protocol/tars/Transaction.tars` in the FISCO-BCOS repository.
You can see the link: [Transaction.tars](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/bcos-tars-protocol/bcos-tars-protocol/tars/Transaction.tars). The data structure is as follows: ```c++ module bcostars { struct TransactionData { - 1 optional int version; / / Transaction version number. Currently, there are three types of transactions: v0, v1, and v2. + 1 optional int version; // Transaction version number; currently there are three transaction versions: v0, v1, and v2 2 optional string chainID; // Chain name 3 optional string groupID; // Group name 4 optional long blockLimit; // Block height limit for transaction execution 5 optional string nonce; // Transaction uniqueness identifier - 6 optional string to; / / The contract address of the transaction call. + 6 optional string to; // The contract address called by the transaction 7 optional vector input; // Parameters of the contract call, ABI/Scale encoded - 8 optional string abi; / / The JSON string of the ABI. We recommend that you add the ABI when deploying a contract. + 8 optional string abi; // The ABI JSON string; it is recommended to add the ABI when deploying a contract 9 optional string value; // v1 new transaction field, native transfer amount 10 optional string gasPrice; // v1 new transaction field, unit price of gas during execution (gas/wei) - 11 optional long gasLimit; / / The upper limit of the gas used when the transaction is executed. + 11 optional long gasLimit; // The upper limit of gas used when the transaction is executed 12 optional string maxFeePerGas; // v1 new transaction field, EIP-1559 reserved field 13 optional string maxPriorityFeePerGas; // v1 new transaction field, EIP-1559 reserved field 14 optional vector extension; // v2 new field for additional storage @@ -40,9 +40,9 @@ module bcostars { }; ``` -## 2.
Transaction receipt data structure interpretation -Transaction receipts for 3.0 are defined in FISCO-BCOS warehouse in 'bcos-tars-protocol/bcos-tars-defined in protocol / tars / TransactionReceipt.tars', visible link: [TransactionReceipt.tars](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/bcos-tars-protocol/bcos-tars-protocol/tars/TransactionReceipt.tars)。The data structure is as follows: +The 3.0 transaction receipt is defined in `bcos-tars-protocol/bcos-tars-protocol/tars/TransactionReceipt.tars` in the FISCO-BCOS repository. See: [TransactionReceipt.tars](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/bcos-tars-protocol/bcos-tars-protocol/tars/TransactionReceipt.tars). The data structure is as follows: ```c++ module bcostars { @@ -60,7 +60,7 @@ module bcostars { 5 optional vector output; // Transaction execution return value 6 optional vector logEntries; // Event list 7 optional long blockNumber; // Block height where the transaction is executed - 8 optional string effectiveGasPrice; / / The gas unit price (gas / wei) that takes effect when the transaction is executed. + 8 optional string effectiveGasPrice; // The effective gas unit price (gas/wei) when the transaction is executed }; struct TransactionReceipt { // Transaction receipt type @@ -73,19 +73,19 @@ module bcostars { ## 3.
The assembly process of the transaction -As shown above, the SDK needs to assemble the 'TransactionData' first, then assemble the transaction data structure as' Transaction ', and finally send it to the blockchain node.。Specific steps are as follows: +As shown above, the SDK needs to assemble `TransactionData` first, then assemble the transaction data structure `Transaction`, and finally send it to the blockchain node. The specific steps are as follows: -- The actual parameters of the transaction call contract, encoded using ABI / Scale as the 'input' field; +- The actual parameters of the contract call, ABI/Scale-encoded, as the `input` field; - Pass in the `blockLimit` field, which is usually the current block height + 600; -- The 'nonce' field, which is a random hexadecimal string.; +- Pass in the `nonce` field, usually a random hexadecimal string; - Pass in other parameters to construct the `TransactionData` structure object; - Hash the `TransactionData` object; the hash calculation algorithm can be found in Section 4; -- Use the key to perform the signature calculation on the hash value (byte array) calculated in the previous step to obtain the signature; +- Use the key to sign the hash value (byte array) calculated in the previous step to obtain the signature; - Pass in other parameters to construct the `Transaction` structure object; - Encode the `Transaction` structure object using `Tars` encoding; - Obtain the final raw transaction data and send it to the chain. -## 4.
TransactionData hash calculation algorithm and example TransactionData performs a hash calculation by assembling the bytes of all fields in the object and finally hashing the byte array. A C++ implementation example is as follows: @@ -151,9 +151,9 @@ if (getVersion() == TransactionVersion.V2.getValue()) { return byteArrayOutputStream.toByteArray(); ``` -## 5. TransactionReceiptData hash calculation algorithm and example. +## 5. TransactionReceiptData hash calculation algorithm and example -As described in Section 4, TransactionReceiptData's hash is also calculated by assembling the bytes of all the fields within the object and finally hashing the byte array.。C++An example of an implementation is as follows: +As described in Section 4, the TransactionReceiptData hash is also calculated by assembling the bytes of all fields in the object and finally hashing the byte array. A C++ implementation example is as follows: ```c++ int32_t version = boost::endian::native_to_big((int32_t)hashFields.version); diff --git a/3.x/en/docs/sdk/cert_config.md b/3.x/en/docs/sdk/cert_config.md index 834add3e0..78262b19e 100644 --- a/3.x/en/docs/sdk/cert_config.md +++ b/3.x/en/docs/sdk/cert_config.md @@ -1,14 +1,14 @@ -# 10. SDK connection certificate configuration. +# 10. SDK connection certificate configuration Tags: "SDK" "Certificate Configuration" ---- -When you use the SDK to develop an application, you need to use the certificate file of the node to interact with the node.。FISCO BCOS 3.x provides three node deployment modes. The node SDK certificate files in each deployment mode are slightly different.(./java_sdk/index.md) For example, describe the correct way to configure the SDK application certificate in each of the three node modes.。 +When you use the SDK to develop an application, you need the node's certificate files to interact with the node. FISCO BCOS 3.x provides three node deployment modes.
The node SDK certificate files in each deployment mode are slightly different. Taking the [Java SDK](./java_sdk/index.md) as an example, this section describes the correct way to configure the SDK application certificate in each of the three node modes. ## Single-group blockchain (Air version) deployment mode -[Single Group Blockchain (Air Version)](../tutorial/air/index.md) adopt all-in-The one encapsulation mode compiles all modules into a binary (process), and a process is a blockchain node.。 +[Single Group Blockchain (Air Version)](../tutorial/air/index.md) adopts the all-in-one packaging mode, compiling all modules into one binary (process); one process is one blockchain node. For installation and deployment of Air version, please refer to: [link](../tutorial/air/build_chain.md) 。 @@ -48,10 +48,10 @@ nodes/ When using the Java SDK, copy the node SSL certificate to the 'conf' directory in the compiled 'dist' directory of the project: -**Note: For ease of demonstration, the SDK application path here is' ~ / fisco 'by default. Please refer to the actual path when using it.。** +**Note: For ease of demonstration, the SDK application path here is '~/fisco' by default. Please substitute the actual path when using it.** ```shell -# For the convenience of demonstration, there is a Java SDK application in the ~ / fisco directory, and a blockchain node is built using the build _ chain.sh build script. +# For the convenience of demonstration, there is a Java SDK application in the ~/fisco directory, and a blockchain node is built using the build_chain.sh build script tree -L 1 ~/fisco ~/fisco ├── java-sdk-demo # Java SDK Application @@ -76,7 +76,7 @@ cp -r ~/fisco/nodes/127.0.0.1/sdk/* ~/fisco/java-sdk-demo/dist/conf ## Multi-group blockchain (Pro version) deployment mode -[Multi-Group Blockchain (Pro version)](../tutorial/pro/index.md) It consists of RPC, Gateway access layer services, and multiple blockchain node services. One node service represents a group, and the storage uses local RocksDB.
All nodes share access layer services.。 +[Multi-Group Blockchain (Pro version)](../tutorial/pro/index.md) consists of RPC and Gateway access-layer services plus multiple blockchain node services; one node service represents one group, storage uses local RocksDB, and all nodes share the access-layer services. For installation and deployment of Pro version, please refer to: [link](../tutorial/pro/installation.md) 。 @@ -87,7 +87,7 @@ tree generated/rpc/chain generated/rpc/chain ├── 172.25.0.3 # Please refer to the actual IP │ ├── agencyABcosRpcService # RPC Service Directory for Institution A -│ │ ├── sdk # The SDK certificate directory. The SDK client can copy certificates from this directory to connect to the RPC service. +│ │ ├── sdk # The SDK certificate directory. The SDK client can copy certificates from this directory to connect to the RPC service │ │ │ ├── ca.crt # SSL Connection Root Certificate │ │ │ ├── cert.cnf # SSL Certificate Configuration │ │ │ ├── sdk.crt # SSL Connection Certificate @@ -95,7 +95,7 @@ generated/rpc/chain │ │ └── ssl # RPC Service Certificate Directory │ └── agencyBBcosRpcService # RPC Service Configuration Directory for Institution B │ ├── config.ini.tmp # Configuration file for RPC service of institution B -│ ├── sdk # The SDK certificate directory. The SDK client copies the certificate from this directory to connect to the RPC service. +│ ├── sdk # The SDK certificate directory. The SDK client copies the certificate from this directory to connect to the RPC service │ │ ├── ca.crt │ │ ├── cert.cnf │ │ ├── sdk.crt @@ -106,10 +106,10 @@ generated/rpc/chain When using the Java SDK, copy the node SSL certificate to the 'conf' directory in the compiled 'dist' directory of the project: -**Note: For ease of demonstration, the SDK application path here is' ~ / fisco 'by default.
Please refer to the actual path when using it.。** +**Note: For ease of demonstration, the SDK application path here is '~/fisco' by default. Please substitute the actual path when using it.** ```shell -# For the convenience of demonstration, there is a Java SDK application in the ~ / fisco directory, and a blockchain node is built using the build _ chain.sh build script. +# For the convenience of demonstration, there is a Java SDK application in the ~/fisco directory, and a blockchain node is built using the build_chain.sh build script tree -L 2 ~/fisco ~/fisco ├── java-sdk-demo # Java SDK Application @@ -139,7 +139,7 @@ cp -r ~/fisco/BcosBuilder/generated/rpc/chain0/agencyABcosRpcService/172.25.0.3/ ## Appendix: Identifying the Cryptographic Environment Type of Blockchain (Non-State Secret / State Secret) -In the Air version mode and the Pro version mode, the node configuration file 'config.ini' is generated after the blockchain node is built.。From the file 'config.ini', you can determine whether the password box environment type of the current blockchain is national secret or non-national secret.。 +In the Air version mode and the Pro version mode, the node configuration file 'config.ini' is generated after the blockchain node is built. From 'config.ini', you can determine whether the cryptographic environment type of the current blockchain is state secret or non-state secret. Since the SDK is directly connected to the RPC module of the blockchain node, we only need to pay attention to the RPC configuration here: diff --git a/3.x/en/docs/sdk/cpp_sdk/index.md b/3.x/en/docs/sdk/cpp_sdk/index.md index 5b13b8bd9..13beb9666 100644 --- a/3.x/en/docs/sdk/cpp_sdk/index.md +++ b/3.x/en/docs/sdk/cpp_sdk/index.md @@ -1,11 +1,11 @@ # 8.
CPP SDK -Tag: "cpp-sdk "" blockchain application " +Tags: "cpp-sdk" "blockchain application" ---- -cpp-sdk is a C implemented by FISCO BCOS++SDK, which provides access interfaces for basic functions such as RPC, AMOP, and contract event subscription.。users can develop c by using it++version of the blockchain application。 +cpp-sdk is a C++ SDK implemented by FISCO BCOS, which provides access interfaces for basic functions such as RPC, AMOP, and contract event subscription. Users can use it to develop C++ blockchain applications. - This project supports FISCO BCOS 3.0.0 and above -Using CPP-For SDK application development, see [[github link]](https://github.com/FISCO-BCOS/bcos-cpp-sdk) +For application development using CPP-SDK, please refer to [[github link]](https://github.com/FISCO-BCOS/bcos-cpp-sdk) diff --git a/3.x/en/docs/sdk/csharp_sdk/index.md b/3.x/en/docs/sdk/csharp_sdk/index.md index b03abad50..f42dd59fe 100644 --- a/3.x/en/docs/sdk/csharp_sdk/index.md +++ b/3.x/en/docs/sdk/csharp_sdk/index.md @@ -19,7 +19,7 @@ FISCO BCOS C#Transaction resolution of Sdk (middle): -State secret version using the introduction and code analysis: < https://www.bilibili.com/video/BV1tY4y137GN?vd_source=d13b0630d8f5bdd49b00820fee2bcbde#reply118116701616> +Introduction and code parsing for the state secret version: <https://www.bilibili.com/video/BV1tY4y137GN?vd_source=d13b0630d8f5bdd49b00820fee2bcbde#reply118116701616> Have good suggestions, please contact me! My email: 2594771947 @ qq.com diff --git a/3.x/en/docs/sdk/csharp_sdk/quick_start.md b/3.x/en/docs/sdk/csharp_sdk/quick_start.md index 53e1cc504..bc9646121 100644 --- a/3.x/en/docs/sdk/csharp_sdk/quick_start.md +++ b/3.x/en/docs/sdk/csharp_sdk/quick_start.md @@ -12,38 +12,38 @@ FISCOBCOS C# Sdk uses net core 3.1, and the supporting development tools are vs ## Function Introduction -1. Implement RPC synchronous / asynchronous requests. +1. Implement RPC synchronous / asynchronous requests 2.
Realize the generation of public and private keys and accounts of FISCO BCOS, expand the generation of Webase Front, import the user json, and directly import the Webase middleware。 -3. Implement contract operation encapsulation, such as contract deployment, request parameter construction, transaction signature, RLP encoding conversion, etc.。 -4. Realize contract deployment, contract trading, contract Call operation, contract transaction receipt acquisition, etc.。 -5. Realize the analysis of contract input, output, event, etc.。 -6. Unit test Demo for all operation configurations.。Can reference copy。 -7. Realize state secret support, create state secret accounts, deploy contracts under state secrets, trade, etc.。 +3. Implement contract operation encapsulation, such as contract deployment, request parameter construction, transaction signing, and RLP encoding conversion. +4. Implement contract deployment, contract transactions, contract Call operations, and contract transaction receipt retrieval. +5. Implement parsing of contract input, output, events, etc. +6. Unit test demos for all operations and configurations, which can be copied for reference. +7.
Implement state secret support: create state secret accounts, deploy contracts, and send transactions under state secret. -Note: Sending a transaction and returning a transaction receipt test synchronously has a certain chance of being empty because the underlying transaction is being packaged and consensus has not yet been completed.。At present, the latest code adds polling acquisition to optimize the transaction receipt method and improve the user experience.。 +Note: A synchronously returned transaction receipt has a certain chance of being empty, because the underlying transaction is still being packaged and consensus has not yet completed. The latest code adds polling to optimize transaction receipt retrieval and improve the user experience. ## Installation Tutorial -Note: You can also use webase-front blockchain middleware export contract to get abi and bin files。 +Note: You can also use the webase-front blockchain middleware to export contracts to obtain abi and bin files. -1. Download the source code, vs2019 nuget package restore; Or use the nuget package to install, the installation command is as follows: Install-Package FISCOBCOS.CSharpSdk -Version 1.0.0.6 +1. Download the source code and restore the nuget packages in vs2019; or install via nuget with: Install-Package FISCOBCOS.CSharpSdk -Version 1.0.0.6 2. install the solidity plug-in in vs code and create a folder in vs code to store the original sol contract。 3. vs code executes the compilation command "compile current Solidity contract" according to F5, and abi and bin corresponding to the contract will be generated。 -4. Put the abi and bin compiled above into your project and do the related operations.。 +4.
Put the abi and bin compiled above into your project and perform the related operations. Reference: ![vs Code Compilation Contract Description](https://github.com/FISCO-BCOS/csharp-sdk/blob/master/Img/how-to-use-console-generator1.gif) ## Instructions for use -1. Configure the BaseConfig file in the FISCOBCOS. CSharpSdk class library and configure the corresponding underlying request DefaultUrl, such as: < http://127.0.0.1:8545> 。 -2. Use ContractService and QueryApiService for related business operations.。 -3. ContractService is mainly a package of operations such as contract calls, see the ContractTest.cs in the corresponding unit test in detail.。 -4. QueryApiService is the underlying non-transactional Json RPC API encapsulation. For more information, see Unit test ApiServiceTest.cs.。 -5. For more information, see RedisThreadWorkTest in the ConsoleTest project to enable multiple RedisSubClient projects for subscription.。 +1. Configure the BaseConfig file in the FISCOBCOS.CSharpSdk class library and set the corresponding underlying request DefaultUrl, for example: <http://127.0.0.1:8545>. +2. Use ContractService and QueryApiService for related business operations. +3. ContractService mainly encapsulates operations such as contract calls; see ContractTest.cs in the corresponding unit tests for details. +4. QueryApiService encapsulates the underlying non-transactional JSON-RPC API; see the unit test ApiServiceTest.cs for details. +5. See RedisThreadWorkTest in the ConsoleTest project to enable multiple RedisSubClient projects for subscription. (This function can expand the specified contract, specified event, etc. according to the actual situation to obtain the resolution operation)。 -Note: The general JSON RPC API is relatively simple and does not encapsulate the corresponding DTO entity.
During operation, you can use online JSON to generate entities for business integration.。 +Note: The general JSON RPC API is relatively simple and does not encapsulate corresponding DTO entities. You can use an online JSON-to-entity generator for business integration. ## **State Secret Usage Instructions** @@ -51,10 +51,10 @@ Note: The general JSON RPC API is relatively simple and does not encapsulate the 2. Configure DefaultPrivateKeyPemPath as the default user private key pem file in the BaseConfig.cs file [Optional]。 -3. Generate information such as national secret user accounts, which can be imported into webase-front, view unit tests。 +3. Generate state secret user account information, which can be imported into webase-front; see the unit tests. /// - / / / Generate a pair of public and private keys. The generated json can be copied to a txt file and directly imported into components such as webase front. + /// Generate a pair of public and private keys. The generated json can be copied to a txt file and directly imported into components such as webase-front /// [Fact] public void GMGeneratorAccountJsonTest() @@ -72,14 +72,14 @@ Note: The general JSON RPC API is relatively simple and does not encapsulate the /// / / / Call the contract method asynchronously. This test calls the contract set method, which can parse input and event - / / / If the transaction hash is empty, the production environment uses a scheduled service or queue to obtain the transaction hash before obtaining the corresponding data.
+ /// If the transaction hash is empty, the production environment uses a scheduled service or queue to obtain the transaction hash before obtaining the corresponding data /// /// [Fact] public async Task SendTranscationWithReceiptDecodeAsyncTest() { var contractService = new ContractService(BaseConfig.DefaultUrl, BaseConfig.DefaultRpcId, BaseConfig.DefaultChainId, BaseConfig.DefaultGroupId, privateKey); - string contractAddress = "0x26cf8fcb783bbcc7b320a46b0d1dfff5fbb27feb";/ / Test the deployment contract above to get the contract address. + string contractAddress = "0x26cf8fcb783bbcc7b320a46b0d1dfff5fbb27feb";// The contract address obtained from the deployment test above var inputsParameters = new[] { BuildParams.CreateParam("string", "n") }; var paramsValue = new object[] { "123" }; string functionName = "set";/ / Call contract method @@ -100,7 +100,7 @@ Note: The general JSON RPC API is relatively simple and does not encapsulate the Assert.NotNull(eventpramas2.Result); } - 5. Check the relevant source code for details about the generation, signature, encryption, sending transaction, etc. of the national secret account, as well as the video of the supporting station B.。 + 5. See the relevant source code for details on state secret account generation, signing, encryption, and transaction sending, as well as the accompanying Bilibili video. ## Added features diff --git a/3.x/en/docs/sdk/go_sdk/amopExamples.md b/3.x/en/docs/sdk/go_sdk/amopExamples.md index bc57d72bb..bb4687731 100644 --- a/3.x/en/docs/sdk/go_sdk/amopExamples.md +++ b/3.x/en/docs/sdk/go_sdk/amopExamples.md @@ -1,15 +1,15 @@ # AMOP Use Cases -Tag: "go-sdk`` ``AMOP`` +Tags: "go-sdk" "AMOP" ---- AMOP (Advanced Messages Onchain Protocol) is an on-chain messenger protocol designed to provide a secure and efficient message channel for the consortium chain.
All institutions in the consortium chain can use AMOP to communicate as long as they deploy blockchain nodes, whether they are consensus nodes or observation nodes. AMOP has the following advantages: -- Real-time: AMOP messages do not rely on blockchain transactions and consensus. Messages are transmitted between nodes in real time with a latency of milliseconds.。 -- Reliable: When AMOP messages are transmitted, all feasible links in the blockchain network are automatically searched for communication, and as long as at least one link is available between the sending and receiving parties, the message is guaranteed to be reachable.。 -- Efficient: AMOP message structure is simple, efficient processing logic, only a small amount of cpu occupation, can make full use of network bandwidth。 -- Easy to use: when using AMOP, no need to do any additional configuration in the SDK。 +- Real-time: AMOP messages do not rely on blockchain transactions and consensus; messages are transmitted between nodes in real time with millisecond latency. +- Reliable: When an AMOP message is transmitted, all feasible links in the blockchain network are automatically searched for communication; as long as at least one link is available between sender and receiver, the message is guaranteed to be reachable. +- Efficient: The AMOP message structure is simple and the processing logic is efficient; it occupies little CPU and can make full use of network bandwidth. +- Easy to use: When using AMOP, no additional configuration is required in the SDK. To learn more about AMOP, please refer to: [On-Chain Messenger Protocol](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/amop_protocol.html)。Case source code, please refer to: [go-sdk](https://github.com/FISCO-BCOS/go-sdk) @@ -17,7 +17,7 @@ To learn more about AMOP, please refer to: [On-Chain Messenger Protocol](https:/ **Unicast** A node randomly selects a subscriber from multiple subscribers listening to the
same topic to forward a message. For details about the process, see [Unicast Timing Diagram](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/p2p/p2p.html#id11) -- Start the AMOP message subscriber: +- Start the AMOP message subscriber: ```shell # go run examples/amop/sub/subscriber.go [endpoint] [topic] @@ -44,9 +44,9 @@ To learn more about AMOP, please refer to: [On-Chain Messenger Protocol](https:/ ## Multicast Case -**Multicast** This means that the node forwards messages to all subscribers listening on the same topic.。As long as the network is normal, even if there is no subscriber listening to the topic, the message publisher will receive the response packet that the node message is pushed successfully. For details of the process, please refer to [Multicast Timing Diagram](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/p2p/p2p.html#id12) +**Multicast** means that the node forwards messages to all subscribers listening on the same topic. As long as the network is normal, even if no subscriber is listening to the topic, the message publisher will receive a response packet indicating the message was pushed successfully.
For details of the process, please refer to [Multicast Timing Diagram](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/design/p2p/p2p.html#id12) -- Start the AMOP message subscriber: +- Start the AMOP message subscriber: ```shell # go run examples/amop/sub/subscriber.go [endpoint] [topic] diff --git a/3.x/en/docs/sdk/go_sdk/api.md b/3.x/en/docs/sdk/go_sdk/api.md index ceed94e97..fde8c0ed1 100644 --- a/3.x/en/docs/sdk/go_sdk/api.md +++ b/3.x/en/docs/sdk/go_sdk/api.md @@ -1,10 +1,10 @@ # Go API -Tag: "go-sdk`` ``AMOP`` +Tags: "go-sdk" "AMOP" ---- -The Go SDK provides a Go API interface for blockchain application developers to call externally as a service.。 +The Go SDK provides a Go API interface for blockchain application developers to call externally as a service. - **client**Provides access to FISCO BCOS nodes [JSON-RPC](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/api.html)Interface support, providing support for deploying and invoking contracts; @@ -15,7 +15,7 @@ The Go SDK provides a Go API interface for blockchain application developers to | Interface Name| Description| Parameters| | ----------------------------------- | -------------------------------------------------------- | ------------------------------------------------------------ | | AsyncSendTransaction | Asynchronously sends a signed transaction, which is then executed and agreed upon by nodes on the chain|Signed transactions and callbacks| -| Call | Call read-only contract| Contract Address < br / > Call Interface*< br / > Parameter list| +| Call | Call read-only contract| Contract Address<br/>Call Interface*<br/>Parameter list| | GetBlockNumber | Get latest block height| None| | GetBlockByHash | Obtain block information based on block hash| Block Hash & bool| | GetBlockByNumber | Obtain block information according to block height| Block height & bool| @@ -30,16 +30,16 @@ The Go SDK provides a Go API interface for blockchain application developers to | GetPbftView | Get PBFT View| None| | GetPeers | Obtain the connection information of a blockchain node| None| | GetSealerList | Get Consensus Node List| None| -| GetSystemConfigByKey | Obtain blockchain system configuration based on keywords| System configuration keyword, currently supported: <br/>\- tx_count_limit<br/>\- tx_gas_limit<br/>\- rpbft_epoch_sealer_num<br/>\- rpbft_epoch_block_num | +| GetSystemConfigByKey | Obtain blockchain system configuration based on keywords| System configuration keywords, currently supported:<br/>\- tx_count_limit<br/>\- tx_gas_limit<br/>\- rpbft_epoch_sealer_num<br/>\- rpbft_epoch_block_num | | GetSyncStatus | Obtain the synchronization status of a blockchain node| None| | GetTransactionByHash | Get transaction information based on transaction hash| Transaction Hash| | GetTransactionReceipt | Get transaction receipt based on transaction hash| Transaction Hash| | GetPendingTxSize | Get the number of unchained transactions in the transaction pool| None| -| GetTotalTransactionCount | Obtains the number of transactions on the chain of a specified group.| None| +| GetTotalTransactionCount | Obtains the number of transactions on the chain of a specified group| None| | SendRawTransaction | Send a signed transaction, which is then executed and agreed upon by the nodes on the chain|Signed transactions| -| SubscribeEventLogs | Listen for contract events eventlog| The event parameter and the callback function of the received post-processing.| +| SubscribeEventLogs | Listen for contract events eventlog| Event parameters and a callback function to process received events| | SubscribeTopic | Listen to the topic of the on-chain messenger protocol AMOP|topic and the callback function of the received post-processing| -| SendAMOPMsg | Send an AMOP message to an SDK that listens to this topic.|topic and message| +| SendAMOPMsg | Send an AMOP message to an SDK that listens to this topic|topic and message| | BroadcastAMOPMsg | Broadcast and send messages of the on-chain messenger protocol AMOP to all SDKs that listen to this topic|topic and message| | UnsubscribeTopic | Cancel listening to the topic of the on-chain messenger protocol AMOP|topic | | SubscribeBlockNumberNotify | Subscribe to block height notification|Callback function to receive block height notification| diff --git a/3.x/en/docs/sdk/go_sdk/console.md b/3.x/en/docs/sdk/go_sdk/console.md index bf719435e..36b26fc0c 100644 --- a/3.x/en/docs/sdk/go_sdk/console.md +++ b/3.x/en/docs/sdk/go_sdk/console.md @@ -1,6 +1,6 @@ # Console -Tag: "go-sdk "" Go SDK Console " +Tags: "go-sdk" "Go SDK
Console" ---- @@ -23,7 +23,7 @@ cd go-sdk go build cmd/console.go ``` -- Please copy the corresponding ca.crt, sdk.crt and sdk.key certificates to the console executable working directory +- Copy the corresponding ca.crt, sdk.crt and sdk.key certificates to the console executable working directory ## getBlockByHash @@ -35,8 +35,8 @@ Obtain block information based on the block hash: Parameters include: -- blockhash: block hash value; -- true / false: true returns the details of all transactions in the block. False only returns the hash values of all transactions in the block. The default value is true.。 +- blockHash: block hash value; +- true / false: true returns the details of all transactions in the block; false returns only their hash values. The default is true. ```shell > ./console getBlockByHash 0xce28a18b54ee72450c403968f705253a59c87a22801a88cc642ae800bb8b4848 true @@ -116,8 +116,8 @@ Obtain block information based on block height: Parameters include: -- blockNumber: block height; -- true / false: true returns the details of all transactions in the block. False only returns the hash values of all transactions in the block.
The default value is true.。 +- blockNumber: block height; +- true / false: true returns the details of all transactions in the block; false returns only their hash values. The default is true. ```shell > ./console getBlockByNumber 3 true @@ -199,7 +199,7 @@ Obtain the block hash based on the block height: Parameters include: -- blockNumber: block height。 +- blockNumber: block height. ```shell > ./console getBlockHashByNumber 3 @@ -230,7 +230,7 @@ Query contract data based on contract address: Parameters include: -- contract address: contract address。 +- contract address: contract address. ```shell > ./console getCode 0x65474dbd4f08170bc2dc30f9ae32f8e2206b15a6 @@ -515,7 +515,7 @@ Get transaction information based on transaction hash: Parameters include: -- transactionHash: transaction hash value。 +- transactionHash: transaction hash value. ```shell > ./console getTransactionByHash 0x5518df7c2063efeb6481c35c4c58f378fac5f476c023c2019b9b01d221478434 @@ -546,7 +546,7 @@ Get transaction receipt based on transaction hash: Parameters include: -- transactionHash: transaction hash value。 +- transactionHash: transaction hash value. ```shell > ./console getTransactionReceipt 0x5518df7c2063efeb6481c35c4c58f378fac5f476c023c2019b9b01d221478434 diff --git a/3.x/en/docs/sdk/go_sdk/contractExamples.md b/3.x/en/docs/sdk/go_sdk/contractExamples.md index 9cee9b8db..6e19347be 100644 --- a/3.x/en/docs/sdk/go_sdk/contractExamples.md +++ b/3.x/en/docs/sdk/go_sdk/contractExamples.md @@ -1,6 +1,6 @@ # Contract Development Sample -Tag: "go-sdk "" 'Contract Development " +Tags: "go-sdk" "contract development" ----
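The console commands documented above identify blocks and transactions by 0x-prefixed 32-byte hashes. As a small illustrative helper (not part of go-sdk; the function name is ours), the expected argument shape can be checked before invoking the console:

```go
package main

import (
	"fmt"
	"regexp"
)

// hashRe matches a 0x-prefixed 32-byte value (64 hex characters), the shape
// of the block and transaction hashes passed to the console commands above.
var hashRe = regexp.MustCompile(`^0x[0-9a-fA-F]{64}$`)

// isValidHash reports whether s looks like a well-formed hash argument.
func isValidHash(s string) bool {
	return hashRe.MatchString(s)
}

func main() {
	// A block hash from the getBlockByHash example above: well-formed.
	fmt.Println(isValidHash("0xce28a18b54ee72450c403968f705253a59c87a22801a88cc642ae800bb8b4848"))
	// Too short: rejected.
	fmt.Println(isValidHash("0x1234"))
}
```

This only validates the format; whether the hash actually exists on-chain is still decided by the node.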
contracts requires the use of go-The 'abigen' tool of the sdk converts the Solidity smart contract into the 'Go' file code, which automatically generates the interface for event listening in the contract。The whole mainly contains six processes: +When you use the SDK to develop a project, you need to use the 'abigen' tool of go-sdk to convert the Solidity smart contract into 'Go' file code to automatically generate the interface for event listening in the contract。The whole mainly contains six processes: - Prepare smart contracts that need to be compiled -- Configure the corresponding version of the solc compiler -- Build Go-sdk contract compilation tool abigen +- Configure the appropriate version of the solc compiler +-build the contract compilation tool abigen for go-sdk - compile to generate go file - prepare the certificate required to establish an ssl connection -- Use the generated go file for contract deployment and invocation. +- Use the generated go file for contract deployment, invocation ### HelloWorld Sample #### Prepare the HelloWorld.sol contract file ```bash -# The instruction is in go-Execute in the sdk directory +# The instruction is executed in the go-sdk directory mkdir helloworld && cd helloworld ``` -In the go-Create a helloworld folder in the sdk home directory and create a HelloWorld.sol contract in the folder。The contract provides two interfaces, get()and set()to get / set the contract variable name。The contract is as follows +Create a new helloworld folder in the go-sdk home directory and create the HelloWorld.sol contract in this folder。The contract provides two interfaces, get()and set()to get / set the contract variable name。The contract is as follows ```solidity // SPDX-License-Identifier: Apache-2.0 @@ -59,25 +59,25 @@ contract HelloWorld { #### Installing the solc compiler -This compiler is used to compile sol contract files into abi and bin files. 
Currently, the 'solc' compiler provided by FISCO BCOS is 0.8.11 / 0.6.10.。 +This compiler is used to compile sol contract files into abi and bin files. Currently, the 'solc' compiler provided by FISCO BCOS is 0.8.11 / 0.6.10. ```bash # The instruction is executed in the helloworld folder bash ../tools/download_solc.sh -v 0.8.11 ``` -#### Build Go-code generation tool for sdk abigen +#### Build code generation tool abigen for go-sdk This tool is used to convert abi and bin files to go files ```bash -# This instruction is executed in the helloworld folder to compile and generate the abigen tool. +# This instruction is executed in the helloworld folder to compile and generate the abigen tool go build ../cmd/abigen ``` #### compile to generate go file -First, use solc to compile the contract file HelloWorld.sol to generate abi and bin files. +First, use solc to compile the contract file HelloWorld.sol to generate abi and bin files ```bash # The instruction is executed in the helloworld folder @@ -103,7 +103,7 @@ When you use the build _ chain.sh script to build a blockchain, the sdk certificate #### Deployment contract -Create the cmd folder in the helloworld folder and create the main.go file in the cmd folder. The content of main.go is as follows. +Create the cmd folder in the helloworld folder and create the main.go file in the cmd folder. The content of main.go is as follows ```go package main @@ -199,15 +199,15 @@ func main() { Build and execute。 ```bash -# The instruction is in go-Execute in the sdk directory +# The instruction is executed in the go-sdk directory go run helloworld/cmd/main.go ``` ```eval_rst .. note:: - - The contract address needs to be saved manually, which is used when calling the contract interface.
- - If c-The dynamic library of the sdk is placed in a custom directory and needs to be 'go run'.-ldflags="-r Path to custom directory"` + - The contract address needs to be saved manually, which is used when calling the contract interface + - If the dynamic library of c-sdk is placed in a custom directory, you need 'go run -ldflags="-r path to custom directory"' ``` @@ -288,19 +288,19 @@ func main() { ## State Secret Sample -The development process for using the state secret feature is roughly the same as for non-state secrets, with the following differences. +The development process for using the state secret feature is roughly the same as for non-state secrets, with the following differences -- The FISCO BCOS blockchain network needs to open the national secret feature -- Need to replace non-state secret private key with state secret private key -- Need to prepare the TLS certificate and private key -- When installing the solc compiler, you need to add**-g** option, replace with the State Secret version -- When using the abigen tool to convert bin and abi to go files, you need to add parameters**--smcrypto=true** +- The FISCO BCOS blockchain network needs to turn on the national secret feature +- You need to replace the non-state secret private key with the state secret private key +- TLS certificate and private key need to be prepared +- When installing the solc compiler, you need to add the **-g** option to use the state secret version +- When using the abigen tool to convert bin and abi to go files, you need to add the parameter **--smcrypto=true** ### HelloWorld Sample #### Prepare the HelloWorld.sol contract file -In the go-Create a helloworld folder in the sdk home directory and create a HelloWorld.sol contract in the folder。The contract provides two interfaces, get()and set()to get / set the contract variable name。The contract is as follows +Create a new helloworld folder in the go-sdk home directory and create the HelloWorld.sol contract in this folder. The contract
provides two interfaces, get()and set()to get / set the contract variable name。The contract is as follows ```solidity pragma solidity >=0.6.10 <0.8.20; @@ -324,25 +324,25 @@ contract HelloWorld { #### install the state secret solc compiler -The compiler is used to compile the sol contract file into abi and bin files. +The compiler is used to compile the sol contract file into abi and bin files ```bash # The instruction is executed in the helloworld folder bash ../tools/download_solc.sh -v 0.8.11 -g ``` -#### Build Go-code generation tool for sdk abigen +#### Build code generation tool abigen for go-sdk This tool is used to convert abi and bin files to go files ```bash -# This instruction is executed in the helloworld folder to compile and generate the abigen tool. +# This instruction is executed in the helloworld folder to compile and generate the abigen tool go build ../cmd/abigen ``` #### compile to generate go file -First, use solc to compile the contract file HelloWorld.sol to generate abi and bin files. +First, use solc to compile the contract file HelloWorld.sol to generate abi and bin files ```bash # The instruction is executed in the helloworld folder @@ -356,4 +356,4 @@ HelloWorld.bin and HelloWorld.abi are generated under the helloworld directory ./abigen --bin ./HelloWorld.bin --abi ./HelloWorld.abi --pkg helloworld --type HelloWorld --out ./HelloWorld.go --smcrypto=true ``` -- The next steps are the same as non-state secrets, not taking up extra space. 
+- The next steps are the same as for non-state secrets and are not repeated here
diff --git a/3.x/en/docs/sdk/go_sdk/env_conf.md b/3.x/en/docs/sdk/go_sdk/env_conf.md
index 6ba6d033c..91e6f640c 100644
--- a/3.x/en/docs/sdk/go_sdk/env_conf.md
+++ b/3.x/en/docs/sdk/go_sdk/env_conf.md
@@ -1,39 +1,39 @@
# Environment and Profiles

-Tag: "go-sdk "" environment configuration "
+Tags: "go-sdk" "environment configuration"

----

## Development Environment

-- Go Development Environment
+- Go development environment

  - Golang >= 1.17
- - The project uses go module for package management. For more information, see [Using Go Modules](https://blog.golang.org/using-go-modules)
- - If you have not deployed a Go environment, please refer to [Official Documentation](https://golang.org/doc/)
+ - The project uses go module for package management, see [Using Go Modules](https://blog.golang.org/using-go-modules)
+ - If you have not deployed a Go environment, please refer to the [official documentation](https://golang.org/doc/)

-- Basic Development Components
+- Basic development components

- - Git (required for Windows, Linux, and MacOS)
+ - Git (required for Windows, Linux and MacOS)
  - Git bash (required for Windows only)

## bcos-c-sdk dynamic library preparation

-go-sdk v3 depends on bcos-c-sdk dynamic library, you need to download bcos first-c-sdk dynamic library, and then put the dynamic library in the specified directory。
+Go-sdk v3 depends on the bcos-c-sdk dynamic library. 
You need to download the bcos-c-sdk dynamic library and place the dynamic library in the specified directory.

-### download bcos-c-sdk dynamic library
+### Download bcos-c-sdk dynamic library

-From [here](https://github.com/FISCO-BCOS/bcos-c-sdk/releases/tag/v3.4.0)Download the dynamic library of the corresponding platform。We provide a script, the default download to the '/ usr / local / lib' directory, if you need to download to other directories, you can use the script '-o 'Options
+From [here](https://github.com/FISCO-BCOS/bcos-c-sdk/releases/tag/v3.4.0), download the dynamic library for the corresponding platform. We provide a script that downloads to the '/usr/local/lib' directory by default; if you need to download to another directory, use the script's '-o' option

```bash
./tools/download_csdk_lib.sh
```

-Please place the dynamic library in the '/ usr / local / lib' directory. There is no special operation in the future.。If the dynamic library is placed in a custom directory, such as'. / lib ', when used by other machines after native compilation,' go build 'needs to add the' ldflags' parameter, such as' go build-v -ldflags="-r ${PWD}/lib" main.go`。You can also pass' export LD _ LIBRARY _ PATH = ${PWD}/ lib 'Set the search path for the dynamic library。
+Place the dynamic library in the '/usr/local/lib' directory, and no further setup is needed. If the dynamic library is placed in a custom directory, such as './lib', then when the natively compiled program is used on other machines, 'go build' needs the 'ldflags' parameter, such as 'go build -v -ldflags="-r ${PWD}/lib" main.go'. You can also set the search path of the dynamic library with 'export LD_LIBRARY_PATH=${PWD}/lib'.

## Configuration

-Go SDK v3 by calling bcos-c-sdk dynamic library implementation, provides two initialization methods, a bcos-c-The configuration file of the sdk, another type of configuration information passed in by parameters。
+Go SDK v3 is implemented by calling the bcos-c-sdk dynamic library. It provides two initialization methods: one uses the configuration file of bcos-c-sdk, and the other passes configuration information in through parameters.

### Method 1: Incoming parameters

@@ -96,7 +96,7 @@ bcos-c-sdk configuration file example

### bcos-c-sdk log configuration

-bcos-c-The sdk requires a log configuration file, as shown in the following example:
+The bcos-c-sdk requires a log configuration file, as shown in the following example:

```ini
[log]
diff --git a/3.x/en/docs/sdk/go_sdk/event_sub.md b/3.x/en/docs/sdk/go_sdk/event_sub.md
index 4b9c436f4..4432ce3c4 100644
--- a/3.x/en/docs/sdk/go_sdk/event_sub.md
+++ b/3.x/en/docs/sdk/go_sdk/event_sub.md
@@ -1,16 +1,16 @@
# Contract Event Push

-Tag: "go-sdk "" Event Subscription "" Event "
+Tags: "go-sdk" "event subscription" "Event"

----

## 1. Function Introduction

-The contract event push function provides an asynchronous push mechanism for contract events. The client sends a registration request to the node, which carries the parameters of the contract events that the client is concerned about. The node filters the 'Event Log' of the request block range according to the request parameters and pushes the results to the client in stages.。We recommend that you use the generated golang wapper to subscribe to contract events. 
The subscription interface and resolution are automatically generated.。
+The contract event push function provides an asynchronous push mechanism for contract events. The client sends a registration request to the node, which carries the parameters of the contract events that the client is concerned about. The node filters the 'Event Log' of the requested block range according to the request parameters and pushes the results to the client in stages. We recommend that you use the generated golang wrapper to subscribe to contract events; the subscription interface and parsing are automatically generated.

## 2. Interactive Protocol

-The interaction between the client and the node is divided into three stages: registration request, node reply, and 'Event Log' data push.。
+The interaction between the client and the node is divided into three stages: registration request, node reply, and 'Event Log' data push.

### Registration Request

@@ -24,16 +24,16 @@ type EventLogParams struct {
 Topics []string `json:"topics"`
}

-/ / SubscribeEventLogs subscribes to the contract event. The parameters of the contract event to be subscribed to and the function that handles the received event are passed in. The ID of the subscription task is returned successfully, and the error message is returned when it fails.。Subscription id can be used to unsubscribe
+// SubscribeEventLogs subscribes to the contract event. The parameters of the contract event to be subscribed to and the function that handles the received event are passed in. On success the ID of the subscription task is returned; on failure the error message is returned. The subscription ID can be used to unsubscribe
func (c *Connection) SubscribeEventLogs(eventLogParams types.EventLogParams, handler func(int, []types.Log)) (string, error)

-/ / Unsubscribe to the contract event. The ID of the subscription task.
+// Unsubscribe from the contract event, given the ID of the subscription task
func (c *Connection) UnsubscribeEventLogs(taskID string)
```

- FromBlock: Start block (greater than 0)
-- ToBlock:-1 indicates the latest block
-- Addresses: string array. The array contains multiple contract addresses. The array can be empty.
-- Topics: the array type. The array contains multiple topics. The array can be empty.
+- ToBlock: -1 indicates the latest block
+- Addresses: string array containing multiple contract addresses; the array can be empty
+- Topics: the array type, containing multiple topics; the array can be empty

### Event Log Data Push

@@ -66,7 +66,7 @@ type Log struct {

### Registration Interface

-at FISCO-The 'client.Client' class in the BCOS Go SDK provides an interface for registering contract events. You can call 'SubscribeEventLogs' to send a registration request to a node and set a callback function.。
+In the FISCO-BCOS Go SDK, the 'client.Client' class provides an interface for registering contract events. You can call 'SubscribeEventLogs' to send a registration request to a node and set a callback function.

```go
func (c *Client) SubscribeEventLogs(eventLogParams types.EventLogParams, handler func(int, []types.Log)) error {
@@ -91,8 +91,8 @@ type EventLogParams struct {

#### topic calculation

-Go language can use 'github.com / ethereum / go-The ethereum / common 'package calculates topics based on the events defined in the contract:
-Note that uint and int are aliases of uint256 and int256. Even if the types used in the contract are uint and int, you still need to use uint256 and int256 in the calculation of the topic.
+You can use the 'github.com/ethereum/go-ethereum/common' package to calculate topics based on events defined in contracts
+Note that uint and int are aliases of uint256 and int256. 
Even if the types used in the contract are uint and int, you still need to use uint256 and int256 in the calculation of the topic ```go topic = common.BytesToHash(crypto.Keccak256([]byte("testEventLog(address,string,uint256)"))).Hex() @@ -126,7 +126,7 @@ contract eventDemo { The contract provides' setNumber 'and' getNumber 'methods, where calling the former will get a thrown event。 -There are two ways to subscribe to events in the Fisco Go SDK: directly using 'client' to subscribe to events and calling functions in the code generated by the Abigen tool to subscribe.。 +There are two ways to subscribe to events in the Fisco Go SDK: directly using 'client' to subscribe to events and calling functions in the code generated by the Abigen tool to subscribe。 ### 4.1 Event subscription using client @@ -134,7 +134,7 @@ There are two ways to subscribe to events in the Fisco Go SDK: directly using 'c [subscriber.go] is given in the Go SDK(https://github.com/FISCO-BCOS/go-sdk/blob/master/examples/eventLog/sub/subscriber.go)Contract Event Subscription Sample -*Note that the topic and address cannot be emptied, and the corresponding event cannot be received if emptied.。* +*Note that the topic and address cannot be emptied, and the corresponding event cannot be received if emptied。* ```go func main() { @@ -150,7 +150,7 @@ func main() { eventLogParams.FromBlock = 1 eventLogParams.ToBlock = -1 var topics = make([]string, 1) - topics[0] = common.BytesToHash(crypto.Keccak256([]byte("setNum(address,uint256,uint256)"))).Hex() / / The topic of the event. Use uint256 and int256 instead of uint and int. + topics[0] = common.BytesToHash(crypto.Keccak256([]byte("setNum(address,uint256,uint256)"))).Hex() / / The topic of the event. 
Use uint256 and int256 instead of uint and int
 eventLogParams.Topics = topics
 var addresses = make([]string, 1)
 addresses[0] = "0xd2cf82e18f3d2c5cae0de87d29994be622f3fdd3" / / The contract address corresponding to the subscribed event
@@ -169,9 +169,9 @@ func main() {

#### Log data parsing

-In the client subscription method provided by the SDK, the obtained data needs to be parsed by 'abi'.
+In the client subscription method provided by the SDK, the obtained data needs to be parsed by 'abi'

-First, you need to build a structure for parsing based on the type of variables included in the event.
+First, you need to build a structure for parsing based on the types of the variables included in the event

```go
type setNum struct {
@@ -203,7 +203,7 @@ This can then be parsed using the 'abi' tool:

```

-Here, the data used by 'abi.Json' needs to be obtained after using the 'abigen' tool to generate the corresponding go file.:
+Here, the data used by 'abi.Json' needs to be obtained after using the 'abigen' tool to generate the corresponding go file:

```go
// EventDemoABI is the input ABI used to generate the binding from.
diff --git a/3.x/en/docs/sdk/go_sdk/index.rst b/3.x/en/docs/sdk/go_sdk/index.rst
new file mode 100644
index 000000000..91e78392e
--- /dev/null
+++ b/3.x/en/docs/sdk/go_sdk/index.rst
@@ -0,0 +1,31 @@
+##############################################################
+4. Go SDK
+##############################################################
+
+Tags: "go-sdk" "Go SDK"
+
+----
+
+`Go SDK`_ provides Go APIs for accessing `FISCO BCOS`_ nodes, supporting node status query, contract deployment, and contract invocation. Based on the Go SDK, you can quickly develop blockchain applications. Currently, FISCO BCOS v2.2.0+ and v3.3.0+ are supported
+
+.. admonition:: **Main characteristics**
+
+   - Provides Go APIs for calling the FISCO BCOS `JSON-RPC <../../develop/api.html>`_ interface
+   - Provides contract compilation: compiles Solidity contract files into abi and bin files, and then converts them into Go contract files
+   - Provides Go APIs for deploying and calling Go contract files
+   - Provides Go APIs for calling precompiled contracts, which are fully supported in v2 and partially supported in v3
+   - Supports establishing TLS and national cryptography (SM) TLS connections with nodes
+   - Provides a CLI (Command-Line Interface) tool for users to interact with the blockchain conveniently and quickly from the command line
+
+For environment installation and configuration, and application development using the Go SDK, see the `GitHub`_ link
+
+.. toctree::
+   :hidden:
+   :maxdepth: 3
+
+   env_conf.md
+   api.md
+   console.md
+   contractExamples.md
+   amopExamples.md
+   event_sub.md
\ No newline at end of file
diff --git a/3.x/en/docs/sdk/index.md b/3.x/en/docs/sdk/index.md
index b6569ef99..a917c55c2 100644
--- a/3.x/en/docs/sdk/index.md
+++ b/3.x/en/docs/sdk/index.md
@@ -6,15 +6,15 @@ Tag: "SDK"

```eval_rst
.. important::
- Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
+ For release notes on related software and environments, please check `here <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
```

-FISCO BCOS 3.x version of the multilingual SDK is designed with**Hierarchical architecture**implementation, from bottom to top, into the generic base component layer, CPP- SDK layer, C-SDK layer, multi-language, multi-terminal access layer。The core functionality is determined by the underlying CPP-SDK implementation, the upper layer of multi-language simple adaptation access, this way can quickly adapt to access multi-language SDK。
+The multilingual SDKs of FISCO BCOS 3.x are designed with a **hierarchical architecture**: from bottom to top, the common base component layer, the CPP-SDK layer, the C-SDK layer, and the multi-language, multi-terminal access layer. The core function is implemented by the underlying CPP-SDK, and the upper layers are thin adaptations, so SDKs for new languages can be adapted quickly.

- **Common Foundation Components**Encapsulating encryption algorithms, communication protocols, network protocols, encryption machine protocols;
-- **CPP-SDK layer**: Based on the common basic components, realize the network management, group management, AMOP communication, event mechanism, ledger and RPC interface related to blockchain connection, using C.++Way to Encapsulate CPP-SDK;
-- **C-SDK layer**: Based on CPP-The SDK wraps another layer of C interface call mode C-SDK;
-- **Multi-language, multi-terminal access layer**: through c-The SDK interface can be quickly adapted to Java, golang, nodejs, python, rust, iOS, Android and other multi-language SDK, and compatible with Windows, Linux, macOS, KyLin multi-operating system and X86, ARM (including M1) and other platforms。
+- **CPP-SDK layer**: based on the common basic components, implements network management, group management, AMOP communication, the event mechanism, the ledger, and the RPC interfaces related to the blockchain connection; the CPP-SDK is implemented in C++;
+- **C-SDK layer**: wraps the CPP-SDK with a layer of C interface calls;
+- **Multi-language, multi-terminal access layer**: through the C-SDK interface, Java, golang, nodejs, python, rust, iOS, Android and other multi-language SDKs can be adapted quickly, compatible with Windows, Linux, macOS, KyLin and other operating systems and X86, ARM (including M1) and other platforms.

The layered architecture diagram of the SDK is as follows:
diff --git a/3.x/en/docs/sdk/java_sdk/amop.md b/3.x/en/docs/sdk/java_sdk/amop.md
index 8bf19fd8f..1c38509e6 100644
--- a/3.x/en/docs/sdk/java_sdk/amop.md
+++ b/3.x/en/docs/sdk/java_sdk/amop.md
@@ -1,15 +1,15 @@
# AMOP function

-Tag: "java-sdk "" AMOP "" On-Chain Messenger Protocol "
+Tags: "java-sdk" "AMOP" "on-chain messenger protocol"

----

-The Java SDK supports the Advanced Messages Onchain Protocol (AMOP). Users can use the AMOP protocol to exchange messages with other organizations.。
+The Java SDK supports the Advanced Messages Onchain Protocol (AMOP). Users can use the AMOP protocol to exchange messages with other organizations.

## 1. Interface description

-AMOP enables any subscriber who subscribes to a topic to receive push messages related to that topic. 
+AMOP enables any subscriber who subscribes to a topic to receive push messages related to that topic -The interface class of AMOP module can refer to the file java.-"sdk" in the sdk-The file amop / src / main / org / fisco / bcos / sdk / amop / Amop.java "contains the following interfaces: +For more information about the interface classes of AMOP modules, see the "sdk-amop / src / main / org / fisco / bcos / sdk / amop / Amop.java" file in the java-sdk file, which contains the following interfaces: ### 1.1 subscribeTopic @@ -18,7 +18,7 @@ Subscribe to a topic **Parameters:** * topic: Subscribe to Topic Name。Type: "String"。 -* callback: The function that processes the topic message, which is called when a message related to the topic is received.。Type: "AmopRequestCallback"。 +* callback: The function that processes the topic message, which is called when a message related to the topic is received。Type: "AmopRequestCallback"。 **Example:** @@ -33,7 +33,7 @@ amop.start(); AmopRequestCallback cb = new AmopRequestCallback() { @Override public void onRequest(String endpoint, String seq, byte[] data) { - / / You can write the processing logic after receiving the message here.。 + / / You can write the processing logic after receiving the message here。 System.out.println("Received msg, content:" + new String(data)); } }; @@ -55,7 +55,7 @@ Send AMOP messages as unicast **注意:** -For a unicast AMOP message, if there are multiple clients subscribing to the topic, a random one can receive the unicast message.。 +For a unicast AMOP message, if there are multiple clients subscribing to the topic, a random one can receive the unicast message。 **Example:** @@ -68,7 +68,7 @@ amop.start(); AmopResponseCallback cb = new AmopResponseCallback() { @Override public void onResponse(Response response) { - / / You can write the processing logic of the received reply here.。 + / / You can write the processing logic of the received reply here。 System.out.println( "Get response, { errorCode:" + 
response.getErrorCode() @@ -131,7 +131,7 @@ Reply Message。 **Parameters:** -* endpoint: The peer endpoint that receives the message. It is returned in the 'AmopRequestCallback' callback.。Type: "String" +* endpoint: The peer endpoint that receives the message. It is returned in the 'AmopRequestCallback' callback。Type: "String" * seq: Message seq, returned in the 'AmopRequestCallback' callback。Type: "String" * content: Reply message content。Type: "byte []" @@ -159,7 +159,7 @@ amop.subscribeTopic("MyTopic", cb); ### 1.6 setCallback -Set the default callback. When the callback specified by the subscription topic is empty, the default callback API is called when a message is received. +Set the default callback. When the callback specified by the subscription topic is empty, the default callback API is called when a message is received **Parameters:** @@ -167,7 +167,7 @@ Set the default callback. When the callback specified by the subscription topic ## 2. Example -For more examples, see Java.-sdk-demo](https://github.com/FISCO-BCOS/java-sdk-demo)Project source code "java-sdk-demo / src / main / java / org / fisco / bcos / sdk / demo / amop / ". 
Link: [java-sdk-demo GitHub Link](https://github.com/FISCO-BCOS/java-sdk-demo),[java-sdk-demo Gitee Link](https://gitee.com/FISCO-BCOS/java-sdk-demo)。
+For more examples, see the code under "java-sdk-demo/src/main/java/org/fisco/bcos/sdk/demo/amop/" in the [java-sdk-demo](https://github.com/FISCO-BCOS/java-sdk-demo) project source. Links: [java-sdk-demo GitHub link](https://github.com/FISCO-BCOS/java-sdk-demo), [java-sdk-demo Gitee link](https://gitee.com/FISCO-BCOS/java-sdk-demo).

* Example:

@@ -209,7 +209,7 @@ For more examples, see Java.-sdk-demo](https://github.com/FISCO-BCOS/java-sdk-de

```shell
mkdir -p ~/fisco && cd ~/fisco
-# Get Java-sdk code
+# Get the java-sdk-demo code
git clone https://github.com/FISCO-BCOS/java-sdk-demo

# If the pull fails for a long time due to network problems, try the following command:
@@ -226,19 +226,19 @@ According to [guidelines](../../../quick_start/air_installation.md)Building the

### Step 3: Configure

-* Copy the certificate: set up your FISCO BCOS network node "nodes / ${ip}Copy the certificate in the / sdk / "directory to" java-sdk-demo / dist / conf "directory。
+* Copy the certificate: copy the certificates from the "nodes/${ip}/sdk/" directory of your FISCO BCOS network nodes to the "java-sdk-demo/dist/conf" directory.

-* Modify the configuration: 'cp config-example.toml config.toml`
+* Modify the configuration: 'cp config-example.toml config.toml'

### Step 4: Run Demo

#### Public topic Demo

-Open a new terminal and download Java-sdk-demo code and build。
+Open a new terminal, download the java-sdk-demo code, and build.

```shell
cd ~/fisco
-# Get Java-sdk-demo code
+# Get the java-sdk-demo code
git clone https://github.com/FISCO-BCOS/java-sdk-demo

# If the pull fails for a long time due to network problems, try the following command:
@@ -253,7 +253,7 @@ bash gradlew build

**Run Subscribers:**

```shell
-# Enter Java-sdk-demo / dist directory
+# Enter the java-sdk-demo/dist directory
cd dist
# We 
subscribe to a topic called "testTopic"
java -cp "apps/*:lib/*:conf/" org.fisco.bcos.sdk.demo.amop.Subscribe testTopic
@@ -329,6 +329,6 @@ At the same time, return to the topic subscriber's terminal and find the termina

Note:

-1. The broadcast message is not returned.。
+1. The broadcast message is not returned.
2. The receiver may receive multiple repeated broadcast messages。
\ No newline at end of file
diff --git a/3.x/en/docs/sdk/java_sdk/assemble_service.md b/3.x/en/docs/sdk/java_sdk/assemble_service.md
index b1e5df156..79d4554b0 100644
--- a/3.x/en/docs/sdk/java_sdk/assemble_service.md
+++ b/3.x/en/docs/sdk/java_sdk/assemble_service.md
@@ -1,17 +1,17 @@
# (New)Construct new version transaction

-Tag: "java-sdk "" Send Transaction "" Send Transaction Using Interface Signature "" Assembly Transaction "" Contract Call "" v1 ""
+Tags: "java-sdk" "send transaction" "send transaction using interface signature" "assembly transaction" "contract invocation" "v1"

----

```eval_rst
.. important::
- FISCO BCOS supports V1 transactions after version 3.6.0 and V2 transactions after version 3.7.0. Please confirm the node version sent before using it.。Please refer to: 'v3.6.0 <.. / introduction / change _ log / 3 _ 6 _ 0.html >' for version 3.6.0 features
+ FISCO BCOS supports V1 transactions since version 3.6.0 and V2 transactions since version 3.7.0. Please confirm the node version before use. For the 3.6.0 features, please refer to `v3.6.0 <../introduction/change_log/3_6_0.html>`_
```

```eval_rst
.. note::
- The data structure and assembly method of the transaction can refer to 'here <. / transaction _ data _ struct.html >' _
+ The data structure of the transaction and the way it is assembled can be found `here <./transaction_data_struct.html>`_
```

FISCO BCOS supports V1 transactions after version 3.6.0 and V2 transactions after version 3.7.0. 
The following five fields are added: @@ -19,22 +19,22 @@ FISCO BCOS supports V1 transactions after version 3.6.0 and V2 transactions afte ```c++ string value; / / v1 New transaction field, original transfer amount string gasPrice; / / The new field in the v1 transaction. The unit price of gas during execution(gas/wei) -long gasLimit; / / The upper limit of the gas used when the transaction is executed. +long gasLimit; / / The upper limit of the gas used when the transaction is executed string maxFeePerGas; / / v1 new transaction field, EIP1559 reserved field string maxPriorityFeePerGas; / / v1 new transaction field, EIP1559 reserved field vector extension; / / v2 new fields for additional storage ``` -In order to meet the requirements of adding transaction fields in the future, the Java SDK supports a new transaction service that can support flexible assembly, which is convenient for users and developers to use flexibly.。 +In order to meet the requirements of adding transaction fields in the future, the Java SDK supports a new transaction service that can support flexible assembly, which is convenient for users and developers to use flexibly。 ## 1. 
TransactionManager -Inspired by Web3J, it abstracts the interface for sending transactions / invocation requests, and provides the injection interface of GasProvider and NonceAndBlockLimitProvider for users to customize transactions.。The data passed in the TransactionManager is an ABI-encoded byte array.。 +Inspired by Web3J, it abstracts the interface for sending transactions / invocation requests, and provides the injection interface of GasProvider and NonceAndBlockLimitProvider for users to customize transactions。The data passed in the TransactionManager is an ABI-encoded byte array。 TransactionManager is an abstract class with the following implementation: -- 'DefaultTransactionManager ': The default TransactionManager, which uses the key generated at client initialization when signing transactions。 -- 'ProxySignTransactionManager ': A TransactionManager with an external signature. Users can implement the AsyncTransactionSignercInterface interface by themselves and set it into the ProxySignTransactionManager object.。 +- 'DefaultTransactionManager': The default TransactionManager, which uses the key generated during Client initialization when signing transactions。 +- 'ProxySignTransactionManager': A TransactionManager with an external signature. Users can implement the 'AsyncTransactionSignercInterface' interface by themselves and set it into the ProxySignTransactionManager object。 ### 1.1 Interface List @@ -59,16 +59,16 @@ public abstract void asyncSendCall(String to, byte[] data, RespCallback ca ### 1.2 DefaultTransactionManager -- DefaultTransactionManager is the default TransactionManager, which uses the key generated at Client initialization when signing transactions。 -- Use the default ContractGasProvider. By default, the returned gaslimit is 9000000 and the gas price is 4100000000. -- Use the default NonceAndBlockLimitProvider. The default returned block limit is the value returned by the client interface getBlockLimit. 
The default returned nonce is the UUID.。 +-DefaultTransactionManager is the default TransactionManager, which uses the key generated at Client initialization when signing transactions。 +- Use the default ContractGasProvider. By default, the returned gaslimit is 9000000 and the gas price is 4100000000 +- Use the default NonceAndBlockLimitProvider. The default returned block limit is the value returned by the client interface getBlockLimit. The default returned nonce is the UUID。 ### 1.3 ProxySignTransactionManager - The external signature of the TransactionManager, users can implement their own 'AsyncTransactionSignercInterface' interface, set into the ProxySignTransactionManager object, in the signature are signed using the implemented AsyncTransactionSignercInterface object。 -- Use the default ContractGasProvider. By default, the returned gaslimit is 9000000 and the gas price is 4100000000. -- Use the default NonceAndBlockLimitProvider. The default returned block limit is the value returned by the client interface getBlockLimit. The default returned nonce is the UUID.。 -- Use the default AsyncTransactionSignercInterface: TransactionJniSignerService, which still uses the key generated at client initialization by default。 +- Use the default ContractGasProvider. By default, the returned gaslimit is 9000000 and the gas price is 4100000000 +- Use the default NonceAndBlockLimitProvider. The default returned block limit is the value returned by the client interface getBlockLimit. The default returned nonce is the UUID。 +- Use the default AsyncTransactionSignercInterface: TransactionJniSignerService. 
By default, the key generated when the client is initialized is still used。 Users can call the 'setAsyncTransactionSigner' interface to replace their own objects that implement the AsyncTransactionSignercInterface interface。 @@ -95,7 +95,7 @@ proxySignTransactionManager.setAsyncTransactionSigner((hash, transactionSignCall / / Codec contract parameters byte[] abiEncoded = contractCodec.encodeMethod(abi, method, params); -/ / Construct the AbiEncodedRequest in a chained manner. Pass in important parameters such as contractAddress, nonce, and blockLimit. Finally, use buildAbiEncodedRequest to complete the construction.。 +/ / Construct the AbiEncodedRequest in a chained manner. Pass in important parameters such as contractAddress, nonce, and blockLimit. Finally, use buildAbiEncodedRequest to complete the construction。 AbiEncodedRequest request = new TransactionRequestBuilder() .setTo(contractAddress) @@ -112,9 +112,9 @@ TransactionReceipt receipt = proxySignTransactionManager.sendTransaction(request ## 2. AssembleTransactionService -AssembleTransactionService integrates TransactionManager, ContractCodec, and TransactionDecoderService, and the user only needs to pass in the parameters of the calling contract, and the returned result contains the parsed contract return value.。 +AssembleTransactionService integrates TransactionManager, ContractCodec, and TransactionDecoderService, and the user only needs to pass in the parameters of the calling contract, and the returned result contains the parsed contract return value。 -AssembleTransactionService can switch the dependent TransactionManager. The default value is DefaultTransactionManager and the default value is ProxySignTransactionManager.。 +AssembleTransactionService can switch the dependent TransactionManager. 
The default is DefaultTransactionManager; it can be switched to ProxySignTransactionManager.

### 2.1 Interface List

@@ -169,7 +169,7 @@ TransactionResponse transactionResponse = transactionService.sendTransaction(req
 / / Parameters of type String can also be constructed
 List params = new ArrayList<>();
 params.add("[[\"0xabcd\"],[\"0x1234\"]]");
-/ / Construct a request to call the setBytesArrayArray API. The parameter is a two-dimensional array of bytes. Pass in important parameters such as contractAddress, nonce, and blockLimit. Finally, use buildStringParamsRequest to end the construction.。
+// Construct a request to call the setBytesArrayArray API. The parameter is a two-dimensional byte array. Pass in important parameters such as contractAddress, nonce, and blockLimit, and finally use buildStringParamsRequest to finish the construction.
 TransactionRequestWithStringParams requestWithStringParams =
 new TransactionRequestBuilder(abi, "setBytesArrayArray", contractAddress)
 .setNonce(nonce)
@@ -181,11 +181,11 @@ TransactionRequestWithStringParams requestWithStringParams =
 TransactionResponse transactionResponse = transactionService.sendTransaction(requestWithStringParams);
```

-## 3. Solidity generates Java files using the new interface.
+## 3. Solidity generates Java files using the new interface

The detailed documentation of the Java interface file for generating smart contracts can be seen: [link](./contracts_to_java.html)

-In the console after 3.6.0, the contract2java.sh script adds'-t 'option, when the value is 1, the Java file using the new interface is generated, using the same posture as before。For example:
+In the console after 3.6.0, the '-t' option is added to the contract2java.sh script. 
When the value is 1, a Java file using the new interface is generated。For example:

```shell
bash contract2java.sh solidity -t 1 -n -s ./contracts/solidity/Incremental.sol
```

@@ -193,8 +193,8 @@ In the console after 3.6.0, the contract2java.sh script adds'-t 'option, when th

Existing Java file transformation methods:

-In Java-sdk-Take package org.fisco.bcos.sdk.demo.perf.PerformanceOk in demo as an example:
-After the contract object is constructed by deploy, the TransactionManager is set up, and then the request is sent according to the new transaction interface.。
+Take package org.fisco.bcos.sdk.demo.perf.PerformanceOk in Java-sdk-demo as an example:
+After the contract object is constructed by deploy, the TransactionManager is set up, and then the request is sent through the new transaction interface。

```java
// build the client
```

diff --git a/3.x/en/docs/sdk/java_sdk/assemble_transaction.md b/3.x/en/docs/sdk/java_sdk/assemble_transaction.md
index fd8713569..a23f6bd9f 100644
--- a/3.x/en/docs/sdk/java_sdk/assemble_transaction.md
+++ b/3.x/en/docs/sdk/java_sdk/assemble_transaction.md
@@ -1,30 +1,30 @@
# Constructing transactions and sending

-Tag: "java-sdk "" send transaction "" send transaction using interface signature "" assemble transaction "" contract invocation "
+Tags: "java-sdk" "Send transaction" "Send transaction using interface signature" "Assemble transaction" "Contract invocation"

----

```eval_rst
.. note::
-    The Java SDK also supports calling the corresponding 'java' method to deploy and invoke contracts after converting 'solidity' into 'java' files, and also supports deploying and invoking contracts in the way of constructing transactions. Here, we mainly show the construction and sending of transactions. For the use of the former, please refer to 'here <./contracts_to_java.html>'_
+    The Java SDK supports converting 'solidity' into 'java' files and calling the corresponding 'java' methods to deploy and invoke contracts, and also supports deploying and invoking contracts by constructing transactions. Here, we mainly show how to construct and send transactions. For the former, please refer to 'here <./contracts_to_java.html>'_
```

```eval_rst
.. note::
-    The data structure of the transaction can refer to 'here <./transaction_data_struct.html>'_
+    The data structure of the transaction can be found 'here <./transaction_data_struct.html>'_
```

## 1. Concept analysis: contract deployment and invocation

-Contract operations can be divided into two categories: contract deployment and contract invocation.。Among them, contract calls can be distinguished as "transactions" and "queries."。
+Contract operations can be divided into two categories: contract deployment and contract invocation。Among them, contract calls can be distinguished as "transactions" and "queries"。

-**Contract Deployment**Refers to the creation and release of a new contract.。The incoming data from the transaction creation is converted to EVM bytecode and executed, and the output of the execution is permanently stored as contract code。
+**Contract Deployment** refers to the creation and release of a new contract。The incoming data from the transaction creation is converted to EVM bytecode and executed, and the output of the execution is permanently stored as contract code。

**Contract invocation** is a call to a function of a deployed contract。Contract calls can be distinguished as "transactions" and "queries"。

**Query**: A method modified by the view / pure modifier is generally called a "query"。A "query" does not need to be synchronized to other nodes for network-wide consensus。

-**"Deal"**: Only those that are not modified are called "transactions."。, and the "transaction" needs to be sent to the entire network for the consensus of the chain.。
+**Transaction**: Methods without these modifiers are called "transactions", and a "transaction" needs to be sent to the entire network for on-chain consensus。

Here is the difference between "transaction" and "query" in more detail。

@@ -40,7 +40,7 @@ Here is the difference between "transaction" and "query" in more detail。

## 2. Premise: Prepare abi and binary files for the contract

-The console provides a specialized tool for compiling contracts that allows developers to integrate Solidity / webankblockchain-liquid (hereinafter referred to as WBC-Liquid) contract file compilation to generate Java files and abi, binary files, the specific use of [reference here](./contracts_to_java.html)。
+The console provides a dedicated contract compilation tool that allows developers to compile Solidity / webankblockchain-liquid (hereinafter referred to as WBC-Liquid) contract files to generate Java files, abi, and binary files. For specific usage, [see here](./contracts_to_java.html)。

By running the contract2java script, the generated abi and binary files are located in the contracts/sdk/abi and contracts/sdk/bin directories respectively (the files generated by the state-secret (SM) compilation are located in the contracts/sdk/abi/sm and contracts/sdk/bin/sm folders respectively)。You can copy the files to the project directory, such as src/main/resources/abi and src/main/resources/bin。

@@ -91,27 +91,27 @@ Initialize the SDK based on the configuration file, such as:

```java
// Initialize the BcosSDK object
BcosSDK sdk = BcosSDK.build(configFile);
-// Obtain the client object. The group name is group0.
+// Obtain the client object. 
The group name is group0
Client client = sdk.getClient("group0");
-// To construct an AssembleTransactionProcessor object, you must pass in the client object, the CryptoKeyPair object, and the path where the abi and binary files are stored.。The abi and binary files need to be copied to the defined folder in the previous step。
+// To construct an AssembleTransactionProcessor object, you must pass in the client object, the CryptoKeyPair object, and the path where the abi and binary files are stored。The abi and binary files need to be copied to the folder defined in the previous step。
CryptoKeyPair keyPair = client.getCryptoSuite().getCryptoKeyPair();
```

## 4. Initialize the AssembleTransactionProcessor object

-Java SDK provides a way to directly deploy and invoke contracts based on abi and binary files。This scenario applies to the default situation, by creating and using the 'AssembleTransactionProcessor' object to complete contract-related deployment, invocation, and query operations.。
+The Java SDK provides a way to directly deploy and invoke contracts based on abi and binary files。In the default scenario, create and use the 'AssembleTransactionProcessor' object to complete contract-related deployment, invocation, and query operations。

```java
AssembleTransactionProcessor transactionProcessor = TransactionProcessorFactory.createAssembleTransactionProcessor(client, keyPair, "src/main/resources/abi/", "src/main/resources/bin/");
```

-**In particular:** If you only trade and query without deploying the contract, you do not need to copy the binary file and do not need to pass in the path of the binary file during construction, for example, the last parameter of the construction method can be passed in an empty string.。
+**In particular:** If you only send transactions and queries without deploying the contract, you do not need to copy the binary file or pass in the binary file path during construction; for example, the last 
parameter of the construction method can be passed in an empty string。

```java
AssembleTransactionProcessor transactionProcessor = TransactionProcessorFactory.createAssembleTransactionProcessor(client, keyPair, "src/main/resources/abi/", "");
```

-**In particular:** You can also not pass in any ABI file directory, and you can manually pass in the abi string for subsequent operations.。
+**In particular:** You can also skip passing in an ABI file directory and manually pass in the abi string for subsequent operations。

```java
AssembleTransactionProcessor transactionProcessor = TransactionProcessorFactory.createAssembleTransactionProcessor(client, keyPair, "", "");
```

@@ -119,10 +119,10 @@ AssembleTransactionProcessor transactionProcessor = TransactionProcessorFactory.

## 5. Deploy contracts synchronously

-The deployment contract calls the deployByContractLoader method, passes in the contract name and constructor parameters, links the deployment contract, and obtains the result of the 'TransactionResponse'.。
+To deploy a contract, call the deployByContractLoader method and pass in the contract name and constructor parameters; the contract is deployed on the chain and the 'TransactionResponse' result is obtained。

```java
-// Deploy the HelloWorld contract。The first parameter is the contract name and the second parameter is the list of contract constructors, which is of type List < Object >。
+// Deploy the HelloWorld contract。The first parameter is the contract name, and the second parameter is the list of contract constructor arguments, which is of type List<Object>。
TransactionResponse response = transactionProcessor.deployByContractLoader("HelloWorld", new ArrayList<>());
// You can also manually pass in bin and abi
TransactionResponse response = transactionProcessor.deployAndGetResponse(abi, bin, new ArrayList<>(), null);
```

@@ -134,14 +134,14 @@ The data structure of 'TransactionResponse' is as follows:

- returnMessages: Error message returned。
- TransactionReceipt: transaction 
receipt returned on the chain。
- ContractAddress: Address of the contract deployed or invoked。
-- events: If there is a trigger log record, the parsed log return value is returned, and a string in JSON format is returned.。
+- events: If log records are triggered, the parsed log return values are returned as a JSON-formatted string。
- returnObject: Return value in Java types。
- returnABIObject: Return value in ABI types。
- receiptMessages: Returns the parsed transaction receipt information。

Summary of the corresponding tables of 'returnCode' and 'returnMessages': [see here](./retcode_retmsg.md)

-## 6. Send transactions synchronously.
+## 6. Send transactions synchronously

Calling a contract transaction uses 'sendTransactionAndGetResponseByContractLoader'. Here's how to call the 'set' function in 'HelloWorld'。

```java
// Create the parameters for calling the transaction function. Here, one parameter is passed in
List params = new ArrayList<>();
params.add("test");
-// Call the HelloWorld contract. The contract address is helloWorldAddress, the function name is set, and the function parameter type is params.
+// Call the HelloWorld contract. The contract address is helloWorldAddress, the function name is set, and the function parameters are params
TransactionResponse transactionResponse = transactionProcessor.sendTransactionAndGetResponseByContractLoader("HelloWorld", helloWorldAddrss, "set", params);
// You can also manually pass in the ABI file to call
TransactionResponse transactionResponse = transactionProcessor.sendTransactionAndGetResponse(helloWroldAddress, abi, "set", params);
```

-## 7. 
Call the contract query interface.
+## 7. Call the contract query interface

-Query contracts can return results directly by calling the node query function on the chain without consensus.;Therefore, all inquiry transactions are communicated in a synchronous manner.。Querying a contract uses the 'sendCallByContractLoader' function to query the contract. This section shows how to call the 'name' function in 'HelloWorld' to query the contract.。
+Query calls return results directly from the node's on-chain query function without consensus;therefore, all queries are communicated synchronously。Use the 'sendCallByContractLoader' function to query a contract. This section shows how to call the 'name' function in 'HelloWorld' to query the contract。

```java
-// Query the name function of the HelloWorld contract. The contract address is helloWorldAddress and the parameter is empty.
+// Query the name function of the HelloWorld contract. The contract address is helloWorldAddress and the parameter list is empty
CallResponse callResponse = transactionProcessor.sendCallByContractLoader("HelloWorld", helloWorldAddrss, "name", new ArrayList<>());
// You can also manually pass in the ABI file to call
CallResponse callResponse = transactionProcessor.sendCall("", helloWorldAddrss, "name", new ArrayList<>());
```

-## 8. 
Sending transactions by signing the contract method.
+## 8. Sending transactions by signing the contract method

-In addition, for special scenarios, DIY assembly transactions and sending transactions can be signed through the interface.。
+In addition, for special scenarios, you can assemble transactions yourself and sign them through the interface signature before sending。

-For example, the signature of the set method defined by the above 'HelloWorld' smart contract is' set '.(string)`
+For example, the signature of the set method defined by the above 'HelloWorld' smart contract is 'set(string)'

```java
-// Use WBC-_ isWasm is true for Liquid contracts and false for Solidity contracts
+// _isWasm is true when using a WBC-Liquid contract and false when using a Solidity contract
ContractCodec contractCodec = new ContractCodec(client.getCryptoSuite(), _isWasm);
String setMethodSignature = "set(string)";
byte[] txData = contractCodec.encodeMethodByInterface(setMethodSignature, new Object[]{new String("Hello World")});
```

-Since there is no need to provide abi by constructing the interface signature, you can construct a 'TransactionProcessor' to operate on it.。You can also use 'TransactionProcessorFactory' to construct。
+Since constructing by interface signature does not require the abi, you can construct a 'TransactionProcessor' to operate。You can also use 'TransactionProcessorFactory' to construct it。

```java
TransactionProcessor transactionProcessor = TransactionProcessorFactory.createTransactionProcessor(client, keyPair);
```

@@ -190,26 +190,26 @@ TransactionProcessor transactionProcessor = TransactionProcessorFactory.createTr

Send the transaction to the FISCO BCOS node and receive the receipt。

```java
-// If using WBC-Liquid, the third parameter should use TransactionAttribute.LIQUID_SCALE_CODEC
+// If WBC-Liquid is used, the third parameter should use TransactionAttribute.LIQUID_SCALE_CODEC
TransactionReceipt transactionReceipt = transactionProcessor.sendTransactionAndGetReceipt(contractAddress, txData, TransactionAttribute.EVM_ABI_CODEC);
```

-You need to manually parse the result information in the transaction receipt after successful execution.。For more detailed usage, please refer to: [Transaction Receipt Resolution](../transaction_decode.md)
+You need to manually parse the result information in the transaction receipt after successful execution。For more detailed usage, please refer to: [Transaction Receipt Resolution](../transaction_decode.md)

```java
TransactionDecoderService txDecoder = new TransactionDecoderService(client.getCryptoSuite(), client.isWASM());
TransactionResponse transactionResponse = txDecoder.decodeReceiptWithValues(abi, "set", transactionReceipt);
```

-## 9. Operate the contract asynchronously by callback.
+## 9. Operate the contract asynchronously by callback

### 9.1 Define the callback class

When sending transactions asynchronously, you can customize the callback class and implement or override the callback handlers。

-The custom callback class needs to inherit the abstract class' TransactionCallback 'and implement the' onResponse 'method.。At the same time, you can decide on demand whether you need to override methods such as' onError 'and' onTimeout '.。
+The custom callback class needs to inherit the abstract class 'TransactionCallback' and implement the 'onResponse' method。You can decide on demand whether to override methods such as 'onError' and 'onTimeout'。

-For example, we define a simple callback class。The callback class implements a reentrant lock-based asynchronous call effect that reduces the thread's 
synchronous wait time。

```java
public class TransactionCallbackMock extends TransactionCallback {

@@ -275,12 +275,12 @@ transactionProcessor.sendTransactionAsync(to, abi, "set", params, callbackMock);

TransactionReceipt transactionReceipt = callbackMock.getResult();
```

-## 10. Operate the contract asynchronously using CompletableFuture.
+## 10. Operate the contract asynchronously using CompletableFuture

The SDK also supports asynchronous contract deployment using the 'CompletableFuture' encapsulation。

```java
-// Deploy the transaction asynchronously and obtain the CompletableFuture < TransactionReceipt > object
+// Deploy the contract asynchronously and get a CompletableFuture<TransactionReceipt> object
CompletableFuture future = transactionProcessor.deployAsync(abi, bin, new ArrayList<>(), "");
// Define the business logic for a normal return
future.thenAccept(

@@ -297,7 +297,7 @@ future.exceptionally(

## 11. Detailed API function introduction

-'AssembleTransactionProcessor 'supports sending transactions with custom parameters, sending transactions asynchronously, and returning results in multiple encapsulation methods.。
+'AssembleTransactionProcessor' supports sending transactions with custom parameters, sending transactions asynchronously, and returning results in multiple encapsulation forms。

Reference Java doc: [AssembleTransactionProcessor](./javadoc/javadoc/org/fisco/bcos/sdk/v3/transaction/manager/AssembleTransactionProcessor.html)

@@ -305,7 +305,7 @@ The detailed API functions are as follows。

- **public void deployOnly(String abi, String bin, List\ params):** Pass in the contract abi, bin, and constructor parameters to deploy the contract without receiving the receipt result。
- **public TransactionResponse deployAndGetResponse(String abi, String bin, List\ params):** Pass in the contract abi, bin, and constructor parameters to deploy the contract and receive the receipt result
-- **TransactionResponse deployAndGetResponseWithStringParams(String abi, String bin, List\ params):** The 
list of contract abi and String is passed in as constructor parameters to deploy the contract and receive the TransactionResponse result.。
+- **TransactionResponse deployAndGetResponseWithStringParams(String abi, String bin, List\ params):** Pass in the contract abi, bin, and a list of String constructor parameters to deploy the contract and receive the TransactionResponse result。
- **void deployAsync(String abi, String bin, List\ params, TransactionCallback callback):** Pass in the contract abi, bin, constructor parameters, and a callback to deploy the contract asynchronously
- **CompletableFuture\ deployAsync(String abi, String bin, List\ params):** Pass in the contract abi, bin, and constructor parameters to deploy the contract and receive the receipt result encapsulated by CompletableFuture
- **TransactionResponse deployByContractLoader(String contractName, List\ params):** Pass in the contract name and the constructor parameters to deploy the contract and receive the TransactionResponse result。

diff --git a/3.x/en/docs/sdk/java_sdk/config.md b/3.x/en/docs/sdk/java_sdk/config.md
index 5096e2c24..4da6bd7d7 100644
--- a/3.x/en/docs/sdk/java_sdk/config.md
+++ b/3.x/en/docs/sdk/java_sdk/config.md
@@ -1,6 +1,6 @@
# Configuration Description

-Tag: "java-sdk "" 'Configuration "
+Tags: "java-sdk" "Configuration"

----

@@ -30,12 +30,12 @@ Examples of configuration files in 'properties', 'yml' and 'xml' formats and how

2. Copy the certificate from the node's 'nodes/${ip}/sdk/' directory to the new 'conf' directory。

-3. Put the configuration file config.-example.toml, stored in the home directory of the application。
+3. 
Store the configuration file config-example.toml in the application's home directory。

- * config-example.toml can be found in java-sdk [GitHub link](https://github.com/FISCO-BCOS/java-sdk/blob/master/src/test/resources/config-example.toml)or [Gitee link](https://gitee.com/FISCO-BCOS/java-sdk/blob/master/src/test/resources/config-example.toml)The source file for 'src / test / resources / config' is found in the following location:-example.toml`
- * You can also see "config" in the "3. Configuration Example" section of this article-Contents of example.toml "。
+ * config-example.toml can be found at the java-sdk [GitHub link](https://github.com/FISCO-BCOS/java-sdk/blob/master/src/test/resources/config-example.toml) or [Gitee link](https://gitee.com/FISCO-BCOS/java-sdk/blob/master/src/test/resources/config-example.toml), at the source path 'src/test/resources/config-example.toml'
+ * You can also see the contents of "config-example.toml" in the "3. Configuration Example" section of this article。

-4. Modify config-IP and port of the node in example.toml, matching the node you want to connect to。
+4. Modify the IP and port of the node in config-example.toml to match the node you want to connect to。

```toml
[network]

@@ -83,11 +83,11 @@ The Java SDK consists of five configuration options:

### Certificate Configuration

-For security reasons, the Java SDK uses SSL encryption to communicate with nodes. Currently, both non-state-secret SSL connections and state-secret SSL connections are supported.-sdk version 3.3.0 adds support for cipher machines. 
You can use the key in the cipher machine for transaction signature verification.。'[cryptoMaterial]' Configure the certificate information of the SSL connection, including the following configuration items:
+For security reasons, the Java SDK and nodes communicate over SSL encryption. Both non-state-secret and state-secret SSL connections are currently supported;java-sdk 3.3.0 adds support for cipher machines, and transaction signing and verification can use the key in the cipher machine。'[cryptoMaterial]' configures the certificate information of the SSL connection, including the following configuration items:

-* `certPath`: The certificate storage path. The default value is the 'conf' directory.;
+* `certPath`: The certificate storage path. The default value is the 'conf' directory;

-* `caCert`: The path of the CA certificate. This configuration item is commented by default. When this configuration item is commented, the default path of the CA certificate is '${certPath}/ca.crt', or '${certPath}/sm_ca.crt' when the SDK and the node are connected using state-secret SSL.;When this configuration item is turned on, the CA certificate is loaded from the path specified by the configuration;
+* `caCert`: The path of the CA certificate. This configuration item is commented by default. When this configuration item is commented, the default path of the CA certificate is '${certPath}/ca.crt', or '${certPath}/sm_ca.crt' when the SDK and the node are connected using state-secret SSL;When this configuration item is turned on, the CA certificate is loaded from the path specified by the configuration;

* `sslCert`: The path of the SDK certificate. This configuration item is commented by default. When this configuration item is commented, the SDK certificate is loaded from '${certPath}/sdk.crt' for a non-state-secret SSL connection between the SDK and the node, and from '${certPath}/sm_sdk.crt' for a state-secret SSL connection;When this configuration option is enabled, the SDK certificate is loaded from the path specified by the configuration;

@@ -95,9 +95,9 @@ For security reasons, the Java SDK uses SSL encryption to communicate with nodes

* `enSslCert`: The path of the state-secret SSL encryption certificate. This configuration item is needed only when the SDK and the node use a state-secret SSL connection. By default, the SSL encryption certificate is loaded from '${certPath}/sm_ensdk.crt';When this configuration item is enabled, the state-secret SSL encryption certificate is loaded from the path specified by the configuration item;

-* `enSslKey`: The path of the private key for state-secret SSL encryption. This configuration item must be configured only when the state-secret SSL connection is used between the SDK and the node. By default, the SSL encryption private key is loaded from '${certPath}/sm_ensdk.key';When the configuration item is enabled, the SSL encryption private key is loaded from the path specified by the configuration item.。
+* `enSslKey`: The path of the private key for state-secret SSL encryption. This configuration item must be configured only when the state-secret SSL connection is used between the SDK and the node. By default, the SSL encryption private key is loaded from '${certPath}/sm_ensdk.key';When the configuration item is enabled, the SSL encryption private key is loaded from the path specified by the configuration item。

-* `useSMCrypto`: Whether to use the state-secret SSL connection. True indicates that the state-secret SSL connection is used.;
+* `useSMCrypto`: Whether to use the state-secret SSL connection. 
True indicates that the state-secret SSL connection is used;

* `enableHsm`: Whether to use a cipher machine; true means a cipher machine is used;

@@ -109,9 +109,9 @@ For security reasons, the Java SDK uses SSL encryption to communicate with nodes

```eval_rst
.. note::
-    - In most scenarios, you only need to configure the 'certPath' configuration item. Other configuration items do not need additional configuration.;
-    - Obtain an SDK certificate: see 'SDK Connection Certificate Configuration <../cert_config.html>'_ .
-    - The SSL connection mode between the SDK and the RPC node can be determined by the node configuration item 'sm_crypto'. For more information about this configuration item, see 'FISCO BCOS Configuration File and Configuration Item Description <../../tutorial/air/config.html#rpc>`_ .
+    - Most scenarios only need the 'certPath' configuration item; other configuration items do not need additional configuration;
+    - SDK certificate acquisition: see 'SDK connection certificate configuration <../cert_config.html>`_ .
+    - The SSL connection mode between the SDK and the RPC node can be determined by the node configuration item 'sm_crypto'. For more information about this configuration item, see 'FISCO BCOS Configuration File and Configuration Item Description <../../tutorial/air/config.html#rpc>`_ .
```

The SDK certificate configuration example is as follows:

@@ -150,7 +150,7 @@ When the SDK communicates with the FISCO BCOS node, you must configure the 'IP' 

```eval_rst
.. note:: Connection information between nodes and the network
-    The SDK communicates with the node through 'RPC'. The SDK needs to connect to the listening port of 'RPC'. 
This port can be obtained through the 'rpc.listen_port' of the node's 'config.ini'. For more information, see <../../tutorial/air/config.html#rpc>`_
+    The SDK communicates with the node through 'RPC'. The SDK needs to connect to the listening port of 'RPC'. This port can be obtained through the 'rpc.listen_port' of the node's 'config.ini'. For more information, see <../../tutorial/air/config.html#rpc>`_
```

The network configuration example between the SDK and the node is as follows:

@@ -167,7 +167,7 @@ Account configuration is mainly used to set the account information for the SDK

* `keyStoreDir`: Path to load / save account files; the default is 'account';

-* `accountFileFormat`: The default file format is' pem '. Currently, only' pem 'and' p12 'are supported. You do not need a password to load an account file in' pem 'format. You need a password to load an account file in' p12 'format.;
+* `accountFileFormat`: The default file format is 'pem'. Currently, only 'pem' and 'p12' are supported. You do not need a password to load an account file in 'pem' format. You need a password to load an account file in 'p12' format;

* `accountAddress`: Loaded account address, empty by default

@@ -177,7 +177,7 @@ Account configuration is mainly used to set the account information for the SDK

```eval_rst
.. 
note::
-    When 'accountAddress' and 'accountFilePath' are not configured, the SDK generates random account-to-node transactions, and the generated account information is stored in the directory specified by the 'keyStoreDir' configuration item: When the SDK connection node is a non-state secret node, the generated temporary account is stored in the '$' format.{keyStoreDir}/ ecdsa / 'directory;The generated temporary account is saved in the format of 'p12' in the '${keyStoreDir}/ gm 'directory
+    When 'accountAddress' and 'accountFilePath' are not configured, the SDK generates a random account to send transactions to the node, and the generated account information is stored in the directory specified by the 'keyStoreDir' configuration item: when the SDK connects to a non-state-secret node, the generated temporary account is stored in the '${keyStoreDir}/ecdsa/' directory;when it connects to a state-secret node, the generated temporary account is saved in 'p12' format in the '${keyStoreDir}/gm' directory
```

An example account profile is as follows:

@@ -199,11 +199,11 @@ accountFileFormat = "pem"     # The storage format of account file (Default is

To facilitate adjusting the SDK's processing threads according to the actual load of the machine, the Java SDK exposes its thread configuration items in the configuration. '[threadPool]' is the thread pool-related configuration, including:

-* `threadPoolSize`: The number of threads that receive transactions. This configuration item is commented by default. 
When this configuration item is commented, the default value is the number of CPUs of the machine;When this configuration item is enabled, the number of transaction-receiving threads is created based on the configured value;

```eval_rst
.. note::
-    In most scenarios, you do not need to manually configure the thread pool configuration;In the pressure test scenario, you can set 'maxBlockingQueueSize' to a larger size.。
+    In most scenarios, you do not need to manually configure the thread pool;In the pressure-test scenario, you can set 'maxBlockingQueueSize' to a larger size。
```

An example thread pool configuration is as follows:

@@ -216,7 +216,7 @@ An example thread pool configuration is as follows:

### Cpp SDK Log Configuration

-Because the Java SDK uses the interface of the Cpp SDK encapsulated by JNI to perform operations on nodes, the logs of the Cpp SDK are also output when the Java SDK is started。The Cpp SDK log exists as a separate file in the configuration file. The file name is' clog.ini '. JNI will find this file in the root directory or conf directory under' classpath 'when starting.。In general, the file does not require additional configuration, according to the default。
+Because the Java SDK uses the Cpp SDK interface encapsulated by JNI to operate on nodes, the Cpp SDK logs are also output when the Java SDK starts。The Cpp SDK log is configured in a separate file named 'clog.ini'. 
JNI will find this file in the root directory or the conf directory under the 'classpath' at startup。In general, the file does not require additional configuration; the defaults are fine。

An example of a log file is as follows:

@@ -289,7 +289,7 @@ The Java SDK also supports configuration files in 'properties', 'yml', and 'xml'

The meaning and default values of the fields are consistent with the 'toml' configuration file。

-Create the file 'fisco' in the project's home directory-config.properties', copy the following configuration content, and modify each configuration item according to the actual situation。
+Create a file 'fisco-config.properties' in the home directory of the project, copy the following configuration content, and modify each configuration item according to the actual situation。

```properties
cryptoMaterial.certPath=conf   # The certification path

@@ -398,7 +398,7 @@ public class FiscoBcos {

The meaning and default values of the fields are consistent with the 'toml' configuration file。

-Create the file 'fisco' in the project's home directory-config.yml ', copy the following configuration content, and modify each configuration item according to the actual situation。
+Create a file 'fisco-config.yml' in the home directory of the project, copy the following configuration content, and modify each configuration item according to the actual situation。

```yml
cryptoMaterial:

@@ -490,7 +490,7 @@ public class FiscoBcos {

The meaning of each property is consistent with the 'toml' configuration file。

-Create the file 'fisco' in the project's home directory-config.xml ', copy the following configuration content, and modify each configuration item according to the actual situation。
+Create a file 'fisco-config.xml' in the home directory of the project, copy the following configuration content, and modify each configuration item according to the actual situation。

```xml

diff --git a/3.x/en/docs/sdk/java_sdk/contract_parser.md 
b/3.x/en/docs/sdk/java_sdk/contract_parser.md index 8658d2192..abecc4c16 100644 --- a/3.x/en/docs/sdk/java_sdk/contract_parser.md +++ b/3.x/en/docs/sdk/java_sdk/contract_parser.md @@ -1,15 +1,15 @@ # Contract Codec -Tag: "java-sdk`` ``abi`` `scale` ``codec`` +Tags: "java-sdk" "abi" "scale" "codec" ---- -Java SDK 3.x uses two codec formats, 'ABI' and 'Scale', respectively.**Solidity Contract**和**WebankBlockchain-The Liquid Contract (WBC)-Liquid)** The function signature, parameter encoding, and return results are encoded and decoded.。 +Java SDK 3.x uses the 'ABI' and 'Scale' codec formats to encode and decode the function signatures, parameters, and return values of **Solidity contracts** and **WebankBlockchain-Liquid (WBC-Liquid) contracts** respectively. -**Note**In order to distinguish between ABI and Scale, 3.0.0-After rc3, org.fisco.bcos.sdk.ABICodec changed its name to org.fisco.bcos.sdk.v3.codec.ContractCodec +**Note**: To distinguish between ABI and Scale, org.fisco.bcos.sdk.ABICodec was renamed to org.fisco.bcos.sdk.v3.codec.ContractCodec after 3.0.0-rc3 -In the Java SDK, the 'org.fisco.bcos.sdk.v3.codec.ContractCodec' class provides the functions of encoding the output of the transaction (the field of the 'data'), parsing the return value of the transaction, and parsing the content pushed by the contract event.。 +In the Java SDK, the 'org.fisco.bcos.sdk.v3.codec.ContractCodec' class provides functions for encoding a transaction's input (the 'data' field), parsing a transaction's return value, and parsing content pushed by contract events. -Here, we take the 'Add.sol' contract as an example to provide a reference for the use of 'ContractCodec'.。 +Here, we take the 'Add.sol' contract as an example to show how 'ContractCodec' is used. ```solidity pragma solidity^0.6.0; @@ -39,17 +39,17 @@ Call'add(uint256)The transaction receipt of the interface is as follows, focusin ```Java { - // Omit...
+ // Omit... "input":"0x1003e2d2000000000000000000000000000000000000000000000000000000000000003c", "output":"0x00000000000000000000000000000000000000000000000000000000000000a0", "logs":[ { - // Omit... + // Omit... "data":"0x0000000000000000000000000000000000000000000000000000000000000064000000000000000000000000000000000000000000000000000000000000003c", - // Omit... + // Omit... } ], - // Omit... + // Omit... } ``` @@ -67,27 +67,27 @@ CryptoSuite can be obtained from the initialized Client class, please refer to t isWasm is an important parameter that determines the ContractCodec encoding format to use: -- If isWasm is true, the transaction input and output will be coded and decoded using the Scale encoding format, which corresponds to the use of WBC in the node.-The Liquid Contract; -- If isWasm is false, the transaction input and output will be coded and decoded using the ABI encoding format, which corresponds to the Solidity contract in the node.。 +- If isWasm is true, the transaction input and output are encoded and decoded in the Scale format, which corresponds to WBC-Liquid contracts on the node; +- If isWasm is false, the transaction input and output are encoded and decoded in the ABI format, which corresponds to Solidity contracts on the node. -## 2. Construct the deployment contract constructor input. +## 2.
Construct the deployment contract constructor input -The input of the deployment transaction consists of two parts, the binary code of the contract deployment and the encoding of the parameters required by the constructor.。where the binary code is the contract compiled binary code。 +The input of a deployment transaction consists of two parts: the contract's binary code and the encoding of the constructor parameters. The binary code is the compiled bytecode of the contract. ```java -// bin + The parameter list in Object format, which is inserted into the transaction during deployment. +// bin + parameter list in Object format, placed into the transaction at deployment byte[] encodeConstructor(String abi, String bin, List<Object> params); -// bin + List of parameters in String format, abi is used to insert into the transaction at deployment time. +// bin + parameter list in String format; the abi is used when building the deployment transaction byte[] encodeConstructorFromString(String abi, String bin, List<String> params); -// bin + List of parameters in String format, abi needs to be inserted into the transaction additionally. +// bin + pre-encoded parameter bytes; the abi must be supplied to the transaction separately byte[] encodeConstructorFromBytes(String bin, byte[] params); ``` -## 3. Construct the transaction input. +## 3.
Construct the transaction input -The input of the transaction consists of two parts, the function selector and the encoding of the parameters required to call the function.。where the first four bytes of input data (such as"0x1003e2d2") Specifies the function selector to be called, and the calculation method of the function selector is the function declaration (remove spaces, that is,'add(uint256)') hash, take the first 4 bytes。The rest of the input is the result of the input parameters encoded according to the ABI (e.g."000000000000000000000000000000000000000000000000000000000000003c"as parameter"60"result after encoding)。 +The input of a transaction consists of two parts: the function selector and the encoding of the parameters of the called function. The first four bytes of the input data (for example "0x1003e2d2") are the function selector, computed by hashing the function declaration with spaces removed (here 'add(uint256)') and taking the first 4 bytes. The rest of the input is the ABI encoding of the parameters (for example, "000000000000000000000000000000000000000000000000000000000000003c" is the encoding of the parameter "60"). -Depending on how the function is specified and the parameter input format, 'ContractCodec' provides the following interfaces to calculate the 'data' of the transaction.。 +Depending on how the function is specified and the parameter input format, 'ContractCodec' provides the following interfaces to calculate the 'data' of a transaction. ```Java // function name + parameter list in Object format @@ -106,7 +106,7 @@ byte[] encodeMethodByInterfaceFromString(String methodInterface, List pa byte[] encodeMethodByIdFromString(String ABI, byte[] methodId, List<String> params); ``` -The following takes' encodeMethod 'as an example to illustrate the use of
methods; other interfaces are used similarly. ```Java // Initialize the SDK @@ -130,7 +130,7 @@ try { } ``` -## 4. Resolve the transaction return value. +## 4. Parse the transaction return value Depending on how the function is specified and the type of return value, 'ContractCodec' provides the following interfaces to parse the function return value。 @@ -155,9 +155,9 @@ List decodeMethodByInterfaceToString(String abi, String methodInterface, List<String> decodeMethodByIdToString(String abi, byte[] methodId, byte[] output) ``` -The 'output' in the above interface parameters is the 'output' field in the transaction receipt ("0x00000000000000000000000000000000000000000000000000000000000000a0")。The method of using the interface can refer to the interface usage of constructing transaction input.。 +The 'output' parameter of the above interfaces is the 'output' field of the transaction receipt ("0x00000000000000000000000000000000000000000000000000000000000000a0"). These interfaces are used in the same way as those for constructing transaction input. -## 5.
Parse the contract event push content Depending on how the event is specified and the type of parsing result, 'ContractCodec' provides the following interfaces to parse the event content。 @@ -176,7 +176,7 @@ List decodeEventByInterfaceToString(String abi, String eventSignature, E List<String> decodeEventByTopicToString(String abi, String eventTopic, EventLog log) ``` -For event push, the Java SDK requires users to inherit the 'EventCallback' class and rewrite the 'onReceiveLog' interface to implement their own callback processing logic.。The following example uses' decodeEvent 'to parse the pushed event content。Other interfaces are used similarly。 +For event push, the Java SDK requires users to extend the 'EventCallback' class and override the 'onReceiveLog' interface to implement their own callback logic. The following example uses 'decodeEvent' to parse the pushed event content. Other interfaces are used similarly. ```Java class SubscribeCallback implements EventSubCallback { @@ -200,9 +200,9 @@ class SubscribeCallback implements EventSubCallback { } ``` -## 6. Resolve the transaction input value. +## 6.
Parse the transaction input value -Compared with constructing transaction input, parsing transaction input is the reverse operation, and input parameters can be parsed according to ABI.。The transaction is divided into the transaction of the deployment contract and the transaction of the call contract, as can be seen from the above, the transaction input of the deployment contract is composed of binary code and encoded constructor parameters.;The transaction input of the call contract is spliced by the function selector with the encoded function parameters.。 +Parsing transaction input is the reverse of constructing it: the input parameters can be recovered according to the ABI. A transaction is either a contract deployment or a contract call; as shown above, the input of a deployment transaction is the binary code followed by the encoded constructor parameters, while the input of a call transaction is the function selector concatenated with the encoded function parameters. ```java // ABI definition + encoded parameters after removing the function selector + Object; ABIObject returns a list diff --git a/3.x/en/docs/sdk/java_sdk/contracts_to_java.md b/3.x/en/docs/sdk/java_sdk/contracts_to_java.md index 7b350fdb3..5501ade8c 100644 --- a/3.x/en/docs/sdk/java_sdk/contracts_to_java.md +++ b/3.x/en/docs/sdk/java_sdk/contracts_to_java.md @@ -1,10 +1,10 @@ # Generate Java interface files for smart contracts -In the console 'console' and 'java-sdk-demo "provides tools that can be generated from the 'solidity' contract to call the 'java' tool class of the contract.。In this example, use "console" to generate the Solidity contract to call the contract 'java' tool class as an example.。 +The console and java-sdk-demo both provide tools that generate, from a 'solidity' contract, the Java wrapper class used to call that contract.
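Recapping the contract codec sections above: call input is a 4-byte selector followed by 32-byte big-endian words, one per parameter. A minimal JDK sketch of that layout ('AbiWords' and its helpers are illustrative names, not SDK API; the padding assumes non-negative uint values):

```java
import java.math.BigInteger;

public class AbiWords {
    // Build call data: selector + one left-padded 32-byte word (non-negative uint only)
    public static String buildData(String selectorHex, BigInteger param) {
        String word = param.toString(16);
        return selectorHex + "0".repeat(64 - word.length()) + word;
    }

    // Decode one 32-byte word back into an unsigned integer
    public static BigInteger decodeWord(String wordHex) {
        return new BigInteger(wordHex, 16);
    }

    public static void main(String[] args) {
        String word = "0".repeat(62) + "3c";   // the word from the add(uint256) receipt
        System.out.println(decodeWord(word));  // 60
        System.out.println(buildData("0x1003e2d2", BigInteger.valueOf(60)));
    }
}
```

The real SDK's 'ContractCodec' handles these layouts (plus dynamic types, Scale, and more) for you; the sketch only illustrates the byte layout of the simplest case.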
This example uses the console to generate the Java wrapper class for a Solidity contract. -> Experience webankblockchain-liquid (hereinafter referred to as WBC-Liquid), please refer to subsection 5。 +> To try webankblockchain-liquid (WBC-Liquid), see Section 5. > -> Using "java-sdk-For an example of demo "see section 7。 +> For an example of using "java-sdk-demo", see Section 7. ## 1. Download Console @@ -20,9 +20,9 @@ $ bash download_console.sh $ cd ~/fisco/console ``` -## 2. Place the contract in the contract directory of the console. +## 2. Place the contract in the contract directory of the console -**Then, place the Solidity smart contract you want to use in the "~ / fisco / console / contracts / solidity" directory**。This time we use HelloWorld.sol in the console as an example.。Ensure that HelloWorld.sol is in the specified directory。 +**Place the Solidity smart contract you want to use in the "~/fisco/console/contracts/solidity" directory**. Here we use the HelloWorld.sol shipped with the console as an example. Ensure that HelloWorld.sol is in that directory. ```shell # Current Directory ~/fisco/console @@ -35,13 +35,13 @@ get back HelloWorld.sol KVTableTest.sol ShaTest.sol KVTable.sol ... ``` -## 3. Generate the Java class that calls the smart contract. +## 3. Generate the Java class that calls the smart contract ```shell # Current Directory ~/fisco/console $ bash contract2java.sh solidity -p org.com.fisco -s ./contracts/solidity/HelloWorld.sol # The parameter "org.com.fisco" in the above command is the package name to which the generated java class belongs。 -# Via the command. / contract2java.sh-H can view the script usage method +# Run ./contract2java.sh -h to view the script usage ``` get back @@ -52,7 +52,7 @@ INFO: Compile for solidity HelloWorld.sol success.
*** Convert solidity to java for HelloWorld.sol success *** ``` -The use of 'contract2java.sh' will be described in detail in Appendix 2.。 +The use of 'contract2java.sh' is described in detail in Appendix 2. View Compilation Results @@ -76,9 +76,9 @@ After running successfully, the java, abi, and bin directories will be generated | |-- HelloWorld.java # Solidity Compiled HelloWorld Java File ``` -The 'org / com / fisco /' package path directory is generated in the Java directory。The Java contract file 'HelloWorld.java' will be generated in the package path directory.。where 'HelloWorld.java' is the Java contract file required by the Java application。 +The package path directory 'org/com/fisco/' is generated in the java directory, and the Java contract file 'HelloWorld.java' is generated inside it. 'HelloWorld.java' is the file a Java application needs in order to call the contract. -## 4. The generated Java file code structure. +## 4. The generated Java file code structure The following takes the generated interface list of 'HelloWorld.java' as an example to briefly explain the code structure。 ```Java public class HelloWorld extends Contract { // constructor protected HelloWorld(String contractAddress, Client client, CryptoKeyPair credential); - // Obtain the code of the contract according to the CryptoSuite. If the code is the national secret, return the code of the national secret. + // Obtain the contract binary according to the CryptoSuite.
If the SM ('guomi') cryptography suite is in use, return the SM binary public static String getBinary(CryptoSuite cryptoSuite); // Get the ABI json string of the contract public static String getABI(); @@ -94,30 +94,30 @@ public class HelloWorld { public String get() throws ContractException; // The Function class of the HelloWorld contract get interface, which records the input and return types and can be used for ABI parsing public Function getMethodGetRawFunction() throws ContractException; - // The HelloWorld contract set interface. Enter the string type and return the transaction receipt. + // The set interface of the HelloWorld contract: takes a string and returns the transaction receipt public TransactionReceipt set(String n); // The Function class of the HelloWorld contract set interface, which records the input and return types and can be used for ABI parsing public Function getMethodSetRawFunction(String n) throws ContractException; - // Obtain the signed transaction calling the set interface, which can be sent directly to the chain. + // Obtain a signed transaction calling the set interface; it can be sent directly to the chain public String getSignedTransactionForSet(String n); - // The HelloWorld contract set asynchronous interface. Enter the string type and return the transaction hash. + // The asynchronous set interface of the HelloWorld contract.
Takes a string and returns the transaction hash public String set(String n, TransactionCallback callback); // Input parsing of the HelloWorld contract set public Tuple1<String> getSetInput(TransactionReceipt transactionReceipt); - // If there is a known HelloWorld contract on the chain, you can directly load the Java HelloWorld class using the modified interface.。Note: ABI must be the same, otherwise the call fails + // If a HelloWorld contract already exists on the chain, you can load the Java HelloWorld class directly with this interface. Note: the ABI must match, otherwise calls fail public static HelloWorld load(String contractAddress, Client client, CryptoKeyPair credential); // Initiate the deployment contract operation on the chain and return the Java HelloWorld class。 public static HelloWorld deploy(Client client, CryptoKeyPair credential) throws ContractException; } ``` -## 5. Generate WBC-Java interface file for Liquid contract +## 5. Generate the Java interface file for the WBC-Liquid contract -Similar to the Solidity contract above, if you want to experience the webankblockchain-liquid (hereinafter referred to as WBC-Liquid) deployment operations, the console also provides you with examples.。 +As with the Solidity contract above, if you want to try deploying webankblockchain-liquid (hereinafter WBC-Liquid) contracts, the console also provides examples. Before using it, ensure the compilation environment of the cargo liquid. For details about how to use it, see: https://liquid-doc.readthedocs.io/。 -### 5.1 WBC-Compilation of Liquid Contracts +### 5.1 Compilation of the WBC-Liquid Contract You can view it under contracts/liquid in the dist directory of the console.
The following is an example of hello_world: @@ -141,23 +141,23 @@ Binary: ~/fisco/contracts/liquid/hello_world/target/hello_world.wasm Generate 'hello_world.wasm' and 'hello_world.abi' files -### 5.2 WBC-Liquid Contract Generating Java Files +### 5.2 Generate Java files from the WBC-Liquid contract ```shell # Current Directory ~/fisco/console $ bash contract2java.sh liquid -b ./contracts/liquid/hello_world/hello_world.wasm -a ./contracts/liquid/hello_world/hello_world.abi -s ./contracts/liquid/hello_world/hello_world_sm.wasm -p org.com.fisco -# Via the command. / contract2java.sh-H can view the script usage method +# Run ./contract2java.sh -h to view the script usage $ ls contracts/sdk/java/org/com/fisco # get back HelloWorld.java ``` -## 6. contract2java.sh script parsing. +## 6. The contract2java.sh script in detail -The console provides a specialized tool for generating Java contracts that allows developers to integrate Solidity and WBC-Liquid contract files are compiled into Java contract files。 +The console provides a dedicated tool for generating Java contract classes, which lets developers compile Solidity and WBC-Liquid contract files into Java contract files. -The current contract generation tool supports Solidity's automatic compilation and generation of Java files, supports specifying WBC-Liquid compiled WASM file and ABI file to generate Java file。 +The tool can compile Solidity contracts automatically to generate Java files, and can generate Java files from the WASM and ABI files produced by WBC-Liquid compilation. ### 6.1 Solidity Contract Use @@ -195,11 +195,11 @@ usage: contract2java.sh [OPTIONS...] Detailed parameters: - `package`: Generate the package name of the 'Java' file。 -- `sol`: (Optional)The path of the 'solidity' file. Two methods are supported: file path and directory path.
When the parameter is a directory, all the 'solidity' files in the directory are compiled and converted.。The default directory is' contracts / solidity'。 -- `output`: (Optional)The directory where the 'Java' file is generated. By default, it is generated in the 'contracts / sdk / java' directory.。 -- `no-analysis': (Optional) Skip static analysis of solidity compilation, which can effectively reduce compilation speed。Static analysis can analyze the parallel feasibility of the contract interface and put the results of the analysis into the abi file.。 -- `enable-async-call ': (Optional) You can generate a Java file with an asynchronous call interface.-Use when sdk version > = 3.3.0。 -- `transaction-version ': (Optional) Specifies the version number of the generated Java file for sending transactions. The default value is 0, which is compatible with all versions of the node.;When the transaction version is 1, transactions with fields such as value, gasLimit, gasPrice, and EIP1559 can only be sent to nodes of 3.6.0 and above.。 +- `sol`: (Optional) The path of the 'solidity' file. Both a file path and a directory path are supported; when a directory is given, all 'solidity' files in it are compiled and converted. The default directory is 'contracts/solidity'. +- `output`: (Optional) The directory where the 'Java' files are generated. By default they are generated in the 'contracts/sdk/java' directory. +- `no-analysis`: (Optional) Skip the static analysis step of solidity compilation, which can noticeably shorten compilation time. Static analysis determines whether contract interfaces can be executed in parallel and writes the results into the abi file. +- `enable-async-call`: (Optional) Generate a Java file with asynchronous call interfaces; use when the java-sdk version is >= 3.3.0. +- `transaction-version`: (Optional) Specifies the transaction version used by the generated Java file when sending transactions.
The default value is 0, which is compatible with all node versions. When the transaction version is 1, transactions carrying fields such as value, gasLimit, gasPrice, and EIP1559 can be sent only to nodes of version 3.6.0 and above. ### 6.2 WBC-Liquid Contract Use @@ -219,16 +219,16 @@ usage: contract2java.sh [OPTIONS...] Detailed parameters: -- 'abi ': (Required) WBC-Path to the 'ABI' file of the Liquid contract, which is generated in the target folder after using the 'cargo liquid build' command。 -- 'bin ': (Required) WBC-Path to the 'wasm bin' file of the Liquid contract, which is generated in the target folder after using the 'cargo liquid build' command。 -- 'package ': (Optional) Generate the package name of the' Java 'file, which is' org 'by default.。 -- `sm-bin ': (Required) WBC-The path to the 'wasm sm bin' file of the Liquid contract.-Generated in the target folder after the g 'command。 +- `abi`: (Required) The path of the WBC-Liquid contract 'ABI' file, generated in the target folder by the 'cargo liquid build' command. +- `bin`: (Required) The path of the WBC-Liquid contract 'wasm bin' file, generated in the target folder by the 'cargo liquid build' command. +- `package`: (Optional) The package name of the generated 'Java' file, 'org' by default. +- `sm-bin`: (Required) The path of the WBC-Liquid contract 'wasm sm bin' file, generated in the target folder by the 'cargo liquid build -g' command. -## 7. Use "java-sdk-demo "Generate a Java tool class for a smart contract that calls it +## 7.
Use "java-sdk-demo" to generate the Java tool class that calls the smart contract ```shell $ mkdir -p ~/fisco && cd ~/fisco -# Get Java-sdk code +# Get the java-sdk-demo code $ git clone https://github.com/FISCO-BCOS/java-sdk-demo # If the preceding command cannot be executed for a long time due to network problems, try the following command: @@ -237,10 +237,10 @@ $ git clone https://gitee.com/FISCO-BCOS/java-sdk-demo $ cd java-sdk-demo # Compile $ ./gradlew clean build -x test -# enter sdk-demo / dist directory, create contract storage directory +# Enter the sdk-demo/dist directory and create a contract storage directory $ cd dist && mkdir -p contracts/solidity -# Copy the sol file that needs to be converted into java code to ~ / fisco / java-under the sdk / dist / contracts / consolidation path +# Copy the sol files to be converted into java code to the path ~/fisco/java-sdk/dist/contracts/solidity # Convert sol, where ${packageName} is the generated java code package path -# The generated java code is located at ~ / fisco / java-sdk / dist / contracts / sdk / java directory +# The generated java code is located in the ~/fisco/java-sdk/dist/contracts/sdk/java directory java -cp "apps/*:lib/*:conf/" org.fisco.bcos.sdk.demo.codegen.DemoSolcToJava ${packageName} ``` diff --git a/3.x/en/docs/sdk/java_sdk/crypto.md b/3.x/en/docs/sdk/java_sdk/crypto.md index c3df418e6..59523b214 100644 --- a/3.x/en/docs/sdk/java_sdk/crypto.md +++ b/3.x/en/docs/sdk/java_sdk/crypto.md @@ -1,6 +1,6 @@ # Signature and Verification -Tag: "java-sdk`` ``Crypto`` +Tags: "java-sdk" "Crypto" ---- @@ -120,11 +120,11 @@ After initializing the cryptography suite 'CryptoSuite', users can directly use ## Signature / Validation Interface -After initializing the cryptography suite 'CryptoSuite', you can directly use the created 'CryptoSuite' to call the signature and signature verification interfaces.
You can also create a specified signature verification object and call the signature and signature verification interfaces.。 +After initializing the cryptography suite 'CryptoSuite', you can call the signing and verification interfaces directly through it, or create a dedicated signature object and invoke them on that. ```eval_rst .. note:: - The plaintext data passed in by the signature / signature verification interface must be a hash. Before generating the signature of the specified plaintext, the hash must be calculated and the hash result must be passed into the interface as the original signature to generate the signature. + The data passed to the signing/verification interfaces must be a hash. Before signing a given plaintext, compute its hash first and pass the hash into the interface as the message to be signed. ``` ### Invoking the signing / checking interface using CryptoSuite diff --git a/3.x/en/docs/sdk/java_sdk/event_sub.md b/3.x/en/docs/sdk/java_sdk/event_sub.md index a85ed1b2f..f1952cf2e 100644 --- a/3.x/en/docs/sdk/java_sdk/event_sub.md +++ b/3.x/en/docs/sdk/java_sdk/event_sub.md @@ -1,12 +1,12 @@ # Contract Event Push -Tag: "java-sdk "" Event Subscription "" Event " +Tags: "java-sdk" "event subscription" "Event" ---- ## 1. Function Introduction -The contract event push function provides an asynchronous push mechanism for contract events. The client sends a registration request to the node, which carries the parameters of the contract events that the client is concerned about. The node filters the 'Event Log' of the request block range according to the request parameters and pushes the results to the client in stages.。 +The contract event push function provides an asynchronous push mechanism for contract events.
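One practical consequence of this push mechanism, noted again in the tutorial section below, is that the same log may be delivered more than once. A minimal deduplication sketch, assuming a log is uniquely identified by 'blockNumber', 'transactionIndex', and 'logIndex' ('LogDeduper' is an illustrative name, not SDK API):

```java
import java.util.HashSet;
import java.util.Set;

public class LogDeduper {
    private final Set<String> seen = new HashSet<>();

    // Returns true the first time a (blockNumber, transactionIndex, logIndex)
    // triple is seen, false for every repeat delivery of the same log.
    public boolean isNew(long blockNumber, long transactionIndex, long logIndex) {
        return seen.add(blockNumber + ":" + transactionIndex + ":" + logIndex);
    }
}
```

A callback would consult 'isNew' before processing each pushed log, skipping repeats.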
The client sends a registration request to the node, carrying the parameters of the contract events the client is interested in. The node filters the 'Event Log' entries in the requested block range according to these parameters and pushes the results to the client in stages. ## 2. Interactive Protocol @@ -32,16 +32,16 @@ The client sends a registration request for event push to the node: } ``` -- filerID: string type, unique for each request, marked as a registration task -- groupID: string type, group ID -- fromBlock: shaping string, initial block。"latest" current block high -- toBlock: shaping string, final block。When "latest" is processed to the current block high, continue to wait for a new block -- addresses: string or string array: string represents a single contract address, array is multiple contract addresses, array can be empty -- topics: string type or array type: string represents a single topic, array is multiple topics, array can be empty +- filterID: string type, unique to each request, identifying a registration task +- groupID: string type, the group ID +- fromBlock: integer string, the starting block. "latest" means the current block height +- toBlock: integer string, the final block. With "latest", after processing up to the current block height, the node keeps waiting for new blocks +- addresses: string or string array; a string is a single contract address, an array holds multiple addresses, and the array can be empty +- topics: string or array; a string is a single topic, an array holds multiple topics, and the array can be empty ### 2.2 Node reply -When the node accepts the client registration request, it checks the request parameters and replies to the client whether it has successfully accepted the registration request.。 +When the node receives a client
registration request, it checks the request parameters and replies to the client indicating whether the registration was accepted. ```Json // response sample: @@ -51,12 +51,12 @@ When the node accepts the client registration request, it checks the request par } ``` -- filterID: string type, unique for each request, marked as a registration task -- result: shaping, returns the result。0 success, the rest are failure status codes +- filterID: string type, unique to each request, identifying a registration task +- result: integer, the return code. 0 means success; all other values are failure status codes ### 2.3 Event Log data push -After the node verifies that the client registration request is successful, it pushes the 'Event Log' data to the client based on the client request parameters.。 +After the node verifies the client registration request successfully, it pushes the 'Event Log' data to the client based on the request parameters. ```Json // event log push sample: @@ -69,15 +69,15 @@ After the node verifies that the client registration request is successful, it p } ``` -- filterID: string type, unique for each request, marked as a registration task -- result: shaping 0: 'Event Log' data push 1: push completed。The client registers and requests the data push of the corresponding node multiple times (the request block range is relatively large or waiting for a new block). If the 'result' field is 1, the node push has ended. -- logs: an array of Log objects, valid when result is 0 +- filterID: string type, unique to each request, identifying a registration task +- result: integer. 0: 'Event Log' data is being pushed; 1: the push is complete. A registration may trigger multiple pushes from the node (when the requested block range is large, or while waiting for new blocks); a 'result' of 1 means the node has finished pushing +- logs: an array of Log objects, valid when result is 0 ## 3. Java SDK Contract Event Tutorial ### Registration Interface -The 'org.fisco.bcos.sdk.v3.eventsub.EventSubscribe' class in the Java SDK provides an interface for registering contract events.
You can call 'subscribeEvent' to send a registration request to a node and set a callback function.。 +The 'org.fisco.bcos.sdk.v3.eventsub.EventSubscribe' class in the Java SDK provides interfaces for registering contract events. You can call 'subscribeEvent' to send a registration request to a node and set a callback function. ```Java public String subscribeEvent(EventSubParams params, EventSubCallback callback); @@ -107,7 +107,7 @@ public interface EventSubCallback { - 'status' callback return status: ```Java - 0 : Normal push. Logs is the event log pushed by the node. + 0 : Normal push. logs is the event log pushed by the node 1 : The push is completed, and all blocks in the execution interval have been processed 42000 : Other errors -41000 : Invalid parameter, client validation parameter error returned @@ -120,7 +120,7 @@ public interface EventSubCallback { -41007 : Event not registered, unsubscribe failed ``` -- 'logs' indicates the list of 'Event Log' objects for the callback. The status is valid as 0。The default value is' null '. The' data 'field of the following EventLog object can be resolved in the subclass through' org.fisco.bcos.sdk.v3.abi.ContractCodec '。 +- 'logs' is the list of 'Event Log' objects in the callback, valid when status is 0. The default value is 'null'. The 'data' field of the EventLog object below can be decoded in a subclass with 'org.fisco.bcos.sdk.v3.abi.ContractCodec'. ```Java // EventLog object @@ -137,7 +137,7 @@ public interface EventSubCallback { - Implement callback object -Java SDK has no default implementation for the callback class 'EventSubCallback'.
You can inherit the 'EventSubCallback' class and override the 'onReceiveLog' method to implement your own callback logic.

```Java
class SubscribeCallback implements EventSubCallback {
@@ -147,7 +147,7 @@ class SubscribeCallback implements EventSubCallback {
}
```

-**Note: The logs of multiple callbacks of the 'onReceiveLog' interface may be duplicated. You can perform deduplication based on the 'blockNumber, transactionIndex, and logIndex' in the 'EventLog' object.**
+**Note: The logs of multiple callbacks of the 'onReceiveLog' interface may be duplicated. You can perform deduplication based on the 'blockNumber', 'transactionIndex', and 'logIndex' in the 'EventLog' object.**

#### topic Tools

@@ -172,7 +172,7 @@ class SubscribeCallback implements EventSubCallback {

## 4. Example

-['Asset'](https://github.com/FISCO-BCOS/LargeFiles/blob/master/tools/asset-app.tar.gz)The 'TransferEvent' of the contract is used as an example to illustrate, giving some scenarios of contract event push for users' reference.。
+The 'TransferEvent' of the ['Asset'](https://github.com/FISCO-BCOS/LargeFiles/blob/master/tools/asset-app.tar.gz) contract is used as an example to illustrate some contract event push scenarios for users' reference.

```solidity
contract Asset {
@@ -194,7 +194,7 @@ contract Asset {
}
```

-- Scenario 1: Call back all / latest events on the chain to the client
+- Scenario 1: All / the latest events on the chain are called back to the client

```Java
// Initialize EventSubscribe
@@ -204,10 +204,10 @@ contract Asset {
// Parameter settings
EventSubParams params = new EventSubParams();
- // All Event fromBlock is set to-1
+ // For all events, set fromBlock to -1
params.setFromBlock(-1);
- // set toBlock to-1, processing to the latest block continues to wait for a new block
+ // Set toBlock to -1: after processing up to the latest block, keep waiting for new blocks
params.setToBlock(-1);
// Register event
@@ -229,9 +229,9 @@ contract Asset {
// Set parameters 
EventSubParams params = new EventSubParams();
- // Start with the latest block at the time of subscription, and set fromBlock to-1
+ // To start from the latest block at the time of subscription, set fromBlock to -1
params.setFromBlock(-1);
- // set toBlock to-1, processing to the latest block continues to wait for a new block
+ // Set toBlock to -1: after processing up to the latest block, keep waiting for new blocks
params.setToBlock(-1);
// topic0,TransferEvent(int256,string,string,uint256)
@@ -260,9 +260,9 @@ Contract Address: `String addr = "0x06922a844c542df030a2a2be8f835892db99f324";`
// Set parameters
EventSubParams params = new EventSubParams();
- // Start with the latest block at the time of subscription, and set fromBlock to-1
+ // To start from the latest block at the time of subscription, set fromBlock to -1
params.setFromBlock(-1);
- // set toBlock to-1, processing to the latest block continues to wait for a new block
+ // Set toBlock to -1: after processing up to the latest block, keep waiting for new blocks
params.setToBlock(-1);
// addresses is set to the asset address, which matches the contract address
@@ -292,7 +292,7 @@ Contract Address: `String addr = "0x06922a844c542df030a2a2be8f835892db99f324";`
// From the initial block, fromBlock is set to 1
params.setFromBlock(1);
- // set toBlock to-1, processing to the latest block continues to wait for a new block
+ // Set toBlock to -1: after processing up to the latest block, keep waiting for new blocks
params.setToBlock(-1);
// addresses is set to the asset address, which matches the contract address
@@ -312,7 +312,7 @@ Contract Address: `String addr = "0x06922a844c542df030a2a2be8f835892db99f324";`

## 4. 
Parsing Examples

-The 'Asset' contract is used as an example to describe the implementation of contract deployment, invocation, registration of events, and resolution of node push events.。Note: The event parameters with the added indexed attribute are not decoded and are recorded directly at the corresponding position. The remaining event parameters with non-indexed attributes will be decoded.。
+The 'Asset' contract is used as an example to describe contract deployment, invocation, event registration, and parsing of node-pushed events. Note: event parameters with the indexed attribute are not decoded and are recorded directly at the corresponding position; the remaining non-indexed event parameters are decoded.

```Java
String contractAddress = "";

diff --git a/3.x/en/docs/sdk/java_sdk/index.md b/3.x/en/docs/sdk/java_sdk/index.md
index 3f7119ba2..794a567c5 100644
--- a/3.x/en/docs/sdk/java_sdk/index.md
+++ b/3.x/en/docs/sdk/java_sdk/index.md
@@ -1,15 +1,15 @@
# 2. Java SDK

-Tag: "java-sdk "" blockchain application "
+Tags: "java-sdk" "blockchain application"

----

```eval_rst
.. important::
-    Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
+    Related software and environment release notes! Please check `here <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
```

-The Java SDK provides the Java API for accessing FISCO BCOS nodes, and supports node status query, deployment, and contract invocation.
+The Java SDK provides the Java API for accessing FISCO BCOS nodes, and supports node status query, contract deployment, and contract invocation.

```eval_rst
.. 
toctree::

diff --git a/3.x/en/docs/sdk/java_sdk/keytool.md b/3.x/en/docs/sdk/java_sdk/keytool.md
index d3777fe93..368e66c3a 100644
--- a/3.x/en/docs/sdk/java_sdk/keytool.md
+++ b/3.x/en/docs/sdk/java_sdk/keytool.md
@@ -1,12 +1,12 @@
# Account Key Management Tool

-Tag: "java-sdk "" 'Set up account "
+Tags: "java-sdk" "set up account"

----

The Java SDK provides an account management interface and supports the following functions:

-- **Account loading**: Loads an account from a specified path, supports loading account files in both 'pem' and 'p12' formats, and also supports loading hexadecimal private key strings.
+- **Account loading**: Loads an account from a specified path; supports loading account files in both 'pem' and 'p12' formats, as well as hexadecimal private key strings.

- **Account Generation**: Randomly generate account public-private key pairs

@@ -16,7 +16,7 @@ The Java SDK provides an account management interface and supports the following

```eval_rst
.. note::
-    The Java SDK provides interface-level account generation methods. For more information about tool-level account generation scripts, see 'get _ account.sh script <.. /.. / develop / account.html >'.
+    The Java SDK provides API-level account generation methods. For tool-level account generation scripts, see the `get_account.sh script <../../develop/account.html>`_ .
```

## 1. 
Account loading

@@ -100,7 +100,7 @@ public CryptoKeyPair loadGMAccountFromHexPrivateKey(BigInteger privateKey)

### 1.3 Load account from pem file

-An example of loading a transaction sending account from a specified 'pem' account file is as follows(Please refer to [Quick Start] for client initialization method.(./quick_start.html#id4)):
+An example of loading a transaction-sending account from a specified 'pem' account file is as follows (please refer to [Quick Start](./quick_start.html#id4) for the client initialization method):

```java
// Load the pem account file from the path specified by pemAccountFilePath and set it as the transaction sending account
@@ -132,7 +132,7 @@ public void loadP12Account(Client client, String p12AccountFilePath, String pass

Java SDK 'org.fisco.bcos.sdk.v3.crypto.CryptoSuite' provides account generation functionality.

-Examples of randomly generated non-State secret accounts are as follows.
+An example of randomly generating a non-SM (non-national-cryptography) account is as follows.

```java
// Create a non-SM (non-national-cryptography) CryptoSuite
@@ -143,7 +143,7 @@ CryptoKeyPair cryptoKeyPair = cryptoSuite.generateRandomKeyPair();
String accountAddress = cryptoKeyPair.getAddress();
```

-An example of a randomly generated State Secret account is as follows.
+An example of randomly generating an SM (national cryptography) account is as follows.

```java
// Create an SM (national cryptography) CryptoSuite
@@ -156,7 +156,7 @@ String accountAddress = cryptoKeyPair.getAddress();

## 3. Account Preservation

-When the account is not custom loaded and the account information is not configured through the profile(Please refer to [here] for account configuration.(./config.html#id6))Java SDK randomly generates an account to send transactions. 
Java SDK 'org.fisco.bcos.sdk.v3.crypto.CryptoSuite' provides the account save function, which can save the randomly generated account in the specified path。
+When no custom account is loaded and no account information is configured in the configuration file (please refer to [here](./config.html#id6) for account configuration), the Java SDK randomly generates an account to send transactions. The Java SDK 'org.fisco.bcos.sdk.v3.crypto.CryptoSuite' provides an account save function, which can save the randomly generated account to a specified path.

An example of saving an account file to a specified path in the format 'pem' is as follows:

diff --git a/3.x/en/docs/sdk/java_sdk/precompiled_service_api.md b/3.x/en/docs/sdk/java_sdk/precompiled_service_api.md
index 68f1396fe..2c1d29dab 100644
--- a/3.x/en/docs/sdk/java_sdk/precompiled_service_api.md
+++ b/3.x/en/docs/sdk/java_sdk/precompiled_service_api.md
@@ -6,7 +6,7 @@ Tags: "Precompiled Contracts" "Interface" "Precompiled" "Service"

The Java SDK provides Java API interfaces for blockchain application developers. By function, Java APIs can be divided into the following categories:

-- Client: Provides access to FISCO BCOS 3.x node JSON-RPC interface support, providing support for deployment and invocation contracts;
+- Client: Provides access to the JSON-RPC interface of FISCO BCOS 3.x nodes, and supports deploying and invoking contracts;

- Precompiled: Provides calls to the FISCO BCOS 3.x precompiled contract (Precompiled Contracts) interfaces, including 'ConsensusService', 'SystemConfigService', 'BFSService', 'KVTableService', 'TableCRUDService', and 'AuthManager'.

## 5. BFSService

@@ -17,39 +17,39 @@ Creates a directory at the specified absolute path。

**Parameters**

-- path: absolute path
+- path: absolute path

**Return value**

-- RetCode: Create Directory Results。
+- RetCode: result of the directory creation.

### 5.2 list

-View the information of the specified absolute path. 
If it is a directory file, the meta information of all sub-resources in the directory is returned. If it is another file, the meta information of the file is returned.。(After the node version 3.1, the interface only returns up to 500)
+View the information at the specified absolute path. For a directory, the meta information of all sub-resources in the directory is returned; for any other file, the meta information of that file is returned. (Since node version 3.1, this interface returns at most 500 entries)

**Parameters**

-- absolute Path: absolute path
+- absolutePath: absolute path

**Return value**

-- List < BfsInfo >: Returns a list of meta information for a file。
+- List<BfsInfo>: Returns a list of file meta information.

### 5.3 list

Note: This interface can only be used when the node version is greater than 3.1

-View the information of the specified absolute path. If it is a directory file, the meta information of all sub-resources in the directory is returned. 
If it is another file, the meta information of the file is returned. If there are too many files in the directory to traverse (more than 500), you can traverse them using offset and limit.

**Parameters**

-- absolute Path: absolute path
-- offset: offset
-- limit: limit value
+- absolutePath: absolute path
+- offset: offset
+- limit: limit value

**Return value**

-- Tuple2 < BigInteger, List < BfsInfo > >: if the first value of tuple is negative, it means that the execution error occurred; if it is positive, it means how many files are left to be returned (when traversing the directory file);The second value of tuple is a list of meta information of the returned file。
+- Tuple2<BigInteger, List<BfsInfo>>: if the first value of the tuple is negative, an execution error occurred; if it is positive, it indicates how many files remain to be returned (when traversing a directory); the second value of the tuple is the list of file meta information returned.

### 5.4 isExist

@@ -59,24 +59,24 @@ Determine whether the file resource exists。

**Parameters**

-- absolute Path: absolute path
+- absolutePath: absolute path

**Return value**

-- BFSInfo: returns specific file meta information if it exists, or null if it does not exist。
+- BFSInfo: returns the specific file meta information if it exists, or null if it does not exist.

### 5.5 link

-Create soft links to contracts under / apps / to facilitate contract management and version control。This method provides the same interface as before in order to adapt to the CNS function of the old node version.。
+Create soft links to contracts under /apps/ to facilitate contract management and version control. This method keeps the same interface as before, to stay compatible with the CNS function of older node versions.

After successful execution, a link file is created under /apps/. 
For example, if the contract name is hello and the version number is v1, the absolute path of the link file is /apps/hello/v1

**Parameters**

-- name: contract name
-- version: contract version number
-- contractAddress: contract address
-- abi: Contract ABI
+- name: contract name
+- version: contract version number
+- contractAddress: contract address
+- abi: contract ABI

**Return value**

@@ -86,13 +86,13 @@ After successful execution, a link file is created under / apps /. For example,

Note: This interface can only be used when the node version is greater than 3.1

-Create soft links to contracts under / apps / to facilitate contract management and version control。This interface allows users to create soft links at any path in the / apps directory.
+Create soft links to contracts under /apps/ to facilitate contract management and version control. This interface allows users to create soft links at any path in the /apps directory.

**Parameters**

-- absolute Path: absolute path
-- contractAddress: contract address
-- abi: Contract ABI
+- absolutePath: absolute path
+- contractAddress: contract address
+- abi: contract ABI

**Return value**

@@ -100,15 +100,15 @@ Create soft links to contracts under / apps / to facilitate contract management

### 5.7 readlink

-Obtain the address corresponding to the link file。This method provides the same interface as before in order to adapt to the CNS function of the old node version.。
+Obtain the address corresponding to the link file. This method keeps the same interface as before, to stay compatible with the CNS function of older node versions.

**Parameters**

-- absolute Path: absolute path
+- absolutePath: absolute path

**Return value**

-- address: the address corresponding to the link file.
+- address: the address corresponding to the link file

## 6. 
ConsensusService

@@ -118,8 +118,8 @@

**Parameters**

-- nodeId: The ID of the node added as the consensus node.
-- weight: add the weight of the consensus node
+- nodeId: the ID of the node added as a consensus node
+- weight: the weight of the added consensus node

**Return value**

@@ -127,7 +127,7 @@ Obtain the address corresponding to the link file。This method provides the sam

```eval_rst
.. note::
-    In order to ensure that the new node does not affect the consensus, the node to be added as a consensus node must establish a P2P network connection with other nodes in the group, and the node block height must not be lower than the current highest block.-10, otherwise it cannot be added as a consensus node。
+    In order to ensure that the new node does not affect the consensus, the node to be added as a consensus node must establish a P2P network connection with the other nodes in the group, and its block height must not be lower than the current highest block height minus 10; otherwise it cannot be added as a consensus node.
```

### 6.2 addObserver

@@ -136,7 +136,7 @@ Add the specified node as an observation node。

**Parameters**

-- nodeId: The ID of the node added as an observation node.
+- nodeId: The ID of the node added as an observation node

**Return value**

@@ -148,7 +148,7 @@ Move the specified node out of the group。

**Parameters**

-- nodeId: The node ID of the node removed from the group.
+- nodeId: The node ID of the node removed from the group

**Return value**

@@ -160,8 +160,8 @@ Set the weight of a consensus node。

**Parameters**

-- nodeId: The node ID of the consensus node.
-- weight: weight, not less than 1
+- nodeId: The node ID of the consensus node
+- weight: weight, not less than 1

**Return value**

@@ -175,7 +175,7 @@ Sets the value of the specified system configuration item。

**Parameters**

-- key: Configuration item. 
Currently, 'tx _ count _ limit' and 'consensus _ leader _ period' are supported.;
+- key: Configuration item. Currently, 'tx_count_limit' and 'consensus_leader_period' are supported;

- value: The value to which the system configuration item is set。

@@ -193,7 +193,7 @@ Create User Table。

- tableName: Name of the created user table;
- keyFieldName: Primary key name of the user table;
-- valueFields: The fields of the user table.
+- valueFields: The fields of the user table

**Return value**

@@ -220,7 +220,7 @@ Query specified records in the user table。

**Parameters**

- tableName: Queried user table name。
-- key: the primary key value to be queried.。
+- key: the primary key value to be queried.

**Return value**

@@ -236,7 +236,7 @@ Obtain the description information of the specified user table。

**Return value**

-- Map: Description of the user table. The mapping between 'PrecompiledConstant.KEY _ NAME' and the mapping between 'PrecompiledConstant.FIELD _ NAME' and all fields. The fields are separated by commas.。
+- Map: Description of the user table: 'PrecompiledConstant.KEY_NAME' maps to the primary key name, and 'PrecompiledConstant.FIELD_NAME' maps to all fields, separated by commas.

### 8.5 asyncSet

@@ -256,9 +256,9 @@ Obtain the description information of the specified user table。

## 9. CNSService

-**Note:** from 3.0.0-rc3 version started, CNS is no longer supported。For the corresponding contract alias function, please refer to the BFS link function.。
+**Note:** Starting with version 3.0.0-rc3, CNS is no longer supported. For the corresponding contract alias function, please refer to the BFS link function.

-**Migration Instructions:** Due to the abandonment of the CNS interface, BFS contains the functions of the CNS and also provides the corresponding adaptation interface.。You can change the original CNS service interface to the BFS interface. 
The interface corresponds to the following table:
+**Migration Instructions:** Since the CNS interface is deprecated, BFS includes the functions of CNS and provides corresponding adapter interfaces. You can switch the original CNSService interfaces to the BFSService interfaces according to the following table:

| Method Name| CNSService | BFSService |
| ------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |

@@ -272,14 +272,14 @@ Obtain the description information of the specified user table。

Rights management interfaces include the following three interfaces:

- Query interface without permission;
-- Governance Committee-specific interface: An interface that has the private key of the governance committee to initiate transactions in order to execute correctly.;
-- Administrator-specific interface: An interface where transactions initiated by an administrator's private key with administrative privileges on the corresponding contract can be executed correctly.。
+- Governance-committee-specific interface: an interface that executes correctly only for transactions initiated with a governance committee member's private key;
+- Administrator-specific interface: an interface that executes correctly only for transactions initiated with the private key of an administrator who has administrative rights over the corresponding contract.

### 10.1 Query interface without permission

#### getCommitteeInfo

-At initialization, a governance committee is deployed whose address information is automatically generated or specified at build _ chain.sh.。Initialize only one member, and the weight of the member is 1。
+At initialization, a governance committee is deployed; its address information is automatically generated or specified in build_chain.sh. Only one member is initialized, and the weight of that member is 1.

**Parameters**

@@ -287,7 +287,7 @@ At initialization, a 
governance committee is deployed whose address information

**Return value**

-- CommitteeInfo: Details of the Governance Committee
+- CommitteeInfo: details of the governance committee

#### getProposalInfo

@@ -295,7 +295,7 @@ Get information about a specific proposal。

**Parameters**

-- proposalID: the ID number of the proposal
+- proposalID: ID number of the proposal

**Return value**

@@ -311,7 +311,7 @@ Get the permissions policy for the current global deployment

**Return value**

-- BigInteger: policy type: 0 is no policy, 1 is whitelist mode, 2 is blacklist mode
+- BigInteger: policy type: 0 is no policy, 1 is whitelist mode, 2 is blacklist mode

#### checkDeployAuth

@@ -319,25 +319,25 @@ Check whether an account has deployment permissions

**Parameters**

-- account: account address
+- account: account address

**Return value**

-- Boolean: Permission
+- Boolean: whether the account has permission

#### checkMethodAuth

-Check whether an account has the permission to call an interface of a contract.
+Check whether an account has the permission to call an interface of a contract

**Parameters**

-- contractAddr: contract address
-- func: function selector for the interface, 4 bytes
-- account: account address
+- contractAddr: contract address
+- func: function selector for the interface, 4 bytes
+- account: account address

**Return value**

-- Boolean: Permission
+- Boolean: whether the account has permission

#### getAdmin

@@ -345,11 +345,11 @@ Get the administrator address for a specific contract

**Parameters**

-- contractAddr: contract address
+- contractAddr: contract address

**Return value**

-- account: account address
+- account: account address

### 10.2 Special interfaces for governance committee accounts

@@ -357,16 +357,16 @@ There must be an account in the Governance Committee's Governors to call, and if

#### updateGovernor

-In the case of a new governance committee, add an address and weight.。If you are deleting a governance member, you can set the weight of a governance member to 0。
+In the case of a new governance 
committee member, add an address and weight. To delete a governance member, set that member's weight to 0.

**Parameters**

-- account: account address
-- weight: account weight
+- account: account address
+- weight: account weight

**Return value**

-- proposalId: returns the ID number of the proposal
+- proposalId: returns the ID number of the proposal

#### setRate

@@ -374,24 +374,24 @@ Set proposal threshold, which is divided into participation threshold and weight

**Parameters**

-- participatesRate: participation threshold, in percentage units
-- winRate: by weight threshold, percentage unit
+- participatesRate: participation threshold, as a percentage
+- winRate: win-by-weight threshold, as a percentage

**Return value**

-- proposalId: returns the ID number of the proposal
+- proposalId: returns the ID number of the proposal

#### setDeployAuthType

-Set the ACL policy for deployment. Only white _ list and black _ list policies are supported.
+Set the ACL policy for deployment. Only white_list and black_list policies are supported

**Parameters**

-- deployAuthType: When type is 1, it is set to a whitelist. 
When type is 2, it is set to a blacklist.。
+- deployAuthType: when type is 1, a whitelist is set; when type is 2, a blacklist is set.

**Return value**

-- proposalId: returns the ID number of the proposal
+- proposalId: returns the ID number of the proposal

#### modifyDeployAuth

@@ -399,12 +399,12 @@ Modify a deployment permission proposal for an administrator account

**Parameters**

-- account: account address
-- openFlag: whether to enable or disable permissions
+- account: account address
+- openFlag: whether to enable or disable the permission

**Return value**

-- proposalId: returns the ID number of the proposal
+- proposalId: returns the ID number of the proposal

#### resetAdmin

@@ -412,12 +412,12 @@ Resetting an administrator account proposal for a contract

**Parameters**

-- newAdmin: Account address
-- contractAddr: contract address
+- newAdmin: account address
+- contractAddr: contract address

**Return value**

-- proposalId: returns the ID number of the proposal
+- proposalId: returns the ID number of the proposal

#### revokeProposal

@@ -425,7 +425,7 @@ Undo the initiation of a proposal, an operation that only the governance committ

**Parameters**

-- proposalId: ID number of the proposal
+- proposalId: ID number of the proposal

**Return value**

@@ -437,8 +437,8 @@ vote on a proposal

**Parameters**

-- proposalId: ID number of the proposal
-- agree: Do you agree to this proposal?
+- proposalId: ID number of the proposal
+- agree: whether to vote in favor of the proposal

**Return value**

@@ -446,21 +446,21 @@ vote on a proposal

### 10.3 Special interface for contract administrator account

-Each contract has an independent administrator. Only the administrator account of a contract can set the interface permissions of the contract.。
+Each contract has an independent administrator. Only the administrator account of a contract can set the interface permissions of the contract.

#### setMethodAuthType

-Set the API call ACL policy of a contract. 
Only white _ list and black _ list policies are supported.
+Set the API call ACL policy of a contract. Only white_list and black_list policies are supported

**Parameters**

-- contractAddr: contract address
-- func: function selector for the contract interface, four bytes in length。
-- authType: When type is 1, it is set to a whitelist. When type is 2, it is set to a blacklist.。
+- contractAddr: contract address
+- func: function selector for the contract interface, four bytes in length
+- authType: when type is 1, a whitelist is set; when type is 2, a blacklist is set.

**Return value**

-- result: If it is 0, the setting is successful。
+- result: if it is 0, the setting is successful.

#### setMethodAuth

@@ -468,11 +468,11 @@ Modify the interface call ACL policy of a contract。

**Parameters**

-- contractAddr: contract address
-- func: function selector for the contract interface, four bytes in length。
-- account: account address
-- isOpen: whether to enable or disable permissions
+- contractAddr: contract address
+- func: function selector for the contract interface, four bytes in length
+- account: account address
+- isOpen: whether the permission is enabled or disabled

**Return value**

-- result: If it is 0, the setting is successful。
+- result: if it is 0, the setting is successful.

diff --git a/3.x/en/docs/sdk/java_sdk/quick_start.md b/3.x/en/docs/sdk/java_sdk/quick_start.md
index d5bdcd3e1..424379bb2 100644
--- a/3.x/en/docs/sdk/java_sdk/quick_start.md
+++ b/3.x/en/docs/sdk/java_sdk/quick_start.md
@@ -1,6 +1,6 @@
# Quick Start

-Tag: "java-sdk "" Introducing Java SDK "
+Tags: "java-sdk" "Introducing Java SDK"

----

@@ -57,12 +57,12 @@ mkdir -p conf && cp -r ~/fisco/nodes/127.0.0.1/sdk/* conf

There are two ways to use smart contracts in the SDK:

-- (Suitable for specific contract scenarios) Generate Java interface files for smart contracts. 
Java applications can directly deploy and invoke contracts based on Java interface files.。Reference: [Java interface file for generating smart contracts](./contracts_to_java.html)
-- (Suitable for general contract scenarios) Initiate a transaction request on its own according to the contract ABI assembly parameters。Reference: [Constructing Transactions and Calls](./assemble_transaction.html)
+- (Suitable for specific contract scenarios) Generate Java interface files for smart contracts. Java applications can directly deploy and invoke contracts based on the Java interface files. Reference: [Generating Java interface files for smart contracts](./contracts_to_java.html)
+- (Suitable for general contract scenarios) Initiate transaction requests by assembling parameters according to the contract ABI. Reference: [Constructing Transactions and Calls](./assemble_transaction.html)

### Step 5. Create a configuration file

-Create the configuration file "config.toml" in the project. For details, see [Configuration Wizard].(./config.html)For configuration, you can also refer to ["config-example.toml``](https://github.com/FISCO-BCOS/java-sdk/blob/master/src/test/resources/config-example.toml)
+Create the configuration file "config.toml" in the project. For details, see the [Configuration Wizard](./config.html); you can also refer to ["config-example.toml"](https://github.com/FISCO-BCOS/java-sdk/blob/master/src/test/resources/config-example.toml)

Please refer to Section 4 of this document, "Appendix III. Configuring with xml configuration", for configuration via "xml".

@@ -101,11 +101,11 @@ public class BcosSDKTest

### Appendix I. Configuring with XML Configurations

-To adapt to more scenarios, the Java SDK supports initializing the 'BcosSDK' with 'xml'. 
For example, see ['applicationContext' in the Java SDK source code.-sample.xml`](https://github.com/FISCO-BCOS/java-sdk/blob/master/src/test/resources/applicationContext-sample.xml), refer to [Configuration Description] for the meaning of configuration items.(./config.md).
+To adapt to more scenarios, the Java SDK supports initializing the 'BcosSDK' with 'xml'. For an example, see ['applicationContext-sample.xml'](https://github.com/FISCO-BCOS/java-sdk/blob/master/src/test/resources/applicationContext-sample.xml) in the Java SDK source code; refer to [Configuration Description](./config.md) for the meaning of the configuration items.

Before initializing the 'BcosSDK' through the 'xml' configuration file, you need to introduce 'spring'.

-**Using the 'applicationContext-sample 'Initialize' BcosSDK 'as follows**:
+**Use 'applicationContext-sample' to initialize 'BcosSDK' as follows**:

```java
ApplicationContext context =
@@ -137,20 +137,20 @@ public class ConfigProperty {

// AMOP configuration options, which currently include the following:
// topicName: Subscribed AMOP topic
// publicKeys: In the private AMOP topic, define the list of public keys of other clients that are allowed to receive messages from this client, which is used for topic authentication
- // privateKey: In the private AMOP topic, define the private key of the client for topic authentication.
- // password: If the client private key is a p12 file, this configuration item defines the password for loading the private key file.
+ // privateKey: In the private AMOP topic, define the private key of the client for topic authentication
+ // password: If the client private key is a p12 file, this configuration item defines the password for loading the private key file
public List amop;

// Account configuration items, including the following:
- // keyStoreDir: Save path of the account private key. The default value is account.
- // accountFilePath: Load the account road from the profile. 
+ // keyStoreDir: Save path of the account private key. The default value is account
+ // accountFilePath: path of the account file loaded from the configuration file
// accountFileFormat: Account format, currently supports pem and p12
// accountAddress: Loaded account address
// password: Define the password to access the account private key when loading the p12 type account private key
public Map account;

// Thread pool configuration items, which mainly include the following:
- // threadPoolSize: The number of threads that process RPC message packets. The default value is the number of CPU core threads.
+ // threadPoolSize: The number of threads that process RPC message packets. The default value is the number of CPU cores
public Map threadPool;
}
```

diff --git a/3.x/en/docs/sdk/java_sdk/remote_sign_assemble_transaction.md b/3.x/en/docs/sdk/java_sdk/remote_sign_assemble_transaction.md
index 31be67313..093de0c32 100644
--- a/3.x/en/docs/sdk/java_sdk/remote_sign_assemble_transaction.md
+++ b/3.x/en/docs/sdk/java_sdk/remote_sign_assemble_transaction.md
@@ -1,10 +1,10 @@
# Integrate external signature services to construct transactions

-Tag: "java-sdk "" send transaction "" external signature "" assembly transaction "" contract invocation ""
+Tags: "java-sdk" "send transaction" "external signature" "assembly transaction" "contract invocation"

----

-[AssembleTransactionProcessor](./assemble_transaction.md)Common contract operation interfaces have been supported and covered。However, in real business scenarios, for some specific business scenarios, you need to call the hardware encryption machine or remote signing service to sign the hash.。To this end, we further provide AssembleTransactionWithRemoteSignProcessor on top of AssembleTransactionProcessor to facilitate user integration with custom signing services.。
+[AssembleTransactionProcessor](./assemble_transaction.md) already supports and covers common contract operation interfaces. However, in real business scenarios, for 
some specific scenarios, you need to call a hardware encryption machine or a remote signing service to sign the transaction hash. To this end, we provide AssembleTransactionWithRemoteSignProcessor on top of AssembleTransactionProcessor to facilitate integration with custom signing services.
## 1. Concept resolution: deployment and invocation
@@ -12,11 +12,11 @@ Concepts related to deployment and invocation (transactions and queries) can be
## 2. Get started quickly
-SDK supports synchronous and asynchronous ways to invoke contracts。In the Quick Start section, first show the use of synchronous methods to call the contract.。
+The SDK supports both synchronous and asynchronous contract invocation. This Quick Start section first shows how to call a contract synchronously.
### 2.1 Prepare abi and binary files
-The console provides a specialized tool for compiling contracts that allows developers to integrate Solidity / webankblockchain-liquid (hereinafter referred to as WBC-Liquid) contract file compilation to generate Java files and abi, binary files, the specific use of [reference here](./quick_start.html#contract2java-sh)。
+The console provides a dedicated contract compilation tool that allows developers to compile Solidity / webankblockchain-liquid (hereinafter referred to as WBC-Liquid) contract files to generate Java files, abi, and binary files; for specific usage, see [the reference here](./quick_start.html#contract2java-sh).
By running the contract2java script, the generated abi and binary files are located in the contracts/sdk/abi and contracts/sdk/bin directories respectively (the files generated by the SM (national cryptography) version are located in the contracts/sdk/abi/sm and contracts/sdk/bin/sm folders respectively). You can copy the files to the project directory, such as src/main/resources/abi and src/main/resources/bin.
@@ -65,9 +65,9 @@ Initialize the SDK based on the configuration file, such as:
```java
// Initialize the BcosSDK object
BcosSDK sdk = BcosSDK.build(configFile);
-/ / Obtain the client object. The group name is group0.
+// Obtain the client object. The group name is group0
Client client = sdk.getClient("group0");
-/ / To construct an AssembleTransactionProcessor object, you must pass in the client object, the CryptoKeyPair object, and the path where the abi and binary files are stored.。The abi and binary files need to be copied to the defined folder in the previous step。
+// To construct an AssembleTransactionProcessor object, you must pass in the client object, the CryptoKeyPair object, and the path where the abi and binary files are stored. The abi and binary files need to be copied to the folder defined in the previous step.
CryptoKeyPair keyPair = client.getCryptoSuite().getCryptoKeyPair();
```
@@ -88,7 +88,7 @@ public interface RemoteSignProviderInterface {
}
```
-Users can implement the 'requestForSign and requestForSignAsync' interfaces on demand to implement the logic of calling external signature services and returning results synchronously or asynchronously.。The specific business logic is encapsulated autonomously depending on the business scenario, either calling the hardware signing machine service or calling an externally managed signature service.。The 'handleSignedTransaction' interface defined in 'RemoteSignCallbackInterface' is automatically called back when the result of the asynchronous signature interface is returned。The interface is defined as follows:
+Users can implement the `requestForSign` and `requestForSignAsync` interfaces as needed to call an external signature service and return results synchronously or asynchronously. The specific business logic is encapsulated by the user depending on the scenario, for example calling a hardware signing machine service or an externally managed signature service. The `handleSignedTransaction` method defined in `RemoteSignCallbackInterface` is automatically called back when the
result of the asynchronous signature interface is returned. The interface is defined as follows:
```java
public interface RemoteSignCallbackInterface {
@@ -102,11 +102,11 @@ public interface RemoteSignCallbackInterface {
}
```
-For demonstration purposes, we create a Mock class of an external signature service(Code Location'src / integration-test/java/org/fisco/bcos/sdk/v3/test/transaction/mock/RemoteSignProviderMock` )This class simulates the synchronous signature interface 'requestForSign' and the asynchronous signature interface 'requestForSignAsync'。
+For demonstration purposes, we create a mock class of an external signature service (code location: `src/integration-test/java/org/fisco/bcos/sdk/v3/test/transaction/mock/RemoteSignProviderMock`). This class simulates the synchronous signature interface `requestForSign` and the asynchronous signature interface `requestForSignAsync`.
#### 2.3.2 Deployment, trading and querying
-Java SDK provides a way to directly deploy and invoke contracts based on abi and binary files。This scenario applies to the default situation, by creating and using the 'AssembleTransactionWithRemoteSignProcessor' object to complete contract-related deployment, invocation, and query operations.。Here, suppose we create an externally signed Mock class' RemoteSignProviderMock'。
+The Java SDK provides a way to directly deploy and invoke contracts based on abi and binary files. This covers the default scenario: create and use an `AssembleTransactionWithRemoteSignProcessor` object to complete contract deployment, invocation, and query operations. Here, suppose we have created the external-signature mock class `RemoteSignProviderMock`.
```java
// The remoteSignProviderMock object must implement the RemoteSignCallbackInterface interface
@@ -117,7 +117,7 @@ AssembleTransactionWithRemoteSignProcessor assembleTransactionWithRemoteSignProc
#### 2.3.3 Transactions and queries only
-If you only trade and query, but do
not deploy the contract, then you do not need to copy the binary file, and you do not need to pass in the path of the binary file during construction, for example, the binary path parameter can be passed in an empty string.。
+If you only send transactions and queries but do not deploy contracts, you do not need to copy the binary file, nor pass in the binary file path during construction; for example, the binary path parameter can be an empty string.
```java
// The remoteSignProviderMock object must implement the RemoteSignCallbackInterface interface
@@ -128,14 +128,14 @@ AssembleTransactionWithRemoteSignProcessor assembleTransactionWithRemoteSignProc
### 2.4 Send operation instruction
-After initializing the SDK and configuring objects, you can initiate contract operation instructions.。
+After initializing the SDK and configuring the objects, you can initiate contract operations.
#### 2.4.1 Deploy contracts synchronously
-The deployment contract calls the 'deployByContractLoader' method, passes in the contract name and constructor parameters, uploads the deployment contract, and obtains the result of the 'TransactionResponse'.。
+To deploy a contract, call the `deployByContractLoader` method, pass in the contract name and constructor parameters, send the deployment transaction, and obtain the `TransactionResponse` result.
```java
-/ / Deploy the HelloWorld contract。The first parameter is the contract name and the second parameter is the list of contract constructors, which is of type List < Object >。
+// Deploy the HelloWorld contract. The first parameter is the contract name, and the second parameter is the list of constructor parameters, of type List<Object>
TransactionResponse response = assembleTransactionWithRemoteSignProcessor.deployByContractLoader("HelloWorld", new ArrayList<>());
```
@@ -146,8 +146,8 @@ The data structure of `TransactionResponse` is as follows:
- returnMessages: Error message returned.
- 
TransactionReceipt: transaction receipt returned on the chain.
- ContractAddress: Address of the contract deployed or invoked.
-- values: If the called function has a return value, it returns the parsed transaction return value and a string in JSON format.。
-- events: If there is a trigger log record, the parsed log return value is returned, and a string in JSON format is returned.。
+- values: If the called function has a return value, the parsed return value is returned as a JSON-formatted string.
+- events: If logs were triggered, the parsed log values are returned as a JSON-formatted string.
- receiptMessages: Returns the parsed transaction receipt information.
For example, deploying the `HelloWorld` contract returns:
@@ -183,7 +183,7 @@ Calling a contract transaction uses' sendTransactionAndGetResponseByContractLoad
```java
// Create a parameter to call the transaction function. Here, one parameter is passed in
List params = Lists.newArrayList("test");
-/ / Call the HelloWorld contract. The contract address is helloWorldAddress, the function name is set, and the function parameter type is params.
+// Call the HelloWorld contract. The contract address is helloWorldAddress, the function name is "set", and the function parameters are params
TransactionResponse transactionResponse = assembleTransactionWithRemoteSignProcessor.sendTransactionAndGetResponse(
helloWorldAddrss, abi, "set", params);
```
@@ -214,12 +214,12 @@ For example, calling the `HelloWorld` contract returns the following:
}
```
-#### 2.4.3 Call the contract query interface.
+#### 2.4.3 Call the contract query interface
-Query contracts can return results directly by calling the node query function on the chain without consensus.;So all query transactions are synchronized。Querying a contract uses the 'sendCallByContractLoader' function to query the contract.
This section shows how to call the 'name' function in 'HelloWorld' to query the contract.。
+Query calls return results directly from the node's on-chain query function without consensus, so all queries are synchronous. Querying a contract uses the `sendCallByContractLoader` function. This section shows how to call the `name` function in `HelloWorld` to query the contract.
```java
-/ / Query the name function of the HelloWorld contract. The contract address is helloWorldAddress and the parameter is empty.
+// Query the name function of the HelloWorld contract. The contract address is helloWorldAddress and the parameter list is empty
CallResponse callResponse1 = assembleTransactionWithRemoteSignProcessor.sendCallByContractLoader("HelloWorld", helloWorldAddrss, "name", new ArrayList<>());
```
@@ -240,7 +240,7 @@ The query function returns the following:
## 3. More operation interfaces
-When calling an external signature service, you can do so either synchronously or asynchronously.。Asynchronous calls can be made in a way such as callback or CompletableFuture。
+When calling an external signature service, you can do so either synchronously or asynchronously. Asynchronous calls can be made via a callback or a CompletableFuture.
### 3.1 Asynchronous operation contract by callback
@@ -248,9 +248,9 @@ When calling an external signature service, you can do so either synchronously o
When calling the external signature service asynchronously, you can customize the callback class and implement and override the callback handler function.
-The custom callback class needs to inherit the abstract class' RemoteSignCallbackInterface 'and implement the' handleSignedTransaction 'method.。
+The custom callback class needs to implement `RemoteSignCallbackInterface` and implement the `handleSignedTransaction` method.
-For example, we define a simple callback class。This callback class implements the effect of asynchronous
callbacks sending transactions to nodes.。
+For example, we define a simple callback class that, in its callback, asynchronously sends the signed transaction to the nodes.
```java
public class RemoteSignCallbackMock implements RemoteSignCallbackInterface {
@@ -318,7 +318,7 @@ assembleTransactionWithRemoteSignProcessor.sendTransactionAsync(helloWorldAddres
The SDK also supports asynchronous contract deployment using CompletableFuture encapsulation.
```java
-/ / Deploy the transaction asynchronously and obtain the CompletableFuture < TransactionReceipt > object
+// Deploy the contract asynchronously and obtain a CompletableFuture<TransactionReceipt> object
CompletableFuture future = assembleTransactionWithRemoteSignProcessor.deployAsync(abi, bin, new ArrayList<>());
// Define the business logic for a normal return
future.thenAccept(
@@ -338,7 +338,7 @@ future.exceptionally(
Same as deploying a contract.
```java
-/ / Deploy the transaction asynchronously and obtain the CompletableFuture < TransactionReceipt > object
+// Send the transaction asynchronously and obtain a CompletableFuture<TransactionReceipt> object
CompletableFuture future2 = assembleTransactionWithRemoteSignProcessor.sendTransactionAsync(
helloWorldAddrss, abi, "set", params);
// Define the business logic for a normal return
@@ -365,9 +365,9 @@ Inherited Interface Reference [AssembleTransactionWithRemoteSignProcessor](./ass
The detailed API functions are as follows.
- **void deployAsync(RawTransaction rawTransaction, RemoteSignCallbackInterface remoteSignCallbackInterface):** Input the RawTransaction packet of the deployment contract and the callback of the signature service to deploy the contract and automatically execute the callback function.
-- **void deployAsync(String abi, String bin, List\ params, RemoteSignCallbackInterface remoteSignCallbackInterface) :** The contract is deployed by passing in contract abi, bin, constructor parameters and the callback of the signature service, and the callback function is automatically
executed.。
-- **void deployByContractLoaderAsync(String contractName, List\ params, RemoteSignCallbackInterface remoteSignCallbackInterface):** Enter the contract name, construction parameters, and callback to deploy the contract asynchronously.
-- **void sendTransactionAndGetReceiptByContractLoaderAsync(String contractName, String to, String functionName, List\ params,RemoteSignCallbackInterface remoteSignCallbackInterface):** Call the contract name, contract address, function name, function parameters, and callback of the signature service. Send the transaction asynchronously.。
+- **void deployAsync(String abi, String bin, List\<Object\> params, RemoteSignCallbackInterface remoteSignCallbackInterface):** Deploys the contract by passing in the contract abi, bin, constructor parameters, and the signature-service callback; the callback function is executed automatically.
+- **void deployByContractLoaderAsync(String contractName, List\<Object\> params, RemoteSignCallbackInterface remoteSignCallbackInterface):** Pass in the contract name, constructor parameters, and callback to deploy the contract asynchronously.
+- **void sendTransactionAndGetReceiptByContractLoaderAsync(String contractName, String to, String functionName, List\<Object\> params, RemoteSignCallbackInterface remoteSignCallbackInterface):** Pass in the contract name, contract address, function name, function parameters, and the callback of the signature service. 
Send the transaction asynchronously.
- **CompletableFuture\<TransactionReceipt\> sendTransactionAsync(String to, String abi, String functionName, List\<Object\> params, RemoteSignCallbackInterface remoteSignCallbackInterface):** Pass in the contract address, abi, function name, function parameters, and the signature-service callback; the call returns synchronously, and the receipt is processed asynchronously through the CompletableFuture.
- **CompletableFuture\<TransactionReceipt\> sendTransactionAsync(String to, String abi, String functionName, List\<Object\> params):** Pass in the contract address, abi, function name, and function parameters; signing is synchronous and the call returns synchronously, with the receipt processed asynchronously through the CompletableFuture.
- **TransactionReceipt signAndPush(RawTransaction rawTransaction, String signatureStr):** Pass in the RawTransaction and the signature result, push them to the node, and synchronously receive the transaction receipt.
diff --git a/3.x/en/docs/sdk/java_sdk/retcode_retmsg.md b/3.x/en/docs/sdk/java_sdk/retcode_retmsg.md
index 3a469ae71..9c0c5103c 100644
--- a/3.x/en/docs/sdk/java_sdk/retcode_retmsg.md
+++ b/3.x/en/docs/sdk/java_sdk/retcode_retmsg.md
@@ -4,12 +4,12 @@ Tags: "TransactionResponse" "Response Code" "Error Message" "returnCode" "return
----
-## 1. The node returns the data structure.
+## 1. Data structure returned by the node
The data returned by the node through the RPC interface falls into the following cases; the detailed reasons are explained below:
-- Node is not successfully linked, RPC request error
-- The chain is successful on the node, including: execution failure, transaction rollback, precompiled contract execution error.
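The two response cases above can be told apart programmatically. Below is a minimal, hypothetical sketch in plain Java; the class and method names are illustrative (not part of the Java SDK), and a pre-parsed `Map` stands in for a real JSON library:

```java
import java.util.Map;

public class RpcResponseCheck {
    // Hypothetical helper, not part of the FISCO BCOS SDK: classifies a
    // JSON-RPC response that has already been parsed into a Map.
    static String analyze(Map<String, Object> response) {
        // An outermost "error" field means the RPC request itself failed,
        // i.e. the transaction never made it onto the chain.
        if (response.containsKey("error")) {
            Map<?, ?> error = (Map<?, ?>) response.get("error");
            return "RPC request failed: " + error.get("message");
        }
        // A "result" field means the request reached the chain; for
        // transactions, status 0 means success, anything else is an error.
        Map<?, ?> result = (Map<?, ?>) response.get("result");
        Object status = result.get("status");
        if (Integer.valueOf(0).equals(status)) {
            return "transaction executed successfully";
        }
        return "transaction execution error, status=" + status;
    }

    public static void main(String[] args) {
        System.out.println(analyze(Map.of("result", Map.of("status", 0))));
        System.out.println(analyze(Map.of("error",
                Map.of("code", -32600, "message", "INVALID_REQUEST"))));
    }
}
```

In a real application, the `Map` would come from whatever JSON parser the project already uses, and the status check would consult the node's documented status codes.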
+- The transaction did not make it onto the chain: an RPC request error occurred
+- The transaction made it onto the chain, but execution may still fail, including: execution failure, transaction rollback, and precompiled contract execution errors
Examples where the transaction is not successfully included on the chain are as follows:
@@ -53,25 +53,25 @@ An example of a successful link on a node is as follows:
After a user initiates a transaction or RPC request, the following steps are required to analyze the JSON result returned by the node from the RPC interface:
-- Determine whether there is an error in the outermost layer of the JSON. If there is an error, it means that the transaction has failed before entering the consensus, and the RPC request has failed.;The user performs specific operations according to the message in the error。
-- If there is a result structure, it means that the transaction is successfully linked and the RPC request returns success.
- - If it is a transaction, the status in the result is parsed. If it is 0, the execution is successful, and the others are execution errors.
-- If you call a precompiled contract, you need to decode the output based on the ABI.
+- Determine whether there is an error in the outermost layer of the JSON. If so, the transaction failed before entering consensus and the RPC request failed; the user acts on the message in the error.
+- If there is a result structure, the transaction was successfully included on the chain and the RPC request succeeded.
+  - If it is a transaction, parse the status in the result: 0 means execution succeeded; anything else is an execution error.
+  - If you called a precompiled contract, decode the output according to the ABI.
## 2. 
Error codes when the transaction is not successfully included on the chain
| error code | Error reason |
|--------|----------------------------------------------------|
| -32700 | Node-side JSON decoding fails, usually in RPC requests |
-| -32600 | Bad request, missing fields in JSON, etc.|
+| -32600 | Bad request, missing fields in JSON, etc. |
| -32601 | The requested RPC method does not exist |
| -32603 | Internal error, usually the node has an error |
| -32000 | The node has not started yet; usually appears on Pro and Max nodes |
| -32004 | The service has not been initialized yet; usually appears on Pro and Max nodes |
| -32005 | The requested group does not exist |
-| 10000 | The Nonce check fails, and the transaction request is usually sent repeatedly.|
-| 10001 | The block limit check fails. Generally, the high state of the SDK block is too far behind the node.|
-| 10002 | The trading pool is full.|
+| 10000 | The nonce check fails; usually the transaction request was sent repeatedly |
+| 10001 | The block limit check fails; generally, the SDK's block height lags too far behind the node's |
+| 10002 | The transaction pool is full |
| 10003 | Unknown error |
| 10004 | The transaction already exists in the transaction pool |
| 10005 | The transaction is already on the chain |
@@ -79,9 +79,9 @@ After a user initiates a transaction or RPC request, the following steps are req
| 10007 | Bad Group ID |
| 10008 | Wrong transaction signature |
| 10009 | Transaction request sent to the wrong group |
-| 10010 | Transactions in the trading pool have not been processed for more than 10 minutes.|
+| 10010 | A transaction in the transaction pool has not been processed for more than 10 minutes |
-## 3. RPC request error codes other than transactions.
+## 3.
RPC request error codes other than transactions
| error code | Error reason |
|--------|--------------------------------------|
@@ -108,7 +108,7 @@ After a user initiates a transaction or RPC request, the following steps are req
| 17 | ContractAddressAlreadyUsed | The deployed contract address already exists |
| 18 | PermissionDenied | Insufficient permissions to invoke or deploy contracts |
| 19 | CallAddressError | The requested contract address does not exist |
-| 21 | ContractFrozen | Contracts have been frozen.|
+| 21 | ContractFrozen | The contract has been frozen |
| 22 | AccountFrozen | The account has been frozen |
| 23 | AccountAbolished | The account has been abolished |
| 24 | ContractAbolished | The contract has been abolished |
@@ -117,7 +117,7 @@ After a user initiates a transaction or RPC request, the following steps are req
| 34 | WASMUnreachableInstruction | Error during WASM execution |
| 35 | WASMTrap | WASM execution failed |
-## 5. Precompiled contract error code.
+## 5. Precompiled contract error codes
Correspondence table between **Response code** and **Error Message**
diff --git a/3.x/en/docs/sdk/java_sdk/rpc_api.md b/3.x/en/docs/sdk/java_sdk/rpc_api.md
index b27f27e5c..9b6856632 100644
--- a/3.x/en/docs/sdk/java_sdk/rpc_api.md
+++ b/3.x/en/docs/sdk/java_sdk/rpc_api.md
@@ -4,17 +4,17 @@ Tags: "RPC" "Interface"
---------
-The Java SDK provides a Java API interface for blockchain application developers, where JSON-The RPC interface is encapsulated in the Client class, providing access to the FISCO BCOS 3.x node JSON-RPC interface support, providing support for deployment and invocation contracts。
+The Java SDK provides a Java API for blockchain application developers. The JSON-RPC interface is encapsulated in the Client class, providing access to the JSON-RPC interface of FISCO BCOS 3.x nodes and support for deploying and invoking contracts.
```eval_rst
..
note::
- Client interface declarations are in the `Client.java` file
- - The client is an object of the group dimension. For more information, see Quick Start < quick _ start.html > _ Initialize the client. When you initialize the client, you must pass in the group name
+ - The Client is a group-dimension object. For more information, see `Quick Start <quick_start.html>`_ to initialize the client. When initializing the client, the group name must be passed in
```
**Special note: There are two types of Client interfaces: those with a node parameter and those without. An interface with node lets the node RPC send the request to the specified connected node; if not specified, the node RPC sends the request to a random node.**
-**In addition, Client provides synchronous and asynchronous interfaces for each interface, and developers can identify whether the method name ends with Asyn or has a Callback callback parameter.。The following interface distances are all asynchronous and the interface of the specified node is taken as an example。**
+**In addition, Client provides synchronous and asynchronous versions of each interface; developers can identify the asynchronous ones by the method name ending with Async or by a Callback parameter. The descriptions below all take the asynchronous interfaces of a specified node as examples.**
**Curl call description: SSL authentication is enabled by default for the RPC interface of the node. The following commands use curl to call the interface without an SSL certificate. 
You need to disable SSL authentication for the node's RPC interface. To do so, modify the configuration file /fisco/nodes/127.0.0.1/node0/config.ini and restart the node after the change**
@@ -34,10 +34,10 @@ The transaction publishing asynchronous interface, after receiving the response
**Parameters**
-- node: allows RPC to send requests to the specified node
-- signedTransactionData: transactions after signature
-- withProof: return whether to bring Merkel tree proof
-- callback: After the SDK receives the packet return from the node, it calls the callback function. The callback function will bring the transaction receipt.。
+- node: allows RPC to send the request to the specified node
+- signedTransactionData: the signed transaction
+- withProof: whether to return the Merkle tree proof
+- callback: After the SDK receives the response from the node, it calls the callback function. The callback carries the transaction receipt.
**Return value**
@@ -63,10 +63,10 @@ Send a request to the node, call the contract constant interface。
**Parameters**
-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send the request to the specified node
- transaction: Contract invocation information, including the contract address, the contract caller, and the abi encoding of the invoked contract interface and parameters
-- sign: Yes(Contract address, call parameters)The user address corresponding to the signature can be recovered on the chain. The interface of this parameter is only available after the 3.4.0 version of the node.。
-- callback: The callback function returns the return result of the contract constant interface, including the current block height, interface execution status information, and interface execution result.
+- sign: the signature over the (contract address, call parameters); the user address corresponding to the signature can be recovered on the chain. 
This parameter is only available on nodes of version 3.4.0 and later.
+- callback: The callback function returns the result of the contract constant interface, including the current block height, the execution status, and the execution result.
**Return value**
@@ -90,7 +90,7 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"call","params":["group0","","0xc
}
```
-Note: After version 3.4.0, the Call with sign interface is supported, and the private key is used when initiating a static call request to the request body.(to+data)After signing, the user address corresponding to the signature will be restored on the node side, and the tx.origin and msg.sender of the call request can be obtained from the contract to achieve the purpose of user identity authentication.。
+Note: Since version 3.4.0, the Call-with-sign interface is supported: when initiating a static call, the private key signs the request body (to+data); the node recovers the user address corresponding to the signature, and the contract can obtain the tx.origin and msg.sender of the call request, achieving user identity authentication.
```shell
# Request
@@ -114,9 +114,9 @@ Query contract code information corresponding to a specified contract address as
**Parameters**
-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send the request to the specified node
- address: Contract Address
-- callback: callback function, containing the contract code corresponding to the contract address。
+- callback: callback function, containing the contract code corresponding to the contract address.
**Return value**
@@ -141,9 +141,9 @@ Query contract ABI information corresponding to a specified contract address asy
**Parameters**
-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send the request to the specified node
- address: Contract Address
-- callback: 
callback function, contract address corresponding to the contract ABI JSON。
+- callback: callback function, the contract ABI JSON corresponding to the contract address.
**Return value**
@@ -174,8 +174,8 @@ Obtain the latest block height of the group corresponding to the client object
**Parameters**
-- node: allows RPC to send requests to the specified node
-- callback: callback after obtaining the block height, the latest block height of the group corresponding to the client object。
+- node: allows RPC to send the request to the specified node
+- callback: callback after obtaining the block height: the latest block height of the group corresponding to the client object.
**Return value**
@@ -198,12 +198,12 @@ curl -X POST --data '{"jsonrpc":"2.0","method":"getBlockNumber","params":[],"id"
### 2.2 getTotalTransactionCountAsync
-Obtain the transaction statistics of the client group, including the number of transactions on the chain and the number of failed transactions on the chain.。
+Obtain the transaction statistics of the client's group, including the number of transactions on the chain and the number of failed transactions on the chain.
**Parameters**
-- node: allows RPC to send requests to the specified node
-- callback: callback after obtaining transaction information, TotalTransactionCount: Transaction statistics, including:
+- node: allows RPC to send the request to the specified node
+- callback: callback after obtaining transaction information, TotalTransactionCount: transaction statistics, including:
- txSum: Total amount of transactions on the chain
- blockNumber: Current block height of the group
- failedTxSum: Total amount of abnormal transactions executed on the chain
@@ -236,13 +236,13 @@ Obtain block information according to block height。
**Parameters**
-- node: allows RPC to send requests to the specified node;
- blockNumber: Block height;
-- onlyHeader: true / false, indicating whether only the block header 
data is obtained in the obtained block information.;
-- onlyTxHash: true / false, indicating whether the obtained block information contains complete transaction information.;
- - false: The block returned by the node contains complete transaction information.;
+- onlyHeader: true/false, indicating whether only the block header is returned;
+- onlyTxHash: true/false, indicating whether the returned block contains complete transaction information;
+  - false: the block returned by the node contains complete transaction information;
- true: The block returned by the node contains only the transaction hash.
-- callback: callback after the block is completed, query the obtained block information
+- callback: callback invoked after the block is obtained, containing the queried block information
**Return value**
@@ -285,13 +285,13 @@ Obtain block information based on block hash。
**Parameters**
-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send the request to the specified node
- blockHash: Block Hash
-- onlyHeader: true / false, indicating whether only the block header data is obtained in the obtained block information.;
+- onlyHeader: true/false, indicating whether only the block header is returned;
- onlyTxHash: true / false, indicating whether the obtained block information contains complete transaction information;
- true: The block returned by the node contains only the transaction hash;
- - false: The block returned by the node contains complete transaction information.。
+  - false: the block returned by the node contains complete transaction information.
+- callback: callback invoked after the block is obtained, containing the queried block information
**Return value**
@@ -359,9 +359,9 @@ Obtain block hash based on block height
**Parameters**
-- node: allows 
RPC to send requests to the specified node
+- node: allows RPC to send the request to the specified node
- blockNumber: Block height
-- callback: the callback after the callback is obtained, specifying the block hash corresponding to the block height
+- callback: the callback after obtaining the result: the block hash corresponding to the given block height
**Return value**
@@ -387,10 +387,10 @@ Get transaction information based on transaction hash。
**Parameters**
-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send the request to the specified node
- transactionHash: Transaction Hash
-- withProof: whether to bring Merkel tree proof
-- callback: the callback when the transaction is obtained, specifying the transaction information corresponding to the hash。
+- withProof: whether to include the Merkle tree proof
+- callback: the callback when the transaction is obtained: the transaction information corresponding to the hash.
**Return value**
@@ -437,10 +437,10 @@ Get transaction receipt information based on transaction hash。
**Parameters**
-- node: allows RPC to send requests to the specified node
+- node: allows RPC to send the request to the specified node
- transactionHash: Transaction Hash
-- withProof: return whether to bring Merkel tree proof
-- callback: callback when obtaining transaction receipt, BcosTransactionReceipt: Receipt information corresponding to the transaction hash。
+- withProof: whether to return the Merkle tree proof
+- callback: callback when getting the transaction receipt, BcosTransactionReceipt: receipt information corresponding to the transaction hash.
**Return value**
@@ -489,8 +489,8 @@ Get the number of unprocessed transactions in the transaction pool。
**Parameters**
-- node: allows RPC to send requests to the specified node
-- callback: callback when obtaining transaction receipt, PendingTxSize: Number of unprocessed transactions in the trading pool。
+- node: allows RPC to send the request to the specified node
+- callback: 
callback invoked when the pending transaction count is obtained, PendingTxSize: number of unprocessed transactions in the transaction pool。 **Return value** @@ -528,7 +528,7 @@ Obtain the network connection information of the specified node。 **Parameters** -- callback: callback after getting, Peers: Network connection information for the specified node。 +- callback: callback invoked after the query, Peers: network connection information for the specified node。 **Return value** @@ -602,8 +602,8 @@ Get Node Synchronization Status。 **Parameters** -- node: allows RPC to send requests to the specified node -- callback: callback after obtaining synchronization information, SyncStatus: Blockchain node synchronization status。 +- node: allows RPC to send requests to the specified node +- callback: callback invoked after the synchronization information is obtained, SyncStatus: blockchain node synchronization status。 **Return value** @@ -629,9 +629,9 @@ Gets the value of the system configuration item based on the specified configura **Parameters** -- node: allows RPC to send requests to the specified node +- node: allows RPC to send requests to the specified node - key: System Configuration Item -- callback: callback after obtaining the configuration item, SystemConfig: Value of System Configuration Item。 +- callback: callback invoked after the configuration item is obtained, SystemConfig: value of the system configuration item。 **Return value** @@ -661,7 +661,7 @@ Obtain the data version number of the current blockchain。 **Parameters** -- callback: EnumNodeVersion.Version, the data version number of the blockchain +- callback: EnumNodeVersion.Version, the data version number of the blockchain **Return value** @@ -675,8 +675,8 @@ Obtain the observation node list of the group corresponding to the client。 **Parameters** -- node: allows RPC to send requests to the specified node -- callback: callback after getting the node list, ObserverList: Watch Node List。 +- node: allows RPC to send requests to the specified node +- callback: callback after 
obtaining the node list, ObserverList: observer node list。 **Return value** @@ -704,8 +704,8 @@ Obtain the consensus node list of the client group。 **Parameters** -- node: allows RPC to send requests to the specified node -- callback: callback after getting the node list +- node: allows RPC to send requests to the specified node +- callback: callback invoked after the node list is obtained **Return value** @@ -744,7 +744,7 @@ Obtain PBFT view information when a node uses the PBFT consensus algorithm。 **Parameters** -- node: allows RPC to send requests to the specified node +- node: allows RPC to send requests to the specified node - callback:PbftView: PBFT View Information。 **Return value** @@ -771,8 +771,8 @@ Get Node Consensus Status。 **Parameters** -- node: allows RPC to send requests to the specified node -- callback: the callback after obtaining the status.: Node consensus state。 +- node: allows RPC to send requests to the specified node +- callback: callback invoked after the status is obtained: node consensus status。 **Return value** @@ -800,7 +800,7 @@ Query the status information of the current group。 **Parameters** -- callback: callback after status information is queried, BcosGroupInfo: Queried group status information。 +- callback: callback invoked after the status information is queried, BcosGroupInfo: the queried group status information。 **Return value** @@ -870,7 +870,7 @@ Get the list of groups for the current node。 **Parameters** -- callback: callback after obtaining the group list, BcosGroupList: List of groups for the current node。 +- callback: callback invoked after the group list is obtained, BcosGroupList: list of groups for the current node。 **Return value** @@ -902,7 +902,7 @@ Gets the list of nodes connected to the specified group of the current node。 **Parameters** -- callback: callback after getting the node list, GroupPeers: Specify the list of nodes to which the group is +- callback: callback invoked after the node list is obtained, GroupPeers: the list of nodes to which the specified group is 
connected。 **Return value** @@ -932,7 +932,7 @@ Obtain the current node group information list。 **Parameters** -- callback: callback after obtaining group information, BcosGroupInfoList: Current node group information list。 +- callback: callback invoked after the group information is obtained, BcosGroupInfoList: current node group information list。 **Return value** @@ -1005,7 +1005,7 @@ Obtain information about a specified node in a group。 **Parameters** - node: Specify node name -- callback: callback after obtaining information, BcosGroupNodeInfo: Query the obtained node information。 +- callback: callback invoked after the information is obtained, BcosGroupNodeInfo: the queried node information。 **Return value** @@ -1103,4 +1103,4 @@ Determine whether serial execution is on the chain ### 5.6 getNegotiatedProtocol -Obtain the maximum and minimum value of the protocol number after the SDK and the node handshake. The first 16 bytes of the obtained int are the maximum value, and the last 16 bytes are the minimum value. +Obtain the maximum and minimum value of the protocol number after the SDK and the node handshake. 
The upper 16 bits of the returned int are the maximum value, and the lower 16 bits are the minimum value diff --git a/3.x/en/docs/sdk/java_sdk/spring_boot_crud.md b/3.x/en/docs/sdk/java_sdk/spring_boot_crud.md index 8b2b6c1ec..f2b743234 100644 --- a/3.x/en/docs/sdk/java_sdk/spring_boot_crud.md +++ b/3.x/en/docs/sdk/java_sdk/spring_boot_crud.md @@ -1,6 +1,6 @@ # Maven SpringBoot Application Example -Tag: "spring-boot-crud "" 'Development Zone Block Chain Application " +Tags: "spring-boot-crud" "development of blockchain applications" --------- @@ -39,7 +39,7 @@ $ cp ~/fisco/nodes/127.0.0.1/sdk/* src/main/resources/conf/ ### Setting Up Configuration Files -`spring-boot-crud 'includes SDK configuration files (located in the' src / main / resources / applicationContext.xml 'path) and WebServer configuration files(Located in the 'src / main / resources / application.yml' path)。 +`spring-boot-crud` includes an SDK configuration file (located at the `src/main/resources/applicationContext.xml` path) and a WebServer configuration file (located at the `src/main/resources/application.yml` path)。 You need to configure the 'network.peers' configuration item of 'applicationContext.xml' based on the IP address and port of the blockchain node, as follows: @@ -59,7 +59,7 @@ You need to configure the 'network.peers' configuration item of 'applicationCont ... 
``` -Please refer to [here] for detailed instructions on SDK configuration in the project.(https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/java_sdk/configuration.html)。 +Please refer to [here] for detailed instructions on SDK configuration in the project(https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/sdk/java_sdk/configuration.html)。 WebServer is mainly configured with a listening port, which is' 45000 'by default, as follows: @@ -77,11 +77,11 @@ You can use IDEA to import and compile and install the project, or you can use t # Compile Project $ bash mvnw compile -# Install the project, and after installation, generate the fisco in the target / directory-bcos-spring-boot-crud-0.0.1-jar package for SNAPSHOT.jar +# Install the project. After installation, generate the jar package of fisco-bcos-spring-boot-crud-0.0.1-SNAPSHOT.jar in the target / directory $ bash mvnw install ``` -### Start Spring-boot-crud service +### Start the spring-boot-crud service **Method one:** @@ -89,10 +89,10 @@ Open IDEA to import and compile the project. 
After successful compilation, run ' **Method two:** -Jar package generated by using 'bash mvnw install' target / fisco-bcos-spring-boot-crud-0.0.1-SNAPSHOT.jar 'Start spring-boot-crud services: +Use the 'bash mvnw install' generated jar package 'target / fisco-bcos-spring-boot-crud-0.0.1-SNAPSHOT.jar' to start the spring-boot-crud service: ```shell -# Start Spring-boot-crud(After successful startup, the log of create client for group 1 success will be output) +# Start spring-boot-crud(After successful startup, the log of create client for group 1 success will be output) $ java -jar ./target/fisco-bcos-spring-boot-crud-0.0.1-SNAPSHOT.jar ``` @@ -100,7 +100,7 @@ $ java -jar ./target/fisco-bcos-spring-boot-crud-0.0.1-SNAPSHOT.jar ### Access user information on the chain API(KV set) -`spring-boot-crud 'implements an API for user information chaining based on the KV set interface, and chaining user information of the' Person 'type. The API statement is as follows: +'spring-boot-crud 'implements the user information chaining API based on the KV set interface, and chaining the user information of the' Person 'type. The API statement is as follows: ```java @Data @@ -128,7 +128,7 @@ $ curl -H "Content-Type: application/json" -X POST --data '{"name":"fisco", "age ### Query user information API on access chain(KV get) -`spring-boot-crud 'implements an API for querying user information on the chain based on the KV get interface. The API is declared as follows: +'spring-boot-crud 'implements an API for querying user information on the chain based on the KV get interface. The API is declared as follows: ```java @GetMapping("/get/{name}") @@ -152,16 +152,16 @@ $ curl http://localhost:45000/get/fisco ## Contribution code - We welcome and greatly appreciate your contribution, see [Code Contribution Process](../../community/pr.md)。 -- If the project is helpful to you, welcome star support! +-If the project is helpful to you, welcome star support! 
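The `/set` and `/get/{name}` endpoints above can be driven from any HTTP client. Below is a minimal Java sketch of the request shapes; the class and helper names are hypothetical, and only the endpoint paths, the JSON body fields (`name`, `age`), and the default WebServer port `45000` come from the example above:

```java
// Hypothetical helpers sketching the request shapes for the spring-boot-crud
// REST endpoints. Only the /set and /get/{name} paths, the Person fields
// (name, age), and the default port 45000 are taken from the documentation.
public class CrudClientSketch {

    // JSON body expected by POST /set, mirroring the Person type used on-chain.
    static String setBody(String name, int age) {
        return String.format("{\"name\":\"%s\", \"age\":%d}", name, age);
    }

    // URL for GET /get/{name}, which queries the user information back.
    static String getUrl(String host, int port, String name) {
        return String.format("http://%s:%d/get/%s", host, port, name);
    }

    public static void main(String[] args) {
        System.out.println(setBody("fisco", 28));
        System.out.println(getUrl("localhost", 45000, "fisco"));
    }
}
```

The strings these helpers produce mirror the `curl` invocations shown earlier; a real client would send them with `java.net.http.HttpClient` or any equivalent library once the service is running.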
## Join us -**FISCO BCOS Open Source Community**It is an active open source community in China, which has long provided all kinds of support and assistance to institutional and individual developers.。Thousands of technology enthusiasts from various industries have been researching and using FISCO BCOS。If you are interested in FISCO BCOS open source technology and applications, welcome to join the community for more support and help。 +The **FISCO BCOS Open Source Community** is an active open source community in China that has long provided all kinds of support and assistance to institutional and individual developers。Thousands of technology enthusiasts from various industries have been researching and using FISCO BCOS。If you are interested in FISCO BCOS open source technology and applications, you are welcome to join the community for more support and help。 ![](https://media.githubusercontent.com/media/FISCO-BCOS/LargeFiles/master/images/QR_image.png) ## Related Links -- To learn about the FISCO BCOS project, please refer to [FISCO BCOS Documentation](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/index.html)。 -- For more information about Java SDK projects, see [Java SDK Documentation](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/sdk/java_sdk/index.html)。 +- To understand the FISCO BCOS project, please refer to [FISCO BCOS Documentation](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/index.html)。 +- For more information about Java SDK projects, please refer to [Java SDK Documentation](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/sdk/java_sdk/index.html)。 - To understand spring boot, please refer to [Spring Boot official website](https://spring.io/guides/gs/spring-boot/)。 diff --git a/3.x/en/docs/sdk/java_sdk/spring_boot_starter.md b/3.x/en/docs/sdk/java_sdk/spring_boot_starter.md index 4e7d6d563..ca27e019f 100644 --- a/3.x/en/docs/sdk/java_sdk/spring_boot_starter.md +++ b/3.x/en/docs/sdk/java_sdk/spring_boot_starter.md @@ -1,6 +1,6 @@ # 
Gradle SpringBoot Application Example -Tag: "spring-boot-starter "" 'Development of blockchain applications " +Tags: "spring-boot-starter" "development of blockchain applications" --------- @@ -12,7 +12,7 @@ If you want to use the Java SDK+ Maven + To access smart contracts in SpringBoot To build a FISCO BCOS single-group blockchain (Air version), the specific steps [refer here](../../quick_start/air_installation.md)。 -## Download Spring-boot-starter, certificate copy +## Download spring-boot-starter, certificate copy ```eval_rst .. note:: @@ -23,7 +23,7 @@ To build a FISCO BCOS single-group blockchain (Air version), the specific steps git clone https://github.com/FISCO-BCOS/spring-boot-starter.git ``` -Enter Spring-boot-The starter project +Enter the spring-boot-starter project ```shell cd spring-boot-starter @@ -51,13 +51,13 @@ server.port=8080 Among them: -- Java SDK configuration configuration section and [Java SDK](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/sdk/java_sdk/config.html)Consistent。For this example, the user needs to: - - Replace network.peers with the actual listening address of the chain node。 - - cryptoMaterial.certPath is set to conf +- Java SDK configuration configuration section with [Java SDK](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/sdk/java_sdk/config.html)Consistent。For this example, the user needs to: + - replace network.peers with the actual listening address of the chain node。 + -cryptoMaterial.certPath is set to conf -- System configuration configuration section, you need to configure: - - system.hexPrivateKey is the clear text of the hexadecimal private key, which can be generated by running 'keyGeneration' in Demos.java (file path: src / test / java / org / example / demo / Demos.java)。The configuration is allowed to be empty. In this case, the system randomly generates a private key.。 - - system.groupId is set to the target group. The default value is group0. 
+-System configuration configuration configuration section, you need to configure: + -system.hexPrivateKey is the clear text of the hexadecimal private key, which can be generated by running 'keyGeneration' in Demos.java (file path: src / test / java / org / example / demo / Demos.java)。The configuration is allowed to be empty. In this case, the system randomly generates a private key。 + -system.groupId is set to the target group. The default value is group0 The Demos.java code is as follows: (**The latest project documents shall prevail**) @@ -119,7 +119,7 @@ public class Demos { ## Compile and run -You can run it directly in the idea, or you can compile it into an executable jar package and run it.。To compile the jar package as an example: +You can run it directly in the idea, or you can compile it into an executable jar package and run it。To compile the jar package as an example: ```shell cd spring-boot-starter @@ -127,7 +127,7 @@ bash gradlew bootJar cd dist ``` -Spring will be generated in dist directory-boot-starter-exec.jar, you can execute this jar package: +Spring-boot-starter-exec.jar is generated in the dist directory. 
You can execute this jar package: ```shell java -jar spring-boot-starter-exec.jar @@ -161,7 +161,7 @@ Return example: ## Join our community -**FISCO BCOS Open Source Community**It is an active open source community in China, which has long provided all kinds of support and assistance to institutional and individual developers.。Thousands of technology enthusiasts from various industries have been researching and using FISCO BCOS。If you are interested in FISCO BCOS open source technology and applications, welcome to join the community for more support and help。 +**FISCO BCOS Open Source Community**It is an active open source community in China, which has long provided all kinds of support and assistance to institutional and individual developers。Thousands of technology enthusiasts from various industries have been researching and using FISCO BCOS。If you are interested in FISCO BCOS open source technology and applications, welcome to join the community for more support and help。 ![](https://raw.githubusercontent.com/FISCO-BCOS/LargeFiles/master/images/QR_image.png) diff --git a/3.x/en/docs/sdk/java_sdk/transaction_data_struct.md b/3.x/en/docs/sdk/java_sdk/transaction_data_struct.md index ece532611..e482a7021 100644 --- a/3.x/en/docs/sdk/java_sdk/transaction_data_struct.md +++ b/3.x/en/docs/sdk/java_sdk/transaction_data_struct.md @@ -1,27 +1,27 @@ # Transaction and Receipt Data Structure and Assembly Process -Tag: "java-sdk "" 'Assembly Transaction ""' Data Structure "" 'Transaction "' Transaction Receipt" ' +Tags: "java-sdk" "assembly transaction" "data structure" "transaction" "transaction receipt" " --- ## 1. 
Transaction data structure interpretation -The transaction of 3.0 is defined in FISCO-BCOS warehouse in 'bcos-tars-protocol/bcos-tars-defined in protocol / tars / Transaction.tars', visible link: [Transaction.tars](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/bcos-tars-protocol/bcos-tars-protocol/tars/Transaction.tars)。The data structure is as follows: +The transaction of 3.0 is defined in 'bcos-tars-protocol / bcos-tars-protocol / tars / Transaction.tars' in the FISCO-BCOS repository. You can see the link: [Transaction.tars](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/bcos-tars-protocol/bcos-tars-protocol/tars/Transaction.tars)。The data structure is as follows: ```c++ module bcostars { struct TransactionData { - 1 optional int version; / / Transaction version number. Currently, there are three types of transactions: v0, v1, and v2. + 1 optional int version; / / Transaction version number. Currently, there are three types of transactions: v0, v1, and v2 2 optional string chainID; / / Chain name 3 optional string groupID; / / group name 4 optional long blockLimit; / / Block height of transaction limit execution 5 optional string nonce; / / Transaction uniqueness identification - 6 optional string to; / / The contract address of the transaction call. + 6 optional string to; / / The contract address of the transaction call 7 optional vector input; / / Parameters of the transaction call contract, encoded by ABI / Scale - 8 optional string abi; / / The JSON string of the ABI. We recommend that you add the ABI when deploying a contract. + 8 optional string abi; / / The JSON string of the ABI. We recommend that you add the ABI when deploying a contract 9 optional string value; / / v1 New transaction field, original transfer amount 10 optional string gasPrice; / / The new field in the v1 transaction. The unit price of gas during execution(gas/wei) - 11 optional long gasLimit; / / The upper limit of the gas used when the transaction is executed. 
+ 11 optional long gasLimit; / / The upper limit of the gas used when the transaction is executed 12 optional string maxFeePerGas; / / v1 new transaction field, EIP1559 reserved field 13 optional string maxPriorityFeePerGas; / / v1 new transaction field, EIP1559 reserved field 14 optional vector extension; / / v2 new fields for additional storage @@ -40,9 +40,9 @@ module bcostars { }; ``` -## 2. Transaction receipt data structure interpretation. +## 2. Transaction receipt data structure interpretation -Transaction receipts for 3.0 are defined in FISCO-BCOS warehouse in 'bcos-tars-protocol/bcos-tars-defined in protocol / tars / TransactionReceipt.tars', visible link: [TransactionReceipt.tars](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/bcos-tars-protocol/bcos-tars-protocol/tars/TransactionReceipt.tars)。The data structure is as follows: +The transaction receipt of 3.0 is defined in 'bcos-tars-protocol / bcos-tars-protocol / tars / TransactionReceipt.tars' in the FISCO-BCOS warehouse. You can see the link: [TransactionReceipt.tars](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/bcos-tars-protocol/bcos-tars-protocol/tars/TransactionReceipt.tars)。The data structure is as follows: ```c++ module bcostars { @@ -60,7 +60,7 @@ module bcostars { 5 optional vector output; / / Transaction execution return value 6 optional vector logEntries; / / Event list 7 optional long blockNumber;/ / Block height where the transaction is executed - 8 optional string effectiveGasPrice; / / The gas unit price (gas / wei) that takes effect when the transaction is executed. + 8 optional string effectiveGasPrice; / / The gas unit price (gas / wei) that takes effect when the transaction is executed }; struct TransactionReceipt { / / Transaction receipt type @@ -73,19 +73,19 @@ module bcostars { ## 3. 
The assembly process of the transaction -As shown above, the SDK needs to assemble the 'TransactionData' first, then assemble the transaction data structure as' Transaction ', and finally send it to the blockchain node.。Specific steps are as follows: +As shown above, the SDK needs to assemble the 'TransactionData' first, then assemble the transaction data structure as' Transaction ', and finally send it to the blockchain node。Specific steps are as follows: -- The actual parameters of the transaction call contract, encoded using ABI / Scale as the 'input' field; +- The actual parameters of the transaction call contract, using ABI / Scale encoding as the 'input' field; - Enter the 'blockLimit' field, which is usually the height of the current block+600; -- The 'nonce' field, which is a random hexadecimal string.; +- Incoming 'nonce' field, usually a random hexadecimal string; - Pass in other parameters to construct the 'TransactionData' structure object; - Hash the object of 'TransactionData', the hash calculation algorithm can be found in Section 4; -- Use the key to perform the signature calculation on the hash value (byte array) calculated in the previous step to obtain the signature; +-Use the key to perform signature calculation on the hash value (byte array) calculated in the previous step to obtain the signature; - Pass in other parameters to construct the 'Transaction' structure object; - Encode the 'Transaction' structure object using the 'Tars' encoding; - Get the final transaction raw data, send to the chain。 -## 4. TransactionData hash calculation algorithm and example. +## 4. 
TransactionData hash calculation algorithm and example TransactionData performs a hash calculation by assembling the bytes of all the fields in the object and finally performing a hash calculation on the byte array。C++An example of an implementation is as follows: @@ -151,9 +151,9 @@ if (getVersion() == TransactionVersion.V2.getValue()) { return byteArrayOutputStream.toByteArray(); ``` -## 5. TransactionReceiptData hash calculation algorithm and example. +## 5. TransactionReceiptData hash calculation algorithm and example -As described in Section 4, TransactionReceiptData's hash is also calculated by assembling the bytes of all the fields within the object and finally hashing the byte array.。C++An example of an implementation is as follows: +As described in Section 4, TransactionReceiptData's hash is also calculated by assembling the bytes of all the fields within the object and finally hashing the byte array。C++An example of an implementation is as follows: ```c++ int32_t version = boost::endian::native_to_big((int32_t)hashFields.version); diff --git a/3.x/en/docs/sdk/java_sdk/transaction_decode.md b/3.x/en/docs/sdk/java_sdk/transaction_decode.md index 687479a97..575e1c053 100644 --- a/3.x/en/docs/sdk/java_sdk/transaction_decode.md +++ b/3.x/en/docs/sdk/java_sdk/transaction_decode.md @@ -1,11 +1,11 @@ # Transaction Receipt Parsing -Tag: "java-sdk "" receipt resolution "" event resolution " +Tags: "java-sdk" "Receipt Parsing" "Event Parsing" ---- -A FISCO BCOS transaction is a request data sent to the blockchain system for deploying contracts, invoking contract interfaces, maintaining the life cycle of contracts, managing assets, and exchanging value.。When the transaction is confirmed, a transaction receipt will be generated, [transaction receipt](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/api.html#gettransactionreceipt)and [transactions](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/api.html#gettransactionbyhash)All are 
stored in blocks and are used to record information generated during the transaction execution process, such as result codes, events, and the amount of gas consumed.。Users can use the transaction hash to query the transaction receipt to determine whether the transaction is complete.。 +A FISCO BCOS transaction is request data sent to the blockchain system for deploying contracts, invoking contract interfaces, maintaining the life cycle of contracts, managing assets, and exchanging value。When the transaction is confirmed, a transaction receipt is generated。Both the [transaction receipt](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/api.html#gettransactionreceipt) and the [transaction](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/api.html#gettransactionbyhash) are stored in blocks and record information generated during transaction execution, such as result codes, events, and the amount of gas consumed。Users can query the transaction receipt by transaction hash to determine whether the transaction is complete。 -The transaction receipt contains three key fields: input, output, and logs.: +The transaction receipt contains three key fields: input, output, and logs: | Field| Type| Description| |:-------|:------------|:-----------------------------------| | output | String | The ABI-encoded hexadecimal string returned by the transaction| | logs | List\ | The event log list, which stores the event information of the transaction| -The transaction parsing function helps users parse transaction receipts into json data.。 +The transaction parsing function helps users parse transaction receipts into JSON data。 ## 1. 
Construct the TransactionDecoderInterface @@ -36,8 +36,8 @@ abi In the java client folder generated by the contract, take HelloWorld.sol as - **public TransactionResponse decodeReceiptWithValues(String abi, String functionName, TransactionReceipt receipt):** Parsing transaction receipts with function return values。 - **public TransactionResponse decodeReceiptWithoutValues(String abi, TransactionReceipt transactionReceipt):** Parsing transaction receipts without function return values。 - **public Map\\>\>\> decodeEvents(String abi, List\ logs):** Parsing transaction events。 -- **public TransactionResponse decodeReceiptStatus(TransactionReceipt receipt):** Parse the status of the receipt and error information, etc.。 -- **public String decodeRevertMessage(String output)**If the receipt error code is a rollback, parse the revert information in the output. +- **public TransactionResponse decodeReceiptStatus(TransactionReceipt receipt):** Parse the status of the receipt and error information, etc。 +- **public String decodeRevertMessage(String output)**If the receipt error code is a rollback, parse the revert information in the output ### Parsing contract function example @@ -53,9 +53,9 @@ function incrementUint256(uint256 v) public returns(uint256){ In the above code, first add 1 to the incoming parameter, then record the incremental event (event), and finally return the result。 -## 2. Resolve transactions with return values. +## 2. 
Resolve transactions with return values -The abi file of the incoming contract, the name of the calling function, and the transaction receipt to parse the transaction result.。 +The abi file of the incoming contract, the name of the calling function, and the transaction receipt to parse the transaction result。 ```Java TransactionResponse transactionResponse = decoder.decodeReceiptWithValues(abi, "incrementUint256", transactionReceipt); @@ -63,7 +63,7 @@ TransactionResponse transactionResponse = decoder.decodeReceiptWithValues(abi, " ### Example of parsing results -In the above function definition, there is a function return value, which also triggers the event call.。Our incoming value v is 1. After parsing the TransactionReceipt returned by the transaction execution, the corresponding result is as follows. +In the above function definition, there is a function return value, which also triggers the event call。Our incoming value v is 1. After parsing the TransactionReceipt returned by the transaction execution, the corresponding result is as follows ```json { @@ -99,7 +99,7 @@ In the above function definition, there is a function return value, which also t } ``` -The above parsed message contains the detailed field values of the data structure of the blockchain receipt.。In addition, the event and return value of the function are parsed.。 +The above parsed message contains the detailed field values of the data structure of the blockchain receipt。In addition, the event and return value of the function are parsed。 Parsed function event(event)and the return value, you can view the 'events' or 'eventResultMap' and 'values' or 'valuesList' fields。 @@ -117,7 +117,7 @@ Parsed function event(event)and the return value, you can view the 'events' or ' } ``` -## 3. Resolve transactions with no return value. +## 3. 
Resolve transactions with no return value In some scenarios, we don't care about the return value of the transaction, just parse the event triggered in the function(event)and the detailed data structure of the transaction receipt。 @@ -129,7 +129,7 @@ TransactionResponse transactionResponseWithoutValues = decoder.decodeReceiptWith ### Example of parsing results -Again, the above section calls the incrementUint256 function as an example, we still parse this transaction receipt, but do not parse the function return value, the return result is as follows. +Again, the above section calls the incrementUint256 function as an example, we still parse this transaction receipt, but do not parse the function return value, the return result is as follows ```json { @@ -165,7 +165,7 @@ Again, the above section calls the incrementUint256 function as an example, we s } ``` -The result of the above parsed message contains the detailed field values of the data structure of the blockchain receipt and the parsed function event.(event)。 +The result of the above parsed message contains the detailed field values of the data structure of the blockchain receipt and the parsed function event(event)。 Parsed function event(event)to view the 'events' or 'eventResultMap' field。 @@ -206,7 +206,7 @@ Or the above section calls the incrementUint256 function as an example, now the } ``` -## 5. Parse the error message of the receipt. +## 5. Parse the error message of the receipt Incoming transaction receipt, parse the returned data, and parse it into a TransactionResponse object。 @@ -226,7 +226,7 @@ function setBytesMapping(bytes[] bytesArray) public returns (bool) { } ``` -During the execution of the following function, the transaction execution fails and an error is reported after the require statement is executed。After parsing the TransactionReceipt returned by the transaction execution, the corresponding results are as follows. 
+During the execution of the following function, the transaction execution fails and an error is reported after the require statement is executed。After parsing the TransactionReceipt returned by the transaction execution, the corresponding results are as follows ```json { diff --git a/3.x/en/docs/sdk/nodejs_sdk/api.md b/3.x/en/docs/sdk/nodejs_sdk/api.md index 068209c33..91004b872 100644 --- a/3.x/en/docs/sdk/nodejs_sdk/api.md +++ b/3.x/en/docs/sdk/nodejs_sdk/api.md @@ -1,9 +1,9 @@ # Node.js API -Tag: "java-sdk`` ``Client`` +Tags: "java-sdk" "Client" ---- -The Node.js SDK provides a Node.js API interface for blockchain application developers to use as a service for external calls.。According to the function, Node.js API can be divided into the following categories: +The Node.js SDK provides a Node.js API interface for blockchain application developers to use as a service for external calls。According to the function, Node.js API can be divided into the following categories: - **Web3jService**: Provides access to FISCO BCOS 2.0+Node [JSON-RPC](../../api.md)Interface Support;Provides support for deploying and invoking contracts。 - **PrecompiledService**: @@ -18,9 +18,9 @@ The Node.js SDK provides a Node.js API interface for blockchain application deve ## API calling convention -- Before using a service, you first need to initialize the global 'Configuration' object to provide the necessary configuration information for each service。'Configuration 'object in' nodejs-sdk / packages / api / common / configuration.js', whose initialization parameter is the path of a configuration file or an object containing configuration items。For a description of the configuration items of the configuration file, see [Configuration Description](./configuration.md) -- Unless otherwise specified, the APIs provided by the Node.js SDK are**asynchronous**API。The actual return value of the asynchronous API is a [Promise] wrapped around the API return 
value.(https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)Object, developers can use [async / await syntax](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await)or [then... catch... finally method](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/then)Manipulate the Promise object to implement its own program logic
-- When an error occurs in the API and the logic cannot be continued (for example, the contract address does not exist), an exception is thrown directly. All exceptions are inherited from the Error class.
+- Before using a service, you first need to initialize the global `Configuration` object to provide the necessary configuration information for each service. The `Configuration` object is located in `nodejs-sdk/packages/api/common/configuration.js`; its initialization parameter is the path of a configuration file or an object containing the configuration items. For a description of the configuration items, see [Configuration Description](./configuration.md)
+- Unless otherwise specified, the APIs provided by the Node.js SDK are **asynchronous**. The actual return value of an asynchronous API is a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) object wrapping the API's return value; developers can use the [async/await syntax](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await) or the [then...catch...finally methods](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/then) to manipulate the Promise object and implement their own program logic
+- When an error occurs in the API and the logic cannot be continued (for example, the contract address does not exist), an exception will be thrown directly.
All exceptions are inherited from the Error class

## Web3jService

@@ -31,33 +31,33 @@ The Node.js SDK provides a Node.js API interface for blockchain application deve

| Interface Name| Description| Parameters| Return value|
| :--| :--| :-- | :-- |
| getBlockNumber | Get latest block height| None| Object, the result is in the result field\*\* |
-| getPbftView | Get PBFT View| None| Ibid.|
-| getObserverList | Get Observer Node List| None| Ibid.|
-| getSealerList | Get Consensus Node List| None| Ibid.|
-| getConsensusStatus | Get blockchain node consensus status| None| Ibid.|
-| getSyncStatus | Obtain the synchronization status of a blockchain node| None| Ibid.|
-| getClientVersion | Obtain blockchain node version information| None| Ibid.|
-| getPeers | Obtain the connection information of a blockchain node| None| Ibid.|
-| getNodeIDList | Get a list of nodes and their connected nodes| None| Ibid.|
-| getGroupPeers | Obtain the consensus node < br > and watch node list of the specified group| Ibid.|
-| getGroupList | Obtain the group ID list of the group to which the node belongs| None| Ibid.|
-| getBlockByHash | Obtain block information based on block hash| Block Hash| Ibid.|
-| getBlockByNumber | Obtain block information according to block height| Block height| Ibid.|
-| getBlockHashByNumber | Obtain block hash based on block height| Block height| Ibid.|
-| getTransactionByHash | Get transaction information based on transaction hash| Transaction Hash| Ibid.|
-| getTransactionByBlockHashAndIndex | Obtain transaction information based on the transaction's block hash and < br > transaction index.| Transaction-owned block hash < br > transaction index| Ibid.|
-| getTransactionByBlockNumberAndIndex | Get trading information based on the block height of the exchange, < br > trading index| Exchange-owned block height < br > Trading index| Ibid.|
-| getPendingTransactions | Get all unchained transactions in the transaction pool.| None| Ibid.|
-| getPendingTxSize | Get the number of
unchained transactions in the transaction pool| None| Ibid.| -| getTotalTransactionCount | Obtains the number of transactions on the chain of a specified group.| None| Ibid.| -| getTransactionReceipt | Get transaction receipt based on transaction hash| Transaction Hash| Ibid.| -| getCode | Contract data queried by contract address| Contract Address| Ibid.| -| getSystemConfigByKey | Get System Configuration| System configuration keyword. Currently, < br >: < br >- tx_count_limit
- tx_gas_limit| Ibid.|
-| sendRawTransaction | Send a signed transaction, which is then executed and agreed upon by the nodes on the chain| Accept variable number of parameters: When the number of parameters is 1, the parameters should be the RLP code of the transaction;When the number of parameters is 3, the parameters should be the contract address, method signature, and method parameters.| Ibid.|
-| deploy | Deployment contract| Contract Path < br > Output Path| Ibid.|
-| call | Call read-only contract| Contract Address < br > Call Interface\*< br > Parameter list| Ibid.|
-
-\*Call interface: function name(parameter type,...)For example: func(uint256,uint256)there can be no spaces between parameter types
+| getPbftView | Get PBFT view| None| Ibid|
+| getObserverList | Get observer node list| None| Ibid|
+| getSealerList | Get consensus node list| None| Ibid|
+| getConsensusStatus | Get blockchain node consensus status| None| Ibid|
+| getSyncStatus | Obtain the synchronization status of a blockchain node| None| Ibid|
+| getClientVersion | Obtain blockchain node version information| None| Ibid|
+| getPeers | Obtain the connection information of a blockchain node| None| Ibid|
+| getNodeIDList | Get a list of nodes and their connected nodes| None| Ibid|
+| getGroupPeers | Get the consensus node and observer node list of the specified group| None| Ibid|
+| getGroupList | Obtain the group ID list of the groups to which the node belongs| None| Ibid|
+| getBlockByHash | Obtain block information based on block hash| Block hash| Ibid|
+| getBlockByNumber | Obtain block information based on block height| Block height| Ibid|
+| getBlockHashByNumber | Obtain block hash based on block height| Block height| Ibid|
+| getTransactionByHash | Get transaction information based on transaction hash| Transaction hash| Ibid|
+| getTransactionByBlockHashAndIndex | Get transaction information based on the transaction's block hash and transaction index| Block hash of the transaction<br>Transaction index| Ibid|
+| getTransactionByBlockNumberAndIndex | Get transaction information based on the transaction's block height and transaction index| Block height of the transaction<br>Transaction index| Ibid|
+| getPendingTransactions | Get all pending (not yet on-chain) transactions in the transaction pool| None| Ibid|
+| getPendingTxSize | Get the number of pending (not yet on-chain) transactions in the transaction pool| None| Ibid|
+| getTotalTransactionCount | Obtain the number of on-chain transactions of the specified group| None| Ibid|
+| getTransactionReceipt | Get transaction receipt based on transaction hash| Transaction hash| Ibid|
+| getCode | Query contract data by contract address| Contract address| Ibid|
+| getSystemConfigByKey | Get system configuration| System configuration keyword, currently supported:<br>- tx_count_limit<br>- tx_gas_limit| Ibid|
+| sendRawTransaction | Send a signed transaction, which is then executed and agreed upon by the nodes on the chain| Accepts a variable number of parameters: when 1 parameter is given, it should be the RLP encoding of the transaction; when 3 parameters are given, they should be the contract address, method signature, and method parameters| Ibid|
+| deploy | Deploy contract| Contract path<br>Output path| Ibid|
+| call | Call read-only contract| Contract address<br>Call interface\*<br>Parameter list| Ibid|
+
+\*Call interface: function name(parameter type,...), for example: func(uint256,uint256); there can be no spaces between parameter types

## PrecompiledService

@@ -69,9 +69,9 @@ The Node.js SDK provides a Node.js API interface for blockchain application deve

| Interface Name| Description| Parameters| Return value|
| :--| :--| :-- | :-- |
-| grantUserTableManager | Set permission information based on user table name and external account address| Table name < br > External account address| Number, representing the number of rows in the permission table that were successfully overwritten|
-| revokeUserTableManager | Remove permission information based on user table name and external account address| Table name < br > External account address| Number, representing the number of rows in the permission table that were successfully overwritten|
-| listUserTableManager | Query the set permission record list according to the user table name(Each record contains the external account address and the active block height.) | Table Name| Array, the queried records|
+| grantUserTableManager | Set permission information based on user table name and external account address| Table name<br>External account address| Number, representing the number of rows in the permission table that were successfully overwritten|
+| revokeUserTableManager | Remove permission information based on user table name and external account address| Table name<br>External account address| Number, representing the number of rows in the permission table that were successfully overwritten|
+| listUserTableManager | Query the set permission record list according to the user table name (each record contains the external account address and the active block height)| Table name| Array, the queried records|
| grantDeployAndCreateManager | Add permissions to deploy contracts and create user tables for external account addresses| External Account Address| Number, representing the number of rows in the permission table that were successfully overwritten|
| revokeDeployAndCreateManager | Remove deployment contract and create user table permissions for external account addresses| External Account Address| Number, representing the number of rows in the permission table that were successfully overwritten|
| listDeployAndCreateManager | Querying the list of permission records that have permission to deploy contracts and create user tables| None| Array, the queried records|
@@ -85,7 +85,7 @@ The Node.js SDK provides a Node.js API interface for blockchain application deve
| revokeCNSManager | Remove Use CNS permission for an external account address| External Account Address| Number, representing the number of rows in the permission table that were successfully overwritten|
| listCNSManager | Querying the list of records that have permission to use the CNS| None| Array, the queried records|
| grantSysConfigManager | Increase the system parameter management permission of the external account address| External Account Address| Number, representing the number of rows in the permission table that were successfully overwritten|
-| revokeSysConfigManager | Remove the system parameter management permission of the external account address.| External Account Address| Number, representing the number of rows in the permission table that were successfully overwritten|
+| revokeSysConfigManager | Remove the system parameter management permission of the external account address| External Account Address| Number, representing the number of rows in the permission table that were successfully overwritten|
| listSysConfigManager | Query the list of records that have permission to manage system parameters| None| Array, the queried records|

### CNSService

@@ -96,10 +96,10 @@ The Node.js SDK provides a Node.js API interface for blockchain application deve

| Interface Name| Description| Parameters| Return value|
| :--| :--| :-- | :-- |
-| registerCns | Register CNS information based on contract name, contract version number, contract address, and contract abi| contract name < br > contract version number < br > contract address < br > contract abi| Number, representing the number of CNS entry records that were successfully incremented|
-| getAddressByContractNameAndVersion | Based on contract name and contract version number(The contract name and contract version number are concatenated with an English colon)Query Contract Address。If the contract version number is missing, the latest contract version is used by default.| Contract Name+ ':' + Version Number| Object, the CNS information queried|
+| registerCns | Register CNS information based on contract name, contract version number, contract address, and contract abi| Contract name<br>Contract version number<br>Contract address<br>Contract abi| Number, representing the number of CNS entry records that were successfully incremented|
+| getAddressByContractNameAndVersion | Query contract address based on contract name and contract version number (the contract name and contract version number are concatenated with an English colon); if the contract version number is missing, the latest contract version is used by default| Contract name + ':' + version number| Object, the CNS information queried|
+| queryCnsByName | Query CNS information based on contract name| Contract name| Array, the CNS information queried|
-| queryCnsByNameAndVersion | Query CNS information based on contract name and contract version number| Contract Name < br > Version Number| Ibid.|
+| queryCnsByNameAndVersion | Query CNS information based on contract name and contract version number| Contract name<br>Version number| Ibid|

### SystemConfigService

@@ -109,7 +109,7 @@ The Node.js SDK provides a Node.js API interface for blockchain application deve

| Interface Name| Description| Parameters| Return value|
| :--| :--| :-- | :-- |
-| setValueByKey | Set the corresponding value according to the key (the value corresponding to the query key, refer to**Web3jService**in the 'getSystemConfigByKey' interface)| Key Name < br > Value| Number, which represents the number of successfully modified system configurations|
+| setValueByKey | Set the corresponding value according to the key (for the value corresponding to the key, refer to the 'getSystemConfigByKey' interface in **Web3jService**)| Key name<br>Value| Number, which represents the number of successfully modified system configurations|

### ConsensusService

@@ -131,9 +131,9 @@ The Node.js SDK provides a Node.js API interface for blockchain application deve

| Interface Name| Description| Parameters| Return value|
| :--| :--| :-- | :-- |
-| createTable | Create Table| Table object < br > The table object needs to set its table name, primary key field name and other field names。where the other field names are strings separated by commas| Number, status code, 0 indicates successful creation|
-| insert | Insert Record| Table object Entry object < br > Table object needs to set table name and primary key field name;Entry is a map object that provides the inserted field name and field value. Note that the primary key field must be set| Number, representing the number of inserted records
-| update | Update Record| Table object Entry object Condtion object < br > Table object requires setting table name and primary key field name;The Condition object is a condition object that allows you to set the matching criteria for the query| Array, the queried records
-| remove | Remove Record| Table Object Condition Object < br >| The table object needs to set the table name and primary key field name;The Condition object is a condition object, and you can set the match condition for removal.| Number, the number of records successfully removed|
-| Select | Query records| Table object: Table object needs to set the table name and primary key field value < br / > Condtion object: Condition object is a condition object, you can set the matching condition of the query| Number, the number of records successfully queried|
+| createTable | Create table| Table object<br>The table object needs to set its table name, primary key field name and other field names, where the other field names are strings separated by commas| Number, status code, 0 indicates successful creation|
+| insert | Insert record| Table object, Entry object<br>The table object needs to set the table name and primary key field name; Entry is a map object that provides the inserted field names and field values. Note that the primary key field must be set| Number, representing the number of inserted records|
+| update | Update record| Table object, Entry object, Condition object<br>The table object needs to set the table name and primary key field name; the Condition object is a condition object that allows you to set the matching criteria for the query| Array, the queried records|
+| remove | Remove record| Table object, Condition object<br>The table object needs to set the table name and primary key field name; the Condition object is a condition object, and you can set the match condition for removal| Number, the number of records successfully removed|
+| select | Query records| Table object: the table object needs to set the table name and primary key field values<br>Condition object: a condition object with which you can set the matching condition of the query| Number, the number of records successfully queried|
| desc | Query information about a table based on the table name| Table Name| Object, which mainly contains the primary key field name and other attribute fields of the table|

diff --git a/3.x/en/docs/sdk/nodejs_sdk/configuration.md b/3.x/en/docs/sdk/nodejs_sdk/configuration.md
index 5cd8a0400..5b34854dd 100644
--- a/3.x/en/docs/sdk/nodejs_sdk/configuration.md
+++ b/3.x/en/docs/sdk/nodejs_sdk/configuration.md
@@ -9,29 +9,29 @@ The configuration file of the Node.js SDK is a JSON file, including**Common Conf

- `privateKey`: 'object ', required。The private key of the external account, which can be a random integer of 256 bits, or a private key file in pem or p12 format, the latter two need to be combined [get _ account.sh](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/account.html)The generated private key file uses the。'privateKey 'contains two required fields and one optional field:
- `type`: 'string ', required。Used to indicate the private key type。The value of 'type' must be one of the following three values:
- - 'ecrandom ': random integer
- - 'pem ': file in pem format
- - 'p12 ': file in p12 format
- - 'value ':' string ', required。The specific value used to indicate the private key:
- - If 'type' is' random ', then' value 'is a random integer of length 256 bits between 1 and 0xFFFF FFFF FFFF FFFF FFFF FFFE BAAE DCE6 AF48 A03B BFD2 5E8C D036 4141。
- - If 'type' is' pem ',' value 'is the path of the pem file. If it is a relative path, the directory where the configuration file is located must be the starting location of the relative path.。
- - If 'type' is' p12 ',' value 'is the path of the p12 file.
If it is a relative path, the directory where the configuration file is located must be the starting location of the relative path.。
- - 'password ':' string ', optional。This field is required to decrypt the private key if 'type' is' p12 ', otherwise it is ignored。
-- `timeout`: `number`。Nodes connected to Node.js SDK may fall into a state of stopping responding。To avoid an infinite wait, every API call of the Node.js SDK is forced to return an error object if no result is obtained after 'timeout'.。'timeout 'in milliseconds。
-- `solc`: 'string ', optional。The Node.js SDK already comes with the 0.4.26 and 0.5.10 versions of the Solidity compiler. If you have special compiler requirements, you can set this configuration item to the execution path or global command of your compiler.
+ - `ecrandom`: random integer
+ - `pem`: file in pem format
+ - `p12`: file in p12 format
+ - `value`: `string`, required. The specific value of the private key:
+ - If `type` is `random`, `value` is a 256-bit random integer between 1 and 0xFFFF FFFF FFFF FFFF FFFF FFFE BAAE DCE6 AF48 A03B BFD2 5E8C D036 4141.
+ - If `type` is `pem`, `value` is the path of the pem file. If it is a relative path, the directory where the configuration file is located must be the starting location of the relative path.
+ - If `type` is `p12`, `value` is the path of the p12 file. If it is a relative path, the directory where the configuration file is located must be the starting location of the relative path.
+ - `password`: `string`, optional. This field is required to decrypt the private key if `type` is `p12`; otherwise it is ignored.
+- `timeout`: `number`. Nodes connected to the Node.js SDK may stop responding; to avoid an infinite wait, every API call of the Node.js SDK is forced to return an error object if no result is obtained within `timeout`. `timeout` is in milliseconds.
+- `solc`: `string`, optional. The Node.js SDK already ships with the 0.4.26 and 0.5.10 versions of the Solidity compiler. If you have special compiler requirements, you can set this configuration item to the execution path or global command of your compiler

## Group Configuration

-- `groupID`: `number`。The group ID of the chain accessed by the Node.js SDK.
+- `groupID`: `number`. The group ID of the chain accessed by the Node.js SDK

## Communication Configuration

-- `nodes`: 'list ', required。The list of FISCO BCOS nodes. When the Node.js SDK accesses a node, it randomly selects a node from the list for communication. The number of nodes must be greater than or equal to 1.。In FISCO BCOS, a transaction on the chain does not mean that all nodes in the network have been synchronized to the latest state, if the Node.js SDK connects multiple nodes, it may not be able to read the latest state, so in the case of higher requirements for state synchronization, please be careful to connect multiple nodes。Each node contains two fields:
+- `nodes`: `list`, required. The list of FISCO BCOS nodes; the number of nodes must be >= 1. When the Node.js SDK accesses a node, it randomly selects one node from the list for communication. In FISCO BCOS, a transaction being on the chain does not mean that all nodes in the network have synchronized to the latest state; if the Node.js SDK connects to multiple nodes, it may not always read the latest state, so when state synchronization requirements are high, be careful about connecting to multiple nodes. Each node contains two fields:
- `ip`: 'string ', required。IP address of the FISCO BCOS node
- `port`: 'string ', required, Channel port of FISCO BCOS node

## Certificate Configuration

-- `authentication`:`object`。Required, including the authentication information required to establish Channel communication, which is typically generated automatically during the construction of the chain.。'authentication 'contains three required fields:
- - `key`: 'string ', required。The path of the private key file. If the path is relative, the directory where the configuration file is located must be the starting location of the relative path.。
- - `cert`: 'string ', required。The path of the certificate file. If the path is relative, the directory where the configuration file is located must be the starting location of the relative path.。
- - `ca`: 'string ', required。The path of the CA root certificate file. If the path is relative, the directory where the configuration file is located must be the starting location of the relative path.。
+- `authentication`: `object`, required. Contains the authentication information required to establish Channel communication, which is typically generated automatically during the construction of the chain. `authentication` contains three required fields:
+ - `key`: `string`, required. The path of the private key file. If the path is relative, the directory where the configuration file is located must be the starting location of the relative path.
+ - `cert`: `string`, required. The path of the certificate file. If the path is relative, the directory where the configuration file is located must be the starting location of the relative path.
+ - `ca`: `string`, required. The path of the CA root certificate file. If the path is relative, the directory where the configuration file is located must be the starting location of the relative path.

diff --git a/3.x/en/docs/sdk/nodejs_sdk/index.rst b/3.x/en/docs/sdk/nodejs_sdk/index.rst
new file mode 100644
index 000000000..3cfbb9845
--- /dev/null
+++ b/3.x/en/docs/sdk/nodejs_sdk/index.rst
@@ -0,0 +1,36 @@
+##############################################################
+6. Node.js SDK
+##############################################################
+
+Tags: "Node.JS SDK"
+
+----
+`Node.js SDK`_ provides the Node.js API for accessing FISCO BCOS nodes, supporting node status query, contract deployment and contract invocation. Based on the Node.js SDK, you can quickly develop blockchain applications. Currently `FISCO BCOS 2.0+ <../../../>`_ is supported


+.. admonition:: **Note**
+   :class: red
+
+   **The Node.js SDK is currently only in the personal developer experience stage. To develop enterprise applications, use the** `Java SDK <../java_sdk/index.html>`_
+   The Node.js SDK does not currently support the SSL communication protocol
+   The Node.js SDK supports versions 2.0.0 and above for now; versions 3.0.0 and above are being adapted
+
+
+..
admonition:: **Main characteristics**
+
+   - Provides Node.js APIs for calling the `JSON-RPC <../../develop/api.html>`_ interface of FISCO BCOS
+   - Provides Node.js APIs for deploying and invoking Solidity contracts (Solidity 0.4.x and Solidity 0.5.x are supported)
+   - Provides Node.js APIs for calling precompiled contracts
+   - Uses the `Channel protocol <../../design/protocol_description.html#channelmessage>`_ to communicate with FISCO BCOS nodes; two-way authentication is more secure
+   - Provides a CLI (Command-Line Interface) tool so users can interact with the blockchain conveniently and quickly from the command line
+
+To install and configure the environment and to develop applications with the Node.js SDK, see `gitHub`_


.. toctree::
   :hidden:

   install.md
   configuration.md
   api.md
\ No newline at end of file
diff --git a/3.x/en/docs/sdk/nodejs_sdk/install.md b/3.x/en/docs/sdk/nodejs_sdk/install.md
index 1ad25e7a9..ec21e214d 100644
--- a/3.x/en/docs/sdk/nodejs_sdk/install.md
+++ b/3.x/en/docs/sdk/nodejs_sdk/install.md
@@ -5,7 +5,7 @@ Tag: "Install Node.js" "Command Line Tools"

----

## Environmental Requirements

-- Node.js Development Environment
+- Node.js development environment
 - Node.js >= 8.10.0
 - npm >= 5.6.0
@@ -25,19 +25,19 @@ Tag: "Install Node.js" "Command Line Tools"
 nvm use 8
 ```
- - If you are using Windows:
+ - If you use Windows:
 please go to the [Node.js official website](https://nodejs.org/en/download/) and download the Windows installation package to install it.
-- Basic Development Components
- - Python 2 (required for Windows, Linux, and MacOS)
+- Basic development components
+ - Python 2 (required for Windows, Linux and MacOS)
 - g++(Required for Linux and MacOS)
- - Make (required for Linux and MacOS)
- - Git (required for Windows, Linux, and MacOS)
+ - Make (required for Linux and MacOS)
+ - Git (required for Windows, Linux and MacOS)
 - Git bash (required for Windows only)
- - MSBuild Build Environment (Windows only)
+ - MSBuild build environment (required for Windows only)

-- FISCO BCOS Node: Refer to [FISCO BCOS Installation](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/installation.html#fisco-bcos)Build
+- FISCO BCOS node: please refer to [FISCO BCOS Installation](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/installation.html#fisco-bcos) to build one

## Deploying the Node.js SDK

@@ -63,7 +63,7 @@ npm config set registry https://registry.npm.taobao.org
```

```bash
-# During the deployment process, please ensure that you can access the external network to install third-party dependency packages.
+# During the deployment process, please ensure that you can access the external network to install third-party dependency packages
cd nodejs-sdk
npm install
npm run repoclean
@@ -72,7 +72,7 @@ npm run bootstrap

## Node.js CLI

-Node.js SDK embedded CLI tool for users to easily interact with the blockchain from the command line。The CLI tool is developed on the basis of the API provided by the Node.js SDK, and the usage and result output are script-friendly, and it is also an example of how to call the Node.js API for secondary development.。
+The Node.js SDK has an embedded CLI tool that lets users easily interact with the blockchain from the command line. The CLI tool is developed on top of the APIs provided by the Node.js SDK; its usage and result output are script-friendly, and it also serves as an example of how to call the Node.js APIs for secondary development.

**Fast chain building (optional)**

```bash
# Get the development and deployment tool build _ chain.sh script
-curl -#LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v2.9.1/build_chain.sh && chmod u+x build_chain.sh
+curl -#LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v2.11.0/build_chain.sh && chmod u+x build_chain.sh
```

```eval_rst
.. note::

- If the build _ chain.sh script cannot be downloaded for a long time due to network problems, try 'curl-#LO https://gitee.com/FISCO-BCOS/FISCO-BCOS/raw/master-2.0/tools/build_chain.sh && chmod u+x build_chain.sh`
+ - If the build_chain.sh script cannot be downloaded for a long time due to network problems, please try `curl -#LO https://gitee.com/FISCO-BCOS/FISCO-BCOS/raw/master-2.0/tools/build_chain.sh && chmod u+x build_chain.sh`
```

```bash
@@ -97,15 +97,15 @@ bash nodes/127.0.0.1/start_all.sh

**Configure Certificates and Channel Ports**

-- Configure Certificate
+- Configure certificates

- Modify the configuration file. The certificate configuration is located in the 'authentication' configuration item in the 'packages / cli / conf / config.json' file.。You need to modify the 'key', 'cert', and 'ca' configurations of the configuration item based on the path of the actual certificate file you are using, where 'key' is the path of the SDK private key file, 'cert' is the path of the SDK certificate file, and 'ca' is the path of the chain root certificate file.(https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/build_chain.html)or [O & M Deployment Tool](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/enterprise_tools/index.html)Automatic generation, please refer to the documentation of the above tools for the specific generation method and file location。
+ Modify the configuration file.
The certificate configuration is located in the `authentication` configuration item of the `packages/cli/conf/config.json` file. You need to modify the `key`, `cert`, and `ca` entries of this configuration item according to the paths of the actual certificate files you are using, where `key` is the path of the SDK private key file, `cert` is the path of the SDK certificate file, and `ca` is the path of the chain root certificate file. These files can be generated automatically by the [chain building script](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/manual/build_chain.html) or the [O & M Deployment Tool](https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/enterprise_tools/index.html); please refer to the documentation of the above tools for the specific generation method and file locations.

-- Configure Channel Ports
+- Configure Channel Ports

- Modify the configuration file. The node IP and port configurations are located in the 'nodes' configuration item in the 'packages / cli / conf / config.json' file.。You need to modify the 'ip' and 'port' configurations of the configuration item according to the actual configuration of the FISCO BCOS node you want to connect to, where 'ip' is the IP address of the connected node, and 'port' is the value of the 'channel _ listen _ port' configuration item in the config.ini file under the node directory.。You can skip this step if you are using a quick hitch。
+ Modify the configuration file. The node IP and port configurations are located in the `nodes` configuration item of the `packages/cli/conf/config.json` file. You need to modify the `ip` and `port` entries of this configuration item according to the actual configuration of the FISCO BCOS node you want to connect to, where `ip` is the IP address of the connected node and `port` is the value of the `channel_listen_port` configuration item in the config.ini file under the node directory. You can skip this step if you used the fast chain building step above.

-After the configuration is complete, you can start using the CLI tool. The CLI tool is located in 'packages / cli / cli.js'. All operations need to be performed in the 'packages / cli /' directory.
+After the configuration is complete, you can start using the CLI tool. The CLI tool is located at `packages/cli/cli.js`; all operations need to be performed in the `packages/cli/` directory

```
cd packages/cli
@@ -193,7 +193,7 @@ The output is as follows:
}
```

-**To call the set interface of the HelloWorld contract, change the contract address to the actual address.**
+**To call the set interface of the HelloWorld contract, change the contract address to the actual address**

```bash
./cli.js call HelloWorld 0x11b6d7495f2f04bdca45e9685ceadea4d4bd1832 set vita
```
@@ -208,7 +208,7 @@ The output is as follows:
}
```

-**To call the get interface of the HelloWorld contract, change the contract address to the actual address.**
+**To call the get interface of the HelloWorld contract, change the contract address to the actual address**

```bash
./cli.js call HelloWorld 0xab09b29dd07e003776d22566ae5c078f2cb2279e get
```
@@ -233,7 +233,7 @@ If you want to know how to use a command, you can use the following command:
./cli.js ?
```

-where command is a command name, using '?'as a parameter to get the command's usage tips, such as.
+where command is a command name; use '?' as a parameter to get the command's usage tips, for example:

```bash
./cli.js call ?
@@ -248,12 +248,12 @@ Call a contract by a function and parameters

 Location:
   contractName     The name of a contract  [string] [required]
-  contractAddress  20 Bytes - The address of a contract  [string] [required]
+  contractAddress  20 Bytes - The address of a contract  [string] [required]
   function         The function of a contract  [string] [required]
   parameters       The parameters (split by spaces) of a function  [array] [default: []]

 Options:
-  --help     Display help information  [boolean]
+  --help     Display help information  [boolean]
   --version  Display version number  [boolean]
 ```

diff --git a/3.x/en/docs/sdk/python_sdk/api.md b/3.x/en/docs/sdk/python_sdk/api.md
index b09b4d9ff..efb737f31 100644
--- a/3.x/en/docs/sdk/python_sdk/api.md
+++ b/3.x/en/docs/sdk/python_sdk/api.md
@@ -25,25 +25,25 @@ Implemented in 'client / bcosclient.py', encapsulating access to FISCO BCOS 2.0+
 | getConsensusStatus | Get blockchain node consensus status| None|
 | getSyncStatus | Obtain the synchronization status of a blockchain node| None|
 | getPeers | Obtain the connection information of a blockchain node| None|
-| getGroupPeers | Obtain the consensus node < br > and watch node list of the specified group| None|
+| getGroupPeers | Obtain the consensus node<br>and observer node list of the specified group| None|
 | getNodeIDList | Get a list of nodes and their connected nodes| None|
 | getGroupList | Obtain the group ID list of the group to which the node belongs| None|
 | getBlockByHash | Obtain block information based on block hash| Block hash|
 | getBlockByNumber | Obtain block information according to block height| Block height|
 | getBlockHashByNumber | Obtain block hash based on block height| Block height|
 | getTransactionByHash | Get transaction information based on transaction hash| Transaction hash|
-| getTransactionByBlockHashAndIndex |Obtain transaction information based on the transaction's block hash and < br > transaction index.| Transaction-owned block hash < br > transaction index|
-| getTransactionByBlockNumberAndIndex | Get trading information based on the block height of the exchange, < br > trading index| Exchange-owned block height < br > Trading index|
+| getTransactionByBlockHashAndIndex | Obtain transaction information based on the transaction's<br>block hash and transaction index| Block hash of the transaction<br>transaction index|
+| getTransactionByBlockNumberAndIndex | Obtain transaction information based on the transaction's<br>block height and transaction index| Block height of the transaction<br>transaction index|
 | getTransactionReceipt | Get transaction receipt based on transaction hash| Transaction hash|
-| getPendingTransactions | Get all unchained transactions in the transaction pool.| None|
+| getPendingTransactions | Get all pending (not yet on-chain) transactions in the transaction pool| None|
 | getPendingTxSize | Get the number of pending transactions in the transaction pool| None|
 | getCode | Query contract data by contract address| Contract address|
-| getTotalTransactionCount | Obtains the number of transactions on the chain of a specified group.| None|
-| getSystemConfigByKey | Get System Configuration| System configuration keywords < br > such as: < br >- tx_count_limit < br >- tx_gas_limit|
+| getTotalTransactionCount | Obtain the number of on-chain transactions of the specified group| None|
+| getSystemConfigByKey | Get system configuration| System configuration keywords,<br>e.g.:<br>- tx_count_limit<br>- tx_gas_limit|
 | deploy | Deploy contract| Contract binary code|
-| call | Call Contract| contract address < br > contract abi < br > call interface name < br > parameter list|
-| sendRawTransaction | Send transaction| contract address < br > contract abi < br > interface name < br > parameter list < br > contract binary code|
-| sendRawTransactionGetReceipt | Send transaction < br > and get transaction execution result| contract address < br > contract abi interface name < br > parameter list < br > contract binary code|
+| call | Call contract| Contract address<br>contract ABI<br>interface name<br>parameter list|
+| sendRawTransaction | Send transaction| Contract address<br>contract ABI<br>interface name<br>parameter list<br>contract binary code|
+| sendRawTransactionGetReceipt | Send transaction<br>and get the transaction execution result| Contract address<br>contract ABI<br>interface name<br>parameter list<br>contract binary code|

 ## Precompile Service

@@ -56,9 +56,9 @@
 client.precompile.cns.cns_service.CnsService
 ```

 **Function Interface**
-- register _ cns: Register the contract name to(Contract Address, Contract Version)maps to the CNS system table
-- query _ cns _ by _ name: Query CNS information based on contract name
-- query _ cns _ by _ nameAndVersion: Query CNS information based on contract name and contract name
+- register_cns: Register a mapping from the contract name to (contract address, contract version) in the CNS system table
+- query_cns_by_name: Query CNS information by contract name
+- query_cns_by_nameAndVersion: Query CNS information by contract name and contract version

 ### Consensus

@@ -69,9 +69,9 @@
 client.precompile.consensus.consensus_precompile.ConsensusPrecompile
 ```

 **Function Interface**

-- addSealer: Add consensus node
-- addObserver: Add an observer node
-- removeNode: Remove the node from the group
+- addSealer: Add a consensus node
+- addObserver: Add an observer node
+- removeNode: Remove a node from the group

 ### Permission Control

@@ -82,7 +82,7 @@
 client.precompile.permission.permission_service.PermissionService
 ```

 **Function Interface**

 - grant: Authorize permissions for the specified table to the user
-- revoke: revokes the write permission of the specified user on the specified table.
+- revoke: Revoke the write permission of the specified user on the specified table
 - list_permission: Displays account information that has write permission to the specified table

 ### CRUD

@@ -92,11 +92,11 @@
 client.precompile.crud.crud_service.Entry
 ```
 **Function Interface**
-- create _ table: Create a user table
-- insert: Inserts a record into the user table
-- update: updates user table records
-- remove: Deletes the specified record in the user table.
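For intuition, the CNS mapping described above — (contract name, version) → contract address — can be modeled chain-free in a few lines of plain Python. `CnsModel` and its method names below are illustrative stand-ins, not the SDK's actual `CnsService` API, which reads and writes the on-chain CNS system table:

```python
# Hypothetical in-memory model of the CNS system table; the real interfaces
# live in client/precompile/cns/cns_service.py and operate on the chain.
class CnsModel:
    def __init__(self):
        self._table = []  # rows of (name, version, address)

    def register_cns(self, name, version, address):
        # One (name, version) pair maps to exactly one address.
        if any(n == name and v == version for n, v, _ in self._table):
            raise ValueError("name/version already registered")
        self._table.append((name, version, address))

    def query_cns_by_name(self, name):
        # All (version, address) pairs registered under the contract name.
        return [(v, a) for n, v, a in self._table if n == name]

    def query_cns_by_name_and_version(self, name, version):
        for n, v, a in self._table:
            if n == name and v == version:
                return a
        return None

cns = CnsModel()
cns.register_cns("HelloWorld", "1.0", "0x2d1c577e41809453c50e7e5c3f57d06f3cdd90ce")
print(cns.query_cns_by_name_and_version("HelloWorld", "1.0"))
# prints 0x2d1c577e41809453c50e7e5c3f57d06f3cdd90ce
```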
-- select: Queries a specified record in the user table
+- create_table: Create a user table
+- insert: Insert a record into the user table
+- update: Update user table records
+- remove: Delete the specified record in the user table
+- select: Query the specified record in the user table
 - desc: Querying User Table Information

 ### System Configuration

@@ -116,9 +116,9 @@ Implemented in 'client / bcostransaction.py', which defines FISCO BCOS 2.0+The t
 | :-- | :-- |
 | randomid | Random number, used for transaction replay protection|
 | gasPrice | The default is 30000000|
-| gasLimit | The upper limit of gas consumed by transactions, which is 30000000 by default.|
+| gasLimit | The upper limit of gas consumed by a transaction, which is 30000000 by default|
 | blockLimit | Transaction replay-protection limit, default is 500|
-| to | Usually the contract address.|
+| to | Usually the contract address|
 | value | Default is 0|
 | data | Transaction data|
 | fiscoChainId | Chain ID, which is loaded from 'client_config.py'|
@@ -132,10 +132,10 @@ Provides ABI, Event Log, and transaction input and output parsing functions, imp

 | Interface| Parameters| Description|
 | :-- | :-- | :-- |
-| load_abi_file | abi file path| Load and parse the ABI file from the specified path < br > to build the function name, selector to function abi mapping list|
+| load_abi_file | ABI file path| Load and parse the ABI file from the specified path<br>to build the mapping from function name and selector to function ABI|
 | parse_event_logs | event log| Parse event logs|
-| parse_transaction_input | Transaction input| Parsing transaction input < br > returns the interface name and transaction parameters of the transaction call.|
-| parse_receipt_output | The interface name of the transaction call < br > transaction output.| Parsing Transaction Output|
+| parse_transaction_input | Transaction input| Parse the transaction input<br>and return the interface name and parameters of the transaction call|
+| parse_receipt_output | Interface name of the transaction call<br>transaction output| Parse the transaction output|

 ## ChannelHandler

@@ -146,9 +146,9 @@ FISCO BCOS channel protocol implementation class, supports SSL encrypted communi

 ## Contract History Query

-- **client/contratnote.py:** Use the ini configuration file format to save the latest and historical addresses of the contract for loading (as can be used in the console command.(Contract name last)Refers to the address of the latest deployment of a contract)
+- **client/contratnote.py:** Uses the ini configuration file format to save the latest and historical contract addresses for later loading (for example, in console commands '(contract name) last' refers to the address of the latest deployment of that contract)

 ## Log Module

-- **client/clientlogger.py:** Logger definition, which currently includes client logs and statistics logs.
+- **client/clientlogger.py:** Logger definition, which currently includes client logs and statistics logs
 - **client/stattool.py** A simple tool class for collecting statistics and printing logs

diff --git a/3.x/en/docs/sdk/python_sdk/configuration.md b/3.x/en/docs/sdk/python_sdk/configuration.md
index cbeb724aa..ab081b3b8 100644
--- a/3.x/en/docs/sdk/python_sdk/configuration.md
+++ b/3.x/en/docs/sdk/python_sdk/configuration.md
@@ -4,34 +4,34 @@ Tags: "Python SDK" "Certificate Configuration"

 ----

-'client _ config.py 'is the configuration file of the Python SDK.**SDK Algorithm Type Configuration**,**Common Configuration**,**Account Configuration**,**Group Configuration**,**Communication Configuration**和**Certificate Configuration**。
+'client_config.py' is the configuration file of the Python SDK, covering **SDK Algorithm Type Configuration**, **Common Configuration**, **Account Configuration**, **Group Configuration**, **Communication Configuration**, and **Certificate Configuration**.

```eval_rst
.. note::
-    - Ensure that the connection port is open: We recommend that you use "telnet ip port" to check whether the client is connected to the node network.
-    - Use the RPC communication protocol without setting up a certificate
-    - For more information about log configuration, see "client / clientlogger.py". By default, logs are generated in the "bin / logs" directory. The default level is DEBUG.
+    - Ensure that the connection port is open: it is recommended to use "telnet ip port" to check whether the client can reach the node network
+    - The RPC communication protocol does not require a certificate
+    - For more information about log configuration, see "client/clientlogger.py". By default, logs are generated in the "bin/logs" directory. The default level is DEBUG
```

## SDK Algorithm Type Configuration

-- **crypto_type**: SDK interface type. Currently, it supports the national secret interface.(`GM`)and non-state secret interface(`ECDSA`)
+- **crypto_type**: SDK interface type. Currently, the national cryptography interface (`GM`) and the non-national-cryptography interface (`ECDSA`) are supported

## Common Configuration

 - **contract_info_file**: The file that saves the deployed contract information. The default value is 'bin/contract.ini'
-- **account_keyfile_path**: The directory where keystore files are stored. The default value is' bin / accounts'.
-- **logdir**Default log output directory. The default value is bin / logs.
+- **account_keyfile_path**: The directory where keystore files are stored. The default value is 'bin/accounts'
+- **logdir**: The default log output directory. The default value is 'bin/logs'

## Account Configuration

-**The configuration of the non-State secret account is as follows.**
+**The configuration of the non-national-cryptography account is as follows**

-- **account_keyfile**: The path of the keystore file that stores non-state secret account information. The default value is' pyaccount.keystore '.
+- **account_keyfile**: The path of the keystore file that stores the non-national-cryptography account information. The default value is 'pyaccount.keystore'
 - **account_password**: The storage password of the non-national-cryptography keystore file, which is '123456' by default

-**The configuration of the State Secret account is as follows.**
+**The configuration of the national cryptography account is as follows**

 - **gm_account_keyfile**: The path of the encrypted file that stores the national cryptography account information. The default value is 'gm_account.json'
 - **gm_account_password**: The storage password of the national cryptography account file, which defaults to '123456'
@@ -41,20 +41,20 @@ Tags: "Python SDK" "Certificate Configuration"

 Group configuration mainly includes chain ID and group ID:

 - **fiscoChainId**: The chain ID, which must be the same as that of the node communicated with. The default value is 1
-- **groupid**The ID of the group. The ID must be the same as that of the communication node. The default value is 1.
+- **groupid**: The group ID, which must be the same as that of the node communicated with. The default value is 1

## Communication Configuration

-- **client_protocol**Python SDK and node communication protocol, including 'rpc' and 'channel' options, the former using JSON-The RPC interface accesses the node by using the channel. You need to configure a certificate. The default value is' channel'
-- **remote_rpcurl**: 采用**rpc**The rpc IP address and port of the node. The default value is' http '.://127.0.0.1:8545`,**If the channel protocol is used, it can be left blank.**
-- **channel_host**When the channel protocol is used, the channel IP address of the node is' 127.0.0.1 'by default.**If you use the rpc protocol to communicate, you can leave it blank.**
-- **channel_port**The channel port of the node. The default value is 20200.**If you use the rpc protocol to communicate, you can leave it blank.**
+- **client_protocol**: The communication protocol between the Python SDK and the node, with 'rpc' and 'channel' options. The former accesses the node through the JSON-RPC interface; the latter accesses the node through the channel and requires a certificate. The default value is 'channel'
+- **remote_rpcurl**: When the **rpc** protocol is used, the RPC IP address and port of the node. The default value is `http://127.0.0.1:8545`. **If the channel protocol is used, it can be left blank**
+- **channel_host**: When the channel protocol is used, the channel IP address of the node, '127.0.0.1' by default. **If you use the rpc protocol to communicate, you can leave it blank**
+- **channel_port**: The channel port of the node. The default value is 20200. **If you use the rpc protocol to communicate, you can leave it blank**

## Certificate Configuration

 - **channel_ca**: The chain CA certificate, **set when using the channel protocol**; the default is 'bin/ca.crt'
-- **channel_node_cert**node certificate,**Setting when using the channel protocol**the default value is' bin / sdk.crt '.**If you use the rpc protocol to communicate, you can leave it blank.**
-- **channel_node_key**The private key for communication between the Python SDK and the node. It must be set when the channel protocol is used. The default value is' bin / sdk.key '.**If you use the rpc protocol to communicate, you can leave it blank here.**
+- **channel_node_cert**: The SDK certificate, **set when using the channel protocol**; the default value is 'bin/sdk.crt'. **If you use the rpc protocol to communicate, you can leave it blank**
+- **channel_node_key**: The private key for communication between the Python SDK and the node. It must be set when the channel protocol is used; the default value is 'bin/sdk.key'. **If you use the rpc protocol to communicate, you can leave it blank**

## solc compiler configuration

@@ -92,10 +92,10 @@ The Python SDK allows you to automatically compile contracts using the configure
     client_protocol = "channel"  # or PROTOCOL_CHANNEL to use channel protocol
     # client_protocol = PROTOCOL_CHANNEL
     remote_rpcurl = "http://127.0.0.1:8545"  # when using rpc communication, this *must* be consistent with the node's rpc port; if the channel protocol is used, this can be left blank
-    channel_host = "127.0.0.1" # When using channel communication, the channel ip address of the node, such as using rpc protocol communication, can be left blank here.
-    channel_port = 20200 # The channel port of the node. If the RPC protocol is used for communication, leave it blank.
+    channel_host = "127.0.0.1"  # when using channel communication, the channel ip address of the node; if the rpc protocol is used, this can be left blank
+    channel_port = 20200  # the channel port of the node; if the rpc protocol is used, this can be left blank
     channel_ca = "bin/ca.crt"  # when using the channel protocol, the chain certificate must be set; if the rpc protocol is used, this can be left blank
-    channel_node_cert = "bin/sdk.crt" # When using the channel protocol, you need to set the sdk certificate. If you use the rpc protocol for communication, you can leave it blank.
+    channel_node_cert = "bin/sdk.crt"  # when using the channel protocol, the sdk certificate must be set; if the rpc protocol is used, this can be left blank
     channel_node_key = "bin/sdk.key"  # when using the channel protocol, the sdk private key must be set; if the rpc protocol is used, this can be left blank
     fiscoChainId = 1  # chain ID; *must* be consistent with the node communicated with
    groupid = 1  # group ID; *must* be consistent with the node communicated with; to communicate with another group, modify this item or set the corresponding member variable in bcosclient.py
@@ -105,7 +105,7 @@ The Python SDK allows you to automatically compile contracts using the configure
    contract_info_file = "bin/contract.ini"  # file to save deployed contract information
    account_keyfile_path = "bin/accounts"  # the path where keystore files are saved; a keystore file is named [name].keystore
    account_keyfile = "pyaccount.keystore"
-    account_password = "123456" # It is recommended to change to a complex password in actual use.
+    account_password = "123456"  # it is recommended to change this to a complex password in actual use
    gm_account_keyfile = "gm_account.json"  # the storage file of the national cryptography account, which can be stored encrypted
    gm_account_password = "123456"
    # ---------console mode, support user input--------------

diff --git a/3.x/en/docs/sdk/python_sdk/console.md b/3.x/en/docs/sdk/python_sdk/console.md
index 6d68d2b2f..df7ce1973 100644
--- a/3.x/en/docs/sdk/python_sdk/console.md
+++ b/3.x/en/docs/sdk/python_sdk/console.md
@@ -4,14 +4,14 @@ Tags: "Python SDK" "PythonSDK Console"

 ----

-[Python SDK](https://github.com/FISCO-BCOS/python-sdk)A simple console is implemented through 'console.py' to support contract operations, account management operations, etc.。
+The [Python SDK](https://github.com/FISCO-BCOS/python-sdk) implements a simple console through 'console.py', supporting contract operations, account management operations, etc.

```eval_rst
.. note::
    - **The Python SDK is currently a candidate version, suitable for development and testing; for enterprise applications, use the** `Java SDK <../java_sdk/index.html>`_
-    - To install the Java version console, refer to 'here <.. /.. / installation.html >' _
-    - To run console.py in windows, use '.\ console.py' or 'python console.py'.
+    - To install the Java version console, refer to `here <../../installation.html>`_
+    - To run console.py on Windows, use '.\console.py' or 'python console.py'
```

@@ -26,7 +26,7 @@ Deploy the contract:
 ```
 Parameters include:
 - contract_name: The contract name, which needs to be placed in the 'contracts' directory first
-- save: If the save parameter is set, the contract address will be written to the history file.
+- save: If the save parameter is set, the contract address will be written to the history file

```bash
$ ./console.py deploy HelloWorld save
@@ -64,10 +64,10 @@ Call the contract interface and parse the returned result:
 ```
 Parameters include:
-- contract _ name: contract name
-- contract _ address: the address of the contract called
-- function: the contract interface called
-- args: call parameter
+- contract_name: contract name
+- contract_address: the address of the called contract
+- function: the called contract interface
+- args: call parameters

```bash
# Contract address: 0x2d1c577e41809453c50e7e5c3f57d06f3cdd90ce
@@ -87,10 +87,10 @@ Sending a transaction invokes the interface of the specified contract, and the t
 ```
 Parameters include:
-- contract _ name: contract name
-- contract _ address: contract address
-- function: function interface
-- args: parameter list
+- contract_name: contract name
+- contract_address: contract address
+- function: function interface
+- args: parameter list

```bash
# Contract Name: HelloWorld
@@ -120,13 +120,13 @@ Create a new account and save the results in encrypted form with 'bin / accounts

 Parameters include:

-- account _ name: Account name
-- account _ password: Password to encrypt the keystore file
+- account_name: account name
+- account_password: password used to encrypt the keystore file

```eval_rst
.. note::
-    - After creating an account using the account creation command, if you want to use it as the default account, modify the "account _ keyfile" and "account _ password" configurations of client _ config.py.
+    - After creating an account with this command, to use it as the default account, modify the "account_keyfile" and "account_password" configurations in client_config.py
    - Account name cannot exceed 240 characters
    - If "account_password" contains special characters, add single quotation marks around "account_password", otherwise it cannot be parsed
```

@@ -164,7 +164,7 @@ Based on the account name and the password of the account 'keystore' file, outpu

 Parameters include:

-- name: Account name
+- name: account name
 - password: Account 'keystore' file password

```bash
@@ -203,7 +203,7 @@
 newaccount [name] [password] [save]
                 if "save" arg follows, then backup file and write new without ask
                 the account len should be limitted to 240
-    ... omit lines...
+    ... omit lines...
 [getTransactionByBlockHashAndIndex] [blockHash] [transactionIndex]
 [getTransactionByBlockNumberAndIndex] [blockNumber] [transactionIndex]
 [getSystemConfigByKey] [tx_count_limit/tx_gas_limit]
@@ -220,7 +220,7 @@
 INFO >> user input : ['list']
 >> RPC commands
 [getNodeVersion]
 [getBlockNumber]
-    ... omit lines...
+    ... omit lines...
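The deploy and sendtx commands above ultimately assemble a transaction whose default fields are those listed in the bcostransaction.py table (gasPrice and gasLimit 30000000, blockLimit 500, value 0). As a chain-free sketch of how those defaults might be filled in before signing — `build_transaction` is a hypothetical helper, not an SDK function:

```python
import random

# Illustrative defaults taken from the bcostransaction.py field table;
# build_transaction is a hypothetical helper, not part of the SDK.
DEFAULTS = {"gasPrice": 30000000, "gasLimit": 30000000, "blockLimit": 500, "value": 0}

def build_transaction(to, data, fisco_chain_id=1, group_id=1, **overrides):
    tx = dict(DEFAULTS)
    tx.update(overrides)  # caller may override any default field
    tx.update({
        "randomid": random.getrandbits(250),  # random nonce for replay protection
        "to": to,                             # usually a contract address
        "data": data,                         # transaction data (encoded call)
        "fiscoChainId": fisco_chain_id,       # must match the node's chain ID
        "groupId": group_id,                  # must match the node's group ID
    })
    return tx

tx = build_transaction("0xab09b29dd07e003776d22566ae5c078f2cb2279e", "0x")
print(tx["gasLimit"], tx["blockLimit"])  # 30000000 500
```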
 [getTransactionByBlockHashAndIndex] [blockHash] [transactionIndex]
 [getTransactionByBlockNumberAndIndex] [blockNumber] [transactionIndex]
 [getSystemConfigByKey] [tx_count_limit/tx_gas_limit]
@@ -262,7 +262,7 @@ Query CNS information based on contract name:
 ./console.py queryCNSByName [contract_name]
 ```
 Parameters include:
-- contract _ name: contract name
+- contract_name: contract name

```bash
# Query the CNS information corresponding to the HelloWorld contract name
@@ -313,7 +313,7 @@ Removes the specified node from the group:
 ./console.py removeNode [nodeId]
 ```
 Parameters include:
-- nodeId: nodeID of the deleted node
+- nodeId: nodeID of the node to be removed

```bash
# Assuming the nodes are located in the ~/fisco/nodes directory, query the nodeID of node1
@@ -389,7 +389,7 @@ Python SDK provides system configuration modification commands, FISCO BCOS curre
 ./console.py setSystemConfigByKey [key(tx_count_limit/tx_gas_limit)] [value]
 ```
 Parameters include:
-- key: configuration keyword, which mainly includes' tx _ count _ limit 'and' tx _ gas _ limit'
+- key: the configuration keyword, which mainly includes 'tx_count_limit' and 'tx_gas_limit'
 - value: Value of the configuration keyword

```bash
@@ -426,7 +426,7 @@ Authorize the functions that control permissions to the specified account:
 ./console.py grantPermissionManager [account_adddress]
 ```
 Parameters include:
-- account _ address: the address of the account to which the permission is granted. The account can be generated by using the 'newaccount' command.
+- account_address: the address of the account to which the permission is granted. The account can be generated by using the 'newaccount' command

```bash
# Get Default Account Address
@@ -471,11 +471,11 @@ Grant the given user table permissions to the specified user:

```eval_rst
.. note::
-    Before granting user table permissions to a user, ensure that the user table exists. You can use the "createTable" command to create the user table.
+ Before granting user table permissions to a user, ensure that the user table exists. You can use the "createTable" command to create the user table ``` Parameters include: -- tableName: User table name -- account _ address: Authorized user account address +-tableName: User table name +-account _ address: Authorized user account address ```bash # Create user table t _ test @@ -523,7 +523,7 @@ Grant node management permissions to the specified account: ./console.py grantNodeManager [account_adddress] ``` Parameters include: -- account _ address: Authorized user account address +-account _ address: Authorized user account address ```bash # Add node management for account 0x95198B93705e394a916579e048c8A32DdFB900f7 @@ -555,7 +555,7 @@ Grant CNS administrative privileges to the specified account: ./console.py grantCNSManager [account_adddress] ``` Parameters include: -- account _ address: Authorized user account address +-account _ address: Authorized user account address ```bash # Add CNS administrative rights for account 0x95198B93705e394a916579e048c8A32DdFB900f7 @@ -589,7 +589,7 @@ Grant the system configuration modification permission to the specified account: ./console.py grantSysConfigManager [account_adddress] ``` Parameters include: -- account _ address: Authorized user account address +-account _ address: Authorized user account address ```bash # Add system configuration permissions for account 0x95198B93705e394a916579e048c8A32DdFB900f7 @@ -621,7 +621,7 @@ Grant permissions to deploy and create tables to the specified account: ./console.py grantDeployAndCreateManager [account_adddress] ``` Parameters include: -- account _ address: Authorized user account address +-account _ address: Authorized user account address ```bash # Add create table and deploy contract permissions for account 0x95198B93705e394a916579e048c8A32DdFB900f7 @@ -653,8 +653,8 @@ Revoke the write permission of the specified user on the specified user table: ./console.py revokeUserTableManager 
[tableName] [account_adddress] ``` Parameters include: -- tableName: the name of the table that the specified user is prohibited from writing to -- account _ address: Address of the account whose permission has been revoked +-tableName: the name of the table that the specified user is prohibited from writing to +-account _ address: the address of the account whose permission has been revoked ```bash # Revoke the control permission of account 0x95198B93705e394a916579e048c8A32DdFB900f7 on user table t _ test @@ -673,7 +673,7 @@ Revoke the permission of the specified account to create tables and deploy contr ./console.py revokeDeployAndCreateManager [account_adddress] ``` Parameters include: -- account _ address: Address of the account whose permission has been revoked +-account _ address: the address of the account whose permission has been revoked ```bash # Revoke account 0x95198B93705e394a916579e048c8A32DdFB900f7 Deploy and create table permissions @@ -692,7 +692,7 @@ Revoke the node management permission of the specified account: ./console.py revokeNodeManager [account_adddress] ``` Parameters include: -- account _ address: Address of the account whose permission has been revoked +-account _ address: the address of the account whose permission has been revoked ```bash # Revoke the account 0x95198B93705e394a916579e048c8A32DdFB900f7 node management permission @@ -711,7 +711,7 @@ Revoke the CNS management authority of the specified account: ./console.py revokeCNSManager [account_adddress] ``` Parameters include: -- account _ address: Address of the account whose permission has been revoked +-account _ address: the address of the account whose permission has been revoked ```bash # Revoke account 0x95198B93705e394a916579e048c8A32DdFB900f7 CNS administrative privileges @@ -730,7 +730,7 @@ Revoke the permission of the specified account to modify the system configuratio ./console.py revokeSysConfigManager [account_adddress] ``` Parameters include: -- account _ address: 
Address of the account whose permission has been revoked +-account _ address: the address of the account whose permission has been revoked ```bash # Revoke account 0x95198B93705e394a916579e048c8A32DdFB900f7 system table management permissions @@ -749,7 +749,7 @@ Revoke the permission of the specified account management permission: ./console.py revokePermissionManager [account_adddress] ``` Parameters include: -- account _ address: Address of the account whose permission has been revoked +-account _ address: the address of the account whose permission has been revoked ```bash # Revoke account 0x95198B93705e394a916579e048c8A32DdFB900f7 permission management permission @@ -856,7 +856,7 @@ INFO >> getConsensusStatus "node_index": 2, "omitEmptyBlock": true, "protocolId": 65544, - ... omit lines... + ... omit lines.. } ``` ### getSyncStatus @@ -1007,8 +1007,8 @@ Query blocks based on block height: $ ./console.py getBlockByNumber [block_number] [True/False] ``` Parameters include: -- block _ number: block height -- True/False: Optional. True indicates that the returned block information contains specific transaction information.;False indicates that the returned block contains only the transaction hash +-block _ number: block height +- True/False: Optional. True indicates that the returned block information contains specific transaction information;False indicates that the returned block contains only the transaction hash ```bash @@ -1021,7 +1021,7 @@ INFO >> getBlockByNumber "dbHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "extraData": [ "0x312d62383738336366653363303733613533326539636263343739373864 - ... omit lines... + ... omit lines.. 7652d313030302d333030303030303030" ], "gasLimit": "0x0", @@ -1058,8 +1058,8 @@ Obtain block information based on the block hash: $ ./console.py getBlockByHash [block_hash] [True/False] ``` Parameters include: -- block _ hash: block hash -- True/False: Optional. 
True indicates that the returned block contains transaction specific information.;False indicates that the block returned contains only the transaction hash +-block _ hash: block hash +- True/False: Optional. True indicates that the returned block contains transaction specific information;False indicates that the block returned contains only the transaction hash ```bash $ ./console.py getBlockByHash 0xff1404962c6c063a98cc9e6a20b408e6a612052dc4267836bb1dc378acc6ce04 @@ -1092,7 +1092,7 @@ Get the binary encoding of the specified contract: $ ./console.py getCode 0x2d1c577e41809453c50e7e5c3f57d06f3cdd90ce INFO >> user input : ['getCode', '0x2d1c577e41809453c50e7e5c3f57d06f3cdd90ce'] INFO >> getCode - > > 0x60806040526... some omitted... a40029 + >> 0x60806040526... some omitted... a40029 ``` ### getTransactionByHash @@ -1103,7 +1103,7 @@ Get transaction information based on transaction hash: Parameters include: - hash: Transaction Hash -- contract _ name: optional. The name of the contract related to the transaction. If this parameter is entered, the specific content of the transaction will be parsed and returned. +-contract _ name: optional. The name of the contract related to the transaction. If this parameter is entered, the specific content of the transaction will be parsed and returned ```bash @@ -1131,8 +1131,8 @@ Get transaction receipt information based on transaction hash: ./console.py getTransactionReceipt [hash] [contract_name] ``` Parameters include: -- hash: transaction hash -- contract _ name: optional. The contract name related to the transaction. If this parameter is entered, the specific content of the transaction and receipt will be parsed. +-hash: transaction hash +-contract _ name: optional. The contract name related to the transaction. 
If this parameter is entered, the specific content of the transaction and receipt will be parsed ```bash $ ./console.py getTransactionReceipt 0xb291e9ca38b53c897340256b851764fa68a86f2a53cb14b2ecdcc332e850bb91 @@ -1163,8 +1163,8 @@ Query transaction information based on block hash and transaction index: ``` Parameters include: - blockHash: Block hash of the transaction in -- transactionIndex: transaction index -- contract _ name: optional. The name of the contract related to the transaction. If this parameter is entered, the specific content of the transaction will be parsed and returned. +- transactionIndex: transaction index +- contract _ name: optional. The name of the contract related to the transaction. If this parameter is entered, the specific content of the transaction will be parsed and returned ```bash $ ./console.py getTransactionByBlockHashAndIndex 0x3912605dde5f7358fee40a85a8b97ba6493848eae7766a8c317beecafb2e279d 0 @@ -1194,9 +1194,9 @@ Query transaction information based on block height and transaction index: $ ./console.py getTransactionByBlockNumberAndIndex [blockNumber] [transactionIndex] [contract_name] ``` Parameters include: -- blockNumber: Exchange in block high -- transactionIndex: transaction index -- contract _ name: optional. The name of the contract related to the transaction. If this parameter is entered, the specific content of the transaction will be parsed and returned. +- blockNumber: height of the block containing the transaction +- transactionIndex: transaction index +- contract _ name: optional. The name of the contract related to the transaction. 
If this parameter is entered, the specific content of the transaction will be parsed and returned ```bash $ ./console.py getTransactionByBlockNumberAndIndex 1 0 diff --git a/3.x/en/docs/sdk/python_sdk/demo.md b/3.x/en/docs/sdk/python_sdk/demo.md index bbe9b3595..bcf23c266 100644 --- a/3.x/en/docs/sdk/python_sdk/demo.md +++ b/3.x/en/docs/sdk/python_sdk/demo.md @@ -4,7 +4,7 @@ Tags: "Python API" "Quick Install" ---- -The source code of Python SDK provides a complete Demo for developers to learn. +The source code of the Python SDK provides a complete Demo for developers to learn from * [Call Node API](https://github.com/FISCO-BCOS/python-sdk/blob/master/demo_get.py) * [Deploy contract, send transaction, process receipt, query contract data](https://github.com/FISCO-BCOS/python-sdk/blob/master/demo_transaction.py) @@ -33,7 +33,7 @@ except BcosError as e: ## Operating Contract -Correct [node information configured for SDK connection](./configuration.md)Rear。Can deploy contracts, send transactions, process receipts, and query contract data。For example, call functions such as' deploy ',' sendRawTransactionGetReceipt ',' call ', and' parse _ event _ logs'.。 +After correctly [configuring the node information for the SDK connection](./configuration.md), you can deploy contracts, send transactions, process receipts, and query contract data。For example, call functions such as 'deploy', 'sendRawTransactionGetReceipt', 'call', and 'parse_event_logs'。 Full Demo: [demo_transaction.py](https://github.com/FISCO-BCOS/python-sdk/blob/master/demo_transaction.py) @@ -87,12 +87,12 @@ inputresult = data_parser.parse_transaction_input(txresponse['input']) print("transaction input parse:",txhash) print(inputresult) -#Parse the output output of the transaction in receipt, that is, the return value of the method called by the transaction. 
+#Parse the output field of the receipt, that is, the return value of the method called by the transaction outputresult = data_parser.parse_receipt_output(inputresult['name'], receipt['output']) print("receipt output :",outputresult) -#Call to get the data. +#Call to get the data print("\n>>Call:------------------------------------------------------------------------") res = client.call(to_address,contract_abi,"getbalance") print("call getbalance result:",res) diff --git a/3.x/en/docs/sdk/python_sdk/index.md b/3.x/en/docs/sdk/python_sdk/index.md index c10705d19..903503082 100644 --- a/3.x/en/docs/sdk/python_sdk/index.md +++ b/3.x/en/docs/sdk/python_sdk/index.md @@ -4,13 +4,13 @@ Tags: "Python SDK" "Blockchain Application" ---- -Python SDK for [FISCO BCOS](https://github.com/FISCO-BCOS/FISCO-BCOS/tree/master)Provides the Python API, using FISCO BCOS Python SDK can be simple and quick based on FISCO-BCOS for blockchain application development。 +The Python SDK for [FISCO BCOS](https://github.com/FISCO-BCOS/FISCO-BCOS/tree/master) provides the Python API. 
Using FISCO BCOS Python SDK, you can easily and quickly develop blockchain applications based on FISCO-BCOS。 Version 2022.09 supports both FISCO BCOS 2.x / 3.x。For technical instructions related to 3.x, see [FISCO BCOS 3.x Development Introduction](https://github.com/FISCO-BCOS/python-sdk/blob/master/README_bcos3.md) **The Python SDK is positioned as a development version, with continuous iteration, for reference by developers who use the Python language to develop FISCO BCOS applications。Java SDK is recommended for enterprise applications** -If you want to use the Python SDK in a formal environment, please read and understand the code carefully, master the relevant knowledge points, and carry out secondary development according to your own needs.**rigorous testing**Backline。 +If you want to use the Python SDK in a formal environment, please read and understand the code carefully, master the relevant knowledge points, carry out secondary development according to your own needs, and go live only after **rigorous testing**。 If you have any questions, welcome to the community to ask questions and exchange, or modify the extension to submit pr, co-build the project。 diff --git a/3.x/en/docs/sdk/python_sdk/index.rst b/3.x/en/docs/sdk/python_sdk/index.rst new file mode 100644 index 000000000..323ecdf34 --- /dev/null +++ b/3.x/en/docs/sdk/python_sdk/index.rst @@ -0,0 +1,36 @@ +############################################################## +5. Python SDK +############################################################## + +Tags: "Python API" + +---- + +`Python SDK`_ provides a Python API for accessing `FISCO BCOS`_ nodes, supporting node status queries, contract deployment, and contract invocation. Based on the Python SDK, you can quickly develop blockchain applications. Currently supports FISCO BCOS 2.0+ and FISCO BCOS 3.0+ + + +.. 
admonition:: **Note** + :class: red + + - **The Python SDK is currently a candidate version, available for development and testing; for enterprise applications,** `Java SDK <../java_sdk/index.html>`_ **is recommended** + - Supports FISCO BCOS 2.0 and 3.0; for the configuration and usage of each version, see the readme documentation on the project's github home page + +.. admonition:: **Main characteristics** + + - Provides a Python API for calling the FISCO BCOS `JSON-RPC <../../develop/api.html>`_ interface + - Supports HTTP short-connection and TLS long-connection communication modes, ensuring secure encrypted communication between the node and the SDK while also receiving messages pushed by the node。 + - Supports transaction parsing: including the assembly and parsing of ABI data such as transaction input, transaction output, Event Log, etc + - Supports contract compilation, compiling "sol" contracts into "abi" and "bin" files + - Supports keystore-based account management + - Supports contract history query + +To install and configure the environment and develop applications with the Python SDK, see the `github link <https://github.com/FISCO-BCOS/python-sdk>`_ + +.. 
toctree:: + :hidden: + + install.md + configuration.md + api.md + console.md + demo.md diff --git a/3.x/en/docs/sdk/python_sdk/install.md b/3.x/en/docs/sdk/python_sdk/install.md index 4dc90b516..314babb8f 100644 --- a/3.x/en/docs/sdk/python_sdk/install.md +++ b/3.x/en/docs/sdk/python_sdk/install.md @@ -20,8 +20,8 @@ Tags: "Python API" "Quick Install" ## Deploying the Python SDK ### Environmental Requirements -- Python environment: Python 3.6.3, 3.7.x -- FISCO BCOS Node: Refer to [FISCO BCOS Installation](../../quick_start/air_installation.md)Build +- Python environment: python 3.6.3, 3.7.x +- FISCO BCOS Node: Please refer to [FISCO BCOS Installation](../../quick_start/air_installation.md)Build ### Initialize environment(If the python environment meets the requirements, you can skip the) @@ -41,21 +41,21 @@ git clone https://gitee.com/FISCO-BCOS/python-sdk ```eval_rst .. note:: - - ``bash init_env.sh -p "The main function is to install pyenv, and use the pyenv installation name as" python-python for sdk "-3.7.3 Virtual environment - - If the Python environment meets the requirements, you can skip this step. 
+ - "bash init_env.sh -p" mainly installs pyenv and uses pyenv to install a python-3.7.3 virtual environment named "python-sdk" + - If the python environment meets the requirements, you can skip this step - If the script is executed incorrectly, check whether the dependency is installed by referring to [Dependency Software] - - Install Python-3.7.3 May take a long time - - This step only needs to be initialized once, log in again directly using the command "pyenv activate python-sdk "activate" python-sdk "virtual environment can be + - Installing python-3.7.3 may take a long time + - This step only needs to be initialized once; after logging in again, simply use the command "pyenv activate python-sdk" to activate the "python-sdk" virtual environment ``` ```bash # Determine the python version and install the virtual environment of python 3.7.3 for the unqualified python environment, named python-sdk -# If the Python environment meets the requirements, you can skip this step. +# If the Python environment meets the requirements, you can skip this step # If the script is executed incorrectly, check whether the dependency is installed by referring to [Dependency Software] -# Tip: Install Python-3.7.3 May take a long time +# Note: installing python-3.7.3 may take a long time cd python-sdk && bash init_env.sh -p -# Activate Python-sdk virtual environment +# activate python-sdk virtual environment source ~/.bashrc && pyenv activate python-sdk && pip install --upgrade pip ``` @@ -70,10 +70,10 @@ To run the Python SDK on Windows, follow these steps to install the dependent so .. note:: - Microsoft Visual C++ 14.0 is required. 
Get it with "Microsoft Visual C++ Build Tools". Solution: https://visualstudio.microsoft.com/downloads (note that vs 2015 is version 14.0) or https://pan.baidu.com/s/1ZmDUGZjZNgFJ8D14zBu9og extraction code: zrby - - After the solc compiler is downloaded successfully, extract and copy the "solc.exe" file "${python-sdk}\ bin "directory, if python-sdk path is "D:\\open-source\\python-sdk ", the" solc.exe "file copy path is" D:\\open-source\\python-sdk\\bin\\solc.exe`` + - After the solc compiler is downloaded successfully, extract it and copy the "solc.exe" file to the "${python-sdk}\bin" directory; if the python-sdk path is "D:\\open-source\\python-sdk", the "solc.exe" copy path is "D:\\open-source\\python-sdk\\bin\\solc.exe`` ``` -- Install directly [Python-3.7.x](https://www.python.org/downloads/release/python-373/)and [git](https://git-scm.com/download/win)Software +- Directly install [Python-3.7.x](https://www.python.org/downloads/release/python-373/) and [git](https://git-scm.com/download/win) software python environment variable configuration can refer to [here](https://jingyan.baidu.com/article/b0b63dbff271e24a4830708d.html) - [Visual C++ 14.0 Library](https://visualstudio.microsoft.com/downloads) @@ -99,7 +99,7 @@ Modify 'client _ config.py.template' and configure the 'solc' compiler path. For ```bash # Modify client _ config.py.template: -# Configure the solc compiler path. If the storage path of solc is D:\\open-source\\python-sdk\\ bin\\ solc.exe, solc _ path is configured as follows: +# Configure the solc compiler path. 
If the storage path of solc is D:\\ open-source\\ python-sdk\\ bin\\ solc.exe, configure solc _ path as follows: solc_path = "D:\\open-source\\python-sdk\\bin\\solc.exe" # Copy client _ config.py.template to client _ config.py @@ -126,7 +126,7 @@ pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt bash init_env.sh -i ``` -If you do not perform the above initialization steps, you need to manually compile the 'sol' code in the 'contracts /' directory into 'bin' and 'abi' files and place them in the 'contracts' directory before you can deploy and call the corresponding contract.。Contract compilation can be done using [remix](https://remix.ethereum.org) +If you do not perform the above initialization steps, you need to manually compile the 'sol' code in the 'contracts /' directory into 'bin' and 'abi' files and place them in the 'contracts' directory before you can deploy and call the corresponding contract。Contract compilation can be done using [remix](https://remix.ethereum.org) ## Configure Channel Communication Protocol @@ -151,7 +151,7 @@ Get channel _ listen _ port in the config.ini file under the node directory, her jsonrpc_listen_port=8545 ``` -Switch to Python-sdk directory. In the client _ config.py file, change 'channel _ host' to the actual IP address, and 'channel _ port' to the 'channel _ listen _ port' obtained in the previous step: +Switch to the python-sdk directory and modify 'channel _ host' in the client _ config.py file to the actual IP address, and 'channel _ port' to the 'channel _ listen _ port' obtained in the previous step: ```bash channel_host = "127.0.0.1" @@ -161,7 +161,7 @@ channel_port = 20200 **Configure Certificate** ```bash -# If the node and python-The sdks are located on different machines. Copy all related files in the node sdk directory to the bin directory. 
+# If the node and python-sdk are located on different machines, copy all related files in the node's sdk directory to the bin directory # If the node and the SDK are located on the same machine, directly copy the node certificate to the SDK configuration directory cp ~/fisco/nodes/127.0.0.1/sdk/* bin/ ``` @@ -171,20 +171,20 @@ cp ~/fisco/nodes/127.0.0.1/sdk/* bin/ ```eval_rst .. note:: - The "channel _ node _ cert" and "channel _ node _ key" options of "client _ config.py" are used to configure the SDK certificate and private key, respectively - - ``release-2.1.0 ", the SDK certificate and private key are updated to" sdk.crt "and" sdk.key. "Before configuring the certificate path, check the certificate name and private key copied in the previous step, and set" channel _ node _ cert "as the SDK certificate path and" channel _ node _ key "as the SDK private key path. - - FISCO-BCOS 2.5 and later versions have added the restriction that the SDK can only connect to the local node. During operation, you need to confirm the path of the copy certificate, otherwise Jianlian reports an error. + - Since "release-2.1.0", the SDK certificate and private key have been renamed "sdk.crt" and "sdk.key". Before configuring the certificate path, check the certificate and private key names copied in the previous step, set "channel _ node _ cert" to the SDK certificate path, and set "channel _ node _ key" to the SDK private key path + - FISCO-BCOS 2.5 and later versions add the restriction that the SDK can only connect to the local node; confirm the path of the copied certificate during operation, otherwise establishing the connection reports an error ``` Check the path of the sdk certificate copied from the node. If the paths of the sdk certificate and private key are 'bin / sdk.crt' and 'bin / sdk.key', the configuration items in 'client _ config.py' are as follows: ```bash -channel_node_cert = "bin/sdk.crt" # When using the channel protocol, you need to set the sdk certificate. 
If you use the rpc protocol for communication, you can leave it blank. +channel_node_cert = "bin/sdk.crt" # When using the channel protocol, you need to set the sdk certificate. If you use the rpc protocol for communication, you can leave it blank channel_node_key = "bin/sdk.key" # When using the channel protocol, you need to set the sdk private key, such as using the rpc protocol communication, this can be left blank ``` If the paths of the sdk certificate and private key are 'bin / node.crt' and 'bin / node.key' respectively, the relevant configuration items in 'client _ config.py' are as follows: ```bash -channel_node_cert = "bin/node.crt" # When using the channel protocol, you need to set the sdk certificate. If you use the rpc protocol for communication, you can leave it blank. +channel_node_cert = "bin/node.crt" # When using the channel protocol, you need to set the sdk certificate. If you use the rpc protocol for communication, you can leave it blank channel_node_key = "bin/node.key" # When using the channel protocol, you need to set the sdk private key, such as using the rpc protocol communication, this can be left blank ``` @@ -192,7 +192,7 @@ channel_node_key = "bin/node.key" # When using the channel protocol, you need ```eval_rst .. note:: - To run console.py in windows, use '.\ console.py' or 'python console.py'. + To run console.py in windows, use '.\ console.py' or 'python console.py' ``` ```bash @@ -207,7 +207,7 @@ Python SDK introduction [argcomplete](https://argcomplete.readthedocs.io/en/late ```eval_rst .. 
note:: - - This step only needs to be set once, after setting, each login will automatically take effect + -This step only needs to be set once, after setting, each login will automatically take effect - Please in**bash environment** Perform this step under - Currently only supports bash, not zsh ``` diff --git a/3.x/en/docs/sdk/rust_sdk/index.md b/3.x/en/docs/sdk/rust_sdk/index.md index d310777be..47518dee8 100644 --- a/3.x/en/docs/sdk/rust_sdk/index.md +++ b/3.x/en/docs/sdk/rust_sdk/index.md @@ -6,12 +6,12 @@ Tags: "Rust SDK" "Blockchain Application" Rust SDK for FISCO-BCOS ,like some rusted but solid gears , help to build blockchain application with FISCO-BCOS -[FISCO BCOS](https://github.com/FISCO-BCOS/FISCO-BCOS/tree/master)The lightweight version of the Rust SDK, the basic network, the national secret non-national secret algorithm support, the contract resolution capability is relatively complete, with command line interactive console.。 +[FISCO BCOS](https://github.com/FISCO-BCOS/FISCO-BCOS/tree/master)The lightweight version of the Rust SDK, the basic network, the national secret non-national secret algorithm support, the contract resolution capability is relatively complete, with command line interactive console。 -- This project is positioned as a learning / research / programming interest project for learning reference only。If you have formal requirements for use, it is recommended that you develop your own production-level sdk with only partial reference to this project and FISCO BCOS-related implementations, and use it after rigorous testing and verification.。 +-The positioning of this project is a project of learning / research / programming interest, for learning reference only。If you have formal requirements for use, it is recommended that you develop your own production-level sdk with only partial reference to this project and FISCO BCOS-related implementations, and use it after rigorous testing and verification。 -- This project is not the only and 
official fisco bcos rust sdk, the community will have other excellent rust sdk implementation, providing a variety of options and reference possibilities. +- This project is not the only or official fisco bcos rust sdk; the community will have other excellent rust sdk implementations, providing a variety of options and references -- This project temporarily supports FISCO BCOS 2.0.0 and above. Currently, FISCO BCOS 3.0.0 and above are being adapted. +- This project temporarily supports FISCO BCOS 2.0.0 and above. Currently, FISCO BCOS 3.0.0 and above are being adapted Install and configure the environment. For application development using the Rust SDK, see [[github link]](https://github.com/FISCO-BCOS/rust-gears-sdk/) diff --git a/3.x/en/docs/tutorial/air/build_chain.md b/3.x/en/docs/tutorial/air/build_chain.md index 86160e8f1..2a477e59a 100644 --- a/3.x/en/docs/tutorial/air/build_chain.md +++ b/3.x/en/docs/tutorial/air/build_chain.md @@ -6,12 +6,12 @@ Tags: "build _ chain" "Build an Air version of the blockchain network" ```eval_rst .. important:: - Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ + For related software and environment release notes, please check `the compatibility page <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ ``` ```eval_rst .. 
important:: - The build _ chain.sh script goal of this deployment tool is to enable users to use FISCO BCOS Air version as quickly as possible.。 + The build _ chain.sh script goal of this deployment tool is to enable users to use FISCO BCOS Air version as quickly as possible。 ``` FISCO BCOS provides' build _ chain.sh 'script to help users quickly build FISCO BCOS alliance chain。 @@ -32,7 +32,7 @@ curl -#LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v3.6.0/buil # Note: If the speed of accessing git is too slow, try the following command to download the link creation script: curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.6.0/build_chain.sh && chmod u+x build_chain.sh -# Type bash build _ chain.sh-h shows script usage and parameters +# Type bash build _ chain.sh -h to show script usage and parameters $ bash build_chain.sh Usage: -C [Optional] the command, support 'deploy' and 'expand' now, default is deploy @@ -78,30 +78,30 @@ expand node e.g Script command, which supports' deploy 'and' expand '. The default value is' deploy': - `deploy`: For deploying new nodes。 -- 'expand 'for node expansion。 +- 'expand' for node expansion。 ### **'g 'option [**Optional**]** -Set the group ID. If no group ID is set, the default value is group0.。 +Set the group ID. If no group ID is set, the default value is group0。 ### **'c 'option [**Optional**]** -Used to set the chain ID. If it is not set, the default value is chain0.。 +Used to set the chain ID. If it is not set, the default value is chain0。 ### **'v 'option [**Optional**]** Used to specify the binary version used when building FISCO BCOS。build _ chain default download [Release page](https://github.com/FISCO-BCOS/FISCO-BCOS/releases)Latest Version。 ### **'l 'option** -The IP address of the generated node and the number of blockchain nodes deployed on the corresponding IP address. 
The parameter format is' ip1.:nodeNum1, ip2:nodeNum2`。 +The IP address of the generated node and the number of blockchain nodes deployed on the corresponding IP address. The parameter format is' ip1:nodeNum1, ip2:nodeNum2`。 The 'l' option for deploying two nodes on a machine with IP address' 192.168.0.1 'and four nodes on a machine with IP address' 127.0.0.1 'is as follows: `192.168.0.1:2, 127.0.0.1:4` ### **'L 'Options [**Optional**]** -Used to configure to turn on FISCO BCOS light node mode,-You can specify the binary executable path of the Air version light node after L, or enter"download_binary", The latest version of the light node binary is downloaded by default, as shown in the figure below。 +For configuring to enable FISCO BCOS light node mode, you can specify the binary executable path of the Air version light node after -L, or enter"download_binary", The latest version of the light node binary is downloaded by default, as shown in the figure below。 ```shell -# The P2P service of the two nodes occupies ports 30300 and 30301 respectively, and the RPC service occupies ports 20200 and 20201 respectively. -# -l Start the light node module,"download_binary" By default, the latest version of the binary file is pulled. +# The P2P service of the two nodes occupies ports 30300 and 30301 respectively, and the RPC service occupies ports 20200 and 20201 respectively +# -L start light node module,"download_binary" By default, the latest version of the binary file is pulled $ bash build_chain.sh -p 30300,20200 -l 127.0.0.1:2 -L download_binary # Specify Light Node Binary Path $ bash build_chain.sh -p 30300,20200 -l 127.0.0.1:2 -L /bin/fisco-bcos-lightnode @@ -109,18 +109,18 @@ $ bash build_chain.sh -p 30300,20200 -l 127.0.0.1:2 -L /bin/fisco-bcos-lightnode ### **'e 'option [**Optional**]** -Specifies the binary executable path of the FISCO BCOS of the Air version. 
If no path is specified, the latest FISCO BCOS is pulled by default.。 +Specifies the binary executable path of the FISCO BCOS of the Air version. If no path is specified, the latest FISCO BCOS is pulled by default。 ### **'t 'option [**Optional**]** -Specifies the path, function, and-E is similar, if not specified, the latest version of FISCO BCOS is pulled by default。 +Specifies the path of the binary mtail on which the Air version monitoring depends. The function is similar to -e. If you do not specify the path, the latest version of FISCO BCOS is pulled by default。 ### **'o 'option [**Optional**]** Specifies the directory where the generated node configuration is located. The default directory is'. / nodes'。 ### **'p 'option** -Specifies the start port for listening to P2P and RPC services on the node. By default, the start port for P2P services is 30300, and the start port for RPC services is 20200.。 +Specifies the start port for listening to P2P and RPC services on the node. By default, the start port for P2P services is 30300, and the start port for RPC services is 20200。 Specify 30300 as the starting port for P2P service listening;An example of the starting port on which 20200 listens for the RPC service is as follows: @@ -135,7 +135,7 @@ $ bash build_chain.sh -p 30300,20200 -l 127.0.0.1:2 Specify whether to build a full-link state-secret blockchain. 
The state-secret blockchain has the following features: - **Blockchain Ledger Uses State Secret Algorithm**: Using sm2 signature verification algorithm, sm3 hash algorithm and sm4 symmetric encryption and decryption algorithm。 -- **The state-secret SSL connection is used between the SDK client and the node.**。 +- **The state-secret SSL connection is used between the SDK client and the node**。 - **State-secret SSL connection between blockchain nodes**。 An example of building a stand-alone four-node state-secret blockchain node is as follows: @@ -146,14 +146,14 @@ $ bash build_chain.sh -l 127.0.0.1:4 -s -o gm_nodes ### **'H 'Options [**Optional**]** -cipher machine option, which indicates the use of a cipher machine。To turn this option on, add '-S 'means to open the national secret, and then to add'-n 'option is used to load the node.pem file to generate the nodeid of the cipher key。The command to open the cipher machine by loading the certificate file path is as follows +cipher machine option, which indicates the use of a cipher machine。If this option is enabled, add '-s' to enable the state secret, and then add '-n' to load the node.pem file to generate the nodeid of the cipher machine key。The command to enable the cipher machine by loading the certificate file path is as follows ```shell ./build_chain.sh -e ./fisco-bcos -p 30300,20200 -l 127.0.0.1:4 -s -H -n nodeKeyDir/ ``` ### **'n 'Options [**Optional**]** -The node certificate directory option, which indicates that nodeid is generated by loading the node certificate in the folder. 
This option can be used for national and non-national secrets without specifying -s。This option is followed by the certificate folder path。 ### **'c 'Expansion options** @@ -164,7 +164,7 @@ The expansion node option, which is used to specify the configuration file path Scale-out node option, which is used to specify the directory where the CA certificate and CA private key of the scale-out node are located。 ### **'D 'Option [**Optional**]** -Use docker mode to build the FISCO BCOS blockchain. When this option is used, the binary is no longer pulled, but the user is required to start the node machine to install docker and the account has docker permission.。 +Use docker mode to build the FISCO BCOS blockchain. When this option is used, the binary is no longer pulled, but the user is required to start the node machine to install docker and the account has docker permission。 Run the following command in the node directory to start the docker node: @@ -180,23 +180,23 @@ docker run -d --rm --name ${nodePath} -v ${nodePath}:/data --network=host -w=/da ### **'a 'Permission Control Options [**Optional**]** -Optional parameter. When permission control is enabled for a blockchain node, the-The 'a' option specifies the address of the admin account. If this option is not specified, the 'build _ chain' script will generate an account address as the admin account.。 +Optional parameter. When permission control is enabled for a blockchain node, you can use the '-a' option to specify the address of the admin account. If this option is not specified, the 'build _ chain' script generates an account address as the admin account。 ### **'w 'Virtual Machine Options [**Optional**]** -Optional parameter, when the blockchain needs to enable the wasm virtual machine engine, you can use the '-w 'option is enabled. If this option is not specified, EVM is used by default。 +Optional parameter. When the blockchain needs to enable the wasm virtual machine engine, the '-w' option can be enabled. 
If this option is not specified, the EVM is used by default。 ### **'R 'Execution Mode Options [**Optional**]** -Optional parameter, when the blockchain starts serial execution mode, you can use the-The R 'option specifies the execution mode, which defaults to serial mode (true), and if set to false, DMC parallel mode is enabled。 +Optional parameter. When the blockchain starts the serial execution mode, you can use the '-R' option to specify the execution mode. The default value is serial mode (true). If the value is set to false, the DMC parallel mode is enabled。 ### **'k 'Storage Control Options [**Optional**]** -Optional parameter, when you need to set the key-The size of the page in the page store.-K 'option sets the size of the page, if not specified, the default page size is 10240。 +Optional parameter. When you need to set the size of the page in the key-page storage, you can use the '-k' option to set the size of the page. If not specified, the default page size is 10240。 ### **'m 'Node Monitoring Options [**Optional**]** -Optional parameter. When the blockchain node is enabled for node monitoring, the-m 'option to deploy nodes with monitoring. If this option is not selected, only nodes without monitoring are deployed。 +Optional parameter. When node monitoring is enabled for blockchain nodes, you can use the '-m' option to deploy nodes with monitoring. If this option is not selected, only nodes without monitoring are deployed。 An example of deploying an Air version blockchain with monitoring enabled is as follows: @@ -233,15 +233,15 @@ After generating the blockchain node file, start the node (nodes / 127.0.0.1 / s ### **'I'Expansion node monitoring options [**Optional**]** -Optional parameter. 
When the blockchain scaling node needs to be monitored, use the-i 'option to specify expansion node monitoring, parameter format is' ip1:nodeNum1 ', scale out the monitoring of the second node on the machine with IP address' 192.168.0.1 ', the' l 'option example is as follows:' 192.168.0.1:2`。 +Optional parameter. When the blockchain expansion node needs to be monitored, the '-i' option is used to specify the expansion node monitoring. The parameter format is' ip1 ':nodeNum1 ', scale out the monitoring of the second node on the machine with IP address' 192.168.0.1 ', the' l 'option example is as follows:' 192.168.0.1:2`。 ### **'M 'Node Monitoring Profile Options [**Optional**]** -Optional parameter. When the blockchain expansion node needs to be monitored, you can use the-M 'option to specify the relative path of the prometheus configuration file in the nodes directory。 +Optional parameter. When the blockchain scaling node needs to be monitored, you can use the '-M' option to specify the relative path of the prometheus configuration file in the nodes directory。 ### **'z 'Generate node directory package [**Optional**]** -Optional parameter to generate the corresponding compressed package while generating the node directory, which is convenient to copy during multi-machine deployment.。 +Optional parameter to generate the corresponding compressed package while generating the node directory, which is convenient to copy during multi-machine deployment。 ### **'h 'option [**Optional**]** @@ -284,7 +284,7 @@ nodes/ │ │ │ ├── ssl.key # ssl connection certificate private key │ │ │ ├── node.pem # node signature private key file │ │ │ ├── node.nodeid # Node id, hexadecimal representation of the public key -│ │ ├── config.ini # Node master configuration file, configure listening IP, port, certificate, log, etc. 
+│ │ ├── config.ini # Node master configuration file, configure listening IP, port, certificate, log, etc │ │ ├── config.genesis # Genesis profile, consensus algorithm type, consensus timeout, and trading gas limits │ │ ├── nodes.json # The json information of the node, showing the ip address and port of the node. Example:{"nodes": [127.0.0.1:30300]} │ │ ├── start.sh # Startup script to start the node diff --git a/3.x/en/docs/tutorial/air/config.md b/3.x/en/docs/tutorial/air/config.md index 862d61aae..7361170fb 100644 --- a/3.x/en/docs/tutorial/air/config.md +++ b/3.x/en/docs/tutorial/air/config.md @@ -6,13 +6,13 @@ Tags: "Air Blockchain Network" "Configuration" "config.ini" "config.genesis" "Po ```eval_rst .. important:: - Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ + Related Software and Environment Release Notes! Please check `the compatibility notes <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ ``` Air version FISCO BCOS mainly includes creation block configuration file 'config.genesis' and node configuration file 'config.ini': -- 'config.ini ': node configuration file, mainly configuring RPC, P2P, SSL certificate, ledger data path, disk encryption and other information; -- 'config.genesis': Genesis block configuration file,**The Genesis block configuration must be consistent for all nodes in the group.**, **Genesis block configuration file cannot be changed after chain initialization**After the chain is initialized, even if the creation block configuration is changed, the new configuration will not take effect, and the system will still use the genesis configuration when the chain was initialized.。 +- 'config.ini': node configuration file, mainly configuring RPC, P2P, SSL certificate, ledger data path, disk encryption and other information; +- 'config.genesis': Genesis block configuration file,**The Genesis block configuration must be consistent for all nodes in the group**, **Genesis block configuration file cannot be
changed after chain initialization**. After the chain is initialized, even if the genesis block configuration is changed, the new configuration will not take effect, and the system will still use the genesis configuration when the chain was initialized。 ## 1. Genesis block configuration @@ -20,18 +20,18 @@ The node genesis block configuration is in the configuration file 'config.genesi ```eval_rst .. note:: - - **The Genesis block configuration must be consistent for all nodes in the group.** + - **The Genesis block configuration must be consistent for all nodes in the group** - **Genesis block configuration file cannot be changed after chain initialization** - - After the chain is initialized, even if the creation block configuration is changed, the new configuration will not take effect, and the system still uses the genesis configuration when the chain is initialized + - After the chain is initialized, even if the genesis block configuration is changed, the new configuration will not take effect, and the system still uses the genesis configuration when the chain is initialized ``` ### 1.1 Configuration chain information '[chain]' Configure the chain information of the node,**The field information under this configuration should not be changed once it is determined**: -- `[chain].sm_crypto`: Whether the node uses the national secret ledger. The default value is' false '.; +- `[chain].sm_crypto`: Whether the node uses the national secret ledger. The default value is' false '; - '[chain] .group _ id': group ID, default is' group0'; -- '[chain] .chain _ id': the chain ID, which is' chain0 'by default. +- '[chain] .chain _ id': the chain ID, which is' chain0 'by default ```ini [chain] @@ -47,10 +47,10 @@ The node genesis block configuration is in the configuration file 'config.genesi '[consensus]' involves consensus-related configurations, including: -- `[consensus].consensus_type`: Consensus type. The default setting is' pbft '.
Currently, FISCO BCOS v3.x only supports the PBFT consensus algorithm.; +- `[consensus].consensus_type`: Consensus type. The default setting is' pbft '. Currently, FISCO BCOS v3.x only supports the PBFT consensus algorithm; - `[consensus].block_tx_count_limit`: The maximum number of transactions that can be included in each block. The default setting is 1000; - `[consensus].leader_period`: The number of consecutive blocks packed by each leader in the consensus process. The default value is 5; -- '[consensus] .node.idx': list of consensus nodes, configured with the NodeIDs of the participating consensus nodes。 +- '[consensus] .node.idx': the list of consensus nodes. The NodeIDs of the participating consensus nodes are configured。 The configuration example of '[consensus]' is as follows: @@ -87,7 +87,7 @@ FISCO BCOS v3.0.0 designs and implements a compatibility framework that supports '[executor]' configuration items involve the execution of related genesis block configurations, mainly including: - `[executor].is_wasm`: Used to configure the virtual machine type, 'true' indicates the use of WASM virtual machine, 'false' indicates the use of EVM virtual machine, the configuration option is not dynamically adjustable, the default is' false '; -- `[executor].is_auth_check`: The configuration switch for permission control. 'true' indicates that permission control is enabled, and 'false' indicates that permission control is disabled. This configuration option cannot be dynamically adjusted. The permission control function is disabled by default.; +- `[executor].is_auth_check`: The configuration switch for permission control. 'true' indicates that permission control is enabled, and 'false' indicates that permission control is disabled. This configuration option cannot be dynamically adjusted. 
The permission control function is disabled by default; - `[executor].is_serial_execute`: Transaction execution serial and parallel mode configuration switch, 'true' indicates to enter serial execution mode, 'false' indicates to enter DMC parallel execution mode, this configuration option cannot be dynamically adjusted, the default is' false '; - `[executor].auth_admin_account`: Permission administrator account address, only used in permission control scenarios。 @@ -97,20 +97,20 @@ FISCO BCOS v3.0.0 designs and implements a compatibility framework that supports ```eval_rst .. important:: - - The public IP addresses of cloud hosts are all virtual IP addresses. If listen _ ip is filled in, the binding fails. - - RPC / P2P listening port must be at 1024-65535 range, and cannot conflict with other application listening ports on the machine - - To facilitate development and experience, the listen _ ip reference configuration is' 0.0.0.0 '. For security reasons, please modify it to a secure listening address according to the actual business network conditions, such as the intranet IP or a specific extranet IP + - The public IP addresses of cloud hosts are all virtual IP addresses. If the public IP address is filled in as listen _ ip, the binding fails + - The RPC / P2P listening port must be in the 1024-65535 range and cannot conflict with other application listening ports on the machine + - For the convenience of development and experience, the listen _ ip reference configuration is' 0.0.0.0 '. For security reasons, please modify it to a secure listening address according to the actual business network situation, such as an intranet IP or a specific external IP ``` ### 2.1 Configuring P2P P2P related configurations include: -- '[p2p] .listen _ ip': the P2P listening IP address. The default setting is' 0.0.0.0'; +- '[p2p] .listen _ ip': the IP address of the P2P listener.
The default setting is' 0.0.0.0'; - '[p2p] .listen _ port': Node P2P listening port; -- `[p2p].sm_ssl`: Whether the SSL connection between nodes uses the state-secret SSL protocol, 'true' indicates that the state-secret SSL connection is enabled; 'false 'indicates that a non-state-secret SSL connection is used. The default value is' false '.; -- '[p2p] .nodes _ path': the directory where the node connection information file 'nodes.json' is located. The default value is the current folder.; -- '[p2p] .nodes _ file': Path to the 'P2P' connection information file 'nodes.json'。 +- `[p2p].sm_ssl`: Whether the SSL connection between nodes uses the state-secret SSL protocol, 'true' indicates that the state-secret SSL connection is enabled; 'false 'indicates that a non-state-secret SSL connection is used. The default value is' false '; +- '[p2p] .nodes _ path': the directory where the node connection information file 'nodes.json' is located. The default value is the current folder; +- '[p2p] .nodes _ file': the path to the 'P2P' connection information file 'nodes.json'。 An example P2P configuration is as follows: @@ -139,10 +139,10 @@ Example: 'P2P 'supports configurable network connections and dynamic addition / deletion of connection nodes during service operation. 
The process is as follows: - Modify the connection information in the '[p2p] .nodes _ file' configuration -- Send signal to service process' USR1': +- Send signal 'USR1' to service process: ```shell -kill -USR1 Gateway Node pid +kill -USR1 gateway node pid ``` Service reloads' P2P 'connection information。 @@ -154,7 +154,7 @@ The RPC configuration options are located at '[rpc]' and mainly include: - `[rpc].listen_ip`: RPC listens on the IP address, which is set to '0.0.0.0' by default to facilitate cross-machine deployment of nodes and SDKs; - `[rpc].listen_port`: RPC listening port, default setting is' 20200'; - `[rpc].thread_count`: Number of RPC service threads, 4 by default; -- `[rpc].sm_ssl`: Whether the connection between the SDK and the node uses the state-secret SSL connection. True indicates that the state-secret SSL connection is enabled.; 'false 'indicates that a non-state secret SSL connection is used. The default value is' false '. +- `[rpc].sm_ssl`: Whether the connection between the SDK and the node uses the state-secret SSL connection. 'true' indicates that the state-secret SSL connection is enabled; 'false' indicates that a non-state secret SSL connection is used. The default value is' false ' An example RPC configuration is as follows: @@ -171,12 +171,12 @@ An example RPC configuration is as follows: ### 2.3 Configuring Certificate Information -For security reasons, SSL is used to encrypt communication between FISCO BCOS nodes. Configure the certificate information of the SSL connection. +For security reasons, SSL is used to encrypt communication between FISCO BCOS nodes. Configure the certificate information of the SSL connection - `[cert].ca_path`: Certificate path, default is' conf'; - `[cert].ca_cert`: ca certificate name, default is' ca.crt'; - `[cert].node_key`: The private key of the node SSL connection. The default value is' ssl.key'; -- `[cert].node_cert`: The SSL connection certificate of the node. The default value is' ssl.cert '.
+- `[cert].node_cert`: The SSL connection certificate of the node. The default value is' ssl.cert ' ```ini [cert] @@ -192,7 +192,7 @@ For security reasons, SSL is used to encrypt communication between FISCO BCOS no '[security]' Configure the private key path, which is mainly used for message signing of the consensus module, as follows: -- '[security] .private _ key _ path': path to the private key file. The default value is' conf / node.pem'。 +- '[security] .private _ key _ path': path to the private key file. Default value: 'conf / node.pem'。 ```ini [security] @@ -206,7 +206,7 @@ Considering that too fast packaging of PBFT modules will result in packaging onl ```eval_rst .. important:: - "min _ seal _ time" defaults to 500ms - - "min _ seal _ time" cannot exceed 1000ms. If the value exceeds 1000ms, the default min _ seal _ time is 500ms. + - "min _ seal _ time" cannot exceed 1000ms. If the value exceeds 1000ms, the default min _ seal _ time is 500ms ``` @@ -220,8 +220,8 @@ Considering that too fast packaging of PBFT modules will result in packaging onl The storage configuration is located at '[storage]' and includes: -- `[storage].data_path`: The data storage path of the blockchain node. The default value is data.; -- `[storage].enable_cache`: Whether to enable caching. The default value is true.; +- `[storage].data_path`: The data storage path of the blockchain node. The default value is data; +- `[storage].enable_cache`: Whether to enable caching. The default value is true; - `[storage].key_page_size`: In the KeyPage storage scheme, the storage page size, in bytes, is required to be no less than '4096'(4KB)default is' 10240'(10KB); ```ini @@ -235,7 +235,7 @@ The storage configuration is located at '[storage]' and includes: The drop disk encryption configuration option is located at '[storage _ security]': -- `[storage_security].enable`: Whether to enable disk encryption. 
Disk encryption is disabled by default.; +- `[storage_security].enable`: Whether to enable disk encryption. Disk encryption is disabled by default; - `[storage_security].key_manager_url`: [Key Manager] is configured for 'key _ center _ url' when encryption is enabled(../../design/storage_security.md)url to get the data encryption and decryption key; - `[storage_security].cipher_data_key`: Private key for data drop encryption。 @@ -254,8 +254,8 @@ The trading pool configuration option is located at '[txpool]': - `[txpool].limit`: Capacity limit of trading pool, default is' 15000'; - `[txpool].notify_worker_num`: Number of transaction notification threads, 2 by default; -- `[txpool].verify_worker_num`: Number of transaction verification threads. The default value is the number of machine CPU cores.; -- `[txpool].txs_expiration_time`: The transaction expiration time, in seconds. The default value is 10 minutes. That is, transactions that have not been packaged by the consensus module for more than 10 minutes will be discarded directly.。 +- `[txpool].verify_worker_num`: Number of transaction verification threads. The default value is the number of machine CPU cores; +- `[txpool].txs_expiration_time`: The transaction expiration time, in seconds. The default value is 10 minutes. That is, transactions that have not been packaged by the consensus module for more than 10 minutes will be discarded directly。 ```ini [txpool] @@ -275,8 +275,8 @@ FISCO BCOS supports powerful [boostlog](https://www.boost.org/doc/libs/1_63_0/li - `[log].enable`: Enables / disables logging, set to 'true' to enable logging;Set to 'false' to disable logging,**The default setting is true, and performance tests can set this option to 'false' to reduce the impact of printing logs on test results** - `[log].log_path`:Log File Path。 -- `[log].level`: Log level. Currently, there are five log levels: 'trace', 'debug', 'info', 'warning', and 'error'. 
After a log level is set, logs greater than or equal to the log level are entered in the log file.。 -'[log] .max _ log _ file _ size': the maximum size of each log file.**The unit of measurement is MB, the default is 200MB**。 +- `[log].level`: Log level. Currently, there are five log levels: 'trace', 'debug', 'info', 'warning', and 'error'. After a log level is set, logs greater than or equal to the log level are entered in the log file. The log levels from high to low are `error > warning > info > debug > trace`。 +- '[log] .max _ log _ file _ size': the maximum capacity of each log file. **The unit of measurement is MB, the default is 200MB**。 The log configuration example is as follows: @@ -292,15 +292,15 @@ The log configuration example is as follows: #### v3.6.0 New Configuration Item -- 'log.format ': Configure the format of each log. The keywords are wrapped in%. Supported keywords include' LineID, TimeStamp, ProcessID, ThreadName, ThreadID, and Message '. -- 'log.enable _ rotate _ by _ hour ': The default value is true. When' false 'is set to' log.log _ name _ pattern, 'log.rotate _ name _ pattern,' log.archive _ path, 'log.compress _ archive _ file,' log.max _ archive _ files, 'log.max _ archive _ size,' or 'log.max _ archive _ size', 'log.min _ -- 'log.log _ name _ pattern ': The file name mode of the log file. You can configure a string and support formatting characters. The% prefix, Y, m, d, H, M, and S represent the year, month, day, minute, and second. N represents a monotonically increasing number. You can use a fixed-length number for% 5N. -- 'log.rotate _ name _ pattern ': the file name of the log file generated after scrolling. The supported format characters are the same as log.log _ name _ pattern -- 'log.archive _ path ': Archive folder for history log files -- 'log.compress _ archive _ file ': Whether to compress archived log files +- 'log.format': Configure the format of each log. The keywords are wrapped in%.
Supported keywords include 'LineID, TimeStamp, ProcessID, ThreadName, ThreadID, and Message' +- 'log.enable _ rotate _ by _ hour': The default value is true. When it is set to 'false', the configuration items 'log.log _ name _ pattern', 'log.rotate _ name _ pattern', 'log.archive _ path', 'log.compress _ archive _ file', 'log.max _ archive _ files', 'log.max _ archive _ size', and 'log.min _ free _ space' take effect +- 'log.log _ name _ pattern': the file name pattern of the log file. It can be configured as a string and supports format characters with a % prefix: Y, m, d, H, M, and S represent the year, month, day, hour, minute, and second; N represents a monotonically increasing number, and a fixed-length number such as % 5N can be used +- 'log.rotate _ name _ pattern': the file name of the log file generated after rotation. The supported format characters are the same as log.log _ name _ pattern +- 'log.archive _ path': archive folder for history log files +- 'log.compress _ archive _ file': whether to compress archived log files - 'log.max _ archive _ files': the maximum number of files in the archive folder, 0 is unlimited -- 'log.max _ archive _ size ': the maximum hard disk space limit of the archive folder, in MB, 0 is unlimited -- 'log.min _ free _ space ': Minimum archive folder space, 0 by default +- 'log.max _ archive _ size': the maximum hard disk space limit of the archive folder, in MB, 0 is unlimited +- 'log.min _ free _ space': the minimum free space of the archive folder, which is 0 by default ### 2.9 Gateway module current limiting @@ -308,9 +308,9 @@ The gateway module supports configuring the function of traffic rate limiting in Configure the following according to your needs to achieve -- Outgoing Bandwidth and Incoming Bandwidth Limiting -- Restriction of specific IP and group -- Excluding Current Limiting for Specific Modules +- Outgoing bandwidth and inbound bandwidth throttling +- Rate limiting for specific IPs and groups +- Excluding specific modules from rate limiting The configuration in the process-dependent
config.ini is as follows (please uncomment some items as required) diff --git a/3.x/en/docs/tutorial/air/expand_node.md b/3.x/en/docs/tutorial/air/expand_node.md index fb5840e70..5619d5490 100644 --- a/3.x/en/docs/tutorial/air/expand_node.md +++ b/3.x/en/docs/tutorial/air/expand_node.md @@ -6,14 +6,14 @@ Tags: "Air version of the blockchain network" "" Expansion "" ```eval_rst .. important:: - Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ + Related Software and Environment Release Notes! Please check `the compatibility notes <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ ``` -'build _ chain.sh 'provides the function of scaling new nodes. In this chapter, [Build the first blockchain network](../../quick_start/air_installation.md)A new blockchain node is expanded on the basis of FISCO BCOS to help users master the expansion steps of the Air version FISCO BCOS blockchain node.。 +'build _ chain.sh 'provides the function of scaling new nodes. In this chapter, a new blockchain node is expanded on the basis of [Build the first blockchain network](../../quick_start/air_installation.md) to help users master the expansion steps of the Air version FISCO BCOS blockchain node。 ```eval_rst .. note:: - Before performing node scaling, refer to 'Building the first blockchain network <.. /.. / quick _ start / air _ installation.html >' _ Deploy the Air version blockchain。 + Before performing node scaling, refer to `Building the first blockchain network <../../quick_start/air_installation.html>`_ to deploy the Air version blockchain。 ``` ## 1. Prepare documents required for expansion @@ -27,13 +27,13 @@ When scaling the Air version of the blockchain, you need to prepare a certificat ```eval_rst .. note:: - The root certificate of the Air version blockchain node is located in the directory generated during the connection, and you can enter the folder generated when the node is built.(For example: 'Build the first blockchain network <.. /..
/ quick _ start / air _ installation.html >' _ The generated node configuration folder is' nodes'), via "find.-name ca "Find the root certificate of the chain + The root certificate of the Air version blockchain node is located in the directory generated when the chain is built. You can enter the folder generated when the node is built (for example, the node configuration folder generated by `Building the first blockchain network <../../quick_start/air_installation.html>`_ is 'nodes') and find the root certificate of the chain through "find . -name ca" ``` Here to [build the first blockchain network](../quick_start.md)For example, scale out a new node 'node4' based on 'node0': ```shell -# Enter the operation directory(Note: Before performing this operation, please refer to [Building the First Blockchain Network Node] to deploy an Air version FISCO BCOS blockchain.) +# Enter the operation directory(Note: Before performing this operation, please refer to [Building the First Blockchain Network Node] to deploy an Air version FISCO BCOS blockchain) $ cd ~/fisco # Create a directory to store the expansion configuration @@ -42,7 +42,7 @@ $ mkdir config # Copy the root certificate and root certificate private key $ cp -r nodes/ca config -# Copy the node configuration file config.ini, the creation block configuration file config.genesis, and the node connection configuration file nodes.json from the expanded node node0. +# Copy the node configuration file config.ini, the genesis block configuration file config.genesis, and the node connection configuration file nodes.json from the expanded node node0 $ cp nodes/127.0.0.1/node0/config.ini config/ $ cp nodes/127.0.0.1/node0/config.genesis config/ $ cp nodes/127.0.0.1/node0/nodes.json config/nodes.json.tmp @@ -66,8 +66,8 @@ $ cat config/nodes.json ```eval_rst .. note:: - - Please make sure that the "30304" and "20204" ports of the machine are not occupied - - Please refer to 'Build the first blockchain network <.. /..
/ quick _ start / air _ installation.html >' _ Download the build script 'build _ chain.sh', 'build _ chain' Use can refer to 'Here <. / build _ chain.html >' _ + - Please make sure that the "30304" and "20204" ports of the machine are not occupied + - Please refer to `Building the first blockchain network <../../quick_start/air_installation.html>`_ to download the build script 'build _ chain.sh'. Usage of 'build _ chain' can be found `here <./build_chain.html>`_ ``` **Step 1: Generate the scaling node configuration** @@ -78,7 +78,7 @@ After the configuration file is prepared, use the link creation script 'build _ # Enter the operation directory cd ~/fisco -# Call build _ chain.sh to expand the node. The new node is expanded to the nodes / 127.0.0.1 / node4 directory. +# Call build _ chain.sh to expand the node. The new node is expanded to the nodes / 127.0.0.1 / node4 directory # -c: Specify the paths of config.ini, config.genesis, and nodes.json # -d: Specify the path to the CA certificate and private key # -o: Specify the directory where the expansion node configuration is located @@ -135,8 +135,8 @@ bash nodes/127.0.0.1/node4/start.sh ```eval_rst .. note:: - - Before performing this step, start all nodes, including the expansion node - - Please refer to '[Configuration and Use Console] for building the first blockchain network <.. /.. / quick _ start / air _ installation.html#id7 > '_ Download Console + - Start all nodes, including the expansion node, before performing this step + - Please refer to `Configure and use the console in Building the first blockchain network <../../quick_start/air_installation.html#id7>`_ to download the console ``` **Step 1: Check if all nodes are started** @@ -196,7 +196,7 @@ Type 'help' or 'h' for help. Type 'quit' or 'q' to quit console. ```eval_rst ..
note:: - In order to ensure that the new node does not affect the consensus, you must first add the expansion node as an observation node, and then add it to the consensus node when the expansion node is synchronized to the latest block.。 + In order to ensure that the new node does not affect the consensus, you must first add the expansion node as an observation node, and then add it to the consensus node when the expansion node is synchronized to the latest block。 ``` ```shell diff --git a/3.x/en/docs/tutorial/air/index.md b/3.x/en/docs/tutorial/air/index.md index f97e58360..8246c1bc9 100644 --- a/3.x/en/docs/tutorial/air/index.md +++ b/3.x/en/docs/tutorial/air/index.md @@ -7,12 +7,12 @@ Tags: "Air FISCO BCOS" "" Expansion "" Configuration "" Deployment Tools "" ```eval_rst .. important:: - Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ + Related Software and Environment Release Notes! Please check `the compatibility notes <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ ``` ```eval_rst ..
note:: - Air version FISCO BCOS with all-in-The one encapsulation mode compiles all modules into a binary (process), a process is a blockchain node, including all functional modules such as network, consensus, access, etc., suitable for beginners, functional verification, POC products.。 + Air version FISCO BCOS adopts the all-in-one encapsulation mode and compiles all modules into one binary (process). A process is a blockchain node that includes the network, consensus, access, and other functional modules. It is suitable for beginners, function verification, and POC products。 ``` diff --git a/3.x/en/docs/tutorial/air/multihost.md b/3.x/en/docs/tutorial/air/multihost.md index 6bfd96ecd..59cc7c0cf 100644 --- a/3.x/en/docs/tutorial/air/multihost.md +++ b/3.x/en/docs/tutorial/air/multihost.md @@ -4,7 +4,7 @@ Tags: "Build a multi-machine blockchain network" "Blockchain tutorial" "HelloWor ---- -Building the first blockchain network(../../quick_start/air_installation.md)This chapter describes in detail how to build a stand-alone 4-node blockchain network in the Air version. This chapter takes building a multi-machine 4-node blockchain network as an example to describe in detail how to deploy FISCO BCOS on multiple machines.。 +[Building the first blockchain network](../../quick_start/air_installation.md) describes in detail how to build a stand-alone 4-node Air version blockchain network. This chapter takes building a multi-machine 4-node blockchain network as an example to describe in detail how to deploy FISCO BCOS on multiple machines。 ## 1. Build a multi-machine 4-node blockchain network @@ -14,20 +14,20 @@ In this tutorial, assume that the IP addresses of the four physical machines are ```eval_rst .. note:: - - Please ensure that the "30300" and "20200" ports of each machine are not occupied。 - - Please make sure that each machine has the network access rights of "30300" and "20200" ports.
- - Make sure that the machine that generates the blockchain node configuration can access the external network(Used to download the chain building script) + - Please make sure that the "30300" and "20200" ports of each machine are not occupied。 + - Make sure that each machine has network access to ports "30300" and "20200" + - Ensure that the machine that generates the blockchain node configuration can access the external network(Used to download the chain building script) ``` ### Step 1. Download the deployment tool and generate the multi-machine node configuration -**Create an operation path and download fisco-bcos, development and deployment tool build _ chain** +**Create an operation path and download the fisco-bcos binary and the development and deployment tool build _ chain** ```bash # Create operation path ~ / fisco mkdir -p ~/fisco && cd ~/fisco -# download _ bin.sh, download fisco-bcos binary, v specifies FISCO-BCOS Version +# Run download _ bin.sh to download the fisco-bcos binary; -v specifies the FISCO-BCOS version ./download_bin.sh -v 3.4.0 # Download the development and deployment tool build _ chain @@ -84,7 +84,7 @@ After generating the blockchain node configuration, you need to copy each node c ```bash # Create an operating directory for each machine ~ / fisco -# (Note: Use the FICO user here; In practice, you can use your own account for similar operations, and the IP needs to be replaced with your own machine IP.) +# (Note: Use the fisco user here; In practice, you can use your own account for similar operations, and the IP needs to be replaced with your own machine IP) ssh fisco@196.168.0.1 "mkdir -p ~/fisco" ssh fisco@196.168.0.2 "mkdir -p ~/fisco" ssh fisco@196.168.0.3 "mkdir -p ~/fisco" @@ -103,14 +103,14 @@ scp -r 4nodes/196.168.0.4/ fisco@196.168.0.4:~/fisco/196.168.0.4 ### Step 3. Start the multi-machine 4-node blockchain system -After the configuration of the blockchain node is copied successfully, you need to start all the nodes.
You can start the blockchain node remotely by initiating an 'ssh' operation on a machine, or you can log on to all the physical machines and start the blockchain node on the corresponding physical machine.。 +After the configuration of the blockchain node is copied successfully, you need to start all the nodes. You can start the blockchain node remotely by initiating an 'ssh' operation on a machine, or you can log on to all the physical machines and start the blockchain node on the corresponding physical machine。 **Method one: Start a blockchain node remotely** The node start command is also initiated from '196.168.0.1', as follows: ```bash -# (Note: Use the FICO user here; In practice, you can use your own account for similar operations, and the IP needs to be replaced with your own machine IP.) +# (Note: Use the fisco user here; In practice, you can use your own account for similar operations, and the IP needs to be replaced with your own machine IP) # Launch the blockchain node deployed on the 196.168.0.1 machine $ ssh fisco@196.168.0.1 "bash ~/fisco/196.168.0.1/start_all.sh" try to start node0 @@ -135,7 +135,7 @@ try to start node0 **Method two: Log in to the machine directly to start the blockchain node** ```bash -# (Note: Use the FICO user here; In practice, you can use your own account for similar operations, and the IP needs to be replaced with your own machine IP.) +# (Note: Use the fisco user here; In practice, you can use your own account for similar operations, and the IP needs to be replaced with your own machine IP) # Log in to 196.168.0.1 and launch the blockchain node $ ssh fisco@196.168.0.1 $ bash ~/fisco/196.168.0.1/start_all.sh @@ -154,7 +154,7 @@ $ bash ~/fisco/196.168.0.4/start_all.sh ``` -At this point, a multi-machine 4-node blockchain system has been built.
Next, you need to check whether the blockchain nodes are working properly。 ### Step 4. Check the blockchain node @@ -182,7 +182,7 @@ Log on to each machine and run the following command to determine whether the no tail -f ~/fisco/*/node0/log/* |grep -i connected ``` -Normally, the connection information will be output continuously. From the output, it can be seen that the node is connected to other machine nodes normally, and transactions can be initiated on the console.。 +Normally, the connection information will be output continuously. From the output, it can be seen that the node is connected to other machine nodes normally, and transactions can be initiated on the console。 ```bash info|2019-01-21 17:30:58.316769| [P2PService][Service] heartBeat,connected count=3 @@ -192,11 +192,11 @@ info|2019-01-21 17:31:18.317105| [P2PService][Service] heartBeat,connected count ## 2. Configure and use the console -This chapter describes how to configure a console for a multi-machine 4-node blockchain system and use the console to initiate transactions for the multi-machine blockchain system.。 +This chapter describes how to configure a console for a multi-machine 4-node blockchain system and use the console to initiate transactions for the multi-machine blockchain system。 ### Step 1. Prepare to rely on -- Install Java (Java 14 is recommended). 
+- Install Java (Java 14 is recommended) ```bash # Ubuntu system installation java @@ -347,4 +347,4 @@ Event: {} [group0]: /> exit ``` -At this point, we have completed the construction of the multi-machine blockchain network, the configuration and use of the console.。 +At this point, we have completed the construction of the multi-machine blockchain network and the configuration and use of the console. diff --git a/3.x/en/docs/tutorial/air/storage_security.md b/3.x/en/docs/tutorial/air/storage_security.md index 948f7a77d..13d868882 100644 --- a/3.x/en/docs/tutorial/air/storage_security.md +++ b/3.x/en/docs/tutorial/air/storage_security.md @@ -6,7 +6,7 @@ Tags: "Storage Security" "Storage Encryption" "Drop Disk Encryption" Alliance chain data, visible only to members within the alliance。Drop disk encryption ensures the security of data running the alliance chain on the hard disk。Once the hard drive is taken out of the Alliance chain's own intranet environment, the data cannot be decrypted。 -Disk encryption is the encryption of the contents of the node stored on the hard disk, including: contract data, the node's private key.。 +Disk encryption is the encryption of the node's contents stored on the hard disk, including contract data and the node's private key. For a specific introduction to falling disk encryption, please refer to: [Introduction to falling disk encryption](../../design/storage/storage_security.md) @@ -16,7 +16,7 @@ Each organization has a Key Manager. For more information, see [Key Manager Gith ```eval_rst .. important:: - If the node is in the state secret version, the key manager must be started in the state secret mode. Here, the non-state secret version is used as an example.。 + If the node is in the state secret version, the key manager must be started in state secret mode. Here, the non-state secret version is used as an example. ``` ## 2.
Generate blockchain nodes @@ -30,7 +30,7 @@ curl -#LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v3.6.0/buil ```eval_rst .. note:: - - If the build _ chain.sh script cannot be downloaded for a long time due to network problems, try 'curl-#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.6.0/build_chain.sh && chmod u+x build_chain.sh` + - If the build_chain.sh script cannot be downloaded for a long time due to network problems, please try `curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.6.0/build_chain.sh && chmod u+x build_chain.sh` ``` Deploy four nodes: @@ -46,7 +46,7 @@ bash build_chain.sh -l 127.0.0.1:4 ## 3. Start Key Manager -Start 'key by referring to the following command-manager`。If 'key' has not been deployed-manager '. For details, see Deploy Key Manager in the previous section of this document. +Start 'key-manager' with the following command. If you have not deployed 'key-manager' yet, first deploy it by referring to the 'Deploy Key Manager' section earlier in this document ```shell # Parameters: port, superkey @@ -66,7 +66,7 @@ Start successfully, print log ```eval_rst .. 
important:: - The node on which the dataKey is configured must be a newly generated node that has not been started.。If the node that has been started to disable the disk encryption mode modifies the configuration file to enable disk encryption, the node will not start normally, please be cautious。 + The node on which the dataKey is configured must be a newly generated node that has not been started. If a node that was started with disk encryption disabled modifies its configuration file to enable disk encryption, the node will not start normally; please be cautious. ``` Execute the script, define 'dataKey', and obtain 'cipherDataKey' @@ -83,7 +83,7 @@ key_manager_url=127.0.0.1:8150 cipher_data_key=ed157f4588b86d61a2e1745efe71e6ea ``` -The script automatically prints the ini configuration required for disk encryption.。 +The script automatically prints the ini configuration required for disk encryption. The cipherDataKey of the node is obtained: "'cipher _ data _ key = ed157f4588b86d61a2e1745efe71e6ea"' Write the resulting encrypted ini configuration to the node configuration file ([config.ini](../tutorial/air/config.md)) in。 @@ -100,9 +100,9 @@ key_manager_url=127.0.0.1:8150 cipher_data_key=ed157f4588b86d61a2e1745efe71e6ea ``` -## 5. Encrypt the node private key. +## 5. Encrypt the node private key -Execute the script to encrypt the private keys of all nodes. The node 'node0' is used as an example.。 +Execute the script to encrypt the private keys of all nodes. The node 'node0' is used as an example. ```bash $ cd key-manager/scripts @@ -118,7 +118,7 @@ $ bash encrypt_node_key.sh 127.0.0.1 8150 ../../nodes/127.0.0.1/node0/conf/node. [INFO] "nodes/127.0.0.1/node0/conf/node.pem" encrypted! ``` -After execution, the node private key is automatically encrypted. 
The files before encryption are backed up to the files' ssl.key.bak.xxxxxx 'and' node.pem.bak.xxxxxx '.**Keep the backup private key safe and delete the backup private key generated on the node** +After execution, the node private key is automatically encrypted. The files before encryption are backed up as 'ssl.key.bak.xxxxxx' and 'node.pem.bak.xxxxxx'. **Keep the backup private keys safe and delete the backup private keys generated on the node** If you view 'ssl.key', you can see that it has been encrypted as ciphertext @@ -129,7 +129,7 @@ If you view 'ssl.key', you can see that it has been encrypted as ciphertext **Note: All files that need to be encrypted are listed below。Node cannot start without encryption。** - - non-state secret edition + - Non-State Secret Edition - conf/ssl.key - conf/node.pem - State Secret Edition diff --git a/3.x/en/docs/tutorial/air/use_hsm.md b/3.x/en/docs/tutorial/air/use_hsm.md index 97bbcbaef..961664a2d 100644 --- a/3.x/en/docs/tutorial/air/use_hsm.md +++ b/3.x/en/docs/tutorial/air/use_hsm.md @@ -6,27 +6,27 @@ Tags: "hardware encryption" "" HSM "" "cipher machine" " The FISCO BCOS 3.3.0 Hardware Secure Module (HSM) adds the following features: 1. build _ chain.sh Loads the node.pem file of the built-in key of the cipher machine and builds a blockchain using the cipher machine。 -2. java-sdk adds a password machine configuration item to use the password machine to verify the transaction signature;(Specific reference [java-sdk configuration](../../sdk/java_sdk/config.md)) -This tutorial mainly introduces how to configure FISCO BCOS version 3.3.0 on the node side to use cipher machine.。 +2. Java-sdk adds a cipher machine configuration item and uses the cipher machine to verify the transaction signature; (specific reference: [java-sdk configuration](../../sdk/java_sdk/config.md)) +This tutorial mainly introduces how to configure FISCO BCOS version 3.3.0 on the node side to use a cipher machine. ## 1. 
Node version -- When your node needs to use the hardware encryption module, you need to set the node configuration item to enable the cipher machine encryption function, where the signature of the node is verified with the key in the key machine.。All key pairs are stored in the password machine, and no key pairs remain in memory, which improves the security of key storage.。 +- When your node needs to use the hardware encryption module, you need to enable the cipher machine encryption function in the node configuration items, so that the node's signatures are verified with the key inside the cipher machine. All key pairs are stored in the cipher machine and none remain in memory, which improves the security of key storage. ## 2. Install password card / password machine -To build a state secret chain using a hardware cryptographic module, you need to install a password card or password machine on the server where the node is located.。FISCO BCOS supports GMT0018-Cipher Card / Cipher 2012 Cipher Device Application Interface Specification。 +To build a state secret chain using a hardware cryptographic module, you need to install a password card or password machine on the server where the node is located. FISCO BCOS supports the GMT0018-2012 Cipher Device Application Interface Specification for cipher cards and cipher machines. -### Step 1. Please install the password machine according to your password card / password machine installation guidelines. -Installation complies with GMT0018-Dynamic library files for the 2012 specification, such as. +### Step 1. Please install the password machine according to your password card / password machine installation guidelines +Install dynamic library files that comply with the GMT0018-2012 specification, such as: 1. 
Place the dynamic library file "libgmt0018.so" under the default library search path (windows operating system is in .dll format), and ensure that the user has read and execute permissions。The path of the dynamic library can be configured in the configuration item 'security' of the node's configuration file 'config.ini'。For example, it can be placed in the "/ usr / local / lib" directory of the Ubuntu operating system and placed in the CentOS operating system, "/ lib64" or "/ usr / lib64" directory。 -### Step 2. Please initialize the password card / password machine and run its test program to ensure that it functions properly. -Initialize the device according to the password card / password machine manufacturer's guidelines and create the internal key you need。Then run the test program to ensure that the function is normal and that the GMT0018 provided by the cipher machine can be called correctly through the libgmt0018.so dynamic library-2012 interface method。 +### Step 2. Please initialize the password card / password machine and run its test program to ensure that it functions properly +Initialize the device according to the password card / password machine manufacturer's guidelines and create the internal keys you need. Then run the test program to ensure that it functions normally and that the GMT0018-2012 interface provided by the cipher machine can be called correctly through the libgmt0018.so dynamic library. -## 3. Create a FISCO BCOS blockchain node using a cipher machine. +## 3. Create a FISCO BCOS blockchain node using a cipher machine ### Step 1. 
Dynamic Binary of Nodes -FISCO BCOS dynamic binary is required to load the dynamic library file of the password card.。Users can download the dynamic binaries provided by FISCO BCOS, or manually compile the node dynamic binaries themselves in the appropriate environment。Use source code to compile binary, refer to [source code compilation](../../tutorial/compile_binary.md)。 -Note:**Compile**link, you need to specify a compiled dynamic binary, that is, you do not specify '-DBUILD_STATIC=ON` +A dynamic FISCO BCOS binary is required to load the password card's dynamic library file. Users can download the dynamic binaries provided by FISCO BCOS, or compile the node's dynamic binary themselves in an appropriate environment. To compile the binary from source code, refer to [source code compilation](../../tutorial/compile_binary.md). +Note: In the **compile** step, you need to build a dynamic binary, that is, do not specify '-DBUILD_STATIC=ON' ```shell # Create Compile Directory mkdir -p build && cd build @@ -36,7 +36,7 @@ cmake .. || cat *.log ### Step 2. Generate State Secret Node Such as cipher key generation, there are two ways: -1. After generating the node key through the tool, import the node key into the cipher machine and record the index position。For example, import the key certificates of node0 and node1 into the key index positions 43 and 44 of the cipher machine.; +1. After generating the node key through the tool, import the node key into the cipher machine and record the index position. For example, import the key certificates of node0 and node1 into key index positions 43 and 44 of the cipher machine; 2. 
Through the cipher machine management program, generate the built-in key of the cipher machine and record the index position; the cipher machine's **public-private key pair** is used for **signature verification**; @@ -47,7 +47,7 @@ cd ~/fisco curl -LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v3.6.0/build_chain.sh && chmod u+x build_chain.sh ``` -In the build _ chain directory, create a folder (for example, nodeKeyDir) to store the node.pem file for the cipher key.(The number of certificates is consistent with the number of nodes built.)。 +In the build_chain directory, create a folder (for example, nodeKeyDir) to store the node.pem files for the cipher keys (the number of certificates is consistent with the number of nodes built). ```bash ./build_chain.sh -e ~/fisco/FISCO-BCOS/build/fisco-bcos-air/fisco-bcos -p 30300,20200 -l 127.0.0.1:4 -s -H -n nodeKeyDir/ ``` @@ -56,7 +56,7 @@ In the build _ chain directory, create a folder (for example, nodeKeyDir) to sto Specific reference [Deployment Tools(build_chain.sh)](./build_chain.md); ### Step 3. Configure key type and key index -Add the configuration items' enable _ hsm ',' hsm _ lib _ path ',' key _ index ', and' password 'to the node configuration file' config.ini ', and set whether to use the key in the password machine for node signature verification.。 +Add the configuration items 'enable_hsm', 'hsm_lib_path', 'key_index', and 'password' to the node configuration file 'config.ini', and set whether to use the key in the cipher machine for node signature verification. For example, configure node node0 to use the internal key of the cipher machine, and the signature verification key index is 43; ``` [security] @@ -110,7 +110,7 @@ View the number of nodes linked to node node0 as follows tail -f nodes/127.0.0.1/node0/log/* |grep -i "heartBeat,connected count" ``` -Normally, the connection information will be output continuously. 
From the output, it can be seen that node0 is connected to three other nodes.。 +Normally, the connection information will be output continuously. From the output, it can be seen that node0 is connected to three other nodes. ```bash info|2022-08-15 19:38:59.270112|[P2PService][Service][METRIC]heartBeat,connected count=3 info|2022-08-15 19:39:09.270210|[P2PService][Service][METRIC]heartBeat,connected count=3 diff --git a/3.x/en/docs/tutorial/compile_binary.md b/3.x/en/docs/tutorial/compile_binary.md index 58814b57d..389e482c3 100644 --- a/3.x/en/docs/tutorial/compile_binary.md +++ b/3.x/en/docs/tutorial/compile_binary.md @@ -6,23 +6,23 @@ Tags: "executable program" "development manual" "precompiled program" "source co ```eval_rst .. important:: - Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ + Related software and environment release notes! `Please check <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ ``` ```eval_rst .. note:: - FISCO BCOS supports compilation on Linux, macOS and Galaxy Kirin operating systems; When compiling binaries for Linux and Kirin systems, the gcc version must be no less than 10; When compiling binaries in macOS, clang version is required to be no less than 12.0 - - FISCO BCOS 3.x supports macOS compilation with Apple Silicon, same compilation steps as x86 _ 64。 - - FISCO BCOS 3.x compilation depends on the rust environment. Please install the rust environment before compiling the source code. - - Source code compilation is suitable for users with rich development experience. During the compilation process, you need to download the dependency library. Please keep the network smooth. 
- - FISCO BCOS compiles both the Air and Pro versions of the binary + - FISCO BCOS 3.x supports macOS compilation with Apple Silicon; the compilation steps are the same as x86_64. + - FISCO BCOS 3.x compilation depends on the rust environment; please install the rust environment before compiling the source code + - Source code compilation is suitable for users with rich development experience. Dependency libraries need to be downloaded during the compilation process; please keep the network unblocked + - FISCO BCOS compiles binaries for both the Air and Pro versions ``` -FSICO-BCOS uses a generic 'CMake' build system to generate platform-specific build files, which means that workflows are very similar no matter what operating system you use: +FISCO-BCOS uses a general-purpose 'CMake' build system to generate platform-specific build files, which means that the workflow is very similar no matter what operating system you use: 1. Install build tools and dependencies (platform dependent) 2. Cloning code from FISCO BCOS -3. Run cmake to generate the build file and compile it. +3. Run cmake to generate the build file and compile it ## 1. Installation Dependencies @@ -78,10 +78,10 @@ sudo yum update sudo yum install -y wget curl tar sudo yum install -y build-essential clang flex bison patch glibc-static glibc-devel libzstd-devel libmpc cpp -# Check the gcc version. If the gcc version is lower than 10, install a gcc version higher than 10. +# Check the gcc version. If the gcc version is lower than 10, install a gcc version higher than 10 gcc -v -# Check whether the cmake version is greater than or equal to 3.14. If not, install the cmake version that meets the requirements. +# Check whether the cmake version is greater than or equal to 3.14. If not, install the cmake version that meets the requirements cmake --version ``` @@ -103,14 +103,14 @@ cd FISCO-BCOS ## 3. 
Compile -**The compiled Air version binary is located in 'FISCO-BCOS/build/fisco-bcos-air/fisco-bcos-air'Path。** +**The compiled Air version binary is located in the path 'FISCO-BCOS/build/fisco-bcos-air/fisco-bcos-air'.** **Compile all binaries corresponding to the Pro version of the Rpc service, Gateway service, Executor service, and node service. The path is as follows:** -- Rpc Service: 'FISCO-BCOS/build/fisco-bcos-tars-service/RpcService/main/BcosRpcService` -- Gateway Service: 'FISCO-BCOS/build/fisco-bcos-tars-service/GatewayService/main/BcosGatewayService` -- Executor Service: 'FISCO-BCOS/build/fisco-bcos-tars-service/ExecutorService/main/BcosExecutorService` -- Blockchain Node Service: 'FISCO-BCOS/build/fisco-bcos-tars-service/NodeService/main/BcosNodeService`、`FISCO-BCOS/build/fisco-bcos-tars-service/NodeService/main/BcosMaxNodeService` +- Rpc service: 'FISCO-BCOS/build/fisco-bcos-tars-service/RpcService/main/BcosRpcService' +- Gateway service: 'FISCO-BCOS/build/fisco-bcos-tars-service/GatewayService/main/BcosGatewayService' +- Executor service: 'FISCO-BCOS/build/fisco-bcos-tars-service/ExecutorService/main/BcosExecutorService' +- Blockchain node services: 'FISCO-BCOS/build/fisco-bcos-tars-service/NodeService/main/BcosNodeService', 'FISCO-BCOS/build/fisco-bcos-tars-service/NodeService/main/BcosMaxNodeService' **If it is too slow to pull dependencies from GitHub during compilation, you can do the following to speed up:** @@ -130,7 +130,7 @@ EOF - **Modifying DNS and Host** -Modifying the DNS host or adding the direct IP address of GitHub to the host can improve the access speed.。You can refer to tools such as' SwitchHosts'.。 +Modifying the DNS host or adding GitHub's direct IP address to the hosts file can improve the access speed. You can refer to tools such as 'SwitchHosts'. - **Configure the vcpkg agent** @@ -150,10 +150,10 @@ cd ~/fisco/FISCO-BCOS mkdir -p build && cd build cmake -DBUILD_STATIC=ON .. 
|| cat *.log -# If vcpkg fails during dependency compilation, check the error log according to the error message. +# If vcpkg fails during dependency compilation, check the error log according to the error message # For network reasons, configure the vcpkg agent as prompted above -# Compile source code(High performance machines can be added-j4 Compile with 4-core acceleration) +# Compile source code (high-performance machines can add -j4 for 4-core accelerated compilation) make -j4 # generate tgz package @@ -178,10 +178,10 @@ cd ~/fisco/FISCO-BCOS mkdir -p build && cd build cmake3 -DBUILD_STATIC=ON .. || cat *.log -# If vcpkg fails during dependency compilation, check the error log according to the error message. +# If vcpkg fails during dependency compilation, check the error log according to the error message # For network reasons, configure the vcpkg agent as prompted above -# High performance machines can be added-j4 Compile with 4-core acceleration +# High-performance machines can add -j4 for 4-core accelerated compilation make -j4 # generate tgz package rm -rf fisco-bcos-tars-service/*.tgz && make tar @@ -203,10 +203,10 @@ cd ~/fisco/FISCO-BCOS mkdir -p build && cd build cmake3 -DBUILD_STATIC=ON .. || cat *.log -# If vcpkg fails during dependency compilation, check the error log according to the error message. +# If vcpkg fails during dependency compilation, check the error log according to the error message # For network reasons, configure the vcpkg agent as prompted above -# High performance machines can be added-j4 Compile with 4-core acceleration +# High-performance machines can add -j4 for 4-core accelerated compilation make -j4 # generate tgz package rm -rf fisco-bcos-tars-service/*.tgz && make tar @@ -222,13 +222,13 @@ cd ~/fisco/FISCO-BCOS mkdir -p build && cd build cmake -DBUILD_STATIC=ON ..|| cat *.log -# If vcpkg fails during dependency compilation, check the error log according to the error message. 
+# If vcpkg fails during dependency compilation, check the error log according to the error message # For network reasons, configure the vcpkg agent as prompted above # If an error occurs when you execute the preceding procedure, run the following command to specify SDKROOT #rm -rf CMakeCache.txt && export SDKROOT=$(xcrun --sdk macosx --show-sdk-path) && CC=/usr/bin/clang CXX=/usr/bin/clang++ cmake .. -# High performance machines can be added-j8 uses 8-core accelerated compilation +# High-performance machines can add -j8 for 8-core accelerated compilation make -j4 # generate tgz package @@ -245,10 +245,10 @@ cd ~/fisco/FISCO-BCOS mkdir -p build && cd build cmake3 -DBUILD_STATIC=ON .. || cat *.log -# If vcpkg fails during dependency compilation, check the error log according to the error message. +# If vcpkg fails during dependency compilation, check the error log according to the error message # For network reasons, configure the vcpkg agent as prompted above -# High performance machines can be added-j4 Compile with 4-core acceleration +# High-performance machines can add -j4 for 4-core accelerated compilation make -j4 # generate tgz package rm -rf fisco-bcos-tars-service/*.tgz && make tar @@ -257,10 +257,10 @@ rm -rf fisco-bcos-tars-service/*.tgz && make tar ### Compile Option Description -- -- FULLNODE compiles all nodes, enabled by default +- --FULLNODE compiles all nodes, enabled by default - -- WITH _ LIGHTNODE compiles light nodes, enabled by default - -- WITH _ TIKV Compile TIKV, enabled by default - -- WITH _ TARS _ SERVICES Compile TARS service, enabled by default -- -- WITH _ SM2 _ OPTIMIZE enables SM2 performance optimization, which is enabled by default +- --WITH_SM2_OPTIMIZE enables SM2 performance optimization, which is enabled by default - -- WITH _ CPPSDK Compile C++SDK, enabled by default -- -- WITH _ BENCHMARK compiles the performance test program, which is enabled by default \ No newline at end of file +- --WITH_BENCHMARK compiles 
the performance test program, which is enabled by default \ No newline at end of file diff --git a/3.x/en/docs/tutorial/docker.md b/3.x/en/docs/tutorial/docker.md index 5a9c37b59..e5a3bcd63 100644 --- a/3.x/en/docs/tutorial/docker.md +++ b/3.x/en/docs/tutorial/docker.md @@ -4,11 +4,11 @@ Tags: "Use Docker to Build a Blockchain" "Blockchain Tutorial" "" Docker "" ---- -[build_chain.sh](../manual/build_chain.md)The script provides'-d 'parameter, supports using docker to deploy blockchain。This chapter will demonstrate how to build a four-node blockchain in docker mode, and help users become familiar with the process of building a blockchain in docker through examples.。 +The [build_chain.sh](../manual/build_chain.md) script provides the '-d' parameter, which supports deploying the blockchain with docker. This chapter will demonstrate how to build a four-node blockchain in docker mode, and help users become familiar with the process of building a blockchain in docker through examples. ```eval_rst .. note:: - - Currently, it only supports the deployment of blockchain environment through docker in Linux environment. + - Currently, deploying the blockchain environment through docker is only supported in a Linux environment ``` ## 1. Installation Dependencies @@ -37,12 +37,12 @@ curl -#LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v3.6.0/buil ```eval_rst .. note:: - - If the build _ chain.sh script cannot be downloaded for a long time due to network problems, try 'curl-#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.6.0/build_chain.sh && chmod u+x build_chain.sh` + - If the build_chain.sh script cannot be downloaded for a long time due to network problems, please try `curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.6.0/build_chain.sh && chmod u+x build_chain.sh` ``` ## 3. 
Build a single group 4-node blockchain Run the following command in the FICO directory to generate a blockchain with a single group of 4 nodes。 -Please make sure that the '30300 ~ 30303, 20200 ~ 20203' ports of the machine are not occupied, you can also pass'-The p 'parameter specifies a different port。 +Please make sure that ports 30300-30303 and 20200-20203 of the machine are not occupied, or you can specify other ports through the '-p' parameter. ```bash bash build_chain.sh -D -l 127.0.0.1:4 -p 30300,20200 @@ -50,7 +50,7 @@ bash build_chain.sh -D -l 127.0.0.1:4 -p 30300,20200 ```eval_rst .. note:: - - Use the parameters of build _ chain.sh. For more information, see <.. / operation _ and _ maintenance / build _ chain.html > + - For the usage of each build_chain.sh parameter, refer to `here <../operation_and_maintenance/build_chain.html>`_ ``` Successful command execution will output 'All completed'。If an error occurs, check the error message in the 'nodes / build.log' file。 @@ -93,7 +93,7 @@ writing RSA key Run 'nodes / 127.0.0.1 / start _ all.sh' -On startup, looks to see if FISCO exists locally-The node image of the corresponding version of BCOS. If it does not exist, download it from docker hub.。 +During startup, the script checks whether a node image of the corresponding FISCO-BCOS version exists locally. 
If it does not exist, the node image is downloaded from docker hub. ```shell $ bash nodes/127.0.0.1/start_all.sh @@ -151,13 +151,13 @@ efae6adb1ebe fiscoorg/fiscobcos:v3.6.0 "/usr/local/bin/fisc…" 47 second a846dc34e23b fiscoorg/fiscobcos:v3.6.0 "/usr/local/bin/fisc…" 47 seconds ago Up 45 seconds roottestnodes127.0.0.1node1 de8b704d51a2 fiscoorg/fiscobcos:v3.6.0 "/usr/local/bin/fisc…" 47 seconds ago Up 45 seconds roottestnodes127.0.0.1node3 ``` -If the container status is UP, the node starts normally.。 +If the container status is UP, the node starts normally. -For more information about docker, see the docker documentation.: [https://docs.docker.com/](https://docs.docker.com/) +For more information about docker, see the docker documentation: [https://docs.docker.com/](https://docs.docker.com/) ## 6. View Nodes -You can check the log to confirm whether the number of p2p connections and consensus of the node are normal.。 +You can check the log to confirm whether the node's p2p connection count and consensus are normal. - View the number of nodes connected to node node0 @@ -165,7 +165,7 @@ You can check the log to confirm whether the number of p2p connections and conse tail -f nodes/127.0.0.1/node0/log/* |grep -i "heartBeat,connected count" ``` -Normally, the connection information will be output continuously. From the output, it can be seen that node0 is connected to three other nodes.。 +Normally, the connection information will be output continuously. 
From the output, it can be seen that node0 is connected to three other nodes. ```bash info|2023-06-15 12:28:47.014473|[P2PService][Service][METRIC]heartBeat,connected count=3 info|2023-06-15 12:28:57.014577|[P2PService][Service][METRIC]heartBeat,connected count=3 diff --git a/3.x/en/docs/tutorial/lightnode.md b/3.x/en/docs/tutorial/lightnode.md index 1e34618c7..c8c6962a4 100644 --- a/3.x/en/docs/tutorial/lightnode.md +++ b/3.x/en/docs/tutorial/lightnode.md @@ -6,30 +6,30 @@ Tags: "light node" "" build light node " ```eval_rst .. important:: - Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ + Related software and environment release notes! `Please check <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ ``` ```eval_rst .. important:: - The build _ chain.sh script goal of this deployment tool is to enable users to use FISCO BCOS light nodes as quickly as possible.。 + The goal of this deployment tool's build_chain.sh script is to enable users to use FISCO BCOS light nodes as quickly as possible. ``` FISCO BCOS provides' build _ chain.sh 'script to help users quickly build FISCO BCOS light nodes。 -This article only describes how to use build _ chain.sh to build a light node. If you want to query the full usage of build _ chian.sh, please see < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/tutorial/air/build_chain.html>` +This article only describes how to use build_chain.sh to build a light node. For the full usage of build_chain.sh, please see <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/tutorial/air/build_chain.html> ## 1. 
Compile light nodes -'Please check the compilation documentation < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compile_binary.html>` +Please check the compilation documentation: <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compile_binary.html> -When configuring cmake, increase the option-DWITH _ LIGHTNODE = ON, the light node program will be generated to build / lightnode / fisco-bcos-in the lightnode directory。 +When configuring cmake, add the option -DWITH_LIGHTNODE=ON, and the light node program will be generated in the build/lightnode/fisco-bcos-lightnode directory. ## 2. Build light nodes The 'build _ chain.sh' script is used for fast light nodes. The source code of the script is located in [github source code](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/tools/BcosAirBuilder/build_chain.sh), [gitee source code](https://gitee.com/FISCO-BCOS/FISCO-BCOS/blob/master/tools/BcosAirBuilder/build_chain.sh)。 ```shell -# Type bash build _ chain.sh-h shows script usage and parameters +# Type bash build_chain.sh -h to show script usage and parameters $ bash build_chain.sh Usage: -C [Optional] the command, support 'deploy' and 'expand' now, default is deploy @@ -72,8 +72,8 @@ Used to configure to enable FISCO BCOS light node mode, when the light node need Case: ```shell -# P2P services of four nodes occupy 30300 respectively-Port 30303 -# RPC services occupy 20200 respectively-Port 20203 +# P2P services of four nodes occupy ports 30300-30303 respectively +# RPC services occupy ports 20200-20203 respectively # Light nodes will be generated in nodes / lightnodes, and P2P and RPC services occupy ports 30304 and 20204 respectively $ bash build_chain.sh -p 30300,20200 -l 127.0.0.1:4 -e ./bin/fisco-bcos -L ./bin/fisco-bcos-lightnode $ bash build_chain.sh -p 30300,20200 -l 127.0.0.1:4 -L download_binary @@ -116,7 +116,7 @@ writing RSA key [INFO] All completed. Files in ./nodes ``` -## 3. Light node profile organization structure. +## 3. 
Light node profile organization structure The node configurations generated by build _ chain are as follows: @@ -155,11 +155,11 @@ bash start.sh lightnode start successfully pid=72369 ``` -You can start the light node, and then use the console or SDK to connect to the light node in a similar way to the full node. The experience is basically similar to that of the full node.。 +You can start the light node, and then use the console or SDK to connect to it in a similar way to a full node. The experience is basically the same as with a full node. ## 5. Light Node Limit -Light nodes do not store full ledger data, which means that most of the information is obtained from all nodes, and light nodes ensure that the information obtained from all nodes is trustworthy and cannot be tampered with.。 +Light nodes do not store full ledger data, which means that most of the information is obtained from full nodes; light nodes verify that the information obtained from full nodes is trustworthy and has not been tampered with. Light nodes support the following RPC interfaces: @@ -182,7 +182,7 @@ For specific usage of each RPC interface, please refer to the document [console ## 6. Expansion of light nodes Starting from FISCOBCOS version 3.3, you can use the build _ chain.sh script to scale up light nodes. The specific operation process is as follows: 1. Create a new folder config in the same directory as the build _ chain.sh build chain script; -2. Copy the root certificate folder ca under nodes to the config folder, "cp-r nodes/ca config ```; +2. Copy the root certificate folder ca under nodes to the config folder: 'cp -r nodes/ca config'; 3. Copy config.genesis, config.ini, and nodes.json from the existing lightnode folder to the config folder; ```shell cp nodes/lightnode/config.* config @@ -214,7 +214,7 @@ writing RSA key [INFO] output dir : nodes/lightnode1 [INFO] All completed. Files in nodes/lightnode1 ``` -7. 
Enter lightnode1, the light node generated by the expansion, and start bash start.sh, the light node generated by the expansion.
+7. Enter lightnode1, the light node directory generated by the expansion, and start the light node with `bash start.sh`

```shell
bash start.sh
diff --git a/3.x/en/docs/tutorial/max/deploy_max_by_buildchain.md b/3.x/en/docs/tutorial/max/deploy_max_by_buildchain.md
index 14d26bf7e..3f73de8b0 100644
--- a/3.x/en/docs/tutorial/max/deploy_max_by_buildchain.md
+++ b/3.x/en/docs/tutorial/max/deploy_max_by_buildchain.md
@@ -5,7 +5,7 @@ Tags: "build _ chain" "build version of blockchain network"
----

```eval_rst
- The deployment tool build _ chain script aims to enable users to deploy and use FISCO BCOS Pro / max version blockchain without tars as quickly as possible.
+ The deployment tool build_chain script aims to enable users to deploy and use the FISCO BCOS Pro/Max version blockchain without tars as quickly as possible.
```

## 1. Script function introduction

@@ -20,37 +20,37 @@ Script command, which supports' deploy '. The default value is' deploy':

### **'g 'option [**Optional**]**

-Set the group ID. If no group ID is set, the default value is group0.。
+Set the group ID. If no group ID is set, the default value is group0.

### **'I 'option [**Optional**]**

-Used to set the chain ID. If it is not set, the default value is chain0.。
+Used to set the chain ID. If it is not set, the default value is chain0.

### **'V 'Options [**Optional**]**

-Specifies the chain version (air, pro, max). The default value is air.。
+Specifies the chain version (air, pro, max). The default value is air.

### **'l 'Options [**Optional**]**

-The IP address of the generated node and the number of blockchain nodes deployed on the corresponding IP address. The parameter format is' ip1.:nodeNum1, ip2:nodeNum2`。
+The IP address of the generated node and the number of blockchain nodes deployed on the corresponding IP address. The parameter format is `ip1:nodeNum1, ip2:nodeNum2`.

The 'l' option for deploying two nodes on a machine with IP address' 192.168.0.1 'and four nodes on a machine with IP address' 127.0.0.1 'is as follows:

`192.168.0.1:2, 127.0.0.1:4`

### **'p 'option [**Optional**]**

-Specifies the start port for listening to P2P, RPC, tars, tikv, and monitor services. The default start ports are 30300, 20200, 40400, 2379, and 3901.。
+Specifies the start port for listening to P2P, RPC, tars, tikv, and monitor services. The default start ports are 30300, 20200, 40400, 2379, and 3901.

Specify 30300 as the starting port for P2P service listening;An example of the starting port on which 20200 listens for the RPC service is as follows:

```
-# Specify the P2P and RPC ports of the node. The remaining ports are the default values.
+# Specify the P2P and RPC ports of the node; the remaining ports use the default values
-p 30300,20200
```

### **'e 'option [**Optional**]**

-Specifies the path of the binary executable files of the existing local Pro / Max versions such as rpc, gateway, and nodef. If no path is specified, the latest version of the binary is pulled by default. The default address is in the binary folder. For example, the default address of the binary for the Pro version is BcosBuilder / pro / binary.。
+Specifies the path of the existing local Pro/Max version binary executables such as rpc, gateway, and nodef. If no path is specified, the latest version of the binary is pulled by default into the binary folder; for example, the default binary path for the Pro version is BcosBuilder/pro/binary.

### **'y 'Options [**Optional**]**

@@ -58,11 +58,11 @@ Specifies the binary download method of rpc, gateway, and nodef, git, or cdn. De

### **'v 'option [**Optional**]**

-Specifies the binary download version of rpc, gateway, and nodef. The default value is v3.4.0.。
+Specifies the binary download version of rpc, gateway, and nodef. The default value is v3.4.0.

### **'r 'Option [**Optional**]**

-Specifies the binary download path of the rpc, gateway, or nodef service. By default, the file is downloaded to the binary folder.。
+Specifies the binary download path of the rpc, gateway, or nodef service. By default, the files are downloaded to the binary folder.

### **'c 'option [**Optional**]**

@@ -81,7 +81,7 @@ Specifies the directory where the generated node artifacts are located. The defa

Specify whether to build a full-link state-secret blockchain. The state-secret blockchain has the following features:

- **Blockchain Ledger Uses State Secret Algorithm**: Using sm2 signature verification algorithm, sm3 hash algorithm and sm4 symmetric encryption and decryption algorithm。
-**The state-secret SSL connection is used between the SDK client and the node.**。
+- **State-secret SSL connection between the SDK client and the node**.
- **State-secret SSL connection between blockchain nodes**。

### **'h 'option [**Optional**]**

@@ -92,7 +92,7 @@ View Script Usage。

### 2.1 Installation Dependencies

-Deployment tool 'BcosBuilder' depends on 'python3, curl, docker, docker-compose ', depending on the operating system you are using, use the following command to install the dependency。
+The deployment tool 'BcosBuilder' depends on 'python3, curl, docker and docker-compose'. Depending on the operating system you are using, run the following command to install the dependencies.

**Install Ubuntu Dependencies(Version not less than Ubuntu18.04)**

@@ -142,7 +142,7 @@ Here are four examples of deployment chains

1. Specify the ip and port of the service and automatically generate the configuration file

-Execute the following command to deploy RPC services, gateway services, and node services.
+Execute the following command to deploy the RPC, gateway, and node services. The starting ports of P2P, RPC, tars and tikv are 30300, 20200, 40400 and 2379 respectively, and the ip addresses of the four institutions are 172.31.184.227, 172.30.93.111, 172.31.184.54 and 172.31.185.59; the latest binary is downloaded automatically:

```
bash build_chain.sh -p 30300,20200,40400,2379 -l 172.31.184.227:1,172.30.93.111:
@@ -151,7 +151,7 @@

2. Deployment of State Secret Chain

-Execute the following command through-s designated deployment state-secret chain, through-e specifies that a binary path already exists
+Execute the following command, specifying deployment of a state-secret chain with -s and the existing binary path with -e:

```
bash build_chain.sh -p 30300,20200,40400,2379 -l 172.31.184.227:1,172.30.93.111:1,172.31.184.54:1,172.31.185.59:1 -C deploy -V max -o generate -t all -e ./binary -s
diff --git a/3.x/en/docs/tutorial/max/expand_max_withoutTars.md b/3.x/en/docs/tutorial/max/expand_max_withoutTars.md
index 11ca9ddf2..3d31de277 100644
--- a/3.x/en/docs/tutorial/max/expand_max_withoutTars.md
+++ b/3.x/en/docs/tutorial/max/expand_max_withoutTars.md
@@ -15,11 +15,11 @@ Script command, which supports' deploy '. The default value is' deploy':

**'V 'Options [Optional]**

-Specifies the chain version (air, pro, max). The default value is air.。
+Specifies the chain version (air, pro, max). The default value is air.

**'c 'option [Optional]**

-Specifies the path of the service configuration file. This path must include config.toml. The default value is. / BcosBuilder / max / config.toml.。
+Specifies the path of the service configuration file. This path must include config.toml. The default value is ./BcosBuilder/max/config.toml.

**'o 'option [Optional]**

@@ -29,7 +29,7 @@ Specifies the directory where the generated node artifacts are located.
The defa ### 2.1 Setting RPC / Gateway Service Expansion Configuration -Use the build _ chain script to deploy a max service. Now you need to scale up the rpc / gateway. +Use the build _ chain script to deploy a max service. Now you need to scale up the rpc / gateway Its main modifications are as follows: @@ -40,7 +40,7 @@ Its main modifications are as follows: 4. Set the deploy _ ip, listen _ port, tars _ listen _ port service ip and corresponding port of [agency.rpc]; 5. Set the deploy _ ip, listen _ port, tars _ listen _ port service ip and corresponding port of [agency.gateway], and modify the peers (you need to write the IP: port of the deployed gateway, and other deployed gateways do not need to modify the corresponding nodes.json; -Note that the difference between tars _ listen _ port and the last deployed port must be greater than 6. For example, if the tars _ listen _ port of the last deployed node is 40402, the minimum value of tars _ listen _ port in this instance is 40408, and the minimum value of tars _ listen _ port in gateway is 40409. +Note that the difference between tars _ listen _ port and the last deployed port must be greater than 6. For example, if the tars _ listen _ port of the last deployed node is 40402, the minimum value of tars _ listen _ port in this instance is 40408, and the minimum value of tars _ listen _ port in gateway is 40409 ``` The configuration of the new RPC / Gateway service 'config.toml' is as follows: @@ -168,7 +168,7 @@ expand_service/172.30.93.111 ### 2.3 Deploy TiKV -Deploy tikv on the machine in the expansion service. For convenience of demonstration, use TiUP playground to start the TiKV node. The playground is only used for the test environment. For the production environment, please refer to the official TiKV document to deploy the cluster.; +Deploy tikv on the machine in the expansion service. For convenience of demonstration, use TiUP playground to start the TiKV node. 
The playground is only used for the test environment. For the production environment, please refer to the official TiKV document to deploy the cluster; **Download and install tiup** @@ -232,7 +232,7 @@ Specific steps are as follows: 3. [group] genesis _ config _ path, which specifies the path of the genesis block configuration file of the existing node; 4. [agency.group] option in [[agency]], modify node _ name, tars _ listen _ port; -Note that the tars _ listen _ port in [[agency.group.node]] requires 6 ports. Therefore, the difference between the tars _ listen _ port port and the last deployed port must be greater than 6. For example, if the tars _ listen _ port of the last deployed node is 40402, the minimum value of the tars _ listen _ port is 40408.。 +Note that the tars _ listen _ port in [[agency.group.node]] requires 6 ports. Therefore, the difference between the tars _ listen _ port port and the last deployed port must be greater than 6. For example, if the tars _ listen _ port of the last deployed node is 40402, the minimum value of the tars _ listen _ port is 40408。 ``` Configure 'config.toml' for scaling (for example, for scaling nodes of the rpc / gateway service that has been scaled out) as follows: @@ -368,11 +368,11 @@ expand_node/172.30.93.111/ ### 3.3 Add the new expansion node to the group -Place the generated product on the corresponding ip machine. Before starting the node, you need to turn on tikv as in 2.3.; +Place the generated product on the corresponding ip machine. Before starting the node, you need to turn on tikv as in 2.3; ```eval_rst .. 
note:: - When you scale out a new node, first add the node as an observation node, and only when the block height of the scale-out node is the same as the highest block height of the existing node on the chain, can it be added as a consensus node.。 + When you scale out a new node, first add the node as an observation node, and only when the block height of the scale-out node is the same as the highest block height of the existing node on the chain, can it be added as a consensus node。 ``` **Step 1: Obtain the NodeID of the scaling node** diff --git a/3.x/en/docs/tutorial/max/index.md b/3.x/en/docs/tutorial/max/index.md index 3e8464e2e..544c0fa19 100644 --- a/3.x/en/docs/tutorial/max/index.md +++ b/3.x/en/docs/tutorial/max/index.md @@ -7,13 +7,13 @@ Tags: "Pro FISCO BCOS" "" Expansion "" Configuration "" Deployment Tools "" ```eval_rst .. important:: - Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_ + Related Software and Environment Release Notes!'Please check`_ ``` ```eval_rst .. 
note:: - Max version FISCO BCOS is designed to provide mass storage services, high-performance scalable execution modules, and highly available failure recovery mechanisms.。 - Max version FISCO BCOS nodes use distributed storage TiKV, execution modules are independent into services, storage and execution are scalable, and support automated master and standby recovery.。 + Max version FISCO BCOS is designed to provide mass storage services, high-performance scalable execution modules, and highly available failure recovery mechanisms。 + Max version FISCO BCOS nodes use distributed storage TiKV, execution modules are independent into services, storage and execution are scalable, and support automated master and standby recovery。 ``` ```eval_rst diff --git a/3.x/en/docs/tutorial/max/installation.md b/3.x/en/docs/tutorial/max/installation.md index 1e1f51e1f..36df41bb8 100644 --- a/3.x/en/docs/tutorial/max/installation.md +++ b/3.x/en/docs/tutorial/max/installation.md @@ -5,21 +5,21 @@ Tags: "Max version of the blockchain network" "deployment" ------------ In order to be able to support a large number of transactions on the chain scene, v3.x launched the Max version FISCO BCOS, Max version FISCO BCOS is designed to provide**Mass storage services, high-performance and scalable execution modules**、**Highly available fault recovery mechanism**。 -Max version FISCO BCOS nodes use distributed storage TiKV, execution modules are independent into services, storage and execution are scalable, and support automated master and standby recovery.。 +Max version FISCO BCOS nodes use distributed storage TiKV, execution modules are independent into services, storage and execution are scalable, and support automated master and standby recovery。 -This chapter builds the Max version of the single-node FISCO BCOS alliance chain on a single machine to help users master the deployment process of the Max version of the FISCO BCOS blockchain.。Please refer to 
[here](../../quick_start/hardware_requirements.md) and operate on supported **hardware and platforms**.

```eval_rst
.. note::
- - Max version FISCO BCOS uses the "BcosBuilder / max" tool for chain building and expansion. For more information about this tool, see 'BcosBuilder <. / max _ builder.html >' _
- - FISCO BCOS 3.x builds and manages microservices based on tars. Before building the Max version of FISCO BCOS, you need to install the tars service. This chapter describes the process of building the docker version of tars service. For more information about deploying and building tars, please refer to 'Here < https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/installation/README.md>`_
- - In this chapter, you can build the TARS service based on Docker. Make sure that the system user has the Docker permission.
- - To build a Max version of FISCO BCOS, you must first deploy a TiKV cluster. For details about how to deploy a TiKV cluster, see 'Here < https://tikv.org/docs/5.1/deploy/install/install/>`_
+ - Max version FISCO BCOS uses the "BcosBuilder/max" tool for chain building and expansion. For details about this tool, see `BcosBuilder <./max_builder.html>`_
+ - FISCO BCOS 3.x builds and manages microservices based on tars. Before building the Max version of FISCO BCOS, you need to install the tars service. For details on deploying and building tars, please refer to `here <https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/installation/README.md>`_
+ - This chapter builds the tars service based on Docker. Make sure that the system user has the Docker permission
+ - To build the Max version of FISCO BCOS, you must first deploy a TiKV cluster. For details about how to deploy a TiKV cluster, see `here <https://tikv.org/docs/5.1/deploy/install/install/>`_
```

## 1. Installation Dependencies

-Deployment tool 'BcosBuilder' depends on 'python3, curl, docker, docker-compose ', depending on the operating system you are using, use the following command to install the dependency。
+The deployment tool 'BcosBuilder' depends on 'python3, curl, docker and docker-compose'. Depending on the operating system you are using, run the following command to install the dependencies.

**Install Ubuntu Dependencies(Version not less than Ubuntu18.04)**

@@ -44,8 +44,8 @@ brew install curl docker docker-compose python3 wget

```eval_rst
.. note::
- - Deployment tool "BcosBuilder" configuration and use please refer to 'here <. / max _ builder.html >' _
- - If the network speed of "BcosBuilder" downloaded from github is too slow, try: curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.6.0/BcosBuilder.tgz && tar -xvf BcosBuilder.tgz
+ - For configuration and use of the deployment tool "BcosBuilder", please refer to `here <./max_builder.html>`_
+ - If downloading the deployment tool "BcosBuilder" from github is too slow, please try: curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.6.0/BcosBuilder.tgz && tar -xvf BcosBuilder.tgz
```

```shell
@@ -64,16 +64,16 @@ cd BcosBuilder && pip3 install -r requirements.txt

## 3. Install, start, and configure the tars service

-**Please refer to [here] for the installation, startup and configuration of tars service.(../pro/installation.html#id2).**
+**Please refer to [here](../pro/installation.html#id2) for the installation, startup and configuration of the tars service.**

## 4. Deploy TiKV

```eval_rst
.. note::
- - For the convenience of demonstration, use TiUP playground to start TiKV nodes. The playground is only used in the test environment. For the production environment, please refer to the official TiKV documentation to deploy the cluster.
- - It is recommended to modify the 'coprocessor.region of TiKV-split-size 'is 256MB, modify' coprocessor.enable-region-bucket 'to' true 'to reduce the time it takes to submit transactions and receipts - - It is recommended to modify 'raftstore.raft of TiKV-entry-max-size 'is 64MB to avoid possible raft entry out-of-limit issues - - It is recommended to turn on the compression function of TiKV to reduce disk occupation + - For ease of presentation, use TiUP playground to start TiKV nodes. The playground is only used in the test environment. For the production environment, please refer to the official TiKV documentation to deploy the cluster + - We recommend that you modify the 'coprocessor.region-split-size' of TiKV to 256MB and 'coprocessor.enable-region-bucket' to 'true' to reduce the time taken to submit transactions and receipts + - It is recommended to modify the 'raftstore.raft-entry-max-size' of TiKV to 64MB to avoid the problem that the raft entry may exceed the limit + -It is recommended to turn on the compression function of TiKV to reduce disk occupation ``` **Download and install tiup** @@ -101,21 +101,21 @@ PD client endpoints: [172.25.0.3:2379] Max version FISCO BCOS includes RPC service, Gateway service, blockchain node service 'BcosMaxNodeService' and blockchain execution service 'BcosExecutorService': -- RPC service: It is responsible for receiving client requests and forwarding the requests to nodes for processing. RPC services can be scaled horizontally, and one RPC service can access multiple blockchain node services.; -- Gateway service: It is responsible for network communication between blockchain nodes across institutions. Gateway services can be scaled horizontally. 
One Gateway service can access multiple blockchain node services.;
-Blockchain node service 'BcosMaxNodeService': provides services related to blockchain scheduling, including block packaging, consensus, execution scheduling, and submission scheduling.;
+- RPC service: responsible for receiving client requests and forwarding them to nodes for processing. The RPC service can be scaled horizontally, and one RPC service can access multiple blockchain node services;
+- Gateway service: responsible for network communication between blockchain nodes across institutions. The Gateway service is horizontally scalable, and one Gateway service can access multiple blockchain node services;
+- Blockchain node service 'BcosMaxNodeService': provides services related to blockchain scheduling, including block packaging, consensus, execution scheduling, and submission scheduling. The node service obtains network communication functions by accessing the RPC and Gateway services;
- Blockchain execution service 'BcosExecutorService': responsible for block execution, scalable horizontally and dynamically。

-This chapter takes deploying a single-node blockchain on a single machine as an example to introduce the Max version FISCO BCOS build deployment process.。
+This chapter takes deploying a single-node blockchain on a single machine as an example to introduce the Max version FISCO BCOS deployment process.

```eval_rst
.. note::
- - Before deploying the Max version blockchain system, please refer to 'here <.. / pro / installation.html#id2 > '_ Set up the tars service and apply for a token
- - If you do not apply for a token, refer to [3.2 Configuring Tars Service] to apply for a token.
- - If you forget to access the token of the tars service, you can use the [admin] of the tars web management platform.-> [user center]-> [token management] to obtain the token list
- - Before deploying the Max version of the blockchain, make sure that your tars service is in the startup state
- - The application token must be configured to the "tars.tars _ token" configuration option of the "config.toml" configuration file before all subsequent deployment steps can be performed.
- - Before deploying the Max version blockchain, make sure that tikv is deployed by reference, and ensure that each Max node corresponds to a tikv service. Multiple Max nodes cannot share the tikv service.
+ - Before deploying the Max version blockchain system, please refer to `here <../pro/installation.html#id2>`_ to set up the tars service and apply for a token
+ - If you do not apply for a token, refer to [3.2 Configuring Tars Service] to apply for a token
+ - If you forget the access token of the tars service, you can obtain the token list via [admin] -> [user center] -> [token management] on the tars web management platform
+ - Before deploying the Max version blockchain, make sure that your tars service is started
+ - The requested token must be configured in the "tars.tars_token" option of the "config.toml" configuration file before any subsequent deployment steps can be performed
+ - Before deploying the Max version blockchain, make sure that tikv has been deployed as described above, and that each Max node has its own tikv service; multiple Max nodes cannot share one tikv service
```

### 5.1 Download Binary

@@ -124,26 +124,26 @@ Before building the Max version of FISCO BCOS, you need to download the binary p

```eval_rst
..
note:: - - You can use the python3 build _ chain.py-h "View deployment script usage - - The binary is downloaded to the "binary" directory by default - - If downloading the binary is slow, try: ``python3 build_chain.py download_binary -t cdn`` + - You can view the deployment script usage through "python3 build _ chain.py -h" + - binary is downloaded to the "binary" directory by default + - If downloading binary is slow, please try: ``python3 build_chain.py download_binary -t cdn`` ``` ```shell # Enter the operation directory cd ~/fisco/BcosBuilder/max -# Run the build _ chain.py script to download the binary. The binary package is downloaded to the binary directory by default. +# Run the build _ chain.py script to download the binary. The binary package is downloaded to the binary directory by default python3 build_chain.py download_binary ``` ### 5.2 Deploying RPC Services -Similar to the Pro version FISCO BCOS, the Max version blockchain system also includes RPC services, which can be deployed and built through the chain building script 'BcosBuilder'. The sample configuration file 'config' is provided in the 'BcosBuilder / max / conf' directory.-deploy-example.toml ', which can be deployed on the' 172.25.0.3 'machine of the organization' agencyA '. The listening port occupied by RPC is' 20200'。 +Similar to Pro version FISCO BCOS, the Max version blockchain system also includes RPC services, which can be deployed and built through the chain building script 'BcosBuilder'. The 'BcosBuilder / max / conf' directory provides a sample configuration file 'config-deploy-example.toml', which can be deployed on the '172.25.0.3' machine of the organization 'agencyA'。 ```eval_rst .. note:: - Make sure that the default port 20200 is not occupied. If it is occupied, manually modify the configuration "config.toml" to configure ports that are not occupied. + Make sure that the default port 20200 is not occupied. 
If it is occupied, manually modify the configuration "config.toml" to configure ports that are not occupied ``` ```shell @@ -208,7 +208,7 @@ generated/rpc/chain0 ├── 172.25.0.3 │   ├── agencyABcosRpcService # RPC Service Directory for Institution A │   │   ├── config.ini.tmp # Configuration file for RPC service of institution A -│   │   ├── sdk # The SDK certificate directory. The SDK client can copy certificates from this directory to connect to the RPC service. +│   │   ├── sdk # The SDK certificate directory. The SDK client can copy certificates from this directory to connect to the RPC service │   │   │   ├── ca.crt │   │   │   ├── cert.cnf │   │   │   ├── sdk.crt @@ -218,25 +218,25 @@ generated/rpc/chain0 │   │   ├── cert.cnf │   │   ├── ssl.crt │   │   └── ssl.key -└── ca # The CA certificate directory, which mainly includes the CA certificate and the CA private key. Keep the CA certificate and the CA private key properly. +└── ca # The CA certificate directory, which mainly includes the CA certificate and the CA private key. Keep the CA certificate and the CA private key properly ├── ca.crt ├── ca.key ├── ca.srl └── cert.cnf ``` -After the RPC service is started successfully, you can view the service list 'agencyABcosRpcService' on the tars web management platform, and each service is in the 'active' state. +After the RPC service is started successfully, you can view the service list 'agencyABcosRpcService' on the tars web management platform, and each service is in the 'active' state ```eval_rst .. 
note:: - - If you forget to access the token of the tars service, you can use the [admin] of the tars web management platform.-> [user center]-> [token management] to obtain the token list - - **Keep the RPC service CA certificate and CA private key generated during service deployment for SDK certificate application, RPC service expansion, and other operations.** + - If you forget to access the token of the tars service, you can use the [admin] of the tars web management platform ->User Center ->[token management] obtaining the token list + - **Keep the RPC service CA certificate and CA private key generated during service deployment for SDK certificate application, RPC service expansion, and other operations** ``` ### 5.3 Deploying Gateway Services -After the RPC service is deployed, you need to deploy the Gateway service to establish network connections between organizations.。Run the following command in the 'BcosBuilder / max' directory to deploy and start the Gateway service of the two organizations. The corresponding Gateway service name is' agencyABcosGatewayService ', the ip address is' 172.25.0.3 ', and the occupied ports are' 30300'(Before performing this operation, please make sure that the '30300' port of the machine is not occupied)。 +After the RPC service is deployed, you need to deploy the Gateway service to establish network connections between organizations。Run the following command in the 'BcosBuilder / max' directory to deploy and start the Gateway service of the two organizations. The corresponding Gateway service name is' agencyABcosGatewayService ', the ip address is' 172.25.0.3 ', and the occupied ports are' 30300'(Before performing this operation, please make sure that the '30300' port of the machine is not occupied)。 ```shell # Enter the operation directory @@ -288,7 +288,7 @@ generated/gateway/chain0 │   │   ├── cert.cnf │   │   ├── ssl.crt │   │   └── ssl.key -└── ca # Configure the root certificate of the Gateway service. 
Save the root certificate and the root certificate private key. +└── ca # Configure the root certificate of the Gateway service. Save the root certificate and the root certificate private key ├── ca.crt ├── ca.key ├── ca.srl @@ -297,14 +297,14 @@ generated/gateway/chain0 ```eval_rst .. note:: - - **Keep the RPC service CA certificate and CA private key generated during service deployment for operations such as gateway service expansion.** + - **Keep the RPC service CA certificate and CA private key generated during service deployment for operations such as gateway service expansion** ``` -After the Gateway service is successfully started, you can view the service list 'agencyABcosGatewayService' on the tars web management platform, and each service is in the 'active' state. +After the Gateway service is successfully started, you can view the service list 'agencyABcosGatewayService' on the tars web management platform, and each service is in the 'active' state ### 5.4 Deploying Blockchain Node Services -After the RPC service and the Gateway service are deployed, you can deploy the blockchain node service.。Run the following command in the 'BcosBuilder / max' directory to deploy and start a single-node blockchain service. The corresponding service names are 'agencyAgroup0node0BcosMaxNodeService' and 'agencyAgroup0node0BcosExecutorService'. The chain ID is' chain0 'and the group ID is' group0'。 +After the RPC service and the Gateway service are deployed, you can deploy the blockchain node service。Run the following command in the 'BcosBuilder / max' directory to deploy and start a single-node blockchain service. The corresponding service names are 'agencyAgroup0node0BcosMaxNodeService' and 'agencyAgroup0node0BcosExecutorService'. The chain ID is' chain0 'and the group ID is' group0'。 ```shell # Enter the operation directory @@ -402,22 +402,22 @@ generated/chain0 ```eval_rst .. 
note:: - - It is recommended to deploy the blockchain node service after the RPC and Gateway services are deployed. - - Before deploying a Max version blockchain node, make sure that tikv is deployed and started + - It is recommended to deploy the blockchain node service after deploying RPC and Gateway services + - Before deploying a Max version blockchain node, make sure tikv is deployed and started ``` -After the blockchain node service is successfully started, you can view the service lists' agencyAgroup0node0BcosMaxNodeService 'and' agencyAgroup0node0BcosExecutorService 'on the tars web page management platform, and each service is in the' active 'status.。 +After the blockchain node service is successfully started, you can view the service lists' agencyAgroup0node0BcosMaxNodeService 'and' agencyAgroup0node0BcosExecutorService 'on the tars web page management platform, and each service is in the' active 'status。 ## 6. Configure and use the console -The console is also suitable for Air / Pro / Max versions of FISCO BCOS blockchain, and the experience is completely consistent。After the Max version blockchain experience environment is built, you can configure and use the console to send transactions to the Max version blockchain.。 +The console is also suitable for Air / Pro / Max versions of FISCO BCOS blockchain, and the experience is completely consistent。After the Max version blockchain experience environment is built, you can configure and use the console to send transactions to the Max version blockchain。 ### 6.1 Installation Dependencies ```eval_rst .. note:: - - For console configuration methods and commands, please refer to 'here <.. /.. 
/ operation _ and _ maintenance / console / console _ config.html >' _
+ - For console configuration methods and commands, please refer to `here <../../operation_and_maintenance/console/console_config.html>`_
```
Before using the console, you need to install the Java environment:
@@ -439,7 +439,7 @@
cd ~/fisco && curl -LO https://github.com/FISCO-BCOS/console/releases/download/v
```
```eval_rst
.. note::
- - If you cannot download for a long time due to network problems, try 'cd ~ / fisco & & curl-#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh`
+ - If you cannot download for a long time due to network problems, please try `cd ~/fisco && curl -#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh`
```
**Step 2: Configure the Console**
@@ -453,10 +453,10 @@ If the RPC service does not use the default port, replace 20200 in the file with
cp -n console/conf/config-example.toml console/conf/config.toml
```
-- Configure Console Certificates
+- Configure console certificates
```shell
-# The command find.-name sdk Find all SDK certificate paths
+# All SDK certificate paths can be found with the command: find . -name sdk
cp -r ~/fisco/BcosBuilder/max/generated/rpc/chain0/agencyBBcosRpcService/172.25.0.3/sdk/* console/conf
```
diff --git a/3.x/en/docs/tutorial/max/max_builder.md b/3.x/en/docs/tutorial/max/max_builder.md
index ec3a972f4..0dd8e32a4 100644
--- a/3.x/en/docs/tutorial/max/max_builder.md
+++ b/3.x/en/docs/tutorial/max/max_builder.md
@@ -9,15 +9,15 @@ Tags: "Max version of the blockchain network" "deployment tool"
The deployment tool BcosBuilder aims to enable users to deploy and use the FISCO BCOS Pro / max version of the blockchain as quickly as possible.
Its functions include: deploying / starting / shutting down / updating / scaling RPC services, Gateway services, and blockchain node services.
```
-FISCO BCOS provides the 'BcosBuilder' tool to help users quickly deploy, start, stop, update and scale the FISCO BCOS Max version of the blockchain consortium chain, which can be downloaded directly from the release tags of FISCO BCOS.。
+FISCO BCOS provides the `BcosBuilder` tool to help users quickly deploy, start, stop, update, and scale a FISCO BCOS Max version consortium blockchain; it can be downloaded directly from the FISCO BCOS release tags.
-'BcosBuilder 'provides some configuration templates in the' max / conf 'directory to help users quickly complete the deployment and expansion of the Max version blockchain.。This chapter introduces the configuration items of 'BcosBuilder' in detail from three perspectives: tars service configuration items, blockchain deployment configuration items, and blockchain expansion configuration items.。
+`BcosBuilder` provides configuration templates in the `max/conf` directory to help users quickly complete deployment and expansion of a Max version blockchain. This chapter introduces the `BcosBuilder` configuration items in detail from three perspectives: tars service configuration items, blockchain deployment configuration items, and blockchain expansion configuration items.
## 1 tars service configuration item
-- `[tars].tars_url`: The URL for accessing the tars web console. The default value is' http '.://127.0.0.1:3000`。
-- `[tars].tars_token`: Access the token of the tars service through the [admin] of the tars web console.-> [user center]-> [token management] for token application and query。
-- `[tars].tars_pkg_dir`: Path to place the Max version binary package. The default path is binary /.
If this configuration item is configured, the FISCO BCOS Pro version binary is obtained from the specified directory by default for service deployment, expansion, and other operations.。
+- `[tars].tars_url`: The URL for accessing the tars web console. The default value is `http://127.0.0.1:3000`.
+- `[tars].tars_token`: The token for accessing the tars service; apply for and query tokens on the tars web console via [admin] -> [user center] -> [token management].
+- `[tars].tars_pkg_dir`: Path to place the Max version binary package. The default path is `binary/`. If this configuration item is configured, the FISCO BCOS Pro version binary is obtained from the specified directory by default for service deployment, expansion, and other operations.
The following is an example of the tars service configuration items:
@@ -30,17 +30,17 @@ tars_pkg_dir = "binary/"
## 2 Deployment configuration of blockchain service
-Configuration items related to blockchain service deployment mainly include chain configuration items, RPC / Gateway service configuration items, and blockchain node service configuration items. The configuration template is located in the 'conf / config' of 'BcosBuilder / max'-deploy-example.toml 'under the path。
+Configuration items related to blockchain service deployment mainly include chain configuration items, RPC / Gateway service configuration items, and blockchain node service configuration items. The configuration template is located at the `conf/config-deploy-example.toml` path of `BcosBuilder/max`.
### 2.1 Chain Configuration Item
Chain configuration items are located in the `[chain]` section and mainly include:
-- `[chain].chain_id`: The ID of the chain to which the blockchain service belongs. The default value is' chain0 '.**Cannot include all special characters except letters and numbers**;
-- `[chain].rpc_sm_ssl`: The type of SSL connection used between the RPC service and the SDK client.
If the value is set to 'false', RSA encryption is used.;If it is set to 'true', it indicates that the state-secret SSL connection is used. The default value is' false '.;
-- `[chain].gateway_sm_ssl`: SSL connection type between Gateway services. Set to 'false' to use RSA encryption;Set to 'true' to indicate that a state-secret SSL connection is used. The default value is' false '.;
-- `[chain].rpc_ca_cert_path`: The path of the CA certificate of the RPC service. If a complete CA certificate and CA private key are available in this path, the 'BcosBuilder' deployment tool generates the RPC service SSL connection certificate based on the CA certificate in this path.;Otherwise, the 'BcosBuilder' deployment tool generates a CA certificate and issues an SSL connection certificate for the RPC service based on the generated CA certificate;
-- `[chain].gateway_ca_cert_path`: The CA certificate path of the Gateway service. If there is a complete CA certificate and CA private key in this path, the 'BcosBuilder' deployment tool generates the Gateway service SSL connection certificate based on the CA certificate in this path.;Otherwise, the 'BcosBuilder' deployment tool generates a CA certificate and issues an SSL connection certificate for the Gateway service based on the generated CA certificate;
+- `[chain].chain_id`: The ID of the chain to which the blockchain service belongs. The default value is `chain0`. **Only letters and digits are allowed; special characters are not**;
+- `[chain].rpc_sm_ssl`: The type of SSL connection used between the RPC service and the SDK client. If set to `false`, RSA SSL is used; if set to `true`, the SM (national crypto) SSL connection is used. The default value is `false`;
+- `[chain].gateway_sm_ssl`: The type of SSL connection between Gateway services. Set to `false` to use RSA SSL; set to `true` to use the SM SSL connection.
The default value is `false`;
+- `[chain].rpc_ca_cert_path`: The path of the RPC service CA certificate. If a complete CA certificate and CA private key exist in this path, the `BcosBuilder` deployment tool generates the RPC service SSL connection certificate from that CA certificate; otherwise, `BcosBuilder` generates a CA certificate and issues the RPC service SSL connection certificate from the generated CA certificate;
+- `[chain].gateway_ca_cert_path`: The path of the Gateway service CA certificate. If a complete CA certificate and CA private key exist in this path, the `BcosBuilder` deployment tool generates the Gateway service SSL connection certificate from that CA certificate; otherwise, `BcosBuilder` generates a CA certificate and issues the Gateway service SSL connection certificate from the generated CA certificate;
The chain ID is `chain0`. The configuration items for RSA encrypted connections between RPC and SDK and between Gateway services are as follows:
@@ -62,23 +62,23 @@ gateway_sm_ssl=false
The organization service configuration items are located in `[[agency]]`, mainly covering the organization's disk encryption configuration and the etcd cluster used to provide active/standby failover services, as follows:
- `[[agency]].name`: Name of the organization;
-- `[[agency]].failover_cluster_url`: The access address of the etcd cluster used to provide automated master / standby services. You can reuse the 'tikv' pd cluster.**Ensure that RPC / Gateway / blockchain nodes within the organization can access the etcd cluster**;
+- `[[agency]].failover_cluster_url`: The access address of the etcd cluster used to provide automated master / standby services.
You can reuse the `tikv` pd cluster. **Ensure that the RPC / Gateway / blockchain nodes within the organization can access the etcd cluster**;
- `[[agency]].enable_storage_security`: Whether disk encryption is enabled for the organization's RPC / Gateway services;
- `[[agency]].key_center_url`: If the disk encryption service is enabled, configure the URL of the `Key Manager` through this configuration item;
-- `[[agency]].cipher_data_key`: If the disk encryption service is enabled, configure the encryption key through this configuration item.
+- `[[agency]].cipher_data_key`: If the disk encryption service is enabled, configure the encryption key through this configuration item
### 2.3 RPC Service Configuration Item
```eval_rst
.. note::
- - When deploying an RPC service to multiple machines, make sure that the tarsnode service is installed on these machines. For details about how to deploy a tarsnode, see < https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/installation/node.md>`_
+ - When deploying an RPC service to multiple machines, make sure that the tarsnode service is installed on these machines. For tarsnode deployment, please refer to `here <https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/installation/node.md>`_
```
RPC service configuration items are located in `[[agency]].[agency.rpc]`. An organization can deploy one RPC service, and a chain can contain multiple organizations. The main configuration items include:
-- `[[agency]].[agency.rpc].deploy_ip`: The deployment IP address of the RPC service. If multiple IP addresses are configured, the RPC service is deployed on multiple machines to achieve the goal of parallel expansion.。
+- `[[agency]].[agency.rpc].deploy_ip`: The deployment IP address of the RPC service. If multiple IP addresses are configured, the RPC service is deployed on multiple machines for horizontal scaling.
- `[[agency]].[agency.rpc].listen_ip`: The listening IP address of the RPC service. The default value is `0.0.0.0`.
-- `[[agency]].[agency.rpc].listen_port`: The listening port of the RPC service.
The default value is 20200.。
+- `[[agency]].[agency.rpc].listen_port`: The listening port of the RPC service. The default value is `20200`.
- `[[agency]].[agency.rpc].thread_count`: Number of worker threads in the RPC service process. The default is `4`.
@@ -95,7 +95,7 @@ enable_storage_security = false
# cipher_data_key =
[agency.rpc]
- # You can deploy multiple IP addresses. You must ensure that the tarsnode service is installed on the machine corresponding to each IP address.
+ # You can deploy multiple IP addresses. You must ensure that the tarsnode service is installed on the machine corresponding to each IP address
deploy_ip=["172.25.0.3"]
# RPC Service Listening IP
listen_ip="0.0.0.0"
@@ -109,7 +109,7 @@ enable_storage_security = false
Gateway service configuration items are located in `[[agency]].[agency.gateway]`. An organization can deploy one Gateway service, and a chain can deploy multiple Gateway services. The main configuration items include:
-- `[[agency]].[agency.gateway].deploy_ip`: The deployment IP address of the Gateway service. If multiple IP addresses are configured, the Gateway service is deployed on multiple machines to achieve the goal of parallel expansion.。
+- `[[agency]].[agency.gateway].deploy_ip`: The deployment IP address of the Gateway service. If multiple IP addresses are configured, the Gateway service is deployed on multiple machines for horizontal scaling.
- `[[agency]].[agency.gateway].listen_ip`: The listening IP address of the Gateway service. The default value is `0.0.0.0`.
- `[[agency]].[agency.gateway].listen_port`: The listening port of the Gateway service.
The default value is `30300`.
- `[[agency]].[agency.gateway].peers`: Connection information for all Gateway services.
@@ -147,14 +147,14 @@ Each blockchain node service in the blockchain of FISCO BCOS Pro belongs to a gr
The group configuration also includes configurations related to the Genesis block:
-- `[[group]].leader_period`: The number of blocks that each leader can package consecutively. The default value is 5.;
+- `[[group]].leader_period`: The number of blocks that each leader packages consecutively. The default value is `5`;
- `[[group]].block_tx_count_limit`: The maximum number of transactions that can be included in each block. The default is `1000`;
-- `[[group]].consensus_type`: Consensus algorithm type. Currently, only the 'pbft' consensus algorithm is supported.;
-- `[[group]].gas_limit`: The maximum amount of gas consumed during the run of each transaction. The default value is 300000000.;
-- `[[group]].vm_type`: The type of virtual machine running on a blockchain node. Currently, two types are supported: 'evm' and 'wasm'. A group can run only one type of virtual machine.
Some nodes cannot run EVM virtual machines and some nodes cannot run WASM virtual machines.;
+- `[[group]].consensus_type`: Consensus algorithm type. Currently, only the `pbft` consensus algorithm is supported;
+- `[[group]].gas_limit`: The maximum amount of gas that each transaction may consume during execution. The default value is `300000000`;
+- `[[group]].vm_type`: The type of virtual machine run by the blockchain nodes. Currently, two types are supported: `evm` and `wasm`. A group can run only one type of virtual machine; it cannot mix nodes running the EVM virtual machine with nodes running the WASM virtual machine;
- `[[group]].auth_check`: Whether to enable the permission governance mode; please refer to the [Permission Governance User Guide](../../develop/committee_usage.md);
- `[[group]].init_auth_address`: When permission governance is enabled, specifies the account address of the initial governance committee. For permission usage documents, please refer to the [Permission Governance Usage Guide](../../develop/committee_usage.md);
-- `[[group]].compatibility_version`: The data-compatible version number. The default value is 3.0.0. You can upgrade the data-compatible version when running the 'setSystemConfigByKey' command in the console.。
+- `[[group]].compatibility_version`: The data compatibility version number. The default value is `3.0.0`. You can upgrade the data compatibility version by running the `setSystemConfigByKey` command in the console.
```ini
[[group]]
@@ -186,13 +186,13 @@ compatibility_version="3.0.0"
### 2.6 Blockchain Node Service Configuration Item: Deployment Configuration
The blockchain node service deployment configuration items are located in `[[agency]].[[agency.group]].[[agency.group.node]]`, as follows:
-- `[[agency]].[[agency.group]].[[agency.group.node]].node_name`: The name of the node service, which is not configured in the service deployment scenario.**If this option is configured, make sure that the service names of different node services are not duplicated**;
+- `[[agency]].[[agency.group]].[[agency.group.node]].node_name`: The name of the node service, which is usually not configured in the service deployment scenario. **If this option is configured, make sure that the service names of different node services are not duplicated**;
- `[[agency]].[[agency.group]].[[agency.group.node]].deploy_ip`: Deployment IP address of the Max node's BcosMaxNodeService;
- `[[agency]].[[agency.group]].[[agency.group.node]].executor_deploy_ip`: Deployment IP address of the Max node's BcosExecutorService;
- `[[agency]].[[agency.group]].[[agency.group.node]].pd_addrs`: The pd access addresses of the TiKV cluster. **Each Max node must connect to an independent TiKV cluster; TiKV clusters cannot be shared by different Max nodes**;
-- `[[agency]].[[agency.group]].[[agency.group.node]].key_page_size`: The granularity of the KeyPage. The default value is 10KB.;
-- `[[agency]].[[agency.group]].[[agency.group.node]].enable_storage_security`: Whether to enable disk placement encryption.
The default value is `false`;
+- `[[agency]].[[agency.group]].[[agency.group.node]].key_center_url`: If disk encryption is enabled, the URL of the key-manager can be configured here
- `[[agency]].[[agency.group]].[[agency.group.node]].cipher_data_key`: If disk encryption is enabled, configure the data encryption key here
- `[[agency]].[[agency.group]].[[agency.group.node]].monitor_listen_port`: The listening port of the monitoring service, which is `3902` by default
- `[[agency]].[[agency.group]].[[agency.group.node]].monitor_log_path`: Path of the blockchain node logs to be monitored
@@ -218,13 +218,13 @@ name = "agencyA"
## 3 Blockchain service expansion configuration
-'BcosBuilder / max 'provides blockchain node service expansion and RPC / Gateway service expansion functions. The configuration template for blockchain node service expansion can be found in' conf / config-node-expand-example.toml 'path, RPC / Gateway service expansion configuration template in' conf / config-service-expand-example.toml 'under the path。
+`BcosBuilder/max` provides blockchain node service expansion and RPC / Gateway service expansion functions. The configuration template for blockchain node service expansion is in the `conf/config-node-expand-example.toml` path, and the configuration template for RPC / Gateway service expansion is in the `conf/config-service-expand-example.toml` path.
### 3.1 RPC Service Expansion Configuration
In FISCO BCOS Pro version blockchain, an RPC service can contain multiple RPC service nodes. BcosBuilder provides the RPC service scaling function, which can scale out RPC service nodes based on existing RPC services. The configuration options are mainly located in `[chain]` and `[[agency]].[agency.rpc]`, mainly including:
-- `[chain].chain_id`: The ID of the chain to which the expanded RPC service belongs.;
+- `[chain].chain_id`: The ID of the chain to which the expanded RPC service belongs;
- `[chain].rpc_sm_ssl`: Whether the expanded RPC service and SDK client use the SM SSL connection;
- `[chain].rpc_ca_cert_path`: Specify the path to the CA certificate and CA private key of the expanded RPC service;
- `[[agency]].[agency.rpc].deploy_ip`: Deployment IP of the scaled-out RPC service;
@@ -265,13 +265,13 @@ enable_storage_security = false
### 3.2 Configuration of Gateway Service Expansion
Similar to the RPC service, the scaling configuration options of the Gateway service are mainly located in `[chain]` and `[[agency]].[agency.gateway]`, mainly including:
-- `[chain].chain_id`: The ID of the chain to which the expanded Gateway service belongs.;
+- `[chain].chain_id`: The ID of the chain to which the expanded Gateway service belongs;
- `[chain].gateway_sm_ssl`: Whether the SM SSL connection is used between the expanded Gateway service and the SDK client;
- `[chain].gateway_ca_cert_path`: Specify the path of the CA certificate and the CA private key of the expanded Gateway service;
- `[[agency]].[agency.gateway].deploy_ip`: Deployment IP address of the scaled-out Gateway service;
- `[[agency]].[agency.gateway].listen_ip`: The listening IP address of the Gateway service node. The default value is `0.0.0.0`;
- `[[agency]].[agency.gateway].listen_port`: The listening port of the Gateway service. The default value is `30300`;
-- `[[agency]].[agency.gateway].peers`: The connection information of the Gateway service. You must configure the connection IP address and connection port information of all Gateway service nodes.。
+- `[[agency]].[agency.gateway].peers`: The connection information of the Gateway service.
You must configure the connection IP addresses and ports of all Gateway service nodes.
The following is an example of expanding the `agencyA` Gateway service `agencyABcosGatewayService` to `172.25.0.5`:
@@ -304,20 +304,20 @@ enable_storage_security = false
### 3.3 Blockchain node expansion configuration
-'BcosBuilder / max 'provides the blockchain node expansion function, which can expand new blockchain node services for specified groups. The blockchain node expansion configuration template is located in' conf / config-node-expand-example.toml 'path, mainly including**chain configuration**和**Scale-out deployment configuration**, as follows:
+`BcosBuilder/max` provides the blockchain node expansion function to expand new blockchain node services for a specified group. The blockchain node expansion configuration template is located in the `conf/config-node-expand-example.toml` path, mainly including the **chain configuration** and the **scale-out deployment configuration**, as follows:
-- `[chain].chain_id`: The ID of the chain to which the expanded blockchain node belongs.;
+- `[chain].chain_id`: The ID of the chain to which the expanded blockchain node belongs;
- `[[group]].group_id`: Group ID of the expansion node;
- `[[group]].genesis_config_path`: Path to the Genesis block configuration of the expansion node;
-- `[[group]].sm_crypto`: Whether the scaling node is a state secret node. The default value is' false '.
The default value is `false`;
- `[[agency]].[[agency.group]].group_id`: Group ID of the expansion node;
-- `[[agency]].[[agency.group.node]].node_name`: The service name of the expanded blockchain node.**Cannot conflict with the service name of an existing blockchain node**;
+- `[[agency]].[[agency.group.node]].node_name`: The service name of the expanded blockchain node. **It cannot conflict with the service name of an existing blockchain node**;
- `[[agency]].[[agency.group.node]].deploy_ip`: Deployment IP address of the expanded blockchain node service;
- `[[agency]].[[agency.group.node]].pd_addrs`: The pd access addresses of the TiKV cluster corresponding to the expansion node. **Each Max node must connect to an independent TiKV cluster; TiKV clusters cannot be shared by different Max nodes**;
- `[[agency]].[[agency.group.node]].executor_deploy_ip`: Deployment IP address of the Max node's BcosExecutorService;
- `[[agency]].[[agency.group.node]].enable_storage_security`: Whether disk encryption is enabled on the expansion node;
-- `[[agency]].[[agency.group.node]].key_center_url`: key-The url of the manager. You need to configure the url when you enable disk encryption.;
+- `[[agency]].[[agency.group.node]].key_center_url`: The URL of the key-manager. You need to configure this URL when disk encryption is enabled;
-- `[[agency]].[[agency.group.node]].cipher_data_key`: Data disk encryption key. You need to configure the data disk encryption key in the disk encryption scenario.。
+- `[[agency]].[[agency.group.node]].cipher_data_key`: Data disk encryption key.
You need to configure the data disk encryption key in the disk encryption scenario.
The following is an example of expanding a blockchain node named `node2` to `172.25.0.5` for the `group0` group of the `agencyA` organization:
```ini
[[group]]
@@ -343,13 +343,13 @@ name = "agencyA"
### 3.4 Blockchain executor expansion configuration
-Traditional blockchain nodes are deployed on a machine, and the execution rate of transactions is limited by the performance of a machine。The Max version of FISCO BCOS supports the deployment of transaction executors in blockchain nodes on multiple machines.**Multi-machine parallel execution of intra-block transactions**The transaction processing performance of a single blockchain node is greatly expanded.。At the same time, multiple transaction actuators also improve the stability of the system, only one actuator can work online。
+Traditional blockchain nodes are deployed on a single machine, so transaction execution throughput is limited by that machine's performance. The Max version of FISCO BCOS supports deploying a blockchain node's transaction executors across multiple machines, **executing intra-block transactions in parallel on multiple machines**, which greatly expands the transaction processing capacity of a single blockchain node. Multiple transaction executors also improve system stability: the node can keep working as long as one executor is online.
-'BcosBuilder / max 'provides the blockchain node expansion function, which can expand new blockchain node services for specified groups. The blockchain node expansion configuration template is located in' conf / config-node-expand-example.toml 'path, mainly including**chain configuration**和**Scale-out deployment configuration**。The relevant configuration items of the transaction executor are as follows. We can configure multiple executors for each node:
+`BcosBuilder/max` provides the blockchain node expansion function to expand new blockchain node services for a specified group.
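As a sketch only, the multi-executor layout controlled by `executor_deploy_ip` can be written roughly as the following TOML fragment; the key names follow the configuration items described in this chapter, but all IP addresses, names, and pd addresses are illustrative placeholders, not values from this tutorial:

```toml
# Illustrative sketch of a Max node whose executors run on several machines.
[[agency]]
name = "agencyA"

[[agency.group]]
group_id = "group0"

[[agency.group.node]]
node_name = "node0"
# Machine running the BcosMaxNodeService for this node
deploy_ip = "172.25.0.3"
# One BcosExecutorService is deployed per IP listed here;
# adding IPs scales transaction execution across machines
executor_deploy_ip = ["172.25.0.3", "172.25.0.4"]
# pd endpoint of this node's own (non-shared) TiKV cluster
pd_addrs = "172.25.0.10:2379"
```

Each additional IP in `executor_deploy_ip` deploys one more executor for the node, which the tars console then shows as a separate service process.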
The blockchain node expansion configuration template is located in the `conf/config-node-expand-example.toml` path, mainly including the **chain configuration** and the **scale-out deployment configuration**. The relevant configuration items of the transaction executor are as follows; multiple executors can be configured for each node:
* `[[agency]].[[agency.group.node]].executor_deploy_ip`
-When the max version is built, the multi-machine architecture of executor is built. At this time, you can see that the executor process is running through the tars console, and it is not the same as the node process.
+When the Max version is built, the multi-machine executor architecture is built with it. At this point, you can see through the tars console that the executor process is running, separate from the node process.
![](../../../images/tutorial/expand_executor0.png)
@@ -358,7 +358,7 @@ When the max version is built, the multi-machine architecture of executor is bui
More executors can be added:
``` bash
-# After the Max node has been built, edit the config-node-expand-example.toml, add more executors
+# After the Max node is built, edit config-node-expand-example.toml to add more executors
cd tools/BcosBuilder/max/
vim config-node-expand-example.toml
```
@@ -371,7 +371,7 @@ Modify `executor_deploy_ip` in the file to add more machine ip addresses to
# As originally
executor_deploy_ip=["172.25.0.3"]
# More can be added
-executor_deploy_ip=["172.25.0.3","172.25.0.4","172.25.0.5"] # an executor of the node is deployed under each ip address.
+executor_deploy_ip=["172.25.0.3","172.25.0.4","172.25.0.5"] # an executor of the node is deployed under each ip address
```
Call the script to expand capacity:
@@ -380,16 +380,16 @@
python3 build_chain.py chain -c config-node-expand-example.toml -o expand -t executor
```
-After success, you can see more executors in the active state on the dashboard page of tars.
+After success, you can see more executors in the `active` state on the tars dashboard page.
![](../../../images/tutorial/expand_executor1.png)
#### Management Operations
-After scaling, you can stop or restart the executor through the tars console.。After an executor is stopped or restarted, there is no need to restart the corresponding blockchain node process. The node automatically rebuilds the transaction execution context with all online executors.。The multi-machine deployment of the transaction executor improves the performance of the transaction execution while also improving the stability of the system.。
+After scaling, you can stop or restart executors through the tars console. After an executor is stopped or restarted, there is no need to restart the corresponding blockchain node process; the node automatically rebuilds the transaction execution context with all online executors. Multi-machine deployment of transaction executors improves transaction execution performance while also improving system stability.
![](../../../images/tutorial/expand_executor2.png)
### TiKV Expansion
-Max nodes are recommended to use the cluster version of TIKV in the production environment.
The TiKV cluster version can be used as the node's storage back end to achieve scale-out and scale-in easily. For specific scale-out and scale-in instructions, [please refer to the official documents](https://docs.pingcap.com/zh/tidb/dev/scale-tidb-using-tiup).
diff --git a/3.x/en/docs/tutorial/pro/config.md b/3.x/en/docs/tutorial/pro/config.md
index 0c60ecbb0..181f4b3d2 100644
--- a/3.x/en/docs/tutorial/pro/config.md
+++ b/3.x/en/docs/tutorial/pro/config.md
@@ -76,9 +76,9 @@ Block chain node service construction please refer to [here](./installation.html
```eval_rst
.. note::
- - **The Genesis block configuration must be consistent for all nodes in the group.**
+ - **The Genesis block configuration must be consistent for all nodes in the group**
- **Genesis block configuration file cannot be changed after chain initialization**
- - After the chain is initialized, even if the creation block configuration is changed, the new configuration will not take effect, and the system still uses the genesis configuration when the chain is initialized
+ - After the chain is initialized, even if the genesis block configuration is changed, the new configuration will not take effect; the system still uses the genesis configuration from chain initialization
```
#### 2.1.1 Chain Configuration Options
@@ -87,7 +87,7 @@ Chain configuration options are located in `[chain]` and include:
- `[chain].sm_crypto`: Used to configure the crypto type of the ledger: `true` means the ledger uses the SM (national crypto) algorithms, `false` means the ledger uses non-SM algorithms. The default is `false`;
- `[chain].group_id`: group id;
-- `[chain].chain_id`: chain id.
+- `[chain].chain_id`: chain id
An example of node chain configuration options is as follows:
@@ -102,10 +102,10 @@ chain_id=chain0
`[consensus]` involves consensus-related configurations, including:
-- `[consensus].consensus_type`: Consensus type. The default setting is' pbft '.
Currently, FISCO BCOS v3.x only supports the PBFT consensus algorithm.;
+- `[consensus].consensus_type`: Consensus type. The default setting is `pbft`. Currently, FISCO BCOS v3.x only supports the PBFT consensus algorithm;
- `[consensus].block_tx_count_limit`: The maximum number of transactions that can be included in each block. The default setting is `1000`;
- `[consensus].leader_period`: The number of consecutive blocks packed by each leader in the consensus process. The default value is `5`;
-- '[consensus] .node.idx': list of consensus nodes, configured with the NodeIDs of the participating consensus nodes。
+- `[consensus].node.idx`: The list of consensus nodes, configured with the NodeIDs of the participating consensus nodes.
The configuration example of `[consensus]` is as follows:
@@ -142,13 +142,13 @@ FISCO BCOS v3.0.0 designs and implements a compatibility framework that supports
`[executor]` configuration items involve execution-related genesis block configurations, mainly including:
- `[executor].is_wasm`: Used to configure the virtual machine type: `true` means the WASM virtual machine is used, `false` means the EVM virtual machine is used. This option is not dynamically adjustable; the default is `false`;
-- `[executor].is_auth_check`: The configuration switch for permission control. 'true' indicates that permission control is enabled, and 'false' indicates that permission control is disabled. This configuration option cannot be dynamically adjusted.
The permission control function is disabled by default; - `[executor].is_serial_execute`: Transaction execution serial / parallel mode configuration switch, 'true' indicates the serial execution mode, 'false' indicates the DMC parallel execution mode, the configuration option is not dynamically adjustable, the default is 'false'; - `[executor].auth_admin_account`: Permission administrator account address, only used in permission control scenarios (this configuration must be added when the chain version is greater than 3.3 or permission control is enabled)。 ### 2.2 Node Configuration -Node configuration 'config.ini' is mainly used to configure the node's chain ID, group ID, and ledger type(State Secret / Non-State Secret)and so on, including service configuration options, consensus configuration options, storage configuration options, transaction pool configuration options, log configuration options, and so on.。 +Node configuration 'config.ini' is mainly used to configure the node's chain ID, group ID, ledger type (state secret / non-state secret), and so on, covering service, consensus, storage, transaction pool, and log configuration options。 #### 2.2.1 Service Configuration Options @@ -187,10 +187,10 @@ min_seal_time=500 The storage configuration option is located in '[storage]' and is primarily used to configure on-chain data paths: - `[storage].data_path`: Ledger Data Storage Path; -- `[storage].enable_cache`: Whether to enable caching. The default value is true.; +- `[storage].enable_cache`: Whether to enable caching. The default value is true; - `[storage].type`: The underlying storage database type, which is RocksDB by default; - `pd_addrs`: left empty in the Pro version; this field is used only by the Max version; -- `key_page_size`: The size of each page in the key _ page storage. 
The default value is 10240k.。 +- `key_page_size`: The size of each page in the key_page storage. The default value is 10240k。 ```ini [storage] @@ -207,8 +207,8 @@ The transaction pool configuration option is located at '[txpool]': - `[txpool].limit`: Capacity limit of the transaction pool, default is '15000'; - `[txpool].notify_worker_num`: Number of transaction notification threads, 2 by default; -- `[txpool].verify_worker_num`: Number of transaction verification threads. The default value is the number of machine CPU cores.; -- `[txpool].txs_expiration_time`: The transaction expiration time, in seconds. The default value is 10 minutes. That is, transactions that have not been packaged by the consensus module for more than 10 minutes will be discarded directly.。 +- `[txpool].verify_worker_num`: Number of transaction verification threads. The default value is the number of machine CPU cores; +- `[txpool].txs_expiration_time`: The transaction expiration time, in seconds. The default value is 10 minutes. That is, transactions that have not been packaged by the consensus module for more than 10 minutes will be discarded directly。 ```ini [txpool] @@ -228,8 +228,8 @@ Log configuration options are located in '[log]' and include: - `[log].enable`: Enables / disables logging, set to 'true' to enable logging;Set to 'false' to disable logging,**The default setting is true, and performance tests can set this option to 'false' to reduce the impact of printing logs on test results** - `[log].log_path`:Log File Path。 -- `[log].level`: Log level. Currently, there are five log levels: 'trace', 'debug', 'info', 'warning', and 'error'. After a log level is set, logs greater than or equal to the log level are entered in the log file.。 -- '[log] .max _ log _ file _ size': the maximum size of each log file.**The unit of measurement is MB, the default is 200MB**。 +- `[log].level`: Log level. Currently, there are five log levels: 'trace', 'debug', 'info', 'warning', and 'error'. 
After a log level is set, logs at or above that level are written to the log file; the level ordering is `error > warning > info > debug > trace`。 +- `[log].max_log_file_size`: the maximum size of each log file. **The unit is MB; the default is 200MB**。 ```ini [log] @@ -251,10 +251,10 @@ The network connection configuration is located at '[p2p]' and mainly includes: - `[p2p].listen_ip`: RPC / Gateway listens to the IP address. To ensure normal communication between nodes deployed across machines, the default listening IP address is' 0.0.0.0'; - `[p2p].listen_port`: RPC / Gateway listening port, default setting is' 30300'; -- `[p2p].sm_ssl`: Whether to use state-secret SSL connections between nodes or between SDKs and RPC services. The default value is false.; -- `[p2p].nodes_path`: The directory where the gateway connection file 'nodes.json' is located. The default value is the current directory.; +- `[p2p].sm_ssl`: Whether to use state-secret SSL connections between nodes or between SDKs and RPC services. The default value is false; +- `[p2p].nodes_path`: The directory where the gateway connection file 'nodes.json' is located. The default value is the current directory; - `[p2p].nodes_file`: The name of the gateway connection information file 'nodes.json'. The default value is' nodes.json'; -- `[p2p].thread_count`: Number of RPC / Gateway network processing threads. The default value is 4. +- `[p2p].thread_count`: Number of RPC / Gateway network processing threads. The default value is 4 ```ini [p2p] @@ -296,7 +296,7 @@ chain_id = chain0 ### 3.4 Disk encryption configuration -FISCO BCOS v3.0.0 supports disk encryption. 
It can encrypt the SSL connection private key of RPC / Gateway to ensure the confidentiality of the SSL connection private key: - `[storage_security].enable`: Whether to enable the disk encryption function, which is turned off by default; - `[storage_security].key_center_url`: the url of the [Key Manager](../../design/storage_security.md) from which the data encryption and decryption key is fetched when disk encryption is enabled; @@ -319,8 +319,8 @@ The log configuration is in the '[log]' option: - `[log].enable`: Enables / disables logging, set to 'true' to enable logging;Set to 'false' to disable logging,**The default setting is true, and performance tests can set this option to 'false' to reduce the impact of printing logs on test results** - `[log].log_path`:Log File Path。 -- `[log].level`: Log level. Currently, there are five log levels: 'trace', 'debug', 'info', 'warning', and 'error'. After a log level is set, logs greater than or equal to the log level are entered in the log file.。 -- '[log] .max _ log _ file _ size': the maximum size of each log file.**The unit of measurement is MB, the default is 200MB**。 +- `[log].level`: Log level. Currently, there are five log levels: 'trace', 'debug', 'info', 'warning', and 'error'. 
After a log level is set, logs at or above that level are written to the log file; the level ordering is `error > warning > info > debug > trace`。 +- `[log].max_log_file_size`: the maximum size of each log file. **The unit is MB; the default is 200MB**。 ```ini [log] diff --git a/3.x/en/docs/tutorial/pro/deploy_pro_by_buildchain.md b/3.x/en/docs/tutorial/pro/deploy_pro_by_buildchain.md index 76f6915ce..7caeb07cf 100644 --- a/3.x/en/docs/tutorial/pro/deploy_pro_by_buildchain.md +++ b/3.x/en/docs/tutorial/pro/deploy_pro_by_buildchain.md @@ -5,7 +5,7 @@ Tags: "build _ chain" "build version of blockchain network" ---- ```eval_rst - The deployment tool build _ chain script aims to enable users to deploy and use FISCO BCOS Pro / max version blockchain without tars as quickly as possible. + The deployment tool build_chain script aims to enable users to deploy and use a FISCO BCOS Pro / Max version blockchain without tars as quickly as possible ``` ## 1. Script function introduction @@ -20,37 +20,37 @@ Script command, which supports' deploy '. The default value is' deploy': ### **'g 'option [**Optional**]** -Set the group ID. If no group ID is set, the default value is group0.。 +Set the group ID. If no group ID is set, the default value is group0。 ### **'I 'option [**Optional**]** -Used to set the chain ID. If it is not set, the default value is chain0.。 +Used to set the chain ID. If it is not set, the default value is chain0。 ### **'V 'Options [**Optional**]** -Specifies the chain version (air, pro, max). The default value is air.。 +Specifies the chain version (air, pro, max). The default value is air。 ### **'l 'Options [**Optional**]** -The IP address of the generated node and the number of blockchain nodes deployed on the corresponding IP address. 
The parameter format is `ip1:nodeNum1, ip2:nodeNum2`。 The 'l' option for deploying two nodes on a machine with IP address '192.168.0.1' and four nodes on a machine with IP address '127.0.0.1' is as follows: `192.168.0.1:2, 127.0.0.1:4` ### **'p 'option [**Optional**]** -Specifies the start port for listening to P2P, RPC, tars, tikv, and monitor services. The default start ports are 30300, 20200, 40400, 2379, and 3901.。 +Specifies the start port for listening to P2P, RPC, tars, tikv, and monitor services. The default start ports are 30300, 20200, 40400, 2379, and 3901。 An example specifying 30300 as the P2P listen start port and 20200 as the RPC listen start port is as follows: ``` -# Specify the P2P and RPC ports of the node. The remaining ports are the default values. +# Specify the P2P and RPC ports of the node. The remaining ports are the default values -p 30300,20200 ``` ### **'e 'option [**Optional**]** -Specifies the path of the binary executable files of the existing local Pro / Max versions such as rpc, gateway, and nodef. If no path is specified, the latest version of the binary is pulled by default. The default address is in the binary folder. For example, the default address of the binary for the Pro version is BcosBuilder / pro / binary.。 +Specifies the path of the binary executable files of the existing local Pro / Max versions such as rpc, gateway, and nodef. If no path is specified, the latest version of the binary is pulled by default. The default address is in the binary folder. For example, the default address of the binary for the Pro version is BcosBuilder/pro/binary。 ### **'y 'Options [**Optional**]** @@ -58,11 +58,11 @@ Specifies the binary download method of rpc, gateway, and nodef, git, or cdn. De ### **'v 'option [**Optional**]** -Specifies the binary download version of rpc, gateway, and nodef. 
The default value is v3.4.0。 ### **'r 'Option [**Optional**]** -Specifies the binary download path of the rpc, gateway, or nodef service. By default, the file is downloaded to the binary folder.。 +Specifies the binary download path of the rpc, gateway, or nodef service. By default, the file is downloaded to the binary folder。 ### **'c 'option [**Optional**]** @@ -81,7 +81,7 @@ Specifies the directory where the generated node artifacts are located. The defa Specify whether to build a full-link state-secret blockchain. The state-secret blockchain has the following features: - **Blockchain Ledger Uses State Secret Algorithm**: Using sm2 signature verification algorithm, sm3 hash algorithm and sm4 symmetric encryption and decryption algorithm。 -- **The state-secret SSL connection is used between the SDK client and the node.**。 +- **The state-secret SSL connection is used between the SDK client and the node**。 - **State-secret SSL connection between blockchain nodes**。 ### **'h 'option [**Optional**]** @@ -92,7 +92,7 @@ View Script Usage。 ### 2.1 Installation Dependencies -Deployment tool 'BcosBuilder' depends on 'python3, curl, docker, docker-compose ', depending on the operating system you are using, use the following command to install the dependency。 +The deployment tool 'BcosBuilder' depends on 'python3, curl, docker, docker-compose'. Depending on the operating system you are using, use the following command to install the dependency。 **Install Ubuntu Dependencies(Version not less than Ubuntu18.04)** @@ -119,7 +119,7 @@ Here are four examples of deployment chains 1. Specify the ip and port of the service and automatically generate the configuration file -Run the following command to deploy the RPC service, gateway service, and node service. 
+Run the following command to deploy the RPC, gateway, and node services. The P2P, RPC, and tars start ports are 30300, 20200, and 40400 respectively; the IPs of the two institutions are 172.31.184.227 and 172.30.93.111; each institution has two nodes; the latest binary is downloaded automatically; ``` @@ -128,7 +128,7 @@ bash build_chain.sh -p 30300,20200,40400 -l 172.31.184.227:2,172.30.93.111:2 -C 2. Deployment of State Secret Chain -Execute the following command through-s designated deployment state-secret chain, through-e specifies that a binary path already exists +Execute the following command, specifying deployment of a state-secret chain with -s and an existing binary path with -e ``` bash build_chain.sh -p 30300,20200,40400 -l 172.31.184.227:2,172.30.93.111:2 -C deploy -V pro -o generate -t all -e ./binary -s @@ -136,7 +136,7 @@ bash build_chain.sh -p 30300,20200,40400 -l 172.31.184.227:2,172.30.93.111:2 -C 3. Specify the download binary version -Run the following command to deploy the RPC service, the Gateway service, and the node service. Specify the download method of the binary as cdn, v3.4.0, and the download path binaryPath. +Run the following command to deploy the RPC service, the Gateway service, and the node service. Specify the binary download method as cdn, the version as v3.4.0, and the download path as binaryPath ``` bash build_chain.sh -p 30300,20200 -l 172.31.184.227:2,172.30.93.111:2 -C deploy -V pro -o generate -y cdn -v v3.4.0 -r ./binaryPath diff --git a/3.x/en/docs/tutorial/pro/expand_group.md b/3.x/en/docs/tutorial/pro/expand_group.md index c35fdb79b..3d556c82a 100644 --- a/3.x/en/docs/tutorial/pro/expand_group.md +++ b/3.x/en/docs/tutorial/pro/expand_group.md @@ -11,20 +11,20 @@ BCOS blockchain system group expansion and offline steps。 ```eval_rst .. note:: - Before scaling a new group, please refer to 'here <. 
/ installation.html >' _ Building a Pro Blockchain Network + - Before expanding the new group, please refer to 'here <./installation.html>'_ to build a Pro version blockchain network ``` ## 1. Expand the new group -Here take the machine at IP '172.25.0.3'(Container)Two blockchain nodes with chain ID 'chain' and group ID 'group2' are used as examples to introduce the new group expansion.。 +This section takes two blockchain nodes with chain ID 'chain' and group ID 'group2' on the machine (container) at IP '172.25.0.3' as an example to introduce expanding a new group。 ### 1.1 Setting up a new group configuration ```eval_rst .. note:: - In the actual operation, the tars token must be replaced by the tars web management platform [admin]-> [user center]-> [token management] to obtain available tokens。 + In actual operation, the tars token must be replaced with an available token obtained from the tars web management platform via [admin] -> [user center] -> [token management]。 ``` -The service deployment configuration template 'conf / config can be used directly to scale out a new group-deploy-example.toml ', set the group ID to' group2 ', as follows: +You can directly use the service deployment configuration template 'conf/config-deploy-example.toml', setting the group ID to 'group2', as follows: **macOS System:** ```shell $ cd ~/fisco/BcosBuilder/pro # Copy Configuration File $ cp conf/config-deploy-example.toml config.toml -# Configure tars token: Through the tars web management platform [admin]-> [user center]-> [token management] to obtain available tokens +# Configure the tars token: obtain an available token from the tars web management platform via [admin] -> [user center] -> [token management] # The token here is: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiJhZG1pbiIsImlhdCI6MTYzODQzMTY1NSwiZXhwIjoxNjY3MjAyODU1fQ.430Gi $ sed -i .bkp 's/tars_token = ""/tars_token = 
"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiJhZG1pbiIsImlhdCI6MTYzODQzMTY1NSwiZXhwIjoxNjY3MjAyODU1fQ.430ni50xWPJXgJdckpOTktJB3kAMNwFdl8w_GIP_3Ls"/g' config.toml @@ -55,7 +55,7 @@ $ cd ~/fisco/BcosBuilder/pro # Copy Configuration File $ cp conf/config-deploy-example.toml config.toml -# Configure tars token: Through the tars web management platform [admin]-> [user center]-> [token management] to obtain available tokens +# Configure tars token: Through the tars web management platform [admin] ->User Center ->[token management] obtaining available tokens # The token here is: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiJhZG1pbiIsImlhdCI6MTYzODQzMTY1NSwiZXhwIjoxNjY3MjAyODU1fQ.430Gi $ sed -i 's/tars_token = ""/tars_token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiJhZG1pbiIsImlhdCI6MTYzODQzMTY1NSwiZXhwIjoxNjY3MjAyODU1fQ.430ni50xWPJXgJdckpOTktJB3kAMNwFdl8w_GIP_3Ls"/g' config.toml @@ -225,7 +225,7 @@ generated/chain0/group2 └── config.genesis ``` -After the new group is successfully expanded, you can see the new blockchain services' agencyAgroup2node0BcosNodeService 'and' agencyBgroup2node0BcosNodeService 'on the tars web management platform.: +After the new group is successfully expanded, you can see the new blockchain services' agencyAgroup2node0BcosNodeService 'and' agencyBgroup2node0BcosNodeService 'on the tars web management platform: ![](../../../images/tutorial/expand_group.png) @@ -325,7 +325,7 @@ The steps for the offline group 'group2' are as follows: # Enter the operation directory cd ~/fisco/BcosBuilder/pro -# Offline group group2: Make sure that config.toml is the configuration file used during group group2 expansion. 
+# Offline group group2: Make sure that config.toml is the configuration file used during group group2 expansion python3 build_chain.py chain -o undeploy -t node ``` diff --git a/3.x/en/docs/tutorial/pro/expand_node.md b/3.x/en/docs/tutorial/pro/expand_node.md index cb799df11..a5e582191 100644 --- a/3.x/en/docs/tutorial/pro/expand_node.md +++ b/3.x/en/docs/tutorial/pro/expand_node.md @@ -4,31 +4,31 @@ Tags: "Pro version of the blockchain network" "" Expansion node "" ------------ -'BcosBuilder 'provides the function of expanding new nodes on the basis of existing groups. In this chapter, [Building a Pro Blockchain Network](./installation.md)On the basis of the expansion of two new blockchain nodes, to help users master the Pro version of FISCO BCOS blockchain node expansion steps.。 +'BcosBuilder' provides the function of expanding new nodes on the basis of existing groups. This chapter expands two new blockchain nodes on top of [Building a Pro Blockchain Network](./installation.md) to help users master the Pro version FISCO BCOS blockchain node expansion steps。 ```eval_rst .. note:: - Before performing node scaling, refer to 'Building a Pro Blockchain Network <. / installation.html >' _ Deploy a Pro Blockchain。 + Before performing node scaling, refer to 'Building a Pro Blockchain Network <./installation.html>'_ to deploy a Pro version blockchain。 ``` ## 1. Deploy tarsnode -Before scaling the blockchain node service, you need to install tarsnode on the machine where the scaled blockchain service node is deployed. 
To help users quickly experience the service scaling process on a single machine, this chapter directly virtualizes the container with IP address '172.25.0.5' through the bridge network as the physical machine where the scaled blockchain service node is installed。 ```eval_rst .. note:: - - For the installation of tarsnode in the actual production environment, please refer to 'tars installation and deployment < https://doc.tarsyun.com/#/markdown/ TarsCloud/TarsDocs/installation/README.md>`_ - - If the tarsnode is already installed and the tarsnode is started, you can ignore this step. + - For the installation of tarsnode in the actual production environment, please refer to 'tars installation and deployment <https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/installation/README.md>`_ + - If the tarsnode is already installed and started, you can ignore this step ``` ```shell # Enter the operation directory cd ~/fisco/BcosBuilder -# Linux system: Go to tarsnode docker-Compose directory(macos system can be skipped) +# Linux system: Go to the directory where the tarsnode docker-compose file is located (macOS systems can skip this step) cd docker/bridge/linux/node -# macos system: Go to tarsnode docker-Compose directory(Linux system can be skipped) +# macOS system: Go to the directory where the tarsnode docker-compose file is located (Linux systems can skip this step) cd docker/bridge/mac/node # Install and start tarsnode @@ -39,10 +39,10 @@ docker-compose up -d ```eval_rst .. 
note:: - In the actual operation, the tars token must be replaced by the tars web management platform [admin]-> [user center]-> [token management] to obtain available tokens。 + In actual operation, the tars token must be replaced with an available token obtained from the tars web management platform via [admin] -> [user center] -> [token management]。 ``` -For more information about the capacity expansion configuration of the blockchain node service, see the capacity expansion template 'conf / config' of 'BcosBuilder'.-node-expand-example.toml ', the specific configuration steps are as follows: +For more information about how to configure blockchain node service expansion, see the expansion template 'conf/config-node-expand-example.toml' of 'BcosBuilder'. The specific configuration steps are as follows: ```shell # Enter the operation directory @@ -51,7 +51,7 @@ cd ~/fisco/BcosBuilder/pro # Copy Template Configuration cp conf/config-node-expand-example.toml config.toml -# Configure tars token: Through the tars web management platform [admin]-> [user center]-> [token management] to obtain available tokens +# Configure the tars token: obtain an available token from the tars web management platform via [admin] -> [user center] -> [token management] # The token here is: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiJhZG1pbiIsImlhdCI6MTYzODQzMTY1NSwiZXhwIjoxNjY3MjAyODU1fQ.430Gi # Linux system(macOS system Skip this step): sed -i 's/tars_token = ""/tars_token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiJhZG1pbiIsImlhdCI6MTYzODQzMTY1NSwiZXhwIjoxNjY3MjAyODU1fQ.430ni50xWPJXgJdckpOTktJB3kAMNwFdl8w_GIP_3Ls"/g' config.toml @@ -64,7 +64,7 @@ Configure 'config.toml' for scaling as follows: ```ini [tars] tars_url = "http://127.0.0.1:3000" -# Access the token of the tars service. During deployment, replace the token from the tars web management platform [admin]-> [user center]-> [token management] to obtain available tokens +# Access the token of the tars service. 
During deployment, replace it with an available token obtained from the tars web management platform via [admin] -> [user center] -> [token management] tars_token ="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiJhZG1pbiIsImlhdCI6MTYzODQzMTY1NSwiZXhwIjoxNjY3MjAyODU1fQ.430ni50xWPJXgJdckpOTktJB3kAMNwFdl8w_GIP_3Ls" tars_pkg_dir = "" @@ -181,7 +181,7 @@ generated/chain0/group0/172.25.0.5 s ``` -After the blockchain service is successfully expanded, you can see the new blockchain services' agencyAgroup0node1BcosNodeService 'and' agencyAgroup0node2BcosNodeService 'on the tars web management platform.: +After the blockchain service is successfully expanded, you can see the new blockchain services 'agencyAgroup0node1BcosNodeService' and 'agencyAgroup0node2BcosNodeService' on the tars web management platform: ![](../../../images/tutorial/expand_node.png) @@ -189,7 +189,7 @@ After the blockchain service is successfully expanded, you can see the new block ```eval_rst .. note:: - When scaling a new node, we do not recommend that you add the node as a consensus node. You can add the node as a consensus node only when the block height of the scaling node is the same as the highest block height of the existing node in the chain.。 + When scaling a new node, we do not recommend that you add the node as a consensus node. You can add the node as a consensus node only when the block height of the scaling node is the same as the highest block height of the existing node in the chain。 ``` **Step 1: Obtain the NodeID of the scaling node** diff --git a/3.x/en/docs/tutorial/pro/expand_pro_withoutTars.md b/3.x/en/docs/tutorial/pro/expand_pro_withoutTars.md index 981c6ff92..8f66dee7d 100644 --- a/3.x/en/docs/tutorial/pro/expand_pro_withoutTars.md +++ b/3.x/en/docs/tutorial/pro/expand_pro_withoutTars.md @@ -15,11 +15,11 @@ Script command, which supports' deploy '. The default value is' deploy': **'V 'Options [Optional]** -Specifies the chain version (air, pro, max). 
The default value is air.。 +Specifies the chain version (air, pro, max). The default value is air。 **'c 'option [Optional]** -Specifies the path of the service configuration file. This path must include config.toml. The default value is. / BcosBuilder / pro / config.toml.。 +Specifies the path of the service configuration file. This path must include config.toml. The default value is ./BcosBuilder/pro/config.toml。 **'o 'option [Optional]** @@ -36,7 +36,7 @@ Specific steps are as follows: 2. tars _ pkg _ dir in [tars] specifies the binary location of the service; 3. [group] genesis _ config _ path, which specifies the path of the genesis block configuration file of the existing node; 4. [agency.group] option in [[agency]], modify node _ name, tars _ listen _ port; -Note that the difference between the last deployed port and the tars _ listen _ port must be greater than 5. For example, if the tars _ listen _ port of the last deployed node is 40423, the minimum value of the tars _ listen _ port is 40428.。 +Note that the difference between the last deployed port and the tars_listen_port must be greater than 5. For example, if the tars_listen_port of the last deployed node is 40423, the minimum value of the tars_listen_port is 40428。 ``` The capacity expansion configuration 'config.toml' is as follows (for example, node1 of node A of the capacity expansion organization): @@ -159,7 +159,7 @@ expand_node/172.31.184.227/ ```eval_rst .. 
note:: - When you scale out a new node, first add the node as an observation node, and only when the block height of the scale-out node is the same as the highest block height of the existing node on the chain, can it be added as a consensus node.。 + When you scale out a new node, first add the node as an observation node, and only when the block height of the scale-out node is the same as the highest block height of the existing node on the chain, can it be added as a consensus node。 ``` **Step 1: Obtain the NodeID of the scaling node** @@ -260,7 +260,7 @@ Major modifications: 3. Set the institution name in [[agency]] 4. Set the deploy _ ip, listen _ port, tars _ listen _ port service ip and corresponding port of [agency.rpc]; 5. Set the deploy _ ip, listen _ port, tars _ listen _ port service ip and corresponding port of [agency.gateway], and modify the peers (you need to write the IP: port of the deployed gateway, and other deployed gateways do not need to modify the corresponding nodes.json; -Note that the difference between tars _ listen _ port and the last deployed port must be greater than 5. For example, if the tars _ listen _ port of the last deployed node is 40423, the minimum value of tars _ listen _ port in this instance is 40428, and the minimum value of tars _ listen _ port in gateway is 44429. +Note that the difference between tars _ listen _ port and the last deployed port must be greater than 5. 
For example, if the tars _ listen _ port of the last deployed node is 40423, the minimum value of tars _ listen _ port in this instance is 40428, and the minimum value of tars _ listen _ port in gateway is 44429 ``` The configuration of the new RPC / Gateway service 'config.toml' is as follows: diff --git a/3.x/en/docs/tutorial/pro/expand_service.md b/3.x/en/docs/tutorial/pro/expand_service.md index d5345431a..de73a8af0 100644 --- a/3.x/en/docs/tutorial/pro/expand_service.md +++ b/3.x/en/docs/tutorial/pro/expand_service.md @@ -4,36 +4,36 @@ Tags: "Pro version of blockchain network" "Scaling RPC service" ------------ -If the RPC / Gateway service cannot support business traffic, you need to scale out the RPC / Gateway service. BcosBuilder provides the function of scaling out the RPC / Gateway service. This chapter uses a stand-alone scaling out of the RPC / Gateway service of the Pro version FISCO BCOS alliance chain as an example to help users master the service scaling steps of the Pro version FISCO BCOS blockchain.。 +If the RPC / Gateway service cannot support business traffic, you need to scale out the RPC / Gateway service. BcosBuilder provides the function of scaling out the RPC / Gateway service. This chapter uses a stand-alone scaling out of the RPC / Gateway service of the Pro version FISCO BCOS alliance chain as an example to help users master the service scaling steps of the Pro version FISCO BCOS blockchain。 ```eval_rst .. note:: - Before scaling out RPC, refer to 'Building a Pro Blockchain Network <. / installation.html >' _ Deploy a Pro Blockchain。 + Before scaling out RPC, refer to 'Building a Pro Blockchain Network <./installation.html>'_ to deploy a Pro version blockchain。 ``` ## 1. Deploy tarsnode -Before scaling the RPC / Gateway service, you must first install the tarsnode on the machine where the scaled-out RPC / Gateway service node is deployed. 
To help users quickly experience the service scaling process on a single machine, this chapter directly virtualizes the container with IP address' 172.25.0.5 'through the bridge network as the physical machine on which the scaled-out RPC / Gateway service node is installed.。 +Before scaling the RPC / Gateway service, you must first install the tarsnode on the machine where the scaled-out RPC / Gateway service node is deployed. To help users quickly experience the service scaling process on a single machine, this chapter directly virtualizes the container with IP address '172.25.0.5' through the bridge network as the physical machine on which the scaled-out RPC / Gateway service node is installed。 ```eval_rst .. note:: - For the installation of tarsnode in the actual production environment, please refer to 'tars installation and deployment < https://doc.tarsyun.com/#/markdown/ TarsCloud/TarsDocs/installation/README.md>`_ + For the installation of tarsnode in the actual production environment, see 'tars installation and deployment <https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/installation/README.md>`_ ``` ```shell # Enter the operation directory cd ~/fisco/BcosBuilder -# Linux system: Go to tarsnode docker-Compose directory(macos system can be skipped) +# Linux system: Go to the directory where the tarsnode docker-compose file is located (macOS systems can skip this step) cd docker/bridge/linux/node -# macos system: Go to tarsnode docker-Compose directory(Linux system can be skipped) +# macOS system: Go to the directory where the tarsnode docker-compose file is located (Linux systems can skip this step) cd docker/bridge/mac/node # Install and start tarsnode docker-compose up -d ``` -After the tarsnode is successfully installed, you can use the [O & M Management]-> The newly installed tarsnode with IP address' 172.25.0.5 'is displayed in [Node Management]: +After the tarsnode is successfully installed, the newly installed tarsnode with IP address '172.25.0.5' is displayed under [Operation and Maintenance Management] -> [Node Management]: 
![](../../../images/tutorial/tars_node.png)

@@ -42,10 +42,10 @@ After the tarsnode is successfully installed, you can use the [O & M Management]
```eval_rst
.. note::
-    In the actual operation, the tars token must be replaced by the tars web management platform [admin]-> [user center]-> [token management] to obtain available tokens。
+    In actual operation, the tars token must be replaced with an available token obtained from the tars web management platform via [admin] -> [User Center] -> [Token Management].
```
-For details about how to configure RPC / Gateway service expansion, see the expansion template 'conf / config' of 'BcosBuilder'.-service-expand-example.toml ', the specific configuration steps are as follows:
+For more information about how to configure RPC / Gateway service expansion, see the expansion template 'conf/config-service-expand-example.toml' of 'BcosBuilder'. The specific configuration steps are as follows:
```shell
# Enter the operation directory
@@ -54,7 +54,7 @@ cd ~/fisco/BcosBuilder/pro
# Copy Template Configuration
cp conf/config-service-expand-example.toml config.toml
-# Configure tars token: Through the tars web management platform [admin]-> [user center]-> [token management] to obtain available tokens
+# Configure the tars token: obtain an available token from the tars web management platform via [admin] -> [User Center] -> [Token Management]
# The token here is: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiJhZG1pbiIsImlhdCI6MTYzODQzMTY1NSwiZXhwIjoxNjY3MjAyODU1fQ.430Gi
# Linux system(macOS system Skip this step):
sed -i 's/tars_token = ""/tars_token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiJhZG1pbiIsImlhdCI6MTYzODQzMTY1NSwiZXhwIjoxNjY3MjAyODU1fQ.430ni50xWPJXgJdckpOTktJB3kAMNwFdl8w_GIP_3Ls"/g' config.toml
@@ -237,10 +237,10 @@ After the Gateway service is successfully expanded, you can see [in the service
```eval_rst
.. note::
-    For console configuration and deployment, refer to 'Configuring and Using the Console <.
/ installation.html#id6>`_
+    For console configuration and deployment, see `Configuring and Using the Console <./installation.html#id6>`_
```
-Start the console and run the 'getPeers' command. The number of Gateway service nodes displayed on the console is increased from 2 to 3.。
+Start the console and run the 'getPeers' command. The number of Gateway service nodes displayed on the console increases from 2 to 3.
```shell
# Enter the operation directory

diff --git a/3.x/en/docs/tutorial/pro/index.md b/3.x/en/docs/tutorial/pro/index.md
index 9fbe22426..f339276f3 100644
--- a/3.x/en/docs/tutorial/pro/index.md
+++ b/3.x/en/docs/tutorial/pro/index.md
@@ -7,12 +7,12 @@ Tags: "Pro FISCO BCOS" "" Expansion "" Configuration "" Deployment Tools ""
```eval_rst
.. important::
-    Related Software and Environment Release Notes!'Please check < https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
+    For release notes on related software and environments, please check `here <https://fisco-bcos-documentation.readthedocs.io/zh_CN/latest/docs/compatibility.html>`_
```
```eval_rst
.. note::
-    It consists of RPC, Gateway access layer services, and multiple blockchain node node services. One node service represents a group, and the storage uses local RocksDB. All nodes share access layer services. Access layer services can be extended in parallel. It is suitable for production environments with controllable capacity (within T level) and can support multi-group dynamic expansion.。
+    It consists of RPC and Gateway access layer services and multiple blockchain node services. One node service represents a group, and storage uses local RocksDB. All nodes share the access layer services, which can be scaled out in parallel.
It is suitable for production environments with controllable capacity (within the TB level) and can support multi-group dynamic expansion.
```

```eval_rst

diff --git a/3.x/en/docs/tutorial/pro/installation.md b/3.x/en/docs/tutorial/pro/installation.md
index 5d55414b5..c4e4806c7 100644
--- a/3.x/en/docs/tutorial/pro/installation.md
+++ b/3.x/en/docs/tutorial/pro/installation.md
@@ -4,18 +4,18 @@ Tags: "Pro version of the blockchain network" "deployment"
------------
-FISCO BCOS 3.x supports the Pro version of the microservice blockchain architecture. The Pro version of FISCO BCOS includes RPC services, Gateway services, and node services. Each service can be deployed and expanded independently.。Please refer to [here](../../quick_start/hardware_requirements.md)Use supported**Hardware and platforms**Conduct operation。
+FISCO BCOS 3.x supports the Pro version of the microservice blockchain architecture. The Pro version of FISCO BCOS includes RPC services, Gateway services, and node services. Each service can be deployed and expanded independently. Please refer to [here](../../quick_start/hardware_requirements.md) and operate on supported **hardware and platforms**.
```eval_rst
.. note::
-  - Pro FISCO BCOS uses the "BcosBuilder / pro" tool for chain building and capacity expansion. For more information about this tool, see 'BcosBuilder <. / pro _ builder.html >' _
-  - FISCO BCOS 3.x builds and manages microservices based on tars. Before building the Pro version of FISCO BCOS, you need to install the tars service. This chapter describes the process of building the docker version of tars service. For more information about deploying and building tars, please refer to 'Here < https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/installation/README.md>`_
-  - In this chapter, build the tars service based on Docker. Make sure that the system user has the permission to operate docker and that the docker service is started.
+  - The Pro version of FISCO BCOS uses the "BcosBuilder/pro" tool for chain building, expansion, and other related operations. For an introduction to this tool, see `BcosBuilder <./pro_builder.html>`_
+  - FISCO BCOS 3.x builds and manages microservices based on tars. Before building the Pro version of FISCO BCOS, you need to install the tars service. This chapter builds the docker version of the tars service; for more information about deploying tars, see `here <https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/installation/README.md>`_
+  - This chapter builds the tars service based on Docker. Make sure that the system user has permission to operate docker and that the docker service is started
```
## 1. Installation Dependencies
-Deployment tool 'BcosBuilder' depends on 'python3, curl, docker, docker-compose ', depending on the operating system you are using, use the following command to install the dependency。
+The deployment tool 'BcosBuilder' depends on 'python3, curl, docker, docker-compose'. Depending on your operating system, use the following commands to install the dependencies.
**Install Ubuntu Dependencies(Version not less than Ubuntu18.04)**
@@ -40,8 +40,8 @@ brew install curl docker docker-compose python3 wget
```eval_rst
.. note::
-  - Deployment tool "BcosBuilder" configuration and use please refer to 'here <. / pro _ builder.html >' _
-  - If the network speed of "BcosBuilder" downloaded from github is too slow, try: curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.6.0/BcosBuilder.tgz && tar -xvf BcosBuilder.tgz
+  - For the configuration and usage of the deployment tool "BcosBuilder", please refer to `here <./pro_builder.html>`_
+  - If downloading "BcosBuilder" from github is too slow, please try: curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.6.0/BcosBuilder.tgz && tar -xvf BcosBuilder.tgz
```
```shell
@@ -62,14 +62,14 @@ cd BcosBuilder && pip3 install -r requirements.txt
The Pro version of FISCO BCOS uses the tars service to build and manage microservices.
The tars service mainly includes 'TarsFramework' and 'TarsNode'. For more information about the tars service, please refer to [here](https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/installation/README.md).
-**'BcosBuilder 'provides the configuration of tars docker in' bridge 'and' host 'network modes. We recommend that you use the configuration of tars docker in' bridge 'network mode in the stand-alone version. We recommend that you use the configuration of tars docker in' host 'network mode in the production environment.**。
+**'BcosBuilder' provides tars docker configurations for the 'bridge' and 'host' network modes. We recommend the 'bridge' network mode configuration for the stand-alone version and the 'host' network mode configuration for the production environment.**
- Docker configuration path of the 'bridge' network mode: 'docker/bridge', where 'docker/bridge/linux' is used by linux users and 'docker/bridge/mac' is used by mac users
-- The docker configuration path of the 'host' network mode: 'docker / host / linux'. Currently, only docker configurations for linux systems are provided.
+- The docker configuration path of the 'host' network mode: 'docker/host/linux'. Currently, only docker configurations for linux systems are provided.
### 3.0 Configure Permission Mode
-**Note:**If you do not need permission and the chain version is less than 3.3, you can skip this section.。
+**Note:** If you do not need permission control and the chain version is below 3.3, you can skip this section.
Set 'auth_check' in the configuration file to true and set the 'init_auth_address' field accordingly. The address specified by the 'init_auth_address' field is generated by the following steps:
@@ -79,7 +79,7 @@ curl -#LO https://raw.githubusercontent.com/FISCO-BCOS/console/master/tools/get_
```eval_rst
..
note::
-  - If you cannot download for a long time due to network problems, try 'curl-#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/tools/get_account.sh && chmod u+x get_account.sh && bash get_account.sh`
+  - If the download fails for a long time due to network problems, please try: curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/tools/get_account.sh && chmod u+x get_account.sh && bash get_account.sh
```
For the SM (national cryptography) version, use the following command to get the script
@@ -90,7 +90,7 @@ curl -#LO https://raw.githubusercontent.com/FISCO-BCOS/console/master/tools/get_
```eval_rst
.. note::
-  - If you cannot download for a long time due to network problems, try 'curl-#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/tools/get_gm_account.sh && chmod u+x get_gm_account.sh && bash get_gm_account.sh`
+  - If the download fails for a long time due to network problems, please try: curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/tools/get_gm_account.sh && chmod u+x get_gm_account.sh && bash get_gm_account.sh
```
After execution, the output is as follows; the value to use for 'init_auth_address' is the 'Account Address' shown below
@@ -111,8 +111,8 @@ init_auth_address="0xd5eff0641c2f69a8deed9510e374aa3e94066a66"
```eval_rst
.. note::
-  - Use docker to install and start the tars service. Make sure that the system user has the docker permission and that the docker service is started.
-  - After installing the tars service, it takes about a minute or so to pass the http://127.0.0.1:3000 / Visit tars Web Management Platform
+  - Here, docker is used to install and start the tars service.
Please make sure that the system user has permission to operate docker and that the docker service is running
+  - After installing the tars service, wait about a minute before visiting the tars web management platform at http://127.0.0.1:3000
```
**Install the tars service: If you are running the tars service for the first time, run the following commands to install and start it.**
@@ -121,18 +121,18 @@ init_auth_address="0xd5eff0641c2f69a8deed9510e374aa3e94066a66"
```shell
# Enter the BcosBuilder directory
cd ~/fisco/BcosBuilder/pro
-# Note: It is necessary to ensure that the docker service is started.
+# Note: make sure that the docker service is started
# Skip this step if you have done it before
-# Create bridge type network tars with network segment 172.25.0.0 / 16-network
+# Create a bridge-type network tars-network with network segment 172.25.0.0/16
python3 build_chain.py create-subnet -n tars-network -s 172.25.0.0/16
-# Note: It is necessary to ensure that the docker service is started.
+# Note: make sure that the docker service is started
# Linux system: Enter the path to the docker configuration file(macOS system can skip this step)
cd ../docker/bridge/linux/framework
# macOS system: Go to docker profile path(Linux system can skip this step)
cd ../docker/bridge/mac/framework
-# Configure the MYSQL password, assuming that the password is set to FISCO (Note: docker-There are two MYSQL _ ROOT _ PASSWORD configuration items in the compose.yml file. The passwords must be the same。)
+# Configure the MYSQL password.
Assume the password is set to FISCO (note: there are two MYSQL_ROOT_PASSWORD configuration items in the docker-compose.yml file, and both must be set to the same password)
# Linux system(macOS system can skip this step)
sed -i 's/MYSQL_ROOT_PASSWORD: ""/MYSQL_ROOT_PASSWORD: "FISCO"/g' docker-compose.yml
# macOS System(Linux system can skip this step)
@@ -148,7 +148,7 @@ docker-compose up -d
# Enter the BcosBuilder directory
cd ~/fisco/BcosBuilder
-# Note: It is necessary to ensure that the docker service is started.
+# Note: make sure that the docker service is started
# Linux system: Enter the path to the docker configuration file(macOS system can skip this step)
cd docker/bridge/linux/framework
# macOS system: Go to docker profile path(Linux system can skip this step)
@@ -164,8 +164,8 @@ After the tars service is installed and started, the local environment can use h
```eval_rst
.. note::
-  - The tars service only needs to be configured once. If it has been configured before, it does not need to be reconfigured.
-  - In this step, you must apply for a token to obtain the operation permission of the blockchain service based on the tars service.
+  - The tars service only needs to be configured once.
If it has been configured before, it does not need to be reconfigured
+  - In this step, you must apply for a token to obtain operation permission for the blockchain services built on the tars service
```
After the initial installation of the tars management service, you need to configure the following:
@@ -173,7 +173,7 @@ After the initial installation of the tars management service, you need to confi
- **Login Configuration**: initialize the login password for the administrator user 'admin'.
- **Apply for a token**: log on to the tars web management platform and apply for a token for service construction and management.
-For the first time using the tars management platform, enter the URL http://127.0.0.1:3000 /, refer to the figure below to initialize the administrator password and enter [admin]-> [user center]-> [token management] apply for token。
+When using the tars management platform for the first time, enter the URL http://127.0.0.1:3000/, refer to the following figure to initialize the administrator password, and go to [admin] -> [User Center] -> [Token Management] to apply for a token.
![](../../../images/tutorial/tars_config.gif)
@@ -181,9 +181,9 @@ For the first time using the tars management platform, enter the URL http://127.
Pro version FISCO BCOS includes the RPC service, the Gateway service, and the blockchain node service BcosNodeService.
-- RPC service: It is responsible for receiving client requests and forwarding the requests to nodes for processing. RPC services can be scaled horizontally, and one RPC service can access multiple blockchain node services.
-- Gateway service: It is responsible for network communication between blockchain nodes across institutions. Gateway services can be scaled horizontally. One Gateway service can access multiple blockchain node services.
-- Blockchain node service 'BcosNodeService': Provides blockchain-related services, including consensus, execution, and transaction blockchain.
The node service accesses RPC services and Gateway services to obtain network communication functions.。
+- RPC service: responsible for receiving client requests and forwarding them to nodes for processing. The RPC service can be scaled horizontally, and one RPC service can access multiple blockchain node services
+- Gateway service: responsible for network communication between blockchain nodes across institutions. The Gateway service can be scaled horizontally, and one Gateway service can access multiple blockchain node services
+- Blockchain node service 'BcosNodeService': provides blockchain-related services, including consensus, execution, and putting transactions on the chain. The node service accesses the RPC and Gateway services to obtain network communication functions.
For the overall architecture design of the Pro version of FISCO BCOS, please refer to [here](../../design/architecture.md).
@@ -193,9 +193,9 @@ This chapter introduces the Pro version FISCO BCOS deployment process by taking
```eval_rst
.. note::
-  - If you do not apply for a token, refer to [3.2 Configuring Tars Service] to apply for a token.
-  - If you forget to access the token of the tars service, you can use the [admin] of the tars web management platform.-> [user center]-> [token management] to obtain the token list
-  - Before deploying the Pro version of the blockchain node, please make sure that your tars service is in the startup state. For installation / startup and configuration of the tars service, please refer to Section 3.2
+  - If you have not applied for a token, refer to [3.2 Configuring Tars Service] to apply for one
+  - If you forget the token for accessing the tars service, you can obtain the token list from the tars web management platform via [admin] -> [User Center] -> [Token Management]
+  - Before deploying the Pro version blockchain node, please make sure that your tars service is running.
Please refer to Section 3.2 for installing, starting, and configuring the tars service
```
### 4.1 Download Binary
@@ -204,16 +204,16 @@ Before building the Pro version of FISCO BCOS, you need to download the binary p
```eval_rst
.. note::
-  - You can use the python3 build _ chain.py-h "View deployment script usage
-  - The binary is downloaded to the "binary" directory by default
-  - If downloading the binary is slow, try: ``python3 build_chain.py download_binary -t cdn``
+  - You can view the deployment script usage with "python3 build_chain.py -h"
+  - The binary is downloaded to the "binary" directory by default
+  - If downloading the binary is slow, please try: ``python3 build_chain.py download_binary -t cdn``
```
```shell
# Enter the operation directory
cd ~/fisco/BcosBuilder/pro
-# Run the build _ chain.py script to download the binary. The binary package is downloaded to the binary directory by default.
+# Run the build_chain.py script to download the binary. The binary package is downloaded to the binary directory by default
python3 build_chain.py download_binary
```
@@ -293,7 +293,7 @@ generated/rpc/chain0
├── 172.25.0.3
│   ├── agencyABcosRpcService # RPC Service Directory for Institution A
│   │   ├── config.ini.tmp # Configuration file for RPC service of institution A
-│   │   ├── sdk # The SDK certificate directory. The SDK client can copy certificates from this directory to connect to the RPC service.
+│   │   ├── sdk # The SDK certificate directory. The SDK client can copy certificates from this directory to connect to the RPC service
│   │   │   ├── ca.crt
│   │   │   ├── cert.cnf
│   │   │   ├── sdk.crt
@@ -305,7 +305,7 @@ generated/rpc/chain0
│   │   └── ssl.key
│   └── agencyBBcosRpcService # RPC Service Configuration Directory for Institution B
│   ├── config.ini.tmp # Configuration file for RPC service of institution B
-│   ├── sdk # The SDK certificate directory.
The SDK client copies the certificate from this directory to connect to the RPC service.
+│   ├── sdk # The SDK certificate directory. The SDK client copies the certificate from this directory to connect to the RPC service
│   │   ├── ca.crt
│   │   ├── cert.cnf
│   │   ├── sdk.crt
@@ -315,7 +315,7 @@ generated/rpc/chain0
│   ├── cert.cnf
│   ├── ssl.crt
│   └── ssl.key
-└── ca # The CA certificate directory, which mainly includes the CA certificate and the CA private key. Keep the CA certificate and the CA private key properly.
+└── ca # The CA certificate directory, which mainly includes the CA certificate and the CA private key. Keep the CA certificate and the CA private key safe
├── ca.crt
├── ca.key
├── ca.srl
@@ -328,14 +328,14 @@ After the RPC service is started successfully, you can view the service lists' a
```eval_rst
.. note::
-  - If you forget to access the token of the tars service, you can use the [admin] of the tars web management platform.-> [user center]-> [token management] to obtain the token list
-  - Before deploying the Pro version of the blockchain node, please make sure that your tars service is in the startup state. For installation / startup and configuration of the tars service, please refer to Section 3.2
-  - **Keep the RPC service CA certificate and CA private key generated during service deployment for SDK certificate application, RPC service expansion, and other operations.**
+  - If you forget the token for accessing the tars service, you can obtain the token list from the tars web management platform via [admin] -> [User Center] -> [Token Management]
+  - Before deploying the Pro version blockchain node, please make sure that your tars service is running.
Please refer to Section 3.2 for installing, starting, and configuring the tars service
+  - **Keep the RPC service CA certificate and CA private key generated during service deployment for SDK certificate application, RPC service expansion, and other operations**
```
### 4.3 Deploying Gateway Services
-After the RPC service is deployed, you need to deploy the Gateway service to establish network connections between organizations.。Run the following command in the BcosBuilder directory to deploy and start the gateway service of the two organizations. The corresponding gateway service names are 'agencyABcosGatewayService' and 'agencyBBcosGatewayService', the ip address is' 172.25.0.3 ', and the occupied ports are' 30300 'and' 30301 'respectively(Before performing this operation, please make sure that the '30300' and '30301' ports of the machine are not occupied)。
+After the RPC service is deployed, you need to deploy the Gateway service to establish network connections between organizations. Run the following command in the BcosBuilder directory to deploy and start the gateway services of the two organizations. The corresponding gateway service names are 'agencyABcosGatewayService' and 'agencyBBcosGatewayService', the IP address is '172.25.0.3', and the occupied ports are '30300' and '30301' respectively (before performing this operation, please make sure that ports '30300' and '30301' on the machine are not occupied).
```shell
# Enter the operation directory
@@ -410,7 +410,7 @@ generated/gateway/chain0
│   ├── cert.cnf
│   ├── ssl.crt
│   └── ssl.key
-└── ca # Configure the root certificate of the Gateway service. Save the root certificate and the root certificate private key.
+└── ca # The root certificate directory of the Gateway service. Keep the root certificate and its private key safe
├── ca.crt
├── ca.key
├── ca.srl
@@ -420,8 +420,8 @@ generated/gateway/chain0
```eval_rst
..
note::
  - This step is performed on the basis of step 4.2
-  - Before deploying the Pro version of the blockchain node, please make sure that your tars service is in the startup state. For installation / startup and configuration of the tars service, please refer to Section 3.2
-  - **Keep the RPC service CA certificate and CA private key generated during service deployment for operations such as gateway service expansion.**
+  - Before deploying the Pro version blockchain node, please make sure that your tars service is running. Please refer to Section 3.2 for installing, starting, and configuring the tars service
+  - **Keep the RPC service CA certificate and CA private key generated during service deployment for operations such as gateway service expansion**
```
After the Gateway service is successfully started, you can view the service lists 'agencyABcosGatewayService' and 'agencyBBcosGatewayService' on the tars web management platform, and each service is in the 'active' state:
@@ -429,7 +429,7 @@ After the Gateway service is successfully started, you can view the service list
### 4.4 Deploying Blockchain Node Services
-After the RPC service and the Gateway service are deployed, you can deploy the blockchain node service.。Run the following command in the BcosBuilder directory to deploy and start the blockchain service of two institutions and two nodes. The corresponding service names are 'group0node00BcosNodeService' and 'group0node10BcosNodeService', and the chain ID is' chain0 'and the group ID is' group0'。
+After the RPC service and the Gateway service are deployed, you can deploy the blockchain node service. Run the following command in the BcosBuilder directory to deploy and start the blockchain services of two institutions and two nodes.
The corresponding service names are 'group0node00BcosNodeService' and 'group0node10BcosNodeService', the chain ID is 'chain0', and the group ID is 'group0'.
```shell
# Enter the operation directory
@@ -512,8 +512,8 @@ generated/chain0
```eval_rst
.. note::
-  - It is recommended to deploy the blockchain node service after the RPC and Gateway services are deployed.
-  - Before deploying the Pro version of the blockchain node, please make sure that your tars service is in the startup state. For installation / startup and configuration of the tars service, please refer to Section 3.2
+  - It is recommended to deploy the blockchain node service after deploying the RPC and Gateway services
+  - Before deploying the Pro version blockchain node, please make sure that your tars service is running. Please refer to Section 3.2 for installing, starting, and configuring the tars service
```
After the blockchain node service is successfully started, you can view the service lists 'agencyAgroup0node0BcosNodeService' and 'agencyBgroup0node0BcosNodeService' on the tars web management platform, and each service is in the 'active' status:
@@ -522,7 +522,7 @@ After the blockchain node service is successfully started, you can view the serv
### 4.5 Deploy the blockchain node monitoring service
-After the RPC service, Gateway service, and node service are deployed, you can deploy the blockchain node monitoring service.。Run the following command in the BcosBuilder / pro directory to deploy and start the blockchain node monitoring service。
+After the RPC service, Gateway service, and node service are deployed, you can deploy the blockchain node monitoring service. Run the following command in the BcosBuilder/pro directory to deploy and start the blockchain node monitoring service.
```shell
# Enter the operation directory
@@ -579,7 +579,7 @@ app_log/
```eval_rst
..
note::
-    - It is recommended to deploy the blockchain node monitoring service after deploying RPC and Gateway and node services
+    - It is recommended to deploy the blockchain node monitoring service after deploying the RPC, Gateway, and node services
```
After the blockchain node service is successfully started, log in to grafana. The grafana UI management page listens on port 3001 by default, the login URL is http://ip:3001/grafana, and the default account name and password are admin/admin. After logging in, import the Dashboard ([github source code](https://github.com/FISCO-BCOS/FISCO-BCOS/blob/master/tools/template/Dashboard.json)) and configure the prometheus source (http://ip:9090/) to view the real-time display of each metric.
@@ -587,12 +587,12 @@ After the blockchain node service is successfully started, log in to grafana. Th
## 5. Configure and use the console
-The console is applicable to both the Pro version and the Air version of the FISCO BCOS blockchain, and the experience is completely consistent。After the Pro version blockchain experience environment is built, you can configure and use the console to send transactions to the Pro version blockchain.。
+The console is applicable to both the Pro version and the Air version of the FISCO BCOS blockchain, and the experience is completely consistent. After the Pro version blockchain experience environment is built, you can configure and use the console to send transactions to the Pro version blockchain.
### 5.1 Installation Dependencies
```eval_rst
.. note::
-  - For console configuration methods and commands, please refer to 'here <.. /..
/ operation _ and _ maintenance / console / console _ config.html >' _
+  - For console configuration methods and commands, please refer to `here <../../operation_and_maintenance/console/console_config.html>`_
```
Before using the console, you need to install the java environment:
@@ -614,7 +614,7 @@ cd ~/fisco && curl -LO https://github.com/FISCO-BCOS/console/releases/download/v
```
```eval_rst
.. note::
-  - If you cannot download for a long time due to network problems, try 'cd ~ / fisco & & curl-#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh`
+  - If the download fails for a long time due to network problems, please try: cd ~/fisco && curl -#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh
```
**Step 2: Configure the Console**
@@ -628,10 +628,10 @@ If the RPC service does not use the default port, replace 20200 in the file with
cp -n console/conf/config-example.toml console/conf/config.toml
-- Configure Console Certificates
+- Configure console certificates
```shell
-# The command find.-name sdk Find all SDK certificate paths
+# All SDK certificate paths can be found with the command: find . -name sdk
cp -r BcosBuilder/pro/generated/rpc/chain0/agencyBBcosRpcService/172.25.0.3/sdk/* console/conf
```
Event: {}
@@ -798,7 +798,7 @@ Event: {}
```eval_rst
..
note::
-    - It is recommended to deploy the blockchain node monitoring service after deploying RPC and Gateway and node services
+    - It is recommended to deploy the blockchain node monitoring service after deploying the RPC, Gateway, and node services
     - This step is optional
```
@@ -857,4 +857,4 @@ app_log/
```
-After the blockchain node service is successfully started, you can view the data of each metric on the graphna and prometheus pages.。
+After the blockchain node service is successfully started, you can view the data of each metric on the grafana and prometheus pages.

diff --git a/3.x/en/docs/tutorial/pro/installation_without_tars.md b/3.x/en/docs/tutorial/pro/installation_without_tars.md
index 1108e1c68..403d2b7ad 100644
--- a/3.x/en/docs/tutorial/pro/installation_without_tars.md
+++ b/3.x/en/docs/tutorial/pro/installation_without_tars.md
@@ -4,11 +4,11 @@ Tags: "Pro version of the blockchain network" "" deployment "" does not rely on
------------
-Pro version of FISCO BCOS 3.x can be built without relying on tars web console。This document takes the example of deploying a blockchain service with two institutions and two nodes on a single machine to introduce the process of building and deploying FISCO BCOS in the Pro version without relying on the tars web console.。
+The Pro version of FISCO BCOS 3.x can be built without relying on the tars web console. This document uses the example of deploying a blockchain service with two institutions and two nodes on a single machine to introduce the process of building and deploying the Pro version of FISCO BCOS without relying on the tars web console.
```eval_rst
.. note::
-  - Pro version does not rely on the tars web console to build FISCO BCOS "" BcosBuilder / pro "" tool for chain building and expansion and other related operations, please refer to the introduction of this tool 'BcosBuilder <.
/ pro _ builder.html >' _
+  - When building the Pro version of FISCO BCOS without the tars web console, the "BcosBuilder/pro" tool is used for chain building, expansion, and other related operations. For an introduction to this tool, see `BcosBuilder <./pro_builder.html>`_
```
**Note:**
@@ -40,8 +40,8 @@ brew install curl python3 wget
```eval_rst
.. note::
-  - Deployment tool "BcosBuilder" configuration and use please refer to 'here <. / pro _ builder.html >' _
-  - If the network speed of "BcosBuilder" downloaded from github is too slow, try: curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.6.0/BcosBuilder.tgz && tar -xvf BcosBuilder.tgz
+  - For the configuration and usage of the deployment tool "BcosBuilder", please refer to `here <./pro_builder.html>`_
+  - If downloading "BcosBuilder" from github is too slow, please try: curl -#LO https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/FISCO-BCOS/FISCO-BCOS/releases/v3.6.0/BcosBuilder.tgz && tar -xvf BcosBuilder.tgz
```
```shell
@@ -62,9 +62,9 @@ cd BcosBuilder && pip3 install -r requirements.txt
Pro version FISCO BCOS includes the RPC service, the Gateway service, and the blockchain node service BcosNodeService.
-- RPC service: It is responsible for receiving client requests and forwarding the requests to nodes for processing. RPC services can be scaled horizontally, and one RPC service can access multiple blockchain node services.
-- Gateway service: It is responsible for network communication between blockchain nodes across institutions. Gateway services can be scaled horizontally. One Gateway service can access multiple blockchain node services.
-- Blockchain node service 'BcosNodeService': Provides blockchain-related services, including consensus, execution, and transaction blockchain.
The node service accesses RPC services and Gateway services to obtain network communication functions.。
+- RPC service: responsible for receiving client requests and forwarding them to nodes for processing. The RPC service can be scaled horizontally, and one RPC service can access multiple blockchain node services
+- Gateway service: responsible for network communication between blockchain nodes across institutions. The Gateway service is horizontally scalable, and one Gateway service can access multiple blockchain node services
+- Blockchain node service 'BcosNodeService': provides blockchain-related services, including consensus, execution, and putting transactions on the chain. The node service accesses the RPC and Gateway services to obtain network communication functions.
For the overall architecture design of Pro version FISCO BCOS, please refer to [here](../../design/architecture.md)。
@@ -78,17 +78,17 @@
Before building the Pro version of FISCO BCOS, you need to download the binary p
```eval_rst
.. note::
-   - You can use the python3 build _ chain.py-h "View script usage
-   - You can use the python3 build _ chain.py build-h "View how to use the build installation package
-   - Use the "python3 build _ chain.py download _ binary" command to download an executable binary file. The binary file is downloaded to the "binary" directory by default.
-   - If downloading the binary is slow, try: ``python3 build_chain.py download_binary -t cdn``
+   - You can view the script usage through "python3 build_chain.py -h"
+   - You can view the usage of the build installation package through "python3 build_chain.py build -h"
+   - Use the "python3 build_chain.py download_binary" command to download the executable binary file.
The binary file is downloaded to the "binary" directory by default
+   - If downloading the binary is slow, please try: ``python3 build_chain.py download_binary -t cdn``
```
```shell
# Enter the operation directory
cd ~/fisco/BcosBuilder/pro
-# Run the build _ chain.py script to download the binary. The binary package is downloaded to the binary directory by default.
+# Run the build_chain.py script to download the binary. The binary package is downloaded to the binary directory by default
python3 build_chain.py download_binary
```
@@ -124,16 +124,16 @@ Parameters:
### 3.3 Build installation package
-In the 'BcosBuilder' directory, run the following command to build installation packages for two node services, two RPC services, and two gateway services. The IP addresses are all '127.0.0.1'.:
+In the 'BcosBuilder' directory, run the following command to build installation packages for two node services, two RPC services, and two gateway services. The IP addresses are all '127.0.0.1':
- RPC Service: '20200 'and' 20201'
- Gateway Service: '30300 'and' 30301'
-- tars port: `40401` ~ `40407`、`40411` ~ `40417`
+- tars port: `40401` ~ `40407`, `40411` ~ `40417`
-**注意:** When building an environment that does not rely on the tars page management console, because there is no tars page management console, the tars module in each microservice listening port and connection information needs to use the configuration file management, you can refer to the tars configuration file description.。
+**Note:** When building an environment that does not rely on the tars web console, the listening ports and connection information of the tars modules in each microservice must be managed via configuration files, since there is no tars console; please refer to the tars configuration file description.
#### 3.3.1 tars configuration file
-Build an environment that does not rely on the tars web management console.
Since there is no tars management background, the tars module of each service needs to use configuration file management to monitor and connect information.。
+When building an environment that does not rely on the tars web console, since there is no tars management console, the listening and connection information of each service's tars module is managed via configuration files.
Each service will have two additional configuration files' tars.conf 'and' tars _ proxy.ini'
@@ -159,7 +159,7 @@ $ ls -a 127.0.0.1/*/conf/tars_proxy.ini
##### 3.3.1.1. tars.conf
-the server - side monitoring information of the internal tars module of the service.
+The server-side listening information of the tars modules inside the service
###### 3.3.1.1.1 RPC Service
@@ -207,7 +207,7 @@ $ cat 127.0.0.1/rpc_20200/conf/tars.conf
```
-For more information about the configuration of tars, see < https://doc.tarsyun.com/#/base/tars-template.md>
+For details of the tars configuration, refer to <https://doc.tarsyun.com/#/base/tars-template.md>
As configuration example:
@@ -237,9 +237,9 @@ There is a tars rpc module inside the RPC service, listening on port '40400'
    listen_port=20201
    thread_count=4
    # rpc tars server listen ip
-    tars_listen_ip="0.0.0.0" # Modify the IP address of the TARS listener.
+    tars_listen_ip="0.0.0.0" # Modify the IP address of the TARS listener
    # rpc tars server listen port
-    tars_listen_port=40410 # modify the port on which tars listens.
+    tars_listen_port=40410 # Modify the port on which tars listens
```
###### 3.3.1.1.2 Gateway Gateway Service
@@ -288,7 +288,7 @@ $ cat 127.0.0.1/gateway_30300/conf/tars.conf
```
-For more information about the configuration of tars, see < https://doc.tarsyun.com/#/base/tars-template.md>
+For details of the tars configuration, refer to <https://doc.tarsyun.com/#/base/tars-template.md>
As configuration example:
@@ -307,7 +307,7 @@ As configuration example:
There is a tars gateway module inside the Gateway service, listening on port '40401'
-注意: Modify the 'tars' listening information inside the service.
You can modify the '[agency.gateway] tars _ listen _ ip' and 'tars _ listen _ port' configurations of 'config.toml' during build.
+Note: To modify the 'tars' listening information inside the service, edit the '[agency.gateway] tars_listen_ip' and 'tars_listen_port' configurations of 'config.toml' during build
```shell
[agency.gateway]
@@ -412,7 +412,7 @@ cat 127.0.0.1/group0_node_40402/conf/tars.conf
As configuration example:
-The node service contains five tars modules.: TxPool, Scheduler, PBFT, Ledger, and Front。
+The node service contains five tars modules: TxPool, Scheduler, PBFT, Ledger, and Front.
- TxPool
@@ -501,8 +501,8 @@ Front module listening port: 40406
注意:
-- Modify the internal 'tars' listening port of the service. You can modify the configurations of 'config.toml', '[[agency.group]] [[agency.group.node]] tars _ listen _ ip' and 'tars _ listen _ port' during build
-- Five consecutive ports must be allocated to the node service. The range is [tars _ listen _ port, tars _ listen _ port+4], please note the port conflict
+- To modify the 'tars' listening port inside the service, edit the 'config.toml' configurations '[[agency.group]] [[agency.group.node]] tars_listen_ip' and 'tars_listen_port' during build
+- The node service needs five consecutive ports, in the range [tars_listen_port, tars_listen_port+4]; please watch out for port conflicts
```shell
[[agency.group]]
@@ -564,8 +564,8 @@ The preceding configuration indicates that if the internal module of the service
**注意:**
-- 'tars _ proxy.ini 'recommendations for services within the organization are consistent
-- A new file is generated for the service to be expanded during expansion. You need to merge the newly generated 'institution name _ tars _ proxy.ini' into the used 'tars _ proxy.ini' file and synchronize it to all services. The service needs to be restarted and take effect.
Otherwise, the newly expanded service cannot be connected to the existing environment.
+- It is recommended that the 'tars_proxy.ini' files of all services within an organization be kept consistent
+- During expansion, a new 'institution name_tars_proxy.ini' file is generated for the expanded service. You need to merge it into the 'tars_proxy.ini' in use, synchronize it to all services, and restart the services for it to take effect; otherwise, the newly expanded service cannot connect to the existing environment
Before performing this operation, please make sure that the above ports of the machine are not occupied。
@@ -709,7 +709,7 @@ generated
│   │   │   ├── ssl.key # ssl certificate private key
│   │   │   ├── ssl.nodeid
│   │   │   ├── tars.conf # For more information about the configuration of the tars.conf server, see Tars.conf
-│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars _ proxy.ini.
+│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars_proxy.ini
│   │   ├── start.sh # Startup Script
│   │   └── stop.sh # Stop Script
│   ├── gateway_30301 # Gateway Service Directory
@@ -723,7 +723,7 @@ generated
│   │   │   ├── ssl.key # ssl certificate private key
│   │   │   ├── ssl.nodeid
│   │   │   ├── tars.conf # For more information about the configuration of the tars.conf server, see Tars.conf
-│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars _ proxy.ini.
+│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars_proxy.ini
│   │   ├── start.sh # Startup Script
│   │   └── stop.sh # Stop Script
│   ├── group0_node_40402 # Node Service Directory
@@ -734,7 +734,7 @@ generated
│   │   │   ├── node.nodeid # node nodeid
│   │   │   ├── node.pem # Private key file, consensus module for message signing, verification
│   │   │   ├── tars.conf # For more information about the configuration of the tars.conf server, see Tars.conf
-│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars _ proxy.ini.
+│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars_proxy.ini
│   │   ├── start.sh # Startup Script
│   │   └── stop.sh # Stop Script
│   ├── group0_node_40412 # Node Service Directory
@@ -745,7 +745,7 @@ generated
│   │   │   ├── node.nodeid # node nodeid
│   │   │   ├── node.pem # Private key file, consensus module for message signing, verification
│   │   │   ├── tars.conf # For more information about the configuration of the tars.conf server, see Tars.conf
-│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars _ proxy.ini.
+│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars_proxy.ini
│   │   ├── start.sh # Startup Script
│   │   └── stop.sh # Stop Script
│   ├── rpc_20200 # RPC Service Directory
@@ -764,7 +764,7 @@ generated
│   │   │   ├── ssl.key # ssl certificate private key
│   │   │   ├── ssl.nodeid
│   │   │   ├── tars.conf # For more information about the configuration of the tars.conf server, see Tars.conf
-│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars _ proxy.ini.
+│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars_proxy.ini
│   │   ├── start.sh # Startup Script
│   │   └── stop.sh # Stop Script
│   ├── rpc_20201 # RPC Service Directory
@@ -783,27 +783,27 @@ generated
│   │   │   ├── ssl.key # ssl certificate private key
│   │   │   ├── ssl.nodeid
│   │   │   ├── tars.conf # For more information about the configuration of the tars.conf server, see Tars.conf
-│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars _ proxy.ini.
+│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars_proxy.ini
│   │   ├── start.sh # Startup Script
│   │   └── stop.sh # Stop Script
│   ├── start_all.sh # Start script to start all service nodes
│   └── stop_all.sh # Stop script to stop all service nodes
├── chain0
│   ├── agencyA_tars_proxy.ini # An additional backup of the agency A tars _ proxy.ini. The tars _ proxy.ini of each service node in the agency needs to be consistent.
After the service node changes such as scaling up or scaling down, all services need to update the configuration file and restart
-│   ├── agencyB_tars_proxy.ini # Agency B extra backup of tars _ proxy.ini, each service node within the agency tars _ proxy.ini needs to be consistent, after the expansion or contraction of service node changes, all services need to update the configuration file, and then restart.
+│   ├── agencyB_tars_proxy.ini # Agency B's additional backup of tars_proxy.ini. The tars_proxy.ini of each service node within the agency needs to be consistent; after service node changes such as expansion or contraction, all services need to update the configuration file and restart
│   └── group0
│   ├── agencyAgroup0node0BcosNodeService # node agencyAgroup0node0BcosNodeService
-│   │   ├── config.genesis # The node creation block file, which is an important file. This file is required for node expansion in a group.
+│   │   ├── config.genesis # The node genesis block file, an important file required for node expansion in a group
│   │   ├── config.ini # The node configuration file, which is the same file as the node service conf / config.ini
│   │   ├── node.nodeid # Node nodeid, used when registering or exiting a node
-│   │   └── node.pem # The node private key file. The consensus module is used for message signing and signature verification.
+│   │   └── node.pem # The node private key file, used by the consensus module for message signing and signature verification
│   ├── agencyBgroup0node0BcosNodeService # node agencyBgroup0node0BcosNodeService
│   │   ├── config.genesis # The node creation block file, which is required when new nodes are expanded in the group
│   │   ├── config.ini # The node configuration file, which is the same file as the node service conf / config.ini
│   │   ├── node.nodeid # Node nodeid, used when registering or exiting a node
-│   │   └── node.pem # The node private key file.
The consensus module is used for message signing and signature verification.
+│   │   └── node.pem # The node private key file, used by the consensus module for message signing and signature verification
│   └── config.genesis
-├── gateway # Gateway service root certificate directory, which is used to issue certificates for new gateway service nodes when they are expanded.
+├── gateway # Gateway service root certificate directory, used to issue certificates for new gateway service nodes when they are expanded
│   └── chain0
│   └── ca
│   ├── ca.crt
@@ -821,7 +821,7 @@ generated
### 3.4 Startup Services
-**注意:** This example is a stand-alone environment, in the actual environment, the service division is on different machines, then you need to first copy the installation package to the corresponding machine, and then start the service.。
+**Note:** This example uses a stand-alone environment. In a real deployment, the services are distributed across different machines, so you need to first copy the installation packages to the corresponding machines and then start the services.
```shell
$ cd generated/127.0.0.1
@@ -844,13 +844,13 @@ Service started successfully。
## 4. Configure and use the console
-The console is applicable to both the Pro version and the Air version of the FISCO BCOS blockchain, and the experience is completely consistent。After the Pro version blockchain experience environment is built, you can configure and use the console to send transactions to the Pro version blockchain.。
+The console is applicable to both the Pro version and the Air version of the FISCO BCOS blockchain, and the experience is completely consistent. After the Pro version blockchain experience environment is built, you can configure and use the console to send transactions to the Pro version blockchain.
### 4.1 Installation Dependencies
```eval_rst
.. note::
-   - For console configuration methods and commands, please refer to 'here <.. /..
/ develop / console / console _ config.html >' _
+   - For console configuration methods and commands, please refer to `here <../../develop/console/console_config.html>`_
```
Before using the console, you need to install the java environment:
@@ -873,7 +873,7 @@ cd ~/fisco && curl -LO https://github.com/FISCO-BCOS/console/releases/download/v
```eval_rst
.. note::
-   - If you cannot download for a long time due to network problems, try 'cd ~ / fisco & & curl-#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh && bash download_console.sh`
+   - If you cannot download for a long time due to network problems, please try ``cd ~/fisco && curl -#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh && bash download_console.sh``
```
**Step 2: Configure the Console**
@@ -887,10 +887,10 @@ If the RPC service does not use the default port, replace 20200 in the file with
cp -n console/conf/config-example.toml console/conf/config.toml
```
-- Configure Console Certificates
+- Configure console certificates
```shell
-# The command find.-name sdk Find all SDK certificate paths
+# All SDK certificate paths can be found with the command: find . -name sdk
cp BcosBuilder/pro/generated/127.0.0.1/rpc_20200/conf/sdk/* console/conf/
```
@@ -996,7 +996,7 @@ contract HelloWorld {
**Step 2: Deploying HelloWorld Contracts**
-To facilitate the user's quick experience, the HelloWorld contract is built into the console and located in the console directory 'contracts / consolidation / HelloWorld.sol'.
+To facilitate the user's quick experience, the HelloWorld contract is built into the console, located in the console directory 'contracts/solidity/HelloWorld.sol'
```shell
# Enter the following command in the console to return the contract address if the deployment is successful
@@ -1057,11 +1057,11 @@ Event: {}
After successfully building a blockchain network that does not rely on the tars console, this section describes how to scale up the rpc, gateway, and node。
### 5.1 Scaling the RPC / Gateway service (without relying on the tars console)
-Take the RPC / Gateway service of the Pro version FISCO BCOS alliance chain as an example to help users master the service expansion of the Pro version FISCO BCOS blockchain without relying on the tars console.。
+This section takes the RPC / Gateway service of the Pro version FISCO BCOS consortium chain as an example to help users master service expansion of the Pro version FISCO BCOS blockchain without relying on the tars console.
#### 5.1.1. Modify the expansion configuration
-For more information about the capacity expansion configuration of the blockchain node service, see the capacity expansion template 'conf / config' of 'BcosBuilder'.-node-rpc-example.toml ', the specific configuration steps are as follows:
+For more information about how to configure blockchain node service expansion, see the expansion template 'conf/config-node-rpc-example.toml' of 'BcosBuilder'.
The specific configuration steps are as follows:
```shell
# Enter the operation directory
@@ -1109,13 +1109,13 @@ enable_storage_security = false
Modify configuration files as needed:
-- RPC Root Certificate Path
+- RPC root certificate path
```shell
rpc_ca_cert_path="generated/rpc/chain0/ca/"
```
-- Deploy Server Modifications
+- Deploy server modifications
```shell
deploy_ip = "127.0.0.1"
@@ -1128,7 +1128,7 @@ rpc_ca_cert_path="generated/rpc/chain0/ca/"
listen_port=20202
```
-- modify tars listening information
+- Modify tars listening information
```shell
tars_listen_ip="0.0.0.0"
@@ -1189,7 +1189,7 @@ expand/rpc
│   │   │   ├── ssl.key # ssl certificate private key
│   │   │   ├── ssl.nodeid
│   │   │   ├── tars.conf # For more information about the configuration of the tars.conf server, see Tars.conf
-│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars _ proxy.ini.
+│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars_proxy.ini
│   │   ├── start.sh # Startup Script
│   │   └── stop.sh # Stop Script
│   ├── start_all.sh
@@ -1200,7 +1200,7 @@ expand/rpc
#### 5.1.3. Merging tars _ proxy.ini Files
-Use'merge-config 'command to merge tars _ proxy files
+Merge the tars_proxy files using the 'merge-config' command
```shell
python3 build_chain.py merge-config --help
@@ -1219,7 +1219,7 @@ options:
[Required] specify the output dir
```
--t/--type : The type of the merged configuration file. Currently, only the 'tars' type is supported.
+-t/--type : The type of the merged configuration file. Currently, only the 'tars' type is supported
-c/--config : Configuration list, list of configuration files to be merged
-O/--output : Output Directory
@@ -1323,7 +1323,7 @@ try to start rpc_20202
```eval_rst
.. note::
-   - For console configuration methods and commands, please refer to 'here <.. /..
/ operation _ and _ maintenance / console / console _ config.html >' _
+   - For console configuration methods and commands, please refer to `here <../../operation_and_maintenance/console/console_config.html>`_
```
Before using the console, you need to install the java environment:
@@ -1346,7 +1346,7 @@ cd ~/fisco && curl -LO https://github.com/FISCO-BCOS/console/releases/download/v
```eval_rst
.. note::
-   - If you cannot download for a long time due to network problems, try 'cd ~ / fisco & & curl-#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh && bash download_console.sh`
+   - If you cannot download for a long time due to network problems, please try ``cd ~/fisco && curl -#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh && bash download_console.sh``
```
**Step 2: Configure the Console**
@@ -1364,10 +1364,10 @@ cp -n console/conf/config-example.toml console/conf/config.toml
peers=["127.0.0.1:20202"]
```
-- Configure Console Certificates
+- Configure console certificates
```shell
-# The command find.-name sdk Find all SDK certificate paths
+# All SDK certificate paths can be found with the command: find . -name sdk
cp ~/fisco/BcosBuilder/pro/expand/rpc/127.0.0.1/rpc_20202/conf/sdk/* console/conf
```
@@ -1402,7 +1402,7 @@ Return values:()
#### 5.2.1. Modify the expansion configuration
-For more information about the capacity expansion configuration of the blockchain node service, see the capacity expansion template 'conf / config' of 'BcosBuilder'.-node-expand-example.toml ', the specific configuration steps are as follows:
+For more information about how to configure blockchain node service expansion, see the expansion template 'conf/config-node-expand-example.toml' of 'BcosBuilder'.
The specific configuration steps are as follows:
```shell
# Enter the operation directory
@@ -1489,19 +1489,19 @@ Modify configuration files as needed:
node_name = "node2"
```
-- Deploy Server Modifications
+- Deploy server modifications
```shell
deploy_ip = "127.0.0.1"
```
-- Set Genesis Block File Path
+- Set genesis block file path
```shell
genesis_config_path = "./generated/chain0/group0/config.genesis"
```
-- modify tars listening information
+- Modify tars listening information
```shell
tars_listen_ip="0.0.0.0"
@@ -1564,7 +1564,7 @@ expand/node/
│   │   │   ├── node.nodeid # node nodeid
│   │   │   ├── node.pem # Private key file, consensus module for message signing, verification
│   │   │   ├── tars.conf # For more information about the configuration of the tars.conf server, see Tars.conf
-│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars _ proxy.ini.
+│   │   │   └── tars_proxy.ini # For details about the configuration of the tars client connection, see the configuration description of tars_proxy.ini
│   │   ├── start.sh # Startup Script
│   │   └── stop.sh # Stop Script
│   ├── start_all.sh # Start script to start all service nodes
@@ -1574,14 +1574,14 @@ expand/node/
└── group0
└── agencyAgroup0node2BcosNodeService
├── config.genesis # Blockchain Genesis Block File
- ├── config.ini # the configuration file of the expansion node.
+ ├── config.ini # The configuration file of the expansion node
├── node.nodeid # nodeid of the scaling node
└── node.pem # Private key file of the scaling node
```
#### 5.2.3. Merging tars _ proxy.ini Files
-Use'merge-config 'command to merge tars _ proxy files
+Merge the tars_proxy files using the 'merge-config' command
```shell
python3 build_chain.py merge-config --help
@@ -1600,7 +1600,7 @@ options:
[Required] specify the output dir
```
--t/--type : The type of the merged configuration file.
Currently, only the 'tars' type is supported.
+-t/--type : The type of the merged configuration file. Currently, only the 'tars' type is supported
-c/--config : Configuration list, list of configuration files to be merged
-O/--output : Output Directory
@@ -1701,7 +1701,7 @@ try to start group0_node_40422
```eval_rst
.. note::
-    When scaling a new node, we do not recommend that you add the node as a consensus node. You can add the node as a consensus node only when the block height of the scaling node is the same as the highest block height of the existing node in the chain.。
+    When scaling a new node, we do not recommend that you add the node as a consensus node immediately. You can add the node as a consensus node only when the block height of the scaling node is the same as the highest block height of the existing nodes in the chain.
```
**Step 1: Obtain the NodeID of the scaling node**
diff --git a/3.x/en/docs/tutorial/pro/pro_builder.md b/3.x/en/docs/tutorial/pro/pro_builder.md
index b88a085fe..5fe10393d 100644
--- a/3.x/en/docs/tutorial/pro/pro_builder.md
+++ b/3.x/en/docs/tutorial/pro/pro_builder.md
@@ -9,17 +9,17 @@
Tags: "Pro version of the blockchain network" "Deployment tool"
The deployment tool BcosBuilder aims to enable users to deploy and use the FISCO BCOS Pro / max version of the blockchain as quickly as possible. Its functions include: deploying / starting / shutting down / updating / scaling RPC services, Gateway services, and blockchain node services。
```
-FISCO BCOS provides the 'BcosBuilder' tool to help users quickly deploy, start, stop, update and scale the FISCO BCOS Pro version of the blockchain consortium chain, which can be downloaded directly from the release tags of FISCO BCOS.。
+FISCO BCOS provides the 'BcosBuilder' tool to help users quickly deploy, start, stop, update and scale the Pro version FISCO BCOS consortium chain; the tool can be downloaded directly from the FISCO BCOS release tags.
## 1.
Configuration Introduction
-'BcosBuilder 'provides some configuration templates in the' pro / conf 'directory to help users quickly complete the deployment and expansion of the Pro version of the blockchain.。This chapter introduces the configuration items of 'BcosBuilder' in detail from three perspectives: tars service configuration items, blockchain deployment configuration items, and blockchain expansion configuration items.。
+'BcosBuilder' provides configuration templates in the 'pro/conf' directory to help users quickly complete the deployment and expansion of the Pro version of the blockchain. This chapter introduces the configuration items of 'BcosBuilder' in detail from three perspectives: tars service configuration items, blockchain deployment configuration items, and blockchain expansion configuration items.
### 1.1 tars service configuration item
-- `[tars].tars_url`: The URL for accessing the tars web console. The default value is' http '.://127.0.0.1:3000`。
-- `[tars].tars_token`: Access the token of the tars service through the [admin] of the tars web console.-> [user center]-> [token management] for token application and query。
-- `[tars].tars_pkg_dir`: Path to place the Pro version binary package. If this configuration item is configured, the FISCO BCOS Pro version binary is obtained from the specified directory by default for service deployment, expansion, and other operations.。
+- `[tars].tars_url`: The URL for accessing the tars web console. The default value is `http://127.0.0.1:3000`.
+- `[tars].tars_token`: The token for accessing the tars service; you can apply for and query it via [admin] -> [user center] -> [token management] on the tars web console.
+- `[tars].tars_pkg_dir`: Path to place the Pro version binary package.
If this configuration item is configured, the FISCO BCOS Pro version binary is obtained from the specified directory by default for service deployment, expansion, and other operations.
The following is an example of a configuration item for the tars service:
@@ -32,17 +32,17 @@ tars_pkg_dir = ""
### 1.2 Blockchain Service Deployment Configuration
-Configuration items related to blockchain service deployment mainly include chain configuration items, RPC / Gateway service configuration items, and blockchain node service configuration items. The configuration template is located in the 'conf / config' of 'BcosBuilder / pro'-deploy-example.toml 'under the path。
+Configuration items related to blockchain service deployment mainly include chain configuration items, RPC / Gateway service configuration items, and blockchain node service configuration items. The configuration template is located at the 'conf/config-deploy-example.toml' path of 'BcosBuilder/pro'.
**Chain Configuration Item**
Chain configuration items are located in the configuration '[chain]' and mainly include:
-- `[chain].chain_id`: The ID of the chain to which the blockchain service belongs. The default value is' chain0 '.**Cannot include all special characters except letters and numbers**;
-- `[chain].rpc_sm_ssl`: The type of SSL connection used between the RPC service and the SDK client. If the value is set to 'false', RSA encryption is used.;If it is set to 'true', it indicates that the state-secret SSL connection is used. The default value is' false '.;
-- `[chain].gateway_sm_ssl`: SSL connection type between Gateway services. Set to 'false' to use RSA encryption;Set to 'true' to indicate that a state-secret SSL connection is used. The default value is' false '.;
-- `[chain].rpc_ca_cert_path`: The path of the CA certificate of the RPC service.
If a complete CA certificate and CA private key are available in this path, the 'BcosBuilder' deployment tool generates the RPC service SSL connection certificate based on the CA certificate in this path.;Otherwise, the 'BcosBuilder' deployment tool generates a CA certificate and issues an SSL connection certificate for the RPC service based on the generated CA certificate;
-- `[chain].gateway_ca_cert_path`: The CA certificate path of the Gateway service. If there is a complete CA certificate and CA private key in this path, the 'BcosBuilder' deployment tool generates the Gateway service SSL connection certificate based on the CA certificate in this path.;Otherwise, the 'BcosBuilder' deployment tool generates a CA certificate and issues an SSL connection certificate for the Gateway service based on the generated CA certificate;
+- `[chain].chain_id`: The ID of the chain to which the blockchain service belongs. The default value is 'chain0'; **it cannot contain any special characters other than letters and numbers**;
+- `[chain].rpc_sm_ssl`: The type of SSL connection used between the RPC service and the SDK client. If set to 'false', RSA encryption is used; if set to 'true', an SM (Chinese national cryptography) SSL connection is used. The default value is 'false';
+- `[chain].gateway_sm_ssl`: The SSL connection type between Gateway services. Set to 'false' to use RSA encryption; set to 'true' to use an SM (Chinese national cryptography) SSL connection. The default value is 'false';
+- `[chain].rpc_ca_cert_path`: The path of the CA certificate of the RPC service.
If a complete CA certificate and CA private key are available in this path, the 'BcosBuilder' deployment tool generates the RPC service SSL connection certificate based on the CA certificate in this path;Otherwise, the 'BcosBuilder' deployment tool generates a CA certificate and issues an SSL connection certificate for the RPC service based on the generated CA certificate; +- `[chain].gateway_ca_cert_path`: The CA certificate path of the Gateway service. If there is a complete CA certificate and CA private key in this path, the 'BcosBuilder' deployment tool generates the Gateway service SSL connection certificate based on the CA certificate in this path;Otherwise, the 'BcosBuilder' deployment tool generates a CA certificate and issues an SSL connection certificate for the Gateway service based on the generated CA certificate; The chain ID is' chain0 '. The configuration items for RSA encrypted connections between RPC and SDK and between Gateway services are as follows: @@ -63,14 +63,14 @@ gateway_sm_ssl=false ```eval_rst .. note:: - - When deploying an RPC service to multiple machines, make sure that the tarsnode service is installed on these machines. For details about how to deploy a tarsnode, see < https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/installation/node.md>`_ + -When deploying an RPC service to multiple machines, make sure that the tarsnode service is installed on these machines. For tarsnode deployment, please refer to 'here`_ ``` RPC service configuration items are located in '[[agency]]. [agency.rpc]'. An organization can deploy an RPC service, and a chain can contain multiple organizations. The main configuration items include: -- `[[agency]].[agency.rpc].deploy_ip`: The deployment IP address of the RPC service. If multiple IP addresses are configured, the RPC service is deployed on multiple machines to achieve the goal of parallel expansion.。 +- `[[agency]].[agency.rpc].deploy_ip`: The deployment IP address of the RPC service. 
If multiple IP addresses are configured, the RPC service is deployed on multiple machines to achieve the goal of parallel expansion。 - `[[agency]].[agency.rpc].listen_ip`: The listening IP address of the RPC service. The default value is' 0.0.0.0'。 -- `[[agency]].[agency.rpc].listen_port`: The listening port of the RPC service. The default value is 20200.。 +- `[[agency]].[agency.rpc].listen_port`: The listening port of the RPC service. The default value is 20200。 - `[[agency]].[agency.rpc].thread_count`: Number of worker threads in RPC service process, default is' 4'。 @@ -86,7 +86,7 @@ enable_storage_security = false # cipher_data_key = [agency.rpc] - # You can deploy multiple IP addresses. You must ensure that the tarsnode service is installed on the machine corresponding to each IP address. + # You can deploy multiple IP addresses. You must ensure that the tarsnode service is installed on the machine corresponding to each IP address deploy_ip=["172.25.0.3"] # RPC Service Listening IP listen_ip="0.0.0.0" @@ -100,7 +100,7 @@ enable_storage_security = false The configuration items of the Gateway service are located in '[[agency]]. [agency.gateway]'. An organization can deploy one Gateway service and a chain can deploy multiple Gateway services. The main configuration items include: -- `[[agency]].[agency.gateway].deploy_ip`: The deployment IP address of the Gateway service. If multiple IP addresses are configured, the Gateway service is deployed on multiple machines to achieve the goal of parallel expansion.。 +- `[[agency]].[agency.gateway].deploy_ip`: The deployment IP address of the Gateway service. If multiple IP addresses are configured, the Gateway service is deployed on multiple machines to achieve the goal of parallel expansion。 - `[[agency]].[agency.gateway].listen_ip`: The listening IP address of the Gateway service. The default value is' 0.0.0.0'。 - `[[agency]].[agency.gateway].listen_port`: The listening port of the Gateway service. 
The default value is' 30300'。 - `[[agency]].[agency.gateway].peers`: Connection information for all Gateway services。 @@ -137,14 +137,14 @@ Each blockchain node service in the blockchain of FISCO BCOS Pro belongs to a gr The group configuration also includes configurations related to the Genesis block: -- `[[group]].leader_period`: The number of blocks that each leader can package consecutively. The default value is 5.。 +- `[[group]].leader_period`: The number of blocks that each leader can package consecutively. The default value is 5。 - `[[group]].block_tx_count_limit`: The maximum number of transactions that can be included in each block, which defaults to 1000。 -- `[[group]].consensus_type`: Consensus algorithm type. Currently, only the 'pbft' consensus algorithm is supported.。 -- `[[group]].gas_limit`: The maximum amount of gas consumed during the run of each transaction. The default value is 300000000.。 -- `[[group]].vm_type`: The type of virtual machine running on a blockchain node. Currently, two types are supported: 'evm' and 'wasm'. A group can run only one type of virtual machine. Some nodes cannot run EVM virtual machines and some nodes cannot run WASM virtual machines.。 +- `[[group]].consensus_type`: Consensus algorithm type. Currently, only the 'pbft' consensus algorithm is supported。 +- `[[group]].gas_limit`: The maximum amount of gas consumed during the run of each transaction. The default value is 300000000。 +- `[[group]].vm_type`: The type of virtual machine running on a blockchain node. Currently, two types are supported: 'evm' and 'wasm'. A group can run only one type of virtual machine. 
Some nodes cannot run EVM virtual machines and some nodes cannot run WASM virtual machines.
- `[[group]].auth_check`: To enable the permission governance mode, please refer to the link [Permission Governance User Guide](../../develop/committee_usage.md).
- `[[group]].init_auth_address`: When permission governance is enabled, specify the account address of the initialization governance committee. For permission usage documents, please refer to the link: [Permission Governance Usage Guide](../../develop/committee_usage.md).
-- `[[group]].compatibility_version`: The data-compatible version number. The default value is 3.0.0. You can upgrade the data-compatible version when running the 'setSystemConfigByKey' command in the console.。
+- `[[group]].compatibility_version`: The data-compatible version number. The default value is 3.0.0. You can upgrade the data-compatible version by running the `setSystemConfigByKey` command in the console.
```ini [[group]] @@ -176,11 +176,11 @@ compatibility_version="3.0.0" ```
**Blockchain Node Service Configuration Item: Deployment Configuration**
The blockchain node service deployment configuration item is located in `[[agency]].[[agency.group]].[[agency.group.node]]`, as follows:
-- `node_name`: The name of the node service, which is not configured in the service deployment scenario.**If this option is configured, make sure that the service names of different node services are not duplicated**。
+- `node_name`: The name of the node service, which does not need to be configured in the service deployment scenario. **If this option is configured, make sure that the service names of different node services are not duplicated**.
- `deploy_ip`: node service deployment IP
-- `key_page_size`: The granularity of the KeyPage. The default value is 10KB.;
-- `enable_storage_security`: Whether to enable disk placement encryption. The default value is false. 
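Collecting the node deployment options above, a single node service entry might look like the following sketch. This is illustrative only: the organization name, node name, IP address, and the numeric KeyPage value are hypothetical; the authoritative template is `conf/config-deploy-example.toml`.

```toml
[[agency]]
# organization name (hypothetical)
name = "agencyA"

[[agency.group]]
group_id = "group0"

[[agency.group.node]]
# optional; if set, must be unique across node services
node_name = "node0"
# machine that hosts the node service (tarsnode must be installed there)
deploy_ip = "172.25.0.3"
# KeyPage granularity, 10KB by default (value here assumed to be in bytes)
key_page_size = 10240
# data-at-rest (disk) encryption, disabled by default
enable_storage_security = false
```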
-- `key_center_url`: If disk encryption is enabled, you can configure the key-url of manager
+- `key_page_size`: The granularity of the KeyPage. The default value is 10KB;
+- `enable_storage_security`: Whether to enable data-at-rest (disk) encryption. The default value is `false`;
+- `key_center_url`: If disk encryption is enabled, the URL of the key-manager can be configured here
- `cipher_data_key`: If disk encryption is enabled, configure the data encryption key here
- `monitor_listen_port`: The listening port of the monitoring service, which is `3902` by default
- `monitor_log_path`: Path of the blockchain node logs to be monitored
@@ -206,13 +206,13 @@ name = "agencyA"
### 1.3 Blockchain service expansion configuration
-'BcosBuilder 'provides blockchain node service expansion and RPC / Gateway service expansion functions. The configuration template for blockchain node service expansion can be found in' conf / config-node-expand-example.toml 'path, RPC / Gateway service expansion configuration template in' conf / config-service-expand-example.toml 'under the path。
+`BcosBuilder` provides blockchain node service expansion and RPC / Gateway service expansion functions. The configuration template for blockchain node service expansion is in the `conf/config-node-expand-example.toml` path, and the configuration template for RPC / Gateway service expansion is in the `conf/config-service-expand-example.toml` path.
**RPC Service Expansion Configuration**
In the FISCO BCOS Pro version blockchain, an RPC service can contain multiple RPC service nodes. BcosBuilder provides the RPC service scaling function, which can scale out RPC service nodes based on existing RPC services. The configuration options are mainly located in the configurations of '[chain]' and '[[agency]]. 
[agency.rpc]', mainly including: -- `[chain].chain_id`: The ID of the chain to which the expanded RPC service belongs.。 +- `[chain].chain_id`: The ID of the chain to which the expanded RPC service belongs。 - `[chain].rpc_sm_ssl`: Whether the expanded RPC service and SDK client use the state-secret SSL connection。 - `[chain].rpc_ca_cert_path`: Specify the path to the CA certificate and CA private key of the expanded RPC service。 - `[[agency]].[agency.rpc].deploy_ip`: Deployment IP of Scaled RPC Service。 @@ -253,13 +253,13 @@ enable_storage_security = false Similar to the RPC service, the scaling configuration options of the Gateway service are mainly located in the configurations of '[chain]' and '[[agency]]. [agency.gateway]', mainly including: -- `[chain].chain_id`: The ID of the chain to which the expanded Gateway service belongs.。 +- `[chain].chain_id`: The ID of the chain to which the expanded Gateway service belongs。 - `[chain].gateway_sm_ssl`: Whether the state-secret SSL connection is used between the expanded Gateway service and the SDK client。 - `[chain].gateway_ca_cert_path`: Specify the path of the CA certificate and the CA private key of the extended Gateway service。 - `[[agency]].[agency.gateway].deploy_ip`: Deployment IP address of the scaled-out Gateway service。 - `[[agency]].[agency.gateway].listen_ip`: The listening IP address of the Gateway service node. The default value is' 0.0.0.0'。 - `[[agency]].[agency.gateway].listen_port`: The listening port of the Gateway service. The default value is' 30300'。 -- `[[agency]].[agency.gateway].peers`: The connection information of the Gateway service. You must configure the connection IP address and connection port information of all Gateway service nodes.。 +- `[[agency]].[agency.gateway].peers`: The connection information of the Gateway service. 
You must configure the connection IP address and connection port information of all Gateway service nodes.
A sample configuration for scaling the Gateway service `agencyABcosGatewayService` to `172.25.0.5` is as follows:
@@ -292,19 +292,19 @@ enable_storage_security = false
**Blockchain node expansion configuration**
-'BcosBuilder / pro 'provides the blockchain node expansion function, which can expand new blockchain node services for specified groups. The blockchain node expansion configuration template is located in' conf / config-node-expand-example.toml 'path, mainly including**chain configuration**和**Scale-out deployment configuration**, as follows:
+`BcosBuilder/pro` provides the blockchain node expansion function to expand new blockchain node services for a specified group. The blockchain node expansion configuration template is located in the `conf/config-node-expand-example.toml` path, mainly including **chain configuration** and **scale-out deployment configuration**, as follows:
-- `[chain].chain_id`: The ID of the chain to which the expanded blockchain node belongs.。
+- `[chain].chain_id`: The ID of the chain to which the expanded blockchain node belongs.
- `[[group]].group_id`: Group ID of the expansion node.
- `[[group]].genesis_config_path`: Path to the Genesis block configuration of the scaling node.
-- `[[group]].sm_crypto`: Whether the scaling node is a state secret node. The default value is' false '.。
+- `[[group]].sm_crypto`: Whether the scaling node is an SM (Chinese national cryptography) node. The default value is `false`.
- `[[agency]].[[agency.group]].group_id`: Group ID of the scaling node.
-- `[[agency]].[[agency.group.node]].node_name`: The service name of the expanded blockchain node.**Cannot conflict with the service name of an existing blockchain node**。
+- `[[agency]].[[agency.group.node]].node_name`: The service name of the expanded blockchain node. **It cannot conflict with the service name of an existing blockchain node**.
- `[[agency]].[[agency.group.node]].deploy_ip`: Deployment IP address of the expanded blockchain node service.
- `[[agency]].[[agency.group.node]].enable_storage_security`: Whether disk encryption is enabled on the expansion node.
-- `[[agency]].[[agency.group.node]].key_center_url`: key-The url of the manager. You need to configure the url when you enable disk encryption.。
-- `[[agency]].[[agency.group.node]].cipher_data_key`: Data disk encryption key. You need to configure the data disk encryption key in the disk encryption scenario.。
+- `[[agency]].[[agency.group.node]].key_center_url`: The URL of the key-manager; you need to configure it when disk encryption is enabled.
+- `[[agency]].[[agency.group.node]].cipher_data_key`: Data disk encryption key. You need to configure it in the disk encryption scenario.
The following is an example of how to scale up blockchain nodes named `node1` and `node2` to `172.25.0.5` for the `group0` group of institution `agencyA`:
@@ -338,7 +338,7 @@ name = "agencyA"
## 2. Introduction to Use
-You can use 'python3 build _ chain.py-h 'View how to use' BcosBuilder / pro':
+Use `python3 build_chain.py -h` to see how to use `BcosBuilder/pro`:
```shell
----------- help for subcommand 'download_binary' -----------
@@ -387,18 +387,18 @@ optional arguments:
### 2.1 **`download_binary` command**
-Binary download command, currently includes'-t`(`--type`), `-v`(`--version`)and '-p`(`--path`)Three options, all of which are optional. 
By default, download the latest version of binary from FISCO BCOS github release tags to the 'binary' folder. Each option has the following functions:
+Binary download command. It currently includes three options, `-t` (`--type`), `-v` (`--version`) and `-p` (`--path`), all of which are optional. By default, the latest binary is downloaded from the FISCO BCOS GitHub release tags to the `binary` folder. Each option has the following functions:
-- `-t`, `--type`: Specifies the download type. Currently, 'git' and 'cdn' are supported. By default, you can download the latest version of binary from FISCO BCOS github release tags.**If the access to git is slow when building and deploying the Pro version of the blockchain, you can use the cdn option to speed up the download.**。
-- `-v`, `--version`: Specifies the binary version to download. By default, the latest binary is downloaded.**FISCO BCOS 3.x default binary minimum version is not less than 3.0.0-rc1**。
-- `-p`, `--path`: Specifies the binary download path, which is downloaded to the binary folder by default.。
+- `-t`, `--type`: Specifies the download type. Currently, `git` and `cdn` are supported. By default, the latest binary is downloaded from the FISCO BCOS GitHub release tags. **If access to git is slow when building and deploying the Pro version of the blockchain, you can use the cdn option to speed up the download**.
+- `-v`, `--version`: Specifies the binary version to download. By default, the latest binary is downloaded. **The minimum binary version for FISCO BCOS 3.x is 3.0.0-rc1**.
+- `-p`, `--path`: Specifies the binary download path; the binary is downloaded to the `binary` folder by default.
-### 2.2 **`-o, --op'Options**
+### 2.2 **`-o, --op` option**
-Used to specify operation commands, currently supports' gen-config, upload, deploy, upgrade, undeploy, expand, start, stop`:
+Used to specify operation commands. 
Currently, `gen-config`, `upload`, `deploy`, `upgrade`, `undeploy`, `expand`, `start`, and `stop` are supported:
- `gen-config`: Generate the configuration file.
-- `upload`: In a scenario where a service configuration already exists, upload and publish the service, general and 'gen-config 'used with, first through' gen-config 'generates the configuration file, and then uploads and publishes the service configuration through the' upload 'command。
+- `upload`: In the scenario where the service configuration already exists, upload and publish the service. It is usually used together with `gen-config`: first use `gen-config` to generate the configuration file, and then use the `upload` command to upload and publish the service configuration.
- `deploy`: Deploy a service, including two steps: service configuration generation and service release.
- `undeploy`: Take a service offline.
- `upgrade`: Upgrade the binary of a service.
@@ -406,16 +406,16 @@ Used to specify operation commands, currently supports' gen-config, upload, depl
- `start`: Start the service.
- `stop`: Stop the service.
-### 2.3 **`-t, --type 'option**
+### 2.3 **`-t, --type` option**
-Used to specify the service type of the operation when using '-o`(`--op`)option, you must set this option, which currently includes' rpc, gateway, node':
+Used to specify the service type of the operation. When using the `-o` (`--op`) option, you must set this option, which currently includes `rpc`, `gateway`, `node`:
- **rpc**: Specifies that the service type of the operation is an RPC service.
- **gateway**: Specifies that the service type of the operation is a Gateway service.
- **node**: Specifies that the service type of the operation is a blockchain node service.
-### 2.4 **`-c, --config 'Options [**Optional**]:**
+### 2.4 **`-c, --config` option [**Optional**]:**
Used to specify the configuration file path. The default value is `config.toml`. 
BcosBuilder provides four types of configuration templates:
@@ -424,31 +424,31 @@ Used to specify the configuration file path. The default value is' config.toml '
- `conf/config-service-expand-example.toml`: RPC, Gateway service expansion configuration template.
- `conf/config-node-expand-example.toml`: Blockchain node management service configuration template.
-### 2.5 **`create-subnet 'command**
+### 2.5 **`create-subnet` command**
```eval_rst
.. note::
 - To simplify O & M deployment, we recommend that you do not use a bridged network in a production environment. We recommend that you use the host network mode.。
 + To simplify O&M deployment, we recommend that you do not use a bridged network in a production environment; use the host network mode instead.
```
- `-n/--name`: Specifies the name of the bridged network, for example `tars-network`.
- `-s/--subnet`: Specifies the segment of the bridged network, for example `172.25.0.0/16`.
-## 3. tars docker-Compose Configuration Introduction
+## 3. tars docker-compose configuration introduction
-FISCO BCOS Pro version blockchain based on [tars](https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/installation/README.md)To simplify the deployment of tars, 'BcosBuilder' provides a docker for tars.-compose Configuration
+The FISCO BCOS Pro version blockchain is based on [tars](https://doc.tarsyun.com/#/markdown/TarsCloud/TarsDocs/installation/README.md). To simplify the deployment of tars, `BcosBuilder` provides the docker-compose configuration for tars.
-### 3.1 Tars Docker for Bridging Networking-compose Configuration
+### 3.1 Tars docker-compose configuration for bridged networking
```eval_rst
.. 
note:: - **recommend the experience environment to build tars by using bridge networking**。 - - Due to the slow IO speed of macOS docker across file systems, it is not recommended to mount volumes in the macOS experience environment.。 - - Using docker-Before compose starts the container, make sure to bridge the network "tars-network "has been created, you can use the" create "of" BcosBuilder "-subnet "command to create a bridged network。 - - The bridge network can only ensure the network connection between the local container networks. If cross-machine network communication is required, it is recommended to use the "host" network mode or "vxlan" network connection between two machines.。 + - Due to the slow speed of macOS docker cross-file system io, it is not recommended to mount the volume in the macOS experience environment。 + - Before starting the container using docker-compose, make sure that the bridge network "tars-network" has been created. You can use the "create-subnet" command of "BcosBuilder" to create a bridge network。 + - The bridge network can only ensure the connectivity between local container networks. If cross-machine network communication is required, it is recommended to use "host" network mode or "vxlan" network connection between two machines。 ``` -**In bridge mode, the docker of the tarsFramework-compose is configured as follows**: +**In bridge mode, the docker-compose configuration of tarsFramework is as follows**: ```yml version: "3" @@ -498,7 +498,7 @@ services: - tars-mysql ``` -**Docker for tarsnode in bridge mode-compose is configured as follows**: +**In bridge mode, the docker-compose configuration of tarsnode is as follows**: ```yml version: "3" @@ -526,16 +526,16 @@ services: - /etc/localtime:/etc/localtime ``` -### 3.2 Tars docker for host networking-compose Configuration +### 3.2 Tars docker-compose configuration for host networking ```eval_rst .. 
note:: - **it is recommended that the production environment use hosts networking to build tars**。 - - Due to the slow IO speed of macOS docker across file systems, it is not recommended to mount volumes in the macOS experience environment.。 + - Due to the slow speed of macOS docker cross-file system io, it is not recommended to mount the volume in the macOS experience environment。 - In actual use, replace "172.25.0.2, 172.25.0.3" in the following configuration example with the physical machine IP address。 ``` -**In host mode, the docker of the tarsFramework-compose is configured as follows**: +**In host mode, the docker-compose configuration of tarsFramework is as follows**: ```yml version: "3" @@ -569,7 +569,7 @@ services: - tars-mysql ``` -**In host mode, the docker of the tarsnode-compose is configured as follows**: +**In host mode, the docker-compose configuration of tarsnode is as follows**: ```yml version: "3" diff --git a/3.x/en/docs/tutorial/promax_expand_air.md b/3.x/en/docs/tutorial/promax_expand_air.md index a757e0270..d9af2a6b9 100644 --- a/3.x/en/docs/tutorial/promax_expand_air.md +++ b/3.x/en/docs/tutorial/promax_expand_air.md @@ -1,6 +1,6 @@ # pro chain or max chain expansion air node -The build _ chian.sh script provides the function of expanding the air node of the pro chain / max chain. This chapter expands a new air blockchain node on the basis of building the pro chain / max chain to help users master the expansion steps of the pro version chain / max version chain expansion air node.。 +The build _ chian.sh script provides the function of expanding the air node of the pro chain / max chain. 
This chapter expands a new air blockchain node on the basis of an already-built pro chain / max chain, to help users master the steps for expanding an air node on a pro / max chain.
## pro expansion air node
@@ -84,7 +84,7 @@ When scaling the Air version of the blockchain, you need to prepare a certificate
- **Node configuration file 'config.ini'**: Can be copied from an existing node directory.
- **Node Genesis block configuration file 'config.genesis'**: Can be copied from an existing node directory.
- **Node connection configuration 'nodes.json'**: Configure the IP and port information of all node connections; it can be copied from an existing node directory, adding the IP and port of the new node.
-- **fisco-bcos binary**
+- **fisco-bcos binary**
```shell
# Create a directory to store the expansion configuration
@@ -108,11 +108,11 @@ $ cp generate/172.31.184.227/gateway_31300/conf/nodes.json config/
```ini
# The command is as follows
-# Call build _ chain.sh to expand the node. The new node is expanded to the nodes / 127.0.0.1 / node4 directory.
+# Call build_chain.sh to expand the node. The new node is expanded to the nodes/127.0.0.1/node4 directory
# -c: Specify the paths of config.ini, config.genesis, and nodes.json
# -d: Specify the path to the CA certificate and private key
# -o: Specify the directory where the expansion node configuration is located
-# -e: Specify the capacity expansion node fisco-bcos binary path
+# -e: Specify the fisco-bcos binary path of the scale-out node
bash build_chain.sh -C expand -c config -d config/ca -o expandAirNode/node0 -e fisco-bcos
```
@@ -120,7 +120,7 @@ bash build_chain.sh -C expand -c config -d config/ca -o expandAirNode/node0 -e f
### 4. 
Modify related configuration
```shell
-# replication fisco-bcos binary to expansion node
+# Copy the fisco-bcos binary to the scale-out node
cp ./fisco-bcos ./expandAirNode
# Copy the tars_proxy.ini file to the configuration file directory of the expansion node
@@ -336,11 +336,11 @@ $ cp generate/172.30.35.60/gateway_31300/conf/nodes.json config/
```ini
# The command is as follows
-# Call build _ chain.sh to expand the node. The new node is expanded to the nodes / 127.0.0.1 / node4 directory.
+# Call build_chain.sh to expand the node. The new node is expanded to the nodes/127.0.0.1/node4 directory
# -c: Specify the paths of config.ini, config.genesis, and nodes.json
# -d: Specify the path to the CA certificate and private key
# -o: Specify the directory where the expansion node configuration is located
-# -e: Specify the capacity expansion node fisco-bcos binary path
+# -e: Specify the fisco-bcos binary path of the scale-out node
bash build_chain.sh -C expand -c config -d config/ca -o expandAirNode/node0 -e fisco-bcos
```
@@ -348,7 +348,7 @@ bash build_chain.sh -C expand -c config -d config/ca -o expandAirNode/node0 -e f
### 4. Modify related configuration
```
-# replication fisco-bcos binary to expansion node
+# Copy the fisco-bcos binary to the scale-out node
cp ./fisco-bcos ./expandAirNode
# Copy the tars_proxy.ini file to the configuration file directory of the expansion node
diff --git a/3.x/en/docs/tutorial/support_os.md b/3.x/en/docs/tutorial/support_os.md
index 69e6ef4a9..dfd40cd3c 100644
--- a/3.x/en/docs/tutorial/support_os.md
+++ b/3.x/en/docs/tutorial/support_os.md
@@ -2,7 +2,7 @@ tags: "domestic support"
----
-FISCO BCOS is fully adapted to domestic servers and supports domestic platforms such as Kunpeng and Galaxy Kirin V10.。The following describes the steps to compile and deploy the run chain on the Galaxy Kirin V10ARM platform FISCO BCOS source code. 
FISCO BCOS is fully adapted to domestic servers and supports domestic platforms such as Kunpeng and Galaxy Kirin V10. The following describes the steps to compile the FISCO BCOS source code and to deploy and run a chain on the Galaxy Kirin V10 ARM platform.
### Installation of basic software and source code compilation
#### 1. Update Software
@@ -15,10 +15,10 @@ sudo yum update
sudo yum install -y wget curl tar
sudo yum install -y build-essential clang flex bison patch glibc-static glibc-devel libzstd-devel libmpc cpp
-# Check the gcc version. If the gcc version is lower than 10, install a gcc version higher than 10.
+# Check the gcc version. If the gcc version is lower than 10, install gcc 10 or higher
gcc -v
-# Check whether the cmake version is greater than or equal to 3.14. If not, install the cmake version that meets the requirements.
+# Check whether the cmake version is greater than or equal to 3.14. If not, install a cmake version that meets the requirement
cmake --version
```
#### 3. Pull code
@@ -52,10 +52,10 @@ export X_VCPKG_ASSET_SOURCES=x-azurl,http://106.15.181.5/
# Compile
cmake3 -DBUILD_STATIC=ON .. || cat *.log
-# If vcpkg fails during dependency compilation, check the error log according to the error message.
+# If vcpkg fails during dependency compilation, check the error log according to the error message
# For network reasons, configure the vcpkg agent as prompted above
-# High performance machines can be added-j4 Compile with 4-core acceleration
+# On high-performance machines, add -j4 to compile with 4 cores for faster builds
make -j4
```
![](../../images/tutorial/img_2.png)
@@ -69,7 +69,7 @@ For detailed compilation, please refer to [node source code compilation](./compi
curl -#LO https://github.com/FISCO-BCOS/FISCO-BCOS/releases/download/v3.6.0/build_chain.sh && chmod u+x build_chain.sh
```
-#### 2. Use the compiled binary deployment chain.
+#### 2. 
Use the compiled binary to deploy a chain
```shell
bash build_chain.sh -l 127.0.0.1:4 -p 30300,20200 -e ../FISCO-BCOS/build/fisco-bcos-air/fisco-bcos
[INFO] Generate ca cert successfully!
@@ -121,7 +121,7 @@ cd ~/fisco && curl -LO https://github.com/FISCO-BCOS/console/releases/download/v
```eval_rst
.. note::
 - - If you cannot download for a long time due to network problems, please try cd ~ / fisco & & curl-#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh
 + - If the download takes too long due to network problems, please try `cd ~/fisco && curl -#LO https://gitee.com/FISCO-BCOS/console/raw/master/tools/download_console.sh`
```
```shell
diff --git a/3.x/en/index.rst b/3.x/en/index.rst
index 7c008ebb1..86f655159 100644
--- a/3.x/en/index.rst
+++ b/3.x/en/index.rst
@@ -1,4 +1,4 @@
-请根据需求选择FISCO BCOS的版本,并确认周边组件与其版本相匹配。
+Please select the FISCO BCOS version according to your requirements, and confirm that the peripheral components match that version.
.. container:: row
@@ -12,7 +12,7 @@
@@ -21,7 +21,7 @@
@@ -36,27 +36,27 @@
##############################################################
-FISCO BCOS 3.0 技术文档
+FISCO BCOS 3.0 Technical Documentation
##############################################################
-FISCO BCOS(读作/ˈfɪskl bi:ˈkɒz/) 是一个稳定、高效、安全的区块链底层平台,其可用性经广泛应用实践检验。开源社区至今已有5000+企业及机构、400+产业数字化标杆应用,覆盖文化版权、司法服务、政务服务、物联网、金融、智慧社区、房产建筑、社区治理、乡村振兴等领域。
+FISCO BCOS (pronounced /ˈfɪskl bi:ˈkɒz/) is a stable, efficient and secure underlying blockchain platform whose usability has been tested by extensive application practice. To date, the open source community counts 5,000+ enterprises and institutions and 400+ benchmark industry digitalization applications, covering cultural copyright, judicial services, government services, the Internet of Things, finance, smart communities, real estate and construction, community governance, rural revitalization and other fields.
.. image:: images/introduction/applications_new.png
   :align: center
-   :alt: 产业应用
+   :alt: industrial applications
-FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今为止,开源社区汇聚超过10万名成员共建共治,发展成为最大最活跃的国产开源联盟链生态圈之一,其中涌现出诸多对社区建设、代码贡献的优秀社区成员。截止2023年,开源社区共认定63位MVP,这些优秀的贡献者或是将FISCO BCOS技术落地到各领域应用中,助力产业数字化,或是在多渠道布道,将开源社区精神传播到更远的地方。
+The FISCO BCOS open source community is committed to building an open and diverse open source consortium-chain ecosystem. To date, it has gathered more than 100,000 members in joint construction and governance, and has grown into one of the largest and most active domestic open source consortium-chain ecosystems, producing many outstanding members who contribute to community building and code. As of 2023, the community has recognized 63 MVPs; these outstanding contributors either bring FISCO BCOS technology into applications across many fields to help digitize industry, or evangelize through multiple channels to spread the spirit of the open source community further afield.
.. image:: images/community/mvp_review_2023.png
   :align: center
-   :alt: FISCO BCOS 2023年度MVP
+   :alt: FISCO BCOS 2023 MVPs of the Year
.. 
note:: - 本技术文档适用于FISCO BCOS 3.x版本, FISCO BCOS 2.x稳定版技术文档请参考 `FISCO BCOS 2.x技术文档(stable) `_ + This technical documentation applies to FISCO BCOS 3.x. For the stable FISCO BCOS 2.x technical documentation, please refer to `FISCO BCOS 2.x technical documentation (stable) `_ - FISCO BCOS 3.x版本源码位于 `master` 分支,请参考 `这里 `_ - FISCO BCOS 2.x版本源码位于 `master-2.0` 分支,请参考 `这里 `_ + The FISCO BCOS 3.x source code is located in the `master` branch; please refer to `here `_ + The FISCO BCOS 2.x source code is located in the `master-2.0` branch; please refer to `here `_ .. container:: row @@ -67,14 +67,14 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. raw:: html
-             快速开始 +             Quick Start

- - `了解FISCO BCOS区块链 <./docs/introduction/introduction.html>`_ - - `FISCO BCOS 3.X新特性 <./docs/introduction/change_log/index.html>`_ - - `搭建第一个区块链网络 <./docs/quick_start/air_installation.html>`_ - - `开发第一个Solidity区块链应用 <./docs/quick_start/solidity_application.html>`_ - - `开发第一个webankblockchain-liquid区块链应用 <./docs/quick_start/wbc_liquid_application.html>`_ + - `Understand the FISCO BCOS blockchain <./docs/introduction/introduction.html>`_ + - `FISCO BCOS 3.x new features <./docs/introduction/change_log/index.html>`_ + - `Build your first blockchain network <./docs/quick_start/air_installation.html>`_ + - `Develop your first Solidity blockchain application <./docs/quick_start/solidity_application.html>`_ + - `Develop your first webankblockchain-liquid blockchain application <./docs/quick_start/wbc_liquid_application.html>`_ .. container:: card-holder @@ -83,12 +83,12 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. raw:: html
-             开发教程 +             Development Tutorial

- - `Air版本区块链网络搭建 <./docs/tutorial/air/index.html>`_ - - `Pro版本区块链网络搭建 <./docs/tutorial/pro/index.html>`_ - - `轻节点搭建 <./docs/tutorial/lightnode.html>`_ + - `Build an Air-version blockchain network <./docs/tutorial/air/index.html>`_ + - `Build a Pro-version blockchain network <./docs/tutorial/pro/index.html>`_ + - `Build a light node <./docs/tutorial/lightnode.html>`_ - `FISCO BCOS Java SDK <./docs/sdk/java_sdk/index.html>`_ .. container:: card-holder-bigger @@ -98,7 +98,7 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. raw:: html
-             使用工具 +             Using Tools

.. container:: tools @@ -108,9 +108,9 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今

- 开发部署工具:区块链网络快速部署工具 + Development and Deployment Tool: Blockchain Network Rapid Deployment Tool

-

开发部署工具是提供给开发者快速搭建FISCO BCOS区块链网络的脚本工具。

+

The development and deployment tool is a script tool that helps developers quickly set up a FISCO BCOS blockchain network.

@@ -124,9 +124,9 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今

- 命令行交互控制台:节点查询与管理工具 + Command-Line Interactive Console: Node Query and Management Tool

-

命令行交互控制台是提供给开发者使用的节点查询与管理的工具。控制台拥有丰富的命令,包括查询区块链状态、管理区块链节点、部署并调用合约等。此外,控制台提供一个合约编译工具,用户可以方便快捷的将Solidity合约文件编译为Java合约文件。

+

The command-line interactive console is a tool for developers to query and manage nodes. The console offers a rich set of commands, including querying blockchain status, managing blockchain nodes, and deploying and invoking contracts. In addition, the console provides a contract compilation tool that allows users to quickly and easily compile Solidity contract files into Java contract files.

@@ -141,9 +141,9 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今
- 图形化的区块链管理工具 + Graphical blockchain management tool

-

WeBankBlockchain WeBASE(WeBank Blockchain Application Software Extension, 简称WBC-WeBASE) 是一套管理FISCO-BCOS联盟链的工具集。WBC-WeBASE提供了图形化的管理界面,屏蔽了区块链底层的复杂度,降低区块链使用的门槛,大幅提高区块链应用的开发效率,包含节点前置、节点管理、交易链路,数据导出,Web管理平台等子系统。

+

WeBankBlockchain WeBASE (WeBank Blockchain Application Software Extension, WBC-WeBASE) is a set of tools for managing FISCO-BCOS consortium chains. WBC-WeBASE provides a graphical management interface that shields the complexity of the underlying blockchain, lowers the barrier to adopting blockchain, and greatly improves the development efficiency of blockchain applications. It includes subsystems such as the node front service, node management, transaction links, data export, and a web management platform.



@@ -157,9 +157,9 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. raw:: html
- 数据治理通用组件:释放数据价值 + Data Governance Common Components: Unlocking the Value of Data

-

数据治理通用组件的全名是“WeBankBlockchain-Data数据治理通用组件”,它是一套稳定、高效、安全的区块链数据治理组件解决方案,可无缝适配FISCO BCOS区块链底层平台。它由数据导出组件(Data-Export)、数据仓库组件(Data-Stash)、数据对账组件(Data-Reconcile)这三款相互独立、可插拔、可灵活组装的组件所组成,开箱即用,灵活便捷,易于二次开发。

+

The full name of the data governance common components is "WeBankBlockchain-Data data governance common components." It is a stable, efficient and secure blockchain data governance component solution that seamlessly adapts to the underlying FISCO BCOS blockchain platform. It consists of three mutually independent, pluggable and flexibly composable components: the data export component (Data-Export), the data warehouse component (Data-Stash), and the data reconciliation component (Data-Reconcile). They work out of the box, are flexible and convenient, and are easy to build on for secondary development.

@@ -174,9 +174,9 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今
- 区块链多方协作治理组件:开启治理实践新起点 + Blockchain Multi-Party Collaborative Governance Components: A New Starting Point for Governance Practice

-

WeBankBlockchain-Governance区块链多方协作治理组件是一套轻量解耦、简洁易用、通用场景和一站式的区块链治理组件解决方案。 首批开源的有账户治理组件(Governance-Account)、权限治理组件(Governance-Auth)、 私钥管理组件(Governance-Key)和证书管理组件(Governance-Cert)。上述组件都提供了可部署的智能合约代码、易于使用的SDK和可参考的落地实践Demo等交付物。

+

The WeBankBlockchain-Governance blockchain multi-party collaborative governance components are a lightweight, decoupled, simple, easy-to-use, general-purpose and one-stop blockchain governance component solution. The first open-sourced components are the account governance component (Governance-Account), the permission governance component (Governance-Auth), the private key management component (Governance-Key), and the certificate management component (Governance-Cert). All of the above components provide deliverables such as deployable smart contract code, easy-to-use SDKs, and reference implementation demos.

@@ -189,9 +189,9 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. raw:: html
- 区块链应用开发组件:助力低代码开发 + Blockchain application development components: enabling low-code development

-

WeBankBlockchain-SmartDev应用开发组件包含了一套开放、轻量的开发组件集,覆盖智能合约的开发、调试、应用开发等环节,包括智能合约库(SmartDev-Contract)、智能合约编译插件(SmartDev-SCGP)和应用开发脚手架(SmartDev-Scaffold)。开发者可根据自己的情况自由选择相应的开发工具,提升开发效率。

+

The WeBankBlockchain-SmartDev application development components comprise an open, lightweight set of development components covering smart contract development, debugging and application development, including a smart contract library (SmartDev-Contract), a smart contract compilation plugin (SmartDev-SCGP) and an application development scaffold (SmartDev-Scaffold). Developers can freely choose the corresponding development tools according to their needs to improve development efficiency.

@@ -204,13 +204,13 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. raw:: html
-             系统设计 +             System Design

- - `系统架构 <./docs/design/architecture.html>`_ - - `两阶段并行拜占庭共识 <./docs/design/consensus/consensus.html>`_ - - `合约文件系统BFS <./docs/design/contract_directory.html>`_ - - `更多设计文档 <./docs/design/index.html>`_ + - `System architecture <./docs/design/architecture.html>`_ + - `Two-stage parallel Byzantine consensus <./docs/design/consensus/consensus.html>`_ + - `Contract file system BFS <./docs/design/contract_directory.html>`_ + - `More design documentation <./docs/design/index.html>`_ .. container:: card-holder @@ -219,14 +219,14 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. raw:: html
-             社区资源 +             Community Resources

- - `Github主页 `_ - `贡献代码 `_ - `反馈问题 `_ - `应用案例集 `_ - `微信群 `_ 、`公众号 `_ + - `GitHub home page `_ + - `Contribute code `_ + - `Report issues `_ + - `Application case collection `_ + - `WeChat group `_, `official WeChat account `_ .. container:: card-holder-bigger @@ -236,7 +236,7 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. raw:: html
-             更多开源工具 +             More open source tools

@@ -246,34 +246,34 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. raw:: html - - **FISCO BCOS企业级金融联盟链底层平台**: `[GitHub] `_ `[Gitee] `_ `[文档] `_ - - **区块链中间件平台**:`[GitHub] `_ `[Gitee] `_ `[文档] `_ - - **WeIdentity 基于区块链的实体身份标识及可信数据交换解决方案**: `[GitHub] `_ `[Gitee] `_ `[文档] `_ - - **WeDPR 即时可用,场景式隐私保护高效解决方案套件和服务**:`[GitHub] `_ `[Gitee] `_ `[文档] `_ - - **WeCross 区块链跨链协作平台**: `[GitHub] `_ `[Gitee] `_ `[文档] `_ - - **Truora 可信预言机服务**:`[GitHub] `_ `[Gitee] `_ `[文档] `_ - - **webankblockchain-liquid(简称WBC-Liquid)智能合约编程语言软件**:`[GitHub] `_ `[Gitee] `_ `[文档] `_ - - **WeBankBlockchain-Data 数据治理通用组件**: - - Data-Stash 数据仓库组件: `[GitHub] `_ `[Gitee] `_ `[文档] `_ - - Data-Export 数据导出组件: `[GitHub] `_ `[Gitee] `_ `[文档] `_ - - Data-Reconcile 数据对账组件: `[GitHub] `_ `[Gitee] `_ `[文档] `_ - - **WeBankBlockchain-Governance 多方治理协作组件**: - - Governance-Account 账户治理组件: `[GitHub] `_ `[Gitee] `_ `[文档] `_ - - Governance-Authority 权限治理组件:`[GitHub] `_ `[Gitee] `_ `[文档] `_ - - Governance-Key 私钥管理组件: `[GitHub] `_ `[Gitee] `_ `[文档] `_ - - Governance-Cert 证书管理组件:`[GitHub] `_ `[Gitee] `_ `[文档] `_ - - **WeEvent 基于区块链的分布式事件驱动架构**:`[GitHub] `_ `[Gitee] `_ `[文档] `_ - - **WeBankBlockchain-SmartDev 区块链应用开发工具**: - - SmartDev-Contract 智能合约库组件:`[GitHub] `_ `[Gitee] `_ `[文档] `_ - - SmartDev-SCGP 合约编译插件:`[GitHub] `_ `[Gitee] `_ `[文档] `_ - - SmartDev-Scaffold 应用开发脚手架:`[GitHub] `_ `[Gitee] `_ `[文档] `_ - - **DDCMS分布式数据协作管理解决方案:**:`[GitHub] `_ `[Gitee] `_ `[文档] `_ + - **FISCO BCOS enterprise-grade financial consortium chain underlying platform**: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - **Blockchain middleware platform**: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - **WeIdentity blockchain-based entity identity and trusted data exchange solution**: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - **WeDPR ready-to-use, scenario-oriented suite of efficient privacy protection solutions and services**: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - **WeCross blockchain cross-chain collaboration platform**: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - **Truora trusted oracle service**: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - **webankblockchain-liquid (WBC-Liquid) smart contract programming language software**: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - **WeBankBlockchain-Data data governance common components**: + - Data-Stash data warehouse component: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - Data-Export data export component: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - Data-Reconcile data reconciliation component: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - **WeBankBlockchain-Governance multi-party governance collaboration components**: + - Governance-Account account governance component: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - Governance-Authority permission governance component: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - Governance-Key private key management component: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - Governance-Cert certificate management component: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - **WeEvent blockchain-based distributed event-driven architecture**: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - **WeBankBlockchain-SmartDev blockchain application development tools**: + - SmartDev-Contract smart contract library component: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - SmartDev-SCGP contract compilation plugin: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - SmartDev-Scaffold application development scaffold: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ + - **DDCMS distributed data collaboration management solution**: `[GitHub] `_ `[Gitee] `_ `[Documentation] `_ .. toctree:: :hidden: :maxdepth: 1 - :caption: 平台介绍 + :caption: Platform Introduction docs/introduction/introduction.md docs/introduction/key_feature.md @@ -283,7 +283,7 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 ..
toctree:: :hidden: :maxdepth: 1 - :caption: 快速开始 + :caption: Quick Start docs/quick_start/hardware_requirements.md docs/quick_start/air_installation.md @@ -293,7 +293,7 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. toctree:: :hidden: :maxdepth: 1 - :caption: 合约开发 + :caption: Contract Development docs/contract_develop/solidity_develop.md docs/contract_develop/c++_contract/index.md @@ -305,7 +305,7 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. toctree:: :hidden: :maxdepth: 1 - :caption: SDK教程 + :caption: SDK Tutorials docs/sdk/index.md docs/sdk/java_sdk/index.md @@ -321,7 +321,7 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. toctree:: :hidden: :maxdepth: 1 - :caption: 搭链教程 + :caption: Chain Building Tutorial docs/tutorial/air/index.md docs/tutorial/pro/index.md @@ -334,7 +334,7 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. toctree:: :hidden: :maxdepth: 1 - :caption: 应用开发 + :caption: Application Development docs/develop/index.md docs/develop/api.md @@ -350,7 +350,7 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. toctree:: :hidden: :maxdepth: 1 - :caption: 区块链运维 + :caption: Blockchain Operation and Maintenance docs/operation_and_maintenance/build_chain.md docs/operation_and_maintenance/light_monitor.md @@ -372,7 +372,7 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. toctree:: :hidden: :maxdepth: 1 - :caption: 高阶功能使用 + :caption: Advanced Features docs/advanced_function/safety.md docs/advanced_function/wecross.md @@ -384,7 +384,7 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. toctree:: :hidden: :maxdepth: 1 - :caption: FISCO BCOS设计原理 + :caption: FISCO BCOS Design Principles docs/design/architecture.md docs/design/tx_procedure.md @@ -413,7 +413,7 @@ FISCO BCOS开源社区致力打造开放多元的开源联盟链生态,至今 .. toctree:: :hidden: :maxdepth: 1 - :caption: 社区资源 + :caption: Community Resources docs/community/MVP_list_new.md docs/community/contributor_list_new.md