Mirror of https://github.com/EQEmu/Server.git
* Shared tasks WIP; lots of logging; shared tasks and tasks work internally the same for now; lots to clean up yet
* Update task_manager.cpp
* Add tables
* World message handler
* Zone message handler
* More messaging
* More rearranging
* Task creation work (WIP)
* Tweaks
* Decoupled things, added a shared task manager, moved logic to the manager, created the shared task object, now creating a sense of state on creation and members, zero validation, happy path
* Cleanup unnecessary getter
* More work on shared task persistence and state loading
* Add int64 support into repositories
* More state handling, creation loads all tables
* Wrap up shared task state creation and removal
* Move more lookup operations to preloading (memory). Restore shared task state during world bootup
* Implement shared task updates
* Add members other than just leader in task confirmations
* Update shared_task_manager.cpp
* Hook task cancellation for shared task removal (middleware)
* Remove dynamic_zone_id from SharedTasks model in repositories (for now) since we will likely be one to many with DZ objects
* Get members to show up in the window on creation
* Add opcodes, cleanup
* Add opcode handlers
* Split some methods out, self removal of shared task and updating members
* Implement offline shared task sync
* Style changes
* Send memberlist on initial login; implement remove player from shared task window
* Refactorings, cleanup
* Implement make leader in shared tasks window
* Implement add player, sync shared task state after add
* Add opcodes for remaining clients
* Shared task invite dialogue window implementation and response handling (including validation)
* Logging
* Remove comment
* Some cleanup
* Pass NPC context through shared task request logic
* Remove extra SharedTaskMember fields
* Add message constants
* Remove static
* Only use dz for expedition request: This passes expedition creation parameters through DynamicZone instead of injecting ExpeditionRequest since it can hold creation data now
* Store expedition leader on dz: This shifts to using the leader object that exists in the core dynamic zone object. It will be moved to the dynamic zone table later with other columns that should just be on the dz to make loading easier. Expeditions are probably the only dz type that will use this for window updates and command auth. Other systems on live do fill the window but don't keep it updated
* Store expedition name on dz: This uses the name stored on dz (for window packets) instead of duplicating it. This will be moved completely to the dz table later
* Store uuid on dynamic zone: This lets dynamic zones generate the uuid instead of expeditions. Other dz type systems may want to make use of this. Lockouts should also be moved to dynamic zones at some point in the future so this will be necessary for that
* Move expedition db columns to dz: These columns should just belong to the core dynamic zone. This will simplify loading from the database and in the future a separate expedition table may no longer be necessary.
* Move window packet methods to dz: It makes more sense for these methods to be in the core. This will also allow support for other systems to use the window, though live behavior that updates the window for shared task missions when not in an expedition is likely unintended since it's not updated on changes.
* Store dynamic zone ids on clients: These will now be used for client dynamic zone lookups to remove dependency on any dz type system caches
* Move member management to dz: This moves server messaging for adding and removing members to internal dynamic zone methods. Set default dz member status to Unknown
* Move member status caching to dz: This moves world member status caching into internal dz methods. Zone member updates for created expeditions are now async and sent after world replies with member statuses. Prior to this, two memberlist packets were sent to members in other zones on creation to update statuses. This also fixes a bug with member statuses being wrong for offline raid members in the zone that created an expedition. Note that live kicks offline players out of raids, so this is only to support emu behavior.
* Move member status updates to dz
* Set dz member status on all client dzs: This also renames the zone entry dz update method and moves window update to a dynamic zone method. Eventually expedition components should just be merged with dz and handled as another dz type
* Save instance safe return on characters: Add character_instance_safereturns table and repository. Previously dz safe return only worked for online characters via the dz kicktimer or offline characters with a workaround that moved them when an expedition was deleted. There were various edge cases that would cause characters to be moved to bind instead (succoring after removal, camping before kick timer, removed while offline, bulk kickplayers removal with some offline). This updates a character's instance safereturn every time they enter a zone. If a character enters world in an instance that has expired or that they are no longer part of, they'll be moved to their instance safereturn (if the safereturn data is for the same zone-instance). Bind is still a fallback. This may also be used for non-dz instancing so it's named generically. This removes the expedition MoveMembersToSafeReturn workaround, which deprecates the is_current_member column of dynamic_zone_members; it will be removed in a followup patch.
* Remove is_current_member from dz members: This was only being used in the workaround to move past members to dz safereturns if they were still inside the dz but not online
* Let dz check leader in world: This moves expedition leader processing in world to the dynamic zone. This is a step in phasing out the separate expedition class for things that can run off the dynamic zone core with simple dz type checks. This greatly simplifies checking leader on member and status changes without needing callbacks. Other dz types that may use the dz leader object can just handle it directly on the dz, the same as expeditions
* Let dz handle member expire warnings: This moves expire warning checks to dz. This will make it easier for other dz types to issue expire warnings if needed
* Use separate dynamic zone cache: Dynamic zones are no longer member objects of expeditions and have been placed into their own cache. This was done so other dz types can be cached without relying on their systems. Client and zone dz lookups are now independent of any system. This continues the process of phasing out a separate expedition cache. Eventually expeditions can just be run directly as dynamic zones internally with a few dz type checks. Add dz serialization methods (cereal) for passing server dz creation. Modify #dz list to show cache and database separately. Also adds #dz cache reload. This command will reload expeditions too since they currently hold references to the dz in their own zone cache. Add a dynamic zone processing class to world to process all types and move expedition processing to it
* Move expedition makeleader processing to dz
* Let dz handle expedition deletions: This removes the need for a separate expedition cache in world. This will greatly simplify world dynamic zone caching and processing. Dynamic zones that are expeditions can just handle this directly. Once lockouts and other components are completely moved to dynamic zones the separate expedition cache in zone will also no longer be necessary
* Remove ExpeditionBase class: Since world no longer caches expeditions this will not be necessary
* Fix Windows compile
* Implement task dz creation: Prototype dz creation for shared tasks
* Add and remove shared task members from dz: Also keep leader updated (used in choose zone window)
* Fix client crash on failed shared task
* Fix Linux compile and warning
* Check client nullptr for dz message: This was accidentally removed when expedition makeleader was moved
* Disable dz creation for solo tasks
* Add shared task repository headers to CMakeLists
* Add shared task dynamic zones table
* Add shared task dz database persistence
* Get members from db on shared task dz creation: This fixes a case where removing a member from a shared task dz would fail if the member's name was empty. This could happen if the shared task dz was created while a member was offline. This also changes the dz member removal method to only check id. It might be possible to change all dz member validations to only check ids since names are primarily for window updates, but shared task dz member names need to be non-empty anyway to support possible live-like dz window usage in the future.
* Add character message methods to world: Add simple and eqstr message methods to ClientList. Add shared task manager methods to message all members or leader
* Add SyncClientSharedTaskState and nested sync strategies to cover M3 work
* Fix whitespace
* Implement task request cooldown timer: This implements the task request cooldown (15 seconds) that live uses when a task is accepted. This will also need to be set when shared tasks are offered (likely due to additional group/raid validations)
* Implement shared task selector validation: This implements the validation and filtering that occurs before the task selection window is sent to a client for shared tasks. To keep things live-like, task selectors that contain a shared task will be run through shared task validation and drop non-shared tasks. Live doesn't mix types in task selections and this makes validation simpler. Also note that live sends shared task selectors via a different opcode than solo tasks but that has not been implemented yet
* Add separate shared task select opcodes: Live uses separate opcodes for solo and shared task selection windows
* Convert ActivityType to enum class
* Refactor task selector serialization: This adds serializer methods to task and task objective structs for the task selection windows. This combines the duplicate task selector methods to reduce code duplication and simplify serialization
* Add shared task selector: This sends the shared task selection window using the shared task specific opcode and adds an opcode handler for shared task accepts, which are sent by the client in response to setting the selection window to shared task type.
* Refactor task objective serialization: This adds a serialization method to the task objective struct for serializing objectives in the window list and combines the separate client-based methods to reduce duplicated code.
* Add task level spread and player count columns
* Implement shared task accept validation: This adds a common method for shared task character request queries
* Add task replay and request timer columns
* Add character task timers table
* Use shared task accept time on clients: This overrides client task accept time with the shared task's creation time. This is needed for accurate window task timers and lockout messages, especially for characters added to shared tasks post creation
* Implement task timer lockouts: This implements replay and request task timers for solo and shared tasks
* Add solo and shared task timer validation
* Remove logging of padding array: This gets interpreted as a C string which may not be null terminated
* Implement /kickplayers task: This also fixes current CancelTask behavior for the leader, which was performing kickplayers functionality through the remove task button
* Implement /taskquit command
* Implement shared task invite validation: Remove active invitation before invite accept validation
* Remove local client db persistence during SyncClientSharedTaskRemoveLocalIfNotExists
* Add missing accept time arg to assign task
* Only validate non-zero task invite requirements
* Fix task error log crash
* Separate task cooldown timer messaging
* Use method to check for client shared task
* Avoid unneeded task invite validation query: Only need to query character data for levels for non-zero level spread
* Implement /tasktimers command: May want to add some type of throttled caching mechanism for this in the future
* Add /tasktimers rate limiter
* Intercept shared task completion; more work to come
* Change SharedTaskActivityState and SharedTasks time objects to datetime
* Add updated_time updates to SharedTaskActivities
* Mark shared tasks as complete when all activities are completed
* Save a database query on shared task completion and use the active record in memory
* Don't record shared task completions to the quest log
* Implement RecordSharedTaskCompletion, add tables, repositories
* Update shared_task_manager.cpp
* Update shared_task_manager.cpp
* Add shared task replay timers: This is still not feature complete. On live, any past members that ever joined the shared task will receive a replay timer when it's completed
* Create FindCharactersInSharedTasks that searches through memory
* Remove namespace shorthand and formatting
* More minor cleanup
* Implement PurgeAllSharedTasks via #task command
* Add #task purgetimers
* Decrease m_keepalive time between processes
* Remove type ordering in /tasktimer query
* Add comment for task packet reward multiplier: This is likely a reward multiplier that changes text color based on value to represent any scaled bonus or penalty
* Add replay timers to past members: This implements the live behavior that adds replay timers to any previous member of a shared task. This likely exists to avoid possible exploits. Shared task member history is stored in memory and is used to assign replay timers. This history will be lost on world crashes or restarts but is simpler than saving past member state in the database. This also makes world send shared task replay timer messages since past members need to be messaged now
* Move PurgeTaskTimers client method to tasks.cpp
* Remove dz members when purging shared tasks: Server dz states need to be updated before shared tasks are deleted
* Use exact name in shared task invites: This removes the wildcards from shared task invite character queries, which was sometimes selecting the wrong character. Taskadd validation is called even for invalid characters to allow for proper messages to occur
* Clear declined active shared task invitations: This also notifies the leader for declined shared task invites
* Store shared task member names: This adds back the character name field to SharedTaskMember. This should make serialization easier in the future and reduce database lookups when names are needed for /task commands
* Implement /taskplayerlist command
* Replace queries with member name lookups: Now that shared task members store names these queries are unnecessary. This also adds not-a-member messages for /taskremove and /taskmakeleader
* Implement shared task member change packet: This avoids sending the full member list to members when a single member is added or removed and lets the client generate chat messages for it.
* Serialize shared task member list from world: This uses cereal to serialize the full member list from world and removes the zone query workarounds (see the sketch after this log)
* Initialize client task state array: This was causing SQL query errors on client state reloads. The client task information array was uninitialized, resulting in it being filled with 0xcdcdcdcd values in MSVC debug builds. Under release builds this may have resulted in indeterminate values. A better fix would be to refactor some of this legacy code
* Add shared task command messages: Add messages for non-leader task commands. This adds taskadd, taskremove, taskmakeleader, and taskquit messages. The leader receives double messages for taskremove like live due to the client generated message as well as the explicit one. It also receives double server messages if the leader /taskremoves self.
* Replace some task messages with eqstrs: This also updates to use live colors
* Avoid shared task invite leader lookup query: Since member names are stored now this query is also unnecessary
* Avoid reloading client state on shared task accept: This was unnecessarily reloading client task state when added to a shared task. This also resulted in all active tasks being resent to shared task members on creation. The shared task itself is the only task that needs to be sent, which is handled by AcceptNewTask.
* Remove active shared task invite on zone: Live doesn't re-send shared task invites after zoning like it does for expeditions, so there's no need to keep these around. This fixes active invitations never getting reset on characters that zone or go offline.
* Choose new shared task leader if leader removed
* Add separate shared task kickplayers method
* Enable EVENT_CAST_ON for clients: This will be required for a shared task objective (The Creator) in DoN
* Revert "Avoid reloading client state on shared task accept": This reverts commit 3af14fee2de8b109ffb6c2b2fc67731e1531a665. Without this, clients added to a task after some objectives have been completed don't get updated state. Will need to investigate this later
* Disallow looting inside a dz by non-members: Non-members of a dynamic zone should not be allowed to loot npcs inside it. This should have been disabled for expeditions already but was still allowed due to an oversight (or live behavior changed). This is less critical for shared tasks since members can be added and removed at will without leaving a dz, but it is still an important feature.
* Change load where criteria
* Increase task completion emote column size
* Use eqstr for task item reward message
* Implement radiant and ebon crystal rewards: This adds reward columns for radiant and ebon crystals to the tasks table and updates task description serialization
* Send task completion emote before rewards: This matches live and makes it a little easier to see item rewards when tasks have a long completion emote. This also changes it to send via the same normal message opcode that live uses.
* Do not send a shared task in completed task history
* Allow EVENT_TASK_STAGE_COMPLETE for quest goals: This invokes event_task_stage_complete for task elements flagged with a quest controlled goal method. It should be expected behavior that a completed task stage always fires this event even if a quest controls it
* Add SyncSharedTaskZoneClientDoneCountState
* Swap return for continue in this case
* Formatting
* Simplify
* Formatting
* Formatting
* Formatting
* Remove errant check
* Formatting, add setter for shared tasks
* Remove debugging
* Comments in PR
* More PR follow up
* Formatting
* Cleanup
* Update packet comments
* Comments
* More cleanup
* Send command error message if not in shared task: /taskadd is the only command with this feedback on live. Newer live clients also generate this instead of the server sending the message
* Implement expire_time on SharedTask object and add a purge on world bootup
* Comment
* Add SyncClientSharedTaskStateToLocal where clients fall out of sync and no longer have a task locally
* Clamp shared task activity updates to max done count and discard updates out of bounds
* Fix packet send
* Revert packet send
* Adjust clamping OOO for completed time check. Add completed tables to purge truncation
* Refactor kill update logic so that shared task kill updates only update one client instead of all clients
* Cleanup how we're checking for active tasks
* Forward task sets that contain shared tasks: This forwards task sets that contain a shared task to shared task selector validation like normal task selectors
* Change eqstr for empty solo task offers: This is the message live appears to use if all task offers are filtered out by solo task validation
* Fix max active tasks client message: This message starts at the third argument. It was maybe intended to be an npc say message, but live just sends it as a normal eqstr with the first two arguments nulled.
* Load client task state after zoning complete: This fixes a possible race where a character removed from a shared task while zoning would be stuck with an incorrect character activities state after zoning was completed. This was caused by the character loading task state too early on zone entry but never receiving the remove player message from world since they are missing from the world cle until zoning is completed. Loading client state after the zone connection is completed makes sure the client has the latest state and is available to the world cle
* Send message to clients removed while zoning: This message should usually only be sent to characters that were removed from a shared task while zoning but will occur for any sync state removals where a message wouldn't have already occurred.
* Post rebase fix
* HG comment for checking active task
* Addressing HG comments around zeroing out a shared task id
* Remove errant comment
* Post rebase database manifest updates
* Update eqemu_logsys_log_aliases.h
* More rebase catches
* Bump database version for last commit

Co-authored-by: hg <4683435+hgtw@users.noreply.github.com>
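Several of the commits above mention passing dynamic zone creation data and shared task member lists between world and zone by serializing them with cereal into a ServerPacket (the zone-side counterpart is LoadSerializedDzPacket below). The following is a minimal sketch of that pattern only; the Member struct, its serialize() layout, and the helper names are illustrative assumptions, not the repo's actual definitions.

#include <cereal/archives/binary.hpp>
#include <cereal/types/string.hpp>
#include <cereal/types/vector.hpp>
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// hypothetical member record; the real structs live in common/ headers
struct Member {
	uint32_t    character_id = 0;
	std::string character_name;

	template <class Archive>
	void serialize(Archive& ar) { ar(character_id, character_name); }
};

// serialize a member list into a flat buffer that a sender could copy into a
// ServerPacket body alongside a size field (mirroring cereal_data/cereal_size)
std::string SerializeMembers(const std::vector<Member>& members)
{
	std::ostringstream ss(std::ios::out | std::ios::binary);
	{
		cereal::BinaryOutputArchive archive(ss); // flushes when it leaves scope
		archive(members);
	}
	return ss.str();
}

// reverse of the above on the receiving side
std::vector<Member> DeserializeMembers(const char* data, uint32_t size)
{
	std::vector<Member> members;
	std::istringstream ss(std::string(data, size));
	cereal::BinaryInputArchive archive(ss);
	archive(members);
	return members;
}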
780 lines, 22 KiB, C++
/**
 * EQEmulator: Everquest Server Emulator
 * Copyright (C) 2001-2020 EQEmulator Development Team (https://github.com/EQEmu/Server)
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; version 2 of the License.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY except by those people which sell it, which
 * are required to give you total support for your newly bought product;
 * without even the implied warranty of MERCHANTABILITY or FITNESS FOR
 * A PARTICULAR PURPOSE. See the GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 *
 */

#include "dynamic_zone.h"
#include "client.h"
#include "expedition.h"
#include "string_ids.h"
#include "worldserver.h"
#include "../common/eqemu_logsys.h"

extern WorldServer worldserver;

// message string 8312 added in September 08 2020 Test patch (used by both dz and shared tasks)
const char* const CREATE_NOT_ALL_ADDED = "Not all players in your {} were added to the {}. The {} can take a maximum of {} players, and your {} has {}.";

DynamicZone::DynamicZone(
	uint32_t zone_id, uint32_t version, uint32_t duration, DynamicZoneType type)
{
	m_zone_id      = zone_id;
	m_zone_version = version;
	m_duration     = std::chrono::seconds(duration);
	m_type         = type;
}

Database& DynamicZone::GetDatabase()
{
	return database;
}

bool DynamicZone::SendServerPacket(ServerPacket* packet)
{
	return worldserver.SendPacket(packet);
}

uint16_t DynamicZone::GetCurrentInstanceID()
{
	return zone ? static_cast<uint16_t>(zone->GetInstanceID()) : 0;
}

uint16_t DynamicZone::GetCurrentZoneID()
{
	return zone ? static_cast<uint16_t>(zone->GetZoneID()) : 0;
}

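// creates a new dynamic zone from a request object, saves it to the database,
// caches it in this zone, and notifies world so other zones can cache it too;
// non-expedition types immediately request async member status updates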
DynamicZone* DynamicZone::CreateNew(DynamicZone& dz_request, const std::vector<DynamicZoneMember>& members)
{
	if (!zone || dz_request.GetID() != 0)
	{
		return nullptr;
	}

	// this creates a new dz instance and saves it to both db and cache
	uint32_t dz_id = dz_request.Create();
	if (dz_id == 0)
	{
		LogDynamicZones("Failed to create dynamic zone for zone [{}]", dz_request.GetZoneID());
		return nullptr;
	}

	auto dz = std::make_unique<DynamicZone>(dz_request);
	if (!members.empty())
	{
		dz->SaveMembers(members);
	}

	LogDynamicZones("Created new dz [{}] for zone [{}]", dz_id, dz_request.GetZoneID());

	// world must be notified before we request async member updates
	auto pack = dz->CreateServerDzCreatePacket(zone->GetZoneID(), zone->GetInstanceID());
	worldserver.SendPacket(pack.get());

	auto inserted = zone->dynamic_zone_cache.emplace(dz_id, std::move(dz));

	// expeditions invoke their own updates after installing client update callbacks
	if (inserted.first->second->GetType() != DynamicZoneType::Expedition)
	{
		inserted.first->second->DoAsyncZoneMemberUpdates();
	}

	return inserted.first->second.get();
}

void DynamicZone::CacheNewDynamicZone(ServerPacket* pack)
{
	auto buf = reinterpret_cast<ServerDzCreateSerialized_Struct*>(pack->pBuffer);

	// caching new dz created in world or another zone (has member statuses set by world)
	auto dz = std::make_unique<DynamicZone>();
	dz->LoadSerializedDzPacket(buf->cereal_data, buf->cereal_size);

	uint32_t dz_id = dz->GetID();
	auto inserted = zone->dynamic_zone_cache.emplace(dz_id, std::move(dz));

	// expeditions invoke their own updates after installing client update callbacks
	if (inserted.first->second->GetType() != DynamicZoneType::Expedition)
	{
		inserted.first->second->DoAsyncZoneMemberUpdates();
	}

	LogDynamicZones("Cached new dynamic zone [{}]", dz_id);
}

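// bulk loads all unexpired dynamic zones and their members from the database
// into this zone's cache (used at zone bootup and for #dz cache reload)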
void DynamicZone::CacheAllFromDatabase()
{
	if (!zone)
	{
		return;
	}

	BenchTimer bench;

	auto dynamic_zones = DynamicZonesRepository::AllWithInstanceNotExpired(database);
	auto dynamic_zone_members = DynamicZoneMembersRepository::GetAllWithNames(database);

	zone->dynamic_zone_cache.clear();
	zone->dynamic_zone_cache.reserve(dynamic_zones.size());

	for (auto& entry : dynamic_zones)
	{
		uint32_t dz_id = entry.id;
		auto dz = std::make_unique<DynamicZone>(std::move(entry));

		for (auto& member : dynamic_zone_members)
		{
			if (member.dynamic_zone_id == dz_id)
			{
				dz->AddMemberFromRepositoryResult(std::move(member));
			}
		}

		zone->dynamic_zone_cache.emplace(dz_id, std::move(dz));
	}

	LogDynamicZones("Caching [{}] dynamic zone(s) took [{}s]", zone->dynamic_zone_cache.size(), bench.elapsed());
}

DynamicZone* DynamicZone::FindDynamicZoneByID(uint32_t dz_id)
{
	if (!zone)
	{
		return nullptr;
	}

	auto dz = zone->dynamic_zone_cache.find(dz_id);
	if (dz != zone->dynamic_zone_cache.end())
	{
		return dz->second.get();
	}

	return nullptr;
}

void DynamicZone::RegisterOnClientAddRemove(std::function<void(Client*, bool, bool)> on_client_addremove)
{
	m_on_client_addremove = std::move(on_client_addremove);
}

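// starts the dz removal (kick) timer on every client in the current instance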
void DynamicZone::StartAllClientRemovalTimers()
{
	for (const auto& client_iter : entity_list.GetClientList())
	{
		if (client_iter.second)
		{
			client_iter.second->SetDzRemovalTimer(true);
		}
	}
}

bool DynamicZone::IsCurrentZoneDzInstance() const
{
	return (zone && zone->GetInstanceID() != 0 && zone->GetInstanceID() == GetInstanceID());
}

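// asks world (async) to update this dz's remaining duration; zones apply the
// resulting ServerOP_DzDurationUpdate in HandleWorldMessage below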
void DynamicZone::SetSecondsRemaining(uint32_t seconds_remaining)
{
	// async
	constexpr uint32_t pack_size = sizeof(ServerDzSetDuration_Struct);
	auto pack = std::make_unique<ServerPacket>(ServerOP_DzSetSecondsRemaining, pack_size);
	auto buf = reinterpret_cast<ServerDzSetDuration_Struct*>(pack->pBuffer);
	buf->dz_id = GetID();
	buf->seconds = seconds_remaining;
	worldserver.SendPacket(pack.get());
}

void DynamicZone::SetUpdatedDuration(uint32_t new_duration)
{
	// preserves original start time, just modifies duration and expire time
	m_duration = std::chrono::seconds(new_duration);
	m_expire_time = m_start_time + m_duration;

	LogDynamicZones("Updated dz [{}] zone [{}]:[{}] seconds remaining: [{}]",
		m_id, m_zone_id, m_instance_id, GetSecondsRemaining());

	if (zone && IsCurrentZoneDzInstance())
	{
		zone->SetInstanceTimer(GetSecondsRemaining());
	}
}

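// dispatches dynamic zone server opcodes received from world (creation,
// deletion, member changes, duration and location updates, leader changes,
// member statuses, and expire warnings)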
void DynamicZone::HandleWorldMessage(ServerPacket* pack)
{
	switch (pack->opcode)
	{
	case ServerOP_DzCreated:
	{
		auto buf = reinterpret_cast<ServerDzCreateSerialized_Struct*>(pack->pBuffer);
		if (zone && !zone->IsZone(buf->origin_zone_id, buf->origin_instance_id))
		{
			DynamicZone::CacheNewDynamicZone(pack);
		}
		break;
	}
	case ServerOP_DzDeleted:
	{
		// sent by world when it deletes an expired or empty dz
		// any system that held a reference to the dz should have already been notified
		auto buf = reinterpret_cast<ServerDzID_Struct*>(pack->pBuffer);
		auto dz = DynamicZone::FindDynamicZoneByID(buf->dz_id);
		if (zone && dz)
		{
			dz->SendUpdatesToZoneMembers(true, true); // members silently removed

			// manually handle expeditions to remove any references before the dz is deleted
			if (dz->GetType() == DynamicZoneType::Expedition)
			{
				auto expedition = Expedition::FindCachedExpeditionByDynamicZoneID(dz->GetID());
				if (expedition)
				{
					LogExpeditionsModerate("Deleting expedition [{}] from zone cache", expedition->GetID());
					zone->expedition_cache.erase(expedition->GetID());
				}
			}

			LogDynamicZonesDetail("Deleting dynamic zone [{}] from zone cache", buf->dz_id);
			zone->dynamic_zone_cache.erase(buf->dz_id);
		}
		break;
	}
	case ServerOP_DzAddRemoveMember:
	{
		auto buf = reinterpret_cast<ServerDzMember_Struct*>(pack->pBuffer);
		if (zone && !zone->IsZone(buf->sender_zone_id, buf->sender_instance_id))
		{
			auto dz = DynamicZone::FindDynamicZoneByID(buf->dz_id);
			if (dz)
			{
				auto status = static_cast<DynamicZoneMemberStatus>(buf->character_status);
				dz->ProcessMemberAddRemove({ buf->character_id, buf->character_name, status }, buf->removed);
			}
		}

		if (zone && zone->IsZone(buf->dz_zone_id, buf->dz_instance_id))
		{
			// cache independent redundancy to kick removed members from dz's instance
			Client* client = entity_list.GetClientByCharID(buf->character_id);
			if (client)
			{
				client->SetDzRemovalTimer(buf->removed);
			}
		}
		break;
	}
	case ServerOP_DzSwapMembers:
	{
		auto buf = reinterpret_cast<ServerDzMemberSwap_Struct*>(pack->pBuffer);
		if (zone && !zone->IsZone(buf->sender_zone_id, buf->sender_instance_id))
		{
			auto dz = DynamicZone::FindDynamicZoneByID(buf->dz_id);
			if (dz)
			{
				auto status = static_cast<DynamicZoneMemberStatus>(buf->add_character_status);
				dz->ProcessMemberAddRemove({ buf->remove_character_id, buf->remove_character_name }, true);
				dz->ProcessMemberAddRemove({ buf->add_character_id, buf->add_character_name, status }, false);
			}
		}

		if (zone && zone->IsZone(buf->dz_zone_id, buf->dz_instance_id))
		{
			// cache independent redundancy to kick removed members from dz's instance
			Client* removed_client = entity_list.GetClientByCharID(buf->remove_character_id);
			if (removed_client)
			{
				removed_client->SetDzRemovalTimer(true);
			}

			Client* added_client = entity_list.GetClientByCharID(buf->add_character_id);
			if (added_client)
			{
				added_client->SetDzRemovalTimer(false);
			}
		}
		break;
	}
	case ServerOP_DzRemoveAllMembers:
	{
		auto buf = reinterpret_cast<ServerDzID_Struct*>(pack->pBuffer);
		if (zone && !zone->IsZone(buf->sender_zone_id, buf->sender_instance_id))
		{
			auto dz = DynamicZone::FindDynamicZoneByID(buf->dz_id);
			if (dz)
			{
				dz->ProcessRemoveAllMembers();
			}
		}

		if (zone && zone->IsZone(buf->dz_zone_id, buf->dz_instance_id))
		{
			// cache independent redundancy to kick removed members from dz's instance
			DynamicZone::StartAllClientRemovalTimers();
		}
		break;
	}
	case ServerOP_DzDurationUpdate:
	{
		auto buf = reinterpret_cast<ServerDzSetDuration_Struct*>(pack->pBuffer);
		auto dz = DynamicZone::FindDynamicZoneByID(buf->dz_id);
		if (dz)
		{
			dz->SetUpdatedDuration(buf->seconds);
		}
		break;
	}
	case ServerOP_DzSetCompass:
	case ServerOP_DzSetSafeReturn:
	case ServerOP_DzSetZoneIn:
	{
		auto buf = reinterpret_cast<ServerDzLocation_Struct*>(pack->pBuffer);
		if (zone && !zone->IsZone(buf->sender_zone_id, buf->sender_instance_id))
		{
			auto dz = DynamicZone::FindDynamicZoneByID(buf->dz_id);
			if (dz)
			{
				if (pack->opcode == ServerOP_DzSetCompass)
				{
					dz->SetCompass(buf->zone_id, buf->x, buf->y, buf->z, false);
				}
				else if (pack->opcode == ServerOP_DzSetSafeReturn)
				{
					dz->SetSafeReturn(buf->zone_id, buf->x, buf->y, buf->z, buf->heading, false);
				}
				else if (pack->opcode == ServerOP_DzSetZoneIn)
				{
					dz->SetZoneInLocation(buf->x, buf->y, buf->z, buf->heading, false);
				}
			}
		}
		break;
	}
	case ServerOP_DzGetMemberStatuses:
	{
		// reply from world for online member statuses request for async zone member updates
		auto buf = reinterpret_cast<ServerDzMemberStatuses_Struct*>(pack->pBuffer);
		auto dz = DynamicZone::FindDynamicZoneByID(buf->dz_id);
		if (dz)
		{
			for (uint32_t i = 0; i < buf->count; ++i)
			{
				auto status = static_cast<DynamicZoneMemberStatus>(buf->entries[i].online_status);
				dz->SetInternalMemberStatus(buf->entries[i].character_id, status);
			}
			dz->m_has_member_statuses = true;
			dz->SendUpdatesToZoneMembers(false, true);
		}
		break;
	}
	case ServerOP_DzUpdateMemberStatus:
	{
		auto buf = reinterpret_cast<ServerDzMemberStatus_Struct*>(pack->pBuffer);
		if (zone && !zone->IsZone(buf->sender_zone_id, buf->sender_instance_id))
		{
			auto dz = DynamicZone::FindDynamicZoneByID(buf->dz_id);
			if (dz)
			{
				auto status = static_cast<DynamicZoneMemberStatus>(buf->status);
				dz->ProcessMemberStatusChange(buf->character_id, status);
			}
		}
		break;
	}
	case ServerOP_DzLeaderChanged:
	{
		auto buf = reinterpret_cast<ServerDzLeaderID_Struct*>(pack->pBuffer);
		auto dz = DynamicZone::FindDynamicZoneByID(buf->dz_id);
		if (dz)
		{
			dz->ProcessLeaderChanged(buf->leader_id);
		}
		break;
	}
	case ServerOP_DzExpireWarning:
	{
		auto buf = reinterpret_cast<ServerDzExpireWarning_Struct*>(pack->pBuffer);
		auto dz = DynamicZone::FindDynamicZoneByID(buf->dz_id);
		if (dz)
		{
			dz->SendMembersExpireWarning(buf->minutes_remaining);
		}
		break;
	}
	}
}

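// client packet builders for the expedition window (OP_Dz* opcodes); these
// live on the core dz so other dz type systems could use the window as well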
std::unique_ptr<EQApplicationPacket> DynamicZone::CreateExpireWarningPacket(uint32_t minutes_remaining)
{
	uint32_t outsize = sizeof(ExpeditionExpireWarning);
	auto outapp = std::make_unique<EQApplicationPacket>(OP_DzExpeditionEndsWarning, outsize);
	auto buf = reinterpret_cast<ExpeditionExpireWarning*>(outapp->pBuffer);
	buf->minutes_remaining = minutes_remaining;
	return outapp;
}

std::unique_ptr<EQApplicationPacket> DynamicZone::CreateInfoPacket(bool clear)
{
	constexpr uint32_t outsize = sizeof(DynamicZoneInfo_Struct);
	auto outapp = std::make_unique<EQApplicationPacket>(OP_DzExpeditionInfo, outsize);
	if (!clear)
	{
		auto info = reinterpret_cast<DynamicZoneInfo_Struct*>(outapp->pBuffer);
		info->assigned = true;
		strn0cpy(info->dz_name, m_name.c_str(), sizeof(info->dz_name));
		strn0cpy(info->leader_name, m_leader.name.c_str(), sizeof(info->leader_name));
		info->max_players = m_max_players;
	}
	return outapp;
}

std::unique_ptr<EQApplicationPacket> DynamicZone::CreateMemberListPacket(bool clear)
{
	uint32_t member_count = clear ? 0 : static_cast<uint32_t>(m_members.size());
	uint32_t member_entries_size = sizeof(DynamicZoneMemberEntry_Struct) * member_count;
	uint32_t outsize = sizeof(DynamicZoneMemberList_Struct) + member_entries_size;
	auto outapp = std::make_unique<EQApplicationPacket>(OP_DzMemberList, outsize);
	auto buf = reinterpret_cast<DynamicZoneMemberList_Struct*>(outapp->pBuffer);

	buf->member_count = member_count;

	if (!clear)
	{
		for (auto i = 0; i < m_members.size(); ++i)
		{
			strn0cpy(buf->members[i].name, m_members[i].name.c_str(), sizeof(buf->members[i].name));
			buf->members[i].online_status = static_cast<uint8_t>(m_members[i].status);
		}
	}

	return outapp;
}

std::unique_ptr<EQApplicationPacket> DynamicZone::CreateMemberListNamePacket(
	const std::string& name, bool remove_name)
{
	constexpr uint32_t outsize = sizeof(DynamicZoneMemberListName_Struct);
	auto outapp = std::make_unique<EQApplicationPacket>(OP_DzMemberListName, outsize);
	auto buf = reinterpret_cast<DynamicZoneMemberListName_Struct*>(outapp->pBuffer);
	buf->add_name = !remove_name;
	strn0cpy(buf->name, name.c_str(), sizeof(buf->name));
	return outapp;
}

std::unique_ptr<EQApplicationPacket> DynamicZone::CreateMemberListStatusPacket(
	const std::string& name, DynamicZoneMemberStatus status)
{
	// member list status uses member list struct with a single entry
	constexpr uint32_t outsize = sizeof(DynamicZoneMemberList_Struct) + sizeof(DynamicZoneMemberEntry_Struct);
	auto outapp = std::make_unique<EQApplicationPacket>(OP_DzMemberListStatus, outsize);
	auto buf = reinterpret_cast<DynamicZoneMemberList_Struct*>(outapp->pBuffer);
	buf->member_count = 1;

	auto entry = static_cast<DynamicZoneMemberEntry_Struct*>(buf->members);
	strn0cpy(entry->name, name.c_str(), sizeof(entry->name));
	entry->online_status = static_cast<uint8_t>(status);

	return outapp;
}

std::unique_ptr<EQApplicationPacket> DynamicZone::CreateLeaderNamePacket()
{
	constexpr uint32_t outsize = sizeof(DynamicZoneLeaderName_Struct);
	auto outapp = std::make_unique<EQApplicationPacket>(OP_DzSetLeaderName, outsize);
	auto buf = reinterpret_cast<DynamicZoneLeaderName_Struct*>(outapp->pBuffer);
	strn0cpy(buf->leader_name, m_leader.name.c_str(), sizeof(buf->leader_name));
	return outapp;
}

void DynamicZone::ProcessCompassChange(const DynamicZoneLocation& location)
{
	DynamicZoneBase::ProcessCompassChange(location);
	SendCompassUpdateToZoneMembers();
}

void DynamicZone::SendCompassUpdateToZoneMembers()
{
	for (const auto& member : m_members)
	{
		Client* member_client = entity_list.GetClientByCharID(member.id);
		if (member_client)
		{
			member_client->SendDzCompassUpdate();
		}
	}
}

void DynamicZone::SendLeaderNameToZoneMembers()
{
	auto outapp_leader = CreateLeaderNamePacket();

	for (const auto& member : m_members)
	{
		Client* member_client = entity_list.GetClientByCharID(member.id);
		if (member_client)
		{
			member_client->QueuePacket(outapp_leader.get());

			if (member.id == m_leader.id && RuleB(Expedition, AlwaysNotifyNewLeaderOnChange))
			{
				member_client->MessageString(Chat::Yellow, DZMAKELEADER_YOU);
			}
		}
	}
}

void DynamicZone::SendMembersExpireWarning(uint32_t minutes_remaining)
{
	// expeditions warn members in all zones not just the dz
	auto outapp = CreateExpireWarningPacket(minutes_remaining);
	for (const auto& member : GetMembers())
	{
		Client* member_client = entity_list.GetClientByCharID(member.id);
		if (member_client)
		{
			member_client->QueuePacket(outapp.get());

			// live doesn't actually send the chat message with it
			member_client->MessageString(Chat::Yellow, EXPEDITION_MIN_REMAIN,
				fmt::format_int(minutes_remaining).c_str());
		}
	}
}

void DynamicZone::SendMemberListToZoneMembers()
{
	auto outapp_members = CreateMemberListPacket(false);

	for (const auto& member : m_members)
	{
		Client* member_client = entity_list.GetClientByCharID(member.id);
		if (member_client)
		{
			member_client->QueuePacket(outapp_members.get());
		}
	}
}

void DynamicZone::SendMemberListNameToZoneMembers(const std::string& char_name, bool remove)
{
	auto outapp_member_name = CreateMemberListNamePacket(char_name, remove);

	for (const auto& member : m_members)
	{
		Client* member_client = entity_list.GetClientByCharID(member.id);
		if (member_client)
		{
			member_client->QueuePacket(outapp_member_name.get());
		}
	}
}

void DynamicZone::SendMemberListStatusToZoneMembers(const DynamicZoneMember& update_member)
{
	auto outapp_member_status = CreateMemberListStatusPacket(update_member.name, update_member.status);

	for (const auto& member : m_members)
	{
		Client* member_client = entity_list.GetClientByCharID(member.id);
		if (member_client)
		{
			member_client->QueuePacket(outapp_member_status.get());
		}
	}
}

void DynamicZone::SendClientWindowUpdate(Client* client)
{
	if (client)
	{
		client->QueuePacket(CreateInfoPacket().get());
		client->QueuePacket(CreateMemberListPacket().get());
	}
}

void DynamicZone::SendUpdatesToZoneMembers(bool removing_all, bool silent)
{
	// performs a full update on all members (usually for dz creation or removing all)
	if (!HasMembers())
	{
		return;
	}

	std::unique_ptr<EQApplicationPacket> outapp_info = nullptr;
	std::unique_ptr<EQApplicationPacket> outapp_members = nullptr;

	// only expeditions use the dz window. on live the window is filled by non
	// expeditions when first created but never kept updated. that behavior could
	// be replicated in the future by flagging this as a creation update
	if (m_type == DynamicZoneType::Expedition)
	{
		// clearing info also clears member list, no need to send both when removing
		outapp_info = CreateInfoPacket(removing_all);
		outapp_members = removing_all ? nullptr : CreateMemberListPacket();
	}

	for (const auto& member : GetMembers())
	{
		Client* client = entity_list.GetClientByCharID(member.id);
		if (client)
		{
			if (removing_all) {
				client->RemoveDynamicZoneID(GetID());
			} else {
				client->AddDynamicZoneID(GetID());
			}

			client->SendDzCompassUpdate();

			if (outapp_info)
			{
				client->QueuePacket(outapp_info.get());
			}

			if (outapp_members)
			{
				client->QueuePacket(outapp_members.get());
			}

			// callback to the dz system so it can perform any messages or set client data
			if (m_on_client_addremove)
			{
				m_on_client_addremove(client, removing_all, silent);
			}
		}
	}
}

void DynamicZone::ProcessMemberAddRemove(const DynamicZoneMember& member, bool removed)
{
	DynamicZoneBase::ProcessMemberAddRemove(member, removed);

	// the affected client always gets a full compass update. for expeditions
	// client also gets window info update and all members get a member list update
	Client* client = entity_list.GetClientByCharID(member.id);
	if (client)
	{
		if (!removed) {
			client->AddDynamicZoneID(GetID());
		} else {
			client->RemoveDynamicZoneID(GetID());
		}

		client->SendDzCompassUpdate();

		if (m_type == DynamicZoneType::Expedition)
		{
			// sending clear info also clears member list for removed members
			client->QueuePacket(CreateInfoPacket(removed).get());
		}

		if (m_on_client_addremove)
		{
			m_on_client_addremove(client, removed, false);
		}
	}

	if (m_type == DynamicZoneType::Expedition)
	{
		// send full list when adding (MemberListName adds with "unknown" status)
		if (!removed) {
			SendMemberListToZoneMembers();
		} else {
			SendMemberListNameToZoneMembers(member.name, true);
		}
	}
}

void DynamicZone::ProcessRemoveAllMembers(bool silent)
{
	SendUpdatesToZoneMembers(true, silent);
	DynamicZoneBase::ProcessRemoveAllMembers(silent);
}

void DynamicZone::DoAsyncZoneMemberUpdates()
{
	// gets member statuses from world and performs zone member updates on reply
	// if we've already received member statuses we can just update immediately
	if (m_has_member_statuses)
	{
		SendUpdatesToZoneMembers();
		return;
	}

	constexpr uint32_t pack_size = sizeof(ServerDzID_Struct);
	auto pack = std::make_unique<ServerPacket>(ServerOP_DzGetMemberStatuses, pack_size);
	auto buf = reinterpret_cast<ServerDzID_Struct*>(pack->pBuffer);
	buf->dz_id = GetID();
	buf->sender_zone_id = zone ? zone->GetZoneID() : 0;
	buf->sender_instance_id = zone ? zone->GetInstanceID() : 0;
	worldserver.SendPacket(pack.get());
}

bool DynamicZone::ProcessMemberStatusChange(uint32_t member_id, DynamicZoneMemberStatus status)
{
	bool changed = DynamicZoneBase::ProcessMemberStatusChange(member_id, status);

	if (changed && m_type == DynamicZoneType::Expedition)
	{
		auto member = GetMemberData(member_id);
		if (member.IsValid())
		{
			SendMemberListStatusToZoneMembers(member);
		}
	}

	return changed;
}

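// applies a leader change received from world and, for expeditions, updates
// the window's leader name for all zone members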
void DynamicZone::ProcessLeaderChanged(uint32_t new_leader_id)
{
	auto new_leader = GetMemberData(new_leader_id);
	if (!new_leader.IsValid())
	{
		LogDynamicZones("Processed invalid new leader id [{}] for dz [{}]", new_leader_id, m_id);
		return;
	}

	LogDynamicZones("Replaced [{}] leader [{}] with [{}]", m_id, GetLeaderName(), new_leader.name);

	SetLeader(new_leader);
	if (GetType() == DynamicZoneType::Expedition)
	{
		SendLeaderNameToZoneMembers();
	}
}

bool DynamicZone::CanClientLootCorpse(Client* client, uint32_t npc_type_id, uint32_t entity_id)
{
	// non-members of a dz cannot loot corpses inside the dz
	if (!HasMember(client->CharacterID()))
	{
		return false;
	}

	// expeditions may prevent looting based on client's lockouts
	if (GetType() == DynamicZoneType::Expedition)
	{
		auto expedition = Expedition::FindCachedExpeditionByZoneInstance(zone->GetZoneID(), zone->GetInstanceID());
		if (expedition && !expedition->CanClientLootCorpse(client, npc_type_id, entity_id))
		{
			return false;
		}
	}

	return true;
}